concurrently 1.0.1
- checksums.yaml +7 -0
- data/.gitignore +5 -0
- data/.rspec +4 -0
- data/.travis.yml +16 -0
- data/.yardopts +7 -0
- data/Gemfile +17 -0
- data/LICENSE +176 -0
- data/README.md +129 -0
- data/RELEASE_NOTES.md +49 -0
- data/Rakefile +28 -0
- data/concurrently.gemspec +33 -0
- data/ext/Ruby/thread.rb +28 -0
- data/ext/all/array.rb +24 -0
- data/ext/mruby/array.rb +19 -0
- data/ext/mruby/fiber.rb +5 -0
- data/ext/mruby/io.rb +54 -0
- data/guides/Installation.md +46 -0
- data/guides/Overview.md +335 -0
- data/guides/Performance.md +140 -0
- data/guides/Troubleshooting.md +262 -0
- data/lib/Ruby/concurrently.rb +12 -0
- data/lib/Ruby/concurrently/error.rb +4 -0
- data/lib/Ruby/concurrently/event_loop.rb +24 -0
- data/lib/Ruby/concurrently/event_loop/io_selector.rb +38 -0
- data/lib/all/concurrently/error.rb +10 -0
- data/lib/all/concurrently/evaluation.rb +109 -0
- data/lib/all/concurrently/evaluation/error.rb +18 -0
- data/lib/all/concurrently/event_loop.rb +101 -0
- data/lib/all/concurrently/event_loop/fiber.rb +37 -0
- data/lib/all/concurrently/event_loop/io_selector.rb +42 -0
- data/lib/all/concurrently/event_loop/proc_fiber_pool.rb +18 -0
- data/lib/all/concurrently/event_loop/run_queue.rb +111 -0
- data/lib/all/concurrently/proc.rb +233 -0
- data/lib/all/concurrently/proc/evaluation.rb +246 -0
- data/lib/all/concurrently/proc/fiber.rb +67 -0
- data/lib/all/concurrently/version.rb +8 -0
- data/lib/all/io.rb +248 -0
- data/lib/all/kernel.rb +201 -0
- data/lib/mruby/concurrently/proc.rb +21 -0
- data/lib/mruby/kernel.rb +15 -0
- data/mrbgem.rake +42 -0
- data/perf/_shared/stage.rb +33 -0
- data/perf/concurrent_proc_call.rb +13 -0
- data/perf/concurrent_proc_call_and_forget.rb +15 -0
- data/perf/concurrent_proc_call_detached.rb +15 -0
- data/perf/concurrent_proc_call_nonblock.rb +13 -0
- data/perf/concurrent_proc_calls.rb +49 -0
- data/perf/concurrent_proc_calls_awaiting.rb +48 -0
- metadata +144 -0
data/guides/Performance.md
@@ -0,0 +1,140 @@

# Performance of Concurrently

Overall, Concurrently is able to schedule around 100k to 200k concurrent
evaluations per second. What to expect exactly is narrowed down in the
following benchmarks.

The measurements were executed with Ruby 2.4.1 on an Intel i7-5820K 3.3 GHz
running Linux 4.10. Garbage collection was disabled.


## Calling a (Concurrent) Proc

This benchmark compares all `#call` methods of a concurrent proc and a regular
proc. Only the mere invocation of the method is measured. The proc itself does
nothing.

Benchmarked Code
----------------
    proc = proc{}
    conproc = concurrent_proc{}

    while elapsed_seconds < 1
      # CODE #
    end

Results
-------
    # CODE #
    proc.call:               5423106 executions in 1.0000 seconds
    conproc.call:             662314 executions in 1.0000 seconds
    conproc.call_nonblock:    769164 executions in 1.0000 seconds
    conproc.call_detached:    269385 executions in 1.0000 seconds
    conproc.call_and_forget:  306099 executions in 1.0000 seconds

Explanation of the results:

* The difference between a regular and a concurrent proc is caused by
  concurrent procs being evaluated in a fiber and doing some bookkeeping.
* Of the two methods evaluating the proc in the foreground, `#call_nonblock`
  is faster than `#call` because the implementation of `#call` uses
  `#call_nonblock` and does a little bit more on top.
* Of the two methods evaluating the proc in the background, `#call_and_forget`
  is faster because `#call_detached` additionally creates an evaluation
  object.
* Running concurrent procs in the background is considerably slower because
  in this setup `#call_detached` and `#call_and_forget` cannot reuse fibers.
  Their evaluations are merely scheduled and neither started nor concluded;
  that would happen during the next iteration of the event loop. But since the
  `while` loop never waits for anything, [the loop is never entered][Troubleshooting/A_concurrent_proc_is_scheduled_but_never_run].
  All this leads to the creation of a new fiber for each evaluation, which is
  responsible for the largest chunk of time needed during the measurement.

You can run the benchmark yourself by running the [script][perf/concurrent_proc_calls.rb]:

    $ perf/concurrent_proc_calls.rb
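For orientation, here is a minimal, self-contained sketch of such a measuring
loop. The published numbers come from the linked script; its `elapsed_seconds`
helper belongs to the benchmark harness, not to Concurrently's API, so
`Process.clock_gettime` stands in for it here:

```ruby
require 'concurrently'

conproc = concurrent_proc{}

start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
count = 0
# Repeatedly invoke the call method under test for roughly one second.
while Process.clock_gettime(Process::CLOCK_MONOTONIC) - start < 1
  conproc.call
  count += 1
end

elapsed = Process.clock_gettime(Process::CLOCK_MONOTONIC) - start
puts format("conproc.call: %d executions in %.4f seconds", count, elapsed)
```
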
## Scheduling (Concurrent) Procs

This benchmark is closer to the real usage of Concurrently. It includes waiting
inside a concurrent proc.

Benchmarked Code
----------------
    conproc = concurrent_proc{ wait 0 }

    while elapsed_seconds < 1
      1.times{ # CODE # }
      wait 0 # to enter the event loop
    end

Results
-------
    # CODE #
    conproc.call:             72444 executions in 1.0000 seconds
    conproc.call_nonblock:   103468 executions in 1.0000 seconds
    conproc.call_detached:   114882 executions in 1.0000 seconds
    conproc.call_and_forget: 117425 executions in 1.0000 seconds

Explanation of the results:

* Because scheduling is now the dominant factor, there is a large drop in the
  number of executions compared to merely calling the procs. As a result, the
  three non-blocking call variants reach comparable numbers of executions.
* Calling the proc in a blocking manner with `#call` is costly: a lot of time
  is spent waiting for the result.

You can run the benchmark yourself by running the [script][perf/concurrent_proc_calls_awaiting.rb]:

    $ perf/concurrent_proc_calls_awaiting.rb
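The cost of `#call` comes from its synchronizing behavior: it suspends the
caller until the proc has finished. A short sketch of the difference, assuming
the evaluation API described in the Overview (`#await_result` awaits an
evaluation's result):

```ruby
conproc = concurrent_proc{ wait 0 }

conproc.call # suspends the current evaluation until the proc has finished

evaluation = conproc.call_detached # returns an evaluation object immediately
# ... do other work concurrently ...
evaluation.await_result # suspend only now, when the result is actually needed
```
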
## Scheduling (Concurrent) Procs and Evaluating Them in Batches

In addition to waiting inside a proc, this benchmark calls the proc 100 times
at once. All 100 evaluations are then evaluated in one batch during the next
iteration of the event loop.

This simulates a server receiving multiple messages during one iteration of
the event loop and processing all of them in one go.

Benchmarked Code
----------------
    conproc = concurrent_proc{ wait 0 }

    while elapsed_seconds < 1
      100.times{ # CODE # }
      wait 0 # to enter the event loop
    end

Results
-------
    # CODE #
    conproc.call:             76300 executions in 1.0006 seconds
    conproc.call_nonblock:   186200 executions in 1.0002 seconds
    conproc.call_detached:   180200 executions in 1.0000 seconds
    conproc.call_and_forget: 193500 executions in 1.0004 seconds

Explanation of the results:

* `#call` does not profit from batching due to its synchronizing nature.
* The other methods show an increased throughput compared to running just a
  single evaluation per event loop iteration.

The result of this benchmark is the upper bound for how many concurrent
evaluations Concurrently is able to run per second. The number of executions
does not change much with a varying batch size. Larger batches (e.g. 200+)
gradually start to get a bit slower; a batch of 1000 evaluations still handles
around 140k executions.

You can run the benchmark yourself by running the [script][perf/concurrent_proc_calls_awaiting.rb]:

    $ perf/concurrent_proc_calls_awaiting.rb 100
[perf/concurrent_proc_calls.rb]: https://github.com/christopheraue/m-ruby-concurrently/blob/master/perf/concurrent_proc_calls.rb
[perf/concurrent_proc_calls_awaiting.rb]: https://github.com/christopheraue/m-ruby-concurrently/blob/master/perf/concurrent_proc_calls_awaiting.rb
[Troubleshooting/A_concurrent_proc_is_scheduled_but_never_run]: http://www.rubydoc.info/github/christopheraue/m-ruby-concurrently/file/guides/Troubleshooting.md#A_concurrent_proc_is_scheduled_but_never_run
data/guides/Troubleshooting.md
@@ -0,0 +1,262 @@

# Troubleshooting

To get an idea about the inner workings of Concurrently, have a look at the
[Flow of control][] section in the overview.

## A concurrent proc is scheduled but never run

Consider the following script:

```ruby
#!/bin/env ruby

concurrently do
  puts "I will be forgotten, like tears in the rain."
end

puts "Unicorns!"
```

Running it will only print:

```
Unicorns!
```
`concurrently{}` is a shortcut for `concurrent_proc{}.call_and_forget`,
which in turn does not evaluate its code right away but schedules it to run
during the next iteration of the event loop. But since the root evaluation did
not await anything, the event loop was never entered and the evaluation of the
concurrent proc was never started. To get the block to run, the event loop
needs a chance to be entered, as in the sketch below.
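A minimal sketch of such a fix, reusing the `wait 0 # to enter the event loop`
idiom from the benchmark scripts: awaiting anything, even a zero-second time
frame, suspends the root evaluation and lets the event loop run the scheduled
block.

```ruby
#!/bin/env ruby

concurrently do
  puts "I will be printed after all."
end

wait 0 # enter the event loop once so the scheduled block gets started
```
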
A more subtle variation of this behavior occurs in the following scenario:

```ruby
#!/bin/env ruby

concurrently do
  puts "Unicorns!"
  wait 2
  puts "I will be forgotten, like tears in the rain."
end

wait 1
```

Running it will also only print:

```
Unicorns!
```

This time, the root evaluation does await something, namely the end of a one
second time frame. Because of this, the evaluation of the `concurrently` block
is indeed started and immediately waits for two seconds. After one second the
root evaluation is resumed and exits. The `concurrently` block is never awoken
again from its now eternal beauty sleep.
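If the background result matters, one option is to keep a handle on the
evaluation with `#call_detached` and explicitly await it before exiting. A
sketch, assuming the evaluation API from the Overview:

```ruby
evaluation = concurrent_proc do
  puts "Unicorns!"
  wait 2
  puts "I will not be forgotten."
end.call_detached

evaluation.await_result # suspends the root evaluation until the proc finished
```
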
## A call is blocking the entire execution

```ruby
#!/bin/env ruby

r,w = IO.pipe

concurrently do
  w.write 'Wake up!'
end

r.readpartial 32
```

Here, although we are practically waiting for `r` to become readable, we do so
in a blocking manner (`IO#readpartial` is blocking). This brings the whole
process to a halt: the event loop is never entered, the `concurrently` block
is never run, and thus nothing is ever written to the pipe, which in turn
creates a nice deadlock.

You can use blocking calls to deal with I/O. But you should await readiness of
the IO beforehand. If instead of just `r.readpartial 32` we write:

```ruby
r.await_readable
r.readpartial 32
```

we suspend the root evaluation and switch to the event loop, which runs the
`concurrently` block; once there is something to read from `r`, the root
evaluation is resumed.

This approach is not perfect. It is not very efficient if we do not need to
await readability at all and could read from `r` immediately. But it is still
better than blocking everything by default.

The most efficient way is doing a non-blocking read and only awaiting
readability if the IO is not yet readable:

```ruby
begin
  r.read_nonblock 32
rescue IO::WaitReadable
  r.await_readable
  retry
end
```
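This pattern is worth extracting if it occurs in more than one place. A
hypothetical helper, sketched here only for illustration (`read_concurrently`
is not part of Concurrently):

```ruby
# Hypothetical convenience wrapper around the pattern above.
def read_concurrently(io, maxlen)
  io.read_nonblock maxlen
rescue IO::WaitReadable
  io.await_readable # suspend this evaluation until io is readable
  retry
end

message = read_concurrently(r, 32)
```
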
## The event loop is jammed by too many or too expensive evaluations

Let's talk about a concurrent proc with an infinite loop:

```ruby
evaluation = concurrent_proc do
  loop do
    puts "To infinity! And beyond!"
  end
end.call_detached

concurrently do
  evaluation.conclude_to :cancelled
end
```

When the concurrent proc is scheduled to run, it runs and runs and runs and
never finishes. The event loop is never entered again and the other concurrent
proc that is supposed to conclude the evaluation is never started.
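To keep such a loop from monopolizing the thread, it has to hand control back
to the event loop regularly, e.g. with a zero-second wait. A sketch:

```ruby
evaluation = concurrent_proc do
  loop do
    puts "To infinity! And beyond!"
    wait 0 # yield to the event loop after each iteration
  end
end.call_detached

concurrently do
  evaluation.conclude_to :cancelled # now gets a chance to run
end
```
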
A less extreme example is something like:

```ruby
concurrent_proc do
  loop do
    wait 0.1
    puts "timer triggered at: #{Time.now.strftime('%H:%M:%S.%L')}"
    concurrently do
      sleep 1 # defers the entire event loop
    end
  end
end.call

# => timer triggered at: 16:08:17.704
# => timer triggered at: 16:08:18.705
# => timer triggered at: 16:08:19.705
# => timer triggered at: 16:08:20.705
# => timer triggered at: 16:08:21.706
```

This is a timer that is supposed to fire every 0.1 seconds and creates another
evaluation taking a full second to complete. But because `sleep` blocks the
whole thread for that second, the loop also only gets a chance to run once per
second, leading to a delay of 0.9 seconds between the time the timer is
supposed to fire and the time it actually fires.
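The cure is the same as for blocking I/O: use Concurrently's `wait`, which
suspends only the current evaluation instead of deferring the whole event
loop. A sketch of the fixed inner block:

```ruby
concurrently do
  wait 1 # suspends only this evaluation; the timer keeps firing every 0.1 s
end
```
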
## Forking the process causes issues

A fork inherits the main thread and with it the event loop, including all its
internal state, from the parent. This is a problem since fibers created in
the parent process cannot be resumed in the forked process. Trying to do so
raises a "fiber called across stack rewinding barrier" error. Also, we
probably do not want to continue watching the parent's IOs.

To fix this, the event loop has to be [reinitialized][Concurrently::EventLoop#reinitialize!]
directly after forking:

```ruby
fork do
  Concurrently::EventLoop.current.reinitialize!
  # ...
end

# ...
```

While reinitializing the event loop clears its list of IOs watched for
readiness, the IOs themselves are left untouched. You are responsible for
managing IOs (e.g. closing them).
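For example, with a pipe shared between parent and child, each process should
close the end it does not use after forking. A sketch; the IO hygiene is
ordinary Ruby, only the `reinitialize!` call is Concurrently-specific:

```ruby
r, w = IO.pipe

fork do
  Concurrently::EventLoop.current.reinitialize!
  w.close # the child only reads; close the inherited write end

  r.await_readable
  puts r.readpartial(32)
end

r.close # the parent only writes; close the unused read end
w.write "Hello from the parent!"
Process.wait
```
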
## Errors tear down the event loop

Every concurrent proc rescues the following errors happening during its
evaluation: `NoMemoryError`, `ScriptError`, `SecurityError`, `StandardError`
and `SystemStackError`. These are all errors that should not have an immediate
influence on other evaluations or the application as a whole. They will not
leak to the event loop and will not tear it down.

All other errors happening inside a concurrent proc *will* tear down the
event loop. These error types are: `SignalException`, `SystemExit` and the
general `Exception`. In such a case the event loop exits by raising a
[Concurrently::Error][].

If your application rescues the error when the event loop is torn down
and continues running (irb does this, for example), it will do so with a
[reinitialized event loop][Concurrently::EventLoop#reinitialize!].
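Even though a `StandardError` does not tear down the event loop, it still
aborts the evaluation it happened in. If an evaluation should survive failures
of individual steps, rescue them inside the proc. A sketch, where
`process_next_message` stands in for application code:

```ruby
concurrent_proc do
  loop do
    begin
      process_next_message # hypothetical application code
    rescue StandardError => e
      warn "message handling failed: #{e.message}" # handle and carry on
    end
  end
end.call_detached
```
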
## Using Plain Fibers

In principle, you can safely use plain Ruby fibers alongside concurrent procs.
Just make sure you operate exclusively on these fibers so you do not
accidentally interfere with the fibers managed by Concurrently. Be especially
careful with `Fiber.yield` and `Fiber.current` inside a concurrent proc.
## Fiber-local variables are treated as thread-local

In Ruby, `Thread#[]`, `#[]=`, `#key?` and `#keys` operate on variables local
to the current fiber and not the current thread. This behavior goes unnoticed
most of the time because people rarely work explicitly with fibers: each
thread then has exactly one fiber, and thread-local and fiber-local variables
behave the same way.

But once fibers come into play and a single thread starts switching between
them, these methods quickly cause errors. Since Concurrently is built upon
fibers, it needs to sail around those issues. Most of the time the real
intention is to set variables local to the current thread, just like the
receiver of said methods suggests. For this reason, `Thread#[]`, `#[]=`,
`#key?` and `#keys` are boldly redirected to `Thread#thread_variable_get`,
`#thread_variable_set`, `#thread_variable?` and `#thread_variables`.
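With this redirection in place, values set through `Thread#[]` stay visible
across Concurrently's fiber switches. A sketch:

```ruby
Thread.current[:request_id] = 42 # now genuinely thread-local

concurrently do
  # runs in a different fiber, but on the same thread
  puts Thread.current[:request_id] # => 42
end

wait 0 # enter the event loop so the block above runs
```
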
If you belong to those using fibers with variables indeed intended to be
fiber-local, you have two options: 1) don't use Concurrently, or 2) change all
these fibers to concurrent procs and use their evaluation's
[data store][Concurrently::Proc::Evaluation#brackets] to store the variables.

```ruby
fiber = Fiber.new do
  Thread.current[:key] = "I intend to be fiber-local!"
  puts Thread.current[:key]
end

fiber.resume
```

becomes:

```ruby
conproc = concurrent_proc do
  Concurrently::Evaluation.current[:key] = "I'm evaluation-local!"
  puts Concurrently::Evaluation.current[:key]
end

conproc.call
```
## FiberError: mprotect failed

Each concurrent evaluation runs in a fiber. If your application creates more
concurrent evaluations than it concludes, more and more fibers need to be
created. At some point the creation of additional fibers fails with
"FiberError: mprotect failed". This is caused by hitting the limit for the
number of distinct memory maps a process can have. The corresponding Linux
kernel parameter is `/proc/sys/vm/max_map_count` and has a default value of
64k (65530). Each fiber creates two memory maps, leading to a default maximum
of around 30k fibers. To create more fibers, `max_map_count` needs to be
increased, e.g. to double the default:

```
$ sysctl -w vm.max_map_count=131060
```

See also: https://stackoverflow.com/a/11685165/3323185

[Flow of control]: http://www.rubydoc.info/github/christopheraue/m-ruby-concurrently/file/guides/Overview.md#Flow+of+control
[Concurrently::EventLoop#reinitialize!]: http://www.rubydoc.info/github/christopheraue/m-ruby-concurrently/Concurrently/EventLoop#reinitialize!-instance_method
[Concurrently::Error]: http://www.rubydoc.info/github/christopheraue/m-ruby-concurrently/Concurrently/Error
[Concurrently::Proc::Evaluation#brackets]: http://www.rubydoc.info/github/christopheraue/m-ruby-concurrently/Concurrently/Proc/Evaluation#%5B%5D-instance_method
data/lib/Ruby/concurrently.rb
@@ -0,0 +1,12 @@

```ruby
require "fiber"
require "nio"
require "hitimes"
require "callbacks_attachable"

# For each of ext/ and lib/, load the shared ("all") sources first,
# then the Ruby-specific ones on top.
root = File.dirname File.dirname File.dirname __FILE__
files =
  Dir[File.join(root, 'ext', 'all', '**', '*.rb')].sort +
  Dir[File.join(root, 'ext', 'Ruby', '**', '*.rb')].sort +
  Dir[File.join(root, 'lib', 'all', '**', '*.rb')].sort +
  Dir[File.join(root, 'lib', 'Ruby', '**', '*.rb')].sort
files.each{ |f| require f }
```
data/lib/Ruby/concurrently/event_loop.rb
@@ -0,0 +1,24 @@

```ruby
module Concurrently
  # @api ruby_patches
  # @since 1.0.0
  class EventLoop
    # Attach an event loop to every thread in Ruby.
    def self.current
      Thread.current.__concurrently_event_loop__
    end

    # Use hitimes for a faster calculation of time intervals.
    time_module = Module.new do
      def reinitialize!
        @clock = Hitimes::Interval.new.tap(&:start)
        super
      end

      def lifetime
        @clock.to_f
      end
    end

    prepend time_module
  end
end
```
data/lib/Ruby/concurrently/event_loop/io_selector.rb
@@ -0,0 +1,38 @@

```ruby
module Concurrently
  # @api private
  # Let Ruby use nio to select IOs.
  class EventLoop::IOSelector
    def initialize(event_loop)
      @run_queue = event_loop.run_queue
      @selector = NIO::Selector.new
    end

    def awaiting?
      not @selector.empty?
    end

    def await_reader(io, evaluation)
      monitor = @selector.register(io, :r)
      monitor.value = evaluation
    end

    def await_writer(io, evaluation)
      monitor = @selector.register(io, :w)
      monitor.value = evaluation
    end

    def cancel_reader(io)
      @selector.deregister(io)
    end

    def cancel_writer(io)
      @selector.deregister(io)
    end

    def process_ready_in(waiting_time)
      @selector.select(waiting_time) do |monitor|
        @run_queue.resume_evaluation! monitor.value, true
      end
    end
  end
end
```