concurrently 1.0.1 → 1.1.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +4 -4
- data/.gitignore +1 -1
- data/.travis.yml +8 -3
- data/README.md +70 -60
- data/RELEASE_NOTES.md +16 -1
- data/Rakefile +98 -14
- data/concurrently.gemspec +16 -12
- data/ext/mruby/io.rb +1 -1
- data/guides/Overview.md +191 -66
- data/guides/Performance.md +300 -102
- data/guides/Troubleshooting.md +28 -28
- data/lib/Ruby/concurrently/proc/evaluation/error.rb +10 -0
- data/lib/all/concurrently/error.rb +0 -3
- data/lib/all/concurrently/evaluation.rb +8 -12
- data/lib/all/concurrently/event_loop.rb +1 -1
- data/lib/all/concurrently/event_loop/fiber.rb +3 -3
- data/lib/all/concurrently/event_loop/io_selector.rb +1 -1
- data/lib/all/concurrently/event_loop/run_queue.rb +29 -17
- data/lib/all/concurrently/proc.rb +13 -13
- data/lib/all/concurrently/proc/evaluation.rb +29 -29
- data/lib/all/concurrently/proc/evaluation/error.rb +13 -0
- data/lib/all/concurrently/proc/fiber.rb +3 -6
- data/lib/all/concurrently/version.rb +1 -1
- data/lib/all/io.rb +118 -41
- data/lib/all/kernel.rb +82 -29
- data/lib/mruby/concurrently/event_loop/io_selector.rb +46 -0
- data/lib/mruby/kernel.rb +1 -1
- data/mrbgem.rake +28 -17
- data/mruby_builds/build_config.rb +67 -0
- data/perf/Ruby/stage.rb +23 -0
- data/perf/benchmark_call_methods.rb +32 -0
- data/perf/benchmark_call_methods_waiting.rb +52 -0
- data/perf/benchmark_wait_methods.rb +38 -0
- data/perf/mruby/stage.rb +8 -0
- data/perf/profile_await_readable.rb +10 -0
- data/perf/{concurrent_proc_call.rb → profile_call.rb} +1 -5
- data/perf/{concurrent_proc_call_and_forget.rb → profile_call_and_forget.rb} +1 -5
- data/perf/{concurrent_proc_call_detached.rb → profile_call_detached.rb} +1 -5
- data/perf/{concurrent_proc_call_nonblock.rb → profile_call_nonblock.rb} +1 -5
- data/perf/profile_wait.rb +7 -0
- data/perf/stage.rb +47 -0
- data/perf/stage/benchmark.rb +47 -0
- data/perf/stage/benchmark/code_gen.rb +29 -0
- data/perf/stage/benchmark/code_gen/batch.rb +41 -0
- data/perf/stage/benchmark/code_gen/single.rb +38 -0
- metadata +27 -23
- data/ext/mruby/array.rb +0 -19
- data/lib/Ruby/concurrently/error.rb +0 -4
- data/perf/_shared/stage.rb +0 -33
- data/perf/concurrent_proc_calls.rb +0 -49
- data/perf/concurrent_proc_calls_awaiting.rb +0 -48
data/guides/Performance.md
CHANGED
@@ -1,140 +1,338 @@
  # Performance of Concurrently

-
-
-
+ The measurements were executed on an Intel i7-5820K 3.3 GHz running Linux 4.10.
+ Garbage collection was disabled. The benchmark runs the code in batches to
+ reduce the overhead of the benchmark harness.

-
- running Linux 4.10. Garbage collection was disabled.
+ ## Mere Invocation of Concurrent Procs

+ This benchmark compares all call methods of a [concurrent proc][Concurrently::Proc]
+ and a regular proc. The procs itself do nothing. The results represent the
+ baseline for how fast Concurrently is able to work. It can't get any faster
+ than that.

-
-
-
-
-
-
-
-
-
-
-
-
-
-
+ Benchmarks
+ ----------
+     proc.call:
+       test_proc = proc{}
+       batch = Array.new(100)
+
+       while elapsed_seconds < 1
+         batch.each{ test_proc.call }
+       end
+
+     conproc.call:
+       test_proc = concurrent_proc{}
+       batch = Array.new(100)
+
+       while elapsed_seconds < 1
+         batch.each{ test_proc.call }
+       end
+
+     conproc.call_nonblock:
+       test_proc = concurrent_proc{}
+       batch = Array.new(100)
+
+       while elapsed_seconds < 1
+         batch.each{ test_proc.call_nonblock }
+       end
+
+     conproc.call_detached:
+       test_proc = concurrent_proc{}
+       batch = Array.new(100)
+
+       while elapsed_seconds < 1
+         batch.each{ test_proc.call_detached }
+         wait 0
+       end
+
+     conproc.call_and_forget:
+       test_proc = concurrent_proc{}
+       batch = Array.new(100)
+
+       while elapsed_seconds < 1
+         batch.each{ test_proc.call_and_forget }
+         wait 0
+       end
+
+ Results for ruby 2.4.1
+ ----------------------
+     proc.call: 11048400 executions in 1.0000 seconds
+     conproc.call: 734000 executions in 1.0000 seconds
+     conproc.call_nonblock: 857800 executions in 1.0001 seconds
+     conproc.call_detached: 464800 executions in 1.0002 seconds
+     conproc.call_and_forget: 721800 executions in 1.0001 seconds

- Results
-
-
-
- conproc.
- conproc.
- conproc.
-
+ Results for mruby 1.3.0
+ -----------------------
+     proc.call: 4771700 executions in 1.0000 seconds
+     conproc.call: 362000 executions in 1.0002 seconds
+     conproc.call_nonblock: 427400 executions in 1.0000 seconds
+     conproc.call_detached: 188900 executions in 1.0005 seconds
+     conproc.call_and_forget: 383400 executions in 1.0002 seconds
+
+ *conproc.call_detached* and *conproc.call_and_forget* call `wait 0` after each
+ batch so the scheduled evaluations have [a chance to run]
+ [Troubleshooting/A_concurrent_proc_is_scheduled_but_never_run]. Otherwise,
+ their evaluations were merely scheduled and not started and concluded like it
+ is happening in the other cases. This makes the benchmarks comparable.

  Explanation of the results:

  * The difference between a regular and a concurrent proc is caused by
    concurrent procs being evaluated in a fiber and doing some bookkeeping.
- * Of the two methods evaluating the proc in the foreground
- is faster than
-
-
-
+ * Of the two methods evaluating the proc in the foreground
+   [Concurrently::Proc#call_nonblock][] is faster than [Concurrently::Proc#call][],
+   because the implementation of [Concurrently::Proc#call][] uses
+   [Concurrently::Proc#call_nonblock][] and does a little bit more on top.
+ * Of the two methods evaluating the proc in the background,
+   [Concurrently::Proc#call_and_forget][] is faster because
+   [Concurrently::Proc#call_detached][] additionally creates an evaluation
    object.
- * Running concurrent procs in the background is
-
-
-
-
- [Troubleshooting/A_concurrent_proc_is_scheduled_but_never_run].
- All this leads to the creation of a new fiber for each evaluation. This is
- responsible for the largest chunk of time needed during the measurement.
+ * Running concurrent procs in the background is slower than running them in the
+   foreground because their evaluations need to be scheduled.
+ * Overall, mruby is about half as fast as Ruby.
+
+ You can run this benchmark yourself by executing:

-
+     $ rake benchmark[call_methods]

- $ perf/concurrent_proc_calls.rb

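The `elapsed_seconds` loops in the new Benchmarks sections are shorthand from the gem's docs, not runnable code on their own. A minimal sketch of such a batching harness in plain Ruby follows; the monotonic clock, the duration, and the batch size are illustrative choices of this sketch, not the gem's actual harness:

```ruby
# Minimal sketch of a batching benchmark harness like the one the
# Benchmarks sections above describe. `elapsed_seconds` is approximated
# with a monotonic clock; duration and batch size are arbitrary choices.
def batched_executions(duration: 0.05, batch_size: 100)
  test_proc = proc {}                    # the proc under test does nothing
  batch = Array.new(batch_size)
  executions = 0
  started_at = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  while Process.clock_gettime(Process::CLOCK_MONOTONIC) - started_at < duration
    batch.each { test_proc.call }        # run one whole batch per clock check
    executions += batch_size
  end
  executions
end

puts batched_executions
```

Checking the clock only once per batch keeps the timing calls out of the measured loop, which is what "reduce the overhead of the benchmark harness" refers to.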
+ ## Mere Waiting

-
+ This benchmark measures the number of times per second we can

-
-
+ * wait an amount of time,
+ * await readability of an IO object and
+ * await writability of an IO object.

-
-
-
-
-
-
-
-
+ Like with calling a proc doing nothing this defines what maximum performance
+ to expect in these cases.
+
+ Benchmarks
+ ----------
+     wait:
+       test_proc = proc do
+         wait 0 # schedule the proc be resumed ASAP
+       end
+
+       batch = Array.new(100)
+
+       while elapsed_seconds < 1
+         batch.each{ test_proc.call }
+       end
+
+     await_readable:
+       test_proc = proc do |r,w|
+         r.await_readable
+       end
+
+       batch = Array.new(100) do |idx|
+         IO.pipe.tap{ |r,w| w.write '0' }
+       end
+
+       while elapsed_seconds < 1
+         batch.each{ |*args| test_proc.call(*args) }
+       end
+
+     await_writable:
+       test_proc = proc do |r,w|
+         w.await_writable
+       end
+
+       batch = Array.new(100) do |idx|
+         IO.pipe
+       end
+
+       while elapsed_seconds < 1
+         batch.each{ |*args| test_proc.call(*args) }
+       end
+
+ Results for ruby 2.4.1
+ ----------------------
+     wait: 291100 executions in 1.0001 seconds
+     await_readable: 147800 executions in 1.0005 seconds
+     await_writable: 148300 executions in 1.0003 seconds

- Results
-
-
-
-
- conproc.call_detached: 114882 executions in 1.0000 seconds
- conproc.call_and_forget: 117425 executions in 1.0000 seconds
+ Results for mruby 1.3.0
+ -----------------------
+     wait: 104300 executions in 1.0002 seconds
+     await_readable: 132600 executions in 1.0006 seconds
+     await_writable: 130500 executions in 1.0005 seconds

  Explanation of the results:

- *
-
-
-
-
-
-
+ * In Ruby, waiting an amount of time is much faster than awaiting readiness of
+   I/O because it does not need to enter the underlying poll call.
+ * In mruby, awaiting readiness of I/O is actually faster than just waiting an
+   amount of time. Scheduling an evaluation to resume at a specific time
+   involves amongst other things inserting it into an array at the right index.
+   mruby implements many Array methods in plain ruby which makes them noticeably
+   slower.

-
+ You can run this benchmark yourself by executing:

+     $ rake benchmark[wait_methods]

- ## Scheduling (Concurrent) Procs and Evaluating Them in Batches

-
- 100 evaluations will then be evaluated in one batch during the next iteration
- of the event loop.
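The readiness that `await_readable` and `await_writable` wait for can be demonstrated with plain Ruby's `IO.select`, so the sketch below runs without the gem. The one-byte write mirrors the `IO.pipe.tap{ |r,w| w.write '0' }` setup used by the benchmark above:

```ruby
# The readiness awaited by the benchmarks above, shown with plain Ruby's
# IO.select instead of Concurrently's await_readable/await_writable.
# A pipe with one byte written is immediately readable; a fresh pipe
# with buffer space is immediately writable.
r, w = IO.pipe
w.write '0'
readable, writable, = IO.select([r], [w], nil, 0)
puts readable.include?(r)   # true: the written byte makes r readable
puts writable.include?(w)   # true: the pipe buffer still has room
```

Concurrently's awaiting methods sit on top of exactly this kind of readiness check inside its event loop's IO selector.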
+ ## Waiting Inside Concurrent Procs

-
-
+ Concurrent procs show different performance depending on how they are called
+ and if their evaluation needs to wait or not. This benchmark explores these
+ differences and serves as a guide which call method provides the best
+ performance in these scenarios.

-
-
-
-
-
-
-
-
+ Benchmarks
+ ----------
+     call:
+       test_proc = concurrent_proc{}
+       batch = Array.new(100)
+
+       while elapsed_seconds < 1
+         batch.each{ test_proc.call }
+         # Concurrently::Proc#call already synchronizes the results of evaluations
+       end
+
+     call_nonblock:
+       test_proc = concurrent_proc{}
+       batch = Array.new(100)
+
+       while elapsed_seconds < 1
+         batch.each{ test_proc.call_nonblock }
+       end
+
+     call_detached:
+       test_proc = concurrent_proc{}
+       batch = Array.new(100)
+
+       while elapsed_seconds < 1
+         evaluations = batch.map{ test_proc.call_detached }
+         evaluations.each{ |evaluation| evaluation.await_result }
+       end
+
+     call_and_forget:
+       test_proc = concurrent_proc{}
+       batch = Array.new(100)
+
+       while elapsed_seconds < 1
+         batch.each{ test_proc.call_and_forget }
+         wait 0
+       end
+
+     waiting call:
+       test_proc = concurrent_proc{ wait 0 }
+       batch = Array.new(100)
+
+       while elapsed_seconds < 1
+         batch.each{ test_proc.call }
+         # Concurrently::Proc#call already synchronizes the results of evaluations
+       end
+
+     waiting call_nonblock:
+       test_proc = concurrent_proc{ wait 0 }
+       batch = Array.new(100)
+
+       while elapsed_seconds < 1
+         evaluations = batch.map{ test_proc.call_nonblock }
+         evaluations.each{ |evaluation| evaluation.await_result }
+       end
+
+     waiting call_detached:
+       test_proc = concurrent_proc{ wait 0 }
+       batch = Array.new(100)
+
+       while elapsed_seconds < 1
+         evaluations = batch.map{ test_proc.call_detached }
+         evaluations.each{ |evaluation| evaluation.await_result }
+       end
+
+     waiting call_and_forget:
+       test_proc = concurrent_proc{ wait 0 }
+       batch = Array.new(100)
+
+       while elapsed_seconds < 1
+         batch.each{ test_proc.call_and_forget }
+         wait 0
+       end
+
+ Results for ruby 2.4.1
+ ----------------------
+     call: 687600 executions in 1.0001 seconds
+     call_nonblock: 855600 executions in 1.0001 seconds
+     call_detached: 426400 executions in 1.0000 seconds
+     call_and_forget: 722200 executions in 1.0000 seconds
+     waiting call: 90300 executions in 1.0005 seconds
+     waiting call_nonblock: 191800 executions in 1.0001 seconds
+     waiting call_detached: 190300 executions in 1.0003 seconds
+     waiting call_and_forget: 207100 executions in 1.0001 seconds

- Results
-
-
-
-
-
-
+ Results for mruby 1.3.0
+ -----------------------
+     call: 319900 executions in 1.0003 seconds
+     call_nonblock: 431700 executions in 1.0002 seconds
+     call_detached: 158400 executions in 1.0006 seconds
+     call_and_forget: 397700 executions in 1.0002 seconds
+     waiting call: 49900 executions in 1.0015 seconds
+     waiting call_nonblock: 74600 executions in 1.0001 seconds
+     waiting call_detached: 73300 executions in 1.0006 seconds
+     waiting call_and_forget: 85200 executions in 1.0008 seconds

+ `wait 0` is used as a stand in for all wait methods. Measurements of concurrent
+ procs doing nothing are included for comparision.

  Explanation of the results:

- *
-
-
+ * [Concurrently::Proc#call][] is the slowest if the concurrent proc needs to
+   wait. Immediately synchronizing the result for each and every evaluation
+   introduces a noticeable overhead.
+ * [Concurrently::Proc#call_nonblock][] and [Concurrently::Proc#call_detached][]
+   perform similarly. When started [Concurrently::Proc#call_nonblock][] skips
+   some work related to waiting that [Concurrently::Proc#call_detached][] is
+   already doing. Now, when the concurrent proc actually waits
+   [Concurrently::Proc#call_nonblock][] needs to make up for this skipped work.
+   This puts its performance in the same region as the one of
+   [Concurrently::Proc#call_detached][].
+ * [Concurrently::Proc#call_and_forget][] is the fastest way to wait inside a
+   concurrent proc. It comes at the cost that the result of the evaluation
+   cannot be returned.
+
+ To find the fastest way to evaluate a proc it has to be considered if the proc
+ does or does not wait most of the time and if its result is needed:
+
+ <table>
+ <tr>
+ <th></th>
+ <th>result needed</th>
+ <th>result not needed</th>
+ </tr>
+ <tr>
+ <th>waits almost always</th>
+ <td><code>#call_nonblock</code> or<br/><code>#call_detached</code></td>
+ <td><code>#call_and_forget</code></td>
+ </tr>
+ <tr>
+ <th>waits almost never</th>
+ <td><code>#call_nonblock</code></td>
+ <td><code>#call_nonblock</code></td>
+ </tr>
+ </table>

-
-
-
-
-
+ [Kernel#concurrently][] calls [Concurrently::Proc#call_detached][] under the
+ hood as a reasonable default. [Concurrently::Proc#call_detached][] has the
+ easiest interface and provides good performance especially in the most common
+ use case of Concurrently: waiting for an event to happen.
+ [Concurrently::Proc#call_nonblock][] and [Concurrently::Proc#call_and_forget][]
+ are there to squeeze out more performance in some edge cases.

- You can run
+ You can run this benchmark yourself by executing:

- $
+     $ rake benchmark[call_methods_waiting]


- [
- [
- [
+ [Troubleshooting/A_concurrent_proc_is_scheduled_but_never_run]: http://www.rubydoc.info/github/christopheraue/m-ruby-concurrently/file/guides/Troubleshooting.md#A_concurrent_proc_is_scheduled_but_never_run
+ [Concurrently::Proc]: http://www.rubydoc.info/github/christopheraue/m-ruby-concurrently/Concurrently/Proc
+ [Concurrently::Proc#call]: http://www.rubydoc.info/github/christopheraue/m-ruby-concurrently/Concurrently/Proc#call-instance_method
+ [Concurrently::Proc#call_nonblock]: http://www.rubydoc.info/github/christopheraue/m-ruby-concurrently/Concurrently/Proc#call_nonblock-instance_method
+ [Concurrently::Proc#call_detached]: http://www.rubydoc.info/github/christopheraue/m-ruby-concurrently/Concurrently/Proc#call_detached-instance_method
+ [Concurrently::Proc#call_and_forget]: http://www.rubydoc.info/github/christopheraue/m-ruby-concurrently/Concurrently/Proc#call_and_forget-instance_method
+ [Kernel#concurrently]: http://www.rubydoc.info/github/christopheraue/m-ruby-concurrently/Kernel#concurrently-instance_method
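The call-method decision table at the end of the Performance guide can be condensed into a tiny helper. The returned symbols name the gem's call methods; the helper itself is plain illustrative Ruby written for this summary, not part of the gem:

```ruby
# The call-method decision table condensed into an illustrative helper.
# The symbols name the gem's call methods; the logic only encodes the table.
def preferred_call_method(result_needed:, waits_mostly:)
  return :call_nonblock unless waits_mostly   # waits almost never
  # waits almost always; when the result is needed, :call_detached
  # performs about the same and has the simpler interface
  result_needed ? :call_nonblock : :call_and_forget
end

puts preferred_call_method(result_needed: true,  waits_mostly: false)  # call_nonblock
puts preferred_call_method(result_needed: false, waits_mostly: true)   # call_and_forget
```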
data/guides/Troubleshooting.md
CHANGED
@@ -3,7 +3,7 @@
  To get an idea about the inner workings of Concurrently have a look at the
  [Flow of control][] section in the overview.

- ##
+ ## An evaluation is scheduled but never run

  Consider the following script:

@@ -23,11 +23,11 @@ Running it will only print:
  Unicorns!
  ```

- `concurrently{}` is a shortcut for `concurrent_proc{}.
+ `concurrently{}` is a shortcut for `concurrent_proc{}.call_detached`
  which in turn does not evaluate its code right away but schedules it to run
- during the next iteration of the event loop. But, since the
- not await anything the event loop has never been entered and the
-
+ during the next iteration of the event loop. But, since the main evaluation did
+ not await anything the event loop has never been entered and the concurrent
+ evaluation has never been started.

  A more subtle variation of this behavior occurs in the following scenario:

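The pitfall this hunk documents can be reduced to plain Ruby fibers, the primitive Concurrently builds on. This is an analogy rather than the gem's API: a fiber's code runs only when something resumes it, just as a `concurrently` block only runs once the event loop is entered by awaiting something.

```ruby
# Analogy for "scheduled but never run" using plain Ruby fibers:
# creating the fiber runs nothing; its code runs only on resume,
# like a `concurrently` block only runs once the event loop is entered.
output = []
child = Fiber.new { output << "Rainbows!" }
output << "Unicorns!"
p output        # only "Unicorns!" -- the fiber was never resumed
child.resume    # the analogue of entering the event loop
p output        # now also contains "Rainbows!"
```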
@@ -49,10 +49,10 @@ Running it will also only print:
  Unicorns!
  ```

- This time, the
+ This time, the main evaluation does await something, namely the end of a one
  second time frame. Because of this, the evaluation of the `concurrently` block
  is indeed started and immediately waits for two seconds. After one second the
-
+ main evaluation is resumed and exits. The `concurrently` block is never awoken
  again from its now eternal beauty sleep.

  ## A call is blocking the entire execution.
@@ -83,8 +83,8 @@ r.await_readable
  r.readpartial 32
  ```

- we suspend the
- `concurrently` block and once there is something to read from `r` the
+ we suspend the main evaluation, switch to the event loop which runs the
+ `concurrently` block and once there is something to read from `r` the main
  evaluation is resumed.

  This approach is not perfect. It is not very efficient if we do not need to
@@ -105,28 +105,28 @@ end

  ## The event loop is jammed by too many or too expensive evaluations

- Let's talk about a concurrent
+ Let's talk about a concurrent evaluation with an infinite loop:

  ```ruby
- evaluation =
+ evaluation = concurrently do
    loop do
      puts "To infinity! And beyond!"
    end
- end
+ end

  concurrently do
    evaluation.conclude_to :cancelled
  end
  ```

- When the
- never finishes. The event loop is never entered again and the other
-
+ When the loop evaluation is scheduled to run it runs and runs and runs and
+ never finishes. The event loop is never entered again and the other evaluation
+ concluding the evaluation is never started.

  A less extreme example is something like:

  ```ruby
-
+ concurrently do
    loop do
      wait 0.1
      puts "timer triggered at: #{Time.now.strftime('%H:%M:%S.%L')}"
@@ -134,7 +134,7 @@ concurrent_proc do
      sleep 1 # defers the entire event loop
    end
  end
- end.
+ end.await_result

  # => timer triggered at: 16:08:17.704
  # => timer triggered at: 16:08:18.705
@@ -175,20 +175,20 @@ managing IOs (e.g. closing them).

  ## Errors tear down the event loop

- Every
-
-
-
-
+ Every evaluation rescues the following errors: `NoMemoryError`, `ScriptError`,
+ `SecurityError`, `StandardError` and `SystemStackError`. These are all errors
+ that should not have an immediate influence on other evaluations or the
+ application as a whole. They will not leak to the event loop and will not tear
+ it down.

- All other errors happening
-
-
-
+ All other errors happening during an evaluation *will* tear down the event
+ loop. These error types are: `SignalException`, `SystemExit` and the general
+ `Exception`. In such a case the event loop exits by re-raising the causing
+ error.

  If your application rescues the error when the event loop is teared down
  and continues running (irb does this, for example) it will do so with a
- [reinitialized event loop]
+ [reinitialized event loop][Concurrently::EventLoop#reinitialize!].

  ## Using Plain Fibers

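The grouping in the errors hunk loosely mirrors Ruby's own exception hierarchy: `SystemExit` and `SignalException` are direct `Exception` subclasses, so they sit outside the `StandardError` net that a bare `rescue` provides. A plain-Ruby illustration, no gem required:

```ruby
# Why only some errors tear the loop down: SystemExit (like
# SignalException) is not a StandardError, so a bare `rescue` -- the
# kind of net used for containable errors -- does not catch it.
contained = begin
  raise StandardError, "boom"
rescue => e                  # bare rescue catches StandardError only
  "rescued #{e.class}"
end
puts contained               # rescued StandardError

fatal = begin
  raise SystemExit
rescue => e                  # does not match: SystemExit is no StandardError
  "rescued"
rescue SystemExit
  "tears down the loop"
end
puts fatal                   # tears down the loop
```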
@@ -196,7 +196,7 @@ In principle, you can safely use plain ruby fibers alongside concurrent procs.
  Just make sure you are exclusively operating on these fibers to not
  accidentally interfere with the fibers managed by Concurrently. Be
  especially careful with `Fiber.yield` and `Fiber.current` inside a concurrent
-
+ evaluation.

  ## Fiber-local variables are treated as thread-local
