concurrently 1.0.1 → 1.1.0

Files changed (51)
  1. checksums.yaml +4 -4
  2. data/.gitignore +1 -1
  3. data/.travis.yml +8 -3
  4. data/README.md +70 -60
  5. data/RELEASE_NOTES.md +16 -1
  6. data/Rakefile +98 -14
  7. data/concurrently.gemspec +16 -12
  8. data/ext/mruby/io.rb +1 -1
  9. data/guides/Overview.md +191 -66
  10. data/guides/Performance.md +300 -102
  11. data/guides/Troubleshooting.md +28 -28
  12. data/lib/Ruby/concurrently/proc/evaluation/error.rb +10 -0
  13. data/lib/all/concurrently/error.rb +0 -3
  14. data/lib/all/concurrently/evaluation.rb +8 -12
  15. data/lib/all/concurrently/event_loop.rb +1 -1
  16. data/lib/all/concurrently/event_loop/fiber.rb +3 -3
  17. data/lib/all/concurrently/event_loop/io_selector.rb +1 -1
  18. data/lib/all/concurrently/event_loop/run_queue.rb +29 -17
  19. data/lib/all/concurrently/proc.rb +13 -13
  20. data/lib/all/concurrently/proc/evaluation.rb +29 -29
  21. data/lib/all/concurrently/proc/evaluation/error.rb +13 -0
  22. data/lib/all/concurrently/proc/fiber.rb +3 -6
  23. data/lib/all/concurrently/version.rb +1 -1
  24. data/lib/all/io.rb +118 -41
  25. data/lib/all/kernel.rb +82 -29
  26. data/lib/mruby/concurrently/event_loop/io_selector.rb +46 -0
  27. data/lib/mruby/kernel.rb +1 -1
  28. data/mrbgem.rake +28 -17
  29. data/mruby_builds/build_config.rb +67 -0
  30. data/perf/Ruby/stage.rb +23 -0
  31. data/perf/benchmark_call_methods.rb +32 -0
  32. data/perf/benchmark_call_methods_waiting.rb +52 -0
  33. data/perf/benchmark_wait_methods.rb +38 -0
  34. data/perf/mruby/stage.rb +8 -0
  35. data/perf/profile_await_readable.rb +10 -0
  36. data/perf/{concurrent_proc_call.rb → profile_call.rb} +1 -5
  37. data/perf/{concurrent_proc_call_and_forget.rb → profile_call_and_forget.rb} +1 -5
  38. data/perf/{concurrent_proc_call_detached.rb → profile_call_detached.rb} +1 -5
  39. data/perf/{concurrent_proc_call_nonblock.rb → profile_call_nonblock.rb} +1 -5
  40. data/perf/profile_wait.rb +7 -0
  41. data/perf/stage.rb +47 -0
  42. data/perf/stage/benchmark.rb +47 -0
  43. data/perf/stage/benchmark/code_gen.rb +29 -0
  44. data/perf/stage/benchmark/code_gen/batch.rb +41 -0
  45. data/perf/stage/benchmark/code_gen/single.rb +38 -0
  46. metadata +27 -23
  47. data/ext/mruby/array.rb +0 -19
  48. data/lib/Ruby/concurrently/error.rb +0 -4
  49. data/perf/_shared/stage.rb +0 -33
  50. data/perf/concurrent_proc_calls.rb +0 -49
  51. data/perf/concurrent_proc_calls_awaiting.rb +0 -48
@@ -1,140 +1,338 @@
 # Performance of Concurrently
 
-Overall, Concurrently is able to schedule around 100k to 200k concurrent
-evaluations per second. What to expect exactly is narrowed down in the
-following benchmarks.
+The measurements were executed on an Intel i7-5820K 3.3 GHz running Linux 4.10.
+Garbage collection was disabled. The benchmark runs the code in batches to
+reduce the overhead of the benchmark harness.
 
-The measurements were executed with Ruby 2.4.1 on an Intel i7-5820K 3.3 GHz
-running Linux 4.10. Garbage collection was disabled.
+## Mere Invocation of Concurrent Procs
 
+This benchmark compares all call methods of a [concurrent proc][Concurrently::Proc]
+and a regular proc. The procs themselves do nothing. The results represent the
+baseline for how fast Concurrently is able to work. It can't get any faster
+than that.
 
-## Calling a (Concurrent) Proc
-
-This benchmark compares all `#call` methods of a concurrent proc and a regular
-proc. The mere invocation of the method is measured. The proc itself does
-nothing.
-
-  Benchmarked Code
-  ----------------
-  proc = proc{}
-  conproc = concurrent_proc{}
-
-  while elapsed_seconds < 1
-    # CODE #
-  end
+  Benchmarks
+  ----------
+  proc.call:
+    test_proc = proc{}
+    batch = Array.new(100)
+
+    while elapsed_seconds < 1
+      batch.each{ test_proc.call }
+    end
+
+  conproc.call:
+    test_proc = concurrent_proc{}
+    batch = Array.new(100)
+
+    while elapsed_seconds < 1
+      batch.each{ test_proc.call }
+    end
+
+  conproc.call_nonblock:
+    test_proc = concurrent_proc{}
+    batch = Array.new(100)
+
+    while elapsed_seconds < 1
+      batch.each{ test_proc.call_nonblock }
+    end
+
+  conproc.call_detached:
+    test_proc = concurrent_proc{}
+    batch = Array.new(100)
+
+    while elapsed_seconds < 1
+      batch.each{ test_proc.call_detached }
+      wait 0
+    end
+
+  conproc.call_and_forget:
+    test_proc = concurrent_proc{}
+    batch = Array.new(100)
+
+    while elapsed_seconds < 1
+      batch.each{ test_proc.call_and_forget }
+      wait 0
+    end
+
+  Results for ruby 2.4.1
+  ----------------------
+  proc.call: 11048400 executions in 1.0000 seconds
+  conproc.call: 734000 executions in 1.0000 seconds
+  conproc.call_nonblock: 857800 executions in 1.0001 seconds
+  conproc.call_detached: 464800 executions in 1.0002 seconds
+  conproc.call_and_forget: 721800 executions in 1.0001 seconds
 
-  Results
-  -------
-  # CODE #
-  proc.call: 5423106 executions in 1.0000 seconds
-  conproc.call: 662314 executions in 1.0000 seconds
-  conproc.call_nonblock: 769164 executions in 1.0000 seconds
-  conproc.call_detached: 269385 executions in 1.0000 seconds
-  conproc.call_and_forget: 306099 executions in 1.0000 seconds
+  Results for mruby 1.3.0
+  -----------------------
+  proc.call: 4771700 executions in 1.0000 seconds
+  conproc.call: 362000 executions in 1.0002 seconds
+  conproc.call_nonblock: 427400 executions in 1.0000 seconds
+  conproc.call_detached: 188900 executions in 1.0005 seconds
+  conproc.call_and_forget: 383400 executions in 1.0002 seconds
+
+*conproc.call_detached* and *conproc.call_and_forget* call `wait 0` after each
+batch so the scheduled evaluations have [a chance to run]
+[Troubleshooting/A_concurrent_proc_is_scheduled_but_never_run]. Otherwise,
+their evaluations would merely be scheduled but not started and concluded as
+in the other cases. This makes the benchmarks comparable.
 
 Explanation of the results:
 
 * The difference between a regular and a concurrent proc is caused by
   concurrent procs being evaluated in a fiber and doing some bookkeeping.
-* Of the two methods evaluating the proc in the foreground `#call_nonblock`
-  is faster than `#call`, because the implementation of `#call` uses
-  `#call_nonblock` and does a little bit more on top.
-* Of the two methods evaluating the proc in the background, `#call_and_forget`
-  is faster because `#call_detached` additionally creates an evaluation
+* Of the two methods evaluating the proc in the foreground
+  [Concurrently::Proc#call_nonblock][] is faster than [Concurrently::Proc#call][],
+  because the implementation of [Concurrently::Proc#call][] uses
+  [Concurrently::Proc#call_nonblock][] and does a little bit more on top.
+* Of the two methods evaluating the proc in the background,
+  [Concurrently::Proc#call_and_forget][] is faster because
+  [Concurrently::Proc#call_detached][] additionally creates an evaluation
   object.
-* Running concurrent procs in the background is considerably slower because
-  in this setup `#call_detached` and `#call_and_forget` cannot reuse fibers.
-  Their evaluation is merely scheduled and not started and concluded. This
-  would happen during the next iteration of the event loop. But since the
-  `while` loop never waits for something [the loop is never entered]
-  [Troubleshooting/A_concurrent_proc_is_scheduled_but_never_run].
-  All this leads to the creation of a new fiber for each evaluation. This is
-  responsible for the largest chunk of time needed during the measurement.
+* Running concurrent procs in the background is slower than running them in the
+  foreground because their evaluations need to be scheduled.
+* Overall, mruby is about half as fast as Ruby.
+
+You can run this benchmark yourself by executing:
 
-You can run the benchmark yourself by running the [script][perf/concurrent_proc_calls.rb]:
+    $ rake benchmark[call_methods]
 
-    $ perf/concurrent_proc_calls.rb
 
+## Mere Waiting
 
-## Scheduling (Concurrent) Procs
+This benchmark measures the number of times per second we can
 
-This benchmark is closer to the real usage of Concurrently. It includes waiting
-inside a concurrent proc.
+* wait an amount of time,
+* await readability of an IO object and
+* await writability of an IO object.
 
-  Benchmarked Code
-  ----------------
-  conproc = concurrent_proc{ wait 0 }
-
-  while elapsed_seconds < 1
-    1.times{ # CODE # }
-    wait 0 # to enter the event loop
-  end
+As with calling a proc that does nothing, this defines the maximum performance
+to expect in these cases.
+
+  Benchmarks
+  ----------
+  wait:
+    test_proc = proc do
+      wait 0 # schedule the proc to be resumed ASAP
+    end
+
+    batch = Array.new(100)
+
+    while elapsed_seconds < 1
+      batch.each{ test_proc.call }
+    end
+
+  await_readable:
+    test_proc = proc do |r,w|
+      r.await_readable
+    end
+
+    batch = Array.new(100) do |idx|
+      IO.pipe.tap{ |r,w| w.write '0' }
+    end
+
+    while elapsed_seconds < 1
+      batch.each{ |*args| test_proc.call(*args) }
+    end
+
+  await_writable:
+    test_proc = proc do |r,w|
+      w.await_writable
+    end
+
+    batch = Array.new(100) do |idx|
+      IO.pipe
+    end
+
+    while elapsed_seconds < 1
+      batch.each{ |*args| test_proc.call(*args) }
+    end
+
+  Results for ruby 2.4.1
+  ----------------------
+  wait: 291100 executions in 1.0001 seconds
+  await_readable: 147800 executions in 1.0005 seconds
+  await_writable: 148300 executions in 1.0003 seconds
 
-  Results
-  -------
-  # CODE #
-  conproc.call: 72444 executions in 1.0000 seconds
-  conproc.call_nonblock: 103468 executions in 1.0000 seconds
-  conproc.call_detached: 114882 executions in 1.0000 seconds
-  conproc.call_and_forget: 117425 executions in 1.0000 seconds
+  Results for mruby 1.3.0
+  -----------------------
+  wait: 104300 executions in 1.0002 seconds
+  await_readable: 132600 executions in 1.0006 seconds
+  await_writable: 130500 executions in 1.0005 seconds
 
 Explanation of the results:
 
-* Because scheduling is now the dominant factor, there is a large drop in the
-  number of executions compared to just calling the procs. This makes the
-  number of executions when calling the proc in a non-blocking way comparable.
-* Calling the proc in a blocking manner with `#call` is costly. A lot of time
-  is spent waiting for the result.
-
-You can run the benchmark yourself by running the [script][perf/concurrent_proc_calls_awaiting.rb]:
+* In Ruby, waiting an amount of time is much faster than awaiting readiness of
+  I/O because it does not need to enter the underlying poll call.
+* In mruby, awaiting readiness of I/O is actually faster than just waiting an
+  amount of time. Scheduling an evaluation to resume at a specific time
+  involves, amongst other things, inserting it into an array at the right
+  index. mruby implements many Array methods in plain ruby, which makes them
+  noticeably slower.
 
-    $ perf/concurrent_proc_calls_awaiting.rb
+You can run this benchmark yourself by executing:
 
+    $ rake benchmark[wait_methods]
 
-## Scheduling (Concurrent) Procs and Evaluating Them in Batches
 
-Additional to waiting inside a proc, it calls the proc 100 times at once. All
-100 evaluations will then be evaluated in one batch during the next iteration
-of the event loop.
+## Waiting Inside Concurrent Procs
 
-This is a simulation for a server receiving multiple messages during one
-iteration of the event loop and processing all of them in one go.
+Concurrent procs show different performance depending on how they are called
+and whether their evaluation needs to wait. This benchmark explores these
+differences and serves as a guide to which call method provides the best
+performance in these scenarios.
 
-  Benchmarked Code
-  ----------------
-  conproc = concurrent_proc{ wait 0 }
-
-  while elapsed_seconds < 1
-    100.times{ # CODE # }
-    wait 0 # to enter the event loop
-  end
+  Benchmarks
+  ----------
+  call:
+    test_proc = concurrent_proc{}
+    batch = Array.new(100)
+
+    while elapsed_seconds < 1
+      batch.each{ test_proc.call }
+      # Concurrently::Proc#call already synchronizes the results of evaluations
+    end
+
+  call_nonblock:
+    test_proc = concurrent_proc{}
+    batch = Array.new(100)
+
+    while elapsed_seconds < 1
+      batch.each{ test_proc.call_nonblock }
+    end
+
+  call_detached:
+    test_proc = concurrent_proc{}
+    batch = Array.new(100)
+
+    while elapsed_seconds < 1
+      evaluations = batch.map{ test_proc.call_detached }
+      evaluations.each{ |evaluation| evaluation.await_result }
+    end
+
+  call_and_forget:
+    test_proc = concurrent_proc{}
+    batch = Array.new(100)
+
+    while elapsed_seconds < 1
+      batch.each{ test_proc.call_and_forget }
+      wait 0
+    end
+
+  waiting call:
+    test_proc = concurrent_proc{ wait 0 }
+    batch = Array.new(100)
+
+    while elapsed_seconds < 1
+      batch.each{ test_proc.call }
+      # Concurrently::Proc#call already synchronizes the results of evaluations
+    end
+
+  waiting call_nonblock:
+    test_proc = concurrent_proc{ wait 0 }
+    batch = Array.new(100)
+
+    while elapsed_seconds < 1
+      evaluations = batch.map{ test_proc.call_nonblock }
+      evaluations.each{ |evaluation| evaluation.await_result }
+    end
+
+  waiting call_detached:
+    test_proc = concurrent_proc{ wait 0 }
+    batch = Array.new(100)
+
+    while elapsed_seconds < 1
+      evaluations = batch.map{ test_proc.call_detached }
+      evaluations.each{ |evaluation| evaluation.await_result }
+    end
+
+  waiting call_and_forget:
+    test_proc = concurrent_proc{ wait 0 }
+    batch = Array.new(100)
+
+    while elapsed_seconds < 1
+      batch.each{ test_proc.call_and_forget }
+      wait 0
+    end
+
+  Results for ruby 2.4.1
+  ----------------------
+  call: 687600 executions in 1.0001 seconds
+  call_nonblock: 855600 executions in 1.0001 seconds
+  call_detached: 426400 executions in 1.0000 seconds
+  call_and_forget: 722200 executions in 1.0000 seconds
+  waiting call: 90300 executions in 1.0005 seconds
+  waiting call_nonblock: 191800 executions in 1.0001 seconds
+  waiting call_detached: 190300 executions in 1.0003 seconds
+  waiting call_and_forget: 207100 executions in 1.0001 seconds
 
-  Results
-  -------
-  # CODE #
-  conproc.call: 76300 executions in 1.0006 seconds
-  conproc.call_nonblock: 186200 executions in 1.0002 seconds
-  conproc.call_detached: 180200 executions in 1.0000 seconds
-  conproc.call_and_forget: 193500 executions in 1.0004 seconds
+  Results for mruby 1.3.0
+  -----------------------
+  call: 319900 executions in 1.0003 seconds
+  call_nonblock: 431700 executions in 1.0002 seconds
+  call_detached: 158400 executions in 1.0006 seconds
+  call_and_forget: 397700 executions in 1.0002 seconds
+  waiting call: 49900 executions in 1.0015 seconds
+  waiting call_nonblock: 74600 executions in 1.0001 seconds
+  waiting call_detached: 73300 executions in 1.0006 seconds
+  waiting call_and_forget: 85200 executions in 1.0008 seconds
 
+`wait 0` is used as a stand-in for all wait methods. Measurements of concurrent
+procs doing nothing are included for comparison.
 
 Explanation of the results:
 
-* `#call` does not profit from batching due to its synchronizing nature.
-* The other methods show an increased throughput compared to running just a
-  single evaluation per event loop iteration.
+* [Concurrently::Proc#call][] is the slowest if the concurrent proc needs to
+  wait. Immediately synchronizing the result for each and every evaluation
+  introduces a noticeable overhead.
+* [Concurrently::Proc#call_nonblock][] and [Concurrently::Proc#call_detached][]
+  perform similarly. When started, [Concurrently::Proc#call_nonblock][] skips
+  some work related to waiting that [Concurrently::Proc#call_detached][] is
+  already doing. When the concurrent proc then actually waits,
+  [Concurrently::Proc#call_nonblock][] needs to make up for this skipped work.
+  This puts its performance in the same region as that of
+  [Concurrently::Proc#call_detached][].
+* [Concurrently::Proc#call_and_forget][] is the fastest way to wait inside a
+  concurrent proc. It comes at the cost that the result of the evaluation
+  cannot be returned.
+
+To find the fastest way to evaluate a proc, consider whether the proc waits
+most of the time and whether its result is needed:
+
+<table>
+  <tr>
+    <th></th>
+    <th>result needed</th>
+    <th>result not needed</th>
+  </tr>
+  <tr>
+    <th>waits almost always</th>
+    <td><code>#call_nonblock</code> or<br/><code>#call_detached</code></td>
+    <td><code>#call_and_forget</code></td>
+  </tr>
+  <tr>
+    <th>waits almost never</th>
+    <td><code>#call_nonblock</code></td>
+    <td><code>#call_nonblock</code></td>
+  </tr>
+</table>
 
-The result of this benchmark is the upper bound for how many concurrent
-evaluations Concurrently is able to run per second. The number of executions
-does not change much with a varying batch size. Larger batches (e.g. 200+)
-gradually start to get a bit slower. A batch of 1000 evaluations still handles
-around 140k executions.
+[Kernel#concurrently][] calls [Concurrently::Proc#call_detached][] under the
+hood as a reasonable default. [Concurrently::Proc#call_detached][] has the
+easiest interface and provides good performance, especially in the most common
+use case of Concurrently: waiting for an event to happen.
+[Concurrently::Proc#call_nonblock][] and [Concurrently::Proc#call_and_forget][]
+are there to squeeze out more performance in some edge cases.
 
-You can run the benchmark yourself by running the [script][perf/concurrent_proc_calls_awaiting.rb]:
+You can run this benchmark yourself by executing:
 
-    $ perf/concurrent_proc_calls_awaiting.rb 100
+    $ rake benchmark[call_methods_waiting]
 
 
-[perf/concurrent_proc_calls.rb]: https://github.com/christopheraue/m-ruby-concurrently/blob/master/perf/concurrent_proc_calls.rb
-[perf/concurrent_proc_calls_awaiting.rb]: https://github.com/christopheraue/m-ruby-concurrently/blob/master/perf/concurrent_proc_calls_awaiting.rb
-[Troubleshooting/A_concurrent_proc_is_scheduled_but_never_run]: http://www.rubydoc.info/github/christopheraue/m-ruby-concurrently/file/guides/Troubleshooting.md#A_concurrent_proc_is_scheduled_but_never_run
+[Troubleshooting/A_concurrent_proc_is_scheduled_but_never_run]: http://www.rubydoc.info/github/christopheraue/m-ruby-concurrently/file/guides/Troubleshooting.md#A_concurrent_proc_is_scheduled_but_never_run
+[Concurrently::Proc]: http://www.rubydoc.info/github/christopheraue/m-ruby-concurrently/Concurrently/Proc
+[Concurrently::Proc#call]: http://www.rubydoc.info/github/christopheraue/m-ruby-concurrently/Concurrently/Proc#call-instance_method
+[Concurrently::Proc#call_nonblock]: http://www.rubydoc.info/github/christopheraue/m-ruby-concurrently/Concurrently/Proc#call_nonblock-instance_method
+[Concurrently::Proc#call_detached]: http://www.rubydoc.info/github/christopheraue/m-ruby-concurrently/Concurrently/Proc#call_detached-instance_method
+[Concurrently::Proc#call_and_forget]: http://www.rubydoc.info/github/christopheraue/m-ruby-concurrently/Concurrently/Proc#call_and_forget-instance_method
+[Kernel#concurrently]: http://www.rubydoc.info/github/christopheraue/m-ruby-concurrently/Kernel#concurrently-instance_method
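The batched `while elapsed_seconds < 1` harness used by the benchmarks above can be sketched in plain Ruby. This is an illustrative reconstruction, not the gem's actual perf stage; the timing code and variable names are assumptions:

```ruby
# Illustrative reconstruction of the batched benchmark harness (not the
# gem's actual perf stage). GC is disabled, as stated in the guide.
GC.disable

test_proc = proc {}
batch = Array.new(100)
executions = 0

started_at = Process.clock_gettime(Process::CLOCK_MONOTONIC)
elapsed_seconds = 0.0

# run the proc in batches of 100 so the loop bookkeeping stays cheap
while elapsed_seconds < 1
  batch.each { test_proc.call }
  executions += batch.size
  elapsed_seconds = Process.clock_gettime(Process::CLOCK_MONOTONIC) - started_at
end

puts "proc.call: #{executions} executions in #{'%.4f' % elapsed_seconds} seconds"
```

Substituting `concurrent_proc{}` and the other call methods for `test_proc` would reproduce the remaining rows, assuming the `concurrently` gem is loaded.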
@@ -3,7 +3,7 @@
 To get an idea about the inner workings of Concurrently have a look at the
 [Flow of control][] section in the overview.
 
-## A concurrent proc is scheduled but never run
+## An evaluation is scheduled but never run
 
 Consider the following script:
 
@@ -23,11 +23,11 @@ Running it will only print:
 Unicorns!
 ```
 
-`concurrently{}` is a shortcut for `concurrent_proc{}.call_and_forget`
+`concurrently{}` is a shortcut for `concurrent_proc{}.call_detached`
 which in turn does not evaluate its code right away but schedules it to run
-during the next iteration of the event loop. But, since the root evaluation did
-not await anything the event loop has never been entered and the evaluation of
-the concurrent proc has never been started.
+during the next iteration of the event loop. But, since the main evaluation did
+not await anything the event loop has never been entered and the concurrent
+evaluation has never been started.
 
 A more subtle variation of this behavior occurs in the following scenario:
 
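The scheduled-but-never-run behavior above can be modeled without the gem at all: scheduling merely enqueues a block, and only entering the event loop actually runs it. In this plain-Ruby analogy the queue and its drain are illustrative names, not Concurrently's API:

```ruby
# Plain-Ruby model of "scheduled but never run": scheduling only enqueues
# the block; it runs when the "event loop" (a queue drain here) is entered.
scheduled = []
output = []

# like `concurrently { ... }`: only schedule the block
scheduled << proc { output << "Unicorns!" }

# nothing has run yet, just like a script that exits without awaiting
puts output.inspect  # => []

# awaiting something enters the event loop, which starts scheduled blocks
scheduled.shift.call until scheduled.empty?
puts output.inspect  # => ["Unicorns!"]
```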
@@ -49,10 +49,10 @@ Running it will also only print:
 Unicorns!
 ```
 
-This time, the root evaluation does await something, namely the end of a one
+This time, the main evaluation does await something, namely the end of a one
 second time frame. Because of this, the evaluation of the `concurrently` block
 is indeed started and immediately waits for two seconds. After one second the
-root evaluation is resumed and exits. The `concurrently` block is never awoken
+main evaluation is resumed and exits. The `concurrently` block is never awoken
 again from its now eternal beauty sleep.
 
 ## A call is blocking the entire execution.
@@ -83,8 +83,8 @@ r.await_readable
 r.readpartial 32
 ```
 
-we suspend the root evaluation, switch to the event loop which runs the
-`concurrently` block and once there is something to read from `r` the root
+we suspend the main evaluation, switch to the event loop which runs the
+`concurrently` block and once there is something to read from `r` the main
 evaluation is resumed.
 
 This approach is not perfect. It is not very efficient if we do not need to
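The wait-then-read pattern above can be approximated in plain Ruby, with `IO.select` standing in for `r.await_readable`. The analogy is loose: `IO.select` blocks the whole thread, whereas `await_readable` suspends only the current evaluation and lets the event loop run others:

```ruby
# Plain-Ruby approximation: wait for readability first, then read.
# IO.select stands in for r.await_readable (but blocks the whole thread).
r, w = IO.pipe
w.write 'Unicorns!'

IO.select [r]               # returns once r is readable
message = r.readpartial 32  # guaranteed not to block now
puts message                # => Unicorns!

r.close
w.close
```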
@@ -105,28 +105,28 @@ end
 
 ## The event loop is jammed by too many or too expensive evaluations
 
-Let's talk about a concurrent proc with an infinite loop:
+Let's talk about a concurrent evaluation with an infinite loop:
 
 ```ruby
-evaluation = concurrent_proc do
+evaluation = concurrently do
   loop do
     puts "To infinity! And beyond!"
   end
-end.call_detached
+end
 
 concurrently do
   evaluation.conclude_to :cancelled
 end
 ```
 
-When the concurrent proc is scheduled to run it runs and runs and runs and
-never finishes. The event loop is never entered again and the other concurrent
-proc concluding the evaluation is never started.
+When the loop evaluation is scheduled to run it runs and runs and runs and
+never finishes. The event loop is never entered again and the other evaluation
+concluding the evaluation is never started.
 
 A less extreme example is something like:
 
 ```ruby
-concurrent_proc do
+concurrently do
   loop do
     wait 0.1
     puts "timer triggered at: #{Time.now.strftime('%H:%M:%S.%L')}"
@@ -134,7 +134,7 @@ concurrent_proc do
       sleep 1 # defers the entire event loop
     end
   end
-end.call
+end.await_result
 
 # => timer triggered at: 16:08:17.704
 # => timer triggered at: 16:08:18.705
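The difference between the cooperative `wait` and the blocking `sleep` above can be seen with the plain fibers Concurrently builds on. This is a simplified model in which `Fiber.yield` plays the role of handing control back to the event loop:

```ruby
# Simplified fiber model of the loop above: `wait` hands control back to
# the event loop (modeled by Fiber.yield); `sleep` would block it entirely.
ticks = []

timer = Fiber.new do
  3.times do |i|
    ticks << i
    Fiber.yield  # cooperative pause; a blocking sleep here would jam the loop
  end
end

timer.resume  # runs until the first Fiber.yield; ticks == [0]
timer.resume  # other evaluations could run in between; ticks == [0, 1]
timer.resume  # ticks == [0, 1, 2]

puts ticks.inspect  # => [0, 1, 2]
```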
@@ -175,20 +175,20 @@ managing IOs (e.g. closing them).
 
 ## Errors tear down the event loop
 
-Every concurrent proc rescues the following errors happening during its
-evaluation: `NoMemoryError`, `ScriptError`, `SecurityError`, `StandardError`
-and `SystemStackError`. These are all errors that should not have an immediate
-influence on other evaluations or the application as a whole. They will not
-leak to the event loop and will not tear it down.
+Every evaluation rescues the following errors: `NoMemoryError`, `ScriptError`,
+`SecurityError`, `StandardError` and `SystemStackError`. These are all errors
+that should not have an immediate influence on other evaluations or the
+application as a whole. They will not leak to the event loop and will not tear
+it down.
 
-All other errors happening inside a concurrent proc *will* tear down the
-event loop. These error types are: `SignalException`, `SystemExit` and the
-general `Exception`. In such a case the event loop exits by raising a
-[Concurrently::Error][].
+All other errors happening during an evaluation *will* tear down the event
+loop. These error types are: `SignalException`, `SystemExit` and the general
+`Exception`. In such a case the event loop exits by re-raising the causing
+error.
 
 If your application rescues the error when the event loop is teared down
 and continues running (irb does this, for example) it will do so with a
-[reinitialized event loop] [Concurrently::EventLoop#reinitialize!].
+[reinitialized event loop][Concurrently::EventLoop#reinitialize!].
 
 ## Using Plain Fibers
 
@@ -196,7 +196,7 @@ In principle, you can safely use plain ruby fibers alongside concurrent procs.
 Just make sure you are exclusively operating on these fibers to not
 accidentally interfere with the fibers managed by Concurrently. Be
 especially careful with `Fiber.yield` and `Fiber.current` inside a concurrent
-proc.
+evaluation.
 
 ## Fiber-local variables are treated as thread-local