concurrently 1.0.1 → 1.1.0
- checksums.yaml +4 -4
- data/.gitignore +1 -1
- data/.travis.yml +8 -3
- data/README.md +70 -60
- data/RELEASE_NOTES.md +16 -1
- data/Rakefile +98 -14
- data/concurrently.gemspec +16 -12
- data/ext/mruby/io.rb +1 -1
- data/guides/Overview.md +191 -66
- data/guides/Performance.md +300 -102
- data/guides/Troubleshooting.md +28 -28
- data/lib/Ruby/concurrently/proc/evaluation/error.rb +10 -0
- data/lib/all/concurrently/error.rb +0 -3
- data/lib/all/concurrently/evaluation.rb +8 -12
- data/lib/all/concurrently/event_loop.rb +1 -1
- data/lib/all/concurrently/event_loop/fiber.rb +3 -3
- data/lib/all/concurrently/event_loop/io_selector.rb +1 -1
- data/lib/all/concurrently/event_loop/run_queue.rb +29 -17
- data/lib/all/concurrently/proc.rb +13 -13
- data/lib/all/concurrently/proc/evaluation.rb +29 -29
- data/lib/all/concurrently/proc/evaluation/error.rb +13 -0
- data/lib/all/concurrently/proc/fiber.rb +3 -6
- data/lib/all/concurrently/version.rb +1 -1
- data/lib/all/io.rb +118 -41
- data/lib/all/kernel.rb +82 -29
- data/lib/mruby/concurrently/event_loop/io_selector.rb +46 -0
- data/lib/mruby/kernel.rb +1 -1
- data/mrbgem.rake +28 -17
- data/mruby_builds/build_config.rb +67 -0
- data/perf/Ruby/stage.rb +23 -0
- data/perf/benchmark_call_methods.rb +32 -0
- data/perf/benchmark_call_methods_waiting.rb +52 -0
- data/perf/benchmark_wait_methods.rb +38 -0
- data/perf/mruby/stage.rb +8 -0
- data/perf/profile_await_readable.rb +10 -0
- data/perf/{concurrent_proc_call.rb → profile_call.rb} +1 -5
- data/perf/{concurrent_proc_call_and_forget.rb → profile_call_and_forget.rb} +1 -5
- data/perf/{concurrent_proc_call_detached.rb → profile_call_detached.rb} +1 -5
- data/perf/{concurrent_proc_call_nonblock.rb → profile_call_nonblock.rb} +1 -5
- data/perf/profile_wait.rb +7 -0
- data/perf/stage.rb +47 -0
- data/perf/stage/benchmark.rb +47 -0
- data/perf/stage/benchmark/code_gen.rb +29 -0
- data/perf/stage/benchmark/code_gen/batch.rb +41 -0
- data/perf/stage/benchmark/code_gen/single.rb +38 -0
- metadata +27 -23
- data/ext/mruby/array.rb +0 -19
- data/lib/Ruby/concurrently/error.rb +0 -4
- data/perf/_shared/stage.rb +0 -33
- data/perf/concurrent_proc_calls.rb +0 -49
- data/perf/concurrent_proc_calls_awaiting.rb +0 -48
checksums.yaml
CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA1:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: a2433cf85e8e22dae4f8753c3f0bf8da68d0a307
+  data.tar.gz: 5fc75f82d8f23b989b9f73171cb11aa8dd7dc6e7
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 6a69bfe30abb26229e12195fc514c1a2766c0cf1531a5c72a4bf872ef358deaa5e28eae06eae537786f82a159a0071695913b93cc367989d30b90cc489b18879
+  data.tar.gz: 28990344bb12e54ca9318f7396753e2a929cdbf80a8942ce2c00a2742b4e82a003c4e7c3db5e1f0001658bcbb815fd821cd3b1fc0bdbd8b1df0fddd2c30155cc
data/.gitignore
CHANGED
data/.travis.yml
CHANGED
@@ -1,16 +1,21 @@
 language: ruby
 
-script: rake
+script: rake test:$RUBY
 sudo: false
 
 rvm:
   - 2.4.1
   - 2.3.4
   - 2.2.7
+  - ruby-head
 
 env:
-  -
+  - RUBY=ruby
 
 matrix:
   include:
-    - env:
+    - env: RUBY=mruby[1.3.0]
+    - env: RUBY=mruby[master]
+  allow_failures:
+    - rvm: ruby-head
+    - env: RUBY=mruby[master]
data/README.md
CHANGED
@@ -2,84 +2,86 @@
 
 [![Build Status](https://secure.travis-ci.org/christopheraue/m-ruby-concurrently.svg?branch=master)](http://travis-ci.org/christopheraue/m-ruby-concurrently)
 
-Concurrently is a concurrency framework for Ruby and mruby
-code can be
+Concurrently is a concurrency framework for Ruby and mruby built upon
+fibers. With it code can be evaluated independently in its own execution
+context similar to a thread. Execution contexts are called *evaluations* in
+Concurrently and are created with [Kernel#concurrently][]:
+
+```ruby
+hello = concurrently do
+  wait 0.2 # seconds
+  "hello"
+end
+
+world = concurrently do
+  wait 0.1 # seconds
+  "world"
+end
+
+puts "#{hello.await_result} #{world.await_result}"
+```
+
+In this example we have three evaluations: The root evaluation and two more
+concurrent evaluations started by said root evaluation. The root evaluation
+waits until both concurrent evaluations were concluded and then prints "hello
+world".
+
+
+## Synchronization with events
+
+Evaluations can be synchronized with certain events by waiting for them. These
+events are:
+
+* an elapsed time period ([Kernel#wait][]),
+* readability and writability of IO ([IO#await_readable][],
+  [IO#await_writable][]) and
+* the result of another evaluation ([Concurrently::Proc::Evaluation#await_result][]).
+
+Since evaluations run independently they are not blocking other evaluations
+while waiting.
+
+
+## Concurrent I/O
+
+When doing I/O it is important to do it **non-blocking**. If the IO object is
+not ready use [IO#await_readable][] and [IO#await_writable][] to await
+readiness.
+
+For more about non-blocking I/O, see the core ruby docs for
+[IO#read_nonblock][] and [IO#write_nonblock][].
 
 This is a little server reading from an IO and printing the received messages:
 
 ```ruby
+# Let's start with creating a pipe to connect client and server
+r,w = IO.pipe
+
+# Server:
+# We let the server code run concurrently so it runs independently. It reads
+# from the pipe non-blocking and awaits readability if the pipe is not readable.
+concurrently do
   while true
     begin
-      puts
+      puts r.read_nonblock 32
     rescue IO::WaitReadable
+      r.await_readable
       retry
     end
   end
 end
-```
-
-Now, we create a pipe and start the server with the read end of it:
-
-```ruby
-r,w = IO.pipe
-server.call_detached r
-```
-
-Finally, we write messages to the write end of the pipe every 0.5 seconds:
 
+# Client:
+# The client writes to the pipe every 0.5 seconds
 puts "#{Time.now.strftime('%H:%M:%S.%L')} (Start time)"
 while true
   wait 0.5
   w.write Time.now.strftime('%H:%M:%S.%L')
 end
 ```
 
-The
+The root evaluation is effectively blocked by waiting or writing messages.
+But since the server runs concurrently it is not affected by this and happily
+prints its received messages.
 
 This is the output:
 
@@ -105,7 +107,7 @@ This is the output:
 ## Supported Ruby Versions
 
 * Ruby 2.2.7+
-* mruby 1.3+
+* mruby 1.3+
 
 
 ## Development
@@ -121,6 +123,14 @@ Concurrently is licensed under the Apache License, Version 2.0. Please see the
 file called LICENSE.
 
 
+[Kernel#concurrently]: http://www.rubydoc.info/github/christopheraue/m-ruby-concurrently/Kernel#concurrently-instance_method
+[Kernel#wait]: http://www.rubydoc.info/github/christopheraue/m-ruby-concurrently/Kernel#wait-instance_method
+[IO#await_readable]: http://www.rubydoc.info/github/christopheraue/m-ruby-concurrently/IO#await_readable-instance_method
+[IO#await_writable]: http://www.rubydoc.info/github/christopheraue/m-ruby-concurrently/IO#await_writable-instance_method
+[Concurrently::Proc::Evaluation#await_result]: http://www.rubydoc.info/github/christopheraue/m-ruby-concurrently/Concurrently/Proc/Evaluation#await_result-instance_method
+[IO#read_nonblock]: https://ruby-doc.org/core/IO.html#method-i-read_nonblock
+[IO#write_nonblock]: https://ruby-doc.org/core/IO.html#method-i-write_nonblock
+
 [installation]: http://www.rubydoc.info/github/christopheraue/m-ruby-concurrently/file/guides/Installation.md
 [overview]: http://www.rubydoc.info/github/christopheraue/m-ruby-concurrently/file/guides/Overview.md
 [documentation]: http://www.rubydoc.info/github/christopheraue/m-ruby-concurrently/index
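The README's Concurrent I/O section only demonstrates the read side of the non-blocking pattern; the write side follows the same shape with [IO#await_writable]. A hedged sketch, not taken from the gem's docs (`w` is the write end of a pipe as in the example above):

```ruby
# Sketch only: write-side counterpart to the server example.
# Writing to a full pipe raises IO::WaitWritable; await_writable suspends
# just this evaluation until the pipe can accept data again.
begin
  w.write_nonblock "a message"
rescue IO::WaitWritable
  w.await_writable
  retry
end
```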
data/RELEASE_NOTES.md
CHANGED
@@ -1,5 +1,20 @@
 # Release Notes
-
+
+## 1.1.0 (2017-07-10)
+
+### Improvements
+* Improved error reporting
+* Improved benchmarks and profiling and made them work for mruby
+* Improved documentation
+* Improved overall performance
+
+### Extended [IO](http://www.rubydoc.info/github/christopheraue/m-ruby-concurrently/IO) interface
+* [#await_read](http://www.rubydoc.info/github/christopheraue/m-ruby-concurrently/IO#await_read-instance_method)
+* [#await_written](http://www.rubydoc.info/github/christopheraue/m-ruby-concurrently/IO#await_written-instance_method)
+
+### Extended [Kernel](http://www.rubydoc.info/github/christopheraue/m-ruby-concurrently/Kernel) interface
+* [#await_fastest](http://www.rubydoc.info/github/christopheraue/m-ruby-concurrently/Kernel#await_fastest-instance_method)
+
 ## 1.0.0 (2017-06-26)
 
 ### Added Support for
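The 1.1.0 additions can be combined into one small, hedged sketch. The `require 'concurrently'` entry point is assumed from the gem name; the calls themselves mirror the usage shown in the Overview guide further down:

```ruby
require 'concurrently'  # assumed entry point, matching the gem name

r, w = IO.pipe

concurrently do
  wait 1
  w.await_written "Continue!"   # new in 1.1.0: write and wait until written
end

puts r.await_read 1024          # new in 1.1.0: read and wait for input

fast = concurrently { wait 0.1; 'fast' }
slow = concurrently { wait 1.0; 'slow' }
puts await_fastest(fast, slow).await_result  # new in 1.1.0 => "fast"
```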
data/Rakefile
CHANGED
@@ -1,28 +1,112 @@
 Dir.chdir File.dirname __FILE__
 
+perf_dir = File.expand_path "perf"
+
+# Ruby
+ruby = {
+  test: "rspec",
+  benchmark: "ruby -Iperf/Ruby -rstage",
+  profile: "ruby -Iperf/Ruby -rstage" }
+
+mruby_dir = File.expand_path "mruby_builds"
+mruby = {
+  src: "#{mruby_dir}/_source",
+  cfg: "#{mruby_dir}/build_config.rb",
+  test: "#{mruby_dir}/test/bin/mrbtest",
+  benchmark: "#{mruby_dir}/benchmark/bin/mruby",
+  profile: "#{mruby_dir}/profile/bin/mruby" }
+
+namespace :test do
+  desc "Run the Ruby test suite"
+  task :ruby do
+    sh ruby[:test]
+  end
+
+  desc "Run the mruby test suite"
+  task :mruby, [:reference] => "mruby:build" do
+    sh mruby[:test]
   end
 end
 
+desc "Run the Ruby and mruby test suites"
+task test: %w(test:ruby test:mruby)
+
+namespace :benchmark do
+  desc "Run the benchmark #{perf_dir}/benchmark_[name].rb with Ruby"
+  task :ruby, [:name, :batch_size] do |t, args|
+    args.with_defaults name: "wait_methods", batch_size: 100
+    file = "#{perf_dir}/benchmark_#{args.name}.rb"
+    sh "#{ruby[:benchmark]} #{file} #{args.batch_size}"
   end
 
+  desc "Run the benchmark #{perf_dir}/benchmark_[name].rb with mruby"
+  task :mruby, [:name, :batch_size] => "mruby:build" do |t, args|
+    args.with_defaults name: "wait_methods", batch_size: 100
+    file = "#{perf_dir}/benchmark_#{args.name}.rb"
+    sh "#{mruby[:benchmark]} #{file} #{args.batch_size}"
   end
+end
+
+desc "Run the benchmark #{perf_dir}/benchmark_[name].rb for Ruby and mruby"
+task :benchmark, [:name, :batch_size] => "mruby:build" do |t, args|
+  args.with_defaults name: "wait_methods", batch_size: 100
+  file = "#{perf_dir}/benchmark_#{args.name}.rb"
+  sh "#{ruby[:benchmark]} #{file} #{args.batch_size}", verbose: false
+  sh "#{mruby[:benchmark]} #{file} #{args.batch_size} skip_header", verbose: false
+end
 
+namespace :profile do
+  desc "Create a code profile by running #{perf_dir}/profile_[name].rb with Ruby"
+  task :ruby, [:name] do |t, args|
+    args.with_defaults name: "call"
+    file = "#{perf_dir}/profile_#{args.name}.rb"
+    sh "#{ruby[:profile]} #{file}"
+  end
+
+  desc "Create a code profile by running #{perf_dir}/profile_[name].rb with mruby"
+  task :mruby, [:name] => "mruby:build" do |t, args|
+    args.with_defaults name: "call"
+    file = "#{perf_dir}/profile_#{args.name}.rb"
+    sh "#{mruby[:profile]} #{file}"
   end
 end
 
+namespace :mruby do
+  file mruby[:src] do
+    sh "git clone --depth=1 git://github.com/mruby/mruby.git #{mruby[:src]}"
+  end
+
+  desc "Checkout a tag or commit of the mruby source. Executes: git checkout reference"
+  task :checkout, [:reference] => mruby[:src] do |t, args|
+    args.with_defaults reference: 'master'
+    `cd #{mruby[:src]} && git fetch --tags`
+    current_ref = `cd #{mruby[:src]} && git rev-parse HEAD`
+    checkout_ref = `cd #{mruby[:src]} && git rev-parse #{args.reference}`
+    if checkout_ref != current_ref
+      Rake::Task['mruby:clean'].invoke
+      sh "cd #{mruby[:src]} && git checkout #{args.reference}"
+    end
+  end
+
+  desc "Build mruby"
+  task :build, [:reference] => :checkout do
+    sh "cd #{mruby[:src]} && MRUBY_CONFIG=#{mruby[:cfg]} rake"
+  end
+
+  desc "Clean the mruby build"
+  task clean: mruby[:src] do
+    sh "cd #{mruby[:src]} && MRUBY_CONFIG=#{mruby[:cfg]} rake deep_clean"
+  end
+
+  desc "Update the source of mruby"
+  task pull: :clean do
+    sh "cd #{mruby[:src]} && git pull"
+  end
+
+  desc "Delete the mruby source"
+  task delete: mruby[:src] do
+    sh "rm -rf #{mruby[:src]}"
+  end
+end
 
 task default: :test
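As a usage note, the tasks defined above are invoked with the names and arguments visible in the Rakefile: for example `rake test:ruby` runs the RSpec suite, `rake test:mruby` clones and builds mruby before running `mrbtest`, `rake benchmark:ruby[wait_methods,100]` runs `perf/benchmark_wait_methods.rb` with a batch size of 100, and `rake profile:mruby[call]` profiles `perf/profile_call.rb` under mruby. The exact invocations are illustrative; only the task, argument and file names come from the diff.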
data/concurrently.gemspec
CHANGED
@@ -4,19 +4,23 @@ Gem::Specification.new do |spec|
   spec.name = "concurrently"
   spec.version = Concurrently::VERSION
   spec.summary = %q{A concurrency framework based on fibers}
-  spec.description =
-Concurrently is a concurrency framework for Ruby and mruby
-code can be
+  spec.description = <<'DESC'
+Concurrently is a concurrency framework for Ruby and mruby based on
+fibers. With it code can be evaluated independently in its own execution
+context similar to a thread:
 
+  hello = concurrently do
+    wait 0.2 # seconds
+    "hello"
+  end
+
+  world = concurrently do
+    wait 0.1 # seconds
+    "world"
+  end
+
+  puts "#{hello.await_result} #{world.await_result}"
+DESC
 
   spec.homepage = "https://github.com/christopheraue/m-ruby-concurrently"
   spec.license = "Apache-2.0"
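The gemspec change only swaps the plain description string for a heredoc containing the README example; consuming the gem is unaffected. A hypothetical Gemfile entry (the version constraint is an assumption, not part of the diff):

```ruby
# Gemfile
gem 'concurrently', '~> 1.1'
```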
data/ext/mruby/io.rb
CHANGED
@@ -29,7 +29,7 @@ class IO
   #
   # @see https://ruby-doc.org/core-1.9.3/IO.html#method-i-read_nonblock
   #   Ruby's documentation for IO#read_nonblock
-  def read_nonblock(maxlen, outbuf =
+  def read_nonblock(maxlen, outbuf = '')
     if IO.select [self], nil, nil, 0
       sysread(maxlen, outbuf)
     else
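A minimal sketch of how this mruby shim behaves; the pipe setup is illustrative, and the not-readable branch is cut off in the hunk but, judging from the README example, it raises IO::WaitReadable:

```ruby
r, w = IO.pipe
w.write "ping"

# The shim checks readability with IO.select and reads via sysread into the
# (now defaulted) output buffer, so the buffer argument can be omitted.
puts r.read_nonblock(32)   # => "ping"

begin
  r.read_nonblock(32)      # nothing left to read
rescue IO::WaitReadable
  puts "would wait for readability here"
end
```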
data/guides/Overview.md
CHANGED
@@ -1,64 +1,123 @@
 # An Overview of Concurrently
 
-examples about a topic follow the
+The [README][] already introduced the basic interface of Concurrently.
+This document explores the underlying concepts and explains how all parts work
+together. For even more details and examples about a specific topic follow the
+interspersed links to the [API documentation][].
+
+Let's start with the concept of an *evaluation*.
+
 
 ## Evaluations
 
-An evaluation is an
-providing access to its future result or offering the ability to inject a
-result manually. Once the evaluation has a result it is *concluded*.
+An evaluation is an independent execution context. It is similar to a thread or
+a fiber since it can be suspended and resumed independently from other
+evaluations.
 
 Every ruby program already has an implicit [root evaluation][Concurrently::Evaluation]
-running.
+running. Unless you explicitly tell your program to evaluate code concurrently
+it is evaluated in the root evaluation. The root evaluation runs as long as
+your program is running. Thus it is never concluded and its result cannot be
+awaited.
+
+Evaluating code with `concurrently(&block)` is done in its own type of
+[evaluation][Concurrently::Proc::Evaluation]. Contrary to the root evaluation,
+this evaluation has an end with a result. Next to its similarity to a thread
+resp. fiber it is also similar to a future or a promise. It provides access
+to its (future) result and offers the ability to shortcut its execution by
+manually injecting a result. Once the evaluation has a result it is *concluded*.
+
+```ruby
+# This is the root evaluation
+
+concurrently do
+  # This is a concurrent evaluation
+end
+
+concurrently do
+  # This is another concurrent evaluation
+end
+```
+
+
+## Concurrent Evaluation of Code
+
+Evaluating a piece of code concurrently involves three distinct phases:
 
+               1             2    3
+evaluation0 ---+-------------+--->
+               |             |
+evaluation1    `--+-----+----´
+                  |     |
+                  t     io
 
+1. Invocation: evaluation0 kicks off evaluation1 to process the code in. It
+   does it asynchronously by not waiting for evaluation1 to finish.
+2. Computation: evaluation0 and evaluation1 run independently from each
+   other. evaluation1 synchronizes itself with other events (e.g. with time or
+   I/O).
+3. Synchronization: If evaluation0 is interested in the result of evaluation1
+   it has to wait for it. This synchronizes evaluation1 with evaluation0 again.
+   If evaluation1 has not finished yet evaluation0 blocks until it has.
 
+Every tool Concurrently offers is linked to one of these phases.
+
+
+### Invocation
+
+To start evaluating code concurrently use [Kernel#concurrently][]:
 
 ```ruby
+evaluation = concurrently do
   # code to run concurrently
 end
 ```
 
-[Kernel#concurrently] is a shortcut for
+It returns immediately with a handle to the started evaluation. The evaluation
+will be processed in the background.
+
+[Kernel#concurrently][] is actually a shortcut for
+
 ```ruby
+evaluation = concurrent_proc do
   # code to run concurrently
-end
+end.call_detached
+```
+
+In general, you do not need to work with concurrent procs directly. Just use
+[Kernel#concurrently][]. But concurrent procs give you a finer control over
+how the code is evaluated. This comes in handy for optimizing performance.
 
-# is equivalent to:
 
+#### Concurrent Procs
+
+The [concurrent proc][Concurrently::Proc] looks and feels just like a regular
+proc. In fact, [Concurrently::Proc][] inherits from `Proc`. It is created with
+[Kernel#concurrent_proc][]:
+
+```ruby
+conproc = concurrent_proc do
   # code to run concurrently
-end
+end
 ```
 
+Concurrent procs can be used the same way regular procs are. For example, they
+can be passed around or called multiple times with different arguments.
 
+When called a concurrent proc kicks off an evaluation of its code. A concurrent
+proc has four methods to call it. Depending on which method is used the code
+is evaluated slightly differently.
 
-The first two evaluate the concurrent proc immediately in the
+The first two methods evaluate the concurrent proc immediately in the
+foreground:
 
-* [Concurrently::Proc#call][] blocks the
-* [Concurrently::Proc#call_nonblock][] will not block the
+* [Concurrently::Proc#call][] blocks the evaluation it has been called from
+  until its own evaluation is concluded. Then it returns the result. This
+  behaves just like `Proc#call`.
+* [Concurrently::Proc#call_nonblock][] will not block the evaluation it has
+  been called from if it needs to wait. Instead, it immediately returns its
+  own [evaluation][Concurrently::Proc::Evaluation]. If it can be evaluated
+  without waiting it returns the result.
 
 The other two schedule the concurrent proc to be run in the background. The
 evaluation is not started right away but is deferred until the next
@@ -68,15 +127,30 @@ iteration of the event loop:
 * [Concurrently::Proc#call_and_forget][] does not give access to the evaluation
   and returns `nil`.
 
+The different methods to call a concurrent proc have an impact on the execution
+speed. In general, [Concurrently::Proc#call_detached][] represents a good
+middle ground between ease of use and performance. For an in-depth analysis of
+the performance implications of each call method have a look at the
+[performance documentation][performance]. It offers a guide what to use if
+every cpu cycle counts.
+
+
+### Computation
+
+In the computation phase the evaluation works through its code. While doing so
+it can synchronize itself with different events.
 
+All synchronization methods are named `await_*`. As usual, there is an exception
+to the rule: Waiting an amount of time is done with [Kernel#wait][].
 
+#### Synchronization with Time
+
+To defer the current evaluation for a fixed amount of time use [Kernel#wait][].
 
 * Doing something after X seconds:
 
   ```ruby
+  concurrently do
     wait X
     do_it!
   end
@@ -85,7 +159,7 @@ To defer the current evaluation for a fixed time use [Kernel#wait][].
 * Doing something every X seconds. This is a timer:
 
   ```ruby
+  concurrently do
     loop do
       wait X
       do_it!
@@ -96,7 +170,7 @@ To defer the current evaluation for a fixed time use [Kernel#wait][].
 * Doing something after X seconds, every Y seconds, Z times:
 
   ```ruby
+  concurrently do
     wait X
     Z.times do
       do_it!
@@ -105,33 +179,33 @@ To defer the current evaluation for a fixed time use [Kernel#wait][].
   end
   ```
 
+* Doing something at a given point in time:
 
+  ```ruby
+  concurrently do
+    time = Time.new(2042,7,10, 16,13,26) # 10 July 2042, 16:13:26
+    wait (time-Time.now).to_f
+    do_it!
+  end
+  ```
 
-and
+#### Synchronization with I/O
+
+To read and write from an IO and wait until the operation is complete without
+blocking other evaluations use [IO#await_read][] and [IO#await_written][].
 
 ```ruby
 r,w = IO.pipe
 
 concurrently do
   wait 1
-  w.
-end
-
-concurrently do
-  # This runs while r awaits readability.
+  w.await_written "Continue!"
 end
 
-concurrently do
-  # This runs while r awaits readability.
-end
-
-# Read from r. It will take one second until there is input.
-message = r.concurrently_read 1024
 
+# Read from r. It will take one second until there is input because r must
+# wait until the string has been written to w.
+r.await_read 1024 # prints "Continue!"
 
 r.close
 w.close
@@ -157,23 +231,69 @@ end
 ```
 
 
+#### Synchronization with Results of Evaluations
+
+Results of other evaluations can be waited for with
+[Concurrently::Proc::Evaluation#await_result][]:
+
+```ruby
+mailbox = concurrently do
+  wait 1
+  'message'
+end
+
+forwarder = concurrently do
+  "FW: #{mailbox.await_result}"
+end
+
+# It will take one second until there is a message in the mailbox
+puts forwarder.await_result # prints "FW: message"
+```
+
+To wait for the fastest in a list of evaluations use
+[Kernel#await_fastest][]:
+
+```ruby
+mailbox1 = concurrently do
+  wait 1
+  'slow message'
+end
+
+mailbox2 = concurrently do
+  wait 0.5
+  'fast message'
+end
+
+mailbox = await_fastest(mailbox1, mailbox2)
+mailbox.await_result # => "fast message"
+```
+
+
+### Synchronization
+
+Synchronizing the invoking evaluation with the result of the invoked one is
+done as described in the section about [synchronizing results of evaluations]
+(#Synchronization_with_Results_of_Evaluations).
+
+
+## About the Event Loop
 
 To understand when code is run (and when it is not) it is necessary to know
 a little bit more about the way Concurrently works.
 
-Concurrently lets every
-These event loops
-ordered by the time they are supposed to run. The run
+Concurrently lets every thread run an [event loop][Concurrently::EventLoop].
+These event loops work silently in the background and are responsible for
+watching IOs and scheduling evaluations. Evaluations are scheduled by putting
+them into a run queue ordered by the time they are supposed to run. The run
+queue is then worked off sequentially up to the point corresponding to the
+current time. If two evaluations are scheduled to run at the same time the
 evaluation scheduled first is run first.
 
 Event loops *do not* run parallel to your application's code at the exact same
 time (e.g. on another cpu core). Instead, your code yields to them if it
 waits for something: **The event loop is (and only is) entered if your code
-calls
+calls one of the synchronization methods.** Later, when your code can be
+resumed the event loop schedules the corresponding evaluation to run again.
 
 Keep in mind, that an event loop **must never be interrupted, blocked or
 overloaded.** A healthy event loop is one that can respond to new events
@@ -317,6 +437,9 @@ Keep in mind, that to focus on the use of Concurrently the example does not
 take error handling for I/O, properly closing all connections and other details
 into account.
 
+[README]: http://www.rubydoc.info/github/christopheraue/m-ruby-concurrently/file/README.md
+[API documentation]: http://www.rubydoc.info/github/christopheraue/m-ruby-concurrently/index
+[performance]: http://www.rubydoc.info/github/christopheraue/m-ruby-concurrently/file/guides/Performance.md
 [Concurrently::Evaluation]: http://www.rubydoc.info/github/christopheraue/m-ruby-concurrently/Concurrently/Evaluation
 [Concurrently::Proc]: http://www.rubydoc.info/github/christopheraue/m-ruby-concurrently/Concurrently/Proc
 [Concurrently::Proc#call]: http://www.rubydoc.info/github/christopheraue/m-ruby-concurrently/Concurrently/Proc#call-instance_method
@@ -324,12 +447,14 @@ into account.
 [Concurrently::Proc#call_detached]: http://www.rubydoc.info/github/christopheraue/m-ruby-concurrently/Concurrently/Proc#call_detached-instance_method
 [Concurrently::Proc#call_and_forget]: http://www.rubydoc.info/github/christopheraue/m-ruby-concurrently/Concurrently/Proc#call_and_forget-instance_method
 [Concurrently::Proc::Evaluation]: http://www.rubydoc.info/github/christopheraue/m-ruby-concurrently/Concurrently/Proc/Evaluation
+[Concurrently::Proc::Evaluation#await_result]: http://www.rubydoc.info/github/christopheraue/m-ruby-concurrently/Concurrently/Proc/Evaluation#await_result-instance_method
 [Concurrently::EventLoop]: http://www.rubydoc.info/github/christopheraue/m-ruby-concurrently/Concurrently/EventLoop
 [Kernel#concurrent_proc]: http://www.rubydoc.info/github/christopheraue/m-ruby-concurrently/Kernel#concurrent_proc-instance_method
 [Kernel#concurrently]: http://www.rubydoc.info/github/christopheraue/m-ruby-concurrently/Kernel#concurrently-instance_method
 [Kernel#wait]: http://www.rubydoc.info/github/christopheraue/m-ruby-concurrently/Kernel#wait-instance_method
+[Kernel#await_fastest]: http://www.rubydoc.info/github/christopheraue/m-ruby-concurrently/Kernel#await_fastest-instance_method
 [IO#await_readable]: http://www.rubydoc.info/github/christopheraue/m-ruby-concurrently/IO#await_readable-instance_method
 [IO#await_writable]: http://www.rubydoc.info/github/christopheraue/m-ruby-concurrently/IO#await_writable-instance_method
-[IO#
-[IO#
+[IO#await_read]: http://www.rubydoc.info/github/christopheraue/m-ruby-concurrently/IO#await_read-instance_method
+[IO#await_written]: http://www.rubydoc.info/github/christopheraue/m-ruby-concurrently/IO#await_written-instance_method
 [Troubleshooting]: http://www.rubydoc.info/github/christopheraue/m-ruby-concurrently/file/guides/Troubleshooting.md
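The run-queue behaviour described in the Overview (evaluations ordered by their scheduled time, ties resolved in scheduling order) can be illustrated with a small, hedged sketch; the printed order follows from the documented semantics, not from anything in the diff itself:

```ruby
# Two evaluations whose timers expire at the same time: the one scheduled
# first is resumed first once the root evaluation yields to the event loop.
first  = concurrently { wait 0.1; puts "scheduled first" }
second = concurrently { wait 0.1; puts "scheduled second" }

second.await_result
first.await_result
# prints "scheduled first", then "scheduled second"
```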
|