polyphony 0.20 → 0.21

Files changed (48)
  1. checksums.yaml +4 -4
  2. data/.gitbook.yaml +1 -2
  3. data/.rubocop.yml +1 -0
  4. data/CHANGELOG.md +10 -0
  5. data/Gemfile.lock +1 -1
  6. data/README.md +18 -449
  7. data/TODO.md +0 -10
  8. data/docs/README.md +39 -0
  9. data/docs/getting-started/installing.md +28 -0
  10. data/docs/getting-started/tutorial.md +133 -0
  11. data/docs/summary.md +37 -3
  12. data/docs/technical-overview/concurrency.md +47 -0
  13. data/docs/technical-overview/design-principles.md +112 -0
  14. data/docs/technical-overview/exception-handling.md +34 -41
  15. data/docs/technical-overview/extending.md +80 -0
  16. data/docs/technical-overview/faq.md +74 -0
  17. data/docs/technical-overview/fiber-scheduling.md +23 -52
  18. data/docs/user-guide/web-server.md +129 -0
  19. data/examples/core/01-spinning-up-coprocesses.rb +21 -0
  20. data/examples/core/02-awaiting-coprocesses.rb +18 -0
  21. data/examples/core/03-interrupting.rb +34 -0
  22. data/examples/core/04-no-auto-run.rb +18 -0
  23. data/examples/core/mem-usage.rb +34 -0
  24. data/examples/core/spin_error.rb +0 -1
  25. data/examples/core/spin_uncaught_error.rb +0 -1
  26. data/examples/core/wait_for_signal.rb +14 -0
  27. data/examples/http/http_server_graceful.rb +25 -0
  28. data/examples/http/http_server_simple.rb +11 -0
  29. data/examples/interfaces/redis_pubsub_perf.rb +1 -1
  30. data/ext/gyro/async.c +4 -40
  31. data/ext/gyro/child.c +0 -42
  32. data/ext/gyro/io.c +0 -41
  33. data/lib/polyphony/core/coprocess.rb +8 -0
  34. data/lib/polyphony/core/supervisor.rb +29 -10
  35. data/lib/polyphony/extensions/core.rb +1 -1
  36. data/lib/polyphony/http/server/http2.rb +20 -4
  37. data/lib/polyphony/http/server/http2_stream.rb +35 -3
  38. data/lib/polyphony/version.rb +1 -1
  39. data/lib/polyphony.rb +17 -5
  40. data/test/test_async.rb +14 -7
  41. data/test/test_coprocess.rb +42 -12
  42. data/test/test_core.rb +26 -0
  43. data/test/test_io.rb +14 -5
  44. data/test/test_signal.rb +6 -10
  45. metadata +17 -5
  46. data/docs/getting-started/getting-started.md +0 -10
  47. data/examples/core/spin.rb +0 -14
  48. data/examples/core/spin_cancel.rb +0 -17
data/docs/getting-started/tutorial.md CHANGED
@@ -1,4 +1,137 @@
  # Tutorial
 
+ ## Building a Simple Echo Server with Polyphony
 
+ In order to demonstrate how to use Polyphony, let's write an echo server, which
+ accepts TCP connections and sends back whatever it receives from the client.
 
+ We'll start by opening a server socket:
+
+ ```ruby
+ require 'polyphony'
+
+ server = TCPServer.open('127.0.0.1', 1234)
+ puts 'Echoing on port 1234...'
+ ```
+
+ Next, we'll add a loop accepting connections:
+
+ ```ruby
+ while (client = server.accept)
+   handle_client(client)
+ end
+ ```
+
+ The `handle_client` method is almost trivial:
+
+ ```ruby
+ def handle_client(client)
+   while (data = client.gets)
+     client.write('you said: ', data.chomp, "!\n")
+   end
+ rescue Errno::ECONNRESET
+   puts 'Connection reset by client'
+ end
+ ```
+
+ ### Adding Concurrency
+
+ Up until now, we have done nothing about concurrency. In fact, our code will not
+ be able to handle more than one client at a time, because the accept loop cannot
+ continue to run until the call to `#handle_client` returns, and that will not
+ happen as long as the read loop is not done.
+
+ Fortunately, Polyphony makes it super easy to do more than one thing at once.
+ Let's spin up a separate coprocess for each client:
+
+ ```ruby
+ while (client = server.accept)
+   spin { handle_client(client) }
+ end
+ ```
+
+ Now our little program can handle thousands of clients, all with a
+ little sprinkling of `spin`. Let's discuss how this works. The `Kernel#spin`
+ method starts a new coprocess, a separate context of execution based on [Ruby
+ fibers](https://ruby-doc.org/core-2.6.5/Fiber.html). A coprocess may be
+ arbitrarily suspended and resumed, and Polyphony takes advantage of this fact
+ to implement a concurrent execution environment without the use of threads.
+
+ The call to `server.accept` suspends the *root coprocess* until a connection is
+ made, allowing other coprocesses to continue running. Likewise, the call to
+ `client.gets` suspends the *client's coprocess* until incoming data becomes
+ available. All this is handled automatically by Polyphony, and the only hint
+ that our program is concurrent is that innocent call to `spin`.
+
+ Let's consider the advantages of the Polyphony approach:
+
+ - We didn't need to create custom handler classes with callbacks.
+ - We didn't need to use custom classes or APIs for our networking code.
+ - Our code is terse, easy to read and - most importantly - expresses the order of events clearly and without being split across callbacks.
+ - We have a server that can scale to thousands of clients without breaking a sweat.
+
+ ## Handling Inactive Connections
+
+ Now that we have a working concurrent echo server, let's add some bells and
+ whistles. First of all, let's get rid of clients that are not active. We'll do
+ this by using a Polyphony construct called a cancel scope. Cancel scopes define
+ an execution context that can cancel any operation occurring within its scope:
+
+ ```ruby
+ def handle_client(client)
+   Polyphony::CancelScope.new(timeout: 10) do |scope|
+     while (data = client.gets)
+       scope.reset_timeout
+       client.write('you said: ', data.chomp, "!\n")
+     end
+   end
+ rescue Errno::ECONNRESET
+   puts 'Connection reset by client'
+ ensure
+   client.close
+ end
+ ```
+
+ The cancel scope is initialized with a timeout of 10 seconds. Any blocking
+ operation occurring within the cancel scope will be interrupted once 10 seconds
+ have elapsed. In order to keep the connection alive while the client is active,
+ we call `scope.reset_timeout` each time data is received from the client, and
+ thus reset the cancel scope timer.
+
+ In addition, we use an `ensure` block to make sure the client connection is
+ closed, whether or not it was interrupted by the cancel scope timer. The habit
+ of always cleaning up using `ensure` in the face of interruptions is a
+ fundamental element of using Polyphony. It makes your code robust, even in a
+ highly concurrent execution environment.
+
+ Here's the complete source code for our Polyphony-based echo server:
+
+ ```ruby
+ require 'polyphony/auto_run'
+
+ server = TCPServer.open('127.0.0.1', 1234)
+ puts 'Echoing on port 1234...'
+
+ def handle_client(client)
+   Polyphony::CancelScope.new(timeout: 10) do |scope|
+     while (data = client.gets)
+       scope.reset_timeout
+       client.write('you said: ', data.chomp, "!\n")
+     end
+   end
+ rescue Errno::ECONNRESET
+   puts 'Connection reset by client'
+ ensure
+   client.close
+ end
+
+ while (client = server.accept)
+   spin { handle_client(client) }
+ end
+ ```
+
+ ## Learning More
+
+ Polyphony is still new, and the present documentation is far from being
+ complete. For more information, read the [technical overview](technical-overview/concurrency.md)
+ or look at the [included examples](#).
data/docs/summary.md CHANGED
@@ -1,12 +1,46 @@
  # Table of contents
 
- * [Introduction](../README.md)
+ * [Polyphony - Easy Concurrency for Ruby](../README.md)
 
  ## Getting Started
 
- * [Installing](getting-started/getting-started.md)
+ * [Installing](getting-started/installing.md)
  * [Tutorial](getting-started/tutorial.md)
 
  ## Technical overview
 
- * [Error handling](technical-overview/error-handling.md)
+ * [Design Principles](technical-overview/design-principles.md)
+ * [Concurrency the Easy Way](technical-overview/concurrency.md)
+ * [How Fibers are Scheduled](technical-overview/fiber-scheduling.md)
+ * [Exception Handling](technical-overview/exception-handling.md)
+ * [Frequently Asked Questions](technical-overview/faq.md)
+
+ ## Using Polyphony
+
+ * [Coprocesses](#)
+ * [Supervisors](#)
+ * [Cancel Scopes](#)
+ * [Throttlers](#)
+ * [Resource Pools](#)
+ * [Synchronisation](#)
+ * [Web Server](user-guide/web-server.md)
+ * [Websocket Server](#)
+ * [Reactor API](#)
+
+ ## API Reference
+
+ * [Polyphony::CancelScope](#)
+ * [Polyphony::Coprocess](#)
+ * [Gyro](#)
+ * [Gyro::Async](#)
+ * [Gyro::Child](#)
+ * [Gyro::IO](#)
+ * [Gyro::Timer](#)
+ * [Kernel](#)
+ * [Polyphony](#)
+ * [Polyphony::Mutex](#)
+ * [Polyphony::Pulser](#)
+ * [Polyphony::ResourcePool](#)
+ * [Polyphony::Throttler](#)
+
+ ## [Contributing to Polyphony](contributing.md)
data/docs/technical-overview/concurrency.md ADDED
@@ -0,0 +1,47 @@
+ # Concurrency the Easy Way
+
+ Concurrency is a major consideration for modern programmers. Applications and digital platforms are nowadays expected to do multiple things at once: serve multiple clients, process multiple background jobs, talk to multiple external services. Concurrency is the property of our programming environment allowing us to schedule and control multiple ongoing operations.
+
+ Traditionally, concurrency has been achieved by using multiple processes or threads. Both approaches have proven problematic. Processes consume a relatively large amount of memory and are relatively difficult to coordinate. Threads consume less memory than processes, but make it difficult to synchronize access to shared resources, often leading to race conditions and memory corruption. Using threads often necessitates either using special-purpose thread-safe data structures, or otherwise protecting shared resource access using mutexes and critical sections. In addition, dynamic languages such as Ruby and Python synchronize multiple threads using a global interpreter lock, which means thread execution cannot be parallelized. Furthermore, the number of threads and processes on a single system is relatively limited, on the order of several hundred or a few thousand at most.
+
+ Polyphony offers a third way to write concurrent programs, by using a Ruby construct called [fibers](https://ruby-doc.org/core-2.6.5/Fiber.html). Fibers, based on the idea of [coroutines](https://en.wikipedia.org/wiki/Coroutine), provide a way to run a computation that can be suspended and resumed at any moment. For example, a computation waiting for a reply from a database can suspend itself, transferring control to another ongoing computation, and be resumed once the database has sent back its reply. Meanwhile, another computation is started that opens a socket to a remote service, and then suspends itself, waiting for the connection to be established.
+
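To make the suspend-and-resume idea concrete, here is a minimal sketch using nothing but Ruby's core `Fiber` class, with no Polyphony involved:

```ruby
# a computation that suspends itself and is later resumed by its caller
fiber = Fiber.new do
  puts 'waiting for a reply...'
  Fiber.yield                      # suspend; control returns to the caller
  puts 'got the reply, continuing'
end

fiber.resume # runs until Fiber.yield, then suspends
fiber.resume # resumes after Fiber.yield and runs to completion
```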
+ This form of concurrency, called cooperative concurrency \(in contrast to pre-emptive concurrency, like threads and processes\), offers many advantages, especially for applications that are [I/O bound](https://en.wikipedia.org/wiki/I/O_bound). Fibers are very lightweight \(starting at about 20KB\), can be context-switched faster than threads or processes, and literally millions of them can be created on a single system - the only limiting factor is available memory.
+
+ Polyphony takes Ruby's fibers and adds a way to schedule and switch between fibers automatically whenever a blocking operation is started, such as waiting for a TCP connection to be established, or waiting for an I/O object to be readable, or waiting for a timer to elapse. In addition, Polyphony patches the stock Ruby classes to support its concurrency model, letting developers use all of Ruby's stdlib, for example `Net::HTTP` and `Mail`, while reaping the benefits of lightweight, highly performant, fiber-based concurrency.
+
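As a minimal sketch of what this automatic switching looks like in practice (using only the `spin` and `sleep` calls shown elsewhere in these docs), two coprocesses blocking on `sleep` run concurrently rather than one after the other:

```ruby
require 'polyphony/auto_run'

# each coprocess blocks on sleep, but the scheduler switches to the other one,
# so the total run time is roughly one second rather than two
spin { sleep 1; puts 'first done' }
spin { sleep 1; puts 'second done' }
```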
+ Writing concurrent applications using Polyphony's fiber-based concurrency model offers a significant performance advantage. Computational tasks can be broken down into many fine-grained concurrent operations that cost very little in memory and context-switching time. More importantly, this concurrency model lets developers express their ideas in a sequential manner, leading to source code that is easy to read and reason about.
+
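For a rough sense of how cheap these fine-grained operations are, a sketch along the lines of the included `mem-usage.rb` example spins up thousands of coprocesses at once (the counts and intervals here are illustrative, not measured figures):

```ruby
require 'polyphony/auto_run'

# ten thousand concurrent operations, each just sleeping for a random interval;
# with threads or processes this would be prohibitively expensive
10_000.times do
  spin { sleep rand(0.1..1.0) }
end
```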
+ ## Coprocesses - Polyphony's basic unit of concurrency
+
+ While stock Ruby fibers can be used with Polyphony without any problem, the API they provide is very basic, and necessitates writing quite a bit of boilerplate code whenever they need to be synchronized, interrupted or otherwise controlled. For this reason, Polyphony provides entities that encapsulate fibers and provide a richer API, making it easier to compose concurrent applications employing fibers.
+
+ A coprocess can be thought of as a fiber with enhanced powers. It makes sure any exception raised while it's running is [handled correctly](exception-handling.md). It can be interrupted or `await`ed \(just like `Thread#join`\). It provides methods for controlling its execution. Moreover, coprocesses can pass messages between themselves, turning them into autonomous actors in a fine-grained concurrent environment.
+
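A small sketch of the joining behaviour described above, using only `spin` and `#await` (interruption and message passing are covered in the API reference):

```ruby
require 'polyphony'

cp = spin do
  sleep 1
  puts 'coprocess finished'
end

# await blocks the current coprocess until cp terminates, just like Thread#join
cp.await
puts 'after await'
```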
+ ## Higher-Order Concurrency Constructs
+
+ Polyphony also provides several methods and constructs for controlling multiple coprocesses. Methods like `cancel_after` and `move_on_after` allow interrupting a coprocess that's blocking on any arbitrary operation.
+
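For example, `move_on_after` (also shown in the design-principles document) silently abandons a blocking operation once the given interval has elapsed:

```ruby
require 'polyphony'

# give up on the slow operation after 3 seconds instead of waiting forever
move_on_after(3) do
  sleep 60
  puts 'this line is never reached'
end
puts 'moved on'
```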
+ Cancel scopes \(borrowed from the brilliant Python library [Trio](https://trio.readthedocs.io/en/stable/)\) allow cancelling ongoing operations for any reason, with more control over cancellation behaviour.
+
+ Supervisors allow controlling multiple coprocesses. They offer enhanced exception handling and can be nested to create complex supervision trees a la [Erlang](https://adoptingerlang.org/docs/development/supervision_trees/).
+
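A minimal sketch of a supervisor, following the `supervise` pattern shown in the design-principles document (the child coprocesses here are just placeholders):

```ruby
require 'polyphony'

# supervise waits for all coprocesses spun inside it to terminate
supervise do |s|
  3.times do |i|
    s.spin { sleep 1; puts "child #{i} done" }
  end
end
puts 'all children done'
```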
+ Some other constructs offered by Polyphony:
+
+ * `Mutex` - a mutex used to synchronize access to a single shared resource.
+ * `ResourcePool` - used for synchronizing access to a limited number of shared
+   resources, for example a pool of database connections.
+ * `Throttler` - used for throttling repeating operations, for example throttling
+   access to a shared resource, or throttling incoming requests.
+
+ ## A Compelling Concurrency Solution for Ruby
+
+ > The goal of Ruby is to make programmers happy.
+
+ — Yukihiro “Matz” Matsumoto
+
+ Polyphony's goal is to make programmers even happier by offering them an easy way to write concurrent applications in Ruby. Polyphony aims to show that Ruby can be used for developing sufficiently high-performance applications, while offering all the advantages of Ruby, with source code that is easy to read and understand.
+
data/docs/technical-overview/design-principles.md ADDED
@@ -0,0 +1,112 @@
+ # Design Principles
+
+ Polyphony was created in order to enable creating high-performance concurrent
+ applications in Ruby, by utilizing Ruby fibers together with the
+ [libev](http://pod.tst.eu/http://cvs.schmorp.de/libev/ev.pod) event reactor
+ library. Polyphony's design is based on the following principles:
+
+ - Polyphony's concurrency model should feel "baked-in". The API should allow
+ concurrency with minimal effort. Polyphony should allow creating small
+ concurrent programs with as little boilerplate code as possible. There
+ should be no calls to initialize the event reactor, or other ceremonial code:
+
+ ```ruby
+ require 'polyphony/auto_run'
+
+ 10.times {
+   # start 10 coprocesses, each sleeping for 1 second
+   spin { sleep 1 }
+ }
+
+ puts 'going to sleep now'
+ ```
+
+ - Blocking operations should yield to the reactor without any decoration or
+ wrapper APIs. This means no `async/await` notation, and no built-in concept of
+ deferred computation.
+
+ ```ruby
+ # in Polyphony, I/O ops block the current fiber, but implicitly yield to other
+ # concurrent coprocesses:
+ clients.each { |client|
+   spin { client.puts 'Elvis has left the chatroom' }
+ }
+ ```
+
+ - Concurrency primitives should be accessible using idiomatic Ruby techniques
+ (blocks, method chaining...) and should feel as much as possible "part of the
+ language". The resulting API is based more on methods and less on classes,
+ for example `spin` or `move_on_after`, leading to a coding style that is both
+ more compact and more legible:
+
+ ```ruby
+ coprocess = spin {
+   move_on_after(3) {
+     do_something_slow
+   }
+ }
+ ```
+ - Polyphony should embrace Ruby's standard `raise/rescue/ensure` exception
+ handling mechanism:
+
+ ```ruby
+ cancel_after(0.5) do
+   puts 'going to sleep'
+   sleep 1
+   # this will not be printed
+   puts 'woke up'
+ ensure
+   # this will be printed
+   puts 'done sleeping'
+ end
+ ```
+
+ - Concurrency primitives should allow creating higher-order concurrent
+ constructs through composition. This is done primarily through supervisors and
+ cancel scopes:
+
+ ```ruby
+ # wait for multiple coprocesses
+ supervise { |s|
+   clients.each { |client|
+     s.spin { client.puts 'Elvis has left the chatroom' }
+   }
+ }
+ ```
+
+ - The internal reactor design should embrace fibers rather than be based on
+ invoking callbacks. The internal design of most reactor libraries is based on
+ callbacks. The design for Polyphony should center on suspending and resuming
+ fibers:
+
+ ```ruby
+ # pseudo-code for Gyro::Timer, the internal timer class
+ def Gyro::Timer.await
+   @fiber = Fiber.current
+   # the libev event reactor uses callbacks for handling events; Polyphony uses
+   # those callbacks to switch between fibers
+   EV.start_timer(@interval) { @fiber.transfer }
+ end
+ ```
+
+ - Extensive monkey patching of Ruby core modules and classes such as
+ `Kernel`, `Fiber`, `IO` and `Timeout`. This allows porting over non-Polyphony
+ code, as well as using a larger part of stdlib in a concurrent manner, without
+ having to use custom non-standard network classes or other glue code.
+
+ ```ruby
+ require 'polyphony'
+
+ # use TCPServer from Ruby's stdlib
+ server = TCPServer.open('127.0.0.1', 1234)
+ while (client = server.accept)
+   spin do
+     while (data = client.gets)
+       client.write('you said: ', data.chomp, "!\n")
+     end
+   end
+ end
+ ```
+
+ - Development of techniques and tools for converting callback-based APIs to
+ fiber-based ones.
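In the same pseudo-code spirit as the `Gyro::Timer` sketch above, the general shape of such a conversion is to register a callback whose only job is to resume the calling fiber; the names `await_result`, `async_op` and `on_complete` below are hypothetical, and the final suspension step is assumed to hand control to the event reactor:

```ruby
# pseudo-code: wrap a callback-based API in a blocking, fiber-friendly call
def await_result(async_op)
  fiber = Fiber.current
  # the callback merely resumes the awaiting fiber with the result
  async_op.on_complete { |result| fiber.transfer(result) }
  suspend # hand control to the event reactor until the callback fires
end
```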
data/docs/technical-overview/exception-handling.md CHANGED
@@ -1,14 +1,9 @@
- # Exception Handling in a Multi-Fiber Environment
+ # Exception Handling
 
- Ruby employs a pretty robust exception handling mechanism. An raised exception
- will bubble up the call stack until a suitable exception handler is found, based
- on the exception's class. In addition, the exception will include a stack trace
- showing the execution path from the exception's locus back to the program's
- entry point. Unfortunately, when exceptions are raised while switching between
- fibers, stack traces will only include partial information. Here's a simple
- demonstration:
+ Ruby employs a pretty robust exception handling mechanism. A raised exception will bubble up the call stack until a suitable exception handler is found, based on the exception's class. In addition, the exception will include a stack trace showing the execution path from the exception's locus back to the program's entry point. Unfortunately, when exceptions are raised while switching between fibers, stack traces will only include partial information. Here's a simple demonstration:
+
+ _fiber\_exception.rb_
 
- *fiber_exception.rb*
  ```ruby
  require 'fiber'
 
@@ -27,19 +22,15 @@ f.transfer
 
  Running the above program will give us:
 
- ```
+ ```text
  Traceback (most recent call last):
  1: from fiber_exception.rb:9:in `block (2 levels) in <main>'
  fiber_exception.rb:4:in `fail!': foobar (RuntimeError)
  ```
 
- So, the stack trace includes two frames: the exception's locus on line 4 and the
- call site at line 9. But we have no information on how we got to line 9. Let's
- imagine if we had more complete information about the sequence of execution. In
- fact, what is missing is information about how the different fibers were
- created. If we had that, our stack trace would have looked something like this:
+ So, the stack trace includes two frames: the exception's locus on line 4 and the call site at line 9. But we have no information on how we got to line 9. Let's imagine if we had more complete information about the sequence of execution. In fact, what is missing is information about how the different fibers were created. If we had that, our stack trace would have looked something like this:
 
- ```
+ ```text
  Traceback (most recent call last):
  4: from fiber_exception.rb:13:in `<main>'
  3: from fiber_exception.rb:7:in `Fiber.new'
@@ -48,19 +39,9 @@ Traceback (most recent call last):
  fiber_exception.rb:4:in `fail!': foobar (RuntimeError)
  ```
 
- In order to achieve this, Polyphony patches `Fiber.new` to keep track of the
- call stack at the moment the fiber was created, as well as the fiber from which
- the call happened. In addition, Polyphony patches `Exception#backtrace` in order
- to synthesize a complete stack trace based on the call stack information stored
- for the current fiber. This is done recursively through the chain of fibers
- leading up to the current location. What we end up with is a record of the
- entire sequence of (possibly intermittent) execution leading up to the point
- where the exception was raised.
+ In order to achieve this, Polyphony patches `Fiber.new` to keep track of the call stack at the moment the fiber was created, as well as the fiber from which the call happened. In addition, Polyphony patches `Exception#backtrace` in order to synthesize a complete stack trace based on the call stack information stored for the current fiber. This is done recursively through the chain of fibers leading up to the current location. What we end up with is a record of the entire sequence of \(possibly intermittent\) execution leading up to the point where the exception was raised.
 
- In addition, the backtrace is sanitized to remove stack frames originating from
- the Polyphony code itself, which hides away the Polyphony plumbing and lets
- developers concentrate on their own code. The sanitizing of exception backtraces
- can be disabled by setting the `Exception.__disable_sanitized_backtrace__` flag:
+ In addition, the backtrace is sanitized to remove stack frames originating from the Polyphony code itself, which hides away the Polyphony plumbing and lets developers concentrate on their own code. The sanitizing of exception backtraces can be disabled by setting the `Exception.__disable_sanitized_backtrace__` flag:
 
  ```ruby
  Exception.__disable_sanitized_backtrace__ = true
@@ -69,10 +50,7 @@ Exception.__disable_sanitized_backtrace__ = true
 
  ## Cleaning up after exceptions
 
- A major issue when handling exceptions is cleaning up - freeing up resources
- that have been allocated, cancelling ongoing operations, etc. Polyphony allows
- using the normal `ensure` statement for cleaning up. Have a look at Polyphony's
- implementation of `Kernel#sleep`:
+ A major issue when handling exceptions is cleaning up - freeing up resources that have been allocated, cancelling ongoing operations, etc. Polyphony allows using the normal `ensure` statement for cleaning up. Have a look at Polyphony's implementation of `Kernel#sleep`:
 
  ```ruby
  def sleep(duration)
@@ -83,12 +61,27 @@ ensure
  end
  ```
 
- This method creates a one-shot timer with the given duration and then suspends
- the current fiber, waiting for the timer to fire and then resume the fiber.
- While the awaiting fiber is suspended, other operations might be going on, which
- might interrupt the `sleep` operation by scheduling the awaiting fiber with an
- exception, for example a `MoveOn` or a `Cancel` exception. For this reason, we
- need to *ensure* that the timer will be stopped, regardless of whether it has
- fired or not. We call `timer.stop` inside an ensure block, thus ensuring that
- the timer will have stopped once the awaiting fiber has resumed, even if it has
- not fired.
+ This method creates a one-shot timer with the given duration and then suspends the current fiber, waiting for the timer to fire and then resume the fiber. While the awaiting fiber is suspended, other operations might be going on, which might interrupt the `sleep` operation by scheduling the awaiting fiber with an exception, for example a `MoveOn` or a `Cancel` exception. For this reason, we need to _ensure_ that the timer will be stopped, regardless of whether it has fired or not. We call `timer.stop` inside an ensure block, thus ensuring that the timer will have stopped once the awaiting fiber has resumed, even if it has not fired.
+
+ ## Bubbling Up - A Robust Solution for Uncaught Exceptions
+
+ One of the "annoying" things about exceptions is that for them to be useful, you have to intercept them \(using `rescue`\). If you forget to do that, you'll end up with uncaught exceptions that can wreak havoc. For example, by default a Ruby `Thread` in which an exception was raised without being caught will simply terminate, with the exception silently swallowed.
+
+ To prevent the same from happening with fibers, Polyphony provides a mechanism that lets uncaught exceptions bubble up through the chain of calling fibers. Let's discuss the following example:
+
+ ```ruby
+ require 'polyphony'
+
+ spin do
+   spin do
+     spin do
+       spin do
+         raise 'foo'
+       end.await
+     end.await
+   end.await
+ end.await
+ ```
+
+ In this example, there are four coprocesses, nested one within the other. An exception is raised in the innermost coprocess and, having no exception handler, it will bubble up through the different enclosing coprocesses until reaching the top-most level - that of the root fiber - at which point the exception will cause the program to halt and print an error message.
+
data/docs/technical-overview/extending.md ADDED
@@ -0,0 +1,80 @@
+ # Extending Polyphony
+
+ Polyphony was designed to ease the transition from blocking APIs and
+ callback-based APIs to non-blocking, fiber-based ones. It is important to
+ understand that not all blocking calls can be easily converted into
+ non-blocking calls. That might be the case with Ruby gems based on C-extensions,
+ such as database libraries. In that case, Polyphony's built-in
+ [thread pool](#threadpool) might be used for offloading such blocking calls.
+
+ ### Adapting callback-based APIs
+
+ One of the most common patterns in Ruby APIs is the callback pattern, in which
+ the API takes a block as a callback to be called upon completion of a task. One
+ such example can be found in the excellent
+ [http_parser.rb](https://github.com/tmm1/http_parser.rb/) gem, which is used by
+ Polyphony itself to provide HTTP 1 functionality. The `Http::Parser` class provides
+ multiple hooks, or callbacks, for being notified when an HTTP request is
+ complete. The typical callback-based setup is as follows:
+
+ ```ruby
+ require 'http/parser'
+ @parser = Http::Parser.new
+
+ def on_receive(data)
+   @parser << data
+ end
+
+ @parser.on_message_complete do |env|
+   process_request(env)
+ end
+ ```
+
+ A program using `http_parser.rb` in conjunction with Polyphony might do the
+ following:
+
+ ```ruby
+ require 'http/parser'
+ require 'polyphony'
+
+ def handle_client(client)
+   parser = Http::Parser.new
+   req = nil
+   parser.on_message_complete { |env| req = env }
+   loop do
+     parser << client.read
+     if req
+       handle_request(req)
+       req = nil
+     end
+   end
+ end
+ ```
+
+ Another possibility would be to monkey-patch `Http::Parser` in order to
+ encapsulate the state of the request:
+
+ ```ruby
+ class Http::Parser
+   def setup
+     self.on_message_complete = proc { @request_complete = true }
+   end
+
+   def parse(data)
+     self << data
+     return nil unless @request_complete
+
+     @request_complete = nil
+     self
+   end
+ end
+
+ def handle_client(client)
+   parser = Http::Parser.new
+   parser.setup
+   loop do
+     if (req = parser.parse(client.read))
+       handle_request(req)
+     end
+   end
+ end
+ ```
data/docs/technical-overview/faq.md ADDED
@@ -0,0 +1,74 @@
+ # Frequently Asked Questions
+
+ ## Why not just use callbacks instead of fibers?
+
+ It is true that reactor engines such as libev use callbacks to handle events. There are also programming platforms such as [node.js](https://nodejs.org/) that base their entire API on the callback pattern. [EventMachine](https://www.rubydoc.info/gems/eventmachine/1.2.7) is a popular reactor library for Ruby that uses callbacks for handling events.
+
+ Using callbacks means splitting your application logic into disjoint pieces of code. Consider the following example:
+
+ ```ruby
+ require 'eventmachine'
+
+ module EchoServer
+   def post_init
+     puts '-- someone connected to the echo server!'
+   end
+
+   def receive_data data
+     send_data ">>>you sent: #{data}"
+     close_connection if data =~ /quit/i
+   end
+
+   def unbind
+     puts '-- someone disconnected from the echo server!'
+   end
+ end
+
+ # Note that this will block the current thread.
+ EventMachine.run {
+   EventMachine.start_server '127.0.0.1', 8081, EchoServer
+ }
+ ```
+
+ The client-handling code is split across three different callback methods. Compare this to the following equivalent using Polyphony:
+
+ ```ruby
+ require 'polyphony/auto_run'
+
+ server = TCPServer.open('127.0.0.1', 8081)
+ while (client = server.accept)
+   spin do
+     puts '-- someone connected to the echo server!'
+     while (data = client.gets)
+       client << ">>>you sent: #{data}"
+       break if data =~ /quit/i
+     end
+   ensure
+     client.close
+     puts '-- someone disconnected from the echo server!'
+   end
+ end
+ ```
+
+ The Polyphony version is both terser and more explicit. It explicitly accepts connections on the server port, and the entire logic handling each client connection is contained in a single block. The order of the different actions - printing to the console, then echoing client messages, then finally closing the client connection and printing again to the console - is easy to grok. The echoing of client messages is also explicit: a simple loop waiting for a message, then responding to the client. In addition, we can use an `ensure` block to correctly clean up even if exceptions are raised while handling the client.
+
+ Using callbacks also makes it much more difficult to debug your program. When callbacks are used to handle events, the stack trace will necessarily start at the reactor, and thus lack any information about how the event came to be in the first place. Contrast this with Polyphony, where stack traces show the entire _sequence of events_ leading up to the present point in the code.
+
+ In conclusion:
+
+ * Callbacks cause the splitting of logic into disjoint chunks.
+ * Callbacks do not provide a good error handling solution.
+ * Callbacks often lead to code bloat.
+ * Callbacks are harder to debug.
+
+ ## If callbacks suck, why not use promises?
+
+ Promises have gained a lot of traction during the last few years as an
+ alternative to callbacks, above all in the JavaScript community. While promises were at one point considered for use in Polyphony, they were not found to offer enough of a benefit. Promises still cause split logic, are quite verbose and provide a non-native exception handling mechanism. In addition, they do not make it easier to debug your code.
+
+ ## Why is awaiting implicit? Why not use explicit async/await?
+
+ Actually, async/await was contemplated while developing Polyphony, but at a certain point it was decided to abandon these methods/decorators in favor of a more implicit approach. The most crucial issue with async/await is that it prevents the use of anything from Ruby's stdlib. Any operation involving stdlib classes needs to be wrapped in boilerplate.
+
+ Instead, we have decided to make blocking operations implicit and thus allow the use of common APIs such as `Kernel#sleep` or `IO.popen` in a transparent manner. After all, these APIs in their stock form block execution just as well.
+
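For contrast, here is a sketch of the difference; the `async`/`await` form in the comment is hypothetical, illustrating the boilerplate that the implicit approach avoids:

```ruby
# hypothetical explicit style (not how Polyphony works):
#   task = async { sleep 1 }
#   await task

# Polyphony's implicit style - stock APIs just work inside a coprocess:
require 'polyphony/auto_run'

spin do
  sleep 1 # plain Kernel#sleep, no wrapper or keyword needed
  puts 'one second passed'
end
```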