polyphony 1.1 → 1.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (41)
  1. checksums.yaml +4 -4
  2. data/.github/workflows/test.yml +1 -1
  3. data/.github/workflows/test_io_uring.yml +1 -1
  4. data/.rubocop.yml +16 -8
  5. data/CHANGELOG.md +13 -0
  6. data/README.md +2 -1
  7. data/docs/advanced-io.md +141 -44
  8. data/docs/cancellation.md +213 -0
  9. data/docs/readme.md +2 -1
  10. data/examples/core/enumerator.rb +92 -0
  11. data/examples/io/https_server_sni_2.rb +1 -1
  12. data/ext/polyphony/backend_common.c +11 -0
  13. data/ext/polyphony/backend_common.h +2 -0
  14. data/ext/polyphony/backend_io_uring.c +1 -1
  15. data/ext/polyphony/backend_libev.c +1 -1
  16. data/ext/polyphony/polyphony.h +3 -1
  17. data/lib/polyphony/core/debug.rb +24 -29
  18. data/lib/polyphony/core/exceptions.rb +0 -3
  19. data/lib/polyphony/core/sync.rb +0 -3
  20. data/lib/polyphony/core/thread_pool.rb +1 -5
  21. data/lib/polyphony/core/throttler.rb +0 -1
  22. data/lib/polyphony/core/timer.rb +7 -9
  23. data/lib/polyphony/extensions/exception.rb +0 -1
  24. data/lib/polyphony/extensions/fiber.rb +41 -28
  25. data/lib/polyphony/extensions/io.rb +86 -93
  26. data/lib/polyphony/extensions/kernel.rb +52 -16
  27. data/lib/polyphony/extensions/object.rb +7 -6
  28. data/lib/polyphony/extensions/openssl.rb +6 -8
  29. data/lib/polyphony/extensions/pipe.rb +5 -7
  30. data/lib/polyphony/extensions/socket.rb +28 -37
  31. data/lib/polyphony/extensions/thread.rb +2 -4
  32. data/lib/polyphony/extensions/timeout.rb +0 -1
  33. data/lib/polyphony/version.rb +1 -1
  34. data/lib/polyphony.rb +4 -7
  35. data/polyphony.gemspec +2 -2
  36. data/test/test_fiber.rb +6 -6
  37. data/test/test_global_api.rb +3 -3
  38. data/test/test_io.rb +2 -2
  39. data/test/test_socket.rb +2 -2
  40. data/test/test_supervise.rb +1 -1
  41. metadata +6 -4
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: 61fa840f595a0e75da1d6e1b761506b18fa90d5a94b25ce5832d22aae2e08fbd
-  data.tar.gz: 7d3a79225f68bb889ff0491ced1249c2679a222a3f79a5c4cf48f9571865ebc4
+  metadata.gz: d8069af9a528319585e1112ca1c1fc97981ae7106da100701691d7caf1478fdc
+  data.tar.gz: c5ec274b188332a18f8de7b595fad457f25cfc79bf903c05db0aaf7bf20507dc
 SHA512:
-  metadata.gz: 32d61cf7e0858e704fc39fe317048e3e531afcf97da70b9b3d1e4b59a078e2f5c8c65a3f72c656b61285df9e5c99d7430938403560ca6e8e7aa5d920dada274e
-  data.tar.gz: c5c1488b1cec55810ffccf8116d70e8312ccb7d93875c1047ab7608595eb81fda9601f44ef308b3e1f35319d683b6ceda423284c9cda76c6f44882a7341e681a
+  metadata.gz: f4e8fbd79cce5062a1fa054838186c925fa626c29faf86f589889f418838b681923863d07603036c2276bd3644ce99eac1f84cdc57b7c57e97787bceec6f4cd0
+  data.tar.gz: 22dbe0e38bc0d46ac1dcd543b0ae2b5e4a282c286f6dd40d823347542cbb37e2f21cf3c7c5503389f80f4b057b644c19c683d346a3500ada6a39e83bdbd1516f
data/.github/workflows/test.yml CHANGED
@@ -8,7 +8,7 @@ jobs:
       fail-fast: false
       matrix:
         os: [ubuntu-latest, macos-latest]
-        ruby: ['3.0', '3.1', '3.2', 'head']
+        ruby: ['3.1', '3.2', 'head']
 
     name: >-
       ${{matrix.os}}, ${{matrix.ruby}}
data/.github/workflows/test_io_uring.yml CHANGED
@@ -8,7 +8,7 @@ jobs:
       fail-fast: false
       matrix:
         os: [ubuntu-latest]
-        ruby: ['3.0', '3.1', '3.2', 'head']
+        ruby: ['3.1', '3.2', 'head']
 
     name: >-
       ${{matrix.os}}, ${{matrix.ruby}}
data/.rubocop.yml CHANGED
@@ -64,8 +64,7 @@ Layout/HashAlignment:
 
 Naming/AccessorMethodName:
   Exclude:
-    - lib/polyphony/http/server/http1.rb
-    - lib/polyphony/http/server/http2_stream.rb
+    - lib/polyphony/extensions/fiber.rb
     - examples/**/*.rb
 
 Naming/MethodName:
@@ -74,16 +73,11 @@ Naming/MethodName:
 
 Lint/SuppressedException:
   Exclude:
-    - lib/polyphony/http/server/http1.rb
-    - lib/polyphony/http/server/http2.rb
-    - lib/polyphony/http/server/http2_stream.rb
-    - lib/polyphony/http/server.rb
     - examples/**/*.rb
 
 Metrics/MethodLength:
   Max: 12
   Exclude:
-    - lib/polyphony/http/server/rack.rb
     - lib/polyphony/extensions/io.rb
     - lib/polyphony/extensions/fiber.rb
     - test/**/*.rb
@@ -96,7 +90,6 @@ Metrics/ModuleLength:
 
 Metrics/ClassLength:
   Exclude:
-    - lib/polyphony/http/server/http1.rb
     - lib/polyphony/extensions/io.rb
     - lib/polyphony/extensions/fiber.rb
     - lib/polyphony/extensions/object.rb
@@ -176,6 +169,21 @@ Lint/RaiseException:
 Lint/StructNewOverride:
   Enabled: true
 
+Style/NegatedIf:
+  Enabled: false
+
+Style/NegatedWhile:
+  Enabled: false
+
+Style/CombinableLoops:
+  Enabled: false
+
+Style/InfiniteLoop:
+  Enabled: false
+
+Style/RedundantReturn:
+  Enabled: false
+
 Style/ExponentialNotation:
   Enabled: true
 
data/CHANGELOG.md CHANGED
@@ -1,3 +1,16 @@
+## 1.2 2023-06-17
+
+- Require Ruby 3.1 or newer
+- Add cancellation doc page
+- Clean up code
+- Accept an array of fibers in `Fiber.await` (in addition to accepting multiple fibers)
+- Automatically create backend for thread if not already created (#100)
+- Fix trap API when used with debug gem (#100)
+
+## 1.1.1 2023-06-08
+
+- Minor improvements to documentation
+
 ## 1.1 2023-06-08
 
 - Add advanced I/O doc page
data/README.md CHANGED
@@ -51,7 +51,7 @@ the hood, Polyphony uses
 In order to use Polyphony you need to have:
 
 - Linux or MacOS (support for Windows will come at a later stage)
-- Ruby (MRI) 3.0 or newer
+- Ruby (MRI) 3.1 or newer
 
 ### Installing the Polyphony Gem
 
@@ -77,6 +77,7 @@ $ gem install polyphony
 
 - [Overview](docs/overview.md)
 - [Tutorial](docs/tutorial.md)
+- [All About Cancellation: How to Stop Concurrent Operations](docs/cancellation.md)
 - [Advanced I/O with Polyphony](docs/advanced-io.md)
 - [Cheat-Sheet](docs/cheat-sheet.md)
 - [FAQ](docs/faq.md)
data/docs/advanced-io.md CHANGED
@@ -1,5 +1,7 @@
 # @title Advanced I/O with Polyphony
 
+# Advanced I/O with Polyphony
+
 ## Using splice for moving data between files and sockets
 
 Splice is linux-specific API that lets you move data between two file
@@ -10,12 +12,15 @@ size. Using splice, you can avoid the cost of having to load a file's content
 into memory, in order to send it to a TCP connection.
 
 In order to use `splice`, at least one of the file descriptors involved needs to
-be a pipe. This is because in Linux, pipes are actually kernel buffers. The
-normal way of using splice is that first you splice data from the source fd to
-the pipe (to its *write* fd), and then you splice data from the pipe (from its
-*read* fd) to the destination fd.
+be a pipe. This is because in Linux, pipes are actually kernel buffers. The idea
+is that you first move data from a source fd into a kernel buffer, then you move
+data from the kernel buffer to the destination fd. In some cases, this lets the
+Linux kernel completely avoid having to copy data in order to move it from the
+source to the destination. So the normal way of using splice is that first you
+splice data from the source fd to the pipe (to its *write* fd), and then you
+splice data from the pipe (from its *read* fd) to the destination fd.
 
-Here's how we do splicing using Polyphony:
+Here's how you can use splice with Polyphony:
 
 ```ruby
 def send_file_using_splice(src, dest)
@@ -25,24 +30,29 @@ def send_file_using_splice(src, dest)
   pipe = Polyphony::Pipe.new
   loop do
     # splices data from src to the pipe
-    bytes_spliced = IO.splice(src, pipe, 2**14)
-    break if bytes_spliced == 0 # EOF
+    bytes_available = IO.splice(src, pipe, 2**14)
+    break if bytes_available == 0 # EOF
 
     # splices data from the pipe to the dest
-    IO.splice(pipe, dest, bytes_spliced)
+    while bytes_available > 0
+      written = IO.splice(pipe, dest, bytes_available)
+      bytes_available -= written
+    end
   end
 end
 ```
 
 Let's examine the code above. First of all, we have a loop that repeatedly
-splices data in chunks of 16KB. We break from the loop once EOF is encountered.
-Secondly, on each iteration of the loop we perform two splice operations
-sequentially. So, we need to repeatedly perform two splice operations, one after
-the other. Would there be a better way to do this?
+splices data in chunks of 16KB, using the `IO.splice` API provided by Polyphony.
+We break from the loop once EOF is encountered. Secondly, for moving data from
+the pipe to the destination, we need to make sure *all* data made available on
+the pipe has been spliced to the destination, since the call to `IO.splice` can
+actually write fewer bytes than specified. So, we need to repeatedly perform two
+splice operations, one after the other, and we need to make sure all data is
+spliced to the destination. Would there be a better way to do this?
 
-Fortunately, Polyphony provides just the tools needed to do that. Firstly, we
-can tell Polyphony to splice data repeatedly until EOF is encountered by passing
-a negative max size:
+Fortunately, with Polyphony there is! Firstly, we can tell Polyphony to splice
+data repeatedly until EOF is encountered by passing a negative max size:
 
 ```ruby
 IO.splice(src, pipe, -2**14)
@@ -62,25 +72,29 @@ def send_file_using_splice(src, dest)
   end
   IO.splice(pipe, dest, -2**14)
 end
+
+#  +----+   IO.splice()   +------+   IO.splice()   +--------+
+#  | io |---------------->| pipe |---------------->| socket |
+#  +----+                 +------+                 +--------+
 ```
 
 There are a few things to notice here: While we have two concurrent operations
-running in two separate fibers, their are still inter-dependent in their
-individual progress, as one is filling a kernel buffer, and the other is
-flushing it, and thus the progress of whole will be bound by the slowest
-operation.
-
-Imagine an HTTP server that serves a large file to a slow client, or a client
-with a bad network connection. The web server is perfectly capable of reading
-the file from its disk very fast, but sending data to the HTTP client can be
-much much slower. The second splice operation, splicing from the pipe to the
-destination, will flush the kernel much more slowly that it is being filled. At
-a certain point, the buffer is full, and the first splice operation from the
-source to the pipe cannot continue. It will need to wait for the other splice
-operation to progress, in order to continue filling the buffer. This is called
-back-pressure propagation, and we get it automatically.
-
-So let's look at all the things we didn't need to do: we didn't need to read
+running in two separate fibers, they are still inter-dependent in their
+progress, as one is filling a kernel buffer, and the other is flushing it, and
+thus the progress of the whole will be bound by the slowest operation.
+
+Take an HTTP server that serves a large file to a slow client, or a client with
+a bad network connection. The web server is perfectly capable of reading the
+file from its disk very fast, but sending data to the HTTP client can be much
+much slower. The second splice operation, splicing from the pipe to the
+destination, will flush the kernel buffer much more slowly than it is being
+filled. At a certain point, the buffer is full, and the first splice operation
+from the source to the pipe cannot continue. It will need to wait for the other
+splice operation to progress, in order to continue filling the buffer. This is
+called back-pressure propagation, it's a good thing, and we get it
+automatically.
+
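The shape of this two-stage pipeline can be tried out in plain Ruby, with threads standing in for fibers and `IO.copy_stream` standing in for `IO.splice`. This is only an illustrative sketch; `pipeline_copy` is a hypothetical helper, not a Polyphony API:

```ruby
require 'stringio'

# Plain-Ruby sketch of the two-stage pipeline described above: threads
# stand in for fibers, IO.copy_stream stands in for IO.splice. The
# fixed-capacity pipe between the stages is what yields back-pressure:
# the filling stage blocks whenever the flushing stage falls behind.
def pipeline_copy(src, dest)
  reader, writer = IO.pipe
  filler = Thread.new do
    IO.copy_stream(src, writer) # stage 1: source -> kernel buffer
    writer.close                # signal EOF to the flushing stage
  end
  IO.copy_stream(reader, dest)  # stage 2: kernel buffer -> destination
  filler.join
  reader.close
end

src  = StringIO.new('hello world' * 1000)
dest = StringIO.new
pipeline_copy(src, dest)
dest.string.bytesize #=> 11000
```

Because the pipe has a fixed capacity, a slow destination naturally stalls the filling stage, which is the same back-pressure propagation described above.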
+Let's now look at all the things we didn't need to do: we didn't need to read
 data into a Ruby string (which is costly in CPU time, in memory, and eventually
 in GC pressure), we didn't need to manage a buffer and take care of
 synchronizing access to the buffer. We got to move data from the source to the
@@ -97,17 +111,17 @@ end
 ```
 
 The `IO.double_splice` creates a pipe and repeatedly splices data concurrently
-from the source to pipe and from the pipe to the destination until the source is
-exhausted. All this, without needing to instantiate a `Polyphony::Pipe` object,
-and without needing to spin up a second fiber, further minimizing memory use and
-GC pressure.
+from the source to the pipe and from the pipe to the destination until the
+source is exhausted. All this, without needing to instantiate a
+`Polyphony::Pipe` object, and without needing to spin up a second fiber, further
+minimizing memory use and GC pressure.
 
 ## Compressing and decompressing in-flight data
 
 You might be familiar with Ruby's [zlib](https://github.com/ruby/zlib) gem (docs
 [here](https://rubyapi.org/3.2/o/zlib)), which can be used to compress and
 uncompress data using the popular gzip format. Imagine we want to implement an
-HTTP server that can serve files compresszed using gzip:
+HTTP server that can serve files compressed using gzip:
 
 ```ruby
 def serve_compressed_file(socket, file)
@@ -117,10 +131,10 @@ def serve_compressed_file(socket, file)
 end
 ```
 
-In the above example, we have read the file contents into a Ruby string, then
-passed the contents to `Zlib.gzip`, which returned the compressed contents in
-another Ruby string, then wrote the compressed data to the socket. We can see
-how this can lead to large allocations of memory (if the file is large), and
+In the above example, we read the file contents into a Ruby string, then pass
+the contents to `Zlib.gzip`, which returns the compressed contents in another
+Ruby string, then write the compressed data to the socket. We can see how this
+can lead to lots of memory allocations (especially if the file is large), and
 more pressure on the Ruby GC. How can we improve this?
 
 One way would be to utilise Zlib's `GzipWriter` class:
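The diff cuts off before the `GzipWriter` example itself; for reference, a plain-stdlib sketch of that approach might look like the following (illustrative only, with a `StringIO` standing in for the socket, and not necessarily the doc's exact code):

```ruby
require 'zlib'
require 'stringio'

# Streams a file-like source into a gzip stream wrapped around the
# destination, 16KB at a time, so the whole payload never has to live
# in memory at once.
def serve_compressed_file(socket, file)
  Zlib::GzipWriter.wrap(socket) do |gz|
    while (chunk = file.read(2**14))
      gz.write(chunk)
    end
  end
end

source = StringIO.new('some file content ' * 500)
dest   = StringIO.new
serve_compressed_file(dest, source)

# Round-trip check: inflating what was written restores the original.
Zlib.gunzip(dest.string).bytesize #=> 9000
```

Note that `GzipWriter.wrap` closes the compressed stream (and its destination) when the block returns, which is what flushes the final gzip trailer.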
@@ -165,7 +179,7 @@ through some object that parses the data, or otherwise manipulates it. Normally,
 we would write a loop that repeatedly reads the data from the source, then
 passes it to the parser object. Imagine we have data transmitted using the
 `MessagePack` format that we need to convert back into its original form. We
-might do something like this:
+might do something like the following:
 
 ```ruby
 def with_message_pack_data_from_io(io, &block)
@@ -215,10 +229,93 @@ With `IO#feed_loop` we get to write even less code, and as with `IO#read_loop`,
 `IO#feed_loop` is implemented at the C-extension level using a tight loop that
 maximizes performance.
 
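The feed-loop shape can be sketched in plain Ruby as well. The `LineParser` below is a hypothetical stand-in for a real incremental parser (such as a MessagePack unpacker), just to show the feed/emit pattern:

```ruby
require 'stringio'

# A hypothetical stand-in for an incremental parser: it is fed raw
# bytes and emits complete lines as they become available, buffering
# any partial line between feeds.
class LineParser
  def initialize
    @buffer = +''
  end

  def feed(data, &block)
    @buffer << data
    while (line = @buffer.slice!(/\A[^\n]*\n/))
      block.call(line.chomp)
    end
  end
end

# The feed-loop shape: read chunks from the io, feed them to the
# parser, and invoke the block once per parsed message.
def feed_lines(io, parser, &block)
  while (chunk = io.read(5)) # tiny chunks to exercise partial parses
    parser.feed(chunk, &block)
  end
end

lines = []
feed_lines(StringIO.new("alpha\nbeta\ngamma\n"), LineParser.new) { |l| lines << l }
lines #=> ["alpha", "beta", "gamma"]
```

The point of the pattern is that read granularity and message granularity are decoupled: the parser owns the buffering, and the loop stays a tight read-and-feed cycle.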
+## Fast and easy chunked transfer-encoding in HTTP/1
+
+[Chunked transfer
+encoding](https://en.wikipedia.org/wiki/Chunked_transfer_encoding) is a great
+way to serve HTTP responses of arbitrary size, because we don't need to know
+their size in advance, which means we don't necessarily need to hold them in
+memory, or perform expensive fstat calls to get file metadata. Sending HTTP
+responses in chunked transfer encoding is simple enough:
+
+```ruby
+def send_chunked_response_from_io(socket, io)
+  while true
+    chunk = io.read(MAX_CHUNK_SIZE) || ''
+    socket << "#{chunk.bytesize.to_s(16)}\r\n#{chunk}\r\n"
+    break if chunk.empty?
+  end
+end
+```
+
+Note how we read the chunk into memory and then send it on to the client. Would
+it be possible to splice the data instead? Let's see how that would look:
+
+```ruby
+def send_chunked_response_from_io(socket, io)
+  pipe = Polyphony::Pipe.new
+  while true
+    bytes_spliced = IO.splice(io, pipe, MAX_CHUNK_SIZE)
+    socket << "#{bytes_spliced.to_s(16)}\r\n"
+    IO.splice(pipe, socket, bytes_spliced) if bytes_spliced > 0
+    socket << "\r\n"
+    break if bytes_spliced == 0
+  end
+end
+```
+
+In the code above, while we avoid having to read chunks of the source data into
+Ruby strings, we now perform 3 I/O operations for each chunk: writing the chunk
+size, splicing the data from the pipe (the kernel buffer), and finally writing
+the `"\r\n"` delimiter. We can probably write some more complex logic to reduce
+this to 2 operations (coalescing the two write operations into one), but still
+this implementation involves a lot of back and forth between our code, the
+Polyphony I/O backend, and the operating system.
+
+Fortunately, Polyphony provides a special API for sending HTTP chunked
+responses:
+
+```ruby
+def send_chunked_response_from_io(socket, io)
+  IO.http1_splice_chunked(io, socket, MAX_CHUNK_SIZE)
+end
+```
+
+A single method call replaces the whole mechanism we devised above, and in
+addition Polyphony makes sure to perform it with the minimum possible number of
+I/O operations!
+
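The chunk framing itself is easy to verify in plain Ruby. Below is a sketch of an encoder for that framing together with a minimal decoder for checking the round trip (`encode_chunked` and `decode_chunked` are illustrative helpers, not Polyphony APIs):

```ruby
require 'stringio'

# Encodes everything read from io using HTTP/1 chunked transfer
# encoding: each chunk is "<hex size>\r\n<data>\r\n", terminated by a
# zero-length chunk.
def encode_chunked(io, max_chunk_size)
  out = +''
  while (chunk = io.read(max_chunk_size))
    out << "#{chunk.bytesize.to_s(16)}\r\n#{chunk}\r\n"
  end
  out << "0\r\n\r\n"
end

# A minimal decoder for the same framing, to check the round trip.
def decode_chunked(data)
  body = +''
  io = StringIO.new(data)
  while (size = io.readline.chomp.to_i(16)) > 0
    body << io.read(size)
    io.read(2) # consume the trailing \r\n
  end
  body
end

encoded = encode_chunked(StringIO.new('x' * 100), 2**4)
decode_chunked(encoded) == 'x' * 100 #=> true
```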
+## Sending compressed data using chunked transfer encoding
+
+We can now combine the different APIs discussed above to create even more
+complex behaviour. Let's see how we can send an HTTP response using compressed
+content encoding and chunked transfer encoding:
+
+```ruby
+def send_compressed_chunked_response_from_io(socket, io)
+  pipe = Polyphony::Pipe.new
+  spin { IO.gzip(io, pipe) }
+  IO.http1_splice_chunked(pipe, socket, MAX_CHUNK_SIZE)
+end
+
+#  +----+  IO.gzip()  +------+  IO.http1_splice_chunked()  +--------+
+#  | io |------------>| pipe |---------------------------->| socket |
+#  +----+             +------+                             +--------+
+```
+
+The code above looks simple enough, but it actually packs a lot of power in just
+3 lines of code: we create a pipe, then spin up a fiber that compresses data
+from `io` into the pipe. We then splice data from the pipe to the socket using
+chunked transfer encoding. As discussed above, we do this without actually
+allocating any Ruby strings for holding the data, we take maximum advantage of
+kernel buffers (a.k.a. pipes) and we perform the two operations - compressing
+the data and sending it to the client - concurrently.
+
 ## Conclusion
 
 In this article we have looked at some of the advanced I/O functionality
-provided by Polyphony, which lets us write less code, have it run faster, and
-minimize memory allocations and pressure on the Ruby GC. Feel free to browse the
-[IO examples](https://github.com/digital-fabric/polyphony/tree/master/examples/io)
+provided by Polyphony, which lets us write less code, have it run faster, have
+it run concurrently, and minimize memory allocations and pressure on the Ruby
+GC. Feel free to browse the [IO
+examples](https://github.com/digital-fabric/polyphony/tree/master/examples/io)
 included in Polyphony.
data/docs/cancellation.md ADDED
@@ -0,0 +1,213 @@
+# @title All About Cancellation: How to Stop Concurrent Operations
+
+# All About Cancellation: How to Stop Concurrent Operations
+
+## The Problem of Cancellation
+
+Being able to cancel an operation is an important aspect of concurrent
+programming. When you have multiple operations going on at the same time, you
+want to be able to stop an operation in certain circumstances. Imagine sending
+an HTTP request to some server, and waiting for it to respond. We can wait
+forever, or we can use some kind of mechanism for stopping the operation and
+declaring it a failure. This mechanism, which is generally called cancellation,
+plays a crucial part in how Polyphony works. Let's examine how operations are
+cancelled in Polyphony.
+
+## Cancellation in Polyphony
+
+In Polyphony, every operation can be cancelled in the same way, using the same
+APIs. Polyphony provides multiple APIs that can be used to stop an ongoing
+operation, but the underlying mechanism is always the same: the fiber running
+the ongoing operation is scheduled with an exception.
+
+Let's revisit how fibers are run in Polyphony (this is covered in more detail in
+the overview document). When a waiting fiber is ready to continue, it is
+scheduled with the result of the operation which it was waiting for. If the
+waiting fiber is scheduled with an exception *before* the operation it is
+waiting for is completed, the operation is stopped, and the exception is raised
+in the context of the fiber once it is switched to. What this means is that any
+fiber waiting for a long-running operation to complete can be stopped at any
+moment, with Polyphony taking care of actually stopping the operation, whether
+it is reading from a file, or from a socket, or waiting for a timer to elapse.
+
+On top of this general mechanism of cancellation, Polyphony provides
+cancellation APIs with differing semantics that can be employed by the
+developer. For example, `move_on_after` can be used to stop an operation after a
+timeout without raising an exception, while `cancel_after` can be used to raise
+an exception that must be handled. There's also the `Fiber#restart` API which,
+as its name suggests, allows one to restart any fiber, which might be very
+useful for retrying complex operations.
+
+Let's examine how a concurrent operation is stopped in Polyphony:
+
+```ruby
+sleeper = spin { sleep 1 }
+sleep 0.5
+sleeper.raise 'Foo'
+```
+
+In the example above, we spin up a fiber that sleeps for 1 second, we then sleep
+for half a second, and cancel `sleeper` by raising an exception in its context.
+This causes the sleep operation to be cancelled and the fiber to be stopped. The
+exception is further propagated to the context of the main fiber, and the
+program finally exits with an exception message.
+
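The underlying move of scheduling a fiber with an exception can be observed in core Ruby as well, via `Fiber#raise`. No Polyphony scheduler is involved here; this is only an illustration of the basic mechanism the doc describes:

```ruby
# Core-Ruby illustration of "scheduling a fiber with an exception":
# Fiber#raise resumes the fiber and raises at the point where it last
# yielded, which is the basic move Polyphony's scheduler builds upon.
fiber = Fiber.new do
  Fiber.yield # stand-in for waiting on a long-running operation
  :completed
rescue RuntimeError => e
  "cancelled: #{e.message}"
end

fiber.resume # run the fiber until it "waits"
result = fiber.raise(RuntimeError, 'Foo')
result #=> "cancelled: Foo"
```

Polyphony adds scheduling on top of this primitive, so the exception is delivered the next time the fiber would have been switched to, rather than immediately.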
+Another way to stop a concurrent operation is to use the `Fiber#move_on` method,
+which causes the fiber to stop, but without raising an exception:
+
+```ruby
+sleeper = spin { sleep 1; :foo }
+sleep 0.5
+sleeper.move_on :bar
+result = sleeper.await #=> :bar
+```
+
+Using `Fiber#move_on`, we avoid raising an exception which then needs to be
+rescued, and instead cause the fiber to stop, with its return value being the
+value given to `Fiber#move_on`. In the code above, the fiber's result will be
+set to `:bar` instead of `:foo`.
+
+## Using Timeouts
+
+Timeouts are probably the most common reason for cancelling an operation. While
+different Ruby gems provide their own APIs and mechanisms for setting timeouts
+(core Ruby has also recently introduced timeout settings for IO operations),
+Polyphony provides a uniform interface for stopping *any* long-running operation
+based on a timeout, using either the core Ruby `Timeout` class, or the
+`move_on_after` and `cancel_after` APIs that Polyphony provides.
+
+Before we discuss the different timeout APIs, we can first explore how to create
+a timeout mechanism from scratch in Polyphony:
+
+```ruby
+class MyTimeoutError < RuntimeError
+end
+
+def with_timeout(duration)
+  timeout_fiber = spin do
+    sleep duration
+    raise MyTimeoutError
+  end
+  yield
+ensure
+  timeout_fiber.stop # this is the same as timeout_fiber.move_on
+end
+
+# Usage example:
+with_timeout(5) { sleep 1; :foo }  #=> :foo
+with_timeout(5) { sleep 10; :bar } #=> MyTimeoutError raised!
+```
+
+In the code above, we create a `with_timeout` method that takes a duration
+argument. It starts by spinning up a fiber that will sleep for the given
+duration, then raise a custom exception. It then runs the given block by calling
+`yield`. If the given block stops running before the timeout, it exits
+normally, not before making sure to stop the timeout fiber. If the given block
+runs longer than the timeout, the exception raised by the timeout fiber will be
+propagated to the fiber running the block, causing it to be stopped.
+
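For readers without Polyphony at hand, the same shape can be approximated in plain Ruby, with a watchdog thread standing in for the timeout fiber and `Thread#raise` standing in for scheduling the waiting fiber with an exception. This is a sketch of the idea, not how Polyphony implements its timeouts:

```ruby
class MyTimeoutError < RuntimeError
end

# Plain-Ruby analogue of the with_timeout sketch above: a watchdog
# thread stands in for the timeout fiber, and Thread#raise stands in
# for scheduling the waiting fiber with an exception.
def with_timeout(duration)
  caller_thread = Thread.current
  watchdog = Thread.new do
    sleep duration
    caller_thread.raise MyTimeoutError
  end
  yield
ensure
  watchdog.kill
  watchdog.join
end

with_timeout(5) { :foo } #=> :foo

begin
  with_timeout(0.05) { sleep 2; :bar }
rescue MyTimeoutError
  :timed_out
end #=> :timed_out
```

Note that `Thread#raise` delivers the exception asynchronously, which is inherently racier than Polyphony's approach of raising only at well-defined switch points.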
+Now that we have an idea of how we can construct timeouts, let's look at the
+different timeout APIs included in Polyphony:
+
+```ruby
+# Timeout without raising an exception
+move_on_after(5) { ... }
+
+# Timeout without raising an exception, returning an arbitrary value
+move_on_after(5, with_value: :foo) { ... } #=> :foo (in case of a timeout)
+
+# Timeout raising an exception
+cancel_after(5) { ... } #=> raises a Polyphony::Cancel exception
+
+# Timeout raising a custom exception
+cancel_after(5, with_exception: MyExceptionClass) { ... } #=> raises the given exception
+
+# Timeout using the Timeout API
+Timeout.timeout(5) { ... } #=> raises Timeout::Error
+```
+
+## Resetting Ongoing Operations
+
+In addition to offering a uniform API for cancelling operations and setting
+timeouts, Polyphony also allows you to reset, or restart, ongoing operations.
+Let's imagine an active search feature that shows the user search results while
+they're typing their search term. How do we go about implementing this? We would
+like to show the user search results, but if the user hits another key before
+the results are received from the database, we'd like to cancel the operation
+and relaunch the search. Let's see how Polyphony lets us do this:
+
+```ruby
+searcher = spin do
+  peer, term = receive
+  results = get_search_results_from_db(term)
+  peer << results
+end
+
+def search_term_updated(term)
+  spin do
+    searcher.restart
+    searcher << [Fiber.current, term]
+    results = receive
+    update_search_results(results)
+  end
+end
+```
+
+In the example above we use fiber message passing in order to communicate
+between two concurrent operations. Each time `search_term_updated` is called, we
+*restart* the `searcher` fiber, send the term to it, wait for the results and
+then update them in the UI.
+
+## Resettable Timeouts
+
+Here's another example of restarting: we have a TCP server that accepts
+connections but would like to close them after one minute of inactivity.
+We can use a timeout for that, but each time we receive data from the client, we
+need to reset the timeout. Here's how we can do this:
+
+```ruby
+def handle_connection(conn)
+  timeout = spin do
+    sleep 60
+    raise Polyphony::Cancel
+  end
+  conn.recv_loop do |msg|
+    timeout.reset # same as timeout.restart
+    handle_message(msg)
+  end
+rescue Polyphony::Cancel
+  puts 'Closing connection due to inactivity!'
+ensure
+  timeout.stop
+end
+
+server.accept_loop { |conn| handle_connection(conn) }
+```
+
+In the code above, we create a timeout fiber that sleeps for one minute, then
+raises an exception. We then run a loop waiting for messages from the client,
+and each time a message arrives we reset the timeout. In fact, the standard
+`#move_on_after` and `#cancel_after` APIs also provide a way to reset timeouts.
+Let's examine how to do just that:
+
+```ruby
+def handle_connection(conn)
+  cancel_after(60) do |timeout|
+    conn.recv_loop do |msg|
+      timeout.reset
+      handle_message(msg)
+    end
+  end
+rescue Polyphony::Cancel
+  puts 'Closing connection due to inactivity!'
+end
+
+server.accept_loop { |conn| handle_connection(conn) }
+```
+
+Here, instead of hand-rolling our own timeout mechanism, we use `#cancel_after`
+but give it a block that takes an argument. When the block is called, this
+argument is actually the timeout fiber that `#cancel_after` spins up, which lets
+us reset it just like in the example before. Also notice how we don't need to
+clean up the timeout in the ensure block, as `#cancel_after` takes care of it by
+itself.
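The resettable-timeout pattern introduced in the new cancellation doc can also be approximated in plain Ruby: instead of restarting a timeout fiber, a watchdog thread polls a deadline that each unit of activity pushes forward. `InactivityTimeout` is an illustrative sketch, not a Polyphony API:

```ruby
# A resettable inactivity timeout in plain Ruby: a watchdog thread
# polls a monotonic-clock deadline, and each reset pushes the deadline
# forward, standing in for timeout.reset in the doc's example.
class InactivityTimeout
  def initialize(seconds, &on_timeout)
    @seconds = seconds
    reset
    @watchdog = Thread.new do
      sleep 0.02 until Process.clock_gettime(Process::CLOCK_MONOTONIC) >= @deadline
      on_timeout.call
    end
  end

  def reset
    @deadline = Process.clock_gettime(Process::CLOCK_MONOTONIC) + @seconds
  end

  def stop
    @watchdog.kill
  end
end

fired = false
t = InactivityTimeout.new(0.3) { fired = true }
3.times { sleep 0.1; t.reset } # steady activity keeps the timeout at bay
t.stop

quiet_fired = false
InactivityTimeout.new(0.1) { quiet_fired = true }
sleep 0.4 # no activity, so the watchdog fires
[fired, quiet_fired] #=> [false, true]
```

Compared with restarting a fiber, mutating a deadline is cruder but shows the same idea: resetting a timeout is just moving its trigger point into the future.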
data/docs/readme.md CHANGED
@@ -53,7 +53,7 @@ the hood, Polyphony uses
 In order to use Polyphony you need to have:
 
 - Linux or MacOS (support for Windows will come at a later stage)
-- Ruby (MRI) 3.0 or newer
+- Ruby (MRI) 3.1 or newer
 
 ### Installing the Polyphony Gem
 
@@ -79,6 +79,7 @@ $ gem install polyphony
 
 - {file:/docs/overview.md Overview}
 - {file:/docs/tutorial.md Tutorial}
+- {file:/docs/cancellation.md All About Cancellation: How to Stop Concurrent Operations}
 - {file:/docs/advanced-io.md Advanced I/O with Polyphony}
 - {file:/docs/cheat-sheet.md Cheat-Sheet}
 - {file:/docs/faq.md FAQ}