polyphony 1.0 → 1.0.1
- checksums.yaml +4 -4
- data/.yardopts +1 -0
- data/CHANGELOG.md +12 -3
- data/README.md +1 -0
- data/TODO.md +1 -13
- data/docs/cheat-sheet.md +248 -0
- data/docs/design-principles.md +59 -3
- data/docs/faq.md +15 -32
- data/docs/fiber-scheduling.md +14 -12
- data/docs/overview.md +140 -35
- data/docs/readme.md +4 -3
- data/docs/tutorial.md +19 -149
- data/ext/polyphony/polyphony.c +2 -1
- data/lib/polyphony/extensions/io.rb +171 -161
- data/lib/polyphony/extensions/pipe.rb +3 -5
- data/lib/polyphony/extensions/socket.rb +3 -12
- data/lib/polyphony/version.rb +1 -1
- metadata +3 -2
checksums.yaml
CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: f60a881ccb01cfce59bb3c84420bba574c2b1cd502df987b958a02b649fb4415
+  data.tar.gz: a138759174aba3944ca5e4faedff676d9c9807a50907ee03d18b2cec50f625c5
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 8167c82b2ebd31f4625dac5cbcf70a610b8e45d062b0ef5d4e2044493b6c9ca25a68b67d8b6948ece9ed3a414f2977c4f72175ce0fc63ce58545acc721f09d73
+  data.tar.gz: b91049699af9824aeb2fa051d859f35ff0f1b4ba289b7ea143a347a5ad8a882ec5a4fb42502742b3e4d19bf295f9a1d5bf3a9e15f38a061efdcf50a1741f0bce
data/.yardopts
CHANGED
data/CHANGELOG.md
CHANGED
@@ -1,3 +1,9 @@
+## 1.0.1 2023-05-14
+
+- Add cheat-sheet
+- Improve and bring up to date doc files: overview, tutorial, FAQ
+- Fix image refs in docs (#99) (thanks @paulhenrich)
+
 ## 1.0 2023-05-11
 
 - More work on docs.
@@ -288,7 +294,8 @@
 
 ## 0.53.0 2021-04-23
 
-- Implement `Backend#splice`, `Backend#splice_to_eof`, along with `IO#splice`,
+- Implement `Backend#splice`, `Backend#splice_to_eof`, along with `IO#splice`,
+  `IO#splice_to_eof`
 
 ## 0.52.0 2021-02-28
 
@@ -530,7 +537,8 @@
 
 ## 0.38 2020-04-13
 
-- Fix post-fork segfault if parent process has multiple threads with active
+- Fix post-fork segfault if parent process has multiple threads with active
+  watchers
 
 ## 0.37 2020-04-07
 
@@ -764,7 +772,8 @@
 ## 0.14 2019-05-17
 
 - Use chunked encoding in HTTP 1 response
-- Rewrite `IO#read`, `#readpartial`, `#write` in C (about 30% performance
+- Rewrite `IO#read`, `#readpartial`, `#write` in C (about 30% performance
+  improvement)
 - Add method delegation to `ResourcePool`
 - Optimize PG::Connection#async_exec
 - Fix `Coprocess#cancel!`
data/README.md
CHANGED
data/TODO.md
CHANGED
@@ -37,9 +37,7 @@
 
 -----------------------------------------------------
 
-
-- Adapter for Pry and IRB (Which fixes #5 and #6)
-- allow backend selection at runtime
+- allow backend selection at runtime?
 - Debugging
   - Eat your own dogfood: need a good tool to check what's going on when some
     test fails
@@ -138,20 +136,10 @@
   }
   ```
 
-
-
-
-
-
-
 - Docs
-  - landing page:
-    - links to the interesting stuff
-    - benchmarks
   - explain difference between `sleep` and `suspend`
   - discuss using `snooze` for ensuring responsiveness when executing CPU-bound work
 
-
 ### Some more API work, more docs
 
 - sintra app with database access (postgresql)
data/docs/cheat-sheet.md
ADDED
@@ -0,0 +1,248 @@
+# @title Cheat Sheet
+
+# Cheat Sheet
+
+## Fibers
+
+### Start a fiber
+
+```ruby
+fiber = spin do
+  do_some_work
+end
+```
+
+### Run a loop in a fiber
+
+```ruby
+fiber = spin_loop do
+  iterate_on_something
+end
+```
+
+### Stop a fiber
+
+```ruby
+fiber.stop
+# or:
+fiber.interrupt
+```
+
+### Wait for a fiber to terminate
+
+```ruby
+fiber.await
+# or:
+fiber.join
+```
+
+### Wait for multiple fibers to terminate
+
+```ruby
+Fiber.await(fiber1, fiber2, ...)
+# or:
+Fiber.join(fiber1, fiber2, ...)
+```
+
+### Raise an exception in a fiber
+
+```ruby
+fiber.raise(SomeException)
+# or:
+fiber.raise(SomeException, 'Exception message')
+```
+
+### Restart a fiber
+
+```ruby
+fiber.restart
+# or:
+fiber.reset
+```
+
+## Control Fiber Scheduling
+
+### Yield to other fibers during a lengthy CPU-bound operation
+
+```ruby
+def calculate_some_stuff(n)
+  acc = 0
+  n.times do |i|
+    acc += big_calc(acc, i)
+    snooze if (i % 1000) == 0
+  end
+end
+```
+
+### Suspend fiber
+
+```ruby
+suspend
+```
+
+### Schedule fiber
+
+```ruby
+fiber.schedule
+# or:
+fiber.schedule(some_value)
+```
+
+## Message Passing
+
+### Send a message to a fiber
+
+```ruby
+fiber << message
+# or:
+fiber.send(message)
+```
+### Receive a message
+
+```ruby
+message = receive
+# or, using destructuring assignment
+a, b, c = receive
+```
+
+## Using Timers and Sleeping
+
+### Sleep for a specific duration
+
+```ruby
+sleep 1 # sleeps for 1 second
+```
+
+### Sleep infinitely
+
+```ruby
+sleep
+# or:
+suspend
+```
+
+### Perform an operation repeatedly with a given time interval
+
+```ruby
+every(10) { do_something } # perform an operation once every 10 seconds
+```
+
+### Perform an operation repeatedly with a given frequency
+
+```ruby
+throttled_loop(10) { do_something } # perform an operation 10 times per second
+```
+
+### Timeout, raising an exception
+
+```ruby
+# On timeout, a Polyphony::Cancel exception is raised
+cancel_after(10) { do_something_slow } # timeout after 10 seconds
+
+# Or, using the stock Timeout API, raising a Timeout::Error
+Timeout.timeout(10) { do_something_slow }
+```
+
+### Timeout without raising an exception
+
+```ruby
+# On timeout, result will be set to nil
+result = move_on_after(10) { do_something_slow }
+
+# Or, with a specific value:
+result = move_on_after(10, with_value: 'foo') { do_something_slow }
+```
+
+### Resettable timeout
+
+```ruby
+# works with move_on_after as well
+cancel_after(10) do |timeout|
+  while (data = client.gets)
+    timeout&.reset
+    do_something_with_data(data)
+  end
+end
+```
+
+## Advanced I/O
+
+### Splice data between two file descriptors
+
+```ruby
+# At least one file descriptor should be a pipe
+dest.splice_from(source, 8192) #=> returns number of bytes spliced
+# or:
+IO.splice(src, dest, 8192)
+```
+
+### Splice data to EOF
+
+```ruby
+dest.splice_from(source, -8192) #=> returns number of bytes spliced
+# or:
+IO.splice(src, dest, -8192)
+```
+
+### Tee data between two file descriptors (allowing splicing data twice)
+
+```ruby
+dest2.tee_from(source, 8192)
+dest1.splice_from(source, 8192)
+# or:
+IO.tee(src, dest2)
+IO.splice(src, dest1)
+```
+
+### Splice data between two arbitrary file descriptors, without creating a pipe
+
+```ruby
+# This will automatically create a pipe, and splice data from source to
+# destination until EOF is encountered.
+IO.double_splice(src, dest)
+```
+
+### Create a pipe
+
+A `Polyphony::Pipe` instance encapsulates a pipe with two file descriptors. It
+can be used just like any other IO instance for reading, writing, splicing etc.
+
+```ruby
+pipe = Polyphony::Pipe.new
+# or:
+pipe = Polyphony.pipe
+```
+
+### Compress/uncompress data between two IOs
+
+```ruby
+IO.gzip(src, dest)
+IO.gunzip(src, dest)
+IO.deflate(src, dest)
+IO.inflate(src, dest)
+```
+
+### Accept connections in a loop
+
+```ruby
+server_socket.accept_loop do |conn|
+  handle_connection(conn)
+end
+```
+
+### Read data in a loop until EOF
+
+```ruby
+connection.read_loop do |data|
+  handle_data(data)
+end
+```
+
+### Read data in a loop and feed it to a parser
+
+```ruby
+unpacker = MessagePack::Unpacker.new
+reader = spin do
+  io.feed_loop(unpacker, :feed_each) { |msg| handle_msg(msg) }
+end
+```
data/docs/design-principles.md
CHANGED
@@ -43,12 +43,12 @@ programmers, a perplexing unfamiliar corner right at the heart of Ruby.
 
 ## The History of Polyphony
 
-Polyphony started as an experiment, but over about
+Polyphony started as an experiment, but over about three years of slow, jerky
 evolution turned into something I'm really excited to share with the Ruby
-community. Polyphony's design is both similar and different than the projects
+community. Polyphony's design is both similar to and different than the projects
 mentioned above.
 
-Polyphony today
+Polyphony today looks nothing like the way it began. A careful examination of the
 [CHANGELOG](https://github.com/digital-fabric/polyphony/blob/master/CHANGELOG.md)
 would show how Polyphony explored not only different event reactor designs, but
 also different API designs incorporating various concurrent paradigms such as
@@ -153,3 +153,59 @@ Polyphony's design is based on the following principles:
   }
 end
 ```
+
+- Enhance Ruby's I/O capabilities by providing [additional
+  APIs](./advanced-io.md) for splicing (on Linux) and moving data between file
+  descriptors. Polyphony provides APIs for compressing / uncompressing data on
+  the fly between file descriptors. This in turn enables the creation of
+  arbitrarily-complex data manipulation pipelines that maximize performance and
+  provide automatic backpressure.
+
+## Emergent Patterns for Ruby Apps Using Polyphony
+
+During the development of Polyphony and its usage in a few small- and
+medium-size custom Ruby apps, a few patterns have emerged. We believe embracing
+these patterns will lead to better-written concurrent programs that take
+advantage of all the benefits provided by Polyphony. Here are some of them:
+
+- Infinite loops make sense for fibers: normally, developers are taught to avoid
+  writing infinite loops, and to make sure any loop will be ended at one point.
+  With Polyphony, however, a fiber can run an infinite loop, performing work
+  such as responding to messages received on its mailbox, without the programmer
+  having to worry about it blocking the entire program:
+
+  ```ruby
+  server = spin do
+    loop do
+      client, data = receive
+      result = do_something_with_data(data)
+      client << result
+    end
+  end
+  ```
+
+  In the above example, a `server` of sorts runs an infinite loop, taking
+  messages off its mailbox and handling them, sending the result back to the
+  corresponding client fiber. The programmer does not need to worry about
+  signalling the `server` fiber when it's time to finish its work. A simple call
+  to `server.stop` will stop it, as of course will its parent fiber stopping.
+
+- Message passing between fibers as a means to synchronize and pass data between
+  different parts of the application. The message passing ability integrated
+  into Polyphony allows writing programs where each fiber is responsible for a
+  single task, and receives its work by popping messages off its mailbox. If we
+  reconsider the above example, here's how a client might talk to the `server`
+  fiber:
+
+  ```ruby
+  results = incoming_data.map do |data|
+    server << [Fiber.current, data]
+    receive
+  end
+  ```
+
+  Look at all the things we don't need to do: we don't need to worry about
+  synchronizing access to shared variables between the different parts of the
+  app, and we also don't need to worry about how to handle backpressure - the
+  work will progress as fast as the slowest part in the app, without any
+  requests accumulating unnecessarily.
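The mailbox pattern above can be approximated in plain stdlib Ruby using a `Thread` and `Queue` objects. This is only an analogy (Polyphony's `receive` and `Fiber#<<` operate on fibers, not threads, and incur no thread overhead), and the names `mailbox`, `results` and the doubling "work" are illustrative, but the message flow and the backpressure property are the same:

```ruby
# Plain-Ruby analogy of the mailbox pattern, using Thread + Queue.
# The server pops [reply_queue, data] pairs and pushes results back.
mailbox = Queue.new

server = Thread.new do
  loop do
    client, data = mailbox.pop   # block until a message arrives
    break if data == :stop
    client << data * 2           # send the result back to the "client"
  end
end

results = Queue.new
[1, 2, 3].each { |n| mailbox << [results, n] }
replies = 3.times.map { results.pop } # we wait for each reply in turn
mailbox << [nil, :stop]
server.join
# replies == [2, 4, 6]
```

As in the Polyphony version, no shared mutable state needs locking: all coordination happens through the two queues.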
data/docs/faq.md
CHANGED
@@ -115,7 +115,7 @@ between them, which is much easier to achieve using `Fiber#transfer`. In
 addition, using `Fiber#transfer` allows us to perform blocking operations from
 the main fiber, which is not possible when using `Fiber#resume`.
 
-## Why does Polyphony
+## Why does Polyphony reimplement core APIs such as `IO#read` and `Kernel#sleep`?
 
 Polyphony "patches" some Ruby core and stdlib APIs, providing behavioraly
 compatible fiber-aware implementations. We believe Polyphony has the potential
@@ -123,20 +123,21 @@ to profoundly change the way concurrent Ruby apps are written. Polyphony is
 therefore designed to feel as much as possible like an integral part of the Ruby
 runtime.
 
-
+# Does Polyphony implement the FiberScheduler API?
 
-
-
-
-
-
+`FiberScheduler` is an API that was added to Ruby 3.0. It's not an
+implementation, only an interface. There are several implementations of the
+`FiberScheduler` API, with varying levels of maturity. Polyphony has a very
+opinionated design that does not really parallel the `FiberScheduler` API. It
+might be possible, though, to develop a compatibility layer on top of Polyphony
+that implements the `FiberScheduler` API at some point in the future.
 
 ## Can I use Polyphony in a multithreaded program?
 
-Yes
-however important to note that Polyphony places the
-concurrency model, which is highly beneficial for
-web servers and web apps.
+Yes. Polyphony fully supports multi-threaded programs, and implements per-thread
+fiber-scheduling. It is however important to note that Polyphony places the
+emphasis on a multi-fiber concurrency model, which is highly beneficial for
+I/O-bound workloads, such as web servers and web apps.
 
 Because of Ruby's [global interpreter
 lock](https://en.wikipedia.org/wiki/Global_interpreter_lock), multiple threads
@@ -145,25 +146,6 @@ are such a better fit for I/O bound Ruby programs. Threads should really be used
 when performing synchronous operations that are not fiber-aware, such as running
 an expensive SQLite query, or some other expensive system call.
 
-## How Does Polyphony Fit Into the Ruby's Future Concurrency Plans
-
-To our understanding, two things are currently on the horizon when it comes to
-concurrency in Ruby: [auto-fibers](https://bugs.ruby-lang.org/issues/13618), and
-[guilds](https://olivierlacan.com/posts/concurrency-in-ruby-3-with-guilds/).
-While the auto-fibers proposal introduces an event reactor into Ruby and
-automates waiting for file descriptors to become ready, allowing scheduling
-other fibers meanwhile. It is still too early to see how Polyphony can coexist
-with that. Another proposal is the addition of a fiber-aware [event
-selector](https://bugs.ruby-lang.org/issues/14736). It is our intention to
-eventually contribute to the discussion in this area by proposing a uniform
-fiber scheduler interface that could be implemented in pure Ruby in order to
-support all platforms and multiple Ruby runtimes.
-
-The guilds proposal, on the other hand, promises to be a perfect match for
-Polyphony's fiber-based concurrency model. Guilds will allow true parallelism
-and together with Polyphony will allow taking full advantage of multiple CPU
-cores in a single Ruby process.
-
 ## Can I run Rails using Polyphony?
 
 We haven't yet tested Rails with Polyphony, but most probably not. We do plan to
@@ -178,5 +160,6 @@ Feel free to create issues and contribute pull requests.
 ## Who is behind this project?
 
 I'm Sharon Rosner, an independent software developer living in France. Here's my
-[github profile](https://github.com/noteflakes). You can
-[
+[github profile](https://github.com/noteflakes). You can sponsor my open source
+work [here](https://github.com/sponsors/noteflakes). You can contact me by
+writing to [sharon@noteflakes.com](mailto:sharon@noteflakes.com).
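The FAQ's point about `Fiber#transfer` versus `Fiber#resume` can be seen with core Ruby fibers alone: `transfer` is symmetric, so any fiber, including the main one, can hand control directly to any other. A minimal sketch using only the standard `Fiber` API (the worker and message names are illustrative):

```ruby
require "fiber" # Fiber#transfer (effectively a no-op require on recent Rubies)

main = Fiber.current

# A worker that hands control back to the main fiber with #transfer,
# rather than returning to a caller the way Fiber.yield would.
worker = Fiber.new do
  2.times { |i| main.transfer("step #{i}") }
  main.transfer(:done)
end

msgs = []
msgs << worker.transfer # switch to the worker; returns its first message
msgs << worker.transfer
msgs << worker.transfer
# msgs == ["step 0", "step 1", :done]
```

With `resume`/`Fiber.yield` the worker could only return control to whichever fiber resumed it; with `transfer` the switch target is explicit, which is what lets a scheduler like Polyphony's move control between arbitrary fibers.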
data/docs/fiber-scheduling.md
CHANGED
@@ -66,29 +66,31 @@ Switchpoint will also occur when the currently running fiber has terminated.
 
 ## Scheduler-less scheduling
 
-Polyphony relies on [
-
-
-
-
+Polyphony relies on [io_uring](https://man.archlinux.org/man/io_uring.7) or
+[libev](http://software.schmorp.de/pkg/libev.html) for handling I/O operations
+such as reading or writing to file descriptors, waiting for timers or processes.
+In most event reactor-based libraries and frameworks, such as `nio4r`,
+`EventMachine` or `node.js`, the entire application is run inside of a reactor
+loop, and event callbacks are used to schedule user-supplied code *from inside
+the loop*.
 
 In Polyphony, however, we have chosen a concurrency model that does not use a
 loop to schedule fibers. In fact, in Polyphony there's no outer reactor loop,
 and there's no *scheduler* per se running on a separate execution context.
 
 Instead, Polyphony maintains for each thread a run queue, a list of `:runnable`
-fibers. If no fiber is `:runnable`, Polyphony will run the
-at least one event has occurred. Events
-fibers onto the run queue. Finally,
-the run queue, which will run until
-control is transferred to the next
+fibers. If no fiber is `:runnable`, Polyphony will run the underlying event
+reactor (using io_uring or libev) until at least one event has occurred. Events
+are handled by adding the corresponding fibers onto the run queue. Finally,
+control is transferred to the first fiber on the run queue, which will run until
+it blocks or terminates, at which point control is transferred to the next
+runnable fiber.
 
 This approach has numerous benefits:
 
 - No separate reactor fiber that needs to be resumed on each blocking operation,
   leading to less context switches, and less bookkeeping.
-- Clear separation between the
-  scheduling code.
+- Clear separation between the I/O backend code and the fiber scheduling code.
 - Much less time is spent in event loop callbacks, letting the event loop run
   more efficiently.
 - Fibers are switched outside of the event reactor code, making it easier to