polyphony 0.99.6 → 1.0.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +4 -4
- data/.yardopts +1 -0
- data/CHANGELOG.md +16 -3
- data/README.md +1 -0
- data/TODO.md +1 -13
- data/docs/cheat-sheet.md +248 -0
- data/docs/design-principles.md +59 -3
- data/docs/faq.md +15 -32
- data/docs/fiber-scheduling.md +14 -12
- data/docs/overview.md +140 -35
- data/docs/readme.md +4 -3
- data/docs/tutorial.md +19 -149
- data/ext/polyphony/polyphony.c +2 -1
- data/lib/polyphony/extensions/io.rb +171 -161
- data/lib/polyphony/extensions/pipe.rb +3 -5
- data/lib/polyphony/extensions/socket.rb +3 -12
- data/lib/polyphony/version.rb +1 -1
- metadata +3 -2
data/docs/overview.md
CHANGED
@@ -6,7 +6,7 @@
 
 Polyphony is a new Ruby library for building concurrent applications in Ruby.
 Polyphony provides a comprehensive, structured concurrency model based on Ruby
-fibers and using libev
+fibers and using `io_uring` or `libev` for high-performance I/O operations.
 
 Polyphony is designed to maximize developer happiness. It provides a natural and
 fluent API for writing concurrent Ruby apps while using the stock Ruby APIs such
@@ -15,11 +15,6 @@ multi-fiber environment. In addition, Polyphony offers a solid
 exception-handling experience that builds on and enhances Ruby's
 exception-handling mechanisms.
 
-Polyphony includes a full-blown HTTP server implementation with integrated
-support for HTTP 1 & 2, WebSockets, TLS/SSL termination and more. Polyphony also
-provides fiber-aware adapters for connecting to PostgreSQL and Redis servers.
-More adapters are being actively developed.
-
 ## Taking Polyphony for a Spin
 
 Polyphony is different from other reactor-based solutions for Ruby in that
@@ -60,32 +55,69 @@ come in on `STDIN` and then switches control to the `counter` fiber. When the
 other work. If no other fiber is ready to run, Polyphony simply waits for at
 least one event to occur, and then resumes the corresponding fiber.
 
+## What are Fibers and What are They Good For?
+
+Fibers are some of Ruby's most underappreciated hidden gems. Up until now,
+fibers have been used mostly as the underlying mechanism for implementing
+lazy enumerators and asynchronous generators. Fibers encapsulate, in short,
+an execution context that can be paused and resumed at will.
+
+Fibers are also at the heart of Polyphony's concurrency model. Polyphony employs
+fibers as a way to run multiple tasks at once, each task advancing at its own
+pace, pausing when waiting for an event to occur, and automatically resuming
+when that event has occurred.
+
+Take for example a web app: in order to fulfil an incoming request, multiple
+steps are required: querying the database, fetching cached entries from Redis,
+talking to third-party services such as Twilio or AWS S3. Each step can last
+tens of milliseconds, and blocks the current thread. Such an app is said to be
+I/O-bound, that is, it mostly spends its time waiting for some external
+services.
+
+The traditional approach to handling multiple requests concurrently is to employ
+multiple threads or processes, but this approach has numerous disadvantages:
+
+- Both threads and processes are heavyweight, in both memory consumption and
+  the cost associated with context-switching.
+- Threads introduce hard-to-debug race conditions, and do not offer true
+  parallelism, owing to Ruby's GVL.
+- Processes are more difficult to coordinate, since they do not share memory.
+- Both threads and processes are limited to a few thousand at best on a single
+  machine. Trying to spawn a thread per client essentially limits the scaling
+  capacity of your system.
+
+Polyphony eschews both threads and processes in favor of fibers as the basic
+unit of concurrency. The idea is that any time a blocking I/O operation occurs,
+the current fiber is paused, and another fiber which has been marked as
+*runnable* is resumed. This way, your Ruby code can keep on handling incoming
+HTTP requests as they come with a scaling capacity that is virtually only
+limited by available memory.
+
 ## Fibers vs Threads
 
-
-
-
-context that can be paused and resumed by the application, and has no
-counterpart at the OS level.
+A thread is an OS abstraction that is controlled by the OS, while a fiber
+represents an execution context that can be paused and resumed by the
+application, and has no counterpart at the OS level.
 
 When used for writing concurrent programming, fibers offer multiple benefits
-over threads. They consume
-
-
-given enough RAM. Those advantages make fibers a compelling solution for
-pervasively concurrent applications, even when using a dynamic
-language such as Ruby.
+over threads. They consume less RAM than threads, and switching between them is
+faster than switching between threads. In addition, since fibers require no
+cooperation from the OS, an application can create literally millions of them
+given enough RAM. Those advantages make fibers a compelling solution for
+creating pervasively concurrent applications, even when using a dynamic
+high-level "slow" language such as Ruby.
 
 Ruby programs will only partly benefit from using multiple threads for processing
-work loads (due to the GVL), but fibers are a great match
-
-
-
-
+work loads (due to the GVL), but fibers are a great match for programs that are
+I/O bound (that means spending most of their time talking to the outside world).
+A fiber-based web-server, for example, can juggle tens of thousands of active
+concurrent connections, each advancing at its own pace, consuming minimal CPU
+time.
 
-
-
-cross-thread communication using [fiber
+That said, Polyphony fully supports multithreading, with each thread having its
+own fiber run queue and its own `io_uring` or `libev`-based I/O backend.
+Polyphony even enables cross-thread communication using [fiber
+messaging](#message-passing).
 
 ## Fibers vs Callbacks
 
@@ -111,6 +143,89 @@ in the business logic, as local variables. And finally, the sequential
 programming style makes it much easier to debug your code, since stack traces
 contain the entire history of execution from the app's inception.
 
+## Switchpoints and the Fiber-Switching Dance
+
+In order to make pausing and resuming fibers completely automatic and pain-free,
+we need to know when an operation is going to block, and when it can be
+completed without blocking. Operations that might block execution are considered
+*switchpoints*. A switchpoint is a point in time at which control might switch
+from the currently running fiber to another fiber that is in a runnable state.
+Switchpoints may occur in any of the following cases:
+
+- On a call to any blocking operation, such as `#sleep`, `Fiber#await`,
+  `Thread#join` etc.
+- On fiber termination
+- On a call to `#suspend`
+- On a call to `#snooze`
+- On a call to `Thread#switch_fiber`
+
+At any switchpoint, the following takes place:
+
+- Check if any fiber is runnable, that is, ready to continue processing.
+- If no fiber is runnable, watch for events (see below) and wait for at least
+  one fiber to become runnable.
+- Pause the current fiber and switch to the first runnable fiber, which resumes
+  at the point it was last paused.
+
+The automatic switching between fibers is complemented by employing
+[libev](http://software.schmorp.de/pkg/libev.html), a multi-platform high
+performance event reactor that allows listening to I/O, timer and other events.
+At every switchpoint where no fibers are runnable, the libev event loop is run
+until events occur, which in turn cause the relevant fibers to become runnable.
+
+Let's examine a simple example:
+
+```ruby
+require 'polyphony'
+
+spin do
+  puts "Going to sleep..."
+  sleep 1
+  puts "Woke up"
+end
+
+suspend
+puts "We're done"
+```
+
+The above program does nothing exceptional, it just sleeps for 1 second and
+prints a bunch of messages. But it is enough to demonstrate how concurrency
+works in Polyphony. Here's a flow chart of the transfer of control:
+
+<img src="https://github.com/digital-fabric/polyphony/raw/master/docs/assets/sleeping-fiber.svg">
+
+Here's the actual sequence of execution (in pseudo-code):
+
+```ruby
+# (main fiber)
+sleeper = spin { ... }          # The main fiber spins up a new fiber marked as runnable
+suspend                         # The main fiber suspends, waiting for all other work to finish
+  Thread.current.switch_fiber   # Polyphony looks for other runnable fibers
+
+# (sleeper fiber)
+puts "Going to sleep..."        # The sleeper fiber starts running
+sleep 1                         # The sleeper fiber goes to sleep
+  Gyro::Timer.new(1, 0).await   # A timer event watcher is setup and yields
+  Thread.current.switch_fiber   # Polyphony looks for other runnable fibers
+  Thread.current.backend.poll   # With no work left, the event loop is run
+    fiber.schedule              # The timer event fires, scheduling the sleeper fiber
+  # <= The sleep method returns
+puts "Woke up"
+Thread.current.switch_fiber     # With the fiber done, Polyphony looks for work
+
+# with no more work, control is returned to the main fiber
+# (main fiber)
+# <=
+# With no more work left, the main fiber is resumed and the suspend call returns
+puts "We're done"
+```
+
+What we have done in fact is we multiplexed two different contexts of execution
+(fibers) onto a single thread, each fiber continuing at its own pace and
+yielding control when waiting for something to happen. This context-switching
+dance, performed automatically by Polyphony behind the scenes, enables building
+highly-concurrent Ruby apps, with minimal impact on performance.
+
 ## Structured Concurrency
 
 Polyphony's tagline is "fine-grained concurrency for Ruby", because it makes it
@@ -445,13 +560,3 @@ connection loops waiting for data to be read from the socket. Once the data
 arrives, it is fed to the HTTP parser. The HTTP parser will call the
 `on_headers_complete` callback, which simply adds a request to the requests
 queue. The code then continues to handle any requests still in the queue.
-
-## Future Directions
-
-Polyphony is a young project, and will still need a lot of development effort to
-reach version 1.0. Here are some of the exciting directions we're working on.
-
-- Support for more core and stdlib APIs
-- More adapters for gems with C-extensions, such as `mysql`, `sqlite3` etc
-- Use `io_uring` backend as alternative to the libev backend
-- More concurrency constructs for building highly concurrent applications
data/docs/readme.md
CHANGED
@@ -77,9 +77,10 @@ $ gem install polyphony
 
 ## Usage
 
-- {file:/docs/overview.md
-- {file:/docs/tutorial.md
-- {file:/docs/
+- {file:/docs/overview.md Overview}
+- {file:/docs/tutorial.md Tutorial}
+- {file:/docs/cheat-sheet.md Cheat-Sheet}
+- {file:/docs/faq.md FAQ}
 
 ## Technical Discussion
 
data/docs/tutorial.md
CHANGED
@@ -2,137 +2,15 @@
 
 # Tutorial
 
-
-
-
-
-
-## What are Fibers and What are They Good For?
-
-Fibers are some of Ruby's most underappreciated hidden gems. Up until now,
-fibers have been used mostly as the underlying mechanism for implementing
-lazy enumerators and asynchronous generators. Fibers encapsulate, in short,
-an execution context that can be paused and resumed at will.
-
-Fibers are also at the heart of Polyphony's concurrency model. Polyphony employs
-fibers as a way to run multiple tasks at once, each task advancing at its own
-pace, pausing when waiting for an event to occur, and automatically resuming
-when that event has occurred.
-
-Take for example a web app: in order to fulfil an incoming request, multiple
-steps are required: querying the database, fetching cached entries from Redis,
-talking to third-party services such as Twilio or AWS S3. Each step can last
-tens of milliseconds, and blocks the current thread. Such an app is said to be
-I/O-bound, that is, it mostly spends its time waiting for some external
-services.
-
-The traditional approach to handling multiple requests concurrently is to employ
-multiple threads or processes, but this approach has numerous disavantages:
-
-- Both threads and processes are heavyweight, in both memory consmption and
-  the cost associated with context-switching.
-- Threads introduce hard-to-debug race conditions, and do not offer true
-  parallelism, owing to Ruby's GVL.
-- Processes are more difficult to coordinate, since they do not share memory.
-- Both threads and processes are limited to a few thousand at best on a single
-  machine. Trying to spawn a thread per client essentially limits the scaling
-  capacity of your system.
-
-Polyphony eschews both threads and processes in favor of fibers as the basic
-unit of concurrency. The idea is that any time a blocking I/O operation occurs,
-the current fiber is paused, and another fiber which has been marked as
-*runnable* is resumed. This way, your Ruby code can keep on handling incoming
-HTTP requests as they come with a scaling capacity that is virtually only
-limited by available memory.
-
-## Switchpoints and the Fiber-Switching Dance
-
-In order to make pausing and resuming fibers completely automatic and painfree,
-we need to know when an operation is going to block, and when it can be
-completed without blocking. Operations that might block execution are considered
-*switchpoints*. A switchpoint is a point in time at which control might switch
-from the currently running fiber to another fiber that is in a runnable state.
-Switchpoints may occur in any of the following cases:
-
-- On a call to any blocking operation, such as `#sleep`, `Fiber#await`,
-  `Thread#join` etc.
-- On fiber termination
-- On a call to `#suspend`
-- On a call to `#snooze`
-- On a call to `Thread#switch_fiber`
-
-At any switchpoint, the following takes place:
-
-- Check if any fiber is runnable, that is, ready to continue processing.
-- If no fiber is runnable, watch for events (see below) and wait for at least
-  one fiber to become runnable.
-- Pause the current fiber and switch to the first runnable fiber, which resumes
-  at the point it was last paused.
-
-The automatic switching between fibers is complemented by employing
-[libev](http://software.schmorp.de/pkg/libev.html), a multi-platform high
-performance event reactor that allows listening to I/O, timer and other events.
-At every switchpoint where no fibers are runnable, the libev evet loop is run
-until events occur, which in turn cause the relevant fibers to become runnable.
-
-Let's examine a simple example:
-
-```ruby
-require 'polyphony'
-
-spin do
-  puts "Going to sleep..."
-  sleep 1
-  puts "Woke up"
-end
-
-suspend
-puts "We're done"
-```
-
-The above program does nothing exceptional, it just sleeps for 1 second and
-prints a bunch of messages. But it is enough to demonstrate how concurrency
-works in Polyphony. Here's a flow chart of the transfer of control:
-
-<img src="https://github.com/digital-fabric/polyphony/raw/master/docs/assets/sleeping-fiber.png">
-
-Here's the actual sequence of execution (in pseudo-code)
-
-```ruby
-# (main fiber)
-sleeper = spin { ... }          # The main fiber spins up a new fiber marked as runnable
-suspend                         # The main fiber suspends, waiting for all other work to finish
-  Thread.current.switch_fiber   # Polyphony looks for other runnable fibers
-
-# (sleeper fiber)
-puts "Going to sleep..."        # The sleeper fiber starts running
-sleep 1                         # The sleeper fiber goes to sleep
-  Gyro::Timer.new(1, 0).await   # A timer event watcher is setup and yields
-  Thread.current.switch_fiber   # Polyphony looks for other runnable fibers
-  Thread.current.backend.poll   # With no work left, the event loop is ran
-    fiber.schedule              # The timer event fires, scheduling the sleeper fiber
-  # <= The sleep method returns
-puts "Woke up"
-Thread.current.switch_fiber     # With the fiber done, Polyphony looks for work
-
-# with no more work, control is returned to the main fiber
-# (main fiber)
-# <=
-# With no more work left, the main fiber is resumed and the suspend call returns
-puts "We're done"
-```
-
-What we have done in fact is we multiplexed two different contexts of execution
-(fibers) onto a single thread, each fiber continuing at its own pace and
-yielding control when waiting for something to happen. This context-switching
-dance, performed automatically by Polyphony behind the scenes, enables building
-highly-concurrent Ruby apps, with minimal impact on performance.
+In this tutorial we'll show how to build a simple fiber-based server using
+Polyphony, how to make it concurrent and how to make it resilient to errors.
+We'll assume you have read the [overview](./overview.md). If you haven't yet,
+please go and read it now before continuing with this tutorial.
 
 ## Building a Simple Echo Server with Polyphony
 
-
-
-from the client.
+Here's what we want to build: a concurrent echo server. Our server will accept
+TCP connections and send back whatever it receives from the client.
 
 We'll start by opening a server socket:
 
@@ -189,7 +67,7 @@ innocent call to `#spin`.
 
 Here's a flow chart showing the transfer of control between the different fibers:
 
-<img src="https://github.com/digital-fabric/polyphony/raw/master/docs/assets/echo-fibers.
+<img src="https://github.com/digital-fabric/polyphony/raw/master/docs/assets/echo-fibers.svg">
 
 Let's consider the advantage of the Polyphony concurrency model:
 
@@ -205,14 +83,15 @@ Let's consider the advantage of the Polyphony concurrency model:
 
 Now that we have a working concurrent echo server, let's add some bells and
 whistles. First of all, let's get rid of clients that are not active. We'll do
-this by
+this by wrapping our read loop in a call to `cancel_after`:
 
 ```ruby
 def handle_client(client)
-
-
-
-
+  cancel_after(10) do |timeout|
+    while (data = client.gets)
+      timeout.restart
+      client << data
+    end
   end
 rescue Polyphony::Cancel
   client.puts 'Closing connection due to inactivity.'
@@ -228,14 +107,15 @@ then cancel its parent. The call to `client.gets` blocks until new data is
 available. If no new data is available, the `timeout` fiber will finish
 sleeping, and then cancel the client handling fiber by raising a
 `Polyphony::Cancel` exception. However, if new data is received, the `timeout`
-fiber is restarted, causing to begin sleeping again for 10 seconds. If the
+fiber is restarted, causing it to begin sleeping again for 10 seconds. If the
 client has closed the connection, or some other exception occurs, the `timeout`
-fiber is automatically stopped as it is a child of the
+fiber is automatically stopped as it is a child of the fiber running the
+`handle_client` method.
 
 The habit of always cleaning up using `ensure` in the face of potential
 interruptions is a fundamental element of using Polyphony correctly. This makes
 your code robust, even in a highly chaotic concurrent execution environment
-where
+where fibers can be started, restarted and interrupted at any time.
 
 ## Implementing graceful shutdown
 
@@ -261,8 +141,7 @@ def client_loop(client, timeout = nil)
 end
 
 def handle_client(client)
-
-  client_loop(client, timeout)
+  cancel_after(10) { |timeout| client_loop(client, timeout) }
 rescue Polyphony::Cancel
   client.puts 'Closing connection due to inactivity.'
 rescue Polyphony::Terminate
@@ -304,8 +183,7 @@ def client_loop(client, timeout = nil)
 end
 
 def handle_client(client)
-
-  client_loop(client, timeout)
+  cancel_after(10) { |timeout| client_loop(client, timeout) }
 rescue Polyphony::Cancel
   client.puts 'Closing connection due to inactivity.'
 rescue Polyphony::Terminate
@@ -324,11 +202,3 @@ while (client = server.accept)
   spin { handle_client(client) }
 end
 ```
-
-## What Else Can I Do with Polyphony?
-
-Polyphony currently provides support for any library that uses Ruby's stock
-`socket` and `openssl` classes. Polyphony also includes adapters for the `pg`,
-`redis` and `irb` gems. It also includes an implementation of an integrated HTTP
-1 / HTTP 2 / websockets web server with support for TLS termination, ALPN
-protocol selection and preliminary rack support.
data/ext/polyphony/polyphony.c
CHANGED
@@ -260,7 +260,8 @@ VALUE Polyphony_backend_sleep(VALUE self, VALUE duration) {
 }
 
 /* Splices data from the given source to the given destination, returning the
- * number of bytes spliced.
+ * number of bytes spliced. If maxlen is negative, splices repeatedly
+ * using absolute value of maxlen until EOF is encountered.
  *
  * @param src [IO] source
  * @param dest [IO] destination