polyphony 0.41 → 0.42
- checksums.yaml +4 -4
- data/CHANGELOG.md +9 -0
- data/Gemfile.lock +5 -5
- data/Rakefile +1 -1
- data/TODO.md +19 -9
- data/docs/_config.yml +56 -7
- data/docs/_sass/custom/custom.scss +0 -30
- data/docs/_sass/overrides.scss +0 -46
- data/docs/{user-guide → _user-guide}/all-about-timers.md +0 -0
- data/docs/_user-guide/index.md +9 -0
- data/docs/{user-guide → _user-guide}/web-server.md +0 -0
- data/docs/api-reference/index.md +9 -0
- data/docs/api-reference/polyphony-process.md +1 -1
- data/docs/api-reference/thread.md +1 -1
- data/docs/faq.md +21 -11
- data/docs/getting-started/index.md +10 -0
- data/docs/getting-started/installing.md +2 -6
- data/docs/getting-started/overview.md +507 -0
- data/docs/getting-started/tutorial.md +27 -19
- data/docs/index.md +1 -1
- data/docs/main-concepts/concurrency.md +0 -5
- data/docs/main-concepts/design-principles.md +2 -12
- data/docs/main-concepts/index.md +9 -0
- data/examples/core/01-spinning-up-fibers.rb +1 -0
- data/examples/core/03-interrupting.rb +4 -1
- data/examples/core/04-handling-signals.rb +19 -0
- data/examples/performance/thread-vs-fiber/polyphony_server.rb +6 -18
- data/examples/performance/thread-vs-fiber/polyphony_server_read_loop.rb +58 -0
- data/examples/performance/xx-array.rb +11 -0
- data/examples/performance/xx-fiber-switch.rb +9 -0
- data/examples/performance/xx-snooze.rb +15 -0
- data/ext/polyphony/fiber.c +0 -3
- data/ext/polyphony/libev_agent.c +234 -19
- data/ext/polyphony/libev_queue.c +3 -1
- data/ext/polyphony/polyphony.c +0 -10
- data/ext/polyphony/polyphony.h +6 -6
- data/ext/polyphony/thread.c +8 -36
- data/lib/polyphony.rb +5 -2
- data/lib/polyphony/core/channel.rb +2 -2
- data/lib/polyphony/core/global_api.rb +2 -2
- data/lib/polyphony/core/resource_pool.rb +2 -2
- data/lib/polyphony/extensions/core.rb +2 -3
- data/lib/polyphony/version.rb +1 -1
- data/polyphony.gemspec +1 -1
- data/test/test_agent.rb +49 -2
- metadata +16 -20
- data/docs/_includes/head.html +0 -40
- data/docs/_includes/nav.html +0 -51
- data/docs/_includes/prevnext.html +0 -17
- data/docs/_layouts/default.html +0 -106
- data/docs/api-reference.md +0 -11
- data/docs/api-reference/gyro-async.md +0 -57
- data/docs/api-reference/gyro-child.md +0 -29
- data/docs/api-reference/gyro-queue.md +0 -44
- data/docs/api-reference/gyro-timer.md +0 -51
- data/docs/api-reference/gyro.md +0 -25
- data/docs/getting-started.md +0 -10
- data/docs/main-concepts.md +0 -10
- data/docs/user-guide.md +0 -10
- data/examples/core/forever_sleep.rb +0 -19
@@ -0,0 +1,507 @@
---
layout: page
title: Overview
parent: Getting Started
nav_order: 2
---

# Polyphony - an Overview
{: .no_toc }

## Table of contents
{: .no_toc .text-delta }

- TOC
{:toc}

---

## Introduction

Polyphony is a new library for building concurrent applications in Ruby.
Polyphony provides a comprehensive, structured concurrency model based on Ruby
fibers, using libev as a high-performance event reactor.

Polyphony is designed to maximize developer happiness. It provides a natural and
fluent API for writing concurrent Ruby apps while using the stock Ruby APIs such
as `IO`, `Process`, `Socket`, `OpenSSL` and `Net::HTTP` in a concurrent
multi-fiber environment. In addition, Polyphony offers a solid
exception-handling experience that builds on and enhances Ruby's
exception-handling mechanisms.

Polyphony includes a full-blown HTTP server implementation with integrated
support for HTTP/1, HTTP/2, WebSockets, TLS/SSL termination and more. Polyphony
also provides fiber-aware adapters for connecting to PostgreSQL and Redis
servers. More adapters are under active development.

### Features
{: .no_toc }

- Co-operative scheduling of concurrent tasks using Ruby fibers.
- High-performance event reactor for handling I/O, timer, and other events.
- Natural, sequential programming style that makes it easy to reason about
  concurrent code.
- Abstractions and constructs for controlling the execution of concurrent code:
  supervisors, cancel scopes, throttling, resource pools etc.
- Code can use Ruby's native networking classes and libraries, with growing
  support for third-party gems such as `pg` and `redis`.
- Use stdlib classes such as `TCPServer`, `TCPSocket` and `Net::HTTP`.
- Impressive performance and scalability characteristics, in terms of both
  throughput and memory consumption (see below).

## Taking Polyphony for a Spin

Polyphony is different from other reactor-based solutions for Ruby in that
there's no need to use special classes for building your app, and there's no
need to set up reactor loops. Everything works the same, except that you can
perform multiple operations at the same time by creating fibers. To start a new
concurrent operation, you simply use `Kernel#spin`, which spins up a new fiber
and schedules it for running:

```ruby
require 'polyphony'

# Kernel#spin returns a Fiber instance
counter = spin do
  count = 1
  loop do
    sleep 1
    puts "count: #{count}"
    count += 1
  end
end

puts "Press return to stop this program"
gets
```

The above program spins up a fiber named `counter`, which counts to infinity.
Meanwhile, the *main* fiber waits for input from the user, and then exits.
Notice how we haven't introduced any custom classes, and how we used stock APIs
such as `Kernel#sleep` and `Kernel#gets`. The only hint that this program is
concurrent is the call to `Kernel#spin`.

Behind the scenes, Polyphony takes care of automatically switching between
fibers, letting each fiber advance at its own pace according to its duties. For
example, when the main fiber calls `gets`, Polyphony starts waiting for data to
come in on `STDIN` and then switches control to the `counter` fiber. When the
`counter` fiber calls `sleep 1`, Polyphony starts a timer, and goes looking for
other work. If no other fiber is ready to run, Polyphony simply waits for at
least one event to occur, and then resumes the corresponding fiber.

## Fibers vs Threads

Most Ruby developers are familiar with threads, but fibers remain a
little-explored and little-understood concept in the Ruby language. While a
thread is an abstraction controlled by the OS, a fiber represents an execution
context that can be paused and resumed by the application, and has no
counterpart at the OS level.

When used for concurrent programming, fibers offer multiple benefits over
threads. They consume much less RAM than threads, and switching between them is
faster than switching between threads. In addition, since fibers require no
cooperation from the OS, an application can create literally millions of them
given enough RAM. These advantages make fibers a compelling solution for
creating pervasively concurrent applications, even when using a dynamic,
high-level, "slow" language such as Ruby.

Because of the GVL, Ruby programs benefit only partly from using multiple
threads to process workloads, but fibers are a great match for programs that
are mostly I/O-bound (that is, programs that spend most of their time talking
to the outside world). A fiber-based web server, for example, can juggle
thousands of active concurrent connections, each advancing at its own pace,
while consuming only a single CPU core.

Nevertheless, Polyphony fully supports multithreading, with each thread having
its own fiber run queue and its own libev event loop. In addition, Polyphony
enables cross-thread communication using fiber message passing (see below).

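To make the distinction concrete, here is a minimal plain-Ruby sketch using only the core `Fiber` class (no Polyphony) of an execution context being paused and resumed by the application itself, not by the OS:

```ruby
# A core Ruby fiber is paused with Fiber.yield and resumed with #resume.
# The application, not the OS, decides exactly when each switch happens.
log = []

fiber = Fiber.new do
  log << "fiber: step 1"
  Fiber.yield            # pause, handing control back to the resumer
  log << "fiber: step 2"
end

fiber.resume             # runs the fiber up to the first Fiber.yield
log << "main: between resumes"
fiber.resume             # resumes the fiber exactly where it left off
puts log
#=> fiber: step 1
#=> main: between resumes
#=> fiber: step 2
```

Polyphony builds on this core mechanism, adding a scheduler so that fibers are switched automatically around blocking operations instead of manually via `#resume`.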
## Fibers vs Callbacks

Programming environments such as Node.js and libraries such as EventMachine
have popularized the use of event loops for achieving concurrency. The
application is wrapped in a loop that polls for events and fires
application-provided callbacks that act on those events - for example,
receiving data on a socket connection, or waiting for a timer to elapse.

While these callback-based solutions are established technologies and are
frequently used to build concurrent apps, they have some major drawbacks.
Firstly, they force the developer to split the business logic into small
pieces, each run inside a callback. Secondly, they complicate state management,
because state associated with the business logic cannot be kept *with* the
business logic; it has to be stored elsewhere. Finally, callback-based
concurrency complicates debugging, since a stack trace at any given point in
time will always originate in the event loop, and will not contain any
information on the chain of events leading up to the present moment.

Fibers, in contrast, let the developer express the business logic in a
sequential, easy-to-read manner: do this, then that. State can be stored right
in the business logic, as local variables. And finally, the sequential
programming style makes it much easier to debug your code, since stack traces
contain the entire history of execution from the app's inception.

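The difference in style can be illustrated with a contrived plain-Ruby example (the `add_async` helper is made up for illustration; it stands in for any callback-taking API):

```ruby
# Callback style: logic is split across callbacks, and intermediate
# state must be smuggled out of the nested blocks.
def add_async(a, b, &callback)
  callback.call(a + b)
end

result = nil
add_async(1, 2) do |sum|
  add_async(sum, 3) do |total|
    result = total       # state has to be stored outside the callbacks
  end
end
puts result  #=> 6

# Sequential style: the same logic reads top to bottom, and
# intermediate state lives in plain local variables.
sum = 1 + 2
total = sum + 3
puts total   #=> 6
```

With real asynchronous I/O the callback version grows into deeply nested chains, while the sequential version keeps exactly this shape.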
## Structured Concurrency

Polyphony's tagline is "fine-grained concurrency for Ruby", because it makes it
really easy to spin up literally thousands of fibers performing concurrent
work. But running such a large number of concurrent operations also means you
need tools for managing all that concurrency.

For that purpose, Polyphony follows a paradigm called *structured concurrency*.
The basic idea behind structured concurrency is that fibers are organised in a
hierarchy starting from the main fiber. A fiber spun by any given fiber is
considered a child of that fiber, and its lifetime is guaranteed to be limited
to that of its parent fiber. That is why, in the example above, the `counter`
fiber is automatically stopped when the main fiber stops running.

The same goes for exception handling. Whenever an error occurs, if no suitable
`rescue` block has been defined for the fiber in which the exception was
raised, the exception will bubble up through the fiber's parent, grandparent
etc., until it is handled, all the way up to the main fiber. If the exception
is not handled, the program will exit and dump the exception information, just
like a normal Ruby program.

## Controlling Fiber Execution

Polyphony offers a wide range of APIs for controlling fibers that make it easy
to prevent your program from turning into an uncontrollable concurrent mess.
Polyphony introduces various APIs for stopping fibers, scheduling fibers,
waiting for fibers to terminate, and even restarting them:

```ruby
f = spin do
  puts "going to sleep"
  sleep 1
  puts "done sleeping"
ensure
  puts "stopped"
end

sleep 0.5
f.stop
f.restart
f.await
```

The output of the above program will be:

```
going to sleep
stopped
going to sleep
done sleeping
stopped
```

The `Fiber#await` method waits for a fiber to terminate, and returns the
fiber's return value:

```ruby
a = spin { sleep 1; :foo }
b = spin { a.await }
b.await #=> :foo
```

In the program above, the main fiber waits for fiber `b` to terminate, and `b`
waits for fiber `a` to terminate. The return value of `a.await` is `:foo`, and
hence the return value of `b.await` is also `:foo`.

If we need to wait for multiple fibers, we can use `Fiber::await` or
`Fiber::select`:

```ruby
# get the results of a bunch of fibers
fibers = 3.times.map { |i| spin { i * 10 } }
Fiber.await(*fibers) #=> [0, 10, 20]

# get the fastest reply from a bunch of URLs
fibers = urls.map { |u| spin { [u, HTTParty.get(u)] } }
# Fiber.select returns an array containing the fiber and its result
Fiber.select(*fibers) #=> [fiber, [url, result]]
```

Finally, fibers can be supervised, in a manner similar to Erlang supervision
trees. The `Kernel#supervise` method will wait for all child fibers to
terminate before returning, and can optionally restart any child fiber that
has terminated normally or with an exception:

```ruby
fiber1 = spin { sleep 1; raise 'foo' }
fiber2 = spin { sleep 1 }

supervise # blocks and then propagates the error raised in fiber1
```

## Message Passing

Polyphony also provides a comprehensive solution for using fibers as actors, in
a similar fashion to Erlang processes. Fibers can exchange messages with each
other, allowing each part of a concurrent system to function in a completely
autonomous manner. For example, a chat application can encapsulate each chat
room in a completely self-contained fiber:

```ruby
def chat_room
  subscribers = []

  loop do
    # receive waits for a message to come in
    case receive
    # Using Ruby 2.7's pattern matching
    in [:subscribe, subscriber]
      subscribers << subscriber
    in [:unsubscribe, subscriber]
      subscribers.delete subscriber
    in [:add_message, name, message]
      subscribers.each { |s| s.call(name, message) }
    end
  end
end

CHAT_ROOMS = Hash.new do |h, n|
  h[n] = spin { chat_room }
end
```

Notice how the state (the `subscribers` variable) stays local, and how the
logic of the chat room is expressed in a way that is both compact and easy to
extend. Also notice how the chat room is written as an infinite loop. This is a
common pattern in Polyphony, since fibers can always be stopped at any moment.

The code for handling a chat room user might be expressed as follows:

```ruby
def chat_user_handler(user_name, connection)
  room = nil
  message_subscriber = proc do |name, message|
    connection.puts "#{name}: #{message}"
  end
  while command = connection.gets
    case command.chomp # strip the trailing newline so "disconnect" matches
    when /^connect (.+)/
      room&.send [:unsubscribe, message_subscriber]
      room = CHAT_ROOMS[$1]
      room.send [:subscribe, message_subscriber]
    when "disconnect"
      room&.send [:unsubscribe, message_subscriber]
      room = nil
    when /^send (.+)/
      room&.send [:add_message, user_name, $1]
    end
  end
end
```

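Outside Polyphony, the same actor pattern can be sketched in plain Ruby with a `Queue` serving as the mailbox. This is only a conceptual, thread-based analogue of `receive` (Polyphony's message passing is fiber-based and needs no threads); the `:stop` message is added here just so the example terminates:

```ruby
# A minimal actor: a thread that owns its state (subscribers) and
# processes messages from its mailbox one at a time.
mailbox  = Queue.new
received = []

actor = Thread.new do
  subscribers = []
  loop do
    case mailbox.pop # blocks until a message arrives, like receive
    in [:subscribe, subscriber]
      subscribers << subscriber
    in [:add_message, name, message]
      subscribers.each { |s| s.call(name, message) }
    in [:stop]
      break
    end
  end
end

mailbox << [:subscribe, proc { |name, msg| received << "#{name}: #{msg}" }]
mailbox << [:add_message, 'alice', 'hello']
mailbox << [:stop]
actor.join

puts received
#=> alice: hello
```

Because the actor alone touches `subscribers`, no locking is needed; in Polyphony the same property holds for a fiber's local state.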
## Other Concurrency Constructs

Polyphony includes various constructs that complement fibers. Resource pools
provide a generic solution for controlling concurrent access to limited
resources, such as database connections. A resource pool ensures that only one
fiber has access to a given resource at any time:

```ruby
DB_CONNECTIONS = Polyphony::ResourcePool.new(limit: 5) do
  PG.connect(DB_OPTS)
end

def query_records(sql)
  DB_CONNECTIONS.acquire do |db|
    db.query(sql).to_a
  end
end
```

Throttlers are useful for rate limiting, for example to prevent your system
from being blacklisted for sending too many emails. A throttler enforces its
rate limit across all fibers:

```ruby
MAX_EMAIL_RATE = 10 # max. 10 emails per second
EMAIL_THROTTLER = Polyphony::Throttler.new(MAX_EMAIL_RATE)

def send_email(addr, content)
  EMAIL_THROTTLER.process do
    ...
  end
end
```

In addition, various global methods (defined on the `Kernel` module) provide
common functionality, such as working with timeouts:

```ruby
# perform a delayed action (in a separate fiber)
after(10) { notify_user }

# perform a recurring action with time drift correction
every(1) { p Time.now }

# perform an operation with a timeout, without raising an exception
move_on_after(10) { perform_query }

# perform an operation with a timeout, raising a Polyphony::Cancel exception
cancel_after(10) { perform_query }
```

## The System Agent

In order to implement automatic fiber switching when performing blocking
operations, Polyphony introduces a concept called the *system agent*. The
system agent is an object with a uniform interface that performs all blocking
operations.

While a standard event-loop-based solution would implement a blocking call
separately from the fiber scheduling, the system agent integrates the two to
create a blocking call that already knows how to switch and schedule fibers.
For example, in Polyphony all APIs having to do with reading from files or
sockets end up calling `Thread.current.agent.read`, which does all the work.

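The principle can be illustrated with a conceptual plain-Ruby sketch built on core `Fiber` and `IO.select`. The names `agent_read` and `run` are made up for this sketch, and this is not Polyphony's actual implementation (which is written in C against libev); it only shows how a "blocking" read can suspend the calling fiber and let a scheduler resume it once the IO is readable:

```ruby
# Conceptual sketch only: a "blocking" read that suspends the calling
# fiber, plus a scheduler that resumes fibers when their IO is readable.
def agent_read(io)
  Fiber.yield io          # suspend, telling the scheduler which IO we wait on
  io.read_nonblock(4096)  # resumed: the IO is now readable
end

def run(tasks)
  waiting = {} # io => fiber
  tasks.each do |t|
    io = t.resume         # run each task until it waits on an IO (or finishes)
    waiting[io] = t if io.is_a?(IO)
  end
  until waiting.empty?
    ready, = IO.select(waiting.keys)
    ready.each do |io|
      fiber = waiting.delete(io)
      nxt = fiber.resume  # resume; the fiber may go on to wait on another IO
      waiting[nxt] = fiber if fiber.alive? && nxt.is_a?(IO)
    end
  end
end

r1, w1 = IO.pipe
r2, w2 = IO.pipe
results = []
tasks = [
  Fiber.new { results << agent_read(r1) },
  Fiber.new { results << agent_read(r2) }
]
w1.write('one'); w1.close
w2.write('two'); w2.close
run(tasks)
puts results.sort
#=> one
#=> two
```

The system agent plays the role of both `agent_read` and `run` here, with libev standing in for `IO.select`.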
This design offers some major advantages over other designs. It minimizes
memory allocations of both Ruby objects and C structures. For example, instead
of having to allocate libev watchers on the heap and then pass them around,
they are allocated on the stack, which saves both memory and CPU cycles.

In addition, the agent interface includes two methods that allow maximizing
server performance by accepting connections and reading from sockets in a
tight loop. Here's a naive implementation of an HTTP/1 server:

```ruby
require 'http/parser'
require 'polyphony'

def handle_client(socket)
  parser = Http::Parser.new
  reqs = []
  parser.on_message_complete = proc { |env| reqs << { foo: :bar } }

  Thread.current.agent.read_loop(socket) do |data|
    parser << data
    reqs.each { |r| reply(socket, r) }
    reqs.clear
  end
end

def reply(socket, req)
  data = "Hello world!\n"
  headers = "Content-Type: text/plain\r\nContent-Length: #{data.bytesize}\r\n"
  socket.write "HTTP/1.1 200 OK\r\n#{headers}\r\n#{data}"
end

server = TCPServer.open('0.0.0.0', 1234)
puts "listening on port 1234"

Thread.current.agent.accept_loop(server) do |client|
  spin { handle_client(client) }
end
```

The `#read_loop` and `#accept_loop` agent methods implement tight loops that
provide a significant boost to performance (up to 30% better throughput).

Currently, Polyphony includes a single system agent based on
[libev](http://pod.tst.eu/http://cvs.schmorp.de/libev/ev.pod). In the future,
Polyphony will include other platform-specific system agents, such as a
Windows agent using
[IOCP](https://docs.microsoft.com/en-us/windows/win32/fileio/i-o-completion-ports),
or an [io_uring](https://unixism.net/loti/what_is_io_uring.html) agent, which
might be a game-changer for writing highly concurrent Ruby-based web apps.

## Writing Web Apps with Polyphony

Polyphony includes a full-featured web server implementation that supports
HTTP/1, HTTP/2, and WebSockets, can perform SSL termination (with automatic
ALPN protocol selection), and has preliminary support for Rack (the de-facto
standard Ruby web app interface).

The Polyphony HTTP server has a unique design that calls the application's
request handler after all request headers have been received. This allows the
application to better deal with slow-client attacks and big file uploads, and
to minimize costly memory allocation and garbage collection.

Preliminary benchmarks show the Polyphony web server to be about 3X as fast as
Puma and 20X as fast as Unicorn. With SSL termination and HTTP/2, the
Polyphony web server is about 2X as fast as Falcon.

### HTTP/1

|Server|requests/sec|average latency|max latency|
|------|-----------:|--------------:|----------:|
|Puma| | | |
|Unicorn| | | |
|Agoo| | | |
|Polyphony| | | |

### HTTP/2 with SSL Termination

|Server|requests/sec|average latency|max latency|
|------|-----------:|--------------:|----------:|
|Falcon| | | |
|Polyphony| | | |

(*Non-official benchmark with a basic "Hello world" Rack application. The
usual caveats regarding benchmarks apply.*)

## Integrating Polyphony with other Gems

Polyphony aims to be a comprehensive concurrency solution for Ruby, and to
enable developers to use as many core and stdlib APIs as possible,
transparently, in a multi-fiber environment. Polyphony also provides adapters
for common gems such as `pg` and `redis`, allowing those gems to be used in a
fiber-aware manner.

For gems that do not yet have a fiber-aware adapter, Polyphony offers a
general solution in the form of a thread pool. A thread pool lets you offload
blocking method calls (calls that block the entire thread) onto worker
threads, letting you continue with other work while waiting for the call to
return. For example, here's how an `sqlite` adapter might work:

```ruby
class SQLite3::Database
  THREAD_POOL = Polyphony::ThreadPool.new

  alias_method :orig_execute, :execute
  def execute(sql, *args)
    THREAD_POOL.process { orig_execute(sql, *args) }
  end
end
```

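The offload pattern itself can be sketched in plain Ruby. The `TinyThreadPool` class below is a simplified, thread-based analogue of `Polyphony::ThreadPool`, shown only to illustrate the mechanics (in Polyphony, `process` would suspend just the calling fiber, not the whole thread):

```ruby
# A tiny thread pool: each job is pushed onto a shared queue together
# with a per-job result queue; a worker thread runs the job and sends
# its result back, while the caller blocks on the result queue.
class TinyThreadPool
  def initialize(size = 4)
    @jobs = Queue.new
    @workers = size.times.map do
      Thread.new do
        while (job = @jobs.pop)
          block, result = job
          result << block.call
        end
      end
    end
  end

  # Blocks the caller until the job has been processed on a worker thread.
  def process(&block)
    result = Queue.new
    @jobs << [block, result]
    result.pop
  end
end

POOL = TinyThreadPool.new

# Offload a "blocking" computation and wait for its result:
puts POOL.process { 6 * 7 }
#=> 42
```

The design choice is the same as in the `sqlite` adapter above: the blocking call runs elsewhere, and the caller simply waits for its return value.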
Other cases might require converting a callback-based interface into a
blocking, fiber-aware one. Here's (a simplified version of) how Polyphony uses
the callback-based `http_parser.rb` gem to parse incoming HTTP/1 requests:

```ruby
class HTTP1Adapter
  ...

  def on_headers_complete(headers)
    @pending_requests << Request.new(headers, self)
  end

  def each(&block)
    while (data = @connection.readpartial(8192))
      # feed the parser
      @parser << data
      while (request = @pending_requests.shift)
        block.call(request)
        return unless request.keep_alive?
      end
    end
  end

  ...
end
```

In the code snippet above, the solution is quite simple. The fiber handling
the connection loops, waiting for data to be read from the socket. Once data
arrives, it is fed to the HTTP parser. The HTTP parser will call the
`on_headers_complete` callback, which simply adds a request to the request
queue. The code then continues to handle any requests still in the queue.

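The same callback-to-sequential conversion can be demonstrated without any gems. `ToyParser` and `SequentialAdapter` below are made-up names for illustration: a toy parser fires an `on_message` callback for each complete line, and the adapter turns that into an ordered, pull-style `each`, just as `HTTP1Adapter` does for HTTP requests:

```ruby
# A toy callback-based parser: fires on_message for each
# newline-terminated message fed to it, buffering partial input.
class ToyParser
  attr_accessor :on_message

  def initialize
    @buffer = +''
  end

  def <<(data)
    @buffer << data
    while (line = @buffer.slice!(/.*\n/))
      @on_message&.call(line.chomp)
    end
  end
end

# The adapter converts the callback interface into a sequential one:
# the callback only queues messages, and #each drains the queue in order.
class SequentialAdapter
  def initialize(chunks)
    @chunks = chunks
    @pending = []
    @parser = ToyParser.new
    @parser.on_message = proc { |msg| @pending << msg }
  end

  def each(&block)
    @chunks.each do |data|
      @parser << data # may fire the callback zero or more times
      block.call(@pending.shift) until @pending.empty?
    end
  end
end

adapter = SequentialAdapter.new(["hel", "lo\nwor", "ld\n"])
adapter.each { |msg| puts msg }
#=> hello
#=> world
```

In Polyphony, the data source would be a socket read that suspends the current fiber instead of a fixed array of chunks, but the queue-and-drain structure is identical.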
## Future Directions

Polyphony is a young project, and a lot of development effort is still needed
to reach version 1.0. Here are some of the exciting directions we're working
on:

- Support for more core and stdlib APIs.
- More adapters for gems with C extensions, such as `mysql`, `sqlite3` etc.
- An `io_uring` agent as an alternative to the libev agent.
- More concurrency constructs for building highly concurrent applications.