hyperion-rb 2.14.0 → 2.15.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +4 -4
- data/CHANGELOG.md +71 -0
- data/README.md +108 -390
- data/lib/hyperion/server.rb +32 -0
- data/lib/hyperion/version.rb +1 -1
- metadata +1 -1
checksums.yaml
CHANGED

@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 89ad0e591a8a44e23597ab8aa800ea65437c3e90d77903f7b3433c36368c87de
+  data.tar.gz: b8a17ba534c62e8aa9d1b91f23e7baebf5a49d3074617fbcf73a328363e76aa4
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 7e0059eba0f4d925bab72a1a7c4371e4e74c7336e62ac26135d3450480bfe6979ade740f0b7c236bfac67592eec40662ba0fca72c761cc2e208b8063dc6c04bb
+  data.tar.gz: 5e38e2d43c8c8dea1f1dfce54639c7ea11377905a7193b3e87199c78f7aacccbed30218ec49ba1e1f39317541f361501671ded4b0d0875503806cc3680a9a856
data/CHANGELOG.md
CHANGED

@@ -1,5 +1,76 @@
 # Changelog
 
+## 2.15.0 — 2026-05-02
+
+### 2.15-A — Fresh bench, README split, CI flake fix
+
+**Why.** Three coordinated changes for one consolidated milestone.
+First, the README's headline numbers were a stitched collection
+across sprints 2.10–2.14 — the 134k claim from 2.12-D, the 4-hour
+soak from 2.14-C, the gRPC numbers from 2.14-D — captured on
+different days under different host conditions. Operators wanted
+one coherent snapshot they could trust. Second, the README sat at
+445 lines after the 2.14-E rework; with feature-deep-dive material
+inline it was still longer than a 30-second reader will tolerate.
+Third, GitHub Actions flaked on the `2.14-E` commit (1700426) with
+`Errno::EBADF: select_internal_with_gvl:epoll_wait` raised from
+inside `Async::Scheduler#close` on Ruby 3.4 + async 2.39 — the
+existing `child.wait rescue StandardError` only protected the
+inner block.
+
+**What 2.15-A ships.**
+
+1. **CI flake fix.** `lib/hyperion/server.rb#start_async_loop`
+   gains an outer `rescue Errno::EBADF, IOError` around the entire
+   `Async do ... end` block. Two regression specs added — one
+   deterministic (stubs `run_accept_fiber` to raise EBADF
+   synchronously), one integration-shape (10-cycle rapid boot/stop
+   on `thread_count: 0 + async_io: true`). 10/10 clean local runs.
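The shape of the fix can be sketched without the async gem. The stand-in below replaces the `Async do ... end` reactor with a plain block; what it demonstrates is rescue placement, which is the whole point of the change (the real code lives in `lib/hyperion/server.rb#start_async_loop`; the return symbols here are illustrative, not Hyperion's API):

```ruby
# Stand-in reactor: real code wraps `Async do ... end`. The outer rescue
# covers scheduler teardown, which runs AFTER any inner per-fiber
# begin/rescue has already exited -- so an inner `child.wait rescue
# StandardError` alone cannot see an EBADF raised during close.
def start_async_loop
  yield   # reactor body; per-request errors are rescued inside in real code
  :clean
rescue Errno::EBADF, IOError
  :swallowed_shutdown_race   # fd already closed during scheduler close
end

start_async_loop { raise Errno::EBADF }  # => :swallowed_shutdown_race
```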
+
+2. **Fresh bench.** Single coherent run on the bench host on a
+   single day captures all 9 headline rows. New driver script
+   `bench/run_all.sh` boots one server per row, runs `wrk` (or
+   `ghz` for gRPC), kills it, moves on — designed to be
+   re-runnable: any future maintainer can `./bench/run_all.sh` and
+   reproduce the published numbers within bench-host drift.
+   Numbers preserved in `docs/BENCH_HYPERION_2_14.md` (table +
+   reproduction commands) and `docs/BENCH_HYPERION_2_14_results.csv`
+   (raw CSV for archaeology).
+
+3. **README split.** `README.md` shrunk 445 → 163 lines. Feature
+   deep-dives moved to `docs/HTTP2_AND_TLS.md`,
+   `docs/HANDLE_STATIC_AND_HANDLE_BLOCK.md`,
+   `docs/CLUSTER_AND_SO_REUSEPORT.md`, `docs/ASYNC_IO.md`,
+   `docs/CONFIGURATION.md`, `docs/OPERATOR_GUIDANCE.md`,
+   `docs/LOGGING.md`, `docs/GRPC.md` (`docs/WEBSOCKETS.md` and
+   `docs/OBSERVABILITY.md` already existed). README structure now:
+   title + tagline → 30-second pitch → quick start → headline
+   bench table (one tight row per workload, fresh numbers) →
+   features (8 bullets, each linking into `docs/<feature>.md`) →
+   compatibility → documentation index → reproducing benchmarks →
+   release history → contributing → credits + license.
+
+**Headline bench numbers (median of 3 trials, captured 2026-05-02).**
+
+| # | Workload | r/s | p99 |
+|--:|---|---:|---:|
+| 1 | Hyperion `handle_static` + io_uring | **122,778** (peak 134,573) | 1.11 ms |
+| 2 | Hyperion `handle_static` + accept4 | 16,725 | 90 µs |
+| 3 | Hyperion `Server.handle` block | 8,956 | 190 µs |
+| 4 | Hyperion generic Rack hello | 4,231 | 2.33 ms |
+| 5 | Hyperion CPU JSON block | 5,456 | 327 µs |
+| 6 | Hyperion gRPC unary (h2/TLS) | 1,732 | 29.87 ms |
+| 7 | Reference Agoo hello | 18,326 | 10.54 ms |
+| 8 | Reference Falcon hello | 6,394 | 408.83 ms |
+| 9 | Reference Puma hello | 6,240 | 408.77 ms |
+
+The peak trial on row 1 (134,573 r/s) is consistent with the
+2.14-D 134,084 headline; median 122,778 is the conservative honest
+number. Both are cited in the README.
+
+**Spec count.** 1145 → 1147 / 0 / 16 (+2 regression specs from
+the flake fix). All on macOS arm64 + Ruby 3.3.3.
+
 ## 2.14.0 — 2026-05-02
 
 ### 2.14-E — Complete README rework
data/README.md
CHANGED

@@ -6,438 +6,156 @@ High-performance Ruby HTTP server. Rack 3 + HTTP/2 + WebSockets + gRPC on a single binary.
 [](https://rubygems.org/gems/hyperion-rb)
 [](https://github.com/andrew-woblavobla/hyperion/blob/master/LICENSE)
 
-Hyperion serves a hello-world Rack response at **
-on a single worker
-**7×** Agoo's
-it's a complete Rack 3 server: HTTP/1.1
-(RFC 6455), gRPC unary + streaming on the Rack
-fiber concurrency for PG-bound apps, and pre-fork
-SO_REUSEPORT-balanced workers.
-
-```sh
-gem install hyperion-rb
-bundle exec hyperion config.ru   # http://127.0.0.1:9292
-```
-
-## Headline benchmarks
-
-Linux 6.8 / 16-vCPU Ubuntu 24.04 / Ruby 3.3.3, single worker, `wrk -t4 -c100 -d20s`
-unless noted. Reproduction commands and the full 6-row 4-way matrix
-(Hyperion / Puma / Falcon / Agoo) live in
-[docs/BENCH_HYPERION_2_11.md](docs/BENCH_HYPERION_2_11.md).
-
-| Workload | Hyperion r/s | Hyperion p99 | Reference |
-|-------------------------------------------------------|-------------:|-------------:|----------------------|
-| Static hello, `handle_static` + io_uring (2.12-D) | **134,084** | 1.14 ms | Agoo 2.15.14: 19,024 |
-| Static hello, `handle_static` + accept4 fallback | 15,685 | 107 µs | Agoo 2.15.14: 19,024 |
-| Dynamic block, `Server.handle { \|env\| ... }` (2.14-A) | 9,422 | 166 µs | Agoo 2.15.14: 19,024 |
-| CPU JSON via block (`bench/work.ru`, 2.14-A) | 5,897 | 256 µs | Falcon: 4,226 |
-| Generic Rack hello (no `Server.handle`) | 4,752 | 2.02 ms | Agoo 2.15.14: 19,024 |
-| gRPC unary, h2/TLS, ghz `-c50` (2.14-D) | 1,618 | 33.3 ms | Falcon `async-grpc`: 1,512 (+7%) |
-
-The 134,084 r/s row is sustained over a 4-hour soak at **120,684 r/s**
-with RSS variance 2.71% and `wrk-truth` p99 1.14 ms (2.14-C). The
-io_uring loop is opt-in via `HYPERION_IO_URING_ACCEPT=1` until 2.15;
-the `accept4` row is the default on Linux.
+Hyperion serves a hello-world Rack response at **122,778 r/s with a 1.11 ms p99**
+(median of 3 trials, peak 134,573) on a single worker — Linux 6.x, io_uring
+accept loop, `Server.handle_static`, **6.7×** Agoo's 18,326 r/s on the same
+hardware. Beyond the C-side fast path it's a complete Rack 3 server: HTTP/1.1
++ HTTP/2 with ALPN, WebSockets (RFC 6455), gRPC unary + streaming on the Rack
+3 trailers contract, native fiber concurrency for PG-bound apps, and pre-fork
+cluster mode with SO_REUSEPORT-balanced workers.
 
 ## Quick start
 
 ```sh
-
+gem install hyperion-rb
+bundle exec hyperion config.ru                      # http://127.0.0.1:9292
 bundle exec hyperion -w 4 -t 10 config.ru           # 4 workers × 10 threads
-bundle exec hyperion -w 0 config.ru                 # one worker per CPU
 bundle exec hyperion --tls-cert cert.pem --tls-key key.pem -p 9443 config.ru
 ```
 
-`bundle exec rake spec` (and the default task) auto-invoke `compile`, so a
-fresh checkout just needs `bundle install && bundle exec rake` for a green run.
-
 Migrating from Puma? `hyperion -t N -w M` matching your current Puma
 `-t N:N -w M` is the recommended drop-in. See
 [docs/MIGRATING_FROM_PUMA.md](docs/MIGRATING_FROM_PUMA.md).
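The quick-start commands above assume a `config.ru` in the working directory. A minimal hello-world rackup file (plain Rack 3, nothing Hyperion-specific; the `defined?(run)` guard is only there so the sketch also runs standalone) looks like:

```ruby
# config.ru -- the smallest Rack 3 app: a callable returning
# [status, headers, body]. Any Rack server, Hyperion included,
# boots it with e.g. `bundle exec hyperion config.ru`.
app = ->(env) { [200, { "content-type" => "text/plain" }, ["hello\n"]] }
run app if defined?(run)   # `run` is provided by the rackup loader
```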
 
-##
-
-### HTTP/1.1 + HTTP/2 + TLS
-
-ALPN auto-negotiates `h2` or `http/1.1` per connection. HTTP/2 multiplexes
-streams onto fibers within a single connection — slow handlers don't
-head-of-line-block other streams. Cluster-mode TLS works (`-w N` +
-`--tls-cert` / `--tls-key`).
-
-Smuggling defenses for HTTP/1.1: `Content-Length` + `Transfer-Encoding`
-together → 400; non-chunked `Transfer-Encoding` → 501; CRLF in response
-header values → `ArgumentError` (response-splitting guard).
-
-### WebSockets (2.1.0+)
-
-RFC 6455 over Rack 3 full hijack, native frame codec, per-connection
-wrapper with auto-pong, close handshake, UTF-8 validation, and per-message
-size cap. **ActionCable + faye-websocket on a single binary** — one
-`hyperion -w 4 -t 10 config.ru` serves HTTP, HTTP/2, TLS, and `/cable`
-from the same listener. Conformance: 463/463 autobahn-testsuite cases
-pass. See [docs/WEBSOCKETS.md](docs/WEBSOCKETS.md).
-
-### gRPC (2.12-F+)
-
-Hyperion's HTTP/2 path supports gRPC unary, server-streaming,
-client-streaming, and bidirectional RPCs via the Rack 3 trailers contract:
-any response body that defines `#trailers` gets a final HEADERS frame
-(with `END_STREAM=1`) carrying the trailer map after the DATA frames.
-Plain HTTP/2 traffic without the gRPC content-type keeps the unary
-buffered semantics — no behaviour change for non-gRPC clients.
-
-A minimal unary handler:
-
-```ruby
-class GrpcBody
-  def initialize(reply); @reply = reply; end
-  def each; yield @reply; end
-  def trailers; { 'grpc-status' => '0', 'grpc-message' => 'OK' }; end
-  def close; end
-end
-
-run ->(env) {
-  request = env['rack.input'].read
-  reply = handle(request)
-  [200, { 'content-type' => 'application/grpc' }, GrpcBody.new(reply)]
-}
-```
-
-Server-streaming yields one DATA frame per `each`; client-streaming
-reads incoming frames off `env['rack.input']` (a streaming IO that
-blocks until the next DATA frame lands); bidirectional interleaves
-both. Reproducible bench at `bench/grpc_stream.{proto,ru}` +
-`bench/grpc_stream_bench.sh` (ghz). Numbers in
-[docs/BENCH_HYPERION_2_11.md](docs/BENCH_HYPERION_2_11.md#grpc-ghz-bench--hyperion-vs-falcon-async-grpc-214-d).
-
-### `Server.handle` direct routes
-
-Bypass the Rack adapter for hot paths:
-
-```ruby
-Hyperion::Server.handle_static '/health', body: 'ok'
-Hyperion::Server.handle(:GET, '/v1/ping') { |env| [200, {}, ['pong']] }
-```
-
-`handle_static` bakes the response at boot and serves from the C accept
-loop (134k r/s with io_uring, 16k r/s on accept4). The dynamic block
-form (2.14-A) runs `app.call(env)` on the C accept loop too — accept +
-recv + parse + write release the GVL while the block holds it, so
-multi-threaded workers actually parallelise.
-
-### Pre-fork cluster
-
-Per-OS worker model: `SO_REUSEPORT` on Linux (kernel-balanced accept,
-1.004–1.011 max/min ratio across workers under steady load — 2.12-E
-audit), master-bind + worker-fd-share on macOS/BSD where Darwin's
-`SO_REUSEPORT` doesn't load-balance. Lifecycle hooks (`before_fork`,
-`on_worker_boot`, `on_worker_shutdown`) for AR / Redis / pool init.
-
-### Async I/O (PG-bound apps)
-
-`--async-io` runs plain HTTP/1.1 connections under `Async::Scheduler`,
-turning one OS thread into thousands of in-flight handler invocations.
-Paired with [hyperion-async-pg](https://github.com/andrew-woblavobla/hyperion-async-pg)
-on a `pg_sleep(50ms)` workload, single-worker `pool=200` hits **2,381 r/s**
-vs Puma `-t 5` at 56 r/s (architectural ceiling: pool size, not thread
-count). Three things must all be true: `--async-io`, `hyperion-async-pg`
-loaded, and a fiber-aware pool (`Hyperion::AsyncPg::FiberPool`,
-`async-pool`, or `Async::Semaphore` — **not** the `connection_pool` gem,
-whose `Mutex` blocks the OS thread). Skip any one and you get parity
-with Puma.
-
-### Observability
-
-`/-/metrics` Prometheus endpoint (admin-token guarded), per-route
-latency histograms, per-conn fairness rejections, WebSocket
-permessage-deflate ratio, kTLS active connections, ThreadPool queue
-depth, dispatch-mode counters (Rack / `handle_static` / dynamic block /
-h2 / async-io). Pre-built Grafana dashboard at
-[docs/grafana/hyperion-2.4-dashboard.json](docs/grafana/hyperion-2.4-dashboard.json).
-Full reference: [docs/OBSERVABILITY.md](docs/OBSERVABILITY.md).
-
-Default-ON structured access logs (one JSON or text line per request)
-with hot-path optimisations: per-thread cached iso8601 timestamp,
-hand-rolled line builder, lock-free per-thread 4 KiB write buffer.
-12-factor logger split: `info`/`debug` → stdout, `warn`/`error`/`fatal`
-→ stderr.
-
-### Optional io_uring accept loop
-
-Linux 5.x+, opt-in via `HYPERION_IO_URING_ACCEPT=1`. Multishot accept
-+ per-conn RECV/WRITE/CLOSE state machine on top of liburing. One
-`io_uring_enter` per N requests instead of N×3 syscalls. Compiles out
-cleanly without liburing — the `accept4` path stays the fallback.
-macOS keeps using `accept4`. Default-flip moves to 2.15 with a fresh
-24h soak.
-
-## Configuration
-
-Three layers, in precedence order: explicit CLI flag > environment
-variable > `config/hyperion.rb` > built-in default.
-
-### Most-used CLI flags
-
-| Flag | Default | Notes |
-|---|---|---|
-| `-b, --bind HOST` | `127.0.0.1` | |
-| `-p, --port PORT` | `9292` | |
-| `-w, --workers N` | `1` | `0` → `Etc.nprocessors` |
-| `-t, --threads N` | `5` | OS-thread Rack handler pool per worker. `0` → run inline (debugging). |
-| `-C, --config PATH` | `config/hyperion.rb` if present | Ruby DSL file. |
-| `--tls-cert PATH` / `--tls-key PATH` | nil | PEM cert + key for HTTPS. |
-| `--[no-]async-io` | off | Run plain HTTP/1.1 under `Async::Scheduler`. Required for `hyperion-async-pg` on plain HTTP. |
-| `--preload-static DIR` | nil | Preload static assets from DIR at boot (repeatable, immutable). Rails apps auto-detect from `Rails.configuration.assets.paths`. |
-| `--admin-token-file PATH` | unset | Auth file for `/-/quit` and `/-/metrics`. Refuses world-readable files. |
-| `--worker-max-rss-mb MB` | unset | Master gracefully recycles a worker exceeding MB RSS. |
-| `--max-pending COUNT` | unbounded | Per-worker accept-queue cap before HTTP 503 + `Retry-After: 1`. |
-| `--idle-keepalive SECONDS` | `5` | Keep-alive idle timeout. |
-| `--graceful-timeout SECONDS` | `30` | Shutdown deadline before SIGKILL. |
-
-`bin/hyperion --help` prints the full set, including `--max-body-bytes`,
-`--max-header-bytes`, `--max-request-read-seconds` (slowloris defence),
-`--h2-max-total-streams`, `--max-in-flight-per-conn`,
-`--tls-handshake-rate-limit`, and the `--[no-]yjit` /
-`--[no-]log-requests` toggles.
-
-### Environment variables
-
-`HYPERION_LOG_LEVEL`, `HYPERION_LOG_FORMAT`, `HYPERION_LOG_REQUESTS`
-(`0|1|true|false|yes|no|on|off`), `HYPERION_ENV`,
-`HYPERION_WORKER_MODEL` (`share|reuseport`), `HYPERION_IO_URING_ACCEPT`
-(`0|1`), `HYPERION_H2_DISPATCH_POOL`, `HYPERION_H2_NATIVE_HPACK`
-(`v2|ruby|off`), `HYPERION_H2_TIMING`.
-
-### Config file
-
-`config/hyperion.rb` — same shape as Puma's `puma.rb`. Auto-loaded if
-present. Strict DSL: unknown methods raise `NoMethodError` at boot.
-
-```ruby
-# config/hyperion.rb
-bind '0.0.0.0'
-port 9292
-
-workers 4
-thread_count 10
-
-# tls_cert_path 'config/cert.pem'
-# tls_key_path 'config/key.pem'
-
-read_timeout 30
-idle_keepalive 5
-graceful_timeout 30
-
-log_level :info
-log_format :auto
-log_requests true
-
-async_io nil   # nil = auto (1.4.0+), true = inline-on-fiber everywhere, false = pool everywhere
-
-before_fork do
-  ActiveRecord::Base.connection_handler.clear_all_connections! if defined?(ActiveRecord)
-end
-
-on_worker_boot do |worker_index|
-  ActiveRecord::Base.establish_connection if defined?(ActiveRecord)
-end
-```
-
-A documented sample lives at
-[`config/hyperion.example.rb`](config/hyperion.example.rb).
-
-## Operator guidance
-
-Distilled from [docs/BENCH_2026_04_27.md](docs/BENCH_2026_04_27.md)
-(Rails 8.1 real-app sweep). Headline finding: **the simplest drop-in
-is the right answer.**
-
-### Migrating from Puma
-
-`hyperion -t N -w M` matching your current Puma `-t N:N -w M`. No other
-flags. Versus Puma at the same `-t/-w` shape on real Rails endpoints:
-**+9% rps on lightweight endpoints, 28× lower p99 on health-style
-endpoints, 3.8× lower p99 on PG-touching endpoints.** Same RSS, same
-operator surface — keep all your existing config, monitoring, deploy
-scripts.
-
-### Knobs that help on synthetic benches but **not** on real Rails
-
-| Knob | Synthetic | Real Rails | Recommendation |
-|---|---|---|---|
-| `-t 30` | +5–10% on hello-world | **Hurts** p99 vs `-t 10` (3.51 s vs 148 ms on `/up`) — GVL + middleware Mutex contention | Stay at `-t 10`. |
-| `--yjit` | +5–10% on CPU-bound | Wash on dev-mode Rails | Skip until you bench production-mode. |
-| `RAILS_POOL > 25` | n/a | No improvement at 50 or 100 | Keep your existing AR pool. |
-| `--async-io` | 33–42× rps on PG-bound | **Worse** than drop-in (4.14 s p99 on `/up`) until your full I/O stack is fiber-cooperative | Don't enable until `redis-rb` → `async-redis`. |
-
-### When `-w N` helps
-
-| Workload | Recommended | Why |
-|---|---|---|
-| Pure I/O-bound (PG / Redis / external HTTP) | `-w 1` + larger pool | `-w 1 pool=200` = 87 MB / 2,180 r/s vs `-w 4 pool=64` = 224 MB / 1,680 r/s. **2.6× memory, 0.77× rps** if you pick multi-worker on wait-bound. |
-| Pure CPU-bound | `-w N` matching CPU count | Bench: `-w 16 -t 5` hits 98,818 r/s on a 16-vCPU box. |
-| Mixed (Rails-shaped, ~5 ms CPU + 50 ms wait) | `-w N/2` (half cores) + medium pool | `-w 4 -t 5 pool=128` = 1,740 r/s on `pg_mixed.ru`, no cold-start spike. |
-
-### Read p99 not mean
-
-| Workload | Hyperion rps / p99 | Closest competitor | rps ratio | p99 ratio |
-|---|---|---|---:|---:|
-| Hello `-w 4` | 21,215 / 1.87 ms | Falcon 24,061 / 9.78 ms | 0.88× | **5.2× lower** |
-| CPU JSON `-w 4` | 15,582 / 2.47 ms | Falcon 18,643 / 13.51 ms | 0.84× | **5.5× lower** |
-| Static 1 MiB | 1,919 / 4.22 ms | Puma 2,074 / 55 ms | 0.93× | **13× lower** |
-| PG-wait `-w 1` pool=200 | 2,180 / 668 ms | Puma 530 + 200 timeouts | **4.1×** | qualitative crush |
-
-Throughput peaks are easy to fake under controlled conditions; tail
-latency reflects what your slowest user actually experiences when the
-load balancer fans them onto a busy worker.
-
-## Logging
-
-Default behaviour:
-
-- `info`/`debug` → stdout, `warn`/`error`/`fatal` → stderr (12-factor).
-- One structured access-log line per response, `info` level. Disable
-  with `--no-log-requests` or `HYPERION_LOG_REQUESTS=0`.
-- Format auto-selects: `RAILS_ENV=production`/`staging` → JSON; TTY →
-  coloured text; piped output without env hint → JSON.
-
-Sample text (TTY default):
-
-```
-2026-04-26T18:40:04.112Z INFO [hyperion] message=request method=GET path=/api/v1/health status=200 duration_ms=46.63 remote_addr=127.0.0.1 http_version=HTTP/1.1
-```
-
-Sample JSON (production / piped):
-
-```json
-{"ts":"2026-04-26T18:38:49.405Z","level":"info","source":"hyperion","message":"request","method":"GET","path":"/api/v1/health","status":200,"duration_ms":46.63,"remote_addr":"127.0.0.1","http_version":"HTTP/1.1"}
-```
-
-## Metrics
+## Headline benchmarks
 
+Linux 6.8 / 16-vCPU Ubuntu 24.04 / Ruby 3.3.3, single worker, `wrk -t4 -c100 -d20s`
+unless noted. Three trials per row, median reported. Captured 2026-05-02 on
+the 2.14.0 release commit. Full reproduction in
+[docs/BENCH_HYPERION_2_14.md](docs/BENCH_HYPERION_2_14.md); single-command
+re-bench via [`bench/run_all.sh`](bench/run_all.sh).
 
+| Workload | Hyperion r/s | Hyperion p99 | Reference |
+|-------------------------------------------------------|-------------:|-------------:|----------------------|
+| Static hello, `handle_static` + io_uring | **122,778** | 1.11 ms | Agoo: 18,326 |
+| Static hello, `handle_static` + accept4 fallback | 16,725 | 90 µs | Agoo: 18,326 |
+| Dynamic block, `Server.handle { \|env\| ... }` | 8,956 | 190 µs | Agoo: 18,326 |
+| CPU JSON via block (`bench/work.ru`) | 5,456 | 327 µs | Falcon: 6,394 |
+| Generic Rack hello (no `Server.handle`) | 4,231 | 2.33 ms | Agoo: 18,326 |
+| gRPC unary, h2/TLS, `ghz -c50` | 1,732 | 29.87 ms | (Falcon `async-grpc` historical: 1,512) |
+
+Peak trial on row 1: 134,573 r/s. The io_uring loop is opt-in via
+`HYPERION_IO_URING_ACCEPT=1` until 2.15; the `accept4` row is the default on
+Linux. Falcon and Puma both tail-latency at **>400 ms p99** on the generic
+Rack hello row that Hyperion serves at 2.33 ms; the closest competitor's mean is
+Hyperion's p99 — read the tail, not the throughput peak.
 
-$ curl -s -H 'X-Hyperion-Admin-Token: secret' http://127.0.0.1:9292/-/metrics
-# HELP hyperion_requests_total Total HTTP requests handled
-# TYPE hyperion_requests_total counter
-hyperion_requests_total 8910
-hyperion_responses_status_total{status="200"} 8521
-hyperion_responses_status_total{status="404"} 12
-```
+## Features
 
+- **HTTP/1.1 + HTTP/2 + TLS** with ALPN auto-negotiation. Multiplexed h2
+  streams on fibers; smuggling defences inline. See
+  [docs/HTTP2_AND_TLS.md](docs/HTTP2_AND_TLS.md).
+- **WebSockets** (RFC 6455) over Rack 3 full hijack. ActionCable +
+  faye-websocket on the same listener. 463/463 autobahn cases pass. See
+  [docs/WEBSOCKETS.md](docs/WEBSOCKETS.md).
+- **gRPC** unary, server-stream, client-stream, bidirectional via
+  Rack 3 trailers. See [docs/GRPC.md](docs/GRPC.md).
+- **`Server.handle_static`** + **`Server.handle { |env| … }`** —
+  C-loop direct routes that bypass the Rack adapter for hot paths.
+  See [docs/HANDLE_STATIC_AND_HANDLE_BLOCK.md](docs/HANDLE_STATIC_AND_HANDLE_BLOCK.md).
+- **Pre-fork cluster mode** — `SO_REUSEPORT` on Linux, master-bind on
+  macOS / BSD. 1.004–1.011 max/min worker fairness ratio under steady
+  load. See [docs/CLUSTER_AND_SO_REUSEPORT.md](docs/CLUSTER_AND_SO_REUSEPORT.md).
+- **Async I/O** for PG-bound apps via `--async-io` +
+  [hyperion-async-pg](https://github.com/andrew-woblavobla/hyperion-async-pg).
+  Single worker `pool=200` hits 2,381 r/s on `pg_sleep(50ms)` vs Puma's 56
+  r/s. See [docs/ASYNC_IO.md](docs/ASYNC_IO.md).
+- **Observability** — `/-/metrics` Prometheus endpoint, per-route
+  histograms, dispatch-mode counters, kTLS gauge. Pre-built Grafana
+  dashboard. See [docs/OBSERVABILITY.md](docs/OBSERVABILITY.md).
+- **Default-on structured access logs** — JSON in production, coloured
+  text on TTY. Per-thread cached timestamps; ≈ 0.1 µs per logged
+  request. See [docs/LOGGING.md](docs/LOGGING.md).
+- **io_uring accept loop** (Linux 5.x+, opt-in) — multishot accept +
+  per-conn state machine. Compiles out cleanly without liburing.
+  Default-flip moves to 2.15 with a fresh 24h soak.
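The two direct-route registration calls are shown verbatim in the pre-split README (now moved to docs/HANDLE_STATIC_AND_HANDLE_BLOCK.md). To make them runnable standalone, the sketch below stubs the `Hyperion::Server` registration surface; the stub module is mine, only the last two calls are from the README:

```ruby
# Stub of the registration surface so the sketch runs without the gem.
# Real Hyperion serves handle_static responses straight from the C accept
# loop and runs handle-block apps on it with the GVL held only for the block.
module Hyperion
  class Server
    @routes = {}
    class << self
      attr_reader :routes

      def handle_static(path, body:)
        @routes[[:GET, path]] = ->(_env) { [200, {}, [body]] }
      end

      def handle(method, path, &blk)
        @routes[[method, path]] = blk
      end
    end
  end
end

# The two forms exactly as the README showed them:
Hyperion::Server.handle_static '/health', body: 'ok'
Hyperion::Server.handle(:GET, '/v1/ping') { |env| [200, {}, ['pong']] }
```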
 
 ## Compatibility
 
 | Component | Version |
 |---|---|
-| Ruby | 3.3+
+| Ruby | 3.3+ |
 | Rack | 3.x |
 | Rails | verified up to 8.1 |
 | Linux kernel | 5.x+ for io_uring opt-in; 4.x+ otherwise |
-| macOS | works (TLS, h2, WebSockets, `accept4` fallback
+| macOS | works (TLS, h2, WebSockets, `accept4` fallback) |
 
-`REMOTE_ADDR`, IPv6-safe `Host` parsing, CRLF guard. The
-`Hyperion::FiberLocal.install!` opt-in shim handles the residual
-`Thread.current.thread_variable_*` footgun in older Rails idioms;
-modern Rails 7.1+ already uses Fiber storage natively.
+## Documentation
 
+- [BENCH_HYPERION_2_14.md](docs/BENCH_HYPERION_2_14.md) — fresh 2.14.0
+  bench (this README's headline numbers, with reproduction commands).
+- [BENCH_HYPERION_2_11.md](docs/BENCH_HYPERION_2_11.md) — 4-way
+  matrix (Hyperion / Puma / Falcon / Agoo).
+- [BENCH_2026_04_27.md](docs/BENCH_2026_04_27.md) — real Rails 8.1
+  app sweep (Exodus platform).
+- [CONFIGURATION.md](docs/CONFIGURATION.md) — CLI flags, env vars,
+  `config/hyperion.rb` DSL.
+- [OPERATOR_GUIDANCE.md](docs/OPERATOR_GUIDANCE.md) — what `-w N` /
+  `-t N` / `--async-io` actually do on Rails-shaped traffic.
+- [HTTP2_AND_TLS.md](docs/HTTP2_AND_TLS.md) — h2 + TLS surface.
+- [WEBSOCKETS.md](docs/WEBSOCKETS.md) — RFC 6455 surface.
+- [GRPC.md](docs/GRPC.md) — Rack 3 trailers + streaming RPCs.
+- [HANDLE_STATIC_AND_HANDLE_BLOCK.md](docs/HANDLE_STATIC_AND_HANDLE_BLOCK.md)
+  — direct-route forms.
+- [CLUSTER_AND_SO_REUSEPORT.md](docs/CLUSTER_AND_SO_REUSEPORT.md) —
+  cluster mode and per-OS worker model.
+- [ASYNC_IO.md](docs/ASYNC_IO.md) — `--async-io` for PG-bound apps.
+- [OBSERVABILITY.md](docs/OBSERVABILITY.md) — metrics + Grafana.
+- [LOGGING.md](docs/LOGGING.md) — access log surface.
+- [MIGRATING_FROM_PUMA.md](docs/MIGRATING_FROM_PUMA.md) — drop-in guide.
+- [REVERSE_PROXY.md](docs/REVERSE_PROXY.md) — nginx fronting.
 
+## Reproducing benchmarks
 
 ```sh
-# Hello via Server.handle_static + io_uring (134k r/s row)
-HYPERION_IO_URING_ACCEPT=1 bundle exec bin/hyperion -w 1 -t 5 -p 9292 bench/hello_static.ru &
-wrk -t4 -c100 -d20s --latency http://127.0.0.1:9292/
-
-# Dynamic block via Server.handle (9.4k r/s row)
-bundle exec bin/hyperion -w 1 -t 5 -p 9292 bench/hello_handle_block.ru &
-wrk -t4 -c100 -d20s --latency http://127.0.0.1:9292/
-
-# Generic Rack hello (4.7k r/s row)
-bundle exec bin/hyperion -w 1 -t 5 -p 9292 bench/hello.ru &
-wrk -t4 -c100 -d20s --latency http://127.0.0.1:9292/
-
-# CPU JSON via block form (5.9k r/s row)
-bundle exec bin/hyperion -w 1 -t 5 -p 9292 bench/work.ru &
-wrk -t4 -c200 -d15s --latency http://127.0.0.1:9292/
-
-# 4-way comparator (Hyperion vs Puma vs Falcon vs Agoo)
-bash bench/4way_compare.sh
-
-# gRPC unary + streaming (Hyperion side)
-GHZ=/tmp/ghz TRIALS=3 DURATION=15s WARMUP_DURATION=3s bash bench/grpc_stream_bench.sh
-
-# Idle keep-alive RSS sweep (10k conns × 30s hold)
-bash bench/keepalive_memory.sh
+bundle install && bundle exec rake compile
+./bench/run_all.sh                 # full table
+./bench/run_all.sh --row 1         # single row
+./bench/run_all.sh --skip-grpc     # rows 1-5 + 7-9
 ```
 
+The `bench/run_all.sh` driver boots one server per row, runs `wrk` (or
+`ghz` for gRPC), kills it, moves on — no concurrent runs (cross-talk
+inflates noise on shared hosts). Output: CSV + markdown table at
+`$OUT_CSV` / `$OUT_MD` (default `/tmp/hyperion-2.15-bench.{csv,md}`).
|
|
400
131
|
|
|
-
-
-
-Puma 8.0.
-
-identical worker count, thread count, wrk concurrency, payload, and
-TLS cipher).
+Per-row commands and the host snapshot live in
+[docs/BENCH_HYPERION_2_14.md](docs/BENCH_HYPERION_2_14.md). When
+your numbers don't match: bench-host noise drifts ±10–30% over days,
+Puma version mismatch (sweep used 8.0.x; in-repo Gemfile pins
+`~> 6.4`), and different `-t` / `-c` are the usual culprits.
 
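To rule out the Puma-version culprit when reproducing the comparator, align the pin with the sweep before running it (an illustrative Gemfile fragment; the pin change is a local reproduction step, not a change shipped in this release):

```ruby
# Gemfile (illustrative). The repo pins the 6.4 series; the 2.15-A
# sweep ran against Puma 8.0.x, so reproduce with the same series.
gem "puma", "~> 8.0"
```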
 ## Release history
 
-See [CHANGELOG.md](CHANGELOG.md). Recent: 2.14.0 (gRPC streaming
-
-
-
-gRPC
-
-
-(`PageCache`, `Server.handle` direct routes, TCP_NODELAY at accept).
+See [CHANGELOG.md](CHANGELOG.md). Recent: 2.14.0 (gRPC streaming
+ghz; dynamic-block C dispatch; `Server#stop` accept-wake on Linux;
+io_uring 4h soak), 2.13.0 (response head builder C-rewrite; gRPC
+streaming RPCs), 2.12.0 (C connection lifecycle; io_uring loop;
+gRPC unary trailers), 2.11.0 (HPACK CGlue default; h2 dispatch-pool
+warmup), 2.10.x (PageCache, `Server.handle` direct routes,
+TCP_NODELAY at accept).
 
-##
+## Contributing
 
-
-
-
-- [docs/BENCH_HYPERION_2_0.md](docs/BENCH_HYPERION_2_0.md) — historical
-  2.10-B baseline (preserved for archaeology).
-- [docs/BENCH_2026_04_27.md](docs/BENCH_2026_04_27.md) — real Rails 8.1
-  app sweep (Exodus platform).
-- [docs/OBSERVABILITY.md](docs/OBSERVABILITY.md) — metrics + Grafana.
-- [docs/WEBSOCKETS.md](docs/WEBSOCKETS.md) — RFC 6455 surface.
-- [docs/MIGRATING_FROM_PUMA.md](docs/MIGRATING_FROM_PUMA.md) — drop-in
-  guide.
-- [docs/REVERSE_PROXY.md](docs/REVERSE_PROXY.md) — nginx fronting.
+See [CONTRIBUTING.md](CONTRIBUTING.md). `bundle install && bundle exec rake`
+gives you a green test suite (1147 examples / 0 failures / 16 pending
+on macOS arm64 + Ruby 3.3.3 as of 2.15-A).
 
 ## Credits
 
 - Vendored [llhttp](https://github.com/nodejs/llhttp) (Node.js's HTTP
   parser, MIT) under `ext/hyperion_http/llhttp/`.
-- HTTP/2 framing and HPACK via
-  [`protocol-http2`](https://github.com/socketry/protocol-http2).
+- HTTP/2 framing and HPACK via [`protocol-http2`](https://github.com/socketry/protocol-http2).
 - Fiber scheduler via [`async`](https://github.com/socketry/async).
 
 ## License
data/lib/hyperion/server.rb CHANGED

@@ -821,6 +821,33 @@ module Hyperion
     # workers (Linux) the kernel hashes connections fairly across siblings;
     # on `:share` (Darwin) the knob is silently honoured but shows no
     # scaling benefit — operators already know Darwin is special.
+    # 2.15-A — outer rescue for `Errno::EBADF` / `IOError`.
+    #
+    # Background: prior to 2.15-A this was just the inner
+    # `task.children.each { child.wait rescue StandardError; nil }`
+    # pattern. That handles raises from the accept fiber bodies, but
+    # NOT from `Async::Scheduler#close`, which runs implicitly when the
+    # `Async do ... end` block exits and which itself parks in
+    # `epoll_wait` / `kevent`. If `stop` closed the listener fd while
+    # the scheduler still had it registered, the scheduler-close
+    # surfaces `Errno::EBADF: Bad file descriptor —
+    # select_internal_with_gvl:epoll_wait` and re-raises it past the
+    # inner rescue (the inner rescue is only on `child.wait`).
+    #
+    # Symptom in CI: `async_io: true` boot/stop integration specs flake
+    # on Ruby 3.4 + async 2.39 with EBADF bubbling out of the worker
+    # thread. The race window is widest with `thread_count: 0` because
+    # the entire dispatch path runs on the same fiber as the accept
+    # loop, so there's no thread-pool synchronization barrier between
+    # `stop` and scheduler close.
+    #
+    # Fix: catch `Errno::EBADF`/`IOError` at the outer `Async do` scope.
+    # These are exclusively shutdown signals (the listener fd only goes
+    # bad when `close_listeners` has run); swallowing them here is
+    # equivalent to the C-loop path, which already swallows them inside
+    # `accept_or_nil`. The change is intentionally narrow — other
+    # `StandardError` from inside the loop bodies still propagates out
+    # so genuine accept-loop bugs are not masked.
     def start_async_loop
       Async do |task|
         n = @accept_fibers_per_worker
@@ -834,6 +861,11 @@ module Hyperion
           nil
         end
       end
+    rescue Errno::EBADF, IOError
+      # Listener fd already closed by `stop` — scheduler close-time
+      # epoll_wait / kevent saw the bad fd. Benign at this point;
+      # the server is shutting down by design.
+      nil
     end
 
     # Single accept fiber's run loop. Called N times (default 1) from
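The inner-vs-outer rescue distinction this hunk's comment describes can be reproduced without the async gem at all (a minimal stand-in, not Hyperion's code: `listener.read` plays the role of scheduler-close touching an fd that `stop` already closed, and the `children` array plays the role of the accept fibers):

```ruby
# Minimal stand-in for the pattern above (illustrative; no async gem
# involved). The inner rescue mirrors `child.wait rescue
# StandardError` and only covers the wait; the def-level rescue is
# the 2.15-A addition, catching the IOError / Errno::EBADF raised
# when teardown touches the already-closed listener fd.
def accept_loop(listener, children = [nil])
  children.each do |child|
    begin
      child&.wait # raises from fiber bodies would be swallowed here
    rescue StandardError
      nil
    end
  end
  listener.read(1) # stand-in: scheduler close touching the listener
rescue Errno::EBADF, IOError
  :clean_shutdown # only reachable once the listener has been closed
end

r, w = IO.pipe
w.close
r.close # what Server#stop does to the listener before the loop exits
result = accept_loop(r)
```

A def-level `rescue` covers the whole method body, including teardown code that runs after the inner `begin/rescue` has finished, which is exactly the window the inner rescue missed.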
data/lib/hyperion/version.rb CHANGED