hyperion-rb 1.3.0 → 1.3.1

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
-   metadata.gz: 53235f97fb1e384507f62373cd12180ade1eecc8df6f9ce75145ba60403a983e
-   data.tar.gz: 1c58ede296c54a098d26cae2b69837f23bf60fb5ce4239c033446f583f12a4df
+   metadata.gz: 154fbd1bc72c4eeb5e0c740354ced74c6fa8ee4295fab8e8551ae83f78313c53
+   data.tar.gz: 4eaea8250dc8315318152c4c54a16803cc8a7b81236ecaa5dad9adee1d1ab95b
  SHA512:
-   metadata.gz: 8484e7168d8ba27312edece5c86af770ef0604bf85d13cdde37c5f4c87b9de0417216f241e073340bedeabef292fc4ec032a7379a76a836e236f6f129c97bcd3
-   data.tar.gz: 89ac23881d0ddd4beff79d08551fa6f7e8399948c1607a3a32475202780170dc9c036f1ef93bd1f17dc5a34e1d91d9901bf3e4119e9efea05d5aa528b22271ff
+   metadata.gz: bed63c053e0f6d24876cde01346809ce830e3548382aab430798ee60b7056f926609190064a38d4f39a0c56b43945f4cc9b1b57fe0c938f24f3e95f1a2e499da
+   data.tar.gz: 79decaf41c5b5755e2af9e0ee5475fce3c74bade418dcd7032309a4fa9d9061388cf3757a8ca7191b373ada6c31d2b3eb2453827f569f86a9caeb7970ff27f48
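
To check a downloaded copy of the gem against the sums above: a minimal sketch using only the RubyGems stdlib. The local filename is an assumption (whatever `gem fetch hyperion-rb -v 1.3.1` produced).

```ruby
require 'rubygems/package'
require 'digest'

# A .gem file is a tar archive whose members include metadata.gz and
# data.tar.gz, the two files the SHA256/SHA512 sums above are computed over.
File.open('hyperion-rb-1.3.1.gem', 'rb') do |io|
  Gem::Package::TarReader.new(io) do |tar|
    tar.each do |entry|
      next unless %w[metadata.gz data.tar.gz].include?(entry.full_name)
      body = entry.read
      puts "#{entry.full_name} SHA256: #{Digest::SHA256.hexdigest(body)}"
      puts "#{entry.full_name} SHA512: #{Digest::SHA512.hexdigest(body)}"
    end
  end
end
```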
data/CHANGELOG.md CHANGED
@@ -1,5 +1,26 @@
  # Changelog
 
+ ## [1.3.1] - 2026-04-27
+
+ Documentation + observability follow-ups for the 1.3.0 `--async-io` feature. No behaviour changes to existing code paths.
+
+ ### Added
+ - **Dispatch-path metrics** — `Hyperion::Server` now bumps two new counters so operators can verify which path served their requests:
+   - `:requests_threadpool_dispatched` — HTTP/1.1 connection handed to the worker pool (or served inline in `start_raw_loop` when `thread_count: 0`).
+   - `:requests_async_dispatched` — HTTP/1.1 connection served inline on the accept-loop fiber under `--async-io`.
+   HTTP/2 streams are not bucketed (per-stream counters cover them); the rare TLS+`thread_count: 0` config is also un-counted to avoid misclassification.
+ - **`docs/MIGRATING_FROM_PUMA.md`** — new "Fiber-cooperative I/O for PG-bound apps" section near the top, with the Linux 50 ms `pg_sleep` bench summary and the three-prerequisite checklist (`async_io: true` + `hyperion-async-pg` + fiber-aware pool).
+ - **README** — `async_io` documented in the config-DSL example block; the new dispatch-path counters listed in the Metrics table.
+ - **Specs** — two new examples in `spec/hyperion/server_async_io_spec.rb`:
+   - `async_io: true` + `thread_count: 0` boots cleanly and serves a request under a scheduler.
+   - Thread-decoupling proof: 5 concurrent requests against a 200 ms fiber-yielding handler complete in <600 ms wall (vs. ~1.0 s if serialized), locking in the architectural promise from the README.
+
+ ### Changed
+ - N/A — no behavioural changes; metrics are additive, docs are additive.
+
+ ### Fixed
+ - N/A.
+
  ## [1.3.0] - 2026-04-27
 
  Adds the structural moat for fiber-cooperative I/O. No breaking changes.
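
The thread-decoupling property that the new spec locks in is easy to reproduce outside Hyperion. A standalone sketch using the `async` gem directly (not Hyperion's test helpers, which this diff doesn't show): five 200 ms waits on one OS thread complete in roughly one wait, not five.

```ruby
require 'async'

start = Process.clock_gettime(Process::CLOCK_MONOTONIC)

Async do |task|
  # Five "handlers" that each wait 200 ms. Under the fiber scheduler,
  # sleep parks the fiber rather than the OS thread, so the waits overlap.
  5.times.map { task.async { sleep 0.2 } }.each(&:wait)
end

elapsed_ms = (Process.clock_gettime(Process::CLOCK_MONOTONIC) - start) * 1000
puts format('wall: %.0f ms', elapsed_ms) # ~200 ms, not ~1000 ms
```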
data/README.md CHANGED
@@ -65,21 +65,45 @@ Bench is **wait-bound** — ~3-4 ms median is the PG + Redis round-trip, dwarfin
 
  ### Async I/O — fiber concurrency on PG-bound apps
 
- `bench/pg_concurrent.ru` (50 ms PG query per request, pool sized for the server's concurrency model). macOS, Postgres over WAN, wrk `-t4 -c200 -d20s`:
+ Ubuntu 24.04 / 16 vCPU / Ruby 3.3.3, Postgres 17 over WAN, `wrk -t4 -c200 -d20s`. Single worker (`-w 1`) unless noted. All configs returned 0 non-2xx and 0 timeouts. RSS sampled mid-run via `ps -o rss`.
 
- | | r/s | p99 |
- |---|---:|---:|
- | Puma 7.2 `-t 5` + plain pg (pool=5) | 88.9 | 2.31 s |
- | **Hyperion 1.3.0 `--async-io -t 5` + hyperion-async-pg (FiberPool=64)** | **1,103.7** | **237 ms** |
+ **Wait-bound workload** (`bench/pg_concurrent.ru`: `SELECT pg_sleep(0.05)` + tiny JSON):
 
- **12.4× throughput, 9.7× lower p99.** Puma is bottlenecked at `threads × 1 in-flight query` because plain `pg` blocks the OS thread on `recv()`. Hyperion + async-pg + a fiber-aware pool decouples concurrency from threads: 5 OS threads serve 64 concurrent in-flight queries via fiber cooperation. Theoretical ceiling at pool=64 + 50 ms query = 1280 r/s; achieved 1103 r/s = 86% of it.
+ | | r/s | p99 | RSS | vs Puma `-t 5` |
+ |---|---:|---:|---:|---:|
+ | Puma 8.0 `-t 5` pool=5 | 56.5 | 3.88 s | 87 MB | 1.0× |
+ | Puma 8.0 `-t 30` pool=30 | 402.1 | 880 ms | 99 MB | 7.1× |
+ | Puma 8.0 `-t 100` pool=100 | 1067.4 | 557 ms | 121 MB | 18.9× |
+ | **Hyperion `--async-io -t 5`** pool=32 | 400.4 | 878 ms | 123 MB | 7.1× |
+ | **Hyperion `--async-io -t 5`** pool=64 | 778.9 | 638 ms | 133 MB | 13.8× |
+ | **Hyperion `--async-io -t 5`** pool=128 | 1344.2 | 536 ms | 148 MB | 23.8× |
+ | **Hyperion `--async-io -t 5` pool=200** | **2381.4** | **471 ms** | **164 MB** | **42.2×** |
+ | Hyperion `--async-io -w 4 -t 5` pool=64 | 1937.5 | 4.84 s | 416 MB | 34.3× (cold-start p99 — see note) |
+ | Falcon 0.55.3 `--count 1` pool=128 | 1665.7 | 516 ms | 141 MB | 29.5× |
+
+ **Mixed CPU+wait** (`bench/pg_mixed.ru`: same query + 50-key JSON serialization, ~5 ms CPU):
+
+ | | r/s | p99 | RSS | vs Puma `-t 30` |
+ |---|---:|---:|---:|---:|
+ | Puma 8.0 `-t 30` pool=30 | 351.7 | 963 ms | 127 MB | 1.0× |
+ | Hyperion `--async-io -t 5` pool=32 | 371.2 | 919 ms | 151 MB | 1.05× |
+ | Hyperion `--async-io -t 5` pool=64 | 741.5 | 681 ms | 161 MB | 2.1× |
+ | **Hyperion `--async-io -t 5` pool=128** | **1739.9** | **512 ms** | **201 MB** | **4.9×** |
+ | Falcon `--count 1` pool=128 | 1642.1 | 531 ms | 213 MB | 4.7× |
+
+ **Takeaways:**
+ 1. **Linear scaling with pool size** under `--async-io` — `r/s ≈ pool × 12` on this WAN bench. The 50 ms `pg_sleep` caps each connection at 20 r/s, so the ceiling is `pool × 20`; the shortfall is consistent with WAN round-trip time stacked on top of the query. Single-worker pool=200 hits 2381 r/s, **42× Puma `-t 5`** and **2.2× Puma's best in the table** (`-t 100`).
+ 2. **Mixed workload doesn't kill the win** — Hyperion `--async-io` pool=128 actually goes *up* on mixed (1740 vs 1344 r/s) because CPU work overlaps other fibers' PG-wait windows. This is the honest "what happens to a real Rails handler" answer.
+ 3. **Hyperion ≈ Falcon** — at pool=128 Hyperion is ~6% ahead on mixed (1740 vs 1642 r/s) and ~19% behind on pure wait-bound (1344 vs 1666 r/s); both fiber-native architectures extract similar value from `hyperion-async-pg`.
+ 4. **RSS at single-worker scale isn't the architectural moat** — Linux thread stacks are demand-paged; PG connection buffers dominate RSS at pool sizes ≤ 200. The MB-vs-GB story shows up at **idle keep-alive connection scale** (10k+ conns), not in this PG-bound throughput bench. See [Concurrency at scale](#concurrency-at-scale-architectural-advantages) for the connection-count win.
+ 5. **`-w 4` cold-start caveat** — multi-worker p99 inflates because the bench rackup uses lazy per-process pool init (each worker pays full pool fill on its first request). Production apps avoid this with `on_worker_boot { Hyperion::AsyncPg::FiberPool.new(...).fill }`.
 
  Three things must all be true to get this win:
  1. **`async_io: true`** in your Hyperion config (or `--async-io` CLI flag). Default is off to keep 1.2.0's raw-loop perf for fiber-unaware apps.
  2. **`hyperion-async-pg`** installed: `gem 'hyperion-async-pg', require: 'hyperion/async_pg'` + `Hyperion::AsyncPg.install!` at boot.
- 3. **Fiber-aware connection pool.** The popular `connection_pool` gem is NOT — its Mutex blocks the OS thread. Use [`async-pool`](https://github.com/socketry/async-pool), `Async::Semaphore`, or hand-roll one (see `bench/pg_concurrent.ru` for a 30-line FiberPool example).
+ 3. **Fiber-aware connection pool.** The popular `connection_pool` gem is NOT — its Mutex blocks the OS thread. Use [`async-pool`](https://github.com/socketry/async-pool), `Async::Semaphore`, or hand-roll one (see `bench/pg_concurrent.ru` for a ~30-line FiberPool example, or the sketch below).
 
- Skip any of these and you get parity with Puma at the same `-t`. Run the bench yourself: `MODE=async DATABASE_URL=... PG_POOL_SIZE=64 bundle exec hyperion --async-io -t 5 bench/pg_concurrent.ru` (in the [hyperion-async-pg](https://github.com/andrew-woblavobla/hyperion-async-pg) repo).
+ Skip any of these and you get parity with Puma at the same `-t`. Run the bench yourself: `MODE=async DATABASE_URL=... PG_POOL_SIZE=200 bundle exec hyperion --async-io -t 5 bench/pg_concurrent.ru` (in the [hyperion-async-pg](https://github.com/andrew-woblavobla/hyperion-async-pg) repo).
 
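The hand-rolled pool from item 3 can be as small as an array guarded by a semaphore. A sketch, not the bench's actual FiberPool: it assumes the `async` gem and a `DATABASE_URL` environment variable.

```ruby
require 'async'
require 'async/semaphore'
require 'pg'

# Checkout parks the *fiber* on the semaphore, so the OS thread keeps
# driving other fibers' in-flight queries. This is exactly the property
# connection_pool's Mutex breaks: it would park the whole thread.
class SketchFiberPool
  def initialize(size)
    @connections = Array.new(size) { PG.connect(ENV.fetch('DATABASE_URL')) }
    @semaphore = Async::Semaphore.new(size)
  end

  def with
    @semaphore.acquire do
      conn = @connections.pop
      begin
        yield conn
      ensure
        @connections.push(conn)
      end
    end
  end
end

# Usage inside a fiber-served request:
#   POOL.with { |conn| conn.exec('SELECT pg_sleep(0.05)') }
```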
  ### CPU-bound JSON workload
 
@@ -240,6 +264,8 @@ log_requests true
 
  fiber_local_shim false
 
+ async_io false # When true, the plain HTTP/1.1 accept loop runs each connection on a fiber under Async::Scheduler instead of handing it to a worker thread. Required for fiber-cooperative I/O (e.g. hyperion-async-pg). ~5% throughput hit on hello-world; in exchange one OS thread serves N concurrent in-flight DB queries on wait-bound workloads. TLS / HTTP/2 paths always use the async loop and ignore this flag.
+
  before_fork do
    ActiveRecord::Base.connection_handler.clear_all_connections! if defined?(ActiveRecord)
  end
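
Putting the flag together with the hooks from the same DSL block: a hypothetical `hyperion.rb` for the async-io setup. `on_worker_boot` and the eager pool fill come from the bench's multi-worker caveat; the `FiberPool` constructor arguments are elided there, so the `size:` keyword below is made up for illustration.

```ruby
async_io true # opt in: plain-HTTP/1.1 connections served on accept-loop fibers

before_fork do
  ActiveRecord::Base.connection_handler.clear_all_connections! if defined?(ActiveRecord)
end

on_worker_boot do
  # Pay the pool-fill cost at boot instead of on each worker's first
  # request (the `-w 4` cold-start caveat). `size:` is illustrative only.
  Hyperion::AsyncPg::FiberPool.new(size: Integer(ENV.fetch('PG_POOL_SIZE', '64'))).fill
end
```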
@@ -304,6 +330,8 @@ The default-ON access log path is engineered to stay near-zero cost:
  | `parse_errors` | HTTP parse failures → 400. |
  | `app_errors` | Rack app raised → 500. |
  | `read_timeouts` | Per-connection read deadline hit. |
+ | `requests_threadpool_dispatched` | HTTP/1.1 connection handed to the worker pool (or served inline in `start_raw_loop` when `thread_count: 0`). The default dispatch path. |
+ | `requests_async_dispatched` | HTTP/1.1 connection served inline on the accept-loop fiber under `--async-io`. Operators can use the ratio against `requests_threadpool_dispatched` to verify fiber-cooperative I/O is actually engaged. |
 
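How an operator might act on that ratio, as a sketch only: the diff confirms the increment side (`Hyperion.metrics.increment`), but the read side is not shown, so `Hyperion.metrics.to_h` below is an assumed accessor. Substitute whatever snapshot call your metrics sink actually exposes.

```ruby
# `to_h` is an assumed snapshot accessor; only #increment appears in this diff.
counts = Hyperion.metrics.to_h
async  = counts.fetch(:requests_async_dispatched, 0)
pooled = counts.fetch(:requests_threadpool_dispatched, 0)
total  = async + pooled

if total.positive?
  ratio = 100.0 * async / total
  puts format('async-dispatched: %.1f%% of %d HTTP/1.1 connections', ratio, total)
  warn 'async_io appears disengaged' if ratio.zero?
end
```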
  ```ruby
  require 'hyperion'
data/lib/hyperion/server.rb CHANGED
@@ -151,11 +151,14 @@ module Hyperion
 
        apply_timeout(socket)
        if @thread_pool
-         unless @thread_pool.submit_connection(socket, @app,
-                                               max_request_read_seconds: @max_request_read_seconds)
+         if @thread_pool.submit_connection(socket, @app,
+                                           max_request_read_seconds: @max_request_read_seconds)
+           Hyperion.metrics.increment(:requests_threadpool_dispatched)
+         else
            reject_connection(socket)
          end
        else
+         Hyperion.metrics.increment(:requests_threadpool_dispatched)
          Connection.new.serve(socket, @app, max_request_read_seconds: @max_request_read_seconds)
        end
      end
@@ -180,7 +183,9 @@ module Hyperion
        if socket.is_a?(::OpenSSL::SSL::SSLSocket) && socket.alpn_protocol == 'h2'
          # HTTP/2: each stream runs on a fiber inside Http2Handler. The
          # handler still uses the pool's `#call` for app.call hops on each
-         # stream (one per stream, not one per connection).
+         # stream (one per stream, not one per connection). Per-stream
+         # counters live inside Http2Handler; we don't bump either of the
+         # H1 dispatch buckets here — neither fits the h2 model cleanly.
          Http2Handler.new(app: @app, thread_pool: @thread_pool, h2_settings: @h2_settings).serve(socket)
        elsif @async_io
          # async_io plain HTTP/1.1: serve inline on the calling fiber so the
@@ -190,17 +195,23 @@ module Hyperion
          # one thread can serve N concurrent in-flight DB queries. The
          # thread pool is intentionally bypassed here: handing the socket
          # to a worker thread strips the scheduler context.
+         Hyperion.metrics.increment(:requests_async_dispatched)
          Connection.new.serve(socket, @app, max_request_read_seconds: @max_request_read_seconds)
        elsif @thread_pool
          # HTTP/1.1 (e.g. TLS-wrapped after ALPN picked http/1.1): hand the
          # connection to a worker thread. The fiber that called dispatch
          # returns immediately. On overflow, reject with 503 + close.
-         unless @thread_pool.submit_connection(socket, @app,
-                                               max_request_read_seconds: @max_request_read_seconds)
+         if @thread_pool.submit_connection(socket, @app,
+                                           max_request_read_seconds: @max_request_read_seconds)
+           Hyperion.metrics.increment(:requests_threadpool_dispatched)
+         else
            reject_connection(socket)
          end
        else
-         # No pool (thread_count: 0): inline on the calling fiber.
+         # No pool (thread_count: 0) on the TLS / async-wrap path. Rare
+         # config — neither dispatch bucket fits cleanly. Leave un-counted
+         # rather than misclassify; the request still shows up in
+         # :requests_total via Connection.
          Connection.new.serve(socket, @app, max_request_read_seconds: @max_request_read_seconds)
        end
      end
data/lib/hyperion/version.rb CHANGED
@@ -1,5 +1,5 @@
  # frozen_string_literal: true
 
  module Hyperion
-   VERSION = '1.3.0'
+   VERSION = '1.3.1'
  end
metadata CHANGED
@@ -1,7 +1,7 @@
  --- !ruby/object:Gem::Specification
  name: hyperion-rb
  version: !ruby/object:Gem::Version
-   version: 1.3.0
+   version: 1.3.1
  platform: ruby
  authors:
  - Andrey Lobanov