nnq 0.6.1 → 0.8.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 19bceda0339689e8850efe39120590002821f5c859eaf33e4ea1065af317bcaa
- data.tar.gz: 2a15dd79b5cf4f8a41c564e89cf40af7296de8765e8e070de12057b4bc0dedf9
+ metadata.gz: 19ad156b056b2948c31b8447aef51f07d6fde7838c7f448b13cf3ebcc0b84ccf
+ data.tar.gz: d781f021467df323ac687bd08247c503b018d2961d8499e3cce50504b7df70a8
  SHA512:
- metadata.gz: 1e14a21e6d620df7770c27868f5fdb8ed945fb04137b8817c12fdaa2fdfca1e36a952e6fc4afc5be60562dfd1ff8f43d9d1781ce9a4fb6a1c3a8963cd3738233
- data.tar.gz: 13aae01f03df0b0eca53aa0329d0f2d35b036b4f52744f83033cc46297eae39a60b2c639ad144c9ee3af63cee7d3780c3fff036d30f418e078ccb89253f70761
+ metadata.gz: f74b75020a0ae60b9caf974e92e85cf9fdbd56c7e0bb9e819a7ea0e783b0f7f2036112ed0698c723a699eb070bcac684ed354192c425d750c8e9e858c1d24561
+ data.tar.gz: 1d4e6ac56699c2ec8706c595f419ef8e39278e7aee4bc1d02fcf75ee31d91c167d182c74ab8176a3e41dbaaf029ca5b50d49fb0148bcc9d804278f229cb0049e
data/CHANGELOG.md CHANGED
@@ -1,5 +1,95 @@
  # Changelog
 
+ ## 0.8.1 — 2026-04-19
+
+ - **Fix close-race in `ConnectionLifecycle#tear_down!`.** The fd was
+ closed before sibling pumps were cancelled, which woke recv fibers
+ parked in `io_wait` with `IOError: stream closed in another thread`.
+ `@barrier.stop` now runs before `@conn.close`, so blocking reads
+ unwind via `Async::Stop` before the fd goes away.
+
+ ## 0.8.0 — 2026-04-19
+
+ - **Uniform frozen + `BINARY` message contract across transports.**
+ `Socket#coerce_binary` replaces the old `frozen_binary` + `.b.freeze`
+ copy on the hot send path. Every send method runs its body through
+ `coerce_binary`, which:
+ - coerces non-String bodies via `#to_str` (nil / `42` / `:foo` raise
+ `NoMethodError` instead of producing a zero-byte frame);
+ - re-tags unfrozen non-BINARY bodies to `Encoding::BINARY` in place —
+ a flag flip, no copy;
+ - freezes the body.
+
+ Receivers always see a frozen BINARY-tagged body: TCP/IPC get it via
+ the recv-pump freeze, inproc gets it via `Pipe#send_message`, which
+ only allocates for the pathological case of a frozen non-BINARY body
+ (the typical `# frozen_string_literal: true` UTF-8 literal). Bodies
+ returned by REP/REQ/SURVEYOR/RESPONDENT (cooked and raw) are frozen
+ by `parse_backtrace` and the REQ/SURVEYOR id-parsing paths. Mutation
+ bugs surface as `FrozenError` instead of silently corrupting a shared
+ reference on the inproc fast path. Inproc throughput pays ~20-30%
+ for the contract; TCP/IPC unaffected.
+
+ - **Benchmarks send fresh strings per iteration.** `BenchHelper.run`
+ passes an unfrozen `"x" * size` through to the burst closure; the
+ `measure` / `measure_roundtrip` bursts `.dup` it before each send.
+ More realistic than reusing one frozen payload and hitting every
+ fast path in `coerce_binary` + `Pipe#send_message`.
+
+ ## 0.7.0 — 2026-04-18
+
+ - **Inproc transport now uses a queue-based `Inproc::Pipe`** instead
+ of a Unix `socketpair(2)` running the full SP protocol.
+ `NNQ::Transport::Inproc::Pipe` duck-types `NNQ::Connection` and
+ transfers frozen Strings through a pair of `Async::Queue`s (one
+ per direction). No framing, no handshake, no kernel buffer copy.
+ When a routing strategy supplies an SP backtrace header
+ (REQ/REP/SURVEYOR), it's prepended before enqueue so the receive
+ side sees the same layout as the TCP/IPC path and `parse_backtrace`
+ keeps working unchanged. The new `Engine#connection_ready(conn,
+ endpoint:)` and `ConnectionLifecycle#ready_direct!` entry points
+ register a pipe as ready without the SP handshake phase.
+ - **Inproc direct-recv fast path.** When a routing strategy exposes a
+ `#direct_recv_for(conn)` hook, the peer pipe enqueues directly into
+ the routing recv queue via `Pipe#wire_direct_recv`, bypassing both
+ the intermediate pipe queue and the recv pump fiber. PULL, BUS,
+ PAIR, SUB, REP, RESPONDENT, SURVEYOR, and the `*_raw` variants all
+ implement the hook; REQ (promise-based) stays on the fiber path.
+ Cuts three fiber hops to one on the steady-state recv path.
+ - **Routing pumps shed their `@pump_tasks` bookkeeping.** `bus`, `pub`,
+ `surveyor`, and `surveyor_raw` no longer track per-connection pump
+ tasks in a hash. Pumps are spawned under
+ `@engine.connections[conn].barrier`, so `ConnectionLifecycle#tear_down!`
+ already cascade-cancels them on `barrier.stop` — the hash was dead
+ weight.
+ - **Transport registry is pluggable.** `NNQ::Engine.transports` is now a
+ mutable class-level `Hash` instead of a frozen constant; each built-in
+ transport (`tcp`, `ipc`, `inproc`) self-registers at load with
+ `Engine.transports["…"] = self`. External transports (e.g. `nnq-zstd`'s
+ `zstd+tcp://`) can register themselves the same way.
+ - **`ConnectionLifecycle` calls `transport.wrap_connection(conn, engine)`
+ after handshake.** Transports that implement the hook can return a
+ delegating wrapper that layers compression / TLS / instrumentation
+ over the raw `NNQ::Connection` without the engine caring. Transports
+ without the hook (tcp/ipc/inproc) pass through unchanged.
+ - **`lib/nnq.rb` restructured to mirror `lib/omq.rb`.** Requires split
+ into Core / Transport / Socket-types sections. New
+ `lib/nnq/constants.rb` owns `MonitorEvent`, the `CONNECTION_LOST` /
+ `CONNECTION_FAILED` error arrays, and `NNQ.freeze_for_ractors!` — all
+ previously scattered across `engine.rb`, `reconnect.rb`,
+ `monitor_event.rb`, and the top-level `nnq.rb`. `monitor_event.rb` is
+ removed (absorbed into constants).
+ - **Benchmarks: richer scaffolding, measured via `Async::Clock`.**
+ `BenchHelper` gains `NNQ_BENCH_SIZES` / `NNQ_BENCH_TRANSPORTS` /
+ `NNQ_BENCH_PEERS` env overrides, a `measure_roundtrip` helper for
+ REQ/REP-style patterns, and a `wait_subscribed` helper that closes
+ the gap between TCP connect and SUBSCRIBE propagation. All elapsed
+ measurements use `Async::Clock.measure { … }` blocks instead of
+ `Process.clock_gettime`. `bench/report.rb --update-readme` now
+ falls back to the most recent row per cell across all history, so a
+ partial bench run refreshes only the cells it covers instead of
+ clobbering untouched cells with "—".
+
  ## 0.6.1 — 2026-04-15
 
  - **Verbose trace (`-vvv`) now fires for cooked REQ/REP/RESPONDENT
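
The `coerce_binary` contract described in the 0.8.0 entry can be read as a short piece of Ruby. The method itself is not included in this diff, so the following is only a sketch inferred from the changelog wording — the real `Socket#coerce_binary` may differ in details:

```ruby
# Inferred sketch of the send-path coercion described in 0.8.0.
def coerce_binary(body)
  # Non-String bodies (nil, 42, :foo) raise NoMethodError here instead of
  # silently becoming a zero-byte frame.
  body = body.to_str unless body.is_a?(String)

  # Unfrozen, non-BINARY bodies are re-tagged in place: a flag flip, no copy.
  if !body.frozen? && body.encoding != Encoding::BINARY
    body.force_encoding(Encoding::BINARY)
  end

  # Always hand a frozen body to the routing layer. A frozen non-BINARY body
  # passes through as-is; per the changelog, Pipe#send_message copies that
  # case on the inproc path.
  body.freeze
end
```
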
data/lib/nnq/bus.rb CHANGED
@@ -13,7 +13,7 @@ module NNQ
  #
  class BUS0 < Socket
  def send(body)
- body = frozen_binary(body)
+ body = coerce_binary(body)
  Reactor.run { @engine.routing.send(body) }
  end
 
data/lib/nnq/constants.rb ADDED
@@ -0,0 +1,58 @@
+ # frozen_string_literal: true
+
+ require "socket"
+ require "io/stream"
+
+ module NNQ
+ # Lifecycle event emitted by {Socket#monitor}.
+ #
+ # @!attribute [r] type
+ # @return [Symbol] event type (:listening, :connected, :disconnected, ...)
+ # @!attribute [r] endpoint
+ # @return [String, nil] the endpoint involved
+ # @!attribute [r] detail
+ # @return [Hash, nil] extra context
+ #
+ MonitorEvent = Data.define(:type, :endpoint, :detail) do
+ def initialize(type:, endpoint: nil, detail: nil)
+ super
+ end
+ end
+
+
+ # Errors that indicate an established connection went away. Used by
+ # the recv loop, routing pumps, and connection lifecycle to silently
+ # terminate (the connection lifecycle's #lost! handler decides
+ # whether to reconnect). Not frozen at load time — transport plugins
+ # append to this before the first bind/connect, which freezes both
+ # arrays.
+ CONNECTION_LOST = [
+ EOFError,
+ IOError,
+ Errno::ECONNRESET,
+ Errno::EPIPE,
+ ]
+
+
+ # Errors raised when a peer cannot be reached. Triggers a reconnect
+ # retry rather than propagating.
+ CONNECTION_FAILED = [
+ Errno::ECONNREFUSED,
+ Errno::EHOSTUNREACH,
+ Errno::ENETUNREACH,
+ Errno::ENOENT,
+ Errno::EPIPE,
+ Errno::ETIMEDOUT,
+ Socket::ResolutionError,
+ ]
+
+
+ # Freezes module-level state so NNQ sockets can be used inside Ractors.
+ # Call this once before spawning any Ractors that create NNQ sockets.
+ #
+ def self.freeze_for_ractors!
+ CONNECTION_LOST.freeze
+ CONNECTION_FAILED.freeze
+ Engine.transports.freeze
+ end
+ end
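
The comments in this new file describe the intended usage: plugins append their error classes while the arrays are still mutable, and `NNQ.freeze_for_ractors!` is called once before any Ractors are spawned. A small illustrative setup — the plugin error class and the `require "nnq"` entry point are assumptions, not taken from this diff:

```ruby
require "nnq"   # assumed top-level entry point (lib/nnq.rb)

# Hypothetical error raised by an external transport when a dial fails.
class MyTransportDialError < StandardError; end

# Append while the arrays are still mutable, i.e. before the first
# bind/connect.
NNQ::CONNECTION_FAILED << MyTransportDialError

# Freeze module-level state once, before spawning any Ractors that
# create NNQ sockets.
NNQ.freeze_for_ractors!
```
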
data/lib/nnq/engine/connection_lifecycle.rb CHANGED
@@ -104,6 +104,15 @@ module NNQ
  end
 
 
+ # Registers an already-ready connection object (e.g. an
+ # {Transport::Inproc::Pipe}) without running the SP handshake.
+ # Transports that frame on the wire should call {#handshake!}
+ # instead.
+ def ready_direct!(conn)
+ ready!(conn)
+ end
+
+
  # Starts a supervisor for this connection. Must be called after
  # all per-connection pumps (recv loop, send pump) have been
  # spawned on the connection barrier. The supervisor blocks until
@@ -121,6 +130,7 @@ module NNQ
  private
 
  def ready!(conn)
+ conn = wrap_connection(conn)
  @conn = conn
  @engine.connections[conn] = self
  transition!(:ready)
@@ -141,6 +151,11 @@ module NNQ
  def tear_down!(reconnect: false)
  return if @state == :closed
  transition!(:closed)
+ # Cancel sibling pumps BEFORE closing the fd. If we close first,
+ # any pump still parked in io_wait wakes up with
+ # `IOError: stream closed in another thread`. The caller is the
+ # supervisor task, which is NOT in the barrier — no self-stop.
+ @barrier.stop
  if @conn
  @engine.connections.delete(@conn)
  @engine.routing.connection_removed(@conn) if @engine.routing.respond_to?(:connection_removed)
@@ -149,10 +164,6 @@ module NNQ
  @engine.resolve_all_peers_gone_if_empty
  end
  @engine.maybe_reconnect(@endpoint) if reconnect
- # Cancel every sibling pump of this connection. The caller is
- # the supervisor task, which is NOT in the barrier — so there
- # is no self-stop risk.
- @barrier.stop
  end
 
 
@@ -170,6 +181,19 @@ module NNQ
  end
 
 
+ # Post-handshake transport wrap. A transport that implements
+ # `wrap_connection(conn)` (e.g. nnq-zstd's zstd+tcp) returns a
+ # delegating wrapper that adds a layer (compression, TLS, …)
+ # without the engine caring. Unknown or hook-less transports pass
+ # through unchanged.
+ def wrap_connection(conn)
+ return conn unless @endpoint
+ transport = @engine.transport_for(@endpoint)
+ return conn unless transport.respond_to?(:wrap_connection)
+ transport.wrap_connection(conn, @engine)
+ end
+
+
  # Handshake timeout: same logic as TCP.connect_timeout — derived
  # from reconnect_interval (floor 0.5s). Prevents a hang when the
  # peer accepts the TCP connection but never sends an SP greeting.
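
The engine side of the `wrap_connection` hook is shown above; the transport side is not part of this diff. A sketch of what a wrapping transport could look like — the delegator and its counting behaviour are purely illustrative, not the real `nnq-zstd` code:

```ruby
require "delegate"

module NNQ
  module Transport
    # Illustrative transport layer that counts messages on every
    # connection it wraps; registered like any other transport.
    module CountingTCP
      # Called by ConnectionLifecycle after the SP handshake, per the
      # 0.7.0 changelog entry.
      def self.wrap_connection(conn, engine)
        CountingConnection.new(conn)
      end

      # Delegates everything to the raw connection; only observes traffic.
      class CountingConnection < SimpleDelegator
        def receive_message
          @received = (@received || 0) + 1
          __getobj__.receive_message
        end
      end
    end
  end
end
```
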
data/lib/nnq/engine/reconnect.rb CHANGED
@@ -2,31 +2,6 @@
 
  module NNQ
  class Engine
- # Connection errors that should trigger a reconnect retry rather
- # than propagate. Mutable at load time so plugins (e.g. a future
- # TLS transport) can append their own error classes; frozen on
- # first {Engine#connect}.
- CONNECTION_FAILED = [
- Errno::ECONNREFUSED,
- Errno::EHOSTUNREACH,
- Errno::ENETUNREACH,
- Errno::ENOENT,
- Errno::EPIPE,
- Errno::ETIMEDOUT,
- Socket::ResolutionError,
- ]
-
- # Errors that indicate an established connection went away. Used
- # by the recv loop and pumps to silently terminate (the connection
- # lifecycle's #lost! handler decides whether to reconnect).
- CONNECTION_LOST = [
- EOFError,
- IOError,
- Errno::ECONNRESET,
- Errno::EPIPE,
- ]
-
-
  # Schedules reconnect attempts with exponential back-off.
  #
  # Runs a background task that loops until a connection is
@@ -61,7 +36,7 @@ module NNQ
  sleep quantized_wait(delay) if delay > 0
  break if @engine.closed?
  begin
- @engine.transport_for(@endpoint).connect(@endpoint, @engine)
+ @engine.transport_for(@endpoint).connect(@endpoint, @engine, **@engine.dial_opts_for(@endpoint))
  break
  rescue *CONNECTION_FAILED, *CONNECTION_LOST => e
  delay = next_delay(delay, max_delay)
data/lib/nnq/engine.rb CHANGED
@@ -4,16 +4,9 @@ require "async"
  require "async/clock"
  require "set"
  require "protocol/sp"
- require_relative "error"
- require_relative "connection"
- require_relative "monitor_event"
- require_relative "reactor"
  require_relative "engine/socket_lifecycle"
  require_relative "engine/connection_lifecycle"
  require_relative "engine/reconnect"
- require_relative "transport/tcp"
- require_relative "transport/ipc"
- require_relative "transport/inproc"
 
  module NNQ
  # Per-socket orchestrator. Owns the listener set, the connection map
@@ -25,11 +18,18 @@ module NNQ
  # no HWM bookkeeping, no mechanisms, no heartbeat, no monitor queue.
  #
  class Engine
- TRANSPORTS = {
- "tcp" => Transport::TCP,
- "ipc" => Transport::IPC,
- "inproc" => Transport::Inproc,
- }
+ # Scheme → transport module registry. Each transport file
+ # self-registers on require; plugins (e.g. nnq-zstd) add more:
+ #
+ # NNQ::Engine.transports["zstd+tcp"] = NNQ::Transport::ZstdTcp
+ #
+ @transports = {}
+
+
+ class << self
+ # @return [Hash{String => Module}] registered transports
+ attr_reader :transports
+ end
 
 
  # @return [Integer] our SP protocol id (e.g. Protocols::PUSH_V0)
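
Registration is just a hash write at load time. A minimal sketch of an external transport plugging into this registry — the "mem" scheme and module are made up for illustration; only the `Engine.transports[...] = self` pattern and the `bind`/`connect` keyword signatures come from this diff and the changelog:

```ruby
module NNQ
  module Transport
    # Hypothetical external transport for a made-up "mem" scheme.
    module Mem
      def self.bind(endpoint, engine, **opts)
        raise NotImplementedError, "return a listener responding to #start_accept_loop"
      end

      def self.connect(endpoint, engine, **opts)
        raise NotImplementedError, "dial the peer and register the connection"
      end

      # Self-registration at load time, as the changelog says tcp/ipc/inproc do.
      NNQ::Engine.transports["mem"] = self
    end
  end
end
```

Once this file is required, `transport_for("mem://…")` resolves to the module the same way it does for the built-in schemes.
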
@@ -95,6 +95,7 @@ module NNQ
  @monitor_queue = nil
  @verbose_monitor = false
  @dialed = Set.new
+ @dial_opts = {} # endpoint => kwargs for transport.connect on reconnect
  @routing = yield(self)
  end
 
@@ -191,9 +192,9 @@ module NNQ
 
 
  # Binds to +endpoint+. Synchronous: errors propagate.
- def bind(endpoint)
+ def bind(endpoint, **opts)
  transport = transport_for(endpoint)
- listener = transport.bind(endpoint, self)
+ listener = transport.bind(endpoint, self, **opts)
  listener.start_accept_loop(@lifecycle.barrier) do |io, framing = :tcp|
  handle_accepted(io, endpoint: endpoint, framing: framing)
  end
@@ -207,12 +208,13 @@ module NNQ
  # actual dial happens inside a background reconnect task that
  # retries with exponential back-off until the peer becomes
  # reachable. Inproc connect is synchronous and instant.
- def connect(endpoint)
+ def connect(endpoint, **opts)
  @dialed << endpoint
+ @dial_opts[endpoint] = opts unless opts.empty?
  @last_endpoint = endpoint
 
  if endpoint.start_with?("inproc://")
- transport_for(endpoint).connect(endpoint, self)
+ transport_for(endpoint).connect(endpoint, self, **opts)
  else
  emit_monitor_event(:connect_delayed, endpoint: endpoint)
  Reconnect.schedule(endpoint, @options, @lifecycle.barrier, self, delay: 0)
@@ -220,6 +222,14 @@ module NNQ
  end
  end
 
+ # Transport options captured from {#connect} for +endpoint+. Used by
+ # {Reconnect} to re-dial with the original kwargs. Empty hash for
+ # endpoints connected without extra options.
+ def dial_opts_for(endpoint)
+ @dial_opts[endpoint] || {}
+ end
+
+
  # Schedules a reconnect for +endpoint+ if auto-reconnect is enabled
  # and the endpoint is still in the dialed set. Called from the
  # connection lifecycle's `lost!` path.
@@ -235,7 +245,7 @@ module NNQ
  # transport from the URL each iteration.
  def transport_for(endpoint)
  scheme = endpoint[/\A([a-z+]+):\/\//i, 1] or raise Error, "no scheme: #{endpoint}"
- TRANSPORTS[scheme] or raise Error, "unsupported transport: #{scheme}"
+ Engine.transports[scheme] or raise Error, "unsupported transport: #{scheme}"
  end
 
 
@@ -263,6 +273,19 @@ module NNQ
  end
 
 
+ # Registers an already-connected, framing-free pipe (inproc). Skips
+ # the SP handshake entirely — {Transport::Inproc::Pipe} is a Ruby
+ # duck-type for {NNQ::Connection} and has no wire protocol.
+ def connection_ready(conn, endpoint:)
+ lifecycle = ConnectionLifecycle.new(self, endpoint: endpoint, framing: :inproc)
+ lifecycle.ready_direct!(conn)
+ spawn_recv_loop(conn) if @routing.respond_to?(:enqueue) && @connections.key?(conn)
+ lifecycle.start_supervisor!
+ rescue ConnectionRejected
+ # routing rejected this peer (e.g. PAIR already bonded)
+ end
+
+
  # Spawns a task under the given parent barrier (defaults to the
  # socket-level barrier). Used by routing strategies (e.g. PUSH send
  # pump) to attach long-lived fibers to the engine's lifecycle. The
@@ -345,9 +368,21 @@ module NNQ
 
 
  def spawn_recv_loop(conn)
+ # Inproc fast-path: wire the peer pipe to enqueue directly into
+ # the routing recv queue, skipping both the recv pump fiber and
+ # the intermediate pipe queue. Cuts three fiber hops to one on
+ # PUSH/PULL and peers.
+ if conn.is_a?(Transport::Inproc::Pipe) && conn.peer && @routing.respond_to?(:direct_recv_for)
+ queue, transform = @routing.direct_recv_for(conn)
+ if queue
+ conn.peer.wire_direct_recv(queue, transform)
+ return
+ end
+ end
+
  @connections[conn].barrier.async(annotation: "nnq recv #{conn.endpoint}") do
  loop do
- body = conn.receive_message
+ body = conn.receive_message.freeze
  if @verbose_monitor
  preview = @routing.respond_to?(:preview_body) ? @routing.preview_body(body) : body
  emit_verbose_msg_received(preview)
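
`Transport::Inproc::Pipe` itself is not included in this diff. Reading the 0.7.0 changelog entry together with the `spawn_recv_loop` fast path above, its core mechanics could plausibly look like the sketch below — a simplified illustration that skips the SP backtrace header, close handling, and the frozen/BINARY copy for frozen non-BINARY bodies:

```ruby
require "async"
require "async/queue"

module NNQ
  module Transport
    module Inproc
      # Simplified sketch of the queue-based pipe: one Async::Queue per
      # direction, plus the optional direct-recv wiring used by
      # Engine#spawn_recv_loop.
      class PipeSketch
        attr_accessor :peer
        attr_reader :endpoint

        def initialize(endpoint)
          @endpoint = endpoint
          @incoming = Async::Queue.new # messages destined for this end
        end

        # Engine#spawn_recv_loop calls this on the *peer* pipe so that its
        # sends land straight in the routing recv queue, bypassing both
        # @incoming and the recv pump fiber.
        def wire_direct_recv(queue, transform)
          @direct_queue = queue
          @direct_transform = transform
        end

        def send_message(body)
          body = body.freeze
          if @direct_queue
            item = @direct_transform ? @direct_transform.call(body) : body
            @direct_queue.enqueue(item) unless item.nil?
          else
            peer.deliver(body)
          end
        end

        def receive_message
          @incoming.dequeue
        end

        protected

        def deliver(body)
          @incoming.enqueue(body)
        end
      end
    end
  end
end
```

The direct branch mirrors the `direct_recv_for` hooks added in the routing diffs below; dropping a nil transform result is one way to honour transforms like REP's, which return nil when no backtrace is present.
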
data/lib/nnq/pair.rb CHANGED
@@ -10,7 +10,7 @@ module NNQ
  #
  class PAIR0 < Socket
  def send(body)
- body = frozen_binary(body)
+ body = coerce_binary(body)
  Reactor.run { @engine.routing.send(body) }
  end
 
data/lib/nnq/pub_sub.rb CHANGED
@@ -12,7 +12,7 @@ module NNQ
  #
  class PUB0 < Socket
  def send(body)
- body = frozen_binary(body)
+ body = coerce_binary(body)
  Reactor.run { @engine.routing.send(body) }
  end
 
data/lib/nnq/push_pull.rb CHANGED
@@ -11,7 +11,7 @@ module NNQ
  #
  class PUSH0 < Socket
  def send(body)
- body = frozen_binary(body)
+ body = coerce_binary(body)
  Reactor.run { @engine.routing.send(body) }
  end
 
data/lib/nnq/req_rep.rb CHANGED
@@ -18,7 +18,7 @@ module NNQ
  # raw mode — use {#send} / {#receive} there.
  def send_request(body)
  raise Error, "REQ#send_request not available in raw mode" if raw?
- body = frozen_binary(body)
+ body = coerce_binary(body)
  Reactor.run { @engine.routing.send_request(body) }
  end
 
@@ -29,7 +29,7 @@ module NNQ
  # cooked mode.
  def send(body, header:)
  raise Error, "REQ#send not available in cooked mode" unless raw?
- body = frozen_binary(body)
+ body = coerce_binary(body)
  Reactor.run { @engine.routing.send(body, header: header) }
  end
 
@@ -74,7 +74,7 @@ module NNQ
  # came from. Raises in raw mode.
  def send_reply(body)
  raise Error, "REP#send_reply not available in raw mode" if raw?
- body = frozen_binary(body)
+ body = coerce_binary(body)
  Reactor.run { @engine.routing.send_reply(body) }
  end
 
@@ -84,7 +84,7 @@ module NNQ
  # tuple). Silent drop if +to+ is closed. Raises in cooked mode.
  def send(body, to:, header:)
  raise Error, "REP#send not available in cooked mode" unless raw?
- body = frozen_binary(body)
+ body = coerce_binary(body)
  Reactor.run { @engine.routing.send(body, to: to, header: header) }
  end
 
@@ -27,7 +27,7 @@ module NNQ
  hops += 1
 
  if word.getbyte(0) & 0x80 != 0
- return [body.byteslice(0, offset), body.byteslice(offset..)]
+ return [body.byteslice(0, offset).freeze, body.byteslice(offset..).freeze]
  end
  end
 
@@ -22,7 +22,6 @@ module NNQ
  def initialize(engine)
  @engine = engine
  @queues = {} # conn => Async::LimitedQueue
- @pump_tasks = {} # conn => Async::Task
  @recv_queue = Async::Queue.new
  end
 
@@ -44,6 +43,13 @@ module NNQ
  end
 
 
+ # Inproc fast-path hook: peer pipe enqueues directly into the
+ # shared recv queue — identity transform, no backtrace or filter.
+ def direct_recv_for(_conn)
+ [@recv_queue, nil]
+ end
+
+
  # @return [String, nil] message body, or nil once the socket is closed
  def receive
  @recv_queue.dequeue
@@ -52,18 +58,13 @@ module NNQ
 
  def connection_added(conn)
  queue = Async::LimitedQueue.new(@engine.options.send_hwm)
- @queues[conn] = queue
- @pump_tasks[conn] = spawn_pump(conn, queue)
+ @queues[conn] = queue
+ spawn_pump(conn, queue)
  end
 
 
  def connection_removed(conn)
  @queues.delete(conn)
- task = @pump_tasks.delete(conn)
- return unless task
- return if task == Async::Task.current
- task.stop
- rescue IOError, Errno::EPIPE
  end
 
 
@@ -73,8 +74,6 @@ module NNQ
 
 
  def close
- @pump_tasks.each_value(&:stop)
- @pump_tasks.clear
  @queues.clear
  @recv_queue.enqueue(nil)
  end
@@ -48,6 +48,13 @@ module NNQ
  end
 
 
+ # Inproc fast-path hook: peer pipe enqueues straight into the
+ # local recv queue.
+ def direct_recv_for(_conn)
+ [@recv_queue, nil]
+ end
+
+
  # First-pipe-wins. Raising {ConnectionRejected} tells the
  # ConnectionLifecycle to tear down the just-registered connection
  # without ever exposing it to pumps.
@@ -19,9 +19,8 @@ module NNQ
  #
  class Pub
  def initialize(engine)
- @engine = engine
- @queues = {} # conn => Async::LimitedQueue
- @pump_tasks = {} # conn => Async::Task
+ @engine = engine
+ @queues = {} # conn => Async::LimitedQueue
  end
 
 
@@ -42,19 +41,13 @@ module NNQ
  # control into the new task body, which parks on queue.dequeue;
  # at that park the publisher fiber can run and must already see
  # this peer's queue.
- @queues[conn] = queue
- @pump_tasks[conn] = spawn_pump(conn, queue)
+ @queues[conn] = queue
+ spawn_pump(conn, queue)
  end
 
 
  def connection_removed(conn)
  @queues.delete(conn)
- task = @pump_tasks.delete(conn)
- return unless task
- return if task == Async::Task.current
- task.stop
- rescue IOError, Errno::EPIPE
- # pump was mid-flush; already unwinding
  end
 
 
@@ -65,8 +58,6 @@ module NNQ
 
 
  def close
- @pump_tasks.each_value(&:stop)
- @pump_tasks.clear
  @queues.clear
  end
 
@@ -22,6 +22,14 @@ module NNQ
  end
 
 
+ # Inproc fast-path hook: return the routing recv queue so the
+ # peer pipe can enqueue directly, skipping the recv pump fiber.
+ # Identity transform — PULL bodies are the user payload already.
+ def direct_recv_for(_conn)
+ [@queue, nil]
+ end
+
+
  # @return [String, nil] message body, or nil if the queue was closed
  def receive
  @queue.dequeue
@@ -89,6 +89,17 @@ module NNQ
  end
 
 
+ # Inproc fast-path hook: peer pipe parses the backtrace and
+ # enqueues the same [conn, btrace, payload] tuple the pump would.
+ def direct_recv_for(conn)
+ transform = lambda do |body|
+ btrace, payload = parse_backtrace(body)
+ btrace ? [conn, btrace, payload] : nil
+ end
+ [@recv_queue, transform]
+ end
+
+
  def connection_removed(conn)
  @mutex.synchronize do
  @pending = nil if @pending && @pending[0] == conn
@@ -56,6 +56,16 @@ module NNQ
  end
 
 
+ # Inproc fast-path hook.
+ def direct_recv_for(conn)
+ transform = lambda do |wire_bytes|
+ header, payload = parse_backtrace(wire_bytes)
+ header ? [conn, header, payload] : nil
+ end
+ [@recv_queue, transform]
+ end
+
+
  def close
  @recv_queue.enqueue(nil)
  end
@@ -71,7 +71,7 @@ module NNQ
  def enqueue(body, _conn)
  return if body.bytesize < 4
  id = body.unpack1("N")
- payload = body.byteslice(4..)
+ payload = body.byteslice(4..).freeze
  @mutex.synchronize do
  if @outstanding && @outstanding[0] == id