nnq 0.6.0 → 0.7.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 0a336ac1e24bc6210ddeac6731163baab6f8980d9eebbe3235fac13157416fcf
- data.tar.gz: f049bf9038235487966cae1c8920b452abf9984e3776b2b93cbbd95e067fb485
+ metadata.gz: c55340c77dffd3e4fdbc8419fb463e608a6e635adc577582e295b8af20fdc3e0
+ data.tar.gz: 6bb76f07c3732a4e69ad9a8e93a6a852f5ca04aac40fedc0df346ec9a2d74297
  SHA512:
- metadata.gz: dcef45943a41f1bc53bbcf8ecc0c34c2779e4a16dea04e8f78854cf6a9debe11168596b47b764728e49393f61c319d41e2d79e489b37dec809adc59cbc0741f0
- data.tar.gz: fd7aaa30c57d5b8fbfd6279aba8b96423e201837ca4ef0144a315ec020da5e0183aaede0c7327fc49bf1bd318894099e4fe02657a12aebf98c1d895582583c62
+ metadata.gz: 1cb190f96ace0a94363ea6a0b24b9d8b563363fbb3fb84e0bb4927a8ef818949270f20dec984828405bd15068400ace40aa5a9017d52321e2f9e10aa5235d973
+ data.tar.gz: 036a2a8203a7a26afad8d67fd23d9606d4947561ec0d8d31893b6efb116b62f623560afc6a4293fdbbd8846c39fa76aa9885eb47c9ab11900d552f5bc4377ecd
data/CHANGELOG.md CHANGED
@@ -1,5 +1,94 @@
  # Changelog

+ ## 0.7.0 — 2026-04-18
+
+ - **Inproc transport now uses a queue-based `Inproc::Pipe`** instead
+ of a Unix `socketpair(2)` running the full SP protocol.
+ `NNQ::Transport::Inproc::Pipe` duck-types `NNQ::Connection` and
+ transfers frozen Strings through a pair of `Async::Queue`s (one
+ per direction). No framing, no handshake, no kernel buffer copy.
+ When a routing strategy supplies an SP backtrace header
+ (REQ/REP/SURVEYOR), it's prepended before enqueue so the receive
+ side sees the same layout as the TCP/IPC path and `parse_backtrace`
+ keeps working unchanged. The new `Engine#connection_ready(conn,
+ endpoint:)` and `ConnectionLifecycle#ready_direct!` entry points
+ register a pipe as ready without the SP handshake phase.
+ - **Inproc direct-recv fast path.** When a routing strategy exposes a
+ `#direct_recv_for(conn)` hook, the peer pipe enqueues directly into
+ the routing recv queue via `Pipe#wire_direct_recv`, bypassing both
+ the intermediate pipe queue and the recv pump fiber. PULL, BUS,
+ PAIR, SUB, REP, RESPONDENT, SURVEYOR, and the `*_raw` variants all
+ implement the hook; REQ (promise-based) stays on the fiber path.
+ Cuts three fiber hops to one on the steady-state recv path.
+
+ Inproc PUSH/PULL single-peer throughput (Ruby 4.0.2):
+
+ | Size | Before (no JIT) | After (no JIT) | After (+YJIT) |
+ |---|---|---|---|
+ | 128 B | 122k msg/s | 350k msg/s | 1,226k msg/s |
+ | 2 KiB | 87k msg/s | 360k msg/s | 1,458k msg/s |
+ | 32 KiB | 21k msg/s | 261k msg/s | 887k msg/s |
+
+ - **Routing pumps shed their `@pump_tasks` bookkeeping.** `bus`, `pub`,
+ `surveyor`, and `surveyor_raw` no longer track per-connection pump
+ tasks in a hash. Pumps are spawned under
+ `@engine.connections[conn].barrier`, so `ConnectionLifecycle#tear_down!`
+ already cascade-cancels them on `barrier.stop` — the hash was dead
+ weight.
+ - **Transport registry is pluggable.** `NNQ::Engine.transports` is now a
+ mutable class-level `Hash` instead of a frozen constant; each built-in
+ transport (`tcp`, `ipc`, `inproc`) self-registers at load with
+ `Engine.transports["…"] = self`. External transports (e.g. `nnq-zstd`'s
+ `zstd+tcp://`) can register themselves the same way.
+ - **`ConnectionLifecycle` calls `transport.wrap_connection(conn, engine)`
+ after handshake.** Transports that implement the hook can return a
+ delegating wrapper that layers compression / TLS / instrumentation
+ over the raw `NNQ::Connection` without the engine caring. Transports
+ without the hook (tcp/ipc/inproc) pass through unchanged.
+ - **`lib/nnq.rb` restructured to mirror `lib/omq.rb`.** Requires split
+ into Core / Transport / Socket-types sections. New
+ `lib/nnq/constants.rb` owns `MonitorEvent`, the `CONNECTION_LOST` /
+ `CONNECTION_FAILED` error arrays, and `NNQ.freeze_for_ractors!` — all
+ previously scattered across `engine.rb`, `reconnect.rb`,
+ `monitor_event.rb`, and the top-level `nnq.rb`. `monitor_event.rb` is
+ removed (absorbed into constants).
+ - **Benchmarks: richer scaffolding, measured via `Async::Clock`.**
+ `BenchHelper` gains `NNQ_BENCH_SIZES` / `NNQ_BENCH_TRANSPORTS` /
+ `NNQ_BENCH_PEERS` env overrides, a `measure_roundtrip` helper for
+ REQ/REP-style patterns, and a `wait_subscribed` helper that closes
+ the gap between TCP connect and SUBSCRIBE propagation. All elapsed
+ measurements use `Async::Clock.measure { … }` blocks instead of
+ `Process.clock_gettime`. `bench/report.rb --update-readme` now
+ falls back to the most recent row per cell across all history, so a
+ partial bench run refreshes only the cells it covers instead of
+ clobbering untouched cells with "—".
+
+ ## 0.6.1 — 2026-04-15
+
+ - **Verbose trace (`-vvv`) now fires for cooked REQ/REP/RESPONDENT
+ sends.** Cooked `Req#send_request`, `Rep#send_reply`, and
+ `Respondent#send_reply` bypass `send_pump` and write to the
+ connection directly, so they were never emitting `:message_sent`
+ monitor events — `-vvv` only ever showed the `<<` recv side. Each
+ now calls `emit_verbose_msg_sent(body)` after the write. Raw
+ REQ/REP/RESPONDENT sends get the same treatment (raw surveyor
+ already emitted via its per-peer send pump).
+ - **Verbose recv previews strip the SP backtrace header.** The recv
+ loop used to emit the raw wire body, so `-vvv` traces for
+ REQ/REP/SURVEYOR/RESPONDENT showed the 4-byte request/survey id
+ (or a multi-word backtrace stack) in front of the payload. Routing
+ strategies now expose an optional `preview_body(wire)` hook; the
+ engine calls it before emitting `:message_received` so the trace
+ shows just the payload.
+ - **`Engine#close` drains the monitor queue before cancelling
+ tasks.** The monitor consumer fiber lives under the socket-level
+ barrier, so `barrier.stop` used to `Async::Stop` it before it had
+ a chance to drain trailing events. `close` now emits `:closed`,
+ enqueues the nil sentinel, and awaits the stored `monitor_task`
+ before stopping the barrier. Fixes flaky `-vvv` traces on
+ short-lived sockets where the last `:message_received` event
+ would occasionally be lost.
+
  ## 0.6.0 — 2026-04-15

  - **NNG-style raw mode for REQ/REP and SURVEYOR/RESPONDENT.** Constructing
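The pluggable transport registry described above ("each transport self-registers at load") can be sketched in plain Ruby. All class and module names below are illustrative stand-ins, not the gem's actual internals:

```ruby
# Minimal scheme -> transport registry: a mutable class-level Hash
# that each transport file populates when it is required, looked up
# by the URL scheme at bind/connect time.
class Engine
  @transports = {}

  class << self
    # Registered transports, keyed by scheme string.
    attr_reader :transports
  end

  def self.transport_for(endpoint)
    # Scheme may contain "+" for layered transports like "zstd+tcp".
    scheme = endpoint[/\A([a-z+]+):\/\//i, 1] or
      raise ArgumentError, "no scheme: #{endpoint}"
    @transports[scheme] or
      raise ArgumentError, "unsupported transport: #{scheme}"
  end
end

# A built-in transport self-registers when its file is loaded:
module TCPTransport
  Engine.transports["tcp"] = self
end

# An external plugin registers a layered scheme the same way:
module ZstdTCPTransport
  Engine.transports["zstd+tcp"] = self
end
```

With this shape, `Engine.transport_for("zstd+tcp://host:5555")` resolves to the plugin module, and unknown schemes fail loudly at dial time rather than at load time.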
@@ -0,0 +1,58 @@
+ # frozen_string_literal: true
+
+ require "socket"
+ require "io/stream"
+
+ module NNQ
+ # Lifecycle event emitted by {Socket#monitor}.
+ #
+ # @!attribute [r] type
+ # @return [Symbol] event type (:listening, :connected, :disconnected, ...)
+ # @!attribute [r] endpoint
+ # @return [String, nil] the endpoint involved
+ # @!attribute [r] detail
+ # @return [Hash, nil] extra context
+ #
+ MonitorEvent = Data.define(:type, :endpoint, :detail) do
+ def initialize(type:, endpoint: nil, detail: nil)
+ super
+ end
+ end
+
+
+ # Errors that indicate an established connection went away. Used by
+ # the recv loop, routing pumps, and connection lifecycle to silently
+ # terminate (the connection lifecycle's #lost! handler decides
+ # whether to reconnect). Not frozen at load time — transport plugins
+ # append to this before the first bind/connect, which freezes both
+ # arrays.
+ CONNECTION_LOST = [
+ EOFError,
+ IOError,
+ Errno::ECONNRESET,
+ Errno::EPIPE,
+ ]
+
+
+ # Errors raised when a peer cannot be reached. Triggers a reconnect
+ # retry rather than propagating.
+ CONNECTION_FAILED = [
+ Errno::ECONNREFUSED,
+ Errno::EHOSTUNREACH,
+ Errno::ENETUNREACH,
+ Errno::ENOENT,
+ Errno::EPIPE,
+ Errno::ETIMEDOUT,
+ Socket::ResolutionError,
+ ]
+
+
+ # Freezes module-level state so NNQ sockets can be used inside Ractors.
+ # Call this once before spawning any Ractors that create NNQ sockets.
+ #
+ def self.freeze_for_ractors!
+ CONNECTION_LOST.freeze
+ CONNECTION_FAILED.freeze
+ Engine.transports.freeze
+ end
+ end
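The mutable-at-load, frozen-before-Ractors discipline in this file follows a common pattern. A standalone sketch (module and error names here are hypothetical, not the gem's):

```ruby
# Error lists stay mutable at load time so plugins can append their
# own classes; a one-shot freeze later makes them Ractor-shareable
# and turns any late mutation into a loud FrozenError.
module MiniNNQ
  CONNECTION_LOST = [EOFError, IOError, Errno::ECONNRESET, Errno::EPIPE]

  def self.freeze_for_ractors!
    CONNECTION_LOST.freeze
  end
end

# A hypothetical plugin appends its error class before the freeze:
class TLSTornDown < IOError; end
MiniNNQ::CONNECTION_LOST << TLSTornDown

MiniNNQ.freeze_for_ractors!
# From here on the array is shareable; further appends raise FrozenError.
```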
@@ -104,6 +104,15 @@ module NNQ
  end


+ # Registers an already-ready connection object (e.g. an
+ # {Transport::Inproc::Pipe}) without running the SP handshake.
+ # Transports that frame on the wire should call {#handshake!}
+ # instead.
+ def ready_direct!(conn)
+ ready!(conn)
+ end
+
+
  # Starts a supervisor for this connection. Must be called after
  # all per-connection pumps (recv loop, send pump) have been
  # spawned on the connection barrier. The supervisor blocks until
@@ -121,6 +130,7 @@ module NNQ
  private

  def ready!(conn)
+ conn = wrap_connection(conn)
  @conn = conn
  @engine.connections[conn] = self
  transition!(:ready)
@@ -170,6 +180,19 @@ module NNQ
  end


+ # Post-handshake transport wrap. A transport that implements
+ # `wrap_connection(conn)` (e.g. nnq-zstd's zstd+tcp) returns a
+ # delegating wrapper that adds a layer (compression, TLS, …)
+ # without the engine caring. Unknown or hook-less transports pass
+ # through unchanged.
+ def wrap_connection(conn)
+ return conn unless @endpoint
+ transport = @engine.transport_for(@endpoint)
+ return conn unless transport.respond_to?(:wrap_connection)
+ transport.wrap_connection(conn, @engine)
+ end
+
+
  # Handshake timeout: same logic as TCP.connect_timeout — derived
  # from reconnect_interval (floor 0.5s). Prevents a hang when the
  # peer accepts the TCP connection but never sends an SP greeting.
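The `wrap_connection` hook above is the classic delegating-wrapper shape. A minimal sketch with `SimpleDelegator`, assuming nothing about the gem beyond what the hunk shows (the `Connection` and `LoggingTransport` names are invented for illustration):

```ruby
require "delegate"

# A raw connection with the one method the sketch needs.
class Connection
  def send_message(body, header: nil)
    "sent:#{body}"
  end
end

module LoggingTransport
  # The engine calls this hook after handshake when the transport
  # defines it; the wrapper layers instrumentation over the raw
  # connection and delegates everything else untouched.
  def self.wrap_connection(conn, _engine)
    Class.new(SimpleDelegator) do
      def send_message(body, header: nil)
        @sent = (@sent || 0) + 1          # instrumentation layer
        __getobj__.send_message(body, header: header)
      end

      def sent_count
        @sent || 0
      end
    end.new(conn)
  end
end

conn = Connection.new
# Hook-less transports pass through unchanged, as in the hunk above:
wrapped = if LoggingTransport.respond_to?(:wrap_connection)
  LoggingTransport.wrap_connection(conn, nil)
else
  conn
end
```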
@@ -2,31 +2,6 @@

  module NNQ
  class Engine
- # Connection errors that should trigger a reconnect retry rather
- # than propagate. Mutable at load time so plugins (e.g. a future
- # TLS transport) can append their own error classes; frozen on
- # first {Engine#connect}.
- CONNECTION_FAILED = [
- Errno::ECONNREFUSED,
- Errno::EHOSTUNREACH,
- Errno::ENETUNREACH,
- Errno::ENOENT,
- Errno::EPIPE,
- Errno::ETIMEDOUT,
- Socket::ResolutionError,
- ]
-
- # Errors that indicate an established connection went away. Used
- # by the recv loop and pumps to silently terminate (the connection
- # lifecycle's #lost! handler decides whether to reconnect).
- CONNECTION_LOST = [
- EOFError,
- IOError,
- Errno::ECONNRESET,
- Errno::EPIPE,
- ]
-
-
  # Schedules reconnect attempts with exponential back-off.
  #
  # Runs a background task that loops until a connection is
@@ -61,7 +36,7 @@ module NNQ
  sleep quantized_wait(delay) if delay > 0
  break if @engine.closed?
  begin
- @engine.transport_for(@endpoint).connect(@endpoint, @engine)
+ @engine.transport_for(@endpoint).connect(@endpoint, @engine, **@engine.dial_opts_for(@endpoint))
  break
  rescue *CONNECTION_FAILED, *CONNECTION_LOST => e
  delay = next_delay(delay, max_delay)
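The `next_delay(delay, max_delay)` call in this loop is a capped exponential back-off. A sketch of that shape — the initial delay, doubling factor, and cap values here are illustrative, not the gem's actual numbers, and the quantization step is omitted:

```ruby
# Capped exponential back-off: start small, double on each failed
# attempt, never exceed the ceiling.
def next_delay(delay, max_delay, initial: 0.1)
  return initial if delay.zero?
  [delay * 2, max_delay].min
end

# Simulate six consecutive failed dials:
delays = []
d = 0
6.times { d = next_delay(d, 2.0); delays << d }
# delays grows as 0.1, 0.2, 0.4, 0.8, 1.6 and then pins at the 2.0 cap
```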
data/lib/nnq/engine.rb CHANGED
@@ -4,16 +4,9 @@ require "async"
  require "async/clock"
  require "set"
  require "protocol/sp"
- require_relative "error"
- require_relative "connection"
- require_relative "monitor_event"
- require_relative "reactor"
  require_relative "engine/socket_lifecycle"
  require_relative "engine/connection_lifecycle"
  require_relative "engine/reconnect"
- require_relative "transport/tcp"
- require_relative "transport/ipc"
- require_relative "transport/inproc"

  module NNQ
  # Per-socket orchestrator. Owns the listener set, the connection map
@@ -25,11 +18,18 @@ module NNQ
  # no HWM bookkeeping, no mechanisms, no heartbeat, no monitor queue.
  #
  class Engine
- TRANSPORTS = {
- "tcp" => Transport::TCP,
- "ipc" => Transport::IPC,
- "inproc" => Transport::Inproc,
- }
+ # Scheme → transport module registry. Each transport file
+ # self-registers on require; plugins (e.g. nnq-zstd) add more:
+ #
+ # NNQ::Engine.transports["zstd+tcp"] = NNQ::Transport::ZstdTcp
+ #
+ @transports = {}
+
+
+ class << self
+ # @return [Hash{String => Module}] registered transports
+ attr_reader :transports
+ end


  # @return [Integer] our SP protocol id (e.g. Protocols::PUSH_V0)
@@ -70,6 +70,10 @@ module NNQ
  attr_accessor :monitor_queue


+ # @return [Async::Task, nil] the monitor consumer task, if any
+ attr_accessor :monitor_task
+
+
  # @return [Boolean] when true, {#emit_verbose_monitor_event} forwards
  # per-message traces (:message_sent / :message_received) to the
  # monitor queue. Set by {Socket#monitor} via its +verbose:+ kwarg.
@@ -91,6 +95,7 @@ module NNQ
  @monitor_queue = nil
  @verbose_monitor = false
  @dialed = Set.new
+ @dial_opts = {} # endpoint => kwargs for transport.connect on reconnect
  @routing = yield(self)
  end

@@ -187,9 +192,9 @@ module NNQ


  # Binds to +endpoint+. Synchronous: errors propagate.
- def bind(endpoint)
+ def bind(endpoint, **opts)
  transport = transport_for(endpoint)
- listener = transport.bind(endpoint, self)
+ listener = transport.bind(endpoint, self, **opts)
  listener.start_accept_loop(@lifecycle.barrier) do |io, framing = :tcp|
  handle_accepted(io, endpoint: endpoint, framing: framing)
  end
@@ -203,12 +208,13 @@ module NNQ
  # actual dial happens inside a background reconnect task that
  # retries with exponential back-off until the peer becomes
  # reachable. Inproc connect is synchronous and instant.
- def connect(endpoint)
+ def connect(endpoint, **opts)
  @dialed << endpoint
+ @dial_opts[endpoint] = opts unless opts.empty?
  @last_endpoint = endpoint

  if endpoint.start_with?("inproc://")
- transport_for(endpoint).connect(endpoint, self)
+ transport_for(endpoint).connect(endpoint, self, **opts)
  else
  emit_monitor_event(:connect_delayed, endpoint: endpoint)
  Reconnect.schedule(endpoint, @options, @lifecycle.barrier, self, delay: 0)
@@ -216,6 +222,14 @@ module NNQ
  end
  end


+ # Transport options captured from {#connect} for +endpoint+. Used by
+ # {Reconnect} to re-dial with the original kwargs. Empty hash for
+ # endpoints connected without extra options.
+ def dial_opts_for(endpoint)
+ @dial_opts[endpoint] || {}
+ end
+
+
  # Schedules a reconnect for +endpoint+ if auto-reconnect is enabled
  # and the endpoint is still in the dialed set. Called from the
  # connection lifecycle's `lost!` path.
@@ -231,7 +245,7 @@ module NNQ
  # transport from the URL each iteration.
  def transport_for(endpoint)
  scheme = endpoint[/\A([a-z+]+):\/\//i, 1] or raise Error, "no scheme: #{endpoint}"
- TRANSPORTS[scheme] or raise Error, "unsupported transport: #{scheme}"
+ Engine.transports[scheme] or raise Error, "unsupported transport: #{scheme}"
  end


@@ -259,6 +273,19 @@ module NNQ
  end


+ # Registers an already-connected, framing-free pipe (inproc). Skips
+ # the SP handshake entirely — {Transport::Inproc::Pipe} is a Ruby
+ # duck-type for {NNQ::Connection} and has no wire protocol.
+ def connection_ready(conn, endpoint:)
+ lifecycle = ConnectionLifecycle.new(self, endpoint: endpoint, framing: :inproc)
+ lifecycle.ready_direct!(conn)
+ spawn_recv_loop(conn) if @routing.respond_to?(:enqueue) && @connections.key?(conn)
+ lifecycle.start_supervisor!
+ rescue ConnectionRejected
+ # routing rejected this peer (e.g. PAIR already bonded)
+ end
+
+
  # Spawns a task under the given parent barrier (defaults to the
  # socket-level barrier). Used by routing strategies (e.g. PUSH send
  # pump) to attach long-lived fibers to the engine's lifecycle. The
@@ -286,6 +313,15 @@ module NNQ
  # collection mutates during iteration, so snapshot the values.
  @connections.values.each(&:close!)

+ # Emit :closed, seal the monitor queue, and wait for the monitor
+ # fiber to drain it before cancelling tasks. Without this join,
+ # trailing :message_received events that the recv pump enqueued
+ # just before close would be lost when the barrier.stop below
+ # Async::Stops the monitor fiber mid-dequeue.
+ emit_monitor_event(:closed)
+ close_monitor_queue
+ @monitor_task&.wait
+
  # Cascade-cancel every remaining task (reconnect loops, accept
  # loops, supervisors) in one shot.
  @lifecycle.barrier&.stop
@@ -295,8 +331,6 @@ module NNQ

  # Unblock anyone waiting on peer_connected when the socket is
  # closed before a peer ever arrived.
  @lifecycle.peer_connected.resolve(nil) unless @lifecycle.peer_connected.resolved?
- emit_monitor_event(:closed)
- close_monitor_queue
  end


@@ -334,10 +368,25 @@ module NNQ


  def spawn_recv_loop(conn)
+ # Inproc fast-path: wire the peer pipe to enqueue directly into
+ # the routing recv queue, skipping both the recv pump fiber and
+ # the intermediate pipe queue. Cuts three fiber hops to one on
+ # PUSH/PULL and peers.
+ if conn.is_a?(Transport::Inproc::Pipe) && conn.peer && @routing.respond_to?(:direct_recv_for)
+ queue, transform = @routing.direct_recv_for(conn)
+ if queue
+ conn.peer.wire_direct_recv(queue, transform)
+ return
+ end
+ end
+
  @connections[conn].barrier.async(annotation: "nnq recv #{conn.endpoint}") do
  loop do
  body = conn.receive_message
- emit_verbose_msg_received(body)
+ if @verbose_monitor
+ preview = @routing.respond_to?(:preview_body) ? @routing.preview_body(body) : body
+ emit_verbose_msg_received(preview)
+ end
  @routing.enqueue(body, conn)
  rescue *CONNECTION_LOST, Async::Stop
  break
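The direct-recv wiring above follows a simple contract: a target queue plus an optional transform that may drop a message by returning `nil`. A plain-Ruby sketch with `Thread::Queue` standing in for `Async::Queue` (the `MiniPipe` class and its method names are illustrative, not the gem's):

```ruby
# Direct-recv sketch: the sender enqueues straight into the
# consumer's recv queue through an optional transform, skipping any
# intermediate pipe queue.
class MiniPipe
  def initialize(queue)
    @queue = queue
    @transform = nil
  end

  # Wire the fast path: target queue plus a transform that may drop
  # (return nil for) messages, mirroring the direct_recv_for contract.
  def wire_direct_recv(queue, transform)
    @queue = queue
    @transform = transform
  end

  def send_message(body)
    body = @transform.call(body) if @transform
    @queue << body unless body.nil?
  end
end

recv_queue = Queue.new
pipe = MiniPipe.new(Queue.new)
# Transform strips a 4-byte header and drops frames that are too short:
pipe.wire_direct_recv(recv_queue, ->(b) { b.bytesize >= 4 ? b.byteslice(4..) : nil })

pipe.send_message("\x00\x00\x00\x01hello")
pipe.send_message("xx") # dropped by the transform
```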
@@ -22,7 +22,6 @@ module NNQ
  def initialize(engine)
  @engine = engine
  @queues = {} # conn => Async::LimitedQueue
- @pump_tasks = {} # conn => Async::Task
  @recv_queue = Async::Queue.new
  end

@@ -44,6 +43,13 @@ module NNQ
  end


+ # Inproc fast-path hook: peer pipe enqueues directly into the
+ # shared recv queue — identity transform, no backtrace or filter.
+ def direct_recv_for(_conn)
+ [@recv_queue, nil]
+ end
+
+
  # @return [String, nil] message body, or nil once the socket is closed
  def receive
  @recv_queue.dequeue
@@ -52,18 +58,13 @@ module NNQ

  def connection_added(conn)
  queue = Async::LimitedQueue.new(@engine.options.send_hwm)
- @queues[conn] = queue
- @pump_tasks[conn] = spawn_pump(conn, queue)
+ @queues[conn] = queue
+ spawn_pump(conn, queue)
  end


  def connection_removed(conn)
  @queues.delete(conn)
- task = @pump_tasks.delete(conn)
- return unless task
- return if task == Async::Task.current
- task.stop
- rescue IOError, Errno::EPIPE
  end


@@ -73,8 +74,6 @@ module NNQ


  def close
- @pump_tasks.each_value(&:stop)
- @pump_tasks.clear
  @queues.clear
  @recv_queue.enqueue(nil)
  end
@@ -48,6 +48,13 @@ module NNQ
  end


+ # Inproc fast-path hook: peer pipe enqueues straight into the
+ # local recv queue.
+ def direct_recv_for(_conn)
+ [@recv_queue, nil]
+ end
+
+
  # First-pipe-wins. Raising {ConnectionRejected} tells the
  # ConnectionLifecycle to tear down the just-registered connection
  # without ever exposing it to pumps.
@@ -19,9 +19,8 @@ module NNQ
  #
  class Pub
  def initialize(engine)
- @engine = engine
- @queues = {} # conn => Async::LimitedQueue
- @pump_tasks = {} # conn => Async::Task
+ @engine = engine
+ @queues = {} # conn => Async::LimitedQueue
  end


@@ -42,19 +41,13 @@ module NNQ
  # control into the new task body, which parks on queue.dequeue;
  # at that park the publisher fiber can run and must already see
  # this peer's queue.
- @queues[conn] = queue
- @pump_tasks[conn] = spawn_pump(conn, queue)
+ @queues[conn] = queue
+ spawn_pump(conn, queue)
  end


  def connection_removed(conn)
  @queues.delete(conn)
- task = @pump_tasks.delete(conn)
- return unless task
- return if task == Async::Task.current
- task.stop
- rescue IOError, Errno::EPIPE
- # pump was mid-flush; already unwinding
  end


@@ -65,8 +58,6 @@ module NNQ


  def close
- @pump_tasks.each_value(&:stop)
- @pump_tasks.clear
  @queues.clear
  end

@@ -22,6 +22,14 @@ module NNQ
  end


+ # Inproc fast-path hook: return the routing recv queue so the
+ # peer pipe can enqueue directly, skipping the recv pump fiber.
+ # Identity transform — PULL bodies are the user payload already.
+ def direct_recv_for(_conn)
+ [@queue, nil]
+ end
+
+
  # @return [String, nil] message body, or nil if the queue was closed
  def receive
  @queue.dequeue
@@ -70,6 +70,14 @@ module NNQ

  return if conn.closed?
  conn.send_message(body, header: btrace)
+ @engine.emit_verbose_msg_sent(body)
+ end
+
+
+ # Strips the backtrace header for verbose trace previews.
+ def preview_body(wire)
+ _, payload = parse_backtrace(wire)
+ payload || wire
  end


@@ -81,6 +89,17 @@ module NNQ
  end


+ # Inproc fast-path hook: peer pipe parses the backtrace and
+ # enqueues the same [conn, btrace, payload] tuple the pump would.
+ def direct_recv_for(conn)
+ transform = lambda do |body|
+ btrace, payload = parse_backtrace(body)
+ btrace ? [conn, btrace, payload] : nil
+ end
+ [@recv_queue, transform]
+ end
+
+
  def connection_removed(conn)
  @mutex.synchronize do
  @pending = nil if @pending && @pending[0] == conn
@@ -36,11 +36,18 @@ module NNQ
  return if to.closed?
  return if Backtrace.too_many_hops?(header)
  to.send_message(body, header: header)
+ @engine.emit_verbose_msg_sent(body)
  rescue ClosedError
  # peer went away between receive and send — drop
  end


+ def preview_body(wire)
+ _, payload = parse_backtrace(wire)
+ payload || wire
+ end
+
+
  # Called by the engine recv loop.
  def enqueue(wire_bytes, conn)
  header, payload = parse_backtrace(wire_bytes)
@@ -49,6 +56,16 @@ module NNQ
  end


+ # Inproc fast-path hook.
+ def direct_recv_for(conn)
+ transform = lambda do |wire_bytes|
+ header, payload = parse_backtrace(wire_bytes)
+ header ? [conn, header, payload] : nil
+ end
+ [@recv_queue, transform]
+ end
+
+
  def close
  @recv_queue.enqueue(nil)
  end
@@ -56,6 +56,7 @@ module NNQ
  conn = pick_peer
  header = [id].pack("N")
  conn.send_message(body, header: header)
+ @engine.emit_verbose_msg_sent(body)
  promise.wait
  ensure
  @mutex.synchronize do
@@ -81,6 +82,12 @@ module NNQ
  end


+ # Strips the 4-byte request id for verbose trace previews.
+ def preview_body(wire)
+ wire.byteslice(4..) || wire
+ end
+
+
  def close
  @mutex.synchronize do
  @outstanding&.last&.reject(NNQ::Error.new("REQ socket closed"))
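The framing the REQ hunks above rely on — a 4-byte big-endian request id packed with `[id].pack("N")` in front of the payload, stripped again with `byteslice(4..)` — can be exercised standalone (the helper names here are illustrative):

```ruby
# Frame a request: 4-byte big-endian id, then the payload.
def frame_request(id, body)
  [id].pack("N") + body
end

# Strip the id for trace previews; fall back to the raw frame when
# it is too short to carry one (mirroring `byteslice(4..) || wire`).
def preview_body(wire)
  wire.byteslice(4..) || wire
end

wire = frame_request(42, "ping")
id = wire.unpack1("N")
```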
@@ -26,6 +26,13 @@ module NNQ
  def send(body, header:)
  conn = pick_peer
  conn.send_message(body, header: header)
+ @engine.emit_verbose_msg_sent(body)
+ end
+
+
+ def preview_body(wire)
+ _, payload = parse_backtrace(wire)
+ payload || wire
  end


@@ -41,6 +48,16 @@ module NNQ
  end


+ # Inproc fast-path hook.
+ def direct_recv_for(conn)
+ transform = lambda do |wire_bytes|
+ header, payload = parse_backtrace(wire_bytes)
+ header ? [conn, header, payload] : nil
+ end
+ [@recv_queue, transform]
+ end
+
+
  def close
  @recv_queue.enqueue(nil)
  end
@@ -52,6 +52,14 @@ module NNQ

  return if conn.closed?
  conn.send_message(body, header: btrace)
+ @engine.emit_verbose_msg_sent(body)
+ end
+
+
+ # Strips the backtrace header for verbose trace previews.
+ def preview_body(wire)
+ _, payload = parse_backtrace(wire)
+ payload || wire
  end


@@ -63,6 +71,16 @@ module NNQ
  end


+ # Inproc fast-path hook.
+ def direct_recv_for(conn)
+ transform = lambda do |body|
+ btrace, payload = parse_backtrace(body)
+ btrace ? [conn, btrace, payload] : nil
+ end
+ [@recv_queue, transform]
+ end
+
+
  def connection_removed(conn)
  @mutex.synchronize do
  @pending = nil if @pending && @pending[0] == conn
@@ -29,10 +29,17 @@ module NNQ
  return if to.closed?
  return if Backtrace.too_many_hops?(header)
  to.send_message(body, header: header)
+ @engine.emit_verbose_msg_sent(body)
  rescue ClosedError
  end


+ def preview_body(wire)
+ _, payload = parse_backtrace(wire)
+ payload || wire
+ end
+
+
  def enqueue(wire_bytes, conn)
  header, payload = parse_backtrace(wire_bytes)
  return unless header
@@ -40,6 +47,16 @@ module NNQ
  end


+ # Inproc fast-path hook.
+ def direct_recv_for(conn)
+ transform = lambda do |wire_bytes|
+ header, payload = parse_backtrace(wire_bytes)
+ header ? [conn, header, payload] : nil
+ end
+ [@recv_queue, transform]
+ end
+
+
  def close
  @recv_queue.enqueue(nil)
  end
@@ -38,6 +38,13 @@ module NNQ
  end


+ # Inproc fast-path hook: filter via the subscription list in the
+ # transform, then enqueue only matching bodies.
+ def direct_recv_for(_conn)
+ [@queue, ->(body) { matches?(body) ? body : nil }]
+ end
+
+
  # @return [String, nil]
  def receive
  @queue.dequeue
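The SUB transform above drops non-matching bodies by returning `nil`. Prefix-style subscription matching of this kind can be sketched standalone (the `Subscriptions` class is illustrative, not the gem's internals; SP-style SUB conventionally treats the empty subscription as match-all):

```ruby
# Prefix subscription list: a body is delivered when it starts with
# any subscribed topic; the empty subscription matches everything.
class Subscriptions
  def initialize
    @topics = []
  end

  def subscribe(prefix)
    @topics << prefix
  end

  def matches?(body)
    @topics.any? { |t| body.start_with?(t) }
  end
end

subs = Subscriptions.new
subs.subscribe("weather.")
# The direct-recv transform then becomes a drop-or-pass lambda:
filter = ->(body) { subs.matches?(body) ? body : nil }
```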
@@ -24,7 +24,6 @@ module NNQ
  def initialize(engine)
  @engine = engine
  @queues = {} # conn => Async::LimitedQueue
- @pump_tasks = {} # conn => Async::Task
  @recv_queue = Async::Queue.new
  @current_id = nil
  @mutex = Mutex.new
@@ -78,22 +77,36 @@ module NNQ
  end


+ # Inproc fast-path hook. Transform filters replies by current
+ # survey id and strips the 4-byte header, mirroring #enqueue.
+ def direct_recv_for(_conn)
+ mutex = @mutex
+ transform = lambda do |body|
+ next nil if body.bytesize < 4
+ id = body.unpack1("N")
+ payload = body.byteslice(4..)
+ match = mutex.synchronize { @current_id == id }
+ match ? payload : nil
+ end
+ [@recv_queue, transform]
+ end
+
+
+ # Strips the 4-byte survey id for verbose trace previews.
+ def preview_body(wire)
+ wire.byteslice(4..) || wire
+ end
+
+
  def connection_added(conn)
- queue = Async::LimitedQueue.new(@engine.options.send_hwm)
- @queues[conn] = queue
- @pump_tasks[conn] = spawn_pump(conn, queue)
+ queue = Async::LimitedQueue.new(@engine.options.send_hwm)
+ @queues[conn] = queue
+ spawn_pump(conn, queue)
  end


  def connection_removed(conn)
  @queues.delete(conn)
- task = @pump_tasks.delete(conn)
-
- return unless task
- return if task == Async::Task.current
-
- task.stop
- rescue IOError, Errno::EPIPE
  end


@@ -103,8 +116,6 @@ module NNQ


  def close
- @pump_tasks.each_value(&:stop)
- @pump_tasks.clear
  @queues.clear
  @recv_queue.enqueue(nil)
  end
@@ -23,7 +23,6 @@ module NNQ
  def initialize(engine)
  @engine = engine
  @queues = {} # conn => Async::LimitedQueue
- @pump_tasks = {} # conn => Async::Task
  @recv_queue = Async::LimitedQueue.new(engine.options.recv_hwm)
  end

@@ -47,22 +46,31 @@ module NNQ
  end


+ # Inproc fast-path hook.
+ def direct_recv_for(conn)
+ transform = lambda do |wire_bytes|
+ header, payload = parse_backtrace(wire_bytes)
+ header ? [conn, header, payload] : nil
+ end
+ [@recv_queue, transform]
+ end
+
+
+ def preview_body(wire)
+ _, payload = parse_backtrace(wire)
+ payload || wire
+ end
+
+
  def connection_added(conn)
- queue = Async::LimitedQueue.new(@engine.options.send_hwm)
- @queues[conn] = queue
- @pump_tasks[conn] = spawn_pump(conn, queue)
+ queue = Async::LimitedQueue.new(@engine.options.send_hwm)
+ @queues[conn] = queue
+ spawn_pump(conn, queue)
  end


  def connection_removed(conn)
  @queues.delete(conn)
- task = @pump_tasks.delete(conn)
-
- return unless task
- return if task == Async::Task.current
-
- task.stop
- rescue IOError, Errno::EPIPE
  end


@@ -72,8 +80,6 @@ module NNQ


  def close
- @pump_tasks.each_value(&:stop)
- @pump_tasks.clear
  @queues.clear
  @recv_queue.enqueue(nil)
  end
data/lib/nnq/socket.rb CHANGED
@@ -2,11 +2,6 @@

  require "async/queue"

- require_relative "options"
- require_relative "engine"
- require_relative "monitor_event"
- require_relative "reactor"
-
  module NNQ
  # Socket base class. Subclasses (PUSH, PULL, ...) wire up a routing
  # strategy and the SP protocol id.
@@ -50,15 +45,15 @@ module NNQ
  end


- def bind(endpoint)
+ def bind(endpoint, **opts)
  ensure_parent_task
- Reactor.run { @engine.bind(endpoint) }
+ Reactor.run { @engine.bind(endpoint, **opts) }
  end


- def connect(endpoint)
+ def connect(endpoint, **opts)
  ensure_parent_task
- Reactor.run { @engine.connect(endpoint) }
+ Reactor.run { @engine.connect(endpoint, **opts) }
  end


@@ -126,13 +121,14 @@ module NNQ
  @engine.verbose_monitor = verbose

  Reactor.run do
- @engine.spawn_task(annotation: "nnq monitor") do
+ @engine.monitor_task = @engine.spawn_task(annotation: "nnq monitor") do
  while (event = queue.dequeue)
  block.call(event)
  end
  rescue Async::Stop
  ensure
  @engine.monitor_queue = nil
+ @engine.monitor_task = nil
  block.call(MonitorEvent.new(type: :monitor_stopped))
  end
  end
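The stored `monitor_task` exists so `Engine#close` can drain the monitor queue before cancelling tasks: enqueue a `nil` sentinel, then join the consumer. A plain-Ruby sketch of that drain-before-cancel pattern, with `Thread` and `Queue` standing in for the gem's fiber scheduler:

```ruby
# Drain-before-cancel: the consumer loops until it dequeues the nil
# sentinel, so joining it guarantees every trailing event enqueued
# before close was delivered — no event is lost to a hard stop.
queue = Queue.new
events = []

consumer = Thread.new do
  while (event = queue.pop)
    events << event
  end
end

queue << :message_received
queue << :closed
queue << nil      # sentinel: consumer exits after draining everything
consumer.join     # the analogue of awaiting monitor_task before barrier.stop
```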
data/lib/nnq/transport/inproc/pipe.rb ADDED
@@ -0,0 +1,131 @@
+# frozen_string_literal: true
+
+module NNQ
+  module Transport
+    module Inproc
+      # Queue-based in-process pipe. Duck-types {NNQ::Connection} so
+      # routing strategies, the recv loop, and the send pump work
+      # against it unchanged.
+      #
+      # No wire framing: bodies are transferred as frozen Strings
+      # through a pair of {Async::Queue} (one per direction). When an
+      # SP backtrace header is supplied (REQ/REP/SURVEYOR paths), it's
+      # prepended before enqueue so {#receive_message} returns an
+      # already-prefixed body — matching the TCP/IPC framing semantic
+      # so routing's `parse_backtrace` parses the same layout either
+      # way.
+      #
+      # Direct-recv fast path: when a routing strategy calls
+      # {#wire_direct_recv} on the peer side of a pipe pair, subsequent
+      # {#send_message} calls enqueue straight into the consumer's
+      # recv queue — the intermediate pipe queue and the recv pump
+      # fiber are both skipped. Cuts three fiber hops to one and is
+      # what lets inproc PUSH/PULL clear 1M msg/s on YJIT.
+      #
+      # Wiring happens synchronously inside {Transport::Inproc.connect}
+      # (before the call returns to the caller), so there's no window
+      # in which a send can precede a wire — no pending buffer needed.
+      #
+      # Close protocol: {#close} enqueues a `nil` sentinel onto the
+      # send side (or the direct queue if wired). The peer's recv loop
+      # sees `nil`, raises `EOFError`, and unwinds via its connection
+      # supervisor.
+      class Pipe
+        # @return [String, nil] endpoint URI this pipe was established on
+        attr_reader :endpoint
+
+        # @return [Pipe, nil] the other end of the pair
+        attr_accessor :peer
+
+        # @return [Async::Queue, nil] when non-nil, {#send_message}
+        #   enqueues here instead of into @send_queue.
+        attr_reader :direct_recv_queue
+
+
+        def initialize(send_queue:, recv_queue:, endpoint:)
+          @send_queue = send_queue
+          @recv_queue = recv_queue
+          @endpoint = endpoint
+          @closed = false
+          @peer = nil
+          @direct_recv_queue = nil
+          @direct_recv_transform = nil
+        end
+
+
+        # Wires the direct-recv fast path. After this call, messages
+        # sent on this pipe bypass the intermediate pipe queue and
+        # land directly in +queue+.
+        #
+        # @param queue [Async::Queue]
+        # @param transform [Proc, nil] optional per-message transform;
+        #   return nil to drop the message (used by filter/parse
+        #   strategies like SUB or REP).
+        def wire_direct_recv(queue, transform)
+          @direct_recv_transform = transform
+          @direct_recv_queue = queue
+        end
+
+
+        def send_message(body, header: nil)
+          raise ClosedError, "connection closed" if @closed
+          wire = header ? header + body : body
+
+          if (q = @direct_recv_queue)
+            item = @direct_recv_transform ? @direct_recv_transform.call(wire) : wire
+            q.enqueue(item) unless item.nil?
+          else
+            @send_queue.enqueue(wire)
+          end
+        end
+
+
+        alias write_message send_message
+
+
+        def write_messages(bodies)
+          raise ClosedError, "connection closed" if @closed
+
+          if (q = @direct_recv_queue)
+            transform = @direct_recv_transform
+            bodies.each do |body|
+              item = transform ? transform.call(body) : body
+              q.enqueue(item) unless item.nil?
+            end
+          else
+            bodies.each { |body| @send_queue.enqueue(body) }
+          end
+        end
+
+
+        # No-op — Async::Queue has no IO buffer to flush.
+        def flush
+          nil
+        end
+
+
+        def receive_message
+          item = @recv_queue.dequeue
+          raise EOFError, "connection closed" if item.nil?
+          item
+        end
+
+
+        def closed?
+          @closed
+        end
+
+
+        def close
+          return if @closed
+          @closed = true
+          # Close sentinel goes on whichever queue the peer is reading.
+          # When direct-wired, @send_queue is unused; hit the direct
+          # queue so the consumer unblocks.
+          (@direct_recv_queue || @send_queue).enqueue(nil)
+        end
+
+      end
+    end
+  end
+end
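The core of the new pipe (queue pair, optional header prepend, nil close sentinel) can be sketched outside the gem with stdlib `Thread::Queue` standing in for `Async::Queue`. `MiniPipe` and everything below is illustrative only, not nnq's actual code:

```ruby
# Minimal sketch of the queue-pair pipe idea. Thread::Queue replaces
# Async::Queue; MiniPipe is a hypothetical stand-in for Inproc::Pipe.
class MiniPipe
  def initialize(send_queue:, recv_queue:)
    @send_queue = send_queue
    @recv_queue = recv_queue
    @closed = false
  end

  # An optional backtrace header is prepended before enqueue, so the
  # receiver sees the same prefixed layout as a framed transport would.
  def send_message(body, header: nil)
    raise IOError, "connection closed" if @closed
    @send_queue << (header ? header + body : body)
  end

  def receive_message
    item = @recv_queue.pop
    raise EOFError, "connection closed" if item.nil?
    item
  end

  # Close protocol: a nil sentinel unblocks the peer's receive loop.
  def close
    return if @closed
    @closed = true
    @send_queue << nil
  end
end

# Wire a pair: one queue per direction, cross-connected.
a_to_b = Queue.new
b_to_a = Queue.new
client = MiniPipe.new(send_queue: a_to_b, recv_queue: b_to_a)
server = MiniPipe.new(send_queue: b_to_a, recv_queue: a_to_b)

client.send_message("body", header: "hdr|")
p server.receive_message  # => "hdr|body"
client.close
begin
  server.receive_message
rescue EOFError
  puts "peer closed"
end
```

No framing or handshake is needed because both ends share object references; the only protocol left is the nil sentinel.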
data/lib/nnq/transport/inproc.rb CHANGED
@@ -1,21 +1,24 @@
 # frozen_string_literal: true
 
-require "socket"
-require "io/stream"
+require "async/queue"
+
+require_relative "inproc/pipe"
 
 module NNQ
   module Transport
     # In-process transport. Both peers live in the same process and
-    # exchange frames over a Unix socketpair — no network, no address.
+    # exchange frozen Strings through a pair of {Async::Queue}s — no
+    # wire framing, no socketpair, no SP handshake.
     #
-    # Unlike omq's DirectPipe, inproc here still runs through
-    # Protocol::SP: the socketpair just replaces TCP. Kernel buffering
-    # across the pair is plenty to avoid contention for typical
-    # in-process message sizes, and reusing the SP handshake + framing
-    # keeps the transport ~40 LOC instead of a parallel Connection
-    # implementation.
+    # The historical implementation ran through a Unix `socketpair(2)`
+    # and the full SP protocol, making inproc roughly as expensive as
+    # IPC. Swapping to {Inproc::Pipe} (duck-types {NNQ::Connection})
+    # drops the kernel buffer copy, the framing encode/decode, and the
+    # handshake; inproc becomes a pure in-process queue transfer.
    #
    module Inproc
+      Engine.transports["inproc"] = self
+
      @registry = {}
      @mutex = Mutex.new
 
@@ -26,7 +29,7 @@ module NNQ
      # @param endpoint [String] e.g. "inproc://my-endpoint"
      # @param engine [Engine]
      # @return [Listener]
-      def bind(endpoint, engine)
+      def bind(endpoint, engine, **)
        @mutex.synchronize do
          raise Error, "inproc endpoint already bound: #{endpoint}" if @registry.key?(endpoint)
          @registry[endpoint] = engine
@@ -36,28 +39,27 @@ module NNQ
      end
 
 
-      # Connects +engine+ to a bound inproc endpoint. Creates a Unix
-      # socketpair, hands one side to the bound engine (accepted),
-      # the other to the connecting engine (connected). Both sides
-      # run the normal SP handshake concurrently.
+      # Connects +engine+ to a bound inproc endpoint. Creates a Pipe
+      # pair (one queue per direction) and registers each side with
+      # its owning engine via {Engine#connection_ready}. No handshake
+      # runs; both ends are live as soon as the pipes are wired.
      #
      # @param endpoint [String]
      # @param engine [Engine]
      # @return [void]
-      def connect(endpoint, engine)
+      def connect(endpoint, engine, **)
        bound = @mutex.synchronize { @registry[endpoint] }
        raise Error, "inproc endpoint not bound: #{endpoint}" unless bound
 
-        a, b = UNIXSocket.pair
+        a_to_b = Async::Queue.new
+        b_to_a = Async::Queue.new
+        client = Pipe.new(send_queue: a_to_b, recv_queue: b_to_a, endpoint: endpoint)
+        server = Pipe.new(send_queue: b_to_a, recv_queue: a_to_b, endpoint: endpoint)
+        client.peer = server
+        server.peer = client
 
-        # Handshake on the bound side must run concurrently with
-        # ours — if we called bound.handle_accepted synchronously
-        # it would block on reading our greeting before we've had
-        # a chance to write it.
-        bound.spawn_task(annotation: "nnq inproc accept #{endpoint}") do
-          bound.handle_accepted(IO::Stream::Buffered.wrap(b), endpoint: endpoint)
-        end
-        engine.handle_connected(IO::Stream::Buffered.wrap(a), endpoint: endpoint)
+        bound.connection_ready(server, endpoint: endpoint)
+        engine.connection_ready(client, endpoint: endpoint)
      end
 
 
@@ -84,7 +86,7 @@ module NNQ
      end
 
 
-      # No accept loop: inproc connects synchronously.
+      # No accept loop: inproc connects are fully synchronous.
      def start_accept_loop(_parent_task, &_on_accepted)
      end
 
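The direct-recv fast path documented in the Pipe comments amounts to: once wired, a send enqueues straight into the consumer's queue, optionally through a per-message transform that may drop the message by returning nil. A stdlib-only sketch; `DirectSender` and all names here are hypothetical, not nnq's API:

```ruby
# Sketch of the direct-recv fast path: sends bypass the intermediate
# pipe queue and land in the consumer's recv queue, optionally run
# through a transform that can drop a message by returning nil.
# Names are illustrative, not nnq's actual classes.
class DirectSender
  def initialize(fallback_queue)
    @fallback_queue = fallback_queue  # the normal per-pipe queue
    @direct_queue = nil
    @transform = nil
  end

  def wire_direct_recv(queue, transform = nil)
    @transform = transform
    @direct_queue = queue
  end

  def send_message(body)
    if (q = @direct_queue)
      item = @transform ? @transform.call(body) : body
      q << item unless item.nil?  # nil from the transform drops the message
    else
      @fallback_queue << body
    end
  end
end

consumer = Queue.new
sender = DirectSender.new(Queue.new)
# SUB-style filter: deliver only bodies with a matching topic prefix.
sender.wire_direct_recv(consumer, ->(b) { b.start_with?("topic.") ? b : nil })
sender.send_message("topic.a")
sender.send_message("other.b")  # dropped by the transform
p consumer.size  # => 1
```

Running the filter inline on the sender's fiber is what removes the intermediate queue hop and the receive-pump fiber from the hot path.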
data/lib/nnq/transport/ipc.rb CHANGED
@@ -13,13 +13,16 @@ module NNQ
    # verbatim.
    #
    module IPC
+      Engine.transports["ipc"] = self
+
+
      class << self
        # Binds an IPC server.
        #
        # @param endpoint [String] e.g. "ipc:///tmp/nnq.sock" or "ipc://@abstract"
        # @param engine [Engine]
        # @return [Listener]
-        def bind(endpoint, engine)
+        def bind(endpoint, engine, **)
          path = parse_path(endpoint)
          sock_path = to_socket_path(path)
 
@@ -35,7 +38,7 @@ module NNQ
        # @param endpoint [String]
        # @param engine [Engine]
        # @return [void]
-        def connect(endpoint, engine)
+        def connect(endpoint, engine, **)
          path = parse_path(endpoint)
          sock_path = to_socket_path(path)
          sock = UNIXSocket.new(sock_path)
data/lib/nnq/transport/tcp.rb CHANGED
@@ -11,13 +11,16 @@ module NNQ
    # accept inside an Async fiber.
    #
    module TCP
+      Engine.transports["tcp"] = self
+
+
      class << self
        # Binds a TCP server to +endpoint+.
        #
        # @param endpoint [String] e.g. "tcp://127.0.0.1:5570" or "tcp://127.0.0.1:0"
        # @param engine [Engine]
        # @return [Listener]
-        def bind(endpoint, engine)
+        def bind(endpoint, engine, **)
          host, port = parse_endpoint(endpoint)
          host = "0.0.0.0" if host == "*"
          server = TCPServer.new(host, port)
@@ -34,7 +37,7 @@ module NNQ
        # @param endpoint [String]
        # @param engine [Engine]
        # @return [void]
-        def connect(endpoint, engine)
+        def connect(endpoint, engine, **)
          host, port = parse_endpoint(endpoint)
          sock = ::Socket.tcp(host, port, connect_timeout: connect_timeout(engine.options))
 
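Each transport above now registers itself in an engine-level table at require time (`Engine.transports["tcp"] = self`, and likewise for ipc and inproc). The dispatch pattern this enables can be sketched as follows; `MiniEngine`, `FakeTCP`, and the scheme parsing are assumptions for illustration, not nnq's exact internals:

```ruby
# Sketch of scheme-based transport dispatch via a self-registration
# hash. Names are illustrative; nnq's Engine may differ in detail.
module MiniEngine
  def self.transports
    @transports ||= {}
  end

  # Looks up the transport module for an endpoint like "tcp://host:port".
  def self.transport_for(endpoint)
    scheme = endpoint.split("://", 2).first
    transports.fetch(scheme) { raise ArgumentError, "unknown transport: #{scheme}" }
  end
end

module FakeTCP
  MiniEngine.transports["tcp"] = self  # self-registration at load time

  def self.bind(endpoint, engine, **opts)
    "bound #{endpoint}"
  end
end

p MiniEngine.transport_for("tcp://127.0.0.1:5570").bind("tcp://127.0.0.1:5570", nil)
# => "bound tcp://127.0.0.1:5570"
```

Self-registration keeps each transport file independent: requiring it is enough to make its scheme routable, with no central table to edit.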
data/lib/nnq/version.rb CHANGED
@@ -1,5 +1,5 @@
 # frozen_string_literal: true
 
 module NNQ
-  VERSION = "0.6.0"
+  VERSION = "0.7.0"
 end
data/lib/nnq.rb CHANGED
@@ -1,23 +1,24 @@
 # frozen_string_literal: true
 
 require "protocol/sp"
+require "io/stream"
 
-module NNQ
-  # Freezes module-level state so NNQ sockets can be used inside Ractors.
-  # Call this once before spawning any Ractors that create NNQ sockets.
-  #
-  def self.freeze_for_ractors!
-    Engine::CONNECTION_FAILED.freeze
-    Engine::CONNECTION_LOST.freeze
-    Engine::TRANSPORTS.freeze
-  end
-end
 
+# Core
 require_relative "nnq/version"
-require_relative "nnq/error"
+require_relative "nnq/constants"
+require_relative "nnq/reactor"
 require_relative "nnq/options"
+require_relative "nnq/error"
 require_relative "nnq/connection"
 require_relative "nnq/engine"
+
+# Transport
+require_relative "nnq/transport/inproc"
+require_relative "nnq/transport/tcp"
+require_relative "nnq/transport/ipc"
+
+# Socket types
 require_relative "nnq/socket"
 require_relative "nnq/push_pull"
 require_relative "nnq/pair"
metadata CHANGED
@@ -1,7 +1,7 @@
 --- !ruby/object:Gem::Specification
 name: nnq
 version: !ruby/object:Gem::Version
-  version: 0.6.0
+  version: 0.7.0
 platform: ruby
 authors:
 - Patrik Wenger
@@ -66,12 +66,12 @@ files:
 - lib/nnq.rb
 - lib/nnq/bus.rb
 - lib/nnq/connection.rb
+- lib/nnq/constants.rb
 - lib/nnq/engine.rb
 - lib/nnq/engine/connection_lifecycle.rb
 - lib/nnq/engine/reconnect.rb
 - lib/nnq/engine/socket_lifecycle.rb
 - lib/nnq/error.rb
-- lib/nnq/monitor_event.rb
 - lib/nnq/options.rb
 - lib/nnq/pair.rb
 - lib/nnq/pub_sub.rb
@@ -97,6 +97,7 @@ files:
 - lib/nnq/socket.rb
 - lib/nnq/surveyor_respondent.rb
 - lib/nnq/transport/inproc.rb
+- lib/nnq/transport/inproc/pipe.rb
 - lib/nnq/transport/ipc.rb
 - lib/nnq/transport/tcp.rb
 - lib/nnq/version.rb
data/lib/nnq/monitor_event.rb DELETED
@@ -1,18 +0,0 @@
-# frozen_string_literal: true
-
-module NNQ
-  # Lifecycle event emitted by {Socket#monitor}.
-  #
-  # @!attribute [r] type
-  #   @return [Symbol] event type (:listening, :connected, :disconnected, ...)
-  # @!attribute [r] endpoint
-  #   @return [String, nil] the endpoint involved
-  # @!attribute [r] detail
-  #   @return [Hash, nil] extra context
-  #
-  MonitorEvent = Data.define(:type, :endpoint, :detail) do
-    def initialize(type:, endpoint: nil, detail: nil)
-      super
-    end
-  end
-end