omq 0.17.9 → 0.19.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 4041ada423542869c748bcef4e7d0957ad05f0a3e5198decef74c04a3c0a342a
- data.tar.gz: b667b4fee1c36caf02b91cb15a1f874fb745aa9d131c4d7c71f4cb4bf3ff4a46
+ metadata.gz: f01c18a7a887e0f2811abff2251f1b2c9b201091e800248fb852b92bedbcaf06
+ data.tar.gz: d32b3e07223d906a3bb25fc8542409bb5c05af36b509f7fe70c91419d0836cf9
  SHA512:
- metadata.gz: 66b7b32a35cccdebb9accbebf6e9aefa80c1d7ef9945de75146c764f587f281ecc9ca2b39e4baf1c5d6e6a1127d1c9194bfc446bd9c598991c7fbbb28c1be270
- data.tar.gz: 8a26404f5a9132cea10463c2948c079dddac5c92c43ec1ba840675f7a465e92501d7df9e8eb7b2136781acf9592051cf9e52baed4d02bcc971d7e2c052c1de20
+ metadata.gz: c0b26ad39e26fe586cec53371bc4b1beeaf357b51f32e90ad8d889a8413f62c08c13be2d8f787e3528d4fca6bccc1ceae21454474465aff25f9418c37460d957
+ data.tar.gz: b8d3acedee187c9c757268af173b55bd28daf502f8e054f92ef670c59a1781e2b2f73267425c239d27acc7cb036b0030ab562a12918c7db205dc34f4bcf8e38d
data/CHANGELOG.md CHANGED
@@ -1,5 +1,104 @@
  # Changelog
 
+ ## 0.19.0 — 2026-04-12
+
+ ### Added
+
+ - **Verbose-monitor helpers `Engine#emit_verbose_msg_sent` and
+ `#emit_verbose_msg_received`.** Used by `RecvPump` and every
+ send-pump routing strategy (`conn_send_pump`, `round_robin`,
+ `pair`, `fan_out`) to emit `:message_sent` / `:message_received`
+ monitor events with a connection reference. When the connection
+ exposes `#last_wire_size_out` / `#last_wire_size_in` (as the
+ `omq-rfc-zstd` `CompressionConnection` wrapper does), the event
+ detail includes `wire_size:` so verbose traces can annotate
+ compressed message previews with the post-compression byte count.
+ `RecvPump` now emits the trace *before* enqueueing the message
+ so the monitor fiber runs before the application fiber, which
+ preserves log-before-body ordering at `-vvv`.
+
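The `wire_size:` enrichment is a plain duck-typing check. A standalone sketch — the `PlainConn` / `CompressedConn` structs here are illustrative stand-ins, not OMQ classes:

```ruby
# Sketch of the enrichment pattern: the event detail gains :wire_size
# only when the connection duck-types the wire-size counter, so plain
# connections pay nothing and compression wrappers opt in implicitly.
def verbose_event_detail(conn, parts)
  detail = { parts: parts }
  detail[:wire_size] = conn.last_wire_size_out if conn.respond_to?(:last_wire_size_out)
  detail
end

PlainConn      = Struct.new(:io)                  # no wire-size counter
CompressedConn = Struct.new(:last_wire_size_out)  # e.g. a zstd wrapper

verbose_event_detail(PlainConn.new(nil), ["hi"])
# => { parts: ["hi"] }
verbose_event_detail(CompressedConn.new(42), ["hi"])
# => { parts: ["hi"], wire_size: 42 }
```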
+ ### Changed
+
+ - **`OMQ::Transport::TCP` normalizes host shorthands.** `tcp://*:PORT`
+ now binds *dual-stack* (both `0.0.0.0` and `::` on the same port,
+ with `IPV6_V6ONLY` set) rather than IPv4-only `0.0.0.0`, matching
+ [Puma v8.0.0's behavior](https://github.com/puma/puma/releases/tag/v8.0.0).
+ `tcp://:PORT`, `tcp://localhost:PORT`, and `tcp://*:PORT` on the
+ connect side all normalize to the loopback host — `::1` on
+ IPv6-capable machines (at least one non-loopback, non-link-local
+ IPv6 address), otherwise `127.0.0.1`. Explicit addresses
+ (`0.0.0.0`, `::`, `127.0.0.1`, `::1`) pass through unchanged.
+ Documented in `GETTING_STARTED.md` under "TCP host shorthands".
+ This normalization previously lived in `omq-cli` and is now
+ shared by all callers.
+
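The connect-side rules can be sketched as follows — a hypothetical `normalize_connect_host` standing in for the real `OMQ::Transport::TCP` method, whose actual name and signature may differ:

```ruby
require "socket"

# Hypothetical sketch of the connect-side normalization described above
# (the real implementation lives in OMQ::Transport::TCP). "", "*", and
# "localhost" collapse to the loopback host; explicit addresses pass
# through untouched.
def normalize_connect_host(host, ipv6_capable: ipv6_capable?)
  case host
  when "", "*", "localhost" then ipv6_capable ? "::1" : "127.0.0.1"
  else host # 0.0.0.0, ::, 127.0.0.1, ::1, real hostnames: unchanged
  end
end

# "IPv6-capable" as the changelog defines it: at least one
# non-loopback, non-link-local IPv6 address on any interface.
def ipv6_capable?
  Socket.ip_address_list.any? do |a|
    a.ipv6? && !a.ipv6_loopback? && !a.ipv6_linklocal?
  end
end

normalize_connect_host("localhost", ipv6_capable: false) # => "127.0.0.1"
normalize_connect_host("*", ipv6_capable: true)          # => "::1"
normalize_connect_host("::", ipv6_capable: true)         # => "::"
```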
+ - **TCP accept loop uses `Socket.tcp_server_sockets`** instead of
+ manually iterating `Addrinfo.getaddrinfo` + `TCPServer.new`.
+ `tcp_server_sockets` handles dual-stack port coordination and
+ `IPV6_V6ONLY` automatically. `Listener#servers` now holds
+ `Socket` instances rather than `TCPServer`; `#accept` returns
+ `[client, addrinfo]` pairs, which the accept loop destructures.
+
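A minimal stdlib demonstration of the difference (nothing OMQ-specific here): `Socket.tcp_server_sockets` yields plain `Socket`s whose `#accept` returns a `[client, addrinfo]` pair, unlike `TCPServer#accept`, which returns only the client.

```ruby
require "socket"

# Bind one loopback listener on an ephemeral port. With a wildcard host
# tcp_server_sockets would return one socket per address family and set
# IPV6_V6ONLY itself -- loopback keeps this demo to a single socket.
servers = Socket.tcp_server_sockets("127.0.0.1", 0)
server  = servers.first
port    = server.local_address.ip_port

client = TCPSocket.new("127.0.0.1", port)
accepted, addrinfo = server.accept # destructure the [client, addrinfo] pair

peer = addrinfo.ip_address # => "127.0.0.1"
[client, accepted, *servers].each(&:close)
```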
+ - **`Listener#start_accept_loops` uses `yield`** instead of capturing
+ the block as an explicit `&on_accepted` proc. The block is bound
+ to the enclosing method even when invoked from inside a spawned
+ `Async::Task`, so the explicit capture was unnecessary. Applies
+ to both TCP and IPC transports.
+
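The binding rule this relies on is plain Ruby block semantics. A minimal sketch with `Thread` standing in for the spawned `Async::Task` (method and symbol names here are illustrative):

```ruby
# `yield` inside the inner block still dispatches to the block passed
# to the enclosing method -- no explicit &on_accepted capture needed,
# even though the yield happens on another thread/task.
def start_accept_loop
  worker = Thread.new do
    yield :client_a # binds to start_accept_loop's block
  end
  worker.join
end

accepted = []
start_accept_loop { |c| accepted << c }
accepted # => [:client_a]
```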
+ ## 0.18.0 — 2026-04-12
+
+ ### Changed
+
+ - **Renamed `Socket#_attach` → `#attach_endpoints` and `#_init_engine` →
+ `#init_engine`.** Both are now public so plugin gems can call them
+ without reaching into private API. Internal callers updated.
+
+ - **Routing registry exposed via `Routing.registry`.** `omq.rb`'s
+ `freeze_for_ractors!` no longer reaches in via `instance_variable_get`.
+
+ ### Fixed
+
+ - **Test helper deadlock.** The `Kernel#Async` override in `test_helper.rb`
+ was wrapping every `Async do` block in a `with_timeout`, including
+ the reactor thread's own root task. With a 1s timeout the reactor
+ task died mid-suite and subsequent `Reactor.run` calls hung forever.
+ The override now only wraps blocks running on the main thread.
+
+ - **`wait_connected` test helper uses `Async::Barrier`** for parallel
+ fork-join across all sockets instead of a sequential `Async{}` array.
+
+ - **`examples/zguide/03_pipeline.rb` flake.** The example sent 20 tasks
+ to 3 PUSH workers and asserted that all three got some — but PUSH
+ work-stealing on inproc lets the first pump fiber to wake grab a
+ whole batch (256 messages) before yielding, so worker-0 always took
+ everything. Fixed by waiting on each worker's `peer_connected`
+ promise via `Async::Barrier` and bumping the burst above one
+ pump's batch cap.
+
+ ### Documentation
+
+ - **Documented work-stealing as a deviation from libzmq.** README
+ routing tables now say "Work-stealing" instead of "Round-robin"
+ for PUSH/REQ/DEALER/SCATTER/CLIENT, with a callout explaining the
+ burst-vs-steady distribution behavior. DESIGN.md's "Per-socket HWM"
+ section gained a user-visible-consequence note covering the same.
+
+ - **Lifecycle boundary docs.** `ConnectionLifecycle` and
+ `SocketLifecycle` now carry explicit class-level comments
+ delimiting their scopes (per-connection arc vs. per-socket state)
+ and referencing each other.
+
+ - **API doc fill-in.** Added missing YARD comments on
+ `RecvPump::FAIRNESS_MESSAGES` / `FAIRNESS_BYTES`,
+ `RecvPump#start_with_transform` / `#start_direct`, several
+ `FanOut` send-pump methods, and the TCP/IPC `apply_buffer_sizes`
+ helpers.
+
+ - **`Engine#drain_send_queues` flagged with TODO.** The 1 ms busy-poll
+ is non-trivial to fix cleanly (needs a "queue fully drained" signal
+ threaded through every routing strategy), so it's marked rather
+ than reworked here.
+
  ## 0.17.8 — 2026-04-10
 
  ### Fixed
data/README.md CHANGED
@@ -153,22 +153,24 @@ All sockets are thread-safe. Default HWM is 1000 messages per socket. `max_messa
 
  | Pattern | Send | Receive | When HWM full |
  |---------|------|---------|---------------|
- | **REQ** / **REP** | Round-robin / route-back | Fair-queue | Block |
+ | **REQ** / **REP** | Work-stealing / route-back | Fair-queue | Block |
  | **PUB** / **SUB** | Fan-out to subscribers | Subscription filter | Drop |
- | **PUSH** / **PULL** | Round-robin to workers | Fair-queue | Block |
- | **DEALER** / **ROUTER** | Round-robin / identity-route | Fair-queue | Block |
+ | **PUSH** / **PULL** | Work-stealing to workers | Fair-queue | Block |
+ | **DEALER** / **ROUTER** | Work-stealing / identity-route | Fair-queue | Block |
  | **XPUB** / **XSUB** | Fan-out (subscription events) | Fair-queue | Drop |
  | **PAIR** | Exclusive 1-to-1 | Exclusive 1-to-1 | Block |
 
+ > **Work-stealing vs. round-robin.** libzmq uses strict per-pipe round-robin for outbound load balancing — message N goes to peer N mod K regardless of whether that peer is busy. OMQ uses **work-stealing**: one shared send queue per socket and N pump fibers that race to drain it. Whichever pump is ready next picks up the next batch, so a slow peer can't stall the pipeline. The trade-off: distribution is not strict round-robin under bursts. If a producer enqueues a large burst before any pump fiber gets scheduled, the first pump to wake will dequeue up to one whole batch (256 messages or 512 KB, whichever hits first) in a single non-blocking drain — so a tight `n.times { sock << msg }` loop on a small `n` may dump everything on one peer. Slow or steady producers don't see this: each pump dequeues one message, writes, re-parks, and the FIFO wait queue gives every pump a fair turn. Burst distribution also evens out once the burst exceeds one pump's batch cap. See [DESIGN.md](DESIGN.md#per-socket-hwm-not-per-connection) for the full reasoning.
+
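The burst behavior in the callout can be reproduced with a toy model — threads standing in for pump fibers, the batch cap shrunk from 256 to 4, nothing OMQ-specific:

```ruby
# Shared-queue work-stealing in miniature: N workers race to drain one
# queue, each taking up to `batch` messages per wakeup before re-parking.
def work_steal(messages, workers: 3, batch: 4)
  q = Queue.new
  messages.each { |m| q << m }
  taken = Array.new(workers) { [] }
  workers.times.map { |i|
    Thread.new do
      loop do
        batch.times { taken[i] << q.pop(true) } # non-blocking drain
        sleep 0 # re-park, giving sibling workers a turn
      end
    rescue ThreadError # queue empty: this worker is done
    end
  }.each(&:join)
  taken.map(&:size)
end

work_steal((1..4).to_a)  # burst <= one batch: may all land on one worker
work_steal((1..40).to_a) # burst >> batch cap: distribution evens out
```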
  #### Draft (single-frame only)
 
  These require the `omq-draft` gem.
 
  | Pattern | Send | Receive | When HWM full |
  |---------|------|---------|---------------|
- | **CLIENT** / **SERVER** | Round-robin / routing-ID | Fair-queue | Block |
+ | **CLIENT** / **SERVER** | Work-stealing / routing-ID | Fair-queue | Block |
  | **RADIO** / **DISH** | Group fan-out | Group filter | Drop |
- | **SCATTER** / **GATHER** | Round-robin | Fair-queue | Block |
+ | **SCATTER** / **GATHER** | Work-stealing | Fair-queue | Block |
  | **PEER** | Routing-ID | Fair-queue | Block |
  | **CHANNEL** | Exclusive 1-to-1 | Exclusive 1-to-1 | Block |
 
@@ -2,7 +2,15 @@
 
  module OMQ
  class Engine
- # Owns the full arc of one connection: handshake → ready → closed.
+ # Owns the full arc of *one* connection: handshake → ready → closed.
+ #
+ # Scope boundary: ConnectionLifecycle tracks a single peer link
+ # (one ZMTP connection or one inproc DirectPipe). SocketLifecycle
+ # owns the socket-wide state above it — first-peer/last-peer
+ # signaling, reconnect enable flag, the parent task tree, and the
+ # open → closing → closed transitions that gate close-time drain.
+ # A socket has exactly one SocketLifecycle and zero-or-more
+ # ConnectionLifecycles beneath it.
  #
  # Centralizes the ordering of side effects (monitor events, routing
  # registration, promise resolution, reconnect scheduling) so the
@@ -19,10 +27,14 @@ module OMQ
  # lost connection.
  #
  class ConnectionLifecycle
- class InvalidTransition < RuntimeError; end
+
+ class InvalidTransition < RuntimeError
+ end
+
 
  STATES = %i[new handshaking ready closed].freeze
 
+
  TRANSITIONS = {
  new: %i[handshaking ready closed].freeze,
  handshaking: %i[ready closed].freeze,
@@ -34,12 +46,15 @@ module OMQ
  # @return [Protocol::ZMTP::Connection, Transport::Inproc::DirectPipe, nil]
  attr_reader :conn
 
+
  # @return [String, nil]
  attr_reader :endpoint
 
+
  # @return [Symbol] current state
  attr_reader :state
 
+
  # @return [Async::Barrier] holds all per-connection pump tasks
  # (send pump, recv pump, reaper, heartbeat). When the connection
  # is torn down, {#tear_down!} calls `@barrier.stop` to take down
@@ -58,6 +73,7 @@ module OMQ
  @done = done
  @state = :new
  @conn = nil
+
  # Nest the per-connection barrier under the socket-level barrier
  # so every pump spawned via +@barrier.async+ is also tracked by
  # the socket barrier — {Engine#stop}/{Engine#close} cascade
@@ -74,21 +90,26 @@ module OMQ
  #
  def handshake!(io, as_server:)
  transition!(:handshaking)
- conn = Protocol::ZMTP::Connection.new(
- io,
+ conn = Protocol::ZMTP::Connection.new io,
  socket_type: @engine.socket_type.to_s,
  identity: @engine.options.identity,
  as_server: as_server,
  mechanism: @engine.options.mechanism&.dup,
- max_message_size: @engine.options.max_message_size,
- )
- Async::Task.current.with_timeout(handshake_timeout) { conn.handshake! }
+ max_message_size: @engine.options.max_message_size
+
+ Async::Task.current.with_timeout(handshake_timeout) do
+ conn.handshake!
+ end
+
  Heartbeat.start(@barrier, conn, @engine.options, @engine.tasks)
  ready!(conn)
  @conn
  rescue Protocol::ZMTP::Error, *CONNECTION_LOST, Async::TimeoutError => error
- @engine.emit_monitor_event(:handshake_failed, endpoint: @endpoint, detail: { error: error })
+ @engine.emit_monitor_event :handshake_failed,
+ endpoint: @endpoint, detail: { error: error }
+
  conn&.close
+
  # Full tear-down with reconnect: without this, spawn_connection's
  # ensure-block close! sees :closed and skips maybe_reconnect,
  # leaving the endpoint dead. Race is exposed when a peer RSTs
@@ -128,6 +149,7 @@ module OMQ
 
  private
 
+
  def ready!(conn)
  conn = @engine.connection_wrapper.call(conn) if @engine.connection_wrapper
  @conn = conn
@@ -136,6 +158,7 @@ module OMQ
  @engine.routing.connection_added(@conn)
  @engine.peer_connected.resolve(@conn)
  transition!(:ready)
+
  # No supervisor if nothing to supervise: inproc DirectPipes
  # wire the recv/send paths synchronously (no task-based pumps),
  # and isolated unit tests use a FakeEngine without pumps at all.
@@ -182,6 +205,7 @@ module OMQ
  @done&.resolve(true)
  @engine.resolve_all_peers_gone_if_empty
  @engine.maybe_reconnect(@endpoint) if reconnect
+
  # Cancel every sibling pump of this connection. The caller is
  # the supervisor task, which is NOT in the barrier — so there
  # is no self-stop risk.
@@ -45,6 +45,7 @@ module OMQ
  end
  end
 
+
  private
 
 
@@ -60,7 +61,8 @@ module OMQ
  break
  rescue *CONNECTION_LOST, *CONNECTION_FAILED, Protocol::ZMTP::Error
  delay = next_delay(delay, max_delay)
- @engine.emit_monitor_event(:connect_retried, endpoint: @endpoint, detail: { interval: delay })
+ @engine.emit_monitor_event :connect_retried,
+ endpoint: @endpoint, detail: { interval: delay }
  end
  end
  end
@@ -106,6 +108,7 @@ module OMQ
  ri
  end
  end
+
  end
  end
  end
@@ -13,7 +13,16 @@ module OMQ
  # of a megamorphic `transform.call` dispatch inside a shared loop.
  #
  class RecvPump
+ # Max messages read from one connection before yielding to the
+ # scheduler. Prevents a busy peer from starving its siblings in
+ # fair-queue recv sockets.
  FAIRNESS_MESSAGES = 64
+
+
+ # Max bytes read from one connection before yielding. Only counted
+ # for ZMTP connections (inproc skips the check). Complements
+ # {FAIRNESS_MESSAGES}: small-message floods are bounded by count,
+ # large-message floods by bytes.
  FAIRNESS_BYTES = 1 << 20 # 1 MB
 
 
@@ -67,6 +76,14 @@ module OMQ
  private
 
 
+ # Recv loop with per-message transform (e.g. Marshal.load for
+ # cross-Ractor transport). Kept separate from {#start_direct} so
+ # YJIT sees a monomorphic transform.call site.
+ #
+ # @param parent [Async::Task, Async::Barrier]
+ # @param transform [Proc]
+ # @return [Async::Task]
+ #
  def start_with_transform(parent, transform)
  conn, recv_queue, engine, count_bytes = @conn, @recv_queue, @engine, @count_bytes
 
@@ -77,8 +94,12 @@ module OMQ
  while count < FAIRNESS_MESSAGES && bytes < FAIRNESS_BYTES
  msg = conn.receive_message
  msg = transform.call(msg).freeze
+ # Emit the verbose trace BEFORE enqueueing so the monitor
+ # fiber is woken before the application fiber -- the
+ # async scheduler is FIFO on the ready list, so this
+ # preserves log-before-stdout ordering for -vvv traces.
+ engine.emit_verbose_msg_received(conn, msg)
  recv_queue.enqueue(msg)
- engine.emit_verbose_monitor_event(:message_received, parts: msg)
  count += 1
  bytes += msg.sum(&:bytesize) if count_bytes
  end
@@ -93,6 +114,11 @@ module OMQ
  end
 
 
+ # Recv loop without transform — the hot path for native OMQ use.
+ #
+ # @param parent [Async::Task, Async::Barrier]
+ # @return [Async::Task]
+ #
  def start_direct(parent)
  conn, recv_queue, engine, count_bytes = @conn, @recv_queue, @engine, @count_bytes
 
@@ -102,8 +128,8 @@ module OMQ
  bytes = 0
  while count < FAIRNESS_MESSAGES && bytes < FAIRNESS_BYTES
  msg = conn.receive_message
+ engine.emit_verbose_msg_received(conn, msg)
  recv_queue.enqueue(msg)
- engine.emit_verbose_monitor_event(:message_received, parts: msg)
  count += 1
  bytes += msg.sum(&:bytesize) if count_bytes
  end
@@ -116,6 +142,7 @@ module OMQ
  @engine.signal_fatal_error(error)
  end
  end
+
  end
  end
  end
@@ -6,6 +6,13 @@ module OMQ
  # the first-peer / last-peer signaling promises, the reconnect flag,
  # and the captured parent task for the socket's task tree.
  #
+ # Scope boundary: SocketLifecycle is per-socket and outlives every
+ # individual peer link. ConnectionLifecycle is per-connection and
+ # handles one handshake → ready → closed arc beneath it. Roughly:
+ # SocketLifecycle answers "is this socket open and do we have any
+ # peers?", ConnectionLifecycle answers "is this specific peer link
+ # ready / lost?".
+ #
  # Engine delegates state queries here and uses it to coordinate the
  # ordering of close-time side effects. This consolidates six ivars
  # (`@state`, `@peer_connected`, `@all_peers_gone`, `@reconnect_enabled`,
@@ -13,10 +20,13 @@ module OMQ
  # explicit transitions.
  #
  class SocketLifecycle
- class InvalidTransition < RuntimeError; end
+ class InvalidTransition < RuntimeError
+ end
+
 
  STATES = %i[new open closing closed].freeze
 
+
  TRANSITIONS = {
  new: %i[open closed].freeze,
  open: %i[closing closed].freeze,
@@ -28,27 +38,33 @@ module OMQ
  # @return [Symbol]
  attr_reader :state
 
+
  # @return [Async::Promise] resolves with the first connected peer
  attr_reader :peer_connected
 
+
  # @return [Async::Promise] resolves once all peers are gone (after having had peers)
  attr_reader :all_peers_gone
 
+
  # @return [Async::Task, Async::Barrier, Async::Semaphore, nil] root of
  # the socket's task tree (may be user-provided via +parent:+ on
  # {Socket#bind} / {Socket#connect}; falls back to the current
  # Async task or the shared Reactor root)
  attr_reader :parent_task
 
+
  # @return [Boolean] true if parent_task is the shared Reactor thread
  attr_reader :on_io_thread
 
+
  # @return [Async::Barrier] holds every socket-scoped task (connection
  # supervisors, reconnect loops, heartbeat, monitor, accept loops).
  # {Engine#stop} and {Engine#close} call +barrier.stop+ to cascade
  # teardown through every per-connection barrier in one shot.
  attr_reader :barrier
 
+
  # @return [Boolean] whether auto-reconnect is enabled
  attr_accessor :reconnect_enabled
 
@@ -128,6 +144,7 @@ module OMQ
 
  private
 
+
  def transition!(new_state)
  allowed = TRANSITIONS[@state]
  unless allowed&.include?(new_state)
@@ -135,6 +152,7 @@ module OMQ
  end
  @state = new_state
  end
+
  end
  end
  end
data/lib/omq/engine.rb CHANGED
@@ -20,6 +20,7 @@ module OMQ
  #
  @transports = {}
 
+
  class << self
  # @return [Hash{String => Module}] registered transports
  attr_reader :transports
@@ -83,6 +84,10 @@ module OMQ
  # @param value [Async::Queue, nil] queue for monitor events
  #
  attr_writer :monitor_queue
+
+
+ # @return [Boolean] when true, every monitor event is also printed
+ # to stderr for debugging. Set via {Socket#monitor}.
  attr_accessor :verbose_monitor
 
 
@@ -96,6 +101,7 @@ module OMQ
  @lifecycle.reconnect_enabled = value
  end
 
+
  # Optional proc that wraps new connections (e.g. for serialization).
  # Called with the raw connection; must return the (possibly wrapped) connection.
  #
@@ -399,7 +405,7 @@ module OMQ
  def signal_fatal_error(error)
  return unless @lifecycle.open?
  @fatal_error = begin
- raise OMQ::SocketDeadError, "internal error killed #{@socket_type} socket"
+ raise SocketDeadError, "internal error killed #{@socket_type} socket"
  rescue => wrapped
  wrapped
  end
@@ -455,6 +461,28 @@ module OMQ
  end
 
 
+ # Emits a :message_sent verbose event and enriches it with the
+ # on-wire (post-compression) byte size if +conn+ exposes
+ # +last_wire_size_out+ (installed by ZMTP-Zstd etc.).
+ def emit_verbose_msg_sent(conn, parts)
+ return unless @verbose_monitor
+ detail = { parts: parts }
+ detail[:wire_size] = conn.last_wire_size_out if conn.respond_to?(:last_wire_size_out)
+ emit_monitor_event(:message_sent, detail: detail)
+ end
+
+
+ # Emits a :message_received verbose event and enriches it with the
+ # on-wire (pre-decompression) byte size if +conn+ exposes
+ # +last_wire_size_in+.
+ def emit_verbose_msg_received(conn, parts)
+ return unless @verbose_monitor
+ detail = { parts: parts }
+ detail[:wire_size] = conn.last_wire_size_in if conn.respond_to?(:last_wire_size_in)
+ emit_monitor_event(:message_received, detail: detail)
+ end
+
+
  # Looks up the transport module for an endpoint URI.
  #
  # @param endpoint [String] endpoint URI (e.g. "tcp://...", "inproc://...")
@@ -467,8 +495,10 @@ module OMQ
  raise ArgumentError, "unsupported transport: #{endpoint}"
  end
 
+
  private
 
+
  def spawn_connection(io, as_server:, endpoint: nil)
  task = @lifecycle.barrier&.async(transient: true, annotation: "conn #{endpoint}") do
  done = Async::Promise.new
@@ -488,6 +518,11 @@ module OMQ
  end
 
 
+ # TODO: replace the 1 ms busy-poll with a promise/condition that
+ # the send pump resolves when its queue hits empty. The loop exists
+ # because there is currently no signal for "send queue fully
+ # drained"; fixing it cleanly requires plumbing a notifier through
+ # every routing strategy, so it is flagged rather than fixed here.
  def drain_send_queues(timeout)
  return unless @routing.respond_to?(:send_queues_drained?)
  deadline = timeout ? Async::Clock.now + timeout : nil
data/lib/omq/pair.rb CHANGED
@@ -12,8 +12,8 @@ module OMQ
  # @param backend [Symbol, nil] :ruby (default) or :ffi
  #
  def initialize(endpoints = nil, linger: 0, backend: nil)
- _init_engine(:PAIR, linger: linger, backend: backend)
- _attach(endpoints, default: :connect)
+ init_engine(:PAIR, linger: linger, backend: backend)
+ attach_endpoints(endpoints, default: :connect)
  end
  end
  end
data/lib/omq/pub_sub.rb CHANGED
@@ -13,8 +13,8 @@ module OMQ
  # @param backend [Symbol, nil] :ruby (default) or :ffi
  #
  def initialize(endpoints = nil, linger: 0, on_mute: :drop_newest, conflate: false, backend: nil)
- _init_engine(:PUB, linger: linger, on_mute: on_mute, conflate: conflate, backend: backend)
- _attach(endpoints, default: :bind)
+ init_engine(:PUB, linger: linger, on_mute: on_mute, conflate: conflate, backend: backend)
+ attach_endpoints(endpoints, default: :bind)
  end
  end
 
@@ -36,8 +36,8 @@ module OMQ
  # @param on_mute [Symbol] :block (default), :drop_newest, or :drop_oldest
  #
  def initialize(endpoints = nil, linger: 0, subscribe: nil, on_mute: :block, backend: nil)
- _init_engine(:SUB, linger: linger, on_mute: on_mute, backend: backend)
- _attach(endpoints, default: :connect)
+ init_engine(:SUB, linger: linger, on_mute: on_mute, backend: backend)
+ attach_endpoints(endpoints, default: :connect)
  self.subscribe(subscribe) unless subscribe.nil?
  end
 
@@ -75,8 +75,8 @@ module OMQ
  # @param backend [Symbol, nil] :ruby (default) or :ffi
  #
  def initialize(endpoints = nil, linger: 0, on_mute: :drop_newest, backend: nil)
- _init_engine(:XPUB, linger: linger, on_mute: on_mute, backend: backend)
- _attach(endpoints, default: :bind)
+ init_engine(:XPUB, linger: linger, on_mute: on_mute, backend: backend)
+ attach_endpoints(endpoints, default: :bind)
  end
  end
 
@@ -95,8 +95,8 @@ module OMQ
  # @param backend [Symbol, nil] :ruby (default) or :ffi
  #
  def initialize(endpoints = nil, linger: 0, subscribe: nil, on_mute: :block, backend: nil)
- _init_engine(:XSUB, linger: linger, on_mute: on_mute, backend: backend)
- _attach(endpoints, default: :connect)
+ init_engine(:XSUB, linger: linger, on_mute: on_mute, backend: backend)
+ attach_endpoints(endpoints, default: :connect)
  send("\x01#{subscribe}".b) unless subscribe.nil?
  end
  end
data/lib/omq/push_pull.rb CHANGED
@@ -13,8 +13,8 @@ module OMQ
  # @param backend [Symbol, nil] :ruby (default) or :ffi
  #
  def initialize(endpoints = nil, linger: 0, send_hwm: nil, send_timeout: nil, backend: nil)
- _init_engine(:PUSH, linger: linger, send_hwm: send_hwm, send_timeout: send_timeout, backend: backend)
- _attach(endpoints, default: :connect)
+ init_engine(:PUSH, linger: linger, send_hwm: send_hwm, send_timeout: send_timeout, backend: backend)
+ attach_endpoints(endpoints, default: :connect)
  end
  end
 
@@ -31,8 +31,8 @@ module OMQ
  # @param backend [Symbol, nil] :ruby (default) or :ffi
  #
  def initialize(endpoints = nil, linger: 0, recv_hwm: nil, recv_timeout: nil, backend: nil)
- _init_engine(:PULL, linger: linger, recv_hwm: recv_hwm, recv_timeout: recv_timeout, backend: backend)
- _attach(endpoints, default: :bind)
+ init_engine(:PULL, linger: linger, recv_hwm: recv_hwm, recv_timeout: recv_timeout, backend: backend)
+ attach_endpoints(endpoints, default: :bind)
  end
  end
  end
@@ -16,11 +16,13 @@ module OMQ
  # @raise [IO::TimeoutError] if timeout exceeded
  #
  def dequeue(timeout: @options.read_timeout)
- Reactor.run { with_timeout(timeout) { @engine.dequeue_recv } }
+ Reactor.run(timeout:) { @engine.dequeue_recv }
  end
 
+
  alias_method :pop, :dequeue
 
+
  # Waits for the next message indefinitely (ignores read_timeout).
  #
  # @return [Array<String>] message parts