omq 0.17.9 → 0.18.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +4 -4
- data/CHANGELOG.md +54 -0
- data/README.md +7 -5
- data/lib/omq/engine/connection_lifecycle.rb +32 -8
- data/lib/omq/engine/reconnect.rb +4 -1
- data/lib/omq/engine/recv_pump.rb +23 -0
- data/lib/omq/engine/socket_lifecycle.rb +19 -1
- data/lib/omq/engine.rb +14 -1
- data/lib/omq/pair.rb +2 -2
- data/lib/omq/pub_sub.rb +8 -8
- data/lib/omq/push_pull.rb +4 -4
- data/lib/omq/queue_interface.rb +3 -1
- data/lib/omq/reactor.rb +41 -20
- data/lib/omq/readable.rb +3 -1
- data/lib/omq/req_rep.rb +4 -4
- data/lib/omq/router_dealer.rb +4 -4
- data/lib/omq/routing/conn_send_pump.rb +9 -3
- data/lib/omq/routing/dealer.rb +2 -0
- data/lib/omq/routing/fair_queue.rb +14 -3
- data/lib/omq/routing/fan_out.rb +39 -2
- data/lib/omq/routing/req.rb +10 -1
- data/lib/omq/routing.rb +5 -4
- data/lib/omq/socket.rb +44 -58
- data/lib/omq/transport/inproc/direct_pipe.rb +16 -2
- data/lib/omq/transport/inproc.rb +41 -7
- data/lib/omq/transport/ipc.rb +29 -9
- data/lib/omq/transport/tcp.rb +21 -5
- data/lib/omq/version.rb +1 -1
- data/lib/omq/writable.rb +5 -1
- data/lib/omq.rb +2 -1
- metadata +3 -3
checksums.yaml
CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: f4c49622de9b6972433ec90b148e0d20e68f69d5bb4939debd5abd5095d0396b
+  data.tar.gz: a70873e070a90ac7b804585231180926ef030e8ffeb9cf65fc44e889dc3619e2
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: a4c994245259282909189bcd44d69ef9153b52ef1dd2675b8cf8635b8970ed9c630048b55837cdc0fa20ecb04c00edd9b3bff17e5d9ad93d70107eb8701b08a3
+  data.tar.gz: d3f88cd73a8ea21fb572518a8f9b8128244f2bb10cfcf0e43b8ac73a797158ef1c2a59fe8fe75be407eac6277f8ebd05bd7ab554f6fa3a4d1e215ccaf3c034ab
data/CHANGELOG.md
CHANGED
@@ -1,5 +1,59 @@
 # Changelog
 
+## 0.18.0 — 2026-04-12
+
+### Changed
+
+- **Renamed `Socket#_attach` → `#attach_endpoints` and `#_init_engine` →
+  `#init_engine`.** Both are now public so plugin gems can call them
+  without reaching into private API. Internal callers updated.
+
+- **Routing registry exposed via `Routing.registry`.** `omq.rb`'s
+  `freeze_for_ractors!` no longer reaches in via `instance_variable_get`.
+
+### Fixed
+
+- **Test helper deadlock.** `Kernel#Async` override in `test_helper.rb`
+  was wrapping every `Async do` block in a `with_timeout`, including
+  the reactor thread's own root task. With a 1s timeout the reactor
+  task died mid-suite and subsequent `Reactor.run` calls hung forever.
+  The override now only wraps blocks running on the main thread.
+
+- **`wait_connected` test helper uses `Async::Barrier`** for parallel
+  fork-join across all sockets instead of a sequential `Async{}` array.
+
+- **`examples/zguide/03_pipeline.rb` flake.** The example sent 20 tasks
+  to 3 PUSH workers and asserted that all three got some — but PUSH
+  work-stealing on inproc lets the first pump fiber to wake grab a
+  whole batch (256 messages) before yielding, so worker-0 always took
+  everything. Fixed by waiting on each worker's `peer_connected`
+  promise via `Async::Barrier` and bumping the burst above one
+  pump's batch cap.
+
+### Documentation
+
+- **Documented work-stealing as a deviation from libzmq.** README
+  routing tables now say "Work-stealing" instead of "Round-robin"
+  for PUSH/REQ/DEALER/SCATTER/CLIENT, with a callout explaining the
+  burst-vs-steady distribution behavior. DESIGN.md's "Per-socket HWM"
+  section gained a user-visible-consequence note covering the same.
+
+- **Lifecycle boundary docs.** `ConnectionLifecycle` and
+  `SocketLifecycle` now carry explicit class-level comments
+  delimiting their scopes (per-connection arc vs. per-socket state)
+  and referencing each other.
+
+- **API doc fill-in.** Added missing YARD comments on
+  `RecvPump::FAIRNESS_MESSAGES` / `FAIRNESS_BYTES`,
+  `RecvPump#start_with_transform` / `#start_direct`, several
+  `FanOut` send-pump methods, and the TCP/IPC `apply_buffer_sizes`
+  helpers.
+
+- **`Engine#drain_send_queues` flagged with TODO.** The 1 ms busy-poll
+  is non-trivial to fix cleanly (needs a "queue fully drained" signal
+  threaded through every routing strategy), so it's marked rather
+  than reworked here.
+
 ## 0.17.8 — 2026-04-10
 
 ### Fixed
data/README.md
CHANGED
@@ -153,22 +153,24 @@ All sockets are thread-safe. Default HWM is 1000 messages per socket. `max_messa
 
 | Pattern | Send | Receive | When HWM full |
 |---------|------|---------|---------------|
-| **REQ** / **REP** |
+| **REQ** / **REP** | Work-stealing / route-back | Fair-queue | Block |
 | **PUB** / **SUB** | Fan-out to subscribers | Subscription filter | Drop |
-| **PUSH** / **PULL** |
-| **DEALER** / **ROUTER** |
+| **PUSH** / **PULL** | Work-stealing to workers | Fair-queue | Block |
+| **DEALER** / **ROUTER** | Work-stealing / identity-route | Fair-queue | Block |
 | **XPUB** / **XSUB** | Fan-out (subscription events) | Fair-queue | Drop |
 | **PAIR** | Exclusive 1-to-1 | Exclusive 1-to-1 | Block |
 
+> **Work-stealing vs. round-robin.** libzmq uses strict per-pipe round-robin for outbound load balancing — message N goes to peer N mod K regardless of whether that peer is busy. OMQ uses **work-stealing**: one shared send queue per socket and N pump fibers that race to drain it. Whichever pump is ready next picks up the next batch, so a slow peer can't stall the pipeline. The trade-off: distribution is not strict round-robin under bursts. If a producer enqueues a large burst before any pump fiber gets scheduled, the first pump to wake will dequeue up to one whole batch (256 messages or 512 KB, whichever hits first) in a single non-blocking drain — so a tight `n.times { sock << msg }` loop on a small `n` may dump everything on one peer. Slow or steady producers don't see this: each pump dequeues one message, writes, re-parks, and the FIFO wait queue gives every pump a fair turn. Burst distribution also evens out once the burst exceeds one pump's batch cap. See [DESIGN.md](DESIGN.md#per-socket-hwm-not-per-connection) for the full reasoning.
+
 #### Draft (single-frame only)
 
 These require the `omq-draft` gem.
 
 | Pattern | Send | Receive | When HWM full |
 |---------|------|---------|---------------|
-| **CLIENT** / **SERVER** |
+| **CLIENT** / **SERVER** | Work-stealing / routing-ID | Fair-queue | Block |
 | **RADIO** / **DISH** | Group fan-out | Group filter | Drop |
-| **SCATTER** / **GATHER** |
+| **SCATTER** / **GATHER** | Work-stealing | Fair-queue | Block |
 | **PEER** | Routing-ID | Fair-queue | Block |
 | **CHANNEL** | Exclusive 1-to-1 | Exclusive 1-to-1 | Block |
 
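The burst behavior described in the README callout can be demonstrated with a plain-Ruby sketch. This is not OMQ code: threads and a core `Queue` stand in for OMQ's pump fibers, `BATCH_CAP` is shrunk from the documented 256 to 4 for readability, and `distribute` is a hypothetical helper invented for illustration.

```ruby
# Work-stealing sketch: one shared send queue, several "pump" workers
# racing to drain it in batches. BATCH_CAP = 4 stands in for OMQ's
# 256-message pump batch; threads stand in for pump fibers.
BATCH_CAP = 4

def distribute(burst_size, pumps: 3)
  queue = Queue.new
  burst_size.times { |i| queue << i }
  pumps.times { queue << :stop } # one stop token per pump

  counts = Hash.new(0)
  mutex = Mutex.new

  pumps.times.map do |id|
    Thread.new do
      loop do
        msg = nil
        batch = 0
        # Batch drain: up to BATCH_CAP messages per wakeup, mirroring
        # the non-blocking burst drain described in the callout.
        while batch < BATCH_CAP && (msg = queue.pop) != :stop
          mutex.synchronize { counts[id] += 1 }
          batch += 1
        end
        break if msg == :stop
        Thread.pass # re-park between batches, like a pump fiber
      end
    end
  end.each(&:join)

  counts
end
```

Every message is delivered exactly once, but which pump takes how many depends on scheduling: a burst no larger than `BATCH_CAP` can land entirely on whichever pump wakes first, while a burst larger than the cap is forced to spread across pumps.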
data/lib/omq/engine/connection_lifecycle.rb
CHANGED
@@ -2,7 +2,15 @@
 
 module OMQ
 class Engine
-# Owns the full arc of one connection: handshake → ready → closed.
+# Owns the full arc of *one* connection: handshake → ready → closed.
+#
+# Scope boundary: ConnectionLifecycle tracks a single peer link
+# (one ZMTP connection or one inproc DirectPipe). SocketLifecycle
+# owns the socket-wide state above it — first-peer/last-peer
+# signaling, reconnect enable flag, the parent task tree, and the
+# open → closing → closed transitions that gate close-time drain.
+# A socket has exactly one SocketLifecycle and zero-or-more
+# ConnectionLifecycles beneath it.
 #
 # Centralizes the ordering of side effects (monitor events, routing
 # registration, promise resolution, reconnect scheduling) so the
@@ -19,10 +27,14 @@ module OMQ
 # lost connection.
 #
 class ConnectionLifecycle
-
+
+class InvalidTransition < RuntimeError
+end
+
 
 STATES = %i[new handshaking ready closed].freeze
 
+
 TRANSITIONS = {
 new: %i[handshaking ready closed].freeze,
 handshaking: %i[ready closed].freeze,
@@ -34,12 +46,15 @@ module OMQ
 # @return [Protocol::ZMTP::Connection, Transport::Inproc::DirectPipe, nil]
 attr_reader :conn
 
+
 # @return [String, nil]
 attr_reader :endpoint
 
+
 # @return [Symbol] current state
 attr_reader :state
 
+
 # @return [Async::Barrier] holds all per-connection pump tasks
 # (send pump, recv pump, reaper, heartbeat). When the connection
 # is torn down, {#tear_down!} calls `@barrier.stop` to take down
@@ -58,6 +73,7 @@ module OMQ
 @done = done
 @state = :new
 @conn = nil
+
 # Nest the per-connection barrier under the socket-level barrier
 # so every pump spawned via +@barrier.async+ is also tracked by
 # the socket barrier — {Engine#stop}/{Engine#close} cascade
@@ -74,21 +90,26 @@ module OMQ
 #
 def handshake!(io, as_server:)
 transition!(:handshaking)
-conn = Protocol::ZMTP::Connection.new
-io,
+conn = Protocol::ZMTP::Connection.new io,
 socket_type: @engine.socket_type.to_s,
 identity: @engine.options.identity,
 as_server: as_server,
 mechanism: @engine.options.mechanism&.dup,
-max_message_size: @engine.options.max_message_size
-
-Async::Task.current.with_timeout(handshake_timeout)
+max_message_size: @engine.options.max_message_size
+
+Async::Task.current.with_timeout(handshake_timeout) do
+conn.handshake!
+end
+
 Heartbeat.start(@barrier, conn, @engine.options, @engine.tasks)
 ready!(conn)
 @conn
 rescue Protocol::ZMTP::Error, *CONNECTION_LOST, Async::TimeoutError => error
-@engine.emit_monitor_event
+@engine.emit_monitor_event :handshake_failed,
+endpoint: @endpoint, detail: { error: error }
+
 conn&.close
+
 # Full tear-down with reconnect: without this, spawn_connection's
 # ensure-block close! sees :closed and skips maybe_reconnect,
 # leaving the endpoint dead. Race is exposed when a peer RSTs
@@ -128,6 +149,7 @@ module OMQ
 
 private
 
+
 def ready!(conn)
 conn = @engine.connection_wrapper.call(conn) if @engine.connection_wrapper
 @conn = conn
@@ -136,6 +158,7 @@ module OMQ
 @engine.routing.connection_added(@conn)
 @engine.peer_connected.resolve(@conn)
 transition!(:ready)
+
 # No supervisor if nothing to supervise: inproc DirectPipes
 # wire the recv/send paths synchronously (no task-based pumps),
 # and isolated unit tests use a FakeEngine without pumps at all.
@@ -182,6 +205,7 @@ module OMQ
 @done&.resolve(true)
 @engine.resolve_all_peers_gone_if_empty
 @engine.maybe_reconnect(@endpoint) if reconnect
+
 # Cancel every sibling pump of this connection. The caller is
 # the supervisor task, which is NOT in the barrier — so there
 # is no self-stop risk.
data/lib/omq/engine/reconnect.rb
CHANGED
@@ -45,6 +45,7 @@
 end
 end
 
+
 private
 
 
@@ -60,7 +61,8 @@ module OMQ
 break
 rescue *CONNECTION_LOST, *CONNECTION_FAILED, Protocol::ZMTP::Error
 delay = next_delay(delay, max_delay)
-@engine.emit_monitor_event
+@engine.emit_monitor_event :connect_retried,
+endpoint: @endpoint, detail: { interval: delay }
 end
 end
 end
@@ -106,6 +108,7 @@ module OMQ
 ri
 end
 end
+
 end
 end
 end
data/lib/omq/engine/recv_pump.rb
CHANGED
@@ -13,7 +13,16 @@
 # of a megamorphic `transform.call` dispatch inside a shared loop.
 #
 class RecvPump
+# Max messages read from one connection before yielding to the
+# scheduler. Prevents a busy peer from starving its siblings in
+# fair-queue recv sockets.
 FAIRNESS_MESSAGES = 64
+
+
+# Max bytes read from one connection before yielding. Only counted
+# for ZMTP connections (inproc skips the check). Complements
+# {FAIRNESS_MESSAGES}: small-message floods are bounded by count,
+# large-message floods by bytes.
 FAIRNESS_BYTES = 1 << 20 # 1 MB
 
 
@@ -67,6 +76,14 @@
 private
 
 
+# Recv loop with per-message transform (e.g. Marshal.load for
+# cross-Ractor transport). Kept separate from {#start_direct} so
+# YJIT sees a monomorphic transform.call site.
+#
+# @param parent [Async::Task, Async::Barrier]
+# @param transform [Proc]
+# @return [Async::Task]
+#
 def start_with_transform(parent, transform)
 conn, recv_queue, engine, count_bytes = @conn, @recv_queue, @engine, @count_bytes
 
@@ -93,6 +110,11 @@
 end
 
 
+# Recv loop without transform — the hot path for native OMQ use.
+#
+# @param parent [Async::Task, Async::Barrier]
+# @return [Async::Task]
+#
 def start_direct(parent)
 conn, recv_queue, engine, count_bytes = @conn, @recv_queue, @engine, @count_bytes
 
@@ -116,6 +138,7 @@
 @engine.signal_fatal_error(error)
 end
 end
+
 end
 end
 end
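The two fairness caps documented above can be sketched in isolation. A hedged illustration, not OMQ's actual pump: `drain_fairly` and `FakeConn` are invented here for demonstration, and only the cap values mirror the documented constants.

```ruby
# Fairness caps from the RecvPump docs above: stop draining one
# connection after 64 messages OR 1 MB, whichever trips first, so
# sibling connections in a fair-queue socket get a turn.
FAIRNESS_MESSAGES = 64
FAIRNESS_BYTES = 1 << 20 # 1 MB

# Drains `conn` until a cap trips or the connection runs dry,
# returning the batch of messages read.
def drain_fairly(conn)
  messages = []
  bytes = 0
  while messages.size < FAIRNESS_MESSAGES && bytes < FAIRNESS_BYTES
    msg = conn.read_message or break
    messages << msg
    bytes += msg.bytesize
  end
  messages
end

# Minimal stand-in connection yielding pre-canned messages.
class FakeConn
  def initialize(msgs)
    @msgs = msgs
  end

  def read_message
    @msgs.shift
  end
end
```

A flood of small messages is bounded by the count cap (64 per turn), while a flood of large messages hits the byte cap after only a few reads, matching the "small floods by count, large floods by bytes" split in the comment.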
data/lib/omq/engine/socket_lifecycle.rb
CHANGED
@@ -6,6 +6,13 @@
 # the first-peer / last-peer signaling promises, the reconnect flag,
 # and the captured parent task for the socket's task tree.
 #
+# Scope boundary: SocketLifecycle is per-socket and outlives every
+# individual peer link. ConnectionLifecycle is per-connection and
+# handles one handshake → ready → closed arc beneath it. Roughly:
+# SocketLifecycle answers "is this socket open and do we have any
+# peers?", ConnectionLifecycle answers "is this specific peer link
+# ready / lost?".
+#
 # Engine delegates state queries here and uses it to coordinate the
 # ordering of close-time side effects. This consolidates six ivars
 # (`@state`, `@peer_connected`, `@all_peers_gone`, `@reconnect_enabled`,
@@ -13,10 +20,13 @@
 # explicit transitions.
 #
 class SocketLifecycle
-class InvalidTransition < RuntimeError
+class InvalidTransition < RuntimeError
+end
+
 
 STATES = %i[new open closing closed].freeze
 
+
 TRANSITIONS = {
 new: %i[open closed].freeze,
 open: %i[closing closed].freeze,
@@ -28,27 +38,33 @@
 # @return [Symbol]
 attr_reader :state
 
+
 # @return [Async::Promise] resolves with the first connected peer
 attr_reader :peer_connected
 
+
 # @return [Async::Promise] resolves once all peers are gone (after having had peers)
 attr_reader :all_peers_gone
 
+
 # @return [Async::Task, Async::Barrier, Async::Semaphore, nil] root of
 # the socket's task tree (may be user-provided via +parent:+ on
 # {Socket#bind} / {Socket#connect}; falls back to the current
 # Async task or the shared Reactor root)
 attr_reader :parent_task
 
+
 # @return [Boolean] true if parent_task is the shared Reactor thread
 attr_reader :on_io_thread
 
+
 # @return [Async::Barrier] holds every socket-scoped task (connection
 # supervisors, reconnect loops, heartbeat, monitor, accept loops).
 # {Engine#stop} and {Engine#close} call +barrier.stop+ to cascade
 # teardown through every per-connection barrier in one shot.
 attr_reader :barrier
 
+
 # @return [Boolean] whether auto-reconnect is enabled
 attr_accessor :reconnect_enabled
 
@@ -128,6 +144,7 @@
 
 private
 
+
 def transition!(new_state)
 allowed = TRANSITIONS[@state]
 unless allowed&.include?(new_state)
@@ -135,6 +152,7 @@
 end
 @state = new_state
 end
+
 end
 end
 end
data/lib/omq/engine.rb
CHANGED
@@ -20,6 +20,7 @@
 #
 @transports = {}
 
+
 class << self
 # @return [Hash{String => Module}] registered transports
 attr_reader :transports
@@ -83,6 +84,10 @@
 # @param value [Async::Queue, nil] queue for monitor events
 #
 attr_writer :monitor_queue
+
+
+# @return [Boolean] when true, every monitor event is also printed
+# to stderr for debugging. Set via {Socket#monitor}.
 attr_accessor :verbose_monitor
 
 
@@ -96,6 +101,7 @@
 @lifecycle.reconnect_enabled = value
 end
 
+
 # Optional proc that wraps new connections (e.g. for serialization).
 # Called with the raw connection; must return the (possibly wrapped) connection.
 #
@@ -399,7 +405,7 @@
 def signal_fatal_error(error)
 return unless @lifecycle.open?
 @fatal_error = begin
-raise
+raise SocketDeadError, "internal error killed #{@socket_type} socket"
 rescue => wrapped
 wrapped
 end
@@ -467,8 +473,10 @@
 raise ArgumentError, "unsupported transport: #{endpoint}"
 end
 
+
 private
 
+
 def spawn_connection(io, as_server:, endpoint: nil)
 task = @lifecycle.barrier&.async(transient: true, annotation: "conn #{endpoint}") do
 done = Async::Promise.new
@@ -488,6 +496,11 @@
 end
 
 
+# TODO: replace the 1 ms busy-poll with a promise/condition that
+# the send pump resolves when its queue hits empty. The loop exists
+# because there is currently no signal for "send queue fully
+# drained"; fixing it cleanly requires plumbing a notifier through
+# every routing strategy, so it is flagged rather than fixed here.
 def drain_send_queues(timeout)
 return unless @routing.respond_to?(:send_queues_drained?)
 deadline = timeout ? Async::Clock.now + timeout : nil
data/lib/omq/pair.rb
CHANGED
@@ -12,8 +12,8 @@ module OMQ
 # @param backend [Symbol, nil] :ruby (default) or :ffi
 #
 def initialize(endpoints = nil, linger: 0, backend: nil)
-
-
+init_engine(:PAIR, linger: linger, backend: backend)
+attach_endpoints(endpoints, default: :connect)
 end
 end
 end
data/lib/omq/pub_sub.rb
CHANGED
@@ -13,8 +13,8 @@ module OMQ
 # @param backend [Symbol, nil] :ruby (default) or :ffi
 #
 def initialize(endpoints = nil, linger: 0, on_mute: :drop_newest, conflate: false, backend: nil)
-
-
+init_engine(:PUB, linger: linger, on_mute: on_mute, conflate: conflate, backend: backend)
+attach_endpoints(endpoints, default: :bind)
 end
 end
 
@@ -36,8 +36,8 @@ module OMQ
 # @param on_mute [Symbol] :block (default), :drop_newest, or :drop_oldest
 #
 def initialize(endpoints = nil, linger: 0, subscribe: nil, on_mute: :block, backend: nil)
-
-
+init_engine(:SUB, linger: linger, on_mute: on_mute, backend: backend)
+attach_endpoints(endpoints, default: :connect)
 self.subscribe(subscribe) unless subscribe.nil?
 end
 
@@ -75,8 +75,8 @@ module OMQ
 # @param backend [Symbol, nil] :ruby (default) or :ffi
 #
 def initialize(endpoints = nil, linger: 0, on_mute: :drop_newest, backend: nil)
-
-
+init_engine(:XPUB, linger: linger, on_mute: on_mute, backend: backend)
+attach_endpoints(endpoints, default: :bind)
 end
 end
 
@@ -95,8 +95,8 @@ module OMQ
 # @param backend [Symbol, nil] :ruby (default) or :ffi
 #
 def initialize(endpoints = nil, linger: 0, subscribe: nil, on_mute: :block, backend: nil)
-
-
+init_engine(:XSUB, linger: linger, on_mute: on_mute, backend: backend)
+attach_endpoints(endpoints, default: :connect)
 send("\x01#{subscribe}".b) unless subscribe.nil?
 end
 end
data/lib/omq/push_pull.rb
CHANGED
@@ -13,8 +13,8 @@ module OMQ
 # @param backend [Symbol, nil] :ruby (default) or :ffi
 #
 def initialize(endpoints = nil, linger: 0, send_hwm: nil, send_timeout: nil, backend: nil)
-
-
+init_engine(:PUSH, linger: linger, send_hwm: send_hwm, send_timeout: send_timeout, backend: backend)
+attach_endpoints(endpoints, default: :connect)
 end
 end
 
@@ -31,8 +31,8 @@ module OMQ
 # @param backend [Symbol, nil] :ruby (default) or :ffi
 #
 def initialize(endpoints = nil, linger: 0, recv_hwm: nil, recv_timeout: nil, backend: nil)
-
-
+init_engine(:PULL, linger: linger, recv_hwm: recv_hwm, recv_timeout: recv_timeout, backend: backend)
+attach_endpoints(endpoints, default: :bind)
 end
 end
 end
data/lib/omq/queue_interface.rb
CHANGED
@@ -16,11 +16,13 @@ module OMQ
 # @raise [IO::TimeoutError] if timeout exceeded
 #
 def dequeue(timeout: @options.read_timeout)
-Reactor.run
+Reactor.run(timeout:) { @engine.dequeue_recv }
 end
 
+
 alias_method :pop, :dequeue
 
+
 # Waits for the next message indefinitely (ignores read_timeout).
 #
 # @return [Array<String>] message parts
data/lib/omq/reactor.rb
CHANGED
@@ -15,12 +15,15 @@
 # thread are dispatched to the IO thread via {.run}.
 #
 module Reactor
+THREAD_NAME = 'omq-io'
+
 @mutex = Mutex.new
 @thread = nil
 @root_task = nil
 @work_queue = nil
 @lingers = Hash.new(0) # linger value → count of active sockets
 
+
 class << self
 # Returns the root Async task inside the shared IO thread.
 # Starts the thread exactly once (double-checked lock).
@@ -29,15 +32,19 @@
 #
 def root_task
 return @root_task if @root_task
+
 @mutex.synchronize do
 return @root_task if @root_task
-
-
-@
-@thread.
-@
+
+ready = Thread::Queue.new
+@work_queue = Async::Queue.new
+@thread = Thread.new { run_reactor(ready) }
+@thread.name = THREAD_NAME
+@root_task = ready.pop
+
 at_exit { stop! }
 end
+
 @root_task
 end
 
@@ -50,16 +57,20 @@
 #
 # @return [Object] the block's return value
 #
-def run(&block)
-
-
+def run(timeout: nil, &block)
+task = Async::Task.current?
+
+if task
+if timeout
+task.with_timeout(timeout, IO::TimeoutError) { yield }
+else
+yield
+end
 else
-result =
+result = Async::Promise.new
 root_task # ensure started
-@work_queue.push([block, result])
-
-raise value if status == :error
-value
+@work_queue.push([block, result, timeout])
+result.wait
 end
 end
 
@@ -90,17 +101,21 @@
 #
 def stop!
 return unless @thread&.alive?
+
 max_linger = @lingers.empty? ? 0 : @lingers.keys.max
 @work_queue&.push(nil)
 @thread&.join(max_linger + 1)
+
 @thread = nil
 @root_task = nil
 @work_queue = nil
 @lingers = Hash.new(0)
 end
 
+
 private
 
+
 # Runs the shared Async reactor.
 #
 # Processes work items dispatched via {.run} while engine
@@ -111,18 +126,24 @@
 def run_reactor(ready)
 Async do |task|
 ready.push(task)
+
 loop do
-item = @work_queue.dequeue
-
-
-task.async do
-
-
-
+item = @work_queue.dequeue or break
+block, result, timeout = item
+
+task.async do |t|
+if timeout
+result.fulfill do
+t.with_timeout(timeout, IO::TimeoutError) { block.call }
+end
+else
+result.fulfill { block.call }
+end
 end
 end
 end
 end
+
 end
 end
 end
CHANGED
|
@@ -14,7 +14,9 @@ module OMQ
|
|
|
14
14
|
# @raise [IO::TimeoutError] if read_timeout exceeded
|
|
15
15
|
#
|
|
16
16
|
def receive
|
|
17
|
-
Reactor.run
|
|
17
|
+
Reactor.run timeout: @options.read_timeout do |task|
|
|
18
|
+
@engine.dequeue_recv
|
|
19
|
+
end
|
|
18
20
|
end
|
|
19
21
|
|
|
20
22
|
|
data/lib/omq/req_rep.rb
CHANGED
@@ -12,8 +12,8 @@ module OMQ
 # @param backend [Symbol, nil] :ruby (default) or :ffi
 #
 def initialize(endpoints = nil, linger: 0, backend: nil)
-
-
+init_engine(:REQ, linger: linger, backend: backend)
+attach_endpoints(endpoints, default: :connect)
 end
 end
 
@@ -29,8 +29,8 @@ module OMQ
 # @param backend [Symbol, nil] :ruby (default) or :ffi
 #
 def initialize(endpoints = nil, linger: 0, backend: nil)
-
-
+init_engine(:REP, linger: linger, backend: backend)
+attach_endpoints(endpoints, default: :bind)
 end
 end
 end