rabbitmq 0.2.5 → 1.0.0.pre.pre
- checksums.yaml +4 -4
- data/README.md +67 -2
- data/lib/rabbitmq.rb +1 -1
- data/lib/rabbitmq/channel.rb +117 -81
- data/lib/rabbitmq/client.rb +332 -0
- data/lib/rabbitmq/{connection/transport.rb → client/connection.rb} +32 -11
- data/lib/rabbitmq/ffi.rb +486 -483
- data/lib/rabbitmq/util.rb +6 -3
- metadata +6 -7
- data/lib/rabbitmq/connection.rb +0 -382
- data/lib/rabbitmq/connection/channel_manager.rb +0 -61
- data/lib/rabbitmq/connection/event_manager.rb +0 -55
checksums.yaml
CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA1:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: bbcea6833e05f45e3fad47a964344db56b8b35e4
+  data.tar.gz: df9d1eada5b15a428b2f40534c4b1ca19871545b
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 7c555f53f68264cdabc58c23bd4222d6771f33c6841a74efe28c448a653cb51dbbee3e5d8c12279c9c96ad82114a6473d1a91790c21d4b7e34d5123a8822bf82
+  data.tar.gz: 7a041a8493561c0b10eab3c7d7133047e67f404925ec5e17a2c22832cec92846ca0b05326397039c5f203c2400185466024921a1d1f6b52dcb22136506a0167b
data/README.md
CHANGED
@@ -1,6 +1,71 @@
 # rabbitmq
 
-[![Build Status](https://travis-ci.org/jemc/ruby-rabbitmq.png)](https://travis-ci.org/jemc/ruby-rabbitmq)
 [![Gem Version](https://badge.fury.io/rb/rabbitmq.png)](http://badge.fury.io/rb/rabbitmq)
+[![Build Status](https://circleci.com/gh/jemc/ruby-rabbitmq/tree/master.svg?style=svg)](https://circleci.com/gh/jemc/ruby-rabbitmq/tree/master)
 
-A Ruby RabbitMQ client library based on FFI bindings for librabbitmq.
+A Ruby RabbitMQ client library based on FFI bindings for librabbitmq.
+
+## Design Goals
+
+- Provide a minimal API for creating useful RabbitMQ applications in Ruby.
+- Use a minimal resource and execution path footprint.
+- No library-imposed background or watchdog threads.
+- Favor directness over convenience.
+- Use an existing protocol library (librabbitmq) instead of reinventing one.
+- Avoid making precluding assumptions about what a user needs.
+
+## Who should use this library?
+
+This library was born out of dissatisfaction with some of the design decisions made by existing RabbitMQ client libraries for Ruby and the lack of flexibility those libraries afforded to their users in deciding how best to integrate RabbitMQ into their application. This library is for users who know what they want, know the patterns they need to use from RabbitMQ best practices, and want to proceed with a library that provides them the minimal platform they need without getting in their way.
+
+This library runs no background threads and leaves it up to the user to explicitly invoke the RabbitMQ event loop as a part of their application. This library aims to be easy to integrate with any sensible event processing framework or pattern, or to act as the main event loop driver itself. Users should know what kind of patterns they wish to use, and understand how they can use blocking calls, nonblocking calls, and/or timeouts to implement their application elegantly and efficiently.
+
+This library does not provide thread-safe client objects. Multithreaded applications should pass data between threads instead of sharing client objects between threads. Not supporting concurrent access to connection state is consistent with the underlying C library and allows for less obfuscation in the codebase and better efficiency, avoiding the costs of acquiring locks and passing through session changes. Users should be comfortable with code patterns that prevent unwanted sharing of the client objects, as such sharing may cause catastrophic application failures like segmentation faults in the underlying C library.
+
+## Usage
+
+```bash
+gem install rabbitmq
+ruby examples/publish_500.rb
+ruby examples/consume_500.rb
+```
+
+```ruby
+# examples/publish_500.rb
+require 'rabbitmq'
+
+publisher = RabbitMQ::Client.new.start.channel
+queue = "some_queue"
+exchange = "" # default exchange
+publisher.queue_declare(queue)
+
+500.times do |i|
+  publisher.basic_publish("message #{i}", exchange, queue, persistent: true)
+end
+```
+
+```ruby
+# examples/consume_500.rb
+require 'rabbitmq'
+
+consumer = RabbitMQ::Client.new.start.channel
+consumer.basic_qos(prefetch_count: 500)
+consumer.basic_consume("some_queue")
+
+count = 0
+consumer.on :basic_deliver do |message|
+  puts message[:body]
+  if (count += 1) >= 500
+    consumer.basic_ack(message[:properties][:delivery_tag], multiple: true)
+    consumer.break!
+  end
+end
+
+consumer.run_loop!
+```
+
+## Contributing
+
+Performance and implementation improvements, bug fixes, and documentation expansions are always welcome! Create a patch or pull request with a focused approach that solves exactly one problem, and it's very likely to be pulled in.
+
+For new features, please file an issue ticket explaining the use case and why the new feature should be a part of this library rather than a part of a separate wrapper or convenience library. If a feature meets the design goals of the project and is not yet implemented (such as SSL support, or other missing primitives critical to certain applications), it's likely to be approved quickly.
data/lib/rabbitmq.rb
CHANGED
data/lib/rabbitmq/channel.rb
CHANGED
@@ -1,18 +1,35 @@
 
 module RabbitMQ
+
+  # A {Channel} holds a connection to a RabbitMQ server and is associated
+  # with a specific channel id number for categorizing message flow.
+  # It also provides convenient wrapper methods for common uses of
+  # the underlying {Client}.
+  #
+  # A {Channel} is not threadsafe; both the {Channel} and its associated
+  # {Client} should not be shared between threads. If they are shared without
+  # appropriate locking mechanisms, the behavior is undefined and might result
+  # in catastrophic process failures like segmentation faults in the underlying
+  # C library. A {Channel} can be safely used in a multithreaded application by
+  # only passing control and message data between threads.
+  #
+  # To use a {Channel} effectively, it is necessary to understand the
+  # methods available in the underlying AMQP protocol. Please refer to
+  # the protocol documentation for more information about specific methods:
+  # http://www.rabbitmq.com/amqp-0-9-1-reference.html
+  #
   class Channel
 
-    attr_reader :
+    attr_reader :client
     attr_reader :id
 
-    # Don't create a {Channel} directly; call {
+    # Don't create a {Channel} directly; call {Client#channel} instead.
     # @api private
-    def initialize(
-      @
-      @
-
-
-      @finalizer = self.class.send :create_finalizer_for, @connection, @id
+    def initialize(client, conn, id, finalizer)
+      @client = client
+      @conn = conn
+      @id = id
+      @finalizer = finalizer
       ObjectSpace.define_finalizer self, @finalizer
     end
 
@@ -20,7 +37,7 @@ module RabbitMQ
     # This will be called automatically by the object finalizer after
     # the object becomes unreachable by the VM and is garbage collected,
     # but you may want to call it explicitly if you plan to reuse the same
-    # channel
+    # channel id in another {Channel} instance explicitly.
     #
     # @return [Channel] self.
     #
@@ -34,198 +51,217 @@ module RabbitMQ
       self
     end
 
-    # @see {
-    def
-      @
+    # @see {Client#send_request}
+    def send_request(*args)
+      @client.send_request(@id, *args)
     end
 
-    # @see {
-    def
-      @
+    # @see {Client#fetch_response}
+    def fetch_response(*args)
+      @client.fetch_response(@id, *args)
     end
 
-    # @see {
-    def
-      @
+    # @see {Client#on_event}
+    def on_event(*args, &block)
+      @client.on_event(@id, *args, &block)
     end
+    alias_method :on, :on_event
 
-    #
-
-
-      Proc.new do
-        connection.send(:release_channel, id)
-      end
+    # @see {Client#clear_event_handler}
+    def clear_event_handler(*args)
+      @client.clear_event_handler(@id, *args)
     end
 
-
-
-
-
-
-
-
+    # @see {Client#run_loop!}
+    # The block will be yielded all non-exception events *for any channel*.
+    def run_loop!(*args, &block)
+      @client.run_loop!(*args, &block)
+    end
+
+    # @see {Client#break!}
+    def break!
+      @client.break!
     end
 
     ##
     # Exchange operations
 
     def exchange_declare(name, type, **opts)
-
+      send_request :exchange_declare, {
         exchange: name,
         type: type,
         passive: opts.fetch(:passive, false),
         durable: opts.fetch(:durable, false),
         auto_delete: opts.fetch(:auto_delete, false),
         internal: opts.fetch(:internal, false),
-
+      }
+      fetch_response :exchange_declare_ok
     end
 
     def exchange_delete(name, **opts)
-
+      send_request :exchange_delete, {
         exchange: name,
-        if_unused: opts.fetch(:if_unused, false)
-
+        if_unused: opts.fetch(:if_unused, false)
+      }
+      fetch_response :exchange_delete_ok
     end
 
     def exchange_bind(source, destination, **opts)
-
+      send_request :exchange_bind, {
         source: source,
         destination: destination,
         routing_key: opts.fetch(:routing_key, ""),
-        arguments: opts.fetch(:arguments, {})
-
+        arguments: opts.fetch(:arguments, {})
+      }
+      fetch_response :exchange_bind_ok
     end
 
     def exchange_unbind(source, destination, **opts)
-
+      send_request :exchange_unbind, {
         source: source,
         destination: destination,
         routing_key: opts.fetch(:routing_key, ""),
-        arguments: opts.fetch(:arguments, {})
-
+        arguments: opts.fetch(:arguments, {})
+      }
+      fetch_response :exchange_unbind_ok
     end
 
     ##
     # Queue operations
 
     def queue_declare(name, **opts)
-
+      send_request :queue_declare, {
         queue: name,
         passive: opts.fetch(:passive, false),
         durable: opts.fetch(:durable, false),
         exclusive: opts.fetch(:exclusive, false),
         auto_delete: opts.fetch(:auto_delete, false),
-        arguments: opts.fetch(:arguments, {})
-
+        arguments: opts.fetch(:arguments, {})
+      }
+      fetch_response :queue_declare_ok
     end
 
     def queue_bind(name, exchange, **opts)
-
+      send_request :queue_bind, {
         queue: name,
         exchange: exchange,
         routing_key: opts.fetch(:routing_key, ""),
-        arguments: opts.fetch(:arguments, {})
-
+        arguments: opts.fetch(:arguments, {})
+      }
+      fetch_response :queue_bind_ok
     end
 
     def queue_unbind(name, exchange, **opts)
-
+      send_request :queue_unbind, {
         queue: name,
         exchange: exchange,
         routing_key: opts.fetch(:routing_key, ""),
-        arguments: opts.fetch(:arguments, {})
-
+        arguments: opts.fetch(:arguments, {})
+      }
+      fetch_response :queue_unbind_ok
     end
 
     def queue_purge(name)
-
+      send_request :queue_purge, { queue: name }
+      fetch_response :queue_purge_ok
     end
 
     def queue_delete(name, **opts)
-
+      send_request :queue_delete, {
         queue: name,
         if_unused: opts.fetch(:if_unused, false),
-        if_empty: opts.fetch(:if_empty, false)
-
+        if_empty: opts.fetch(:if_empty, false)
+      }
+      fetch_response :queue_delete_ok
     end
 
     ##
     # Consumer operations
 
     def basic_qos(**opts)
-
+      send_request :basic_qos, {
         prefetch_count: opts.fetch(:prefetch_count, 0),
         prefetch_size: opts.fetch(:prefetch_size, 0),
-        global: opts.fetch(:global, false)
-
+        global: opts.fetch(:global, false)
+      }
+      fetch_response :basic_qos_ok
     end
 
     def basic_consume(queue, consumer_tag="", **opts)
-
+      send_request :basic_consume, {
         queue: queue,
         consumer_tag: consumer_tag,
         no_local: opts.fetch(:no_local, false),
         no_ack: opts.fetch(:no_ack, false),
         exclusive: opts.fetch(:exclusive, false),
-        arguments: opts.fetch(:arguments, {})
-
+        arguments: opts.fetch(:arguments, {})
+      }
+      fetch_response :basic_consume_ok
     end
 
     def basic_cancel(consumer_tag)
-
+      send_request :basic_cancel, { consumer_tag: consumer_tag }
+      fetch_response :basic_cancel_ok
     end
 
     ##
     # Transaction operations
 
     def tx_select
-
+      send_request :tx_select
+      fetch_response :tx_select_ok
     end
 
     def tx_commit
-
+      send_request :tx_commit
+      fetch_response :tx_commit_ok
    end
 
     def tx_rollback
-
+      send_request :tx_rollback
+      fetch_response :tx_rollback_ok
     end
 
     ##
     # Message operations
 
     def basic_get(queue, **opts)
-
+      send_request :basic_get, {
         queue: queue,
-        no_ack: opts.fetch(:no_ack, false)
-
+        no_ack: opts.fetch(:no_ack, false)
+      }
+      fetch_response [:basic_get_ok, :basic_get_empty]
     end
 
     def basic_ack(delivery_tag, **opts)
-
+      send_request :basic_ack, {
         delivery_tag: delivery_tag,
-        multiple: opts.fetch(:multiple, false)
-
+        multiple: opts.fetch(:multiple, false)
+      }
+      true
     end
 
     def basic_nack(delivery_tag, **opts)
-
+      send_request :basic_nack, {
         delivery_tag: delivery_tag,
         multiple: opts.fetch(:multiple, false),
-        requeue: opts.fetch(:requeue, true)
-
+        requeue: opts.fetch(:requeue, true)
+      }
+      true
     end
 
     def basic_reject(delivery_tag, **opts)
-
+      send_request :basic_reject, {
         delivery_tag: delivery_tag,
-        requeue: opts.fetch(:requeue, true)
-
+        requeue: opts.fetch(:requeue, true)
+      }
+      true
    end
 
     def basic_publish(body, exchange, routing_key, **opts)
-      body = FFI::Bytes.from_s(body)
-      exchange = FFI::Bytes.from_s(exchange)
-      routing_key = FFI::Bytes.from_s(routing_key)
+      body = FFI::Bytes.from_s(body.to_s)
+      exchange = FFI::Bytes.from_s(exchange.to_s)
+      routing_key = FFI::Bytes.from_s(routing_key.to_s)
       properties = FFI::BasicProperties.new.apply(
         content_type: opts.fetch(:content_type, nil),
         content_encoding: opts.fetch(:content_encoding, nil),
@@ -243,7 +279,7 @@ module RabbitMQ
       )
 
       Util.error_check :"publishing a message",
-        FFI.amqp_basic_publish(
+        FFI.amqp_basic_publish(@conn.ptr, @id,
           exchange,
           routing_key,
           opts.fetch(:mandatory, false),
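
After this refactor, each {Channel} wrapper follows the same two-step pattern visible above: `send_request` writes the protocol method under this channel's id, and `fetch_response` blocks until the matching `*_ok` event arrives and returns it (acknowledgement methods such as `basic_ack` simply return `true`, since the protocol defines no response for them). A rough, untested sketch of what that means for calling code, assuming the event-hash keys used elsewhere in this diff (`:method`, `:channel`, `:properties`):

```ruby
require 'rabbitmq'

channel = RabbitMQ::Client.new.start.channel

# Synchronous request/response: returns the :queue_declare_ok event hash.
response = channel.queue_declare("some_queue", durable: true)
response[:method]  #=> :queue_declare_ok (key layout assumed from this diff)
response[:channel] #=> channel.id

# basic_get waits for whichever of its two possible responses arrives first.
result = channel.basic_get("some_queue")
puts "queue was empty" if result[:method] == :basic_get_empty

# queue_purge follows the same pattern and returns the :queue_purge_ok event.
channel.queue_purge("some_queue")
```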
data/lib/rabbitmq/client.rb
ADDED
@@ -0,0 +1,332 @@
+
+require_relative 'client/connection'
+
+module RabbitMQ
+
+  # A {Client} holds a connection to a RabbitMQ server and has facilities
+  # for sending events to and handling received events from that server.
+  #
+  # A {Client} is not threadsafe; both the {Client} and any {Channel}s linked
+  # to it should not be shared between threads. If they are shared without
+  # appropriate locking mechanisms, the behavior is undefined and might result
+  # in catastrophic process failures like segmentation faults in the underlying
+  # C library. A {Client} can be safely used in a multithreaded application by
+  # only passing control and message data between threads.
+  #
+  # To use a {Client} effectively, it is necessary to understand the
+  # methods available in the underlying AMQP protocol. Please refer to
+  # the protocol documentation for more information about specific methods:
+  # http://www.rabbitmq.com/amqp-0-9-1-reference.html
+  #
+  class Client
+
+    # Create a new {Client} instance with the given properties.
+    # There are several ways to convey connection info:
+    #
+    # @example with a URL string
+    #   RabbitMQ::Client.new("amqp://user:password@host:1234/vhost")
+    #
+    # @example with explicit options
+    #   RabbitMQ::Client.new(user: "user", password: "password", port: 1234)
+    #
+    # @example with both URL string and explicit options
+    #   RabbitMQ::Client.new("amqp://host:1234", user: "user", password: "password")
+    #
+    # Parsed options from a URL will be applied first, then any options given
+    # explicitly will override those parsed. If any options are ambiguous, they
+    # will have the default values:
+    #   {
+    #     user: "guest",
+    #     password: "guest",
+    #     host: "localhost",
+    #     vhost: "/",
+    #     port: 5672,
+    #     ssl: false,
+    #     max_channels: RabbitMQ::FFI::CHANNEL_MAX_ID, # absolute maximum
+    #     max_frame_size: 131072,
+    #   }
+    #
+    def initialize(*args)
+      @conn = Connection.new(*args)
+
+      @open_channels = {}
+      @released_channels = {}
+      @event_handlers = Hash.new { |h,k| h[k] = {} }
+      @incoming_events = Hash.new { |h,k| h[k] = {} }
+
+      @protocol_timeout = DEFAULT_PROTOCOL_TIMEOUT
+    end
+
+    # Initiate the connection with the server. It is necessary to call this
+    # before any other communication, including creating a {#channel}.
+    def start
+      close # Close if already open
+      @conn.start
+      self
+    end
+
+    # Gracefully close the connection with the server. This will
+    # be done automatically on garbage collection if not called explicitly.
+    def close
+      @conn.close
+      release_all_channels
+      self
+    end
+
+    # Free the native resources associated with this object. This will
+    # be done automatically on garbage collection if not called explicitly.
+    def destroy
+      @conn.destroy
+      self
+    end
+
+    # The timeout to use when waiting for protocol events, in seconds.
+    # By default, this has the value of {DEFAULT_PROTOCOL_TIMEOUT}.
+    # When set, it affects operations like {#fetch_response} and {#run_loop!}.
+    attr_accessor :protocol_timeout
+    DEFAULT_PROTOCOL_TIMEOUT = 30 # seconds
+
+    def user; @conn.options.fetch(:user); end
+    def password; @conn.options.fetch(:password); end
+    def host; @conn.options.fetch(:host); end
+    def vhost; @conn.options.fetch(:vhost); end
+    def port; @conn.options.fetch(:port); end
+    def ssl?; @conn.options.fetch(:ssl); end
+    def max_channels; @conn.options.fetch(:max_channels); end
+    def max_frame_size; @conn.options.fetch(:max_frame_size); end
+
+    # Send a request on the given channel with the given type and properties.
+    #
+    # @param channel_id [Integer] The channel number to send on.
+    # @param method [Symbol] The type of protocol method to send.
+    # @param properties [Hash] The properties to apply to the method.
+    # @raise [RabbitMQ::FFI::Error] if a library exception occurs.
+    #
+    def send_request(channel_id, method, properties={})
+      Util.error_check :"sending a request",
+        @conn.send_method(Integer(channel_id), method.to_sym, properties)
+
+      nil
+    end
+
+    # Wait for a specific response on the given channel of the given type
+    # and return the event data for the response when it is received.
+    # Any other events received will be processed or stored internally.
+    #
+    # @param channel_id [Integer] The channel number to watch for.
+    # @param method [Symbol,Array<Symbol>] The protocol method(s) to watch for.
+    # @param timeout [Float] The maximum time to wait for a response in seconds;
+    #   uses the value of {#protocol_timeout} by default.
+    # @raise [RabbitMQ::ServerError] if any error event is received.
+    # @raise [RabbitMQ::FFI::Error::Timeout] if no event is received.
+    # @raise [RabbitMQ::FFI::Error] if a library exception occurs.
+    # @return [Hash] the response data received.
+    #
+    def fetch_response(channel_id, method, timeout: protocol_timeout)
+      methods = Array(method).map(&:to_sym)
+      timeout = Float(timeout) if timeout
+      fetch_response_internal(Integer(channel_id), methods, timeout)
+    end
+
+    # Register a handler for events on the given channel of the given type.
+    # Only one handler for each event type may be registered at a time.
+    # If no callable or block is given, the handler will be cleared.
+    #
+    # @param channel_id [Integer] The channel number to watch for.
+    # @param method [Symbol] The type of protocol method to watch for.
+    # @param callable [#call,nil] The callable handler if no block is given.
+    # @param block [Proc,nil] The handler block to register.
+    # @return [Proc,#call,nil] The given block or callable.
+    # @yieldparam event [Hash] The event passed to the handler.
+    #
+    def on_event(channel_id, method, callable=nil, &block)
+      handler = block || callable
+      raise ArgumentError, "expected block or callable as the event handler" \
+        unless handler.respond_to?(:call)
+
+      @event_handlers[Integer(channel_id)][method.to_sym] = handler
+      handler
+    end
+
+    # Unregister the event handler associated with the given channel and method.
+    #
+    # @param channel_id [Integer] The channel number to watch for.
+    # @param method [Symbol] The type of protocol method to watch for.
+    # @return [Proc,nil] This removed handler, if any.
+    #
+    def clear_event_handler(channel_id, method)
+      @event_handlers[Integer(channel_id)].delete(method.to_sym)
+    end
+
+    # Fetch and handle events in a loop that blocks the calling thread.
+    # The loop will continue until the {#break!} method is called from within
+    # an event handler, or until the given timeout duration has elapsed.
+    #
+    # @param timeout [Float] the maximum time to run the loop, in seconds;
+    #   if none is given, the value is {#protocol_timeout} or until {#break!}
+    # @param block [Proc,nil] if given, the block will be yielded each
+    #   non-exception event received on any channel. Other handlers or
+    #   response fetchings that match the event will still be processed,
+    #   as the block does not consume the event or replace the handlers.
+    # @return [undefined] assume no value - reserved for future use.
+    #
+    def run_loop!(timeout: protocol_timeout, &block)
+      timeout = Float(timeout) if timeout
+      @breaking = false
+      fetch_events(timeout, &block)
+      nil
+    end
+
+    # Stop iterating from within an execution of the {#run_loop!} method.
+    # Call this method only from within an event handler.
+    # It will take effect only after the handler finishes running.
+    #
+    # @return [nil]
+    #
+    def break!
+      @breaking = true
+      nil
+    end
+
+    # Open a new channel of communication and return a new {Channel} object
+    # with convenience methods for communicating on that channel. The
+    # channel will be automatically released if the {Channel} instance is
+    # garbage collected, or if the {Client} connection is {#close}d.
+    #
+    # @param id [Integer,nil] The channel id number to use. If nil or not
+    #   given, a unique channel number will be chosen automatically.
+    # @raise [ArgumentError] If the given channel id number is not unique or
+    #   if the given channel id number is greater than {#max_channels}.
+    # @return [Channel] The new channel handle.
+    #
+    def channel(id=nil)
+      id = allocate_channel(id)
+      finalizer = Proc.new { release_channel(id) }
+      Channel.new(self, @conn, id, finalizer)
+    end
+
+    # Open the specified channel.
+    private def open_channel(id)
+      Util.error_check :"opening a new channel",
+        @conn.send_method(id, :channel_open)
+
+      fetch_response(id, :channel_open_ok)
+    end
+
+    # Re-open the specified channel after unexpected closure.
+    private def reopen_channel(id)
+      Util.error_check :"acknowledging server-initated channel closure",
+        @conn.send_method(id, :channel_close_ok)
+
+      Util.error_check :"reopening channel after server-initated closure",
+        @conn.send_method(id, :channel_open)
+
+      fetch_response(id, :channel_open_ok)
+    end
+
+    # Verify or choose a channel id number that is available for use.
+    private def allocate_channel(id=nil)
+      if id
+        id = Integer(id)
+        raise ArgumentError, "channel #{id} is already in use" if @open_channels[id]
+      elsif @released_channels.empty?
+        id = (@open_channels.keys.sort.last || 0) + 1
+      else
+        id = @released_channels.keys.first
+      end
+      raise ArgumentError, "channel #{id} is too high" if id > max_channels
+
+      already_open = @released_channels.delete(id)
+      open_channel(id) unless already_open
+
+      @open_channels[id] = true
+      @event_handlers[id] ||= {}
+
+      id
+    end
+
+    # Release the given channel id to be reused later and clear its handlers.
+    private def release_channel(id)
+      @open_channels.delete(id)
+      @event_handlers.delete(id)
+      @released_channels[id] = true
+    end
+
+    # Release all channel ids to be reused later.
+    private def release_all_channels
+      @open_channels.clear
+      @event_handlers.clear
+      @released_channels.clear
+    end
+
+    # Execute the handler for this type of event, if any.
+    private def handle_incoming_event(event)
+      if (handlers = @event_handlers[event.fetch(:channel)])
+        if (handler = (handlers[event.fetch(:method)]))
+          handler.call(event)
+        end
+      end
+    end
+
+    # Store the event in short-term storage for retrieval by fetch_response.
+    # If another event is received with the same method name, it will
+    # overwrite this one - fetch_response gets the latest or next by method.
+    # Raises an exception if the incoming event is an error condition.
+    private def store_incoming_event(event)
+      method = event.fetch(:method)
+
+      case method
+      when :channel_close
+        raise_if_server_error!(event)
+      when :connection_close
+        raise_if_server_error!(event)
+      else
+        @incoming_events[event.fetch(:channel)][method] = event
+      end
+    end
+
+    # Raise an exception if the incoming event is an error condition.
+    # Also takes action to reopen the channel or close the connection.
+    private def raise_if_server_error!(event)
+      if (exc = ServerError.from(event))
+        if exc.is_a?(ServerError::ChannelError)
+          reopen_channel(event.fetch(:channel)) # recover by reopening the channel
+        elsif exc.is_a?(ServerError::ConnectionError)
+          close # can't recover here - close and let the user recover manually
+        end
+        raise exc
+      end
+    end
+
+    # Internal implementation of the {#run_loop!} method.
+    private def fetch_events(timeout=protocol_timeout, start=Time.now)
+      @conn.garbage_collect
+
+      while (event = @conn.fetch_next_event(timeout, start))
+        handle_incoming_event(event)
+        store_incoming_event(event)
+        yield event if block_given?
+        break if @breaking
+      end
+    end
+
+    # Internal implementation of the {#fetch_response} method.
+    private def fetch_response_internal(channel_id, methods, timeout=protocol_timeout, start=Time.now)
+      methods.each { |method|
+        found = @incoming_events[channel_id].delete(method)
+        return found if found
+      }
+
+      @conn.garbage_collect_channel(channel_id)
+
+      while (event = @conn.fetch_next_event(timeout, start))
+        handle_incoming_event(event)
+        return event if channel_id == event.fetch(:channel) \
+          && methods.include?(event.fetch(:method))
+        store_incoming_event(event)
+      end
+
+      raise FFI::Error::Timeout, "waiting for response"
+    end
+  end
+end
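
The {Channel} wrappers shown earlier are thin delegates over this new {Client} API, so the same flow can also be driven at the channel-id level directly. Below is a rough, untested sketch assuming only the constructor options and methods defined in this file, plus a local broker at amqp://localhost:5672 with the default guest credentials.

```ruby
require 'rabbitmq'

# URL options are parsed first; explicit options (here vhost) override them.
client = RabbitMQ::Client.new("amqp://localhost:5672", vhost: "/").start
channel = client.channel # allocates a channel id and opens it
id = channel.id

# Low-level request/response on that channel id.
client.send_request(id, :queue_declare, queue: "some_queue", durable: true)
client.fetch_response(id, :queue_declare_ok)

# Register a handler, then drive the event loop from the calling thread.
client.on_event(id, :basic_deliver) do |event|
  puts event.fetch(:method) #=> :basic_deliver
  client.break! # stop run_loop! after this handler returns
end

client.send_request(id, :basic_consume, queue: "some_queue")
client.fetch_response(id, :basic_consume_ok)

client.run_loop!(timeout: 5) # returns after 5 seconds or after break!
client.close
```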