ione-rpc 1.0.0.pre0

checksums.yaml ADDED
@@ -0,0 +1,7 @@
---
SHA1:
  metadata.gz: 80f4a0fbab22cefd0de19d54c779637b15129ea8
  data.tar.gz: 35d701df7cba437f67fb5fd505b457d49d43ad7c
SHA512:
  metadata.gz: ea81e63e8b59f31ac42b60be6e432480a365efa8f7de7da8475eb99acf8877445045e93c77634d14d9eb717cf9e4301b7c0bee2b82dc0366a3bd381b2b61e47e
  data.tar.gz: a2384cb4793ed2dbbe7a344ee535d74844d386b94ff531aa0abad4a12ed1b5cc30461a90ca90cbdd787b761e1ee31b3936665f8c9ef164928387d02354571bc0
data/.yardopts ADDED
@@ -0,0 +1,5 @@
--no-private
--protected
--markup markdown
lib/**/*.rb
-- README
data/README.md ADDED
@@ -0,0 +1,267 @@
# Ione RPC framework

[![Build Status](https://travis-ci.org/iconara/ione-rpc.png?branch=master)](https://travis-ci.org/iconara/ione-rpc)
[![Coverage Status](https://coveralls.io/repos/iconara/ione-rpc/badge.png)](https://coveralls.io/r/iconara/ione-rpc)
[![Blog](http://b.repl.ca/v1/blog-ione-ff69b4.png)](http://architecturalatrocities.com/tagged/ione)

_If you're reading this on GitHub, please note that this is the readme for the development version and that some features described here might not yet have been released. You can find the readme for a specific version either through [rubydoc.info](http://rubydoc.info/find/gems?q=ione-rpc) or via the release tags ([here is an example](https://github.com/iconara/ione-rpc/tree/v1.0.0))._

Ione RPC is a framework for writing server and client components for your Ruby applications. You need to write the request handling logic, but the framework handles most of the hard things for you – including automatic reconnections, load balancing, framing and request multiplexing.

# Installing

There is currently no gem release of Ione RPC, but you can install it from git with Bundler:

```ruby
# in Gemfile
gem 'ione-rpc', github: 'iconara/ione-rpc'
```

# Example

To communicate, the client and the server need to agree on how messages should be encoded. In Ione RPC the client and server need a _codec_ which they will use to encode and decode messages. The easiest way to create a codec is to use `Ione::Rpc::StandardCodec`, which takes an object that conforms to the (more or less) standard Ruby encoder protocol that libraries like JSON, YAML, MessagePack and others implement: `#dump` for encoding, `#load` for decoding (technically it's `.dump` and `.load`, but it depends on the perspective).

`StandardCodec` is stateless, so you can assign your codec to a constant:

```ruby
CODEC = Ione::Rpc::StandardCodec.new(JSON)
```

Using JSON for encoding isn't the most efficient, but you can easily change to MessagePack when needed, or write a little bit more code and use something like [Protocol Buffers](https://code.google.com/p/protobuf/).
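
The only requirement on the encoder object is that it responds to `#dump` and `#load`. As a minimal sketch – the wrapper class below is hypothetical and only illustrates the shape of the protocol, while `Marshal` from the standard library already conforms:

```ruby
# Marshal responds to .dump and .load, so it can be plugged in directly
# (only useful when both peers are Ruby processes)
MARSHAL_CODEC = Ione::Rpc::StandardCodec.new(Marshal)

# A hand-rolled adapter works too, as long as it has #dump and #load
class PrettyJsonEncoder
  def dump(message)
    JSON.pretty_generate(message)
  end

  def load(str)
    JSON.parse(str)
  end
end

PRETTY_CODEC = Ione::Rpc::StandardCodec.new(PrettyJsonEncoder.new)
```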

### A server

When we have a codec the next step is to create the server component. Servers need to implement the `#handle_request` method and return a future with the response.

```ruby
class TranslateServer < Ione::Rpc::Server
  def initialize(port)
    super(port, CODEC)
  end

  def handle_request(request, _)
    case request['message']
    when 'Hello world'
      Ione::Future.resolved('translation' => 'Hallo welt')
    else
      Ione::Future.resolved('error' => 'Entschuldigung, ich verstehe nich')
    end
  end
end
```

It might seem like unnecessary overhead to have to create a future when you just want to return a response – but think of the possibilities: the request handling can be completely asynchronous. Your server will most likely just transform the request into one or more requests to a database or other network services, and if they are handled asynchronously your server will use very few resources and be able to process lots of requests.

Please note that you must absolutely not do any blocking operations in `#handle_request`, as they would block the whole server.
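
To make that concrete, here is a sketch of a non-blocking handler that delegates to a hypothetical asynchronous database client – `@db` and its `#execute` method returning an `Ione::Future` are assumptions made for this example, they are not part of Ione RPC:

```ruby
class TranslateServer < Ione::Rpc::Server
  def initialize(port, db)
    super(port, CODEC)
    # db is assumed to be an asynchronous client whose #execute returns
    # an Ione::Future of rows – it is not something Ione RPC provides
    @db = db
  end

  def handle_request(request, _)
    # nothing blocks here: we return a future that resolves when the
    # database query does, with its result mapped into a response message
    future = @db.execute('SELECT translation FROM phrases WHERE message = ?', request['message'])
    future.map do |rows|
      if (row = rows.first)
        {'translation' => row['translation']}
      else
        {'error' => 'Entschuldigung, ich verstehe nicht'}
      end
    end
  end
end
```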

When you have your server class you need to instantiate it and start it:

```ruby
server = TranslateServer.new(3333)
started_future = server.start
started_future.on_value do |s|
  puts "Server running on port #{s.port}"
end
```

Servers can implement a method called `#handle_connection` to get notified when a client connects – this can be used to create some kind of per-connection state, for example (see the sketch below) – and there are some options that can be set to control low-level network settings, but apart from that, most of the time the code you see above is all that is required.
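
A minimal sketch of such a hook – the exact signature isn't shown in this readme, so the single `connection` argument and its `#host` property are assumptions to be checked against the `Ione::Rpc::Server` documentation:

```ruby
class TranslateServer < Ione::Rpc::Server
  # ...

  # assumed signature: called once per new client connection
  def handle_connection(connection)
    @connections_per_host ||= Hash.new(0)
    # per-connection state can be set up here
    @connections_per_host[connection.host] += 1
  end
end
```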

The server will run in a background thread. If your application is just the server you need to make sure that the main application thread doesn't exit, because that means that the process will exit and the server stops. You can call `sleep` with no argument to put the main thread to sleep forever. The application will still exit when `kill`ed, on ctrl-C, or when you call `Kernel.exit`.
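
Putting it together, the tail of a minimal standalone server script could look like this (using `#value` to block until the server has started, which is described further down):

```ruby
server = TranslateServer.new(3333)
s = server.start.value
puts "Server running on port #{s.port}"
# the reactor runs in a background thread, so keep the main thread alive
sleep
```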

### A client

The client is even simpler than the server. In its simplest form this is all you need:

```ruby
client = Ione::Rpc::Client.new(CODEC, hosts: %w[node1.example.com:3333 node2.example.com:3333])
```

You can give the client a list of one host or many; it will connect to them all and pick one at random to talk to for each request. When a connection is lost the client will automatically try to reconnect, and use the other connections for requests in the meantime.

You can add more hosts with `#add_host`, and you can tell the client to disconnect from a host (or stop trying to reconnect) with `#remove_host`.
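
Both methods accept either a host and a port, or a single host:port string, and return futures that resolve to the client:

```ruby
client.add_host('node3.example.com', 3333)   # host and port
client.add_host('node4.example.com:3333')    # host:port string
client.remove_host('node1.example.com:3333')
```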

To send requests you need to start your client, and then use `#send_request`:

```ruby
started_future = client.start
started_future.on_value do
  response_future = client.send_request('message' => 'Hello world')
  response_future.on_value do |response|
    puts response['translation']
  end
end
```

The client takes care of encoding your request into bytes, sending them over the network to the server, waiting for the response, decoding it and delivering it back to your code.

Maybe you got a bit of a yucky feeling when you read the code above? Did it remind you of the callback hell from Node.js? Everything in Ione RPC that is not instantaneous returns a future. Futures are more pleasant to work with than callbacks, because they compose, so let's rewrite it to take advantage of the combinatorial powers of futures:

```ruby
response_future = client.start.flat_map do |client|
  client.send_request('message' => 'Hello world')
end
translation_future = response_future.map do |response|
  response['translation']
end
translation_future.on_value do |translation|
  puts translation
end
```

That's better. It's still callbacks, of sorts, but these compose. `Ione::Future#flat_map` lets you chain asynchronous operations together and get a future that is the result of the last operation. `Ione::Future#map` is the non-asynchronous version that just transforms the result of a future to something else, just like `Array#map`.

If any of the operations in the chain fails, the returned future fails and the operations after the failing one are never performed. There's a more complex example of working with futures further down.

If you don't care about being asynchronous you can use `Ione::Future#value` to wait for the result of a future to be available:

```ruby
client.start.value
response = client.send_request('message' => 'Hello world').value
puts response['translation']
```

If you choose to do it the asynchronous way, just remember not to do any blocking operations (like calling `#value` on a future) in the blocks you give to `#flat_map`, `#map` or `#on_value`. Doing that will block the whole IO system and can lead to very strange bugs.

# A more advanced client

As you saw above you don't need to create a client class, but if you do there are some more features you can use.

First of all, creating a client class means that you can hide the shape of the messages and present a higher-level interface:

```ruby
class TranslationClient < Ione::Rpc::Client
  def initialize(hosts)
    super(CODEC, hosts: hosts)
  end

  def translate(message)
    send_request('message' => message)
  end
end
```

If you read the part above about how the client randomly selects which server to talk to and thought that wasn't very useful, there's a way to override it: just implement `#choose_connection`:

```ruby
class TranslationClient < Ione::Rpc::Client
  def initialize(hosts)
    super(CODEC, hosts: hosts)
  end

  def translate(message)
    send_request('message' => message, 'routing_key' => message.hash)
  end

  def choose_connection(connections, request)
    connections[request['routing_key'] % connections.size]
  end
end
```

The `#choose_connection` method lets you decide which connection to use for each request. In this example the connection is selected based on the hash of the message, which means that every time the message "Hello world" is sent it will be sent to the same server, while other messages will be sent to others. It doesn't say _which_ server to choose, just that it should always be the same. The connection objects implement `#host` and `#port`, so if you want to do routing that picks a specific server that's possible too.
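
For example, here is a sketch that pins requests to a particular server when the request asks for it – the `'preferred_host'` key is made up for this example:

```ruby
def choose_connection(connections, request)
  # 'preferred_host' is a hypothetical request field, used here only to
  # illustrate routing on the connection's #host property
  if (host = request['preferred_host'])
    connections.find { |c| c.host == host } || connections.sample
  else
    connections.sample
  end
end
```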

As mentioned above, when a server goes down the client will try to reconnect to it. By default it will try to reconnect forever, at increasing intervals (up to a max which by default is around a minute), or until you call `#remove_host`. You can control how many times the client will try to reconnect by implementing `#reconnect?`:

```ruby
class TranslationClient < Ione::Rpc::Client
  # ...

  def reconnect?(host, port, attempts)
    attempts < 5
  end
end
```

The method gets the host and port and the number of attempts, and if you return `false` the reconnection attempts will stop and the host/port combination will be removed, just as if you called `#remove_host`.

Sometimes you implement a protocol that requires the client to send a "startup" message, something that initializes the connection, a hello from the client if you will. You can do this manually, but there's also a special hook for that:

```ruby
class TranslationClient < Ione::Rpc::Client
  # ...

  def initialize_connection(connection)
    send_request({'hello' => {'from' => 'me'}}, connection)
  end
end
```

`#initialize_connection` gets the newly established connection as its argument and must return a future that resolves when the connection has been properly initialized. You can use the special form of `#send_request` that takes a second argument to send a request on a specific connection – this is very important, otherwise your initialization message could be sent over another connection, which wouldn't be very useful.

# Working with futures

```ruby
all_done_future = update_user_awesomeness('sue@example.com', 8)
all_done_future.on_value do
  puts 'All done'
end

# ...

def update_user_awesomeness(email, new_awesomeness_level)
  posts_future = @db.execute('SELECT id FROM posts WHERE author = ?', email)
  # #flat_map composes two asynchronous operations, it returns immediately with a new
  # future that resolves only when the whole chain of operations is complete.
  # In other words: the block below will not run now, but when there is a result
  # from the database query. The future that is returned *represents* the result
  # of the chain of operations performed on the initial result from the database.
  posts_future.flat_map do |result|
    # Don't confuse the #map below with Future#map, this is just a regular
    # Array#map, transforming each row from the database query into something new.
    update_futures = result.map do |row|
      # Each row is used to send another database query, which returns another
      # future, so the result of this #map block will be an array of futures.
      update_post_awesomeness(row['id'], new_awesomeness_level)
    end
    # The database queries launched in the #map block will all execute in parallel,
    # but we want to know when all of them are done. For this we can use Future.all,
    # which (surprise!) returns a new future, but one that resolves when *all* of the
    # source futures resolve – it lets you converge after launching multiple parallel
    # operations. Future.all transforms a list of futures of values to a future of a
    # list of values, or in pseudo types: List[Future[V]] -> Future[List[V]].
    Ione::Future.all(*update_futures)
  end
  # We end up here almost immediately since the #flat_map doesn't run its block
  # until it has to. What we return is the return value from the #flat_map call, which is a
  # future that will eventually resolve when all of the parallel operations we
  # launched are done.
end

def update_post_awesomeness(id, new_awesomeness_level, retry_attempts=3)
  f = @db.execute('UPDATE posts SET awesomeness = ? WHERE id = ?', new_awesomeness_level, id)
  # To handle failure we'll use the complement to #flat_map, which is #fallback. When a
  # future fails, any chained operations will never happen, but sometimes you want to
  # try again, or do some other operation when an error occurs. For this you can use
  # #fallback to transform the failed operation into a successful one.
  f = f.fallback do |error|
    # Instead of the result of the parent future we get the error, and we can decide
    # what to do based on whether or not it is fatal.
    if error.is_a?(TryAgainError) && retry_attempts > 0
      # In this case we want to try again, so we call the method recursively
      # and decrement the number of remaining retries. This will make sure
      # that we don't try forever, it's usually a bad idea to never give up.
      update_post_awesomeness(id, new_awesomeness_level, retry_attempts - 1)
    else
      # If you can't recover from the error you can just raise it again and it will be
      # as if you didn't do anything.
      raise error
    end
  end
  f
end
```

Please refer to [the `Ione::Future` documentation](http://rubydoc.info/gems/ione/frames) for the full story on futures. Coincidentally, the code above is more or less how [cql-rb](https://github.com/iconara/cql-rb), the Cassandra driver where Ione came from, works internally (everything but the `TryAgainError`).

# How to contribute

[See CONTRIBUTING.md](CONTRIBUTING.md)

# Copyright

Copyright 2014 Theo Hultberg/Iconara and contributors.

_Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at_

[http://www.apache.org/licenses/LICENSE-2.0](http://www.apache.org/licenses/LICENSE-2.0)

_Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License._
@@ -0,0 +1,302 @@
# encoding: utf-8

module Ione
  module Rpc
    # This is the base class of client peers.
    #
    # You can either create a subclass and add your own high-level convenience
    # methods for constructing and sending your custom requests, or you can
    # create a standalone client object and call {#send_request}.
    #
    # A subclass may optionally implement {#initialize_connection} to send a
    # message immediately on a successful connection, and {#choose_connection}
    # to decide which connection to use for a request.
    #
    # The client will handle connections to multiple server peers, and
    # automatically reconnect to them when they disconnect.
    class Client
      # Create a new client with the specified codec and options.
      #
      # @param [Object] codec the protocol codec to use to encode requests and
      #   decode responses. See {Ione::Rpc::Codec}.
      # @param [Hash] options
      # @option options [Array<String>] :hosts the hosts (and ports) to connect
      #   to, specified either as an array of host (String) and port (Integer)
      #   pairs (e.g. `[['host1', 1111], ['host2', 2222]]`) or an array of
      #   strings in the format host:port (e.g. `['host1:1111', 'host2:2222']`).
      # @option options [Ione::Io::IoReactor] :io_reactor use this option to
      #   make the client use an existing IO reactor and not create its own.
      #   Please note that {#stop} will still stop the reactor.
      # @option options [Integer] :connection_timeout (5) the number of seconds
      #   to wait for connections to be established before failing.
      # @option options [Integer] :max_channels (128) the maximum number of
      #   channels supported for each connection.
      # @option options [Logger] :logger a logger conforming to the standard
      #   Ruby logger API that will be used to log significant events like
      #   request failures.
      def initialize(codec, options={})
        @codec = codec
        @lock = Mutex.new
        @connection_timeout = options[:connection_timeout] || 5
        @io_reactor = options[:io_reactor] || Io::IoReactor.new
        @max_channels = options[:max_channels] || 128
        @logger = options[:logger]
        @hosts = []
        @connections = []
        Array(options[:hosts]).each { |h| add_host(*h) }
      end

      # A client is connected when it has at least one open connection.
      def connected?
        @lock.synchronize { @connections.any? }
      end

      # Start the client and connect to all hosts. This also starts the IO
      # reactor if it was not already started.
      #
      # The returned future resolves when all hosts have been connected to, and
      # if one or more fails to connect the client will periodically try again,
      # and the future will not resolve until all of them have connected.
      #
      # @return [Ione::Future<Ione::Rpc::Client>] a future that resolves to the
      #   client when all hosts have been connected to.
      def start
        @io_reactor.start.flat_map { connect_all }.map(self)
      end

      # Stop the client and close all connections. This also stops the IO
      # reactor if it has not already stopped.
      #
      # @return [Ione::Future<Ione::Rpc::Client>] a future that resolves to the
      #   client when all connections have closed and the IO reactor has stopped.
      def stop
        @lock.synchronize { @connections = [] }
        @io_reactor.stop.map(self)
      end

      # Add an additional host to connect to. This can be done either before
      # or after the client is started.
      #
      # @param [String] hostname the host to connect to, or the host:port pair (in
      #   which case the port parameter should be `nil`).
      # @param [Integer] port the port to connect to, or `nil` if the host is
      #   a string in the format host:port.
      # @return [Ione::Future<Ione::Rpc::Client>] a future that resolves to the
      #   client when the host has been connected to.
      def add_host(hostname, port=nil)
        hostname, port = normalize_address(hostname, port)
        promise = nil
        @lock.synchronize do
          _, _, promise = @hosts.find { |h, p, _| h == hostname && p == port }
          if promise
            return promise.future
          else
            promise = Promise.new
            @hosts << [hostname, port, promise]
          end
        end
        if @io_reactor.running?
          promise.observe(connect(hostname, port))
        end
        promise.future.map(self)
      end

      # Remove a host and disconnect any connections to it. This can be done
      # either before or after the client is started.
      #
      # @param [String] hostname the host to disconnect from, or the host:port pair (in
      #   which case the port parameter should be `nil`).
      # @param [Integer] port the port, or `nil` if the host is
      #   a string in the format host:port.
      # @return [Ione::Future<Ione::Rpc::Client>] a future that resolves to the
      #   client (immediately, this is mostly to be consistent with #add_host)
      def remove_host(hostname, port=nil)
        hostname, port = normalize_address(hostname, port)
        @lock.synchronize do
          index = @hosts.index { |h, p, _| h == hostname && p == port }
          if index
            @hosts.delete_at(index)
            if (connection = @connections.find { |c| c.host == hostname && c.port == port })
              connection.close
            end
          end
        end
        Future.resolved(self)
      end

      # Send a request to a server peer. The peer chosen is determined by the
      # implementation of {#choose_connection}, which is random selection by
      # default.
      #
      # If a connection closes between the point where it was chosen and when
      # the message was written to it, the request is retried on another
      # connection. For all other errors the request is not retried and it is
      # up to the caller to determine if the request is safe to retry.
      #
      # If a logger has been specified the following will be logged:
      # * A warning when a connection has closed and the request will be retried
      # * A warning when a request fails for another reason
      # * A warning when there are no open connections
      #
      # @param [Object] request the request to send.
      # @param [Object] connection the connection to send the request on. This
      #   parameter is internal and should only be used from {#initialize_connection}.
      # @return [Ione::Future<Object>] a future that resolves to the response
      #   from the server, or fails because there was an error while processing
      #   the request (this is not the same thing as the server sending an
      #   error response – that is protocol specific and up to the implementation
      #   to handle), or when there was no connection open.
      def send_request(request, connection=nil)
        connection = connection || @lock.synchronize { choose_connection(@connections, request) }
        if connection
          f = connection.send_message(request)
          f = f.fallback do |error|
            if error.is_a?(Io::ConnectionClosedError)
              @logger.warn('Request failed because the connection closed, retrying') if @logger
              send_request(request)
            else
              raise error
            end
          end
          f.on_failure do |error|
            @logger.warn('Request failed: %s' % error.message) if @logger
          end
          f
        else
          @logger.warn('Could not send request: not connected') if @logger
          Future.failed(Io::ConnectionError.new('Not connected'))
        end
      rescue => e
        Future.failed(e)
      end

      protected

      # Override this method to send a request when a connection has been
      # established, but before the future returned by {#start} resolves.
      #
      # It's important that if you need to send a special message to initialize
      # a connection that you send it to the right connection. To do this pass
      # the connection as second argument to {#send_request}, see the example
      # below.
      #
      # @example Sending a startup request
      #   def initialize_connection(connection)
      #     send_request(MyStartupRequest.new, connection)
      #   end
      #
      # @return [Ione::Future] a future that resolves when the initialization
      #   is complete. If this future fails the connection fails.
      def initialize_connection(connection)
        Future.resolved
      end

      # Override this method to implement custom request routing strategies.
      # Before a request is encoded and sent over a connection this method will
      # be called with all available connections and the request object (i.e.
      # the object passed to {#send_request}).
      #
      # The default implementation picks a random connection.
      #
      # The connection objects have a `#host` property that you can use if you
      # want to do routing based on host.
      #
      # @example Routing messages consistently based on a property of the request
      #   def choose_connection(connections, request)
      #     connections[request.some_property.hash % connections.size]
      #   end
      #
      # @param [Array<Object>] connections all the open connections.
      # @param [Object] request the request to be sent.
      # @return [Object] the connection that should receive the request.
      def choose_connection(connections, request)
        connections.sample
      end

      def reconnect?(host, port, attempts)
        true
      end

      private

      def connect_all
        hosts = @lock.synchronize { @hosts.dup }
        futures = hosts.map do |host, port, promise|
          f = connect(host, port)
          promise.observe(f)
          f
        end
        Future.all(*futures)
      end

      def connect(host, port, next_timeout=nil, attempts=1)
        if @io_reactor.running?
          @logger.debug('Connecting to %s:%d' % [host, port]) if @logger
          f = @io_reactor.connect(host, port, @connection_timeout) do |connection|
            create_connection(connection)
          end
          f.on_value(&method(:handle_connected))
          f = f.fallback do |e|
            if connect?(host, port) && reconnect?(host, port, attempts)
              timeout = next_timeout || @connection_timeout
              max_timeout = @connection_timeout * 10
              next_timeout = [timeout * 2, max_timeout].min
              @logger.warn('Failed connecting to %s:%d, will try again in %ds' % [host, port, timeout]) if @logger
              ff = @io_reactor.schedule_timer(timeout)
              ff.flat_map do
                connect(host, port, next_timeout, attempts + 1)
              end
            else
              @logger.info('Not reconnecting to %s:%d' % [host, port]) if @logger
              remove_host(host, port)
              raise e
            end
          end
          f.flat_map do |connection|
            initialize_connection(connection).map(connection)
          end
        else
          Future.failed(Io::ConnectionError.new('IO reactor stopped while connecting to %s:%d' % [host, port]))
        end
      end

      def create_connection(raw_connection)
        Ione::Rpc::ClientPeer.new(raw_connection, @codec, @max_channels)
      end

      def handle_connected(connection)
        @logger.info('Connected to %s:%d' % [connection.host, connection.port]) if @logger
        connection.on_closed { |error| handle_disconnected(connection, error) }
        if connect?(connection.host, connection.port)
          @lock.synchronize { @connections << connection }
        else
          connection.close
        end
      end

      def connect?(host, port)
        hosts = @lock.synchronize { @hosts.dup }
        hosts.any? { |h, p, _| h == host && p == port }
      end

      def handle_disconnected(connection, error=nil)
        message = 'Connection to %s:%d closed' % [connection.host, connection.port]
        if error
          @logger.warn(message << ' unexpectedly: ' << error.message) if @logger
        else
          @logger.info(message) if @logger
        end
        @lock.synchronize { @connections.delete(connection) }
        connect(connection.host, connection.port) if error
      end

      def normalize_address(host, port)
        if port.nil?
          host, port = host.split(':')
        end
        port = port.to_i
        return host, port
      end
    end
  end
end
@@ -0,0 +1,78 @@
# encoding: utf-8

require 'ione'


module Ione
  module Rpc
    # @private
    class ClientPeer < Peer
      def initialize(connection, codec, max_channels)
        super(connection, codec)
        @lock = Mutex.new
        # each channel slot holds the promise of the request currently in
        # flight on that channel, or nil when the channel is free
        @channels = [nil] * max_channels
        @queue = []
      end

      def send_message(request)
        promise = Ione::Promise.new
        channel = @lock.synchronize do
          take_channel(promise)
        end
        if channel
          write_message(request, channel)
        else
          # no free channel, hold the request until a response frees one
          @lock.synchronize do
            @queue << [request, promise]
          end
        end
        promise.future
      end

      private

      def handle_message(response, channel)
        promise = @lock.synchronize do
          promise = @channels[channel]
          @channels[channel] = nil
          promise
        end
        if promise
          promise.fulfill(response)
        end
        flush_queue
      end

      def flush_queue
        @lock.synchronize do
          count = 0
          max = @queue.size
          while count < max
            request, promise = @queue[count]
            if (channel = take_channel(promise))
              write_message(request, channel)
              count += 1
            else
              break
            end
          end
          @queue = @queue.drop(count)
        end
      end

      def take_channel(promise)
        if (channel = @channels.index(nil))
          @channels[channel] = promise
          channel
        end
      end

      def handle_closed(cause=nil)
        error = Io::ConnectionClosedError.new('Connection closed')
        promises_to_fail = @lock.synchronize { @channels.reject(&:nil?) }
        promises_to_fail.each { |p| p.fail(error) }
        super
      end
    end
  end
end