fiber_connection_pool 0.1.2 → 0.2.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA1:
- metadata.gz: 654c0a80fecfc77daa838a3a0611ad534e4413fc
- data.tar.gz: cbb3bd2ccb7f50fc534d98f976cfe2ec42c87302
+ metadata.gz: 3805c4817fd7ddab4eaafcdf30bacdd99dd79646
+ data.tar.gz: 2af6d4c85fd8d672b7d8c0b9eaea6a02a76e3658
  SHA512:
- metadata.gz: 4f01cfc453ecfc286069f18ac34545c079a64214b2e99dee2a2ef3ef6c1414e72c2da586d653d23e61a35d363c62f28bcd45f160bab9d5d93323cf92e2d9765a
- data.tar.gz: c0995f388b1552a246cc9f68ab1ebcc929cac66a088b0aa1a4d0743bae84b5f34f853e34b0164d1c323e458e81be4a249c0d0d52fe7fc9347b8e610ac76af2a2
+ metadata.gz: 65a29c9021d57a6f7de8390aa28cf56a5d330ed7b1259dd760b6b4dd5e5be67d8a7e5b1d0c127fb62d7b7c0d252746866269e4fd2a134cf2ab92e8dcba5a9b8c
+ data.tar.gz: ae6b1518828313527b91be4005e6cdc510f364158ed7626df6dff083feee43166006267baba0aae70f519df54bc20ee93bdf0b94e0fd97072bacf3fd6b4e823b
data/.travis.yml CHANGED
@@ -4,9 +4,3 @@ rvm:
  - 1.9.3
  - 2.0.0
  - rbx-19mode
-
- matrix:
- allow_failures:
- - branches:
- except:
- - master
data/README.md CHANGED
@@ -47,25 +47,31 @@ It just keeps an array (the internal pool) holding the result of running
  the given block _size_ times. Inside the reactor loop (either EventMachine's or Celluloid's),
  each request is wrapped on a Fiber, and then `pool` plays its magic.

- When a method `query_me` is called on `pool` and it's not one of its own methods,
- then it:
+ ``` ruby
+ results = pool.query_me(sql)
+ ```

- 1. reserves one connection from the internal pool and associates it __with the current Fiber__
- 2. if no connection is available, then that Fiber stays on a _pending_ queue, and __is yielded__
- 3. when a connection is available, then the pool calls `query_me` on that `MyFancyConnection` instance
- 4. when `query_me` returns, the reserved instance is released again,
- and the next Fiber on the _pending_ queue __is resumed__
- 5. the return value is sent back to the caller
+ When a method `query_me` is called on `pool` it:
+
+ 1. Reserves one connection from the internal pool and associates it __with the current fiber__.
+ 2. If no connection is available, then that fiber stays on a _pending_ queue,
+ and __is yielded__ until another connection is released.
+ 3. When a connection is available, then the pool calls `query_me` on that `MyFancyConnection` instance.
+ 4. When `query_me` returns, the reserved instance is released again,
+ and the next fiber on the _pending_ queue __is resumed__.
+ 5. The return value is sent back to the caller.

  Methods on the `MyFancyConnection` instance should yield the fiber before
  performing any blocking IO. That returns control to the underlying reactor,
  which spawns another fiber to process the next request, while the previous
  one is still waiting for the IO response. That new fiber will get its own
  connection from the pool, or else it will yield until there
- is one available.
+ is one available. That behaviour is implemented in `Mysql2::EM::Client`
+ from [em-synchrony](https://github.com/igrigorik/em-synchrony),
+ and in a patched version of [ruby-mysql](https://github.com/rubencaro/ruby-mysql), for example.
 
- The whole process looks synchronous from the Fiber perspective, _because it is_.
- The Fiber will really block ( _yield_ ) until it gets the result.
+ The whole process looks synchronous from the fiber perspective, _because it is_, indeed.
+ The fiber will really block (or _yield_) until it gets the result.

  ``` ruby
  results = pool.query_me(sql)
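For reference, here is a minimal end-to-end sketch of the flow described above under em-synchrony. It is only illustrative: it reuses the hypothetical `MyFancyConnection` / `query_me` placeholders from this README, and the wait-and-stop loop mirrors the gem's own test helper.

``` ruby
require 'em-synchrony'
require 'fiber_connection_pool'

EM.synchrony do
  # one pool per reactor, built once
  pool = FiberConnectionPool.new(:size => 5) { MyFancyConnection.new }

  # wrap each request in its own fiber; the pool hands one connection to each
  fibers = Array.new(15) { Fiber.new { puts pool.query_me('some sql') } }
  fibers.each { |f| f.resume }

  # wait until every fiber is done, then stop the reactor
  EM::Synchrony.sleep 0.01 while fibers.any?(&:alive?)
  EM.stop
end
```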
@@ -77,21 +83,21 @@ The magic resides on the fact that other fibers are being processed while this o
  Not thread-safe
  ------------------

- `FiberConnectionPool` is not thread-safe right now. You will not be able to use it
+ `FiberConnectionPool` is not thread-safe. You will not be able to use it
  from different threads, as eventually it will try to resume a Fiber that resides
  on a different Thread. That will raise a FiberError ( _"calling a fiber across threads"_ ).
- Maybe one day we add that feature too.
+ Maybe one day we'll add that feature too. Or maybe it's not worth the added code complexity.

- We have tested it on Goliath servers having one pool on each server instance, and on Reel servers
- having one pool on each Actor thread. Take a look at the `examples` folder for details.
+ We use it, without needing thread-safety, on Goliath servers having one pool on each server instance,
+ and on Reel servers having one pool on each Actor thread. Take a look at the `examples` folder for details.

- MySQL specific
+ Generic
  ------------------

- By now we have only thought and tested it to be used with MySQL connections.
- For EventMachine by using `Mysql2::EM::Client` from [em-synchrony](https://github.com/igrigorik/em-synchrony).
+ We use it extensively with MySQL connections on Goliath servers by using `Mysql2::EM::Client`
+ from [em-synchrony](https://github.com/igrigorik/em-synchrony).
  And for Celluloid by using a patched version of [ruby-mysql](https://github.com/rubencaro/ruby-mysql).
- We plan on removing any MySQL specific code, so it becomes completely generic. Does not seem so hard to achieve.
+ As of 0.2 there is no MySQL-specific code, so it can be used with any kind of connection that can be fibered.
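As an illustration, a connection "that can be fibered" is simply one whose blocking calls yield the current fiber instead of blocking the thread. A hypothetical minimal sketch, mirroring the pattern of the gem's test helper, with `EM::Synchrony.sleep` standing in for real non-blocking IO:

``` ruby
require 'em-synchrony'

class MyFancyConnection
  # the fiber yields here and the reactor resumes it when the "IO" completes
  def query_me(sql)
    EM::Synchrony.sleep 0.05
    "result of: #{sql}"
  end
end
```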
 
  Reacting to connection failure
  ------------------
@@ -102,8 +108,81 @@ react as you would do normally.

  You have to be aware that the connection instance will remain in the pool, and other fibers
  will surely use it. If the Exception you rescued indicates that the connection should be
- recreated, you can call `recreate_connection` passing it a new instance. The instance that
- just failed will be replaced inside the pool by the brand new connection.
+ recreated or otherwise treated, there's a way to access that particular connection:
+
+ ``` ruby
+ begin
+
+   pool.bad_query('will make me worse')
+
+ rescue BadQueryMadeMeWorse
+
+   pool.with_failed_connection do |connection|
+     puts "Replacing #{connection.inspect} with a new one!"
+     MyFancyConnection.new
+   end
+
+ end
+ ```
+
+ The pool saves the connection when it raises an exception on a fiber, and `with_failed_connection` lets
+ you execute a block of code over it. The block must return a connection instance, which will be put inside the pool
+ in place of the failed one. It can be the same instance after being fixed, or a brand new one.
+ The call to `with_failed_connection` must be made from the very same
+ fiber that raised the exception.
+
+ Also, the reference to the failed connection is lost after any further method execution from that
+ fiber, so you must call `with_failed_connection` before any other method that may acquire a new instance from the pool.
+
+ Any reference to a failed connection is released once the fiber is dead, but as you must access it from the fiber itself, that should not be a worry.
+
+ Save data
+ -------------------
+
+ Sometimes we need to get something more than the return value from the `query_me` call, but that _something_ is related to _that_ call on _that_ connection.
+ For example, maybe you need to call `affected_rows` right after the query was made on that particular connection.
+ If you make those extra calls on the `pool` object, it will acquire a new connection from the pool and run them on it. So that's useless.
+ There is a way to gather all that data from the connection so we can work on it, while still releasing the connection for another fiber to use.
+
+ ``` ruby
+ # define the pool
+ pool = FiberConnectionPool.new(:size => 5){ MyFancyConnection.new }
+
+ # add a request to save data for each successful call on a connection
+ # it will save the return value inside a hash on the key ':affected_rows'
+ # and make it available for the fiber that made the call
+ pool.save_data(:affected_rows) do |connection|
+   connection.affected_rows
+ end
+ ```
+
+ Then from our fiber:
+
+ ``` ruby
+ pool.query_me('affecting 5 rows right now')
+
+ # recover gathered data for this fiber
+ puts pool.gathered_data
+ # => { :affected_rows => 5 }
+ ```
+
+ You must access the gathered data from the same fiber that triggered its gathering.
+ Also, any new call to `query_me` or any other connection method will execute the block again,
+ overwriting that key on the hash (unless you code to prevent it, of course). Usually you would use the gathered data
+ right after making the query that generated it. But you could:
+
+ ``` ruby
+ # save only the first run
+ pool.save_data(:affected_rows) do |connection|
+   pool.gathered_data[:affected_rows] || connection.affected_rows
+ end
+ ```
+
+ You can define as many `save_data` blocks as you want, and run any wonder Ruby lets you. But great power comes with great responsibility.
+ You must consider that every save_data request is executed for _every call_ made on the pool from that fiber.
+ So keep them stupid simple, and blindly fast, at least as much as you can. Otherwise they will hurt performance.
+
+ Any gathered data is released once the fiber is dead, but as you must access it from the fiber itself, that should not be a worry.
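The 0.2.0 code (see the library diff below) also adds `clear_save_data_requests` and `release_data` for manual cleanup; a small hedged sketch of how they could be used:

``` ruby
# stop gathering anything new from now on
pool.clear_save_data_requests

# drop whatever was already gathered for the current fiber
pool.release_data(Fiber.current)
```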
 
  Supported Platforms
  -------------------
@@ -111,11 +190,6 @@ Supported Platforms
  Used in production environments on Ruby 1.9.3 and 2.0.0.
  Tested against Ruby 1.9.3, 2.0.0, and rbx-19mode ([See details..](http://travis-ci.org/rubencaro/fiber_connection_pool)).

- TODOS
+ More to come!
  -------------------
-
- * no MySQL-specific code
- * better testing
- * improve reaction to failure
- * better in-code docs
- * make thread-safe
+ See [issues](https://github.com/rubencaro/fiber_connection_pool/issues?direction=desc&sort=updated&state=open)
data/fiber_connection_pool.gemspec CHANGED
@@ -6,7 +6,7 @@ Gem::Specification.new do |s|
  s.version = FiberConnectionPool::VERSION
  s.platform = Gem::Platform::RUBY
  s.authors = ["Ruben Caro", "Oriol Francès"]
- s.email = ["ruben@lanuez.org"]
+ s.email = ["ruben.caro@lanuez.org"]
  s.homepage = "https://github.com/rubencaro/fiber_connection_pool"
  s.summary = "Fiber-based generic connection pool for Ruby"
  s.description = "Fiber-based generic connection pool for Ruby, allowing
@@ -14,8 +14,7 @@ Gem::Specification.new do |s|
  as provided by EventMachine or Celluloid."

  s.files = `git ls-files`.split("\n")
- s.test_files = `git ls-files -- {test,spec,features}/*`.split("\n")
- s.executables = `git ls-files -- bin/*`.split("\n").map{ |f| File.basename(f) }
+ s.test_files = `git ls-files -- test/*`.split("\n")
  s.require_paths = ["lib"]
  s.license = "GPLv3"

data/lib/fiber_connection_pool/exceptions.rb ADDED
@@ -0,0 +1,6 @@
+
+ class NoBackupConnection < Exception
+ def initialize
+ super "No backup connection for this fiber!"
+ end
+ end
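This new exception is raised by `with_failed_connection` when the current fiber has no failed connection on record (the new tests below assert exactly that). A small hedged sketch, assuming a `pool` built as in the README:

``` ruby
begin
  pool.with_failed_connection { |conn| conn }
rescue NoBackupConnection => e
  puts e.message  # => "No backup connection for this fiber!"
end
```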
data/lib/fiber_connection_pool.rb CHANGED
@@ -1,9 +1,13 @@
  require 'fiber'
+ require_relative 'fiber_connection_pool/exceptions'

  class FiberConnectionPool
- VERSION = '0.1.2'
+ VERSION = '0.2.0'

- attr_accessor :saved_data
+ RESERVED_BACKUP_TTL_SECS = 30 # reserved backup cleanup trigger
+ SAVED_DATA_TTL_SECS = 30 # saved_data cleanup trigger
+
+ attr_accessor :saved_data, :reserved_backup

  # Initializes the pool with 'size' instances
  # running the given block to get each one. Ex:
@@ -16,33 +20,113 @@ class FiberConnectionPool
  @saved_data = {} # placeholder for requested save data
  @reserved = {} # map of in-progress connections
  @reserved_backup = {} # backup map of in-progress connections, to catch failures
+ @last_backup_cleanup = Time.now # reserved backup cleanup trigger
  @available = [] # pool of free connections
  @pending = [] # pending reservations (FIFO)
+ @save_data_requests = {} # blocks to be yielded to save data
+ @last_data_cleanup = Time.now # saved_data cleanup trigger

  @available = Array.new(opts[:size].to_i) { yield }
  end

+ # DEPRECATED: use save_data
  def save_data_for_fiber
- @saved_data[Fiber.current.object_id] ||= {}
+ nil
  end

+ # DEPRECATED: use release_data
  def stop_saving_data_for_fiber
- @saved_data.delete Fiber.current.object_id
+ @saved_data.delete Fiber.current
  end

- ##
- # avoid method_missing for most common methods
+ # Add a save_data request to the pool.
+ # The given block will be executed after each successful
+ # call to -any- method on the connection.
+ # The connection and the method name are passed to the block.
+ #
+ # The returned value will be saved in pool.saved_data[Fiber.current][key],
+ # and will be kept as long as the fiber stays alive.
+ #
+ # Ex:
+ #
+ # # (...right after pool's creation...)
+ # pool.save_data(:hey_or_hoo) do |conn, method|
+ # return 'hey' if method == 'query'
+ # 'hoo'
+ # end
+ #
+ # # (...from a reactor fiber...)
+ # myfiber = Fiber.current
+ # pool.query('select anything from anywhere')
+ # puts pool.saved_data[myfiber][:hey_or_hoo]
+ # => 'hey'
+ #
+ # # (...eventually fiber dies...)
+ # puts pool.saved_data[myfiber].inspect
+ # => nil
+ #
+ def save_data(key, &block)
+ @save_data_requests[key] = block
+ end
+
+ # Return the gathered data for this fiber
+ #
+ def gathered_data
+ @saved_data[Fiber.current]
+ end
+
+ # Clear any save_data requests in the pool.
+ # No data will be saved after this, unless new requests are added with #save_data.
+ #
+ def clear_save_data_requests
+ @save_data_requests = {}
+ end
+
+ # Delete any saved_data for given fiber
+ #
+ def release_data(fiber)
+ @saved_data.delete(fiber)
+ end
+
+ # Delete any saved_data held for dead fibers
+ #
+ def save_data_cleanup
+ @saved_data.dup.each do |k,v|
+ @saved_data.delete(k) if not k.alive?
+ end
+ @last_data_cleanup = Time.now
+ end
+
+ # Avoid method_missing stack for 'query'
  #
  def query(sql)
- execute(false,'query') do |conn|
+ execute('query') do |conn|
  conn.query sql
  end
  end

+ # True if the given connection is anywhere inside the pool
+ #
+ def has_connection?(conn)
+ (@available + @reserved.values).include?(conn)
+ end

+ # DEPRECATED: use with_failed_connection
  def recreate_connection(new_conn)
- bad_conn = @reserved_backup[Fiber.current.object_id]
- release_backup Fiber.current
+ with_failed_connection { new_conn }
+ end
+
+ # Identify the connection that just failed for current fiber.
+ # Pass it to the given block, which must return a valid instance of connection.
+ # After that, put the new connection into the pool in failed connection's place.
+ # Raises NoBackupConnection if cannot find the failed connection instance.
+ #
+ def with_failed_connection
+ f = Fiber.current
+ bad_conn = @reserved_backup[f]
+ raise NoBackupConnection.new if bad_conn.nil?
+ new_conn = yield bad_conn
+ release_backup f
  @available.reject!{ |v| v == bad_conn }
  @reserved.reject!{ |k,v| v == bad_conn }
  @available.push new_conn
@@ -53,35 +137,62 @@ class FiberConnectionPool
  end
  end

+ # Delete any backups held for dead fibers
+ #
+ def backup_cleanup
+ @reserved_backup.dup.each do |k,v|
+ @reserved_backup.delete(k) if not k.alive?
+ end
+ @last_backup_cleanup = Time.now
+ end
+
  private

  # Choose first available connection and pass it to the supplied
- # block. This will block indefinitely until there is an available
+ # block. This will block (yield) indefinitely until there is an available
  # connection to service the request.
- def execute(async,method)
+ #
+ # After running the block, save requested data and release the connection.
+ #
+ def execute(method)
  f = Fiber.current
-
  begin
+ # get a connection and use it
  conn = acquire(f)
  retval = yield conn
- if !@saved_data[Fiber.current.object_id].nil?
- @saved_data[Fiber.current.object_id]['affected_rows'] = conn.affected_rows
- end
- release_backup(f) if !async and method == 'query'
+
+ # save anything requested
+ process_save_data(f, conn, method)
+
+ # successful run, release_backup
+ release_backup(f)
+
  retval
  ensure
- release(f) if not async
+ release(f)
  end
  end

- # Acquire a lock on a connection and assign it to executing fiber
- # - if connection is available, pass it back to the calling block
- # - if pool is full, yield the current fiber until connection is available
- def acquire(fiber)
+ # Run each save_data_block over the given connection
+ # and save the data for the given fiber.
+ # Also perform cleanup if TTL is past
+ #
+ def process_save_data(fiber, conn, method)
+ @save_data_requests.each do |key,block|
+ @saved_data[fiber] ||= {}
+ @saved_data[fiber][key] = block.call(conn, method)
+ end
+ # try cleanup
+ save_data_cleanup if (Time.now - @last_data_cleanup) >= SAVED_DATA_TTL_SECS
+ end

+ # Acquire a lock on a connection and assign it to given fiber
+ # If no connection is available, yield the given fiber on the pending array
+ #
+ def acquire(fiber)
  if conn = @available.pop
  @reserved[fiber.object_id] = conn
- @reserved_backup[fiber.object_id] = conn
+ @reserved_backup[fiber] = conn
  conn
  else
  Fiber.yield @pending.push fiber
@@ -90,13 +201,18 @@ class FiberConnectionPool
  end

  # Release connection from the backup hash
+ # Also perform cleanup if TTL is past
+ #
  def release_backup(fiber)
- @reserved_backup.delete(fiber.object_id)
+ @reserved_backup.delete(fiber)
+ # try cleanup
+ backup_cleanup if (Time.now - @last_backup_cleanup) >= RESERVED_BACKUP_TTL_SECS
  end

  # Release connection assigned to the supplied fiber and
  # resume any other pending connections (which will
  # immediately try to run acquire on the pool)
+ #
  def release(fiber)
  @available.push(@reserved.delete(fiber.object_id)).compact!

@@ -107,29 +223,13 @@ class FiberConnectionPool

  # Allow the pool to behave as the underlying connection
  #
- # If the requesting method begins with "a" prefix, then
- # hijack the callbacks and errbacks to fire a connection
- # pool release whenever the request is complete. Otherwise
- # yield the connection within execute method and release
- # once it is complete (assumption: fiber will yield until
- # data is available, or request is complete)
+ # Yield the connection within execute method and release
+ # once it is complete (assumption: fiber will yield while
+ # waiting for IO, allowing the reactor run other fibers)
  #
  def method_missing(method, *args, &blk)
- async = (method[0,1] == "a")
-
- execute(async,method) do |conn|
- df = conn.send(method, *args, &blk)
-
- if async
- fiber = Fiber.current
- df.callback do
- release(fiber)
- release_backup(fiber)
- end
- df.errback { release(fiber) }
- end
-
- df
+ execute(method) do |conn|
+ conn.send(method, *args, &blk)
  end
  end
  end
data/test/fiber_connection_pool_test.rb CHANGED
@@ -1,4 +1,3 @@
- Thread.abort_on_exception = true
  require 'helper'

  class TestFiberConnectionPool < Minitest::Test
@@ -6,11 +5,12 @@ class TestFiberConnectionPool < Minitest::Test
  def test_blocking_behaviour
  # get pool and fibers
  pool = FiberConnectionPool.new(:size => 5) { ::BlockingConnection.new(:delay => 0.05) }
+ info = { :threads => [], :fibers => [], :instances => []}

- fibers = Array.new(15){ Fiber.new { pool.do_something } }
+ fibers = Array.new(15){ Fiber.new { pool.do_something(info) } }

  a = Time.now
- result = fibers.map(&:resume)
+ fibers.each{ |f| f.resume }
  b = Time.now

  # 15 fibers on a size 5 pool, but -blocking- connections
@@ -20,36 +20,28 @@ class TestFiberConnectionPool < Minitest::Test
  # Also we only use the first connection from the pool,
  # because as we are -blocking- it's always available
  # again for the next request
- assert_equal 1, result.uniq.count
+ # we should have visited 1 thread, 15 fibers and 1 instances
+ info.dup.each{ |k,v| info[k] = v.uniq }
+ assert_equal 1, info[:threads].count
+ assert_equal 15, info[:fibers].count
+ assert_equal 1, info[:instances].count
  end

  def test_em_synchrony_behaviour
- require 'em-synchrony'
-
- a = b = nil
  info = { :threads => [], :fibers => [], :instances => []}

- EM.synchrony do
- # get pool and fibers
- pool = FiberConnectionPool.new(:size => 5) { ::EMSynchronyConnection.new(:delay => 0.05) }
+ # get pool and fibers
+ pool = FiberConnectionPool.new(:size => 5) { ::EMSynchronyConnection.new(:delay => 0.05) }

- fibers = Array.new(15){ Fiber.new { pool.do_something(info) } }
+ fibers = Array.new(15){ Fiber.new { pool.do_something(info) } }

- a = Time.now
- fibers.each{ |f| f.resume }
- # wait all fibers to end
- while fibers.any?{ |f| f.alive? } do
- EM::Synchrony.sleep 0.01
- end
- b = Time.now
- EM.stop
- end
+ lapse = run_em_reactor fibers

  # 15 fibers on a size 5 pool, and -non-blocking- connections
  # with a 0.05 delay we expect to spend at least: 0.05*15/5 = 0.15
  # plus some breeze lost on precision on the wait loop
  # then we should be under 0.20 for sure
- assert_operator((b - a), :<, 0.20)
+ assert_operator(lapse, :<, 0.20)

  # we should have visited 1 thread, 15 fibers and 5 instances
  info.dup.each{ |k,v| info[k] = v.uniq }
@@ -58,6 +50,11 @@ class TestFiberConnectionPool < Minitest::Test
  assert_equal 5, info[:instances].count
  end

+ def test_celluloid_behaviour
+ skip 'Could not test celluloid 0.15.0pre, as it would not start reactor on test environment.
+ See the examples folder for a working celluloid (reel) server.'
+ end
+
  def test_size_is_mandatory
  assert_raises ArgumentError do
  FiberConnectionPool.new { ::BlockingConnection.new }
@@ -70,5 +67,170 @@ class TestFiberConnectionPool < Minitest::Test
  end
  end

+ def test_failure_reaction
+ info = { :instances => [] }
+
+ # get pool and fibers
+ pool = FiberConnectionPool.new(:size => 5) { ::EMSynchronyConnection.new(:delay => 0.05) }
+
+ fibers = Array.new(14){ Fiber.new { pool.do_something(info) } }
+
+ failing_fiber = Fiber.new do
+ begin
+ pool.fail(info)
+ rescue
+ pool.with_failed_connection do |connection|
+ info[:repaired_connection] = connection
+ # replace it in the pool
+ ::EMSynchronyConnection.new(:delay => 0.05)
+ end
+ end
+ end
+ # put it among others, not the first or the last
+ # so we see it does not mistake the failing connection
+ fibers.insert 7,failing_fiber
+
+ run_em_reactor fibers
+
+ # we should have visited 1 thread, 15 fibers and 6 instances (including failed)
+ info.dup.each{ |k,v| info[k] = v.uniq if v.is_a?(Array) }
+ assert_equal 6, info[:instances].count
+
+ # assert we do not lose track of failing connection
+ assert_equal info[:repaired_connection], info[:failing_connection]
+
+ # assert we replaced it
+ refute pool.has_connection?(info[:failing_connection])
+
+ # nothing left
+ assert_equal(0, pool.reserved_backup.count)
+
+ # if dealing with failed connection where you shouldn't...
+ assert_raises NoBackupConnection do
+ pool.with_failed_connection{ |c| 'boo' }
+ end
+ end
+
+ def test_reserved_backups
+ # create pool, run fibers and gather info
+ pool, info = run_reserved_backups
+
+ # one left
+ assert_equal(1, pool.reserved_backup.count)
+
+ # fire cleanup
+ pool.backup_cleanup
+
+ # nothing left
+ assert_equal(0, pool.reserved_backup.count)
+
+ # assert we did not replace it
+ assert pool.has_connection?(info[:failing_connection])
+ end
+
+ def test_auto_cleanup_reserved_backups
+ # lower ttl to force auto cleanup
+ prev_ttl = force_constant FiberConnectionPool, :RESERVED_BACKUP_TTL_SECS, 0
+
+ # create pool, run fibers and gather info
+ pool, info = run_reserved_backups
+
+ # nothing left, because failing fiber was not the last to run
+ # the following fiber made the cleanup
+ assert_equal(0, pool.reserved_backup.count)
+
+ # assert we did not replace it
+ assert pool.has_connection?(info[:failing_connection])
+ ensure
+ # restore
+ force_constant FiberConnectionPool, :RESERVED_BACKUP_TTL_SECS, prev_ttl
+ end
+
+ def test_save_data
+ # create pool, run fibers and gather info
+ pool, fibers, info = run_saved_data
+
+ # gathered data for all 4 fibers
+ assert fibers.all?{ |f| not pool.saved_data[f].nil? },
+ "fibers: #{fibers}, saved_data: #{pool.saved_data}"
+
+ # gathered 2 times each connection
+ connection_ids = pool.saved_data.values.map{ |v| v[:connection_id] }
+ assert info[:instances].all?{ |i| connection_ids.count(i) == 2 },
+ "info: #{info}, saved_data: #{pool.saved_data}"
+
+ # fire cleanup
+ pool.save_data_cleanup
+
+ # nothing left
+ assert_equal(0, pool.saved_data.count)
+ end
+
+ def test_auto_cleanup_saved_data
+ # lower ttl to force auto cleanup
+ prev_ttl = force_constant FiberConnectionPool, :SAVED_DATA_TTL_SECS, 0
+
+ # create pool, run fibers and gather info
+ pool, _, _ = run_saved_data
+
+ # only the last run left
+ # that fiber was the one making the cleanup, so it was still alive
+ assert_equal(1, pool.saved_data.count)
+ ensure
+ # restore
+ force_constant FiberConnectionPool, :SAVED_DATA_TTL_SECS, prev_ttl
+ end
+
+ private
+
+ def run_reserved_backups
+ info = { :instances => [] }
+
+ # get pool and fibers
+ pool = FiberConnectionPool.new(:size => 2) { ::EMSynchronyConnection.new(:delay => 0.05) }
+
+ fibers = Array.new(4){ Fiber.new { pool.do_something(info) } }
+
+ # we do not repair it, backup associated with this Fiber stays in the pool
+ failing_fiber = Fiber.new { pool.fail(info) rescue nil }
+
+ # put it among others, not the first or the last
+ # so we see it does not mistake the failing connection
+ fibers.insert 2,failing_fiber
+
+ run_em_reactor fibers
+
+ # we should have visited only 2 instances (no instance added by repairing broken one)
+ info.dup.each{ |k,v| info[k] = v.uniq if v.is_a?(Array) }
+ assert_equal 2, info[:instances].count
+
+ [ pool, info ]
+ end
+
+ def run_saved_data
+ info = { :instances => [] }
+
+ # get pool and fibers
+ pool = FiberConnectionPool.new(:size => 2) { ::EMSynchronyConnection.new(:delay => 0.05) }
+
+ # ask to save some data
+ pool.save_data(:connection_id) { |conn| conn.object_id }
+ pool.save_data(:fiber_id) { |conn| Fiber.current.object_id }
+
+ fibers = Array.new(4) do
+ Fiber.new do
+ pool.do_something(info)
+ assert_equal Fiber.current.object_id, pool.gathered_data[:fiber_id]
+ end
+ end
+
+ run_em_reactor fibers
+
+ # we should have visited 2 instances
+ info.dup.each{ |k,v| info[k] = v.uniq if v.is_a?(Array) }
+ assert_equal 2, info[:instances].count
+
+ [ pool, fibers, info ]
+ end

  end
data/test/helper.rb CHANGED
@@ -1,5 +1,6 @@
  require 'minitest/pride'
  require 'minitest/autorun'
+ require 'em-synchrony'

  require_relative '../lib/fiber_connection_pool'

@@ -8,17 +9,50 @@ class BlockingConnection
  @delay = opts[:delay] || 0.05
  end

- def do_something
+ def do_something(info = {})
+ fill_info info
  sleep @delay
- self.object_id
+ end
+
+ def fill_info(info = {})
+ info[:threads] << Thread.current.object_id if info[:threads]
+ info[:fibers] << Fiber.current.object_id if info[:fibers]
+ info[:instances] << self.object_id if info[:instances]
  end
  end

  class EMSynchronyConnection < BlockingConnection
- def do_something(info)
- info[:threads] << Thread.current.object_id
- info[:fibers] << Fiber.current.object_id
- info[:instances] << self.object_id
+ def do_something(info = {})
+ fill_info info
  EM::Synchrony.sleep @delay
  end
+
+ def fail(info)
+ fill_info info
+ info[:failing_connection] = self
+ raise "Sadly failing here..."
+ end
+ end
+
+ # start an EM reactor and run given fibers
+ # return time spent
+ def run_em_reactor(fibers)
+ a = b = nil
+ EM.synchrony do
+ a = Time.now
+ fibers.each{ |f| f.resume }
+ # wait all fibers to end
+ while fibers.any?{ |f| f.alive? } do
+ EM::Synchrony.sleep 0.01
+ end
+ b = Time.now
+ EM.stop
+ end
+ b-a
+ end
+
+ def force_constant(klass, name, value)
+ previous_value = klass.send(:remove_const, name)
+ klass.const_set name.to_s, value
+ previous_value
  end
metadata CHANGED
@@ -1,7 +1,7 @@
1
1
  --- !ruby/object:Gem::Specification
2
2
  name: fiber_connection_pool
3
3
  version: !ruby/object:Gem::Version
4
- version: 0.1.2
4
+ version: 0.2.0
5
5
  platform: ruby
6
6
  authors:
7
7
  - Ruben Caro
@@ -9,7 +9,7 @@ authors:
9
9
  autorequire:
10
10
  bindir: bin
11
11
  cert_chain: []
12
- date: 2013-08-16 00:00:00.000000000 Z
12
+ date: 2013-08-20 00:00:00.000000000 Z
13
13
  dependencies:
14
14
  - !ruby/object:Gem::Dependency
15
15
  name: minitest
@@ -44,7 +44,7 @@ description: |-
44
44
  non-blocking IO behaviour on the same thread
45
45
  as provided by EventMachine or Celluloid.
46
46
  email:
47
- - ruben@lanuez.org
47
+ - ruben.caro@lanuez.org
48
48
  executables: []
49
49
  extensions: []
50
50
  extra_rdoc_files: []
@@ -63,6 +63,7 @@ files:
63
63
  - examples/reel_server/main.rb
64
64
  - fiber_connection_pool.gemspec
65
65
  - lib/fiber_connection_pool.rb
66
+ - lib/fiber_connection_pool/exceptions.rb
66
67
  - test/fiber_connection_pool_test.rb
67
68
  - test/helper.rb
68
69
  homepage: https://github.com/rubencaro/fiber_connection_pool
@@ -85,9 +86,11 @@ required_rubygems_version: !ruby/object:Gem::Requirement
85
86
  version: '0'
86
87
  requirements: []
87
88
  rubyforge_project:
88
- rubygems_version: 2.0.3
89
+ rubygems_version: 2.0.6
89
90
  signing_key:
90
91
  specification_version: 4
91
92
  summary: Fiber-based generic connection pool for Ruby
92
- test_files: []
93
+ test_files:
94
+ - test/fiber_connection_pool_test.rb
95
+ - test/helper.rb
93
96
  has_rdoc: