zk 1.9.2 → 1.9.3

data/Gemfile CHANGED
@@ -1,4 +1,4 @@
- source :rubygems
+ source "https://rubygems.org"

  # gem 'slyphon-zookeeper', :path => '~/zookeeper'

@@ -1,5 +1,10 @@
  This file notes feature differences and bugfixes contained between releases.

+ ### v1.9.3 ###
+
+ * Fix deadlocks between watchers and reconnecting
+
+
  ### v1.9.2 ###

  * Fix re-watching znodes after a lost session #72 (reported by kalantar)
@@ -26,7 +31,7 @@ This file notes feature differences and bugfixes contained between releases.

  ### v1.7.4 ###

- * Narsty bug in Locker (#54)
+ * Narsty bug in Locker (#54)

  If a locker is waiting on the lock, and a connection interruption occurs (that doesn't render the session invalid), the waiter will attempt to clean up while the connection is invalid, and not succeed in cleaning up its ephemeral. This patch will recognize that the `@lock_path` was already acquired, and just wait on the current owner (ie. it won't create an erroneous *third* lock node). The reproduction code has been added under `spec/informal/two-locks-enter-three-locks-leave.rb`

@@ -51,9 +56,9 @@ The code path in the case of a LockWaitTimeout would skip the lock node cleanup,

  * Added Locker timeout feature for blocking calls. (issue #40)

- Previously, when dealing with locks, there were only two options: blocking or non-blocking. In order to come up with a time-limited lock, you had to poll every so often until you acquired the lock. This is, needless to say, both inefficient and doesn't allow for fair acquisition.
+ Previously, when dealing with locks, there were only two options: blocking or non-blocking. In order to come up with a time-limited lock, you had to poll every so often until you acquired the lock. This is, needless to say, both inefficient and doesn't allow for fair acquisition.

- A timeout option has been added so that when blocking waiting for a lock, you can specify a deadline by which the lock should have been acquired.
+ A timeout option has been added so that when blocking waiting for a lock, you can specify a deadline by which the lock should have been acquired.

  ```ruby
  zk = ZK.new
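The timed blocking acquire described in this hunk can be sketched without ZooKeeper at all: wait on a condition variable with a shrinking deadline instead of polling. A minimal stdlib-only sketch (`TimedLock` and its `wait` argument are illustrative stand-ins, not the zk gem's API):

```ruby
require 'monitor'

# Illustrative stand-in (NOT the zk gem's API): a lock whose blocking
# acquire gives up after a deadline instead of polling in a loop.
class TimedLock
  include MonitorMixin

  def initialize
    super()
    @held = false
    @cond = new_cond # condition variable tied to this monitor
  end

  # Block until the lock is acquired, or until +wait+ seconds elapse.
  # Returns true on acquisition, false on timeout; waits forever if wait is nil.
  def lock(wait = nil)
    deadline = wait && Process.clock_gettime(Process::CLOCK_MONOTONIC) + wait
    synchronize do
      while @held
        if deadline
          remaining = deadline - Process.clock_gettime(Process::CLOCK_MONOTONIC)
          return false if remaining <= 0
          @cond.wait(remaining) # releases the monitor while sleeping
        else
          @cond.wait
        end
      end
      @held = true
    end
    true
  end

  def unlock
    synchronize do
      @held = false
      @cond.signal # wake one waiter
    end
  end
end
```

A second acquire with a short wait returns false instead of blocking forever, and waiters that lose a race are woken by `unlock` rather than by a polling timer, which is the fairness/efficiency point the changelog entry makes.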
@@ -121,7 +126,7 @@ Will go through your locker nodes one by one and try to lock and unlock them. If

  ### v1.5.3 ###

- * Fixed reconnect code. There was an occasional race/deadlock condition caused because the reopen call was done on the underlying connection's dispatch thread. Closing the dispatch thread is part of reopen, so this would cause a deadlock in real-world use. Moved the reconnect logic to a separate, single-purpose thread on ZK::Client::Threaded that watches for connection state changes.
+ * Fixed reconnect code. There was an occasional race/deadlock condition caused because the reopen call was done on the underlying connection's dispatch thread. Closing the dispatch thread is part of reopen, so this would cause a deadlock in real-world use. Moved the reconnect logic to a separate, single-purpose thread on ZK::Client::Threaded that watches for connection state changes.

  * 'private' is not 'protected'. I've been writing ruby for several years now, and apparently I'd forgotten that 'protected' does not work like how it does in java. The visibility of these methods has been corrected, and all specs pass, so I don't expect issues...but please report if this change causes any bugs in user code.

@@ -136,15 +141,15 @@ Will go through your locker nodes one by one and try to lock and unlock them. If

  ### v1.5.1 ###

- * Added a `:retry_duration` option to client constructor which will allows the user to specify for how long in the case of a connection loss, should an operation wait for the connection to be re-established before retrying the operation. This can be set at a global level and overridden on a per-call basis. The default is to not retry (which may change at a later date). Generally speaking, a timeout of > 30s is probably excessive, and care should be taken because during a connection loss, the server-side state may change without you being aware of it (i.e. events will not be delivered).
+ * Added a `:retry_duration` option to client constructor which will allows the user to specify for how long in the case of a connection loss, should an operation wait for the connection to be re-established before retrying the operation. This can be set at a global level and overridden on a per-call basis. The default is to not retry (which may change at a later date). Generally speaking, a timeout of > 30s is probably excessive, and care should be taken because during a connection loss, the server-side state may change without you being aware of it (i.e. events will not be delivered).

  * Small fork-hook implementation fix. Previously we were using WeakRefs so that hooks would not prevent an object from being garbage collected. This has been replaced with a finalizer which is more deterministic.

  ### v1.5.0 ###

- Ok, now seriously this time. I think all of the forking issues are done.
+ Ok, now seriously this time. I think all of the forking issues are done.

- * Implemented a 'stop the world' feature to ensure safety when forking. All threads are stopped, but state is preserved. `fork()` can then be called safely, and after fork returns, all threads will be restarted in the parent, and the connection will be torn down and reopened in the child.
+ * Implemented a 'stop the world' feature to ensure safety when forking. All threads are stopped, but state is preserved. `fork()` can then be called safely, and after fork returns, all threads will be restarted in the parent, and the connection will be torn down and reopened in the child.

  * The easiest, and supported, way of doing this is now to call `ZK.install_fork_hook` after requiring zk. This will install an `alias_method_chain` style hook around the `Kernel.fork` method, which handles pausing all clients in the parent, calling fork, then resuming in the parent and reconnecting in the child. If you're using ZK in resque, I *highly* recommend using this approach, as it will give the most consistent results.

@@ -194,7 +199,7 @@ Phusion Passenger and Unicorn users are encouraged to upgrade!

  You are __STRONGLY ENCOURAGED__ to go and look at the [CHANGELOG](http://git.io/tPbNBw) from the zookeeper 1.0.0 release

- * NOTICE: This release uses the 1.0 release of the zookeeper gem, which has had a MAJOR REFACTORING of its namespaces. Included in that zookeeper release is a compatibility layer that should ease the transition, but any references to Zookeeper\* heirarchy should be changed.
+ * NOTICE: This release uses the 1.0 release of the zookeeper gem, which has had a MAJOR REFACTORING of its namespaces. Included in that zookeeper release is a compatibility layer that should ease the transition, but any references to Zookeeper\* heirarchy should be changed.

  * Refactoring related to the zokeeper gem, use all the new names internally now.

@@ -202,7 +207,7 @@ You are __STRONGLY ENCOURAGED__ to go and look at the [CHANGELOG](http://git.io/

  * Add new Locker features!
    * `LockerBase#assert!` - will raise an exception if the lock is not held. This check is not only for local in-memory "are we locked?" state, but will check the connection state and re-run the algorithmic tests that determine if a given Locker implementation actually has the lock.
-   * `LockerBase#acquirable?` - an advisory method that checks if any condition would prevent the receiver from acquiring the lock.
+   * `LockerBase#acquirable?` - an advisory method that checks if any condition would prevent the receiver from acquiring the lock.

  * Deprecation of the `lock!` and `unlock!` methods. These may change to be exception-raising in a future relase, so document and refactor that `lock` and `unlock` are the way to go.

@@ -216,7 +221,7 @@ You are __STRONGLY ENCOURAGED__ to go and look at the [CHANGELOG](http://git.io/

  * Fixes for Locker tests so that we can run specs against all supported ruby implementations on travis (relies on in-process zookeeper server in the zk-server-1.0.1 gem)

- * Support for 1.8.7 will be continued
+ * Support for 1.8.7 will be continued

  ## v1.1.0 ##

@@ -235,7 +240,7 @@ You are __STRONGLY ENCOURAGED__ to go and look at the [CHANGELOG](http://git.io/

  * add zk.register(:all) to recevie node updates for all nodes (i.e. not filtered on path)

- * add 'interest' feature to zk.register, now you can indicate what kind of events should be delivered to the given block (previously you had to do that filtering inside the block). The default behavior is still the same, if no 'interest' is given, then all event types for the given path will be delivered to that block.
+ * add 'interest' feature to zk.register, now you can indicate what kind of events should be delivered to the given block (previously you had to do that filtering inside the block). The default behavior is still the same, if no 'interest' is given, then all event types for the given path will be delivered to that block.

      zk.register('/path', :created) do |event|
        # event.node_created? will always be true
@@ -257,8 +262,8 @@ You are __STRONGLY ENCOURAGED__ to go and look at the [CHANGELOG](http://git.io/

  * fix for shutdown: close! called from threadpool will do the right thing

- * Chroot users rejoice! By default, ZK.new will create a chrooted path for you.
-
+ * Chroot users rejoice! By default, ZK.new will create a chrooted path for you.
+
      ZK.new('localhost:2181/path', :chroot => :create) # the default, create the path before returning connection

      ZK.new('localhost:2181/path', :chroot => :check) # make sure the chroot exists, raise if not
@@ -266,7 +271,7 @@ You are __STRONGLY ENCOURAGED__ to go and look at the [CHANGELOG](http://git.io/
      ZK.new('localhost:2181/path', :chroot => :do_nothing) # old default behavior

      # and, just for kicks
-
+
      ZK.new('localhost:2181', :chroot => '/path') # equivalent to 'localhost:2181/path', :chroot => :create

  * Most of the event functionality used is now in a ZK::Event module. This is still mixed into the underlying slyphon-zookeeper class, but now all of the important and relevant methods are documented, and Event appears as a first-class citizen.
@@ -278,16 +283,16 @@ You are __STRONGLY ENCOURAGED__ to go and look at the [CHANGELOG](http://git.io/
  The "Don't forget to update the RELEASES file before pushing a new release" release

  * Fix a fairly bad bug in event de-duplication (diff: http://is.gd/a1iKNc)
-
+
  This is fairly edge-case-y but could bite someone. If you'd set a watch
  when doing a get that failed because the node didn't exist, any subsequent
  attempts to set a watch would fail silently, because the client thought that the
  watch had already been set.
-
+
  We now wrap the operation in the setup_watcher! method, which rolls back the
  record-keeping of what watches have already been set for what nodes if an
  exception is raised.
-
+
  This change has the side-effect that certain operations (get,stat,exists?,children)
  will block event delivery until completion, because they need to have a consistent
  idea about what events are pending, and which have been delivered. This also means
@@ -303,7 +308,7 @@ The "Don't forget to update the RELEASES file before pushing a new release" rele
  * Fixed issue 9, where using a Locker in the main thread would never awaken if the connection was dropped or interrupted. Now a `ZK::Exceptions::InterruptedSession` exception (or mixee) will be thrown to alert the caller that something bad happened.
  * `ZK::Find.find` now returns the results in sorted order.
  * Added documentation explaining the Pool class, reasons for using it, reasons why you shouldn't (added complexities around watchers and events).
- * Began work on an experimental Multiplexed client, that would allow multithreaded clients to more effectively share a single connection by making all requests asynchronous behind the scenes, and using a queue to provide a synchronous (blocking) API.
+ * Began work on an experimental Multiplexed client, that would allow multithreaded clients to more effectively share a single connection by making all requests asynchronous behind the scenes, and using a queue to provide a synchronous (blocking) API.


  # vim:ft=markdown:sts=2:sw=2:et
@@ -34,6 +34,7 @@ module ZK
        EventHandlerSubscription.class_for_thread_option(@thread_opt) # this is side-effecty, will raise an ArgumentError if given a bad value.

        @mutex = nil
+       @setup_watcher_mutex = nil

        @callbacks = Hash.new { |h,k| h[k] = [] }

@@ -53,6 +54,8 @@ module ZK
      def reopen_after_fork!
        # logger.debug { "#{self.class}##{__method__}" }
        @mutex = Monitor.new
+       @setup_watcher_mutex = Monitor.new
+
        # XXX: need to test this w/ actor-style callbacks

        @state = :running
@@ -280,11 +283,16 @@ module ZK
      def setup_watcher!(watch_type, opts)
        return yield unless opts.delete(:watch)

-       synchronize do
-         set = @outstanding_watches.fetch(watch_type)
+       @setup_watcher_mutex.synchronize do
          path = opts[:path]
+         added, set = nil, nil

-         if set.add?(path)
+         synchronize do
+           set = @outstanding_watches.fetch(watch_type)
+           added = set.add?(path)
+         end
+
+         if added
            logger.debug { "adding watcher #{watch_type.inspect} for #{path.inspect}"}

            # if we added the path to the set, blocking further registration of
@@ -295,7 +303,9 @@ module ZK
            yield opts
          rescue Exception
-           set.delete(path)
+           synchronize do
+             set.delete(path)
+           end
            raise
          end
        else
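The restructuring above is the v1.9.3 deadlock fix: the shared-state mutex is held only for the quick `Set` bookkeeping, while a dedicated mutex serializes the potentially blocking registration call, so event-dispatch threads that need the bookkeeping mutex are never stuck behind a blocked watch setup. A self-contained sketch of that pattern (`WatchRegistry`, the return symbols, and the block standing in for the server call are all illustrative, not the zk gem's API; only the two-mutex split mirrors the diff):

```ruby
require 'monitor'
require 'set'

# Sketch of the setup_watcher! locking pattern: @mutex guards only the
# fast Set bookkeeping; @setup_watcher_mutex serializes the slow,
# possibly blocking registration, so @mutex is never held across it.
class WatchRegistry
  def initialize
    @mutex = Monitor.new               # guards @outstanding_watches only
    @setup_watcher_mutex = Monitor.new # serializes registration only
    @outstanding_watches = { data: Set.new }
  end

  # +blk+ stands in for the real (possibly blocking) server call.
  def setup_watcher!(watch_type, path, &blk)
    @setup_watcher_mutex.synchronize do
      added = @mutex.synchronize do    # short critical section
        @outstanding_watches.fetch(watch_type).add?(path)
      end

      return :already_watched unless added

      begin
        blk.call                       # blocking work, @mutex not held
        :watch_set
      rescue Exception
        # roll back the bookkeeping so a later attempt can retry
        @mutex.synchronize { @outstanding_watches.fetch(watch_type).delete(path) }
        raise
      end
    end
  end
end
```

Note the rollback on exception is itself wrapped in the short mutex, matching the second hunk of the diff: every touch of the shared set is brief, and only the registration call sits under the long-held, dedicated mutex.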
@@ -1,3 +1,3 @@
  module ZK
-   VERSION = "1.9.2"
+   VERSION = "1.9.3"
  end
metadata CHANGED
@@ -1,7 +1,7 @@
  --- !ruby/object:Gem::Specification
  name: zk
  version: !ruby/object:Gem::Version
-   version: 1.9.2
+   version: 1.9.3
  prerelease:
  platform: ruby
  authors:
@@ -10,7 +10,7 @@ authors:
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2013-09-24 00:00:00.000000000 Z
+ date: 2014-01-16 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
    name: zookeeper
@@ -165,18 +165,12 @@ required_ruby_version: !ruby/object:Gem::Requirement
    - - ! '>='
      - !ruby/object:Gem::Version
        version: '0'
-     segments:
-     - 0
-     hash: 2898097721842831525
  required_rubygems_version: !ruby/object:Gem::Requirement
    none: false
    requirements:
    - - ! '>='
      - !ruby/object:Gem::Version
        version: '0'
-     segments:
-     - 0
-     hash: 2898097721842831525
  requirements: []
  rubyforge_project:
  rubygems_version: 1.8.25
@@ -228,4 +222,3 @@ test_files:
  - spec/zk/threadpool_spec.rb
  - spec/zk/watch_spec.rb
  - spec/zk/zookeeper_spec.rb
- has_rdoc: