nogara-redis_failover 0.9.1 → 0.9.7

data/.gitignore CHANGED
@@ -16,4 +16,4 @@ test/tmp
  test/version_tmp
  tmp
  tags
-
+ .DS_Store
data/Changes.md CHANGED
@@ -1,3 +1,35 @@
+ 0.9.7
+ -----------
+ - Stubbed Client#client to return itself, fixes a fork reconnect bug with Resque (dbalatero)
+
+ 0.9.6
+ -----------
+ - Handle the node discovery error condition where the znode points to a master that is now a slave.
+
+ 0.9.5
+ -----------
+ - Introduce a safer master node discovery process for the Node Manager (#34)
+ - Improved shutdown process for Node Manager
+
+ 0.9.4
+ -----------
+ - Preserve original master by reading from existing znode state.
+ - Prevent Timeout::Error from bringing down the process (#32) (@eric)
+
+ 0.9.3
+ -----------
+ - Add lock assert for Node Manager.
+
+ 0.9.2
+ -----------
+ - Improved exception handling in NodeWatcher.
+
+ 0.9.1
+ -----------
+ - Improve nested exception handling.
+ - Fix manual failover support when znode does not exist first.
+ - Various fixes to work better with 1.8.7.
+
  0.9.0
  -----------
  - Make Node Manager's lock path vary with its main znode. (Bira)
data/README.md CHANGED
@@ -155,7 +155,7 @@ redis_failover uses YARD for its API documentation. Refer to the generated [API

  ## Requirements

- - redis_failover is actively tested against MRI 1.9.2/1.9.3 and JRuby 1.6.7 (1.9 mode only). Other rubies may work, although I don't actively test against them.
+ - redis_failover is actively tested against MRI 1.8.7/1.9.2/1.9.3 and JRuby 1.6.7 (1.9 mode only). Other rubies may work, although I don't actively test against them.
  - redis_failover requires a ZooKeeper service cluster to ensure reliability and data consistency. ZooKeeper is very simple and easy to get up and running. Please refer to this [Quick ZooKeeper Guide](https://github.com/ryanlecompte/redis_failover/wiki/Quick-ZooKeeper-Guide) to get up and running quickly if you don't already have ZooKeeper as a part of your environment.

  ## Considerations
@@ -175,6 +175,8 @@ redis_failover uses YARD for its API documentation. Refer to the generated [API
  - To learn more about ZooKeeper, see the official [ZooKeeper](http://zookeeper.apache.org/) site.
  - See the [Quick ZooKeeper Guide](https://github.com/ryanlecompte/redis_failover/wiki/Quick-ZooKeeper-Guide) for a quick guide to getting ZooKeeper up and running with redis_failover.
  - To learn more about how ZooKeeper handles network partitions, see [ZooKeeper Failure Scenarios](http://wiki.apache.org/hadoop/ZooKeeper/FailureScenarios)
+ - Slides for a [lightning talk](http://www.slideshare.net/ryanlecompte/handling-redis-failover-with-zookeeper) that I gave at BaRuCo 2012.
+ - Feel free to join #zk-gem on the IRC freenode network. We're usually hanging out there talking about ZooKeeper and redis_failover.


  ## License
@@ -64,10 +64,18 @@ module RedisFailover
    build_clients
  end

-
- # Resque wants to use Resque.redis.reconnect to recreate all connections. So
- # we provide ourselves as a client to receive the "reconnect" message
- def client
+ # Stubs this method to return this RedisFailover::Client object.
+ #
+ # Some libraries (Resque) assume they can access the `client` via this method,
+ # but we don't want to actually ever expose the internal Redis connections.
+ #
+ # By returning `self` here, we can add stubs for functionality like #reconnect,
+ # and everything will Just Work.
+ #
+ # Takes an *args array for safety only.
+ #
+ # @return [RedisFailover::Client]
+ def client(*args)
    self
  end

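The comment in this hunk explains the delegate trick; the following standalone sketch (the class name `FailoverClient` is hypothetical, standing in for `RedisFailover::Client`) shows why returning `self` satisfies callers such as Resque that expect a `#client` accessor exposing `#reconnect`:

```ruby
# Minimal sketch of the "return self from #client" delegate trick.
# FailoverClient is illustrative only; the gem's real class is
# RedisFailover::Client.
class FailoverClient
  # Callers like Resque reach for `redis.client` and then call
  # #reconnect on it. Returning self keeps the internal Redis
  # connections hidden while still satisfying that contract.
  def client(*args)
    self
  end

  # Stand-in reconnect; the real client rebuilds its connections here.
  def reconnect
    :reconnected
  end
end

fc = FailoverClient.new
fc.client.equal?(fc)   # => true, #client returns the object itself
fc.client.reconnect    # => :reconnected
```

After a fork, a caller can safely chain `fc.client.reconnect` without ever touching the underlying Redis connections.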
@@ -190,6 +198,8 @@ module RedisFailover
    else
      logger.error("Unknown ZK node event: #{event.inspect}")
    end
+ ensure
+ @zk.stat(@znode, :watch => true)
  end

  # Determines if a method is a known redis operation.
@@ -250,7 +260,7 @@ module RedisFailover
  # @raise [NoMasterError] if no master fallback is available
  def slave
  # pick a slave, if none available fallback to master
- if slave = @lock.synchronize { @slaves.sample }
+ if slave = @lock.synchronize { @slaves.shuffle.first }
  verify_role!(slave, :slave)
  return slave
  end
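The switch from `Array#sample` to `shuffle.first` keeps random slave selection working on Ruby 1.8.7, where `Array#sample` is not available. A minimal illustration:

```ruby
# Random element pick that works on both 1.8.7 and 1.9.x:
# Array#sample does not exist on 1.8.7, but shuffle.first is
# equivalent for a single random pick.
slaves = [:slave_a, :slave_b, :slave_c]
pick = slaves.shuffle.first
# pick is one of the three slaves, chosen at random
```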
@@ -266,7 +276,7 @@ module RedisFailover
  return unless nodes_changed?(nodes)

  purge_clients
- logger.info("Building new clients for nodes #{nodes}")
+ logger.info("Building new clients for nodes #{nodes.inspect}")
  new_master = new_clients_for(nodes[:master]).first if nodes[:master]
  new_slaves = new_clients_for(*nodes[:slaves])
  @master = new_master
@@ -300,7 +310,7 @@ module RedisFailover
  def fetch_nodes
  data = @zk.get(@znode, :watch => true).first
  nodes = symbolize_keys(decode(data))
- logger.debug("Fetched nodes: #{nodes}")
+ logger.debug("Fetched nodes: #{nodes.inspect}")

  nodes
  rescue Zookeeper::Exceptions::InheritedConnectionError => ex
@@ -25,6 +25,13 @@ module RedisFailover
  class NoMasterError < Error
  end

+ # Raised when more than one master is found on startup.
+ class MultipleMastersError < Error
+ def initialize(nodes)
+ super("Multiple nodes with master role: #{nodes.map(&:to_s)}")
+ end
+ end
+
  # Raised when no slave is currently available.
  class NoSlaveError < Error
  end
@@ -118,7 +118,6 @@ module RedisFailover
  end
  alias_method :eql?, :==

-
  # @return [Integer] a hash value for this node
  def hash
  to_s.hash
@@ -175,14 +174,14 @@ module RedisFailover
  redis = new_client
  yield redis
  end
- rescue
- raise NodeUnavailableError, self, caller
+ rescue Exception => ex
+ raise NodeUnavailableError, "#{ex.class}: #{ex.message}", ex.backtrace
  ensure
  if redis
  begin
  redis.client.disconnect
- rescue
- raise NodeUnavailableError, self, caller
+ rescue Exception => ex
+ raise NodeUnavailableError, "#{ex.class}: #{ex.message}", ex.backtrace
  end
  end
  end
@@ -10,7 +10,22 @@ module RedisFailover
  include Util

  # Number of seconds to wait before retrying bootstrap process.
- TIMEOUT = 5
+ TIMEOUT = 3
+
+ # ZK Errors that the Node Manager cares about.
+ ZK_ERRORS = [
+ ZK::Exceptions::LockAssertionFailedError,
+ ZK::Exceptions::InterruptedSession,
+ ZKDisconnectedError
+ ].freeze
+
+ # Errors that can happen during the node discovery process.
+ NODE_DISCOVERY_ERRORS = [
+ InvalidNodeRoleError,
+ NodeUnavailableError,
+ NoMasterError,
+ MultipleMastersError
+ ].freeze

  # Creates a new instance.
  #
@@ -26,13 +41,11 @@ module RedisFailover
  @znode = @options[:znode_path] || Util::DEFAULT_ZNODE_PATH
  @manual_znode = ManualFailover::ZNODE_PATH
  @mutex = Mutex.new
-
- # Name for the znode that handles exclusive locking between multiple
- # Node Manager processes. Whoever holds the lock will be considered
- # the "master" Node Manager, and will be responsible for monitoring
- # the redis nodes. When a Node Manager that holds the lock disappears
- # or fails, another Node Manager process will grab the lock and
- # become the
+ @shutdown = false
+ @leader = false
+ @master = nil
+ @slaves = []
+ @unavailable = []
  @lock_path = "#{@znode}_lock".freeze
  end

@@ -40,23 +53,22 @@ module RedisFailover
  #
  # @note This method does not return until the manager terminates.
  def start
+ return unless running?
  @queue = Queue.new
- @leader = false
  setup_zk
  logger.info('Waiting to become master Node Manager ...')
- @zk.with_lock(@lock_path) do
+ with_lock do
  @leader = true
  logger.info('Acquired master Node Manager lock')
- discover_nodes
- initialize_path
- spawn_watchers
- handle_state_reports
+ if discover_nodes
+ initialize_path
+ spawn_watchers
+ handle_state_reports
+ end
  end
- rescue ZK::Exceptions::InterruptedSession, ZKDisconnectedError => ex
+ rescue *ZK_ERRORS => ex
  logger.error("ZK error while attempting to manage nodes: #{ex.inspect}")
- logger.error(ex.backtrace.join("\n"))
- shutdown
- sleep(TIMEOUT)
+ reset
  retry
  end

@@ -69,12 +81,21 @@ module RedisFailover
  @queue << [node, state]
  end

- # Performs a graceful shutdown of the manager.
- def shutdown
- @queue.clear
- @queue << nil
+ # Performs a reset of the manager.
+ def reset
+ @leader = false
  @watchers.each(&:shutdown) if @watchers
+ @queue.clear
  @zk.close! if @zk
+ @zk_lock = nil
+ end
+
+ # Initiates a graceful shutdown.
+ def shutdown
+ logger.info('Shutting down ...')
+ @mutex.synchronize do
+ @shutdown = true
+ end
  end

  private
@@ -86,10 +107,8 @@ module RedisFailover
  @zk.on_expired_session { notify_state(:zk_disconnected, nil) }

  @zk.register(@manual_znode) do |event|
- @mutex.synchronize do
- if event.node_changed?
- schedule_manual_failover
- end
+ if event.node_created? || event.node_changed?
+ perform_manual_failover
  end
  end

@@ -99,21 +118,24 @@ module RedisFailover

  # Handles periodic state reports from {RedisFailover::NodeWatcher} instances.
  def handle_state_reports
- while state_report = @queue.pop
+ while running? && (state_report = @queue.pop)
  begin
- node, state = state_report
- case state
- when :unavailable then handle_unavailable(node)
- when :available then handle_available(node)
- when :syncing then handle_syncing(node)
- when :manual_failover then handle_manual_failover(node)
- when :zk_disconnected then raise ZKDisconnectedError
- else raise InvalidNodeStateError.new(node, state)
+ @mutex.synchronize do
+ return unless running?
+ @zk_lock.assert!
+ node, state = state_report
+ case state
+ when :unavailable then handle_unavailable(node)
+ when :available then handle_available(node)
+ when :syncing then handle_syncing(node)
+ when :zk_disconnected then raise ZKDisconnectedError
+ else raise InvalidNodeStateError.new(node, state)
+ end
+
+ # flush current state
+ write_state
  end
-
- # flush current state
- write_state
- rescue ZK::Exceptions::InterruptedSession, ZKDisconnectedError
+ rescue *ZK_ERRORS
  # fail hard if this is a ZK connection-related error
  raise
  rescue => ex
@@ -148,7 +170,7 @@ module RedisFailover
  reconcile(node)

  # no-op if we already know about this node
- return if @master == node || @slaves.include?(node)
+ return if @master == node || (@master && @slaves.include?(node))
  logger.info("Handling available node: #{node}")

  if @master
@@ -188,7 +210,7 @@ module RedisFailover
  logger.info("Handling manual failover")

  # make current master a slave, and promote new master
- @slaves << @master
+ @slaves << @master if @master
  @slaves.delete(node)
  promote_new_master(node)
  end
@@ -218,21 +240,69 @@ module RedisFailover
  end

  # Discovers the current master and slave nodes.
+ # @return [Boolean] true if nodes successfully discovered, false otherwise
  def discover_nodes
- @unavailable = []
- nodes = @options[:nodes].map { |opts| Node.new(opts) }.uniq
- @master = find_master(nodes)
- @slaves = nodes - [@master]
- logger.info("Managing master (#{@master}) and slaves" +
- " (#{@slaves.map(&:to_s).join(', ')})")
-
- # ensure that slaves are correctly pointing to this master
- redirect_slaves_to(@master) if @master
+ @mutex.synchronize do
+ return false unless running?
+ nodes = @options[:nodes].map { |opts| Node.new(opts) }.uniq
+ if @master = find_existing_master
+ logger.info("Using master #{@master} from existing znode config.")
+ elsif @master = guess_master(nodes)
+ logger.info("Guessed master #{@master} from known redis nodes.")
+ end
+ @slaves = nodes - [@master]
+ logger.info("Managing master (#{@master}) and slaves " +
+ "(#{@slaves.map(&:to_s).join(', ')})")
+ # ensure that slaves are correctly pointing to this master
+ redirect_slaves_to(@master)
+ true
+ end
+ rescue *NODE_DISCOVERY_ERRORS => ex
+ msg = <<-MSG.gsub(/\s+/, ' ')
+ Failed to discover master node: #{ex.inspect}
+ In order to ensure a safe startup, redis_failover requires that all redis
+ nodes be accessible, and only a single node indicating that it's the master.
+ In order to fix this, you can perform a manual failover via redis_failover,
+ or manually fix the individual redis servers. This discovery process will
+ retry in #{TIMEOUT}s.
+ MSG
+ logger.warn(msg)
+ sleep(TIMEOUT)
+ retry
+ end
+
+ # Seeds the initial node master from an existing znode config.
+ def find_existing_master
+ if data = @zk.get(@znode).first
+ nodes = symbolize_keys(decode(data))
+ master = node_from(nodes[:master])
+ logger.info("Master from existing znode config: #{master || 'none'}")
+ # Check for case where a node previously thought to be the master was
+ # somehow manually reconfigured to be a slave outside of the node manager's
+ # control.
+ if master && master.slave?
+ raise InvalidNodeRoleError.new(master, :master, :slave)
+ end
+ master
+ end
+ rescue ZK::Exceptions::NoNode
+ # blank slate, no last known master
+ nil
+ end
+
+ # Creates a Node instance from a string.
+ #
+ # @param [String] node_string a string representation of a node (e.g., host:port)
+ # @return [Node] the Node representation
+ def node_from(node_string)
+ return if node_string.nil?
+ host, port = node_string.split(':', 2)
+ Node.new(:host => host, :port => port, :password => @options[:password])
  end

  # Spawns the {RedisFailover::NodeWatcher} instances for each managed node.
  def spawn_watchers
- @watchers = [@master, @slaves, @unavailable].flatten.map do |node|
+ @watchers = [@master, @slaves, @unavailable].flatten.compact.map do |node|
  NodeWatcher.new(self, node, @options[:max_failures] || 3)
  end
  @watchers.each(&:watch)
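The new `#node_from` helper in this hunk parses `host:port` strings into node objects. A standalone sketch of that parsing, using `OpenStruct` in place of the gem's `Node` class:

```ruby
# Sketch of the host:port parsing done by #node_from.
# OpenStruct is a stand-in for the gem's Node class.
require 'ostruct'

def node_from(node_string, password = nil)
  return if node_string.nil?
  # limit of 2 keeps any further colons out; port may be nil
  host, port = node_string.split(':', 2)
  OpenStruct.new(:host => host, :port => port, :password => password)
end

node = node_from('redis1.example.com:6380')
node.host       # => "redis1.example.com"
node.port       # => "6380"
node_from(nil)  # => nil, no last known master
```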
@@ -242,14 +312,11 @@ module RedisFailover
  #
  # @param [Array<Node>] nodes the nodes to search
  # @return [Node] the found master node, nil if not found
- def find_master(nodes)
- nodes.find do |node|
- begin
- node.master?
- rescue NodeUnavailableError
- false
- end
- end
+ def guess_master(nodes)
+ master_nodes = nodes.select { |node| node.master? }
+ raise NoMasterError if master_nodes.empty?
+ raise MultipleMastersError.new(master_nodes) if master_nodes.size > 1
+ master_nodes.first
  end

  # Redirects all slaves to the specified node.
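The new `#guess_master` insists on exactly one node claiming the master role. A self-contained sketch of that check, where plain hashes stand in for `Node` objects and the error classes are local stand-ins for the gem's:

```ruby
# Sketch of the exactly-one-master rule in #guess_master.
# These error classes mirror (but are not) the gem's
# NoMasterError / MultipleMastersError.
class NoMasterError < StandardError; end
class MultipleMastersError < StandardError; end

def guess_master(nodes)
  masters = nodes.select { |node| node[:role] == :master }
  raise NoMasterError if masters.empty?
  raise MultipleMastersError if masters.size > 1
  masters.first
end

nodes = [
  { :host => 'redis-a', :role => :master },
  { :host => 'redis-b', :role => :slave }
]
guess_master(nodes)[:host]  # => "redis-a"
```

With zero or two-plus masters the call raises instead of silently picking one, which is what lets `discover_nodes` refuse an unsafe startup and retry.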
@@ -336,19 +403,47 @@ module RedisFailover
  @zk.set(@znode, encode(current_nodes))
  end

- # Schedules a manual failover to a redis node.
- def schedule_manual_failover
- return unless @leader
- new_master = @zk.get(@manual_znode, :watch => true).first
- logger.info("Received manual failover request for: #{new_master}")
+ # Executes a block wrapped in a ZK exclusive lock.
+ def with_lock
+ @zk_lock = @zk.locker(@lock_path)
+ while running? && !@zk_lock.lock
+ sleep(TIMEOUT)
+ end

- node = if new_master == ManualFailover::ANY_SLAVE
- @slaves.sample
- else
- host, port = new_master.split(':', 2)
- Node.new(:host => host, :port => port, :password => @options[:password])
+ if running?
+ yield
  end
- notify_state(node, :manual_failover) if node
+ ensure
+ @zk_lock.unlock! if @zk_lock
+ end
+
+ # Perform a manual failover to a redis node.
+ def perform_manual_failover
+ @mutex.synchronize do
+ return unless running? && @leader && @zk_lock
+ @zk_lock.assert!
+ new_master = @zk.get(@manual_znode, :watch => true).first
+ return unless new_master && new_master.size > 0
+ logger.info("Received manual failover request for: #{new_master}")
+ logger.info("Current nodes: #{current_nodes.inspect}")
+ node = new_master == ManualFailover::ANY_SLAVE ?
+ @slaves.shuffle.first : node_from(new_master)
+ if node
+ handle_manual_failover(node)
+ else
+ logger.error('Failed to perform manual failover, no candidate found.')
+ end
+ end
+ rescue => ex
+ logger.error("Error handling a manual failover: #{ex.inspect}")
+ logger.error(ex.backtrace.join("\n"))
+ ensure
+ @zk.stat(@manual_znode, :watch => true)
+ end
+
+ # @return [Boolean] true if running, false otherwise
+ def running?
+ !@shutdown
  end
  end
  end
@@ -35,8 +35,8 @@ module RedisFailover
  @done = true
  @node.wakeup
  @monitor_thread.join if @monitor_thread
- rescue
- # best effort
+ rescue => ex
+ logger.warn("Failed to gracefully shutdown watcher for #{@node}")
  end

  private
@@ -59,12 +59,16 @@ module RedisFailover
  notify(:available)
  @node.wait
  end
- rescue NodeUnavailableError
+ rescue NodeUnavailableError => ex
+ logger.debug("Failed to communicate with node #{@node}: #{ex.inspect}")
  failures += 1
  if failures >= @max_failures
  notify(:unavailable)
  failures = 0
  end
+ rescue Exception => ex
+ logger.error("Unexpected error while monitoring node #{@node}: #{ex.inspect}")
+ logger.error(ex.backtrace.join("\n"))
  end
  end
  end
@@ -8,22 +8,20 @@ module RedisFailover
  # Node Manager is gracefully stopped
  def self.run(options)
  options = CLI.parse(options)
- @node_manager = NodeManager.new(options)
- trap_signals
- @node_manager_thread = Thread.new { @node_manager.start }
- @node_manager_thread.join
+ node_manager = NodeManager.new(options)
+ trap_signals(node_manager)
+ node_manager.start
  end

  # Traps shutdown signals.
- def self.trap_signals
+ # @param [NodeManager] node_manager the node manager
+ def self.trap_signals(node_manager)
  [:INT, :TERM].each do |signal|
  trap(signal) do
- Util.logger.info('Shutting down ...')
- @node_manager.shutdown
- @node_manager_thread.join
- exit(0)
+ node_manager.shutdown
  end
  end
  end
+ private_class_method :trap_signals
  end
  end
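The reworked runner traps INT/TERM and merely flips a shutdown flag that the main loop polls via `running?`. A minimal sketch of that pattern (the `Manager` class here is illustrative, not the gem's `NodeManager`):

```ruby
# Signal-to-flag shutdown pattern: the trap handler only sets a flag;
# the work loop notices it via running? and exits on its own terms.
class Manager
  def initialize
    @shutdown = false
  end

  # Called from the signal handler; does no real work itself.
  def shutdown
    @shutdown = true
  end

  def running?
    !@shutdown
  end
end

manager = Manager.new
[:INT, :TERM].each do |signal|
  trap(signal) { manager.shutdown }
end
manager.running?  # => true until shutdown is requested
```

Keeping the handler to a single flag write avoids doing blocking work (joins, ZK calls) inside a signal trap.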
@@ -1,3 +1,3 @@
  module RedisFailover
- VERSION = '0.9.1'
+ VERSION = '0.9.7'
  end
@@ -38,6 +38,12 @@ module RedisFailover
  end
  end

+ describe '#client' do
+ it 'should return itself as a delegate' do
+ client.client.should == client
+ end
+ end
+
  describe '#dispatch' do
  it 'routes write operations to master' do
  called = false
@@ -108,5 +108,29 @@ module RedisFailover
  end
  end
  end
+
+ describe '#guess_master' do
+ let(:node1) { Node.new(:host => 'node1').extend(RedisStubSupport) }
+ let(:node2) { Node.new(:host => 'node2').extend(RedisStubSupport) }
+ let(:node3) { Node.new(:host => 'node3').extend(RedisStubSupport) }
+
+ it 'raises error when no master is found' do
+ node1.make_slave!(node3)
+ node2.make_slave!(node3)
+ expect { manager.guess_master([node1, node2]) }.to raise_error(NoMasterError)
+ end
+
+ it 'raises error when multiple masters found' do
+ node1.make_master!
+ node2.make_master!
+ expect { manager.guess_master([node1, node2]) }.to raise_error(MultipleMastersError)
+ end
+
+ it 'raises error when a node can not be reached' do
+ node1.make_master!
+ node2.redis.make_unavailable!
+ expect { manager.guess_master([node1, node2]) }.to raise_error(NodeUnavailableError)
+ end
+ end
  end
  end
@@ -1,11 +1,12 @@
  module RedisFailover
  class NodeManagerStub < NodeManager
  attr_accessor :master
- public :current_nodes
+ # HACK - this will go away once we refactor the tests to use a real ZK/Redis server.
+ public :current_nodes, :guess_master

  def discover_nodes
  # only discover nodes once in testing
- return if @nodes_discovered
+ return true if @nodes_discovered

  master = Node.new(:host => 'master')
  slave = Node.new(:host => 'slave')
metadata CHANGED
@@ -1,7 +1,7 @@
  --- !ruby/object:Gem::Specification
  name: nogara-redis_failover
  version: !ruby/object:Gem::Version
- version: 0.9.1
+ version: 0.9.7
  prerelease:
  platform: ruby
  authors:
@@ -9,7 +9,7 @@ authors:
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2012-09-21 00:00:00.000000000 Z
+ date: 2012-09-24 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
  name: redis