redis_ha 0.1.0 → 0.1.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
data/README.md CHANGED
@@ -1,17 +1,25 @@
 RedisHA
 =======
 
-A redis client that runs commands on multiple servers in parallel
-without blocking if one of them is down.
+RedisHA includes:
+
++ A redis client that runs commands on multiple servers in parallel
+and handles failure gracefully
+
++ A few highly available data structures / CRDTs (counter, set, hashmap)
+
+
+### Rationale
 
 I used this to implement a highly available session store on top of
-redis; it writes and reads the data to multiple instances and merges
-the responses after every read. This approach is negligibly slower
-than writing to a single server since RedisHA uses asynchronous I/O
-and is much more robust than complex server-side redis failover solutions
-(sentinel, pacemaker, etcetera).
+redis; it writes and reads to multiple servers and merges the responses
+after every read.
 
-The gem includes three basic CRDTs (set, hashmap and counter).
+This is negligibly slower than writing to a single server since RedisHA
+uses asynchronous I/O, but it is more resilient than a complex server-side
+redis failover solution (sentinel, pacemaker, etcetera): you can `kill -9`
+any server at any time and continue to read and write as long as at least
+one server is healthy.
 
 [1] _DeCandia, Hastorun et al_ (2007). [Dynamo: Amazon’s Highly Available Key-value Store](http://www.read.seas.harvard.edu/~kohler/class/cs239-w08/decandia07dynamo.pd) (SOSP 2007)
 
@@ -19,7 +27,7 @@ The gem includes three basic CRDTs (set, hashmap and counter).
 Usage
 -----
 
-Create a RedisHA::ConnectionPool (connect does not block):
+Create a RedisHA::ConnectionPool (`connect` does not block):
 
 ```ruby
 pool = RedisHA::ConnectionPool.new
@@ -37,9 +45,20 @@ Execute a command in parallel:
 => ["PONG", "PONG", "PONG"]
 
 >> pool.setnx "fnord", 1
-=> [1,1,1]
+=> [1, 1, 1]
 ```
 
+Execute a command in parallel when server #2 is down:
+
+```ruby
+>> pool.ping
+=> ["PONG", nil, "PONG"]
+
+>> pool.setnx "fnord", 1
+=> [1, nil, 1]
+```
+
+
 RedisHA::Counter (INCR/DECR/SET/GET)
 
 ```ruby
@@ -82,28 +101,60 @@ RedisHA::Set (ADD/REM/GET)
 => [:fnord]
 ```
 
+
+
+Installation
+------------
+
+    gem install redis_ha
+
+or in your Gemfile:
+
+    gem 'redis_ha', '>= 0.1'
+
+
 Timeouts
 --------
 
-here be dragons
+RedisHA implements two timeouts per connection: a `read_timeout` and a `retry_timeout`.
 
+When a server takes longer than `read_timeout` seconds to respond to a request, it is
+considered down. Once a server is down, it is excluded from subsequent requests for the
+given `retry_timeout`.
 
-Caveats
---------
+That means if one server is down, one request will take at least `read_timeout` seconds
+to complete every `retry_timeout` seconds.
 
--> delete / decrement is not safe
+The defaults are 500ms for the read and 10s for the retry timeout. If you are only using
+fast redis operations, you should set the `read_timeout` to 100ms or lower.
 
+```ruby
+pool = RedisHA::ConnectionPool.new
+pool.retry_timeout = 10
+pool.read_timeout = 0.1
+```
 
 
+Merge Strategies
+----------------
 
-Installation
-------------
+The default merge strategy for `RedisHA::Set` favors additions over deletions (a deleted
+element might re-appear in a set if a server goes down and comes back up with an
+old / inconsistent state, but an element can never be lost from a set as long as at least
+one server is healthy).
 
-    gem install redis_ha
+The default merge strategy for `RedisHA::Counter` favors increments over decrements (a
+counter's value might be greater than the real value in some conditions, but it can never
+be less than the real value).
 
-or in your Gemfile:
+You can define your own merge strategy:
+
+```ruby
+>> ctr = RedisHA::Counter.new(pool, "my-counter")
 
-    gem 'redis_ha', '~> 0.3'
+# select the smallest value when merging counter responses
+>> ctr.merge_strategy = lambda { |values| values.map(&:to_i).min }
+```
 
 
 License
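The per-server responses and pluggable merge lambdas described above can be illustrated outside the gem. This is a standalone sketch in plain Ruby (no redis_ha required; the `nil`-for-a-down-server convention follows the README examples, everything else here is illustrative, not the gem's internals):

```ruby
# Each command yields one response per server; nil marks a server
# that was down. A strategy lambda folds them into a single value.
responses = ["42", nil, "40"]   # counter reads; server #2 is down

# increment-wins merge: the largest observed value is the freshest
max_merge = lambda { |values| values.compact.map(&:to_i).max }

# the custom strategy from the README example: smallest value wins
min_merge = lambda { |values| values.compact.map(&:to_i).min }

puts max_merge.call(responses)  # => 42
puts min_merge.call(responses)  # => 40
```

Note the `compact`: a real strategy has to tolerate `nil` entries from servers that were down, otherwise a single outage would break every merge.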
@@ -2,7 +2,7 @@ class RedisHA::Set < RedisHA::Base
 
   # this lambda defines how the individual response hashes are merged
   # the default is set union
-  DEFAULT_MERGE_STRATEGY = ->(v) { v.inject(&:+).uniq }
+  DEFAULT_MERGE_STRATEGY = ->(v) { v.inject(&:|) }
 
   def add(*items)
     pool.sadd(@key, *items)
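As a quick sanity check on the `DEFAULT_MERGE_STRATEGY` change above: `inject(&:+).uniq` and `inject(&:|)` compute the same deduplicated union over per-server replies; `Array#|` simply deduplicates as it merges:

```ruby
# Three per-server SMEMBERS-style replies with overlapping members.
replies = [[:fnord, :bar], [:bar, :blubb], [:fnord]]

old_strategy = ->(v) { v.inject(&:+).uniq }  # concat, then dedup
new_strategy = ->(v) { v.inject(&:|) }       # pairwise set union

p old_strategy.call(replies)  # => [:fnord, :bar, :blubb]
p new_strategy.call(replies)  # => [:fnord, :bar, :blubb]
```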
@@ -8,15 +8,31 @@ class RedisHA::Protocol
 
   def self.peek?(buf)
     if ["+", ":", "-"].include?(buf[0])
-      buf[-2..-1] == "\r\n"
-    elsif buf[0] == "$"
+      !!buf.index("\r\n")
+
+    elsif ["$", "*"].include?(buf[0])
       offset = buf.index("\r\n").to_i
       return false if offset == 0
       length = buf[1..offset].to_i
       return true if length == -1
+      offset += 2
+
+      if buf[0] == "*"
+        multi = length
+        length.times do |ind|
+          if buf[offset+1..offset+2] == "-1"
+            offset += 5
+          elsif /^\$(?<len>[0-9]+)\r\n/ =~ buf[offset..-1]
+            length = len.to_i
+            offset += len.length + 3
+            offset += length + 2 if ind < multi - 1
+          else
+            return false
+          end
+        end
+      end
+
       buf.size >= (length + offset + 2)
-    elsif buf[0] == "*"
-      true
     end
   end
 
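For context on what the extended `peek?` is doing: a RESP reply may arrive split across reads, so the client must detect whether the buffer already holds a complete frame before handing it to the parser. A simplified, self-contained version of the bulk-string case (a sketch following the same header arithmetic; not the gem's code, and it handles only `$` frames):

```ruby
# A RESP bulk string looks like "$<len>\r\n<payload>\r\n".
# The frame is complete once the buffer covers header + payload + CRLF.
def bulk_complete?(buf)
  return false unless buf[0] == "$"
  offset = buf.index("\r\n").to_i
  return false if offset == 0          # length header not fully received
  length = buf[1..offset].to_i
  return true if length == -1          # "$-1\r\n" is a null reply
  buf.size >= (offset + 2) + length + 2
end

puts bulk_complete?("$5\r\nhel")        # => false (payload truncated)
puts bulk_complete?("$5\r\nhello\r\n")  # => true
puts bulk_complete?("$-1\r\n")          # => true (null bulk reply)
```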
@@ -27,10 +43,20 @@ class RedisHA::Protocol
     when ":" then buf[1..-3].to_i
 
     when "$"
-      buf.sub(/.*\r\n/,"")[0...-2] if buf[1..2] != "-1"
+      if buf[1..2] == "-1"
+        buf.replace(buf[5..-1] || "")
+        nil
+      else
+        len = buf.match(/^\$([-0-9]+)\r\n/)[1]
+        ret = buf[len.length+3..len.length+len.to_i+2]
+        buf.replace(buf[len.to_i+len.length+5..-1] || "")
+        ret
+      end
 
     when "*"
-      RuntimeError.new("multi bulk replies are not supported")
+      cnt = buf.match(/^\*([0-9]+)\r\n/)[1]
+      buf = buf[cnt.length+3..-1]
+      cnt.to_i.times.map { parse(buf) }
 
     end
   end
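The rewritten `*` branch parses multi-bulk replies by recursing over the remaining buffer. Below is a self-contained sketch of the same recursive, buffer-consuming approach (simplified; `parse!` is an illustrative name, not the gem's API, and it covers only the reply types shown above):

```ruby
# Destructively parse one RESP reply off the front of `buf`,
# recursing for multi-bulk ("*") replies.
def parse!(buf)
  eol = buf.index("\r\n")
  case buf[0]
  when "+"                              # simple string
    line = buf[1...eol]
    buf.replace(buf[eol + 2..-1] || "")
    line
  when ":"                              # integer
    n = buf[1...eol].to_i
    buf.replace(buf[eol + 2..-1] || "")
    n
  when "$"                              # bulk string
    len = buf[1...eol].to_i
    head = eol + 2
    if len == -1                        # "$-1\r\n" null reply
      buf.replace(buf[head..-1] || "")
      nil
    else
      ret = buf[head, len]
      buf.replace(buf[head + len + 2..-1] || "")
      ret
    end
  when "*"                              # multi-bulk: recurse per element
    cnt = buf[1...eol].to_i
    buf.replace(buf[eol + 2..-1] || "")
    cnt.times.map { parse!(buf) }
  end
end

reply = "*3\r\n$4\r\nPONG\r\n:42\r\n$-1\r\n".dup
p parse!(reply)  # => ["PONG", 42, nil]
```

Consuming via `String#replace` (rather than rebinding the local) is what lets each recursive call see the tail left behind by the previous element.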
@@ -23,6 +23,10 @@ map = RedisHA::HashMap.new(pool, "fnordmap")
 set = RedisHA::Set.new(pool, "fnordset")
 ctr = RedisHA::Counter.new(pool, "fnordctr")
 
+#set.add(:fnord, :bar, :fubar, :blubb)
+#puts pool.smembers("fnordset").inspect
+#puts set.get.inspect
+
 Ripl.start :binding => binding
 exit
 
@@ -3,7 +3,7 @@ $:.push File.expand_path("../lib", __FILE__)
 
 Gem::Specification.new do |s|
   s.name = "redis_ha"
-  s.version = "0.1.0"
+  s.version = "0.1.1"
   s.date = Date.today.to_s
   s.platform = Gem::Platform::RUBY
   s.authors = ["Paul Asmuth"]
metadata CHANGED
@@ -1,7 +1,7 @@
 --- !ruby/object:Gem::Specification
 name: redis_ha
 version: !ruby/object:Gem::Version
-  version: 0.1.0
+  version: 0.1.1
 prerelease:
 platform: ruby
 authors:
@@ -9,7 +9,7 @@ authors:
 autorequire:
 bindir: bin
 cert_chain: []
-date: 2012-12-22 00:00:00.000000000 Z
+date: 2012-12-23 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: redis