myobie-memcache-client 1.5.0.3
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- data/History.txt +85 -0
- data/README.txt +54 -0
- data/Rakefile +24 -0
- data/ext/crc32/crc32.c +28 -0
- data/ext/crc32/extconf.rb +5 -0
- data/lib/memcache.rb +791 -0
- data/lib/memcache_util.rb +90 -0
- data/test/test_mem_cache.rb +744 -0
- metadata +61 -0
data/History.txt
ADDED
@@ -0,0 +1,85 @@
= 1.5.0.3 (FiveRuns fork)

* Integrated ITU-T CRC32 operation in native C extension for speed. Thanks to Justin Balthrop!

= 1.5.0.2 (FiveRuns fork)

* Add support for seamless failover between servers. If one server connection dies,
  the client will retry the operation on another server before giving up.

* Merge Will Bryant's socket retry patch.
  http://willbryant.net/software/2007/12/21/ruby-memcache-client-reconnect-and-retry

= 1.5.0.1 (FiveRuns fork)

* Fix set not handling client disconnects.
  http://dev.twitter.com/2008/02/solving-case-of-missing-updates.html

= 1.5.0

* Add MemCache#flush_all command. Patch #13019 and bug #10503. Patches
  submitted by Sebastian Delmont and Rick Olson.
* Type-cast data returned by MemCache#stats. Patch #10505 submitted by
  Sebastian Delmont.

= 1.4.0

* Fix bug #10371, #set does not check response for server errors.
  Submitted by Ben VandenBos.
* Fix bug #12450, set TCP_NODELAY socket option. Patch by Chris
  McGrath.
* Fix bug #10704, missing #add method. Patch by Jamie Macey.
* Fix bug #10371, handle socket EOF in cache_get. Submitted by Ben
  VandenBos.

= 1.3.0

* Apply patch #6507, add stats command. Submitted by Tyler Kovacs.
* Apply patch #6509, parallel implementation of #get_multi. Submitted
  by Tyler Kovacs.
* Validate keys. Disallow spaces in keys or keys that are too long.
* Perform more validation of server responses. MemCache now reports
  errors if the socket was not in an expected state. (Please file
  bugs if you find some.)
* Add #incr and #decr.
* Add raw argument to #set and #get to retrieve #incr and #decr
  values.
* Also put on MemCacheError when using Cache::get with block.
* memcache.rb no longer sets $TESTING to a true value if it was
  previously defined. Bug #8213 by Matijs van Zuijlen.

= 1.2.1

* Fix bug #7048, MemCache#servers= referenced changed local variable.
  Submitted by Justin Dossey.
* Fix bug #7049, MemCache#initialize resets @buckets. Submitted by
  Justin Dossey.
* Fix bug #6232, Make Cache::Get work with a block only when nil is
  returned. Submitted by Jon Evans.
* Moved to the seattlerb project.

= 1.2.0

NOTE: This version will store keys in different places than previous
versions! Be prepared for some thrashing while memcached sorts itself
out!

* Fixed multithreaded operations, bug 5994 and 5989.
  Thanks to Blaine Cook, Erik Hetzner, Elliot Smith, Dave Myron (and
  possibly others I have forgotten).
* Made memcached interoperable with other memcached libraries, bug
  4509. Thanks to anonymous.
* Added get_multi to match Perl/etc APIs

= 1.1.0

* Added some tests
* Sped up non-multithreaded and multithreaded operation
* More Ruby-memcache compatibility
* More RDoc
* Switched to Hoe

= 1.0.0

Birthday!
data/README.txt
ADDED
@@ -0,0 +1,54 @@
= memcache-client

This is the FiveRuns fork of seattle.rb's memcache-client 1.5.0. We've fixed several bugs
which are in that version.

Rubyforge Project:

http://rubyforge.org/projects/seattlerb

Documentation:

http://seattlerb.org/memcache-client

== Installing memcache-client

Just install the gem:

  $ sudo gem install fiveruns-memcache-client --source http://gems.github.com

== Using memcache-client

With one server:

  CACHE = MemCache.new 'localhost:11211', :namespace => 'my_namespace'

Or with multiple servers:

  CACHE = MemCache.new %w[one.example.com:11211 two.example.com:11211],
    :namespace => 'my_namespace'

See MemCache.new for details.

=== Using memcache-client with Rails

Rails will automatically load the memcache-client gem, but you may
need to uninstall Ruby-memcache, I don't know which one will get
picked by default.

Add your environment-specific caches to config/environment/*. If you run both
development and production on the same memcached server sets, be sure
to use different namespaces. Be careful when running tests using
memcache, you may get strange results. It will be less of a headache
to simply use a readonly memcache when testing.

memcache-client also comes with a wrapper called Cache in memcache_util.rb for
use with Rails. To use it be sure to assign your memcache connection to
CACHE. Cache returns nil on all memcache errors so you don't have to rescue
the errors yourself. It has #get, #put and #delete module functions.

=== Improving Performance ===

Performing the CRC-32 ITU-T step to determine which server to use for a given key
is VERY slow in Ruby. RubyGems should compile a native library for performing this
operation when the gem is installed.
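To make the Cache wrapper described above concrete, here is a minimal sketch, assuming a local memcached on the default port; the exact signatures of the Cache module functions live in memcache_util.rb (not shown in this section), so the argument lists here are illustrative:

  require 'memcache'
  require 'memcache_util'

  CACHE = MemCache.new 'localhost:11211', :namespace => 'my_namespace'

  Cache.put 'greeting', 'hello'   # swallows memcache errors and returns nil
  Cache.get 'greeting'            # => "hello"
  Cache.delete 'greeting'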
data/Rakefile
ADDED
@@ -0,0 +1,24 @@
# vim: syntax=Ruby
require 'rubygems'
require 'rake/rdoctask'
require 'spec/rake/spectask'

task :gem do
  sh "gem build memcache-client.gemspec"
end

task :install => [:gem] do
  sh "sudo gem install memcache-client-*.gem"
end

Spec::Rake::SpecTask.new do |t|
  t.ruby_opts = ['-rtest/unit']
  t.spec_files = FileList['test/test_*.rb']
  t.fail_on_error = true
end

Rake::RDocTask.new do |rd|
  rd.main = "README.rdoc"
  rd.rdoc_files.include("README.rdoc", "lib/**/*.rb")
  rd.rdoc_dir = 'doc'
end
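Assuming the default task names generated by Spec::Rake::SpecTask and Rake::RDocTask (:spec and :rdoc), the tasks defined above would be driven roughly like this:

  $ rake gem       # builds the gem from memcache-client.gemspec
  $ rake install   # builds the gem, then installs it with sudo
  $ rake spec      # runs test/test_*.rb via the RSpec rake task
  $ rake rdoc      # renders RDoc into doc/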
data/ext/crc32/crc32.c
ADDED
@@ -0,0 +1,28 @@
#include "ruby.h"
#include "stdio.h"

static VALUE t_itu_t(VALUE self, VALUE string) {
  VALUE str = StringValue(string);
  int n = RSTRING(str)->len;
  char* p = RSTRING(str)->ptr;
  unsigned long r = 0xFFFFFFFF;
  int i, j;

  for (i = 0; i < n; i++) {
    r = r ^ p[i];
    for (j = 0; j < 8; j++) {
      if ( (r & 1) != 0 ) {
        r = (r >> 1) ^ 0xEDB88320;
      } else {
        r = r >> 1;
      }
    }
  }
  return INT2FIX(r ^ 0xFFFFFFFF);
}

VALUE cCRC32;
void Init_crc32() {
  cCRC32 = rb_define_module("CRC32");
  rb_define_module_function(cCRC32, "itu_t", t_itu_t, 1);
}
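Once compiled, the extension is loaded with require 'crc32' and exposes a single module function. A minimal sketch of how it feeds the server-selection hash used by memcache.rb below (the bucket count is an illustrative stand-in for the client's weighted bucket array):

  require 'crc32'

  num_buckets = 3                             # illustrative bucket count
  key = 'my_namespace:some_key'
  crc = CRC32.itu_t(key)                      # same value the pure-Ruby fallback computes
  bucket = ((crc >> 16) & 0x7fff) % num_buckets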
data/lib/memcache.rb
ADDED
@@ -0,0 +1,791 @@
$TESTING = defined?($TESTING) && $TESTING

require 'socket'
require 'thread'
require 'timeout'
require 'rubygems'

class String

  ##
  # Uses the ITU-T polynomial in the CRC32 algorithm.
  begin
    require 'crc32'
    def crc32_ITU_T
      CRC32.itu_t(self)
    end
  rescue LoadError => e
    puts "Loading with slow CRC32 ITU-T implementation: #{e.message}"

    def crc32_ITU_T
      n = length
      r = 0xFFFFFFFF

      n.times do |i|
        r ^= self[i]
        8.times do
          if (r & 1) != 0 then
            r = (r>>1) ^ 0xEDB88320
          else
            r >>= 1
          end
        end
      end

      r ^ 0xFFFFFFFF
    end
  end

end

##
# A Ruby client library for memcached.
#
# This is intended to provide access to basic memcached functionality. It
# does not attempt to be complete implementation of the entire API, but it is
# approaching a complete implementation.

class MemCache

  ##
  # The version of MemCache you are using.

  VERSION = '1.5.0.1'

  ##
  # Default options for the cache object.

  DEFAULT_OPTIONS = {
    :namespace   => nil,
    :readonly    => false,
    :multithread => false,
  }

  ##
  # Default memcached port.

  DEFAULT_PORT = 11211

  ##
  # Default memcached server weight.

  DEFAULT_WEIGHT = 1

  ##
  # The amount of time to wait for a response from a memcached server. If a
  # response is not completed within this time, the connection to the server
  # will be closed and an error will be raised.

  attr_accessor :request_timeout

  ##
  # The namespace for this instance

  attr_reader :namespace

  ##
  # The multithread setting for this instance

  attr_reader :multithread

  ##
  # The servers this client talks to. Play at your own peril.

  attr_reader :servers

  ##
  # Accepts a list of +servers+ and a list of +opts+. +servers+ may be
  # omitted. See +servers=+ for acceptable server list arguments.
  #
  # Valid options for +opts+ are:
  #
  #   [:namespace]   Prepends this value to all keys added or retrieved.
  #   [:readonly]    Raises an exeception on cache writes when true.
  #   [:multithread] Wraps cache access in a Mutex for thread safety.
  #
  # Other options are ignored.

  def initialize(*args)
    servers = []
    opts = {}

    case args.length
    when 0 then # NOP
    when 1 then
      arg = args.shift
      case arg
      when Hash   then opts = arg
      when Array  then servers = arg
      when String then servers = [arg]
      else raise ArgumentError, 'first argument must be Array, Hash or String'
      end
    when 2 then
      servers, opts = args
    else
      raise ArgumentError, "wrong number of arguments (#{args.length} for 2)"
    end

    opts = DEFAULT_OPTIONS.merge opts
    @namespace   = opts[:namespace]
    @readonly    = opts[:readonly]
    @multithread = opts[:multithread]
    @mutex       = Mutex.new if @multithread
    @buckets     = []
    self.servers = servers
  end

  ##
  # Returns a string representation of the cache object.

  def inspect
    "<MemCache: %d servers, %d buckets, ns: %p, ro: %p>" %
      [@servers.length, @buckets.length, @namespace, @readonly]
  end

  ##
  # Returns whether there is at least one active server for the object.

  def active?
    not @servers.empty?
  end

  ##
  # Returns whether or not the cache object was created read only.

  def readonly?
    @readonly
  end

  ##
  # Set the servers that the requests will be distributed between. Entries
  # can be either strings of the form "hostname:port" or
  # "hostname:port:weight" or MemCache::Server objects.

  def servers=(servers)
    # Create the server objects.
    @servers = servers.collect do |server|
      case server
      when String
        host, port, weight = server.split ':', 3
        port ||= DEFAULT_PORT
        weight ||= DEFAULT_WEIGHT
        Server.new self, host, port, weight
      when Server
        if server.memcache.multithread != @multithread then
          raise ArgumentError, "can't mix threaded and non-threaded servers"
        end
        server
      else
        raise TypeError, "cannot convert #{server.class} into MemCache::Server"
      end
    end

    # Create an array of server buckets for weight selection of servers.
    @buckets = []
    @servers.each do |server|
      server.weight.times { @buckets.push(server) }
    end
  end

  ##
  # Decrements the value for +key+ by +amount+ and returns the new value.
  # +key+ must already exist. If +key+ is not an integer, it is assumed to be
  # 0. +key+ can not be decremented below 0.

  def decr(key, amount = 1)
    raise MemCacheError, "Update of readonly cache" if @readonly
    with_server(key) do |server, cache_key|
      cache_decr server, cache_key, amount
    end
  rescue TypeError => err
    handle_error server, err
  end

  ##
  # Retrieves +key+ from memcache. If +raw+ is false, the value will be
  # unmarshalled.

  def get(key, raw = false)
    with_server(key) do |server, cache_key|
      value = cache_get server, cache_key
      return nil if value.nil?
      value = Marshal.load value unless raw
      return value
    end
  rescue TypeError => err
    handle_error server, err
  end

  ##
  # Retrieves multiple values from memcached in parallel, if possible.
  #
  # The memcached protocol supports the ability to retrieve multiple
  # keys in a single request. Pass in an array of keys to this method
  # and it will:
  #
  # 1. map the key to the appropriate memcached server
  # 2. send a single request to each server that has one or more key values
  #
  # Returns a hash of values.
  #
  #   cache["a"] = 1
  #   cache["b"] = 2
  #   cache.get_multi "a", "b" # => { "a" => 1, "b" => 2 }

  def get_multi(*keys)
    raise MemCacheError, 'No active servers' unless active?

    keys.flatten!
    key_count = keys.length
    cache_keys = {}
    server_keys = Hash.new { |h,k| h[k] = [] }

    # map keys to servers
    keys.each do |key|
      server, cache_key = request_setup key
      cache_keys[cache_key] = key
      server_keys[server] << cache_key
    end

    results = {}

    server_keys.each do |server, keys|
      keys = keys.join ' '
      values = cache_get_multi server, keys
      values.each do |key, value|
        results[cache_keys[key]] = Marshal.load value
      end
    end

    return results
  rescue TypeError => err
    handle_error server, err
  end

  ##
  # Increments the value for +key+ by +amount+ and returns the new value.
  # +key+ must already exist. If +key+ is not an integer, it is assumed to be
  # 0.

  def incr(key, amount = 1)
    raise MemCacheError, "Update of readonly cache" if @readonly
    with_server(key) do |server, cache_key|
      cache_incr server, cache_key, amount
    end
  rescue TypeError => err
    handle_error server, err
  end

  ##
  # Add +key+ to the cache with value +value+ that expires in +expiry+
  # seconds. If +raw+ is true, +value+ will not be Marshalled.
  #
  # Warning: Readers should not call this method in the event of a cache miss;
  # see MemCache#add.

  def set(key, value, expiry = 0, raw = false)
    raise MemCacheError, "Update of readonly cache" if @readonly
    with_server(key) do |server, cache_key|

      value = Marshal.dump value unless raw
      command = "set #{cache_key} 0 #{expiry} #{value.to_s.size}\r\n#{value}\r\n"

      with_socket_management(server) do |socket|
        socket.write command
        result = socket.gets
        if result.nil?
          server.close
          raise MemCacheError, "lost connection to #{server.host}:#{server.port}"
        end

        if result =~ /^SERVER_ERROR (.*)/
          server.close
          raise MemCacheError, $1.strip
        end
      end
    end
  end

  ##
  # Add +key+ to the cache with value +value+ that expires in +expiry+
  # seconds, but only if +key+ does not already exist in the cache.
  # If +raw+ is true, +value+ will not be Marshalled.
  #
  # Readers should call this method in the event of a cache miss, not
  # MemCache#set or MemCache#[]=.

  def add(key, value, expiry = 0, raw = false)
    raise MemCacheError, "Update of readonly cache" if @readonly
    with_server(key) do |server, cache_key|
      value = Marshal.dump value unless raw
      command = "add #{cache_key} 0 #{expiry} #{value.size}\r\n#{value}\r\n"

      with_socket_management(server) do |socket|
        socket.write command
        socket.gets
      end
    end
  end

  ##
  # Removes +key+ from the cache in +expiry+ seconds.

  def delete(key, expiry = 0)
    raise MemCacheError, "Update of readonly cache" if @readonly
    server, cache_key = request_setup key

    with_socket_management(server) do |socket|
      socket.write "delete #{cache_key} #{expiry}\r\n"
      socket.gets
    end
  end

  ##
  # Flush the cache from all memcache servers.

  def flush_all
    raise MemCacheError, 'No active servers' unless active?
    raise MemCacheError, "Update of readonly cache" if @readonly
    begin
      @mutex.lock if @multithread
      @servers.each do |server|
        with_socket_management(server) do |socket|
          socket.write "flush_all\r\n"
          result = socket.gets
          raise MemCacheError, $2.strip if result =~ /^(SERVER_)?ERROR(.*)/
        end
      end
    ensure
      @mutex.unlock if @multithread
    end
  end

  ##
  # Reset the connection to all memcache servers. This should be called if
  # there is a problem with a cache lookup that might have left the connection
  # in a corrupted state.

  def reset
    @servers.each { |server| server.close }
  end

  ##
  # Returns statistics for each memcached server. An explanation of the
  # statistics can be found in the memcached docs:
  #
  # http://code.sixapart.com/svn/memcached/trunk/server/doc/protocol.txt
  #
  # Example:
  #
  #   >> pp CACHE.stats
  #   {"localhost:11211"=>
  #     {"bytes"=>4718,
  #      "pid"=>20188,
  #      "connection_structures"=>4,
  #      "time"=>1162278121,
  #      "pointer_size"=>32,
  #      "limit_maxbytes"=>67108864,
  #      "cmd_get"=>14532,
  #      "version"=>"1.2.0",
  #      "bytes_written"=>432583,
  #      "cmd_set"=>32,
  #      "get_misses"=>0,
  #      "total_connections"=>19,
  #      "curr_connections"=>3,
  #      "curr_items"=>4,
  #      "uptime"=>1557,
  #      "get_hits"=>14532,
  #      "total_items"=>32,
  #      "rusage_system"=>0.313952,
  #      "rusage_user"=>0.119981,
  #      "bytes_read"=>190619}}
  #   => nil

  def stats
    raise MemCacheError, "No active servers" unless active?
    server_stats = {}

    @servers.each do |server|
      next unless server.alive?
      with_socket_management(server) do |socket|
        value = nil # TODO: why is this line here?
        socket.write "stats\r\n"
        stats = {}
        while line = socket.gets do
          break if line == "END\r\n"
          if line =~ /^STAT ([\w]+) ([\w\.\:]+)/ then
            name, value = $1, $2
            stats[name] = case name
                          when 'version'
                            value
                          when 'rusage_user', 'rusage_system' then
                            seconds, microseconds = value.split(/:/, 2)
                            microseconds ||= 0
                            Float(seconds) + (Float(microseconds) / 1_000_000)
                          else
                            if value =~ /^\d+$/ then
                              value.to_i
                            else
                              value
                            end
                          end
          end
        end
        server_stats["#{server.host}:#{server.port}"] = stats
      end
    end

    server_stats
  end

  ##
  # Shortcut to get a value from the cache.

  alias [] get

  ##
  # Shortcut to save a value in the cache. This method does not set an
  # expiration on the entry. Use set to specify an explicit expiry.

  def []=(key, value)
    set key, value
  end

  protected unless $TESTING

  ##
  # Create a key for the cache, incorporating the namespace qualifier if
  # requested.

  def make_cache_key(key)
    if namespace.nil? then
      key
    else
      "#{@namespace}:#{key}"
    end
  end

  ##
  # Pick a server to handle the request based on a hash of the key.

  def get_server_for_key(key)
    raise ArgumentError, "illegal character in key #{key.inspect}" if
      key =~ /\s/
    raise ArgumentError, "key too long #{key.inspect}" if key.length > 250
    raise MemCacheError, "No servers available" if @servers.empty?
    return @servers.first if @servers.length == 1

    hkey = hash_for key

    20.times do |try|
      server = @buckets[hkey % @buckets.nitems]
      return server if server.alive?
      hkey += hash_for "#{try}#{key}"
    end

    raise MemCacheError, "No servers available"
  end

  ##
  # Returns an interoperable hash value for +key+. (I think, docs are
  # sketchy for down servers).

  def hash_for(key)
    (key.crc32_ITU_T >> 16) & 0x7fff
  end

  ##
  # Performs a raw decr for +cache_key+ from +server+. Returns nil if not
  # found.

  def cache_decr(server, cache_key, amount)
    with_socket_management(server) do |socket|
      socket.write "decr #{cache_key} #{amount}\r\n"
      text = socket.gets
      return nil if text == "NOT_FOUND\r\n"
      return text.to_i
    end
  end

  ##
  # Fetches the raw data for +cache_key+ from +server+. Returns nil on cache
  # miss.

  def cache_get(server, cache_key)
    with_socket_management(server) do |socket|
      socket.write "get #{cache_key}\r\n"
      keyline = socket.gets # "VALUE <key> <flags> <bytes>\r\n"

      if keyline.nil? then
        server.close
        raise MemCacheError, "lost connection to #{server.host}:#{server.port}" # TODO: retry here too
      end

      return nil if keyline == "END\r\n"

      unless keyline =~ /(\d+)\r/ then
        server.close
        raise MemCacheError, "unexpected response #{keyline.inspect}"
      end
      value = socket.read $1.to_i
      socket.read 2 # "\r\n"
      socket.gets   # "END\r\n"
      return value
    end
  end

  ##
  # Fetches +cache_keys+ from +server+ using a multi-get.

  def cache_get_multi(server, cache_keys)
    with_socket_management(server) do |socket|
      values = {}
      socket.write "get #{cache_keys}\r\n"

      while keyline = socket.gets do
        return values if keyline == "END\r\n"

        unless keyline =~ /^VALUE (.+) (.+) (.+)/ then
          server.close
          raise MemCacheError, "unexpected response #{keyline.inspect}"
        end

        key, data_length = $1, $3
        values[$1] = socket.read data_length.to_i
        socket.read(2) # "\r\n"
      end

      server.close
      raise MemCacheError, "lost connection to #{server.host}:#{server.port}" # TODO: retry here too
    end
  end

  ##
  # Performs a raw incr for +cache_key+ from +server+. Returns nil if not
  # found.

  def cache_incr(server, cache_key, amount)
    with_socket_management(server) do |socket|
      socket.write "incr #{cache_key} #{amount}\r\n"
      text = socket.gets
      return nil if text == "NOT_FOUND\r\n"
      return text.to_i
    end
  end

  ##
  # Gets or creates a socket connected to the given server, and yields it
  # to the block. If a socket error (SocketError, SystemCallError, IOError)
  # or protocol error (MemCacheError) is raised by the block, closes the
  # socket, attempts to connect again, and retries the block (once). If
  # an error is again raised, reraises it as MemCacheError.
  # If unable to connect to the server (or if in the reconnect wait period),
  # raises MemCacheError - note that the socket connect code marks a server
  # dead for a timeout period, so retrying does not apply to connection attempt
  # failures (but does still apply to unexpectedly lost connections etc.).
  # Wraps the whole lot in mutex synchronization if @multithread is true.

  def with_socket_management(server, &block)
    @mutex.lock if @multithread
    retried = false
    begin
      socket = server.socket
      # Raise an IndexError to show this server is out of whack.
      # We'll catch it in higher-level code and attempt to restart the operation.
      raise IndexError, "No connection to server (#{server.status})" if socket.nil?
      block.call(socket)
    rescue MemCacheError, SocketError, SystemCallError, IOError => err
      handle_error(server, err) if retried || socket.nil?
      retried = true
      retry
    end
  ensure
    @mutex.unlock if @multithread
  end

  def with_server(key)
    retried = false
    begin
      server, cache_key = request_setup(key)
      yield server, cache_key
    rescue IndexError => e
      if !retried && @servers.size > 1
        puts "Connection to server #{server.inspect} DIED! Retrying operation..."
        retried = true
        retry
      end
      handle_error(nil, e)
    end
  end

  ##
  # Handles +error+ from +server+.

  def handle_error(server, error)
    raise error if error.is_a?(MemCacheError)
    server.close if server
    new_error = MemCacheError.new error.message
    new_error.set_backtrace error.backtrace
    raise new_error
  end

  ##
  # Performs setup for making a request with +key+ from memcached. Returns
  # the server to fetch the key from and the complete key to use.

  def request_setup(key)
    raise MemCacheError, 'No active servers' unless active?
    cache_key = make_cache_key key
    server = get_server_for_key cache_key
    return server, cache_key
  end

  ##
  # This class represents a memcached server instance.

  class Server

    ##
    # The amount of time to wait to establish a connection with a memcached
    # server. If a connection cannot be established within this time limit,
    # the server will be marked as down.

    CONNECT_TIMEOUT = 0.25

    ##
    # The amount of time to wait before attempting to re-establish a
    # connection with a server that is marked dead.

    RETRY_DELAY = 30.0

    ##
    # The host the memcached server is running on.

    attr_reader :host

    ##
    # The port the memcached server is listening on.

    attr_reader :port

    ##
    # The weight given to the server.

    attr_reader :weight

    ##
    # The time of next retry if the connection is dead.

    attr_reader :retry

    ##
    # A text status string describing the state of the server.

    attr_reader :status

    ##
    # Create a new MemCache::Server object for the memcached instance
    # listening on the given host and port, weighted by the given weight.

    def initialize(memcache, host, port = DEFAULT_PORT, weight = DEFAULT_WEIGHT)
      raise ArgumentError, "No host specified" if host.nil? or host.empty?
      raise ArgumentError, "No port specified" if port.nil? or port.to_i.zero?

      @memcache = memcache
      @host   = host
      @port   = port.to_i
      @weight = weight.to_i

      @multithread = @memcache.multithread
      @mutex = Mutex.new

      @sock   = nil
      @retry  = nil
      @status = 'NOT CONNECTED'
    end

    ##
    # Return a string representation of the server object.

    def inspect
      "<MemCache::Server: %s:%d [%d] (%s)>" % [@host, @port, @weight, @status]
    end

    ##
    # Check whether the server connection is alive. This will cause the
    # socket to attempt to connect if it isn't already connected and or if
    # the server was previously marked as down and the retry time has
    # been exceeded.

    def alive?
      !!socket
    end

    ##
    # Try to connect to the memcached server targeted by this object.
    # Returns the connected socket object on success or nil on failure.

    def socket
      @mutex.lock if @multithread
      return @sock if @sock and not @sock.closed?

      @sock = nil

      # If the host was dead, don't retry for a while.
      return if @retry and @retry > Time.now

      # Attempt to connect if not already connected.
      begin
        @sock = timeout CONNECT_TIMEOUT do
          TCPSocket.new @host, @port
        end
        if Socket.constants.include? 'TCP_NODELAY' then
          @sock.setsockopt Socket::IPPROTO_TCP, Socket::TCP_NODELAY, 1
        end
        @retry  = nil
        @status = 'CONNECTED'
      rescue SocketError, SystemCallError, IOError, Timeout::Error => err
        mark_dead err.message
      end

      return @sock
    ensure
      @mutex.unlock if @multithread
    end

    ##
    # Close the connection to the memcached server targeted by this
    # object. The server is not considered dead.

    def close
      @mutex.lock if @multithread
      @sock.close if @sock && !@sock.closed?
      @sock   = nil
      @retry  = nil
      @status = "NOT CONNECTED"
    ensure
      @mutex.unlock if @multithread
    end

    private

    ##
    # Mark the server as dead and close its socket.

    def mark_dead(reason = "Unknown error")
      @sock.close if @sock && !@sock.closed?
      @sock   = nil
      @retry  = Time.now + RETRY_DELAY

      @status = sprintf "DEAD: %s, will retry at %s", reason, @retry
    end

  end

  ##
  # Base MemCache exception class.

  class MemCacheError < RuntimeError; end

end
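The counter methods defined above (#incr and #decr) operate on raw, unmarshalled values, so a counter must be stored with raw set to true. A short usage sketch with a hypothetical key:

  cache = MemCache.new 'localhost:11211', :namespace => 'my_namespace'

  cache.set 'visits', '0', 0, true   # raw = true, so memcached can treat it as a number
  cache.incr 'visits'                # => 1
  cache.incr 'visits', 10            # => 11
  cache.decr 'visits', 5             # => 6
  cache.get  'visits', true          # => "6" (raw value, no Marshal.load)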