brendanlim-memcache-client 1.5.0.6
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- data/History.txt +109 -0
- data/LICENSE.txt +28 -0
- data/README.txt +5 -0
- data/Rakefile +22 -0
- data/lib/memcache.rb +841 -0
- data/lib/memcache_util.rb +102 -0
- data/test/test_mem_cache.rb +805 -0
- metadata +62 -0
data/History.txt
ADDED
@@ -0,0 +1,109 @@
= Unreleased

* Implement a consistent hashing algorithm, as described in libketama.
  This dramatically reduces the cost of adding or removing servers dynamically
  as keys are much more likely to map to the same server.

  Take a scenario where we add a fourth server. With a dumb modulo algorithm, about
  25% of the keys will map to the same server. In other words, 75% of your memcached
  content suddenly becomes invalid. With a consistent algorithm, 75% of the keys
  will map to the same server as before - only 25% will be invalidated.

= 1.5.0.5

* Remove native C CRC32_ITU_T extension in favor of Zlib's crc32 method.
  memcache-client is now pure Ruby again and will work with JRuby and Rubinius.

= 1.5.0.4

* Get test suite working again (packagethief)
* Ruby 1.9 compatibility fixes (packagethief, mperham)
* Consistently return server responses and check for errors (packagethief)
* Properly calculate CRC in Ruby 1.9 strings (mperham)
* Drop rspec in favor of test/unit, for 1.9 compat (mperham)

= 1.5.0.3 (FiveRuns fork)

* Integrated ITU-T CRC32 operation in native C extension for speed. Thanks to Justin Balthrop!

= 1.5.0.2 (FiveRuns fork)

* Add support for seamless failover between servers. If one server connection dies,
  the client will retry the operation on another server before giving up.

* Merge Will Bryant's socket retry patch.
  http://willbryant.net/software/2007/12/21/ruby-memcache-client-reconnect-and-retry

= 1.5.0.1 (FiveRuns fork)

* Fix set not handling client disconnects.
  http://dev.twitter.com/2008/02/solving-case-of-missing-updates.html

= 1.5.0

* Add MemCache#flush_all command. Patch #13019 and bug #10503. Patches
  submitted by Sebastian Delmont and Rick Olson.
* Type-cast data returned by MemCache#stats. Patch #10505 submitted by
  Sebastian Delmont.

= 1.4.0

* Fix bug #10371, #set does not check response for server errors.
  Submitted by Ben VandenBos.
* Fix bug #12450, set TCP_NODELAY socket option. Patch by Chris
  McGrath.
* Fix bug #10704, missing #add method. Patch by Jamie Macey.
* Fix bug #10371, handle socket EOF in cache_get. Submitted by Ben
  VandenBos.

= 1.3.0

* Apply patch #6507, add stats command. Submitted by Tyler Kovacs.
* Apply patch #6509, parallel implementation of #get_multi. Submitted
  by Tyler Kovacs.
* Validate keys. Disallow spaces in keys or keys that are too long.
* Perform more validation of server responses. MemCache now reports
  errors if the socket was not in an expected state. (Please file
  bugs if you find some.)
* Add #incr and #decr.
* Add raw argument to #set and #get to retrieve #incr and #decr
  values.
* Also raise MemCacheError when using Cache::get with a block.
* memcache.rb no longer sets $TESTING to a true value if it was
  previously defined. Bug #8213 by Matijs van Zuijlen.

= 1.2.1

* Fix bug #7048, MemCache#servers= referenced changed local variable.
  Submitted by Justin Dossey.
* Fix bug #7049, MemCache#initialize resets @buckets. Submitted by
  Justin Dossey.
* Fix bug #6232, Make Cache::Get work with a block only when nil is
  returned. Submitted by Jon Evans.
* Moved to the seattlerb project.

= 1.2.0

NOTE: This version will store keys in different places than previous
versions! Be prepared for some thrashing while memcached sorts itself
out!

* Fixed multithreaded operations, bug 5994 and 5989.
  Thanks to Blaine Cook, Erik Hetzner, Elliot Smith, Dave Myron (and
  possibly others I have forgotten).
* Made memcached interoperable with other memcached libraries, bug
  4509. Thanks to anonymous.
* Added get_multi to match Perl/etc APIs

= 1.1.0

* Added some tests
* Sped up non-multithreaded and multithreaded operation
* More Ruby-memcache compatibility
* More RDoc
* Switched to Hoe

= 1.0.0

Birthday!
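The remapping arithmetic in the Unreleased entry can be checked with a short simulation. The sketch below is illustrative only, not one of the packaged files: it builds a libketama-style continuum the way lib/memcache.rb does (SHA1-derived points per server, 160 each in the equal-weight case) and counts how many of 10,000 keys keep their server when a fourth server joins, compared with a plain hash-modulo scheme. Expect roughly 75% versus roughly 25%.

  require 'digest/sha1'
  require 'zlib'

  POINTS_PER_SERVER = 160 # same default as lib/memcache.rb

  # Sorted [point, server] pairs, built like MemCache#create_continuum_for.
  def continuum_for(servers)
    servers.flat_map do |srv|
      (0...POINTS_PER_SERVER).map do |i|
        [Integer("0x#{Digest::SHA1.hexdigest("#{srv}:#{i}")[0..7]}"), srv]
      end
    end.sort
  end

  # Conventional ketama lookup: first point at or after the key's hash,
  # wrapping to the start of the ring. (memcache.rb's binary_search walks
  # the ring in the other direction; the distribution argument is the same.)
  def server_for(continuum, key)
    h = Zlib.crc32(key)
    entry = continuum.find { |point, _| point >= h } || continuum.first
    entry.last
  end

  keys  = (1..10_000).map { |i| "key#{i}" }
  three = %w[a:11211 b:11211 c:11211]
  four  = three + ['d:11211']

  # Plain modulo: a key stays put only when crc % 3 == crc % 4.
  mod_same  = keys.count { |k| Zlib.crc32(k) % 3 == Zlib.crc32(k) % 4 }

  # Consistent hashing: compare ring lookups before and after adding a server.
  c3, c4    = continuum_for(three), continuum_for(four)
  ring_same = keys.count { |k| server_for(c3, k) == server_for(c4, k) }

  puts format("modulo:    %.1f%% of keys keep their server", 100.0 * mod_same  / keys.size)
  puts format("continuum: %.1f%% of keys keep their server", 100.0 * ring_same / keys.size)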
data/LICENSE.txt
ADDED
@@ -0,0 +1,28 @@
All original code copyright 2005, 2006, 2007 Bob Cottrell, Eric Hodel,
The Robot Co-op. All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:

  1. Redistributions of source code must retain the above copyright
     notice, this list of conditions and the following disclaimer.
  2. Redistributions in binary form must reproduce the above copyright
     notice, this list of conditions and the following disclaimer in the
     documentation and/or other materials provided with the distribution.
  3. Neither the names of the authors nor the names of their contributors
     may be used to endorse or promote products derived from this software
     without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE AUTHORS ``AS IS'' AND ANY EXPRESS
OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHORS OR CONTRIBUTORS BE
LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY,
OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT
OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE,
EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
data/README.txt
ADDED
data/Rakefile
ADDED
@@ -0,0 +1,22 @@
# vim: syntax=Ruby
require 'rubygems'
require 'rake/rdoctask'
require 'rake/testtask'

task :gem do
  sh "gem build memcache-client.gemspec"
end

task :install => [:gem] do
  sh "sudo gem install memcache-client-*.gem"
end

Rake::RDocTask.new do |rd|
  rd.main = "README.rdoc"
  rd.rdoc_files.include("README.rdoc", "lib/**/*.rb")
  rd.rdoc_dir = 'doc'
end

Rake::TestTask.new

task :default => :test
data/lib/memcache.rb
ADDED
@@ -0,0 +1,841 @@
$TESTING = defined?($TESTING) && $TESTING

require 'socket'
require 'thread'
require 'timeout'
require 'rubygems'
require 'zlib'
require 'digest/sha1'
require 'digest/md5'
##
# A Ruby client library for memcached.
#
# This is intended to provide access to basic memcached functionality.  It
# does not attempt to be complete implementation of the entire API, but it is
# approaching a complete implementation.

class MemCache

  ##
  # The version of MemCache you are using.

  VERSION = '1.5.0.7'

  ##
  # Default options for the cache object.

  DEFAULT_OPTIONS = {
    :namespace   => nil,
    :readonly    => false,
    :multithread => false,
  }

  ##
  # Default memcached port.

  DEFAULT_PORT = 11211

  ##
  # Default memcached server weight.

  DEFAULT_WEIGHT = 1

  ##
  # The amount of time to wait for a response from a memcached server.  If a
  # response is not completed within this time, the connection to the server
  # will be closed and an error will be raised.

  attr_accessor :request_timeout

  ##
  # The namespace for this instance

  attr_reader :namespace

  ##
  # The multithread setting for this instance

  attr_reader :multithread

  ##
  # The servers this client talks to.  Play at your own peril.

  attr_reader :servers

  ##
  # Accepts a list of +servers+ and a list of +opts+.  +servers+ may be
  # omitted.  See +servers=+ for acceptable server list arguments.
  #
  # Valid options for +opts+ are:
  #
  #   [:namespace]   Prepends this value to all keys added or retrieved.
  #   [:readonly]    Raises an exception on cache writes when true.
  #   [:multithread] Wraps cache access in a Mutex for thread safety.
  #
  # Other options are ignored.

  def initialize(*args)
    servers = []
    opts = {}

    case args.length
    when 0 then # NOP
    when 1 then
      arg = args.shift
      case arg
      when Hash   then opts = arg
      when Array  then servers = arg
      when String then servers = [arg]
      else raise ArgumentError, 'first argument must be Array, Hash or String'
      end
    when 2 then
      servers, opts = args
    else
      raise ArgumentError, "wrong number of arguments (#{args.length} for 2)"
    end

    opts = DEFAULT_OPTIONS.merge opts
    @namespace   = opts[:namespace]
    @readonly    = opts[:readonly]
    @multithread = opts[:multithread]
    @mutex       = Mutex.new if @multithread
    @buckets     = []
    self.servers = servers
  end

  ##
  # Returns a string representation of the cache object.

  def inspect
    "<MemCache: %d servers, %d buckets, ns: %p, ro: %p>" %
      [@servers.length, @buckets.length, @namespace, @readonly]
  end

  ##
  # Returns whether there is at least one active server for the object.

  def active?
    not @servers.empty?
  end

  ##
  # Returns whether or not the cache object was created read only.

  def readonly?
    @readonly
  end

  ##
  # Set the servers that the requests will be distributed between.  Entries
  # can be either strings of the form "hostname:port" or
  # "hostname:port:weight" or MemCache::Server objects.
  #
  def servers=(servers)
    # Create the server objects.
    @servers = Array(servers).collect do |server|
      case server
      when String
        host, port, weight = server.split ':', 3
        port ||= DEFAULT_PORT
        weight ||= DEFAULT_WEIGHT
        Server.new self, host, port, weight
      when Server
        if server.multithread != @multithread then
          raise ArgumentError, "can't mix threaded and non-threaded servers"
        end
        server
      else
        raise TypeError, "cannot convert #{server.class} into MemCache::Server"
      end
    end

    # There's no point in doing this if there's only one server
    @continuum = create_continuum_for(@servers) if @servers.size > 1

    @servers
  end

  ##
  # Decrements the value for +key+ by +amount+ and returns the new value.
  # +key+ must already exist.  If +key+ is not an integer, it is assumed to be
  # 0.  +key+ can not be decremented below 0.

  def decr(key, amount = 1)
    raise MemCacheError, "Update of readonly cache" if @readonly
    with_server(key) do |server, cache_key|
      cache_decr server, cache_key, amount
    end
  rescue TypeError => err
    handle_error nil, err
  end

  ##
  # Retrieves +key+ from memcache.  If +raw+ is false, the value will be
  # unmarshalled.

  def get(key, raw = false)
    with_server(key) do |server, cache_key|
      value = cache_get server, cache_key
      return nil if value.nil?
      value = Marshal.load value unless raw
      return value
    end
  rescue TypeError => err
    handle_error nil, err
  end

  ##
  # Retrieves multiple values from memcached in parallel, if possible.
  #
  # The memcached protocol supports the ability to retrieve multiple
  # keys in a single request.  Pass in an array of keys to this method
  # and it will:
  #
  # 1. map the key to the appropriate memcached server
  # 2. send a single request to each server that has one or more key values
  #
  # Returns a hash of values.
  #
  #   cache["a"] = 1
  #   cache["b"] = 2
  #   cache.get_multi "a", "b" # => { "a" => 1, "b" => 2 }

  def get_multi(*keys)
    raise MemCacheError, 'No active servers' unless active?

    keys.flatten!
    key_count = keys.length
    cache_keys = {}
    server_keys = Hash.new { |h,k| h[k] = [] }

    # map keys to servers
    keys.each do |key|
      server, cache_key = request_setup key
      cache_keys[cache_key] = key
      server_keys[server] << cache_key
    end

    results = {}

    server_keys.each do |server, keys_for_server|
      keys_for_server = keys_for_server.join ' '
      values = cache_get_multi server, keys_for_server
      values.each do |key, value|
        results[cache_keys[key]] = Marshal.load value
      end
    end

    return results
  rescue TypeError, IndexError => err
    handle_error nil, err
  end

  ##
  # Increments the value for +key+ by +amount+ and returns the new value.
  # +key+ must already exist.  If +key+ is not an integer, it is assumed to be
  # 0.

  def incr(key, amount = 1)
    raise MemCacheError, "Update of readonly cache" if @readonly
    with_server(key) do |server, cache_key|
      cache_incr server, cache_key, amount
    end
  rescue TypeError => err
    handle_error nil, err
  end

  ##
  # Add +key+ to the cache with value +value+ that expires in +expiry+
  # seconds.  If +raw+ is true, +value+ will not be Marshalled.
  #
  # Warning: Readers should not call this method in the event of a cache miss;
  # see MemCache#add.

  def set(key, value, expiry = 0, raw = false)
    raise MemCacheError, "Update of readonly cache" if @readonly
    with_server(key) do |server, cache_key|

      value = Marshal.dump value unless raw
      command = "set #{cache_key} 0 #{expiry} #{value.to_s.size}\r\n#{value}\r\n"

      with_socket_management(server) do |socket|
        socket.write command
        result = socket.gets
        raise_on_error_response! result

        if result.nil?
          server.close
          raise MemCacheError, "lost connection to #{server.host}:#{server.port}"
        end

        result
      end
    end
  end

  ##
  # Add +key+ to the cache with value +value+ that expires in +expiry+
  # seconds, but only if +key+ does not already exist in the cache.
  # If +raw+ is true, +value+ will not be Marshalled.
  #
  # Readers should call this method in the event of a cache miss, not
  # MemCache#set or MemCache#[]=.

  def add(key, value, expiry = 0, raw = false)
    raise MemCacheError, "Update of readonly cache" if @readonly
    with_server(key) do |server, cache_key|
      value = Marshal.dump value unless raw
      command = "add #{cache_key} 0 #{expiry} #{value.size}\r\n#{value}\r\n"

      with_socket_management(server) do |socket|
        socket.write command
        result = socket.gets
        raise_on_error_response! result
        result
      end
    end
  end

  ##
  # Removes +key+ from the cache in +expiry+ seconds.

  def delete(key, expiry = 0)
    raise MemCacheError, "Update of readonly cache" if @readonly
    with_server(key) do |server, cache_key|
      with_socket_management(server) do |socket|
        socket.write "delete #{cache_key} #{expiry}\r\n"
        result = socket.gets
        raise_on_error_response! result
        result
      end
    end
  end

  ##
  # Flush the cache from all memcache servers.

  def flush_all
    raise MemCacheError, 'No active servers' unless active?
    raise MemCacheError, "Update of readonly cache" if @readonly

    begin
      @mutex.lock if @multithread
      @servers.each do |server|
        with_socket_management(server) do |socket|
          socket.write "flush_all\r\n"
          result = socket.gets
          raise_on_error_response! result
          result
        end
      end
    rescue IndexError => err
      handle_error nil, err
    ensure
      @mutex.unlock if @multithread
    end
  end

  ##
  # Reset the connection to all memcache servers.  This should be called if
  # there is a problem with a cache lookup that might have left the connection
  # in a corrupted state.

  def reset
    @servers.each { |server| server.close }
  end

  ##
  # Returns statistics for each memcached server.  An explanation of the
  # statistics can be found in the memcached docs:
  #
  # http://code.sixapart.com/svn/memcached/trunk/server/doc/protocol.txt
  #
  # Example:
  #
  #   >> pp CACHE.stats
  #   {"localhost:11211"=>
  #     {"bytes"=>4718,
  #      "pid"=>20188,
  #      "connection_structures"=>4,
  #      "time"=>1162278121,
  #      "pointer_size"=>32,
  #      "limit_maxbytes"=>67108864,
  #      "cmd_get"=>14532,
  #      "version"=>"1.2.0",
  #      "bytes_written"=>432583,
  #      "cmd_set"=>32,
  #      "get_misses"=>0,
  #      "total_connections"=>19,
  #      "curr_connections"=>3,
  #      "curr_items"=>4,
  #      "uptime"=>1557,
  #      "get_hits"=>14532,
  #      "total_items"=>32,
  #      "rusage_system"=>0.313952,
  #      "rusage_user"=>0.119981,
  #      "bytes_read"=>190619}}
  #   => nil

  def stats
    raise MemCacheError, "No active servers" unless active?
    server_stats = {}

    @servers.each do |server|
      next unless server.alive?

      with_socket_management(server) do |socket|
        value = nil
        socket.write "stats\r\n"
        stats = {}
        while line = socket.gets do
          raise_on_error_response! line
          break if line == "END\r\n"
          if line =~ /\ASTAT ([\w]+) ([\w\.\:]+)/ then
            name, value = $1, $2
            stats[name] = case name
                          when 'version'
                            value
                          when 'rusage_user', 'rusage_system' then
                            seconds, microseconds = value.split(/:/, 2)
                            microseconds ||= 0
                            Float(seconds) + (Float(microseconds) / 1_000_000)
                          else
                            if value =~ /\A\d+\Z/ then
                              value.to_i
                            else
                              value
                            end
                          end
          end
        end
        server_stats["#{server.host}:#{server.port}"] = stats
      end
    end

    raise MemCacheError, "No active servers" if server_stats.empty?
    server_stats
  end

  ##
  # Shortcut to get a value from the cache.

  alias [] get

  ##
  # Shortcut to save a value in the cache.  This method does not set an
  # expiration on the entry.  Use set to specify an explicit expiry.

  def []=(key, value)
    set key, value
  end

  protected unless $TESTING

  ##
  # Create a key for the cache, incorporating the namespace qualifier if
  # requested.

  def make_cache_key(key)
    if namespace.nil? then
      key
    else
      "#{@namespace}:#{key}"
    end
  end

  ##
  # Returns an interoperable hash value for +key+.  (I think, docs are
  # sketchy for down servers).

  def hash_for(key)
    Zlib.crc32(key)
  end

  ##
  # Pick a server to handle the request based on a hash of the key.

  def get_server_for_key(key)
    raise ArgumentError, "illegal character in key #{key.inspect}" if
      key =~ /\s/
    # raise ArgumentError, "key too long #{key.inspect}" if key.length > 250
    raise MemCacheError, "No servers available" if @servers.empty?
    return @servers.first if @servers.length == 1
    key = Digest::MD5.hexdigest(key) if key.length > 250

    hkey = hash_for(key)

    20.times do |try|
      server = binary_search(@continuum, hkey) { |e| e.value }.server
      return server if server.alive?
      hkey = hash_for "#{try}#{key}"
    end

    raise MemCacheError, "No servers available"
  end

  ##
  # Performs a raw decr for +cache_key+ from +server+.  Returns nil if not
  # found.

  def cache_decr(server, cache_key, amount)
    with_socket_management(server) do |socket|
      socket.write "decr #{cache_key} #{amount}\r\n"
      text = socket.gets
      raise_on_error_response! text
      return nil if text == "NOT_FOUND\r\n"
      return text.to_i
    end
  end

  ##
  # Fetches the raw data for +cache_key+ from +server+.  Returns nil on cache
  # miss.

  def cache_get(server, cache_key)
    with_socket_management(server) do |socket|
      socket.write "get #{cache_key}\r\n"
      keyline = socket.gets # "VALUE <key> <flags> <bytes>\r\n"

      if keyline.nil? then
        server.close
        raise MemCacheError, "lost connection to #{server.host}:#{server.port}"
      end

      raise_on_error_response! keyline
      return nil if keyline == "END\r\n"

      unless keyline =~ /(\d+)\r/ then
        server.close
        raise MemCacheError, "unexpected response #{keyline.inspect}"
      end
      value = socket.read $1.to_i
      socket.read 2 # "\r\n"
      socket.gets   # "END\r\n"
      return value
    end
  end

  ##
  # Fetches +cache_keys+ from +server+ using a multi-get.

  def cache_get_multi(server, cache_keys)
    with_socket_management(server) do |socket|
      values = {}
      socket.write "get #{cache_keys}\r\n"

      while keyline = socket.gets do
        return values if keyline == "END\r\n"
        raise_on_error_response! keyline

        unless keyline =~ /\AVALUE (.+) (.+) (.+)/ then
          server.close
          raise MemCacheError, "unexpected response #{keyline.inspect}"
        end

        key, data_length = $1, $3
        values[$1] = socket.read data_length.to_i
        socket.read(2) # "\r\n"
      end

      server.close
      raise MemCacheError, "lost connection to #{server.host}:#{server.port}" # TODO: retry here too
    end
  end

  ##
  # Performs a raw incr for +cache_key+ from +server+.  Returns nil if not
  # found.

  def cache_incr(server, cache_key, amount)
    with_socket_management(server) do |socket|
      socket.write "incr #{cache_key} #{amount}\r\n"
      text = socket.gets
      raise_on_error_response! text
      return nil if text == "NOT_FOUND\r\n"
      return text.to_i
    end
  end

  ##
  # Gets or creates a socket connected to the given server, and yields it
  # to the block, wrapped in a mutex synchronization if @multithread is true.
  #
  # If a socket error (SocketError, SystemCallError, IOError) or protocol error
  # (MemCacheError) is raised by the block, closes the socket, attempts to
  # connect again, and retries the block (once).  If an error is again raised,
  # reraises it as MemCacheError.
  #
  # If unable to connect to the server (or if in the reconnect wait period),
  # raises MemCacheError.  Note that the socket connect code marks a server
  # dead for a timeout period, so retrying does not apply to connection attempt
  # failures (but does still apply to unexpectedly lost connections etc.).

  def with_socket_management(server, &block)
    @mutex.lock if @multithread
    retried = false
    begin
      socket = server.socket

      # Raise an IndexError to show this server is out of whack. If were inside
      # a with_server block, we'll catch it and attempt to restart the operation.
      raise IndexError, "No connection to server (#{server.status})" if socket.nil?

      block.call(socket)
    rescue MemCacheError, SocketError, SystemCallError, IOError => err
      handle_error(server, err) if retried || socket.nil?
      retried = true
      retry
    end
  ensure
    @mutex.unlock if @multithread
  end

  def with_server(key)
    retried = false
    begin
      server, cache_key = request_setup(key)
      yield server, cache_key
    rescue IndexError => e
      if !retried && @servers.size > 1
        puts "Connection to server #{server.inspect} DIED! Retrying operation..."
        retried = true
        retry
      end
      handle_error(nil, e)
    end
  end

  ##
  # Handles +error+ from +server+.

  def handle_error(server, error)
    raise error if error.is_a?(MemCacheError)
    server.close if server
    new_error = MemCacheError.new error.message
    new_error.set_backtrace error.backtrace
    raise new_error
  end

  ##
  # Performs setup for making a request with +key+ from memcached.  Returns
  # the server to fetch the key from and the complete key to use.

  def request_setup(key)
    raise MemCacheError, 'No active servers' unless active?
    cache_key = make_cache_key key
    server = get_server_for_key cache_key
    return server, cache_key
  end

  def raise_on_error_response!(response)
    if response =~ /\A(?:CLIENT_|SERVER_)?ERROR(.*)/
      raise MemCacheError, $1.strip
    end
  end

  def create_continuum_for(servers)
    total_weight = servers.inject(0) { |memo, srv| memo + srv.weight }
    continuum = []

    servers.each do |server|
      entry_count_for(server, servers.size, total_weight).times do |idx|
        hash = Digest::SHA1.hexdigest("#{server.host}:#{server.port}:#{idx}")
        value = Integer("0x#{hash[0..7]}")
        continuum << ContinuumEntry.new(value, server)
      end
    end

    continuum.sort { |a, b| a.value <=> b.value }
  end

  def entry_count_for(server, total_servers, total_weight)
    ((total_servers * ContinuumEntry::POINTS_PER_SERVER * server.weight) / Float(total_weight)).floor
  end

  class ContinuumEntry
    POINTS_PER_SERVER = 160 # this is the default in libmemcached

    attr_reader :value
    attr_reader :server

    def initialize(val, srv)
      @value  = val
      @server = srv
    end

    def inspect
      "<#{value}, #{server.host}:#{server.port}>"
    end
  end

  ##
  # This class represents a memcached server instance.

  class Server

    ##
    # The amount of time to wait to establish a connection with a memcached
    # server.  If a connection cannot be established within this time limit,
    # the server will be marked as down.

    CONNECT_TIMEOUT = 0.25

    ##
    # The amount of time to wait before attempting to re-establish a
    # connection with a server that is marked dead.

    RETRY_DELAY = 30.0

    ##
    # The host the memcached server is running on.

    attr_reader :host

    ##
    # The port the memcached server is listening on.

    attr_reader :port

    ##
    # The weight given to the server.

    attr_reader :weight

    ##
    # The time of next retry if the connection is dead.

    attr_reader :retry

    ##
    # A text status string describing the state of the server.

    attr_reader :status

    attr_reader :multithread

    ##
    # Create a new MemCache::Server object for the memcached instance
    # listening on the given host and port, weighted by the given weight.

    def initialize(memcache, host, port = DEFAULT_PORT, weight = DEFAULT_WEIGHT)
      raise ArgumentError, "No host specified" if host.nil? or host.empty?
      raise ArgumentError, "No port specified" if port.nil? or port.to_i.zero?

      @host   = host
      @port   = port.to_i
      @weight = weight.to_i

      @multithread = memcache.multithread
      @mutex = Mutex.new

      @sock   = nil
      @retry  = nil
      @status = 'NOT CONNECTED'
    end

    ##
    # Return a string representation of the server object.

    def inspect
      "<MemCache::Server: %s:%d [%d] (%s)>" % [@host, @port, @weight, @status]
    end

    ##
    # Check whether the server connection is alive.  This will cause the
    # socket to attempt to connect if it isn't already connected and or if
    # the server was previously marked as down and the retry time has
    # been exceeded.

    def alive?
      !!socket
    end

    ##
    # Try to connect to the memcached server targeted by this object.
    # Returns the connected socket object on success or nil on failure.

    def socket
      @mutex.lock if @multithread
      return @sock if @sock and not @sock.closed?

      @sock = nil

      # If the host was dead, don't retry for a while.
      return if @retry and @retry > Time.now

      # Attempt to connect if not already connected.
      begin
        @sock = timeout CONNECT_TIMEOUT do
          TCPSocket.new @host, @port
        end
        if Socket.constants.include? 'TCP_NODELAY' then
          @sock.setsockopt Socket::IPPROTO_TCP, Socket::TCP_NODELAY, 1
        end
        @retry  = nil
        @status = 'CONNECTED'
      rescue SocketError, SystemCallError, IOError, Timeout::Error => err
        mark_dead err.message
      end

      return @sock
    ensure
      @mutex.unlock if @multithread
    end

    ##
    # Close the connection to the memcached server targeted by this
    # object.  The server is not considered dead.

    def close
      @mutex.lock if @multithread
      @sock.close if @sock && !@sock.closed?
      @sock   = nil
      @retry  = nil
      @status = "NOT CONNECTED"
    ensure
      @mutex.unlock if @multithread
    end

    private

    ##
    # Mark the server as dead and close its socket.

    def mark_dead(reason = "Unknown error")
      @sock.close if @sock && !@sock.closed?
      @sock   = nil
      @retry  = Time.now + RETRY_DELAY

      @status = sprintf "DEAD: %s, will retry at %s", reason, @retry
    end

  end

  ##
  # Base MemCache exception class.

  class MemCacheError < RuntimeError; end


  # Find the closest element in Array less than or equal to value.
  def binary_search(ary, value, &block)
    upper = ary.size - 1
    lower = 0
    idx = 0

    result = while(lower <= upper) do
      idx = (lower + upper) / 2
      comp = block.call(ary[idx]) <=> value

      if comp == 0
        break idx
      elsif comp > 0
        upper = idx - 1
      else
        lower = idx + 1
      end
    end
    result ? ary[result] : ary[upper]
  end
end
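For orientation, here is a minimal usage sketch for the client defined in data/lib/memcache.rb above. It is not one of the packaged files; it assumes a memcached daemon is listening on localhost:11211, and the namespace and key names are only illustrative.

  require 'memcache'

  cache = MemCache.new ['localhost:11211'], :namespace => 'my_app'

  cache.set 'greeting', 'hello', 300     # marshalled value with a 300 second expiry
  cache['color'] = 'blue'                # []= is set with no explicit expiry

  cache.get 'greeting'                   # => "hello"
  cache.get_multi 'greeting', 'color'    # => {"greeting"=>"hello", "color"=>"blue"}

  # incr/decr operate on raw (unmarshalled) integer values.
  cache.set 'hits', '0', 0, true
  cache.incr 'hits'                      # => 1

  cache.delete 'greeting'

Values pass through Marshal by default, which is why the incr and decr calls are paired with raw writes here.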