carldr-memcache-client 1.7.0

@@ -0,0 +1,193 @@
+ = 1.7.0 (2009-03-08)
+
+ * Go through the memcached protocol document and implement any commands not already implemented:
+   - cas
+   - append
+   - prepend
+   - replace
+
+   Append and prepend only work with raw data, since it makes no sense to concatenate two Marshalled
+   values together.  The cas functionality should be considered a prototype.  Since I don't have an
+   application which uses +cas+, I'm not sure what semantic sugar the API should provide.  Should it
+   retry if the value was changed?  Should it massage the returned string into true/false?  Feedback
+   would be appreciated.
+
+ * Add a fetch method, very similar to ActiveSupport::Cache::Store#fetch,
+   basically a wrapper around get and add. (djanowski)
+
+ * Implement the flush_all delay parameter, to allow a large memcached farm to be flushed gradually.
+
+ * Implement the noreply flag, which tells memcached not to reply in operations which don't
+   need a reply, i.e. set/add/delete/flush_all.
+
+ * The only known functionality not implemented anymore is the <flags> parameter to the storage
+   commands.  This would require modification of the API method signatures.  If someone can come
+   up with a clean way to implement it, I would be happy to consider including it.
+
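One answer to the open question above (should +cas+ retry if the value was changed?) can be sketched offline. The `FakeCache` class below is a hypothetical stand-in, not part of memcache-client; it only mimics the documented `"STORED\r\n"`/`"EXISTS\r\n"` replies so a retry wrapper can be exercised without a server:

```ruby
# Hypothetical sketch of "retry on EXISTS" sugar for cas. FakeCache is NOT
# part of memcache-client; it simulates the documented return strings.
class FakeCache
  attr_reader :value
  attr_accessor :conflicts

  def initialize(value)
    @value = value
    @conflicts = 0
  end

  # Mimics MemCache#cas: "STORED\r\n" on success, "EXISTS\r\n" when the
  # value was changed by someone else since it was fetched.
  def cas(_key, _expiry = 0, _raw = false)
    updated = yield @value
    if @conflicts > 0
      @conflicts -= 1
      "EXISTS\r\n"
    else
      @value = updated
      "STORED\r\n"
    end
  end
end

# The proposed sugar: re-run the block until the swap sticks, give up
# after a bounded number of attempts.
def cas_with_retry(cache, key, attempts = 5, &block)
  attempts.times do
    return true if cache.cas(key, &block) == "STORED\r\n"
  end
  false
end

cache = FakeCache.new(1)
cache.conflicts = 2                               # force two conflicts first
cas_with_retry(cache, "counter") { |v| v + 1 }    # succeeds on the third try
```

This is only one possible shape for the API; massaging the reply into true/false, as the changelog entry muses, is what the wrapper's boolean return does.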
+ = 1.6.5 (2009-02-27)
+
+ * Change memcache-client to multithreaded by default.  The mutex does not add significant
+   overhead and it is far too easy, now that Sinatra, Rails and Merb are all thread-safe, to
+   use memcache-client in a thread-unsafe manner.  Remove some unnecessary mutexing and add
+   a test to verify heavily multithreaded usage does not act unexpectedly.
+
+ * Add optional support for the SystemTimer gem when running on Ruby 1.8.x.  This gem is
+   highly recommended - it ensures timeouts actually work and halves the overhead of using
+   timeouts.  Using this gem, Ruby 1.8.x is actually faster in my performance tests
+   than Ruby 1.9.x.  Just "gem install SystemTimer" and it should be picked up automatically.
+
+ = 1.6.4 (2009-02-19)
+
+ * Remove native code altogether.  The speedup was only 10% on Ruby 1.8.6 and did not work
+   on Ruby 1.9.1.
+
+ * Removed memcache_util.rb from the distribution.  If you are using it, please copy the code
+   into your own project.  The file will live in the github repository for a few more months
+   for this purpose.  http://github.com/mperham/memcache-client/raw/7a276089aa3c914e47e3960f9740ac7377204970/lib/memcache_util.rb
+
+ * Roll continuum.rb into memcache.rb.  The project is again a single Ruby file, with no dependencies.
+
+ = 1.6.3 (2009-02-14)
+
+ * Remove gem native extension in preference to RubyInline.  This allows the gem to install
+   and work on JRuby and Ruby 1.8.5 when the native code fails to compile.
+
+ = 1.6.2 (2009-02-04)
+
+ * Validate that values are less than one megabyte in size.
+
+ * Refactor error handling in get_multi to handle server failures and return what values
+   we could successfully retrieve.
+
+ * Add optional logging parameter for debugging and tracing.
+
+ * First official release since 1.5.0.  Thanks to Eric Hodel for turning over the project to me!
+   New project home page: http://github.com/mperham/memcache-client
+
+ = 1.6.1 (2009-01-28)
+
+ * Add option to disable socket timeout support.  Socket timeout has a significant performance
+   penalty (approx 3x slower than without in Ruby 1.8.6).  You can turn off the timeouts if you
+   need absolute performance, but by default timeouts are enabled.  The performance
+   penalty is much lower in Ruby 1.8.7, 1.9 and JRuby. (mperham)
+
+ * Add option to disable server failover.  Failover can lead to "split-brain" caches that
+   return stale data. (mperham)
+
+ * Implement continuum binary search in native code for performance reasons.  Pure Ruby
+   is available for platforms like JRuby or Rubinius which can't use C extensions. (mperham)
+
+ * Fix #add with raw=true. (iamaleksey)
+
+ = 1.6.0
+
+ * Implement a consistent hashing algorithm, as described in libketama.
+   This dramatically reduces the cost of adding or removing servers dynamically
+   as keys are much more likely to map to the same server.
+
+   Take a scenario where we add a fourth server.  With a naive modulo algorithm, about
+   25% of the keys will map to the same server.  In other words, 75% of your memcached
+   content suddenly becomes invalid.  With a consistent algorithm, 75% of the keys
+   will map to the same server as before - only 25% will be invalidated. (mperham)
+
+ * Implement socket timeouts; should fix rare cases of very bad things happening
+   in production at 37signals and FiveRuns. (jseirles)
+
+ = 1.5.0.5
+
+ * Remove native C CRC32_ITU_T extension in favor of Zlib's crc32 method.
+   memcache-client is now pure Ruby again and will work with JRuby and Rubinius.
+
+ = 1.5.0.4
+
+ * Get test suite working again. (packagethief)
+ * Ruby 1.9 compatibility fixes. (packagethief, mperham)
+ * Consistently return server responses and check for errors. (packagethief)
+ * Properly calculate CRC in Ruby 1.9 strings. (mperham)
+ * Drop rspec in favor of test/unit, for 1.9 compat. (mperham)
+
+ = 1.5.0.3 (FiveRuns fork)
+
+ * Integrated ITU-T CRC32 operation in native C extension for speed.  Thanks to Justin Balthrop!
+
+ = 1.5.0.2 (FiveRuns fork)
+
+ * Add support for seamless failover between servers.  If one server connection dies,
+   the client will retry the operation on another server before giving up.
+
+ * Merge Will Bryant's socket retry patch.
+   http://willbryant.net/software/2007/12/21/ruby-memcache-client-reconnect-and-retry
+
+ = 1.5.0.1 (FiveRuns fork)
+
+ * Fix set not handling client disconnects.
+   http://dev.twitter.com/2008/02/solving-case-of-missing-updates.html
+
+ = 1.5.0
+
+ * Add MemCache#flush_all command.  Patch #13019 and bug #10503.  Patches
+   submitted by Sebastian Delmont and Rick Olson.
+ * Type-cast data returned by MemCache#stats.  Patch #10505 submitted by
+   Sebastian Delmont.
+
+ = 1.4.0
+
+ * Fix bug #10371, #set does not check response for server errors.
+   Submitted by Ben VandenBos.
+ * Fix bug #12450, set TCP_NODELAY socket option.  Patch by Chris
+   McGrath.
+ * Fix bug #10704, missing #add method.  Patch by Jamie Macey.
+ * Fix bug #10371, handle socket EOF in cache_get.  Submitted by Ben
+   VandenBos.
+
+ = 1.3.0
+
+ * Apply patch #6507, add stats command.  Submitted by Tyler Kovacs.
+ * Apply patch #6509, parallel implementation of #get_multi.  Submitted
+   by Tyler Kovacs.
+ * Validate keys.  Disallow spaces in keys or keys that are too long.
+ * Perform more validation of server responses.  MemCache now reports
+   errors if the socket was not in an expected state.  (Please file
+   bugs if you find some.)
+ * Add #incr and #decr.
+ * Add raw argument to #set and #get to retrieve #incr and #decr
+   values.
+ * Also raise MemCacheError when using Cache::get with a block.
+ * memcache.rb no longer sets $TESTING to a true value if it was
+   previously defined.  Bug #8213 by Matijs van Zuijlen.
+
+ = 1.2.1
+
+ * Fix bug #7048, MemCache#servers= referenced changed local variable.
+   Submitted by Justin Dossey.
+ * Fix bug #7049, MemCache#initialize resets @buckets.  Submitted by
+   Justin Dossey.
+ * Fix bug #6232, make Cache::get work with a block only when nil is
+   returned.  Submitted by Jon Evans.
+ * Moved to the seattlerb project.
+
+ = 1.2.0
+
+ NOTE: This version will store keys in different places than previous
+ versions!  Be prepared for some thrashing while memcached sorts itself
+ out!
+
+ * Fixed multithreaded operations, bugs 5994 and 5989.
+   Thanks to Blaine Cook, Erik Hetzner, Elliot Smith, Dave Myron (and
+   possibly others I have forgotten).
+ * Made memcache-client interoperable with other memcached libraries, bug
+   4509.  Thanks to anonymous.
+ * Added get_multi to match Perl and other APIs.
+
+ = 1.1.0
+
+ * Added some tests.
+ * Sped up non-multithreaded and multithreaded operation.
+ * More Ruby-memcache compatibility.
+ * More RDoc.
+ * Switched to Hoe.
+
+ = 1.0.0
+
+ Birthday!
+
@@ -0,0 +1,28 @@
+ Copyright 2005-2009 Bob Cottrell, Eric Hodel, Mike Perham.
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+
+   1. Redistributions of source code must retain the above copyright
+      notice, this list of conditions and the following disclaimer.
+   2. Redistributions in binary form must reproduce the above copyright
+      notice, this list of conditions and the following disclaimer in the
+      documentation and/or other materials provided with the distribution.
+   3. Neither the names of the authors nor the names of their contributors
+      may be used to endorse or promote products derived from this software
+      without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE AUTHORS ``AS IS'' AND ANY EXPRESS
+ OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+ WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ ARE DISCLAIMED.  IN NO EVENT SHALL THE AUTHORS OR CONTRIBUTORS BE
+ LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY,
+ OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT
+ OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
+ WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
+ OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE,
+ EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
@@ -0,0 +1,45 @@
+ = memcache-client
+
+ A pure Ruby library for accessing memcached.
+
+ Source:
+
+ http://github.com/mperham/memcache-client
+
+ == Installing memcache-client
+
+ Just install the gem:
+
+   $ sudo gem install memcache-client
+
+ == Using memcache-client
+
+ With one server:
+
+   CACHE = MemCache.new 'localhost:11211'
+
+ Or with multiple servers:
+
+   CACHE = MemCache.new %w[one.example.com:11211 two.example.com:11211]
+
+ == Tuning memcache-client
+
+ The MemCache.new method takes a number of options which can be useful at times.  Please
+ read the source comments there for an overview.
+
+ == Using memcache-client with Rails
+
+ Rails 2.1+ includes memcache-client out of the box.  See ActiveSupport::Cache::MemCacheStore
+ and the Rails.cache method for more details.
+
+ == Questions?
+
+ memcache-client is maintained by Mike Perham and was originally written by Bob Cottrell,
+ Eric Hodel and the seattle.rb crew.
+
+ Email:: mailto:mperham@gmail.com
+ Twitter:: mperham[http://twitter.com/mperham]
+ WWW:: http://mikeperham.com
@@ -0,0 +1,35 @@
+ # vim: syntax=Ruby
+ require 'rubygems'
+ require 'rake/rdoctask'
+ require 'rake/testtask'
+
+ task :gem do
+   sh "gem build memcache-client.gemspec"
+ end
+
+ task :install => [:gem] do
+   sh "sudo gem install memcache-client-*.gem"
+ end
+
+ task :clean do
+   sh "rm -f memcache-client-*.gem"
+ end
+
+ task :publish => [:clean, :gem, :install] do
+   require 'lib/memcache'
+   sh "rubyforge add_release seattlerb memcache-client #{MemCache::VERSION} memcache-client-#{MemCache::VERSION}.gem"
+ end
+
+ Rake::RDocTask.new do |rd|
+   rd.main = "README.rdoc"
+   rd.rdoc_files.include("README.rdoc", "lib/**/*.rb")
+   rd.rdoc_dir = 'doc'
+ end
+
+ Rake::TestTask.new
+
+ task :default => :test
+
+ task :rcov do
+   `rcov -Ilib test/*.rb`
+ end
@@ -0,0 +1,1144 @@
+ $TESTING = defined?($TESTING) && $TESTING
+
+ require 'socket'
+ require 'thread'
+ require 'zlib'
+ require 'digest/sha1'
+
+ begin
+   # Try to use the SystemTimer gem instead of Ruby's timeout library
+   # when running on something that looks like Ruby 1.8.x.  See:
+   # http://ph7spot.com/articles/system_timer
+   # We don't want to bother trying to load SystemTimer on JRuby and
+   # Ruby 1.9+.
+   if !defined?(RUBY_ENGINE)
+     require 'system_timer'
+     MemCacheTimer = SystemTimer
+   else
+     require 'timeout'
+     MemCacheTimer = Timeout
+   end
+ rescue LoadError => e
+   puts "[memcache-client] Could not load SystemTimer gem, falling back to Ruby's slower/unsafe timeout library: #{e.message}"
+   require 'timeout'
+   MemCacheTimer = Timeout
+ end
+
+ ##
+ # A Ruby client library for memcached.
+ #
+
+ class MemCache
+
+   ##
+   # The version of MemCache you are using.
+
+   VERSION = '1.7.0'
+
+   ##
+   # Default options for the cache object.
+
+   DEFAULT_OPTIONS = {
+     :namespace   => nil,
+     :readonly    => false,
+     :multithread => true,
+     :failover    => true,
+     :timeout     => 0.5,
+     :logger      => nil,
+     :no_reply    => false,
+   }
+
+   ##
+   # Default memcached port.
+
+   DEFAULT_PORT = 11211
+
+   ##
+   # Default memcached server weight.
+
+   DEFAULT_WEIGHT = 1
+
+   ##
+   # The namespace for this instance.
+
+   attr_reader :namespace
+
+   ##
+   # The multithread setting for this instance.
+
+   attr_reader :multithread
+
+   ##
+   # The servers this client talks to.  Play at your own peril.
+
+   attr_reader :servers
+
+   ##
+   # Socket timeout limit with this client, defaults to 0.5 sec.
+   # Set to nil to disable timeouts.
+
+   attr_reader :timeout
+
+   ##
+   # Should the client try to failover to another server if the
+   # first server is down?  Defaults to true.
+
+   attr_reader :failover
+
+   ##
+   # Log debug/info/warn/error to the given Logger, defaults to nil.
+
+   attr_reader :logger
+
+   ##
+   # Don't send or look for a reply from the memcached server for write operations.
+   # Please note this feature only works in memcached 1.2.5 and later.  Earlier
+   # versions will reply with "ERROR".
+
+   attr_reader :no_reply
+
+   ##
+   # Accepts a list of +servers+ and a list of +opts+.  +servers+ may be
+   # omitted.  See +servers=+ for acceptable server list arguments.
+   #
+   # Valid options for +opts+ are:
+   #
+   # [:namespace]   Prepends this value to all keys added or retrieved.
+   # [:readonly]    Raises an exception on cache writes when true.
+   # [:multithread] Wraps cache access in a Mutex for thread safety.  Defaults to true.
+   # [:failover]    Should the client try to failover to another server if the
+   #                first server is down?  Defaults to true.
+   # [:timeout]     Time to use as the socket read timeout.  Defaults to 0.5 sec;
+   #                set to nil to disable timeouts (this is a major performance
+   #                penalty in Ruby 1.8; "gem install SystemTimer" to remove most
+   #                of the penalty).
+   # [:logger]      Logger to use for info/debug output, defaults to nil.
+   # [:no_reply]    Don't bother looking for a reply for write operations (i.e. they
+   #                become 'fire and forget'); memcached 1.2.5 and later only, speeds up
+   #                set/add/delete/incr/decr significantly.
+   #
+   # Other options are ignored.
+
+   def initialize(*args)
+     servers = []
+     opts = {}
+
+     case args.length
+     when 0 then # NOP
+     when 1 then
+       arg = args.shift
+       case arg
+       when Hash   then opts = arg
+       when Array  then servers = arg
+       when String then servers = [arg]
+       else raise ArgumentError, 'first argument must be Array, Hash or String'
+       end
+     when 2 then
+       servers, opts = args
+     else
+       raise ArgumentError, "wrong number of arguments (#{args.length} for 2)"
+     end
+
+     opts = DEFAULT_OPTIONS.merge opts
+     @namespace   = opts[:namespace]
+     @readonly    = opts[:readonly]
+     @multithread = opts[:multithread]
+     @timeout     = opts[:timeout]
+     @failover    = opts[:failover]
+     @logger      = opts[:logger]
+     @no_reply    = opts[:no_reply]
+     @mutex       = Mutex.new if @multithread
+
+     logger.info { "memcache-client #{VERSION} #{Array(servers).inspect}" } if logger
+
+     Thread.current[:memcache_client] = self.object_id if !@multithread
+
+     self.servers = servers
+   end
+
+   ##
+   # Returns a string representation of the cache object.
+
+   def inspect
+     "<MemCache: %d servers, ns: %p, ro: %p>" %
+       [@servers.length, @namespace, @readonly]
+   end
+
+   ##
+   # Returns whether there is at least one active server for the object.
+
+   def active?
+     not @servers.empty?
+   end
+
+   ##
+   # Returns whether or not the cache object was created read only.
+
+   def readonly?
+     @readonly
+   end
+
+   ##
+   # Set the servers that the requests will be distributed between.  Entries
+   # can be either strings of the form "hostname:port" or
+   # "hostname:port:weight" or MemCache::Server objects.
+   #
+   def servers=(servers)
+     # Create the server objects.
+     @servers = Array(servers).collect do |server|
+       case server
+       when String
+         host, port, weight = server.split ':', 3
+         port ||= DEFAULT_PORT
+         weight ||= DEFAULT_WEIGHT
+         Server.new self, host, port, weight
+       else
+         server
+       end
+     end
+
+     logger.debug { "Servers now: #{@servers.inspect}" } if logger
+
+     # There's no point in doing this if there's only one server
+     @continuum = create_continuum_for(@servers) if @servers.size > 1
+
+     @servers
+   end
+
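The continuum built above exists to avoid naive modulo bucketing. The 1.6.0 changelog's claim that a plain `hash % N` scheme remaps about 75% of keys when growing from three to four servers can be checked with a short standalone sketch (plain Ruby, not library code; the key names are made up):

```ruby
# Sketch (not library code): why servers= builds a ketama-style continuum
# instead of hashing keys with a plain modulo. With `hash % N`, growing from
# 3 to 4 servers moves roughly 3 out of 4 keys to a different server,
# invalidating most of the cache at once.
require 'zlib'

keys = (1..10_000).map { |i| "user:#{i}" }

moved = keys.count do |key|
  h = Zlib.crc32(key)
  (h % 3) != (h % 4)   # bucket changes when the server count changes
end

fraction_moved = moved.to_f / keys.size
puts format("modulo remap: %.0f%% of keys moved", fraction_moved * 100)
# roughly 75%, matching the changelog's figure
```

With a consistent continuum, only the keys that fall on the new server's slice move, so the expected loss is closer to 1/N instead of (N-1)/N.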
+   ##
+   # Decrements the value for +key+ by +amount+ and returns the new value.
+   # +key+ must already exist.  If the value is not an integer, it is assumed
+   # to be 0.  The value can not be decremented below 0.
+
+   def decr(key, amount = 1)
+     raise MemCacheError, "Update of readonly cache" if @readonly
+     with_server(key) do |server, cache_key|
+       cache_decr server, cache_key, amount
+     end
+   rescue TypeError => err
+     handle_error nil, err
+   end
+
+   ##
+   # Retrieves +key+ from memcache.  If +raw+ is false, the value will be
+   # unmarshalled.
+
+   def get(key, raw = false)
+     with_server(key) do |server, cache_key|
+       value = cache_get server, cache_key
+       logger.debug { "get #{key} from #{server.inspect}: #{value ? value.to_s.size : 'nil'}" } if logger
+       return nil if value.nil?
+       value = Marshal.load value unless raw
+       return value
+     end
+   rescue TypeError => err
+     handle_error nil, err
+   end
+
+   ##
+   # Performs a +get+ with the given +key+.  If
+   # the value does not exist and a block was given,
+   # the block will be called and the result saved via +add+.
+   #
+   # If you do not provide a block, using this
+   # method is the same as using +get+.
+   #
+   def fetch(key, expiry = 0, raw = false)
+     value = get(key, raw)
+
+     if value.nil? && block_given?
+       value = yield
+       add(key, value, expiry, raw)
+     end
+
+     value
+   end
+
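The read-through behavior of fetch can be demonstrated without a memcached server. `TinyCache` below is a hypothetical in-memory stand-in, not part of memcache-client; it exposes only the get/add subset that fetch relies on, with fetch's own logic copied verbatim:

```ruby
# Offline sketch of MemCache#fetch's read-through behavior. TinyCache is a
# hypothetical in-memory stand-in; it is NOT part of memcache-client.
class TinyCache
  def initialize
    @store = {}
  end

  def get(key, raw = false)
    @store[key]
  end

  # Like memcached's "add": only stores when the key is absent.
  def add(key, value, expiry = 0, raw = false)
    @store[key] = value unless @store.key?(key)
    "STORED\r\n"
  end

  # Same logic as MemCache#fetch: return the cached value, or on a miss
  # run the block, store its result via add, and return it.
  def fetch(key, expiry = 0, raw = false)
    value = get(key, raw)
    if value.nil? && block_given?
      value = yield
      add(key, value, expiry, raw)
    end
    value
  end
end

cache = TinyCache.new
calls = 0
2.times { cache.fetch("pi") { calls += 1; 3.14159 } }
# the block ran only once; the second fetch was a cache hit
```

Note that, as with ActiveSupport's fetch, a stored nil is indistinguishable from a miss, so the block re-runs for keys whose cached value is nil.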
+   ##
+   # Retrieves multiple values from memcached in parallel, if possible.
+   #
+   # The memcached protocol supports the ability to retrieve multiple
+   # keys in a single request.  Pass in an array of keys to this method
+   # and it will:
+   #
+   # 1. map the key to the appropriate memcached server
+   # 2. send a single request to each server that has one or more key values
+   #
+   # Returns a hash of values.
+   #
+   #   cache["a"] = 1
+   #   cache["b"] = 2
+   #   cache.get_multi "a", "b" # => { "a" => 1, "b" => 2 }
+   #
+   # Note that get_multi assumes the values are marshalled.
+   def get_multi(*keys)
+     raise MemCacheError, 'No active servers' unless active?
+
+     keys.flatten!
+     key_count = keys.length
+     cache_keys = {}
+     server_keys = Hash.new { |h, k| h[k] = [] }
+
+     # map keys to servers
+     keys.each do |key|
+       server, cache_key = request_setup key
+       cache_keys[cache_key] = key
+       server_keys[server] << cache_key
+     end
+
+     results = {}
+
+     server_keys.each do |server, keys_for_server|
+       keys_for_server_str = keys_for_server.join ' '
+       begin
+         values = cache_get_multi server, keys_for_server_str
+         values.each do |key, value|
+           results[cache_keys[key]] = Marshal.load value
+         end
+       rescue IndexError => e
+         # Ignore this server and try the others
+         logger.warn { "Unable to retrieve #{keys_for_server.size} elements from #{server.inspect}: #{e.message}" } if logger
+       end
+     end
+
+     return results
+   rescue TypeError => err
+     handle_error nil, err
+   end
+
+   ##
+   # Increments the value for +key+ by +amount+ and returns the new value.
+   # +key+ must already exist.  If the value is not an integer, it is
+   # assumed to be 0.
+
+   def incr(key, amount = 1)
+     raise MemCacheError, "Update of readonly cache" if @readonly
+     with_server(key) do |server, cache_key|
+       cache_incr server, cache_key, amount
+     end
+   rescue TypeError => err
+     handle_error nil, err
+   end
+
+   ##
+   # Add +key+ to the cache with value +value+ that expires in +expiry+
+   # seconds.  If +raw+ is true, +value+ will not be Marshalled.
+   #
+   # Warning: Readers should not call this method in the event of a cache miss;
+   # see MemCache#add.
+
+   ONE_MB = 1024 * 1024
+
+   def set(key, value, expiry = 0, raw = false)
+     raise MemCacheError, "Update of readonly cache" if @readonly
+     with_server(key) do |server, cache_key|
+
+       value = Marshal.dump value unless raw
+       data = value.to_s
+       logger.debug { "set #{key} to #{server.inspect}: #{data.size}" } if logger
+
+       raise MemCacheError, "Value too large, memcached can only store 1MB of data per key" if data.size > ONE_MB
+
+       command = "set #{cache_key} 0 #{expiry} #{data.size}#{noreply}\r\n#{data}\r\n"
+
+       with_socket_management(server) do |socket|
+         socket.write command
+         break nil if @no_reply
+         result = socket.gets
+         raise_on_error_response! result
+
+         if result.nil?
+           server.close
+           raise MemCacheError, "lost connection to #{server.host}:#{server.port}"
+         end
+
+         result
+       end
+     end
+   end
+
+   ##
+   # "cas" is a check and set operation which means "store this data but
+   # only if no one else has updated since I last fetched it."  This can
+   # be used as a form of optimistic locking.
+   #
+   # Works in block form like so:
+   #   cache.cas('some-key') do |value|
+   #     value + 1
+   #   end
+   #
+   # Returns:
+   # +nil+ if the value was not found on the memcached server.
+   # +STORED+ if the value was updated successfully.
+   # +EXISTS+ if the value was updated by someone else since the last fetch.
+
+   def cas(key, expiry = 0, raw = false)
+     raise MemCacheError, "Update of readonly cache" if @readonly
+     raise MemCacheError, "A block is required" unless block_given?
+
+     (value, token) = gets(key, raw)
+     return nil unless value
+     updated = yield value
+
+     with_server(key) do |server, cache_key|
+       value = raw ? updated : Marshal.dump(updated)
+       data = value.to_s
+       logger.debug { "cas #{key} to #{server.inspect}: #{data.size}" } if logger
+
+       command = "cas #{cache_key} 0 #{expiry} #{data.size} #{token}#{noreply}\r\n#{data}\r\n"
+
+       with_socket_management(server) do |socket|
+         socket.write command
+         break nil if @no_reply
+         result = socket.gets
+         raise_on_error_response! result
+
+         if result.nil?
+           server.close
+           raise MemCacheError, "lost connection to #{server.host}:#{server.port}"
+         end
+
+         result
+       end
+     end
+   end
+
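The wire format assembled in cas can be made concrete with a small sketch. All values here are made up (the token is the cas unique that a real `gets` call would return); this is only the string-building step, with no server involved:

```ruby
# Sketch: the cas command line as assembled above, with hypothetical values.
# The token echoes back the "cas unique" from gets so the server can detect
# intervening writes; an empty noreply suffix means a reply is expected.
cache_key = "app:counter"   # hypothetical namespaced key
expiry    = 0
token     = 42              # hypothetical cas unique returned by gets
data      = "7"
noreply   = ""              # would be " noreply" if no_reply were set

command = "cas #{cache_key} 0 #{expiry} #{data.size} #{token}#{noreply}\r\n#{data}\r\n"
command  # => "cas app:counter 0 0 1 42\r\n7\r\n"
```

The `0` after the key is the flags field, which, as the 1.7.0 changelog notes, this client does not currently expose.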
+   ##
+   # Add +key+ to the cache with value +value+ that expires in +expiry+
+   # seconds, but only if +key+ does not already exist in the cache.
+   # If +raw+ is true, +value+ will not be Marshalled.
+   #
+   # Readers should call this method in the event of a cache miss, not
+   # MemCache#set.
+
+   def add(key, value, expiry = 0, raw = false)
+     raise MemCacheError, "Update of readonly cache" if @readonly
+     with_server(key) do |server, cache_key|
+       value = Marshal.dump value unless raw
+       logger.debug { "add #{key} to #{server}: #{value ? value.to_s.size : 'nil'}" } if logger
+       command = "add #{cache_key} 0 #{expiry} #{value.to_s.size}#{noreply}\r\n#{value}\r\n"
+
+       with_socket_management(server) do |socket|
+         socket.write command
+         break nil if @no_reply
+         result = socket.gets
+         raise_on_error_response! result
+         result
+       end
+     end
+   end
+
+   ##
+   # Add +key+ to the cache with value +value+ that expires in +expiry+
+   # seconds, but only if +key+ already exists in the cache.
+   # If +raw+ is true, +value+ will not be Marshalled.
+
+   def replace(key, value, expiry = 0, raw = false)
+     raise MemCacheError, "Update of readonly cache" if @readonly
+     with_server(key) do |server, cache_key|
+       value = Marshal.dump value unless raw
+       logger.debug { "replace #{key} to #{server}: #{value ? value.to_s.size : 'nil'}" } if logger
+       command = "replace #{cache_key} 0 #{expiry} #{value.to_s.size}#{noreply}\r\n#{value}\r\n"
+
+       with_socket_management(server) do |socket|
+         socket.write command
+         break nil if @no_reply
+         result = socket.gets
+         raise_on_error_response! result
+         result
+       end
+     end
+   end
+
+   ##
+   # Append - 'add this data to an existing key after existing data'.
+   # Please note the value is always passed to memcached as raw since it
+   # doesn't make a lot of sense to concatenate marshalled data together.
+
+   def append(key, value)
+     raise MemCacheError, "Update of readonly cache" if @readonly
+     with_server(key) do |server, cache_key|
+       logger.debug { "append #{key} to #{server}: #{value ? value.to_s.size : 'nil'}" } if logger
+       command = "append #{cache_key} 0 0 #{value.to_s.size}#{noreply}\r\n#{value}\r\n"
+
+       with_socket_management(server) do |socket|
+         socket.write command
+         break nil if @no_reply
+         result = socket.gets
+         raise_on_error_response! result
+         result
+       end
+     end
+   end
+
+   ##
+   # Prepend - 'add this data to an existing key before existing data'.
+   # Please note the value is always passed to memcached as raw since it
+   # doesn't make a lot of sense to concatenate marshalled data together.
+
+   def prepend(key, value)
+     raise MemCacheError, "Update of readonly cache" if @readonly
+     with_server(key) do |server, cache_key|
+       logger.debug { "prepend #{key} to #{server}: #{value ? value.to_s.size : 'nil'}" } if logger
+       command = "prepend #{cache_key} 0 0 #{value.to_s.size}#{noreply}\r\n#{value}\r\n"
+
+       with_socket_management(server) do |socket|
+         socket.write command
+         break nil if @no_reply
+         result = socket.gets
+         raise_on_error_response! result
+         result
+       end
+     end
+   end
+
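The raw-only restriction on append and prepend, noted in the comments above, can be shown in isolation: concatenating two Marshal dumps does not produce a dump of the concatenation, while raw strings concatenate meaningfully.

```ruby
# Sketch of why append/prepend always send raw data: Marshal.load reads a
# single object and ignores trailing bytes, so gluing two dumps together
# silently loses the second value.
a = Marshal.dump("foo")
b = Marshal.dump("bar")

Marshal.load(a + b)   # decodes only the first object; "bar" is lost

# Raw strings concatenate meaningfully, which is exactly what memcached's
# append/prepend commands do to the stored bytes server-side.
"foo" + "bar"         # => "foobar"
```

This is why a value intended for append/prepend should be written with `raw = true` in the first place, so the stored bytes are plain text rather than a marshal stream.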
+   ##
+   # Removes +key+ from the cache in +expiry+ seconds.
+
+   def delete(key, expiry = 0)
+     raise MemCacheError, "Update of readonly cache" if @readonly
+     with_server(key) do |server, cache_key|
+       with_socket_management(server) do |socket|
+         logger.debug { "delete #{cache_key} on #{server}" } if logger
+         socket.write "delete #{cache_key} #{expiry}#{noreply}\r\n"
+         break nil if @no_reply
+         result = socket.gets
+         raise_on_error_response! result
+         result
+       end
+     end
+   end
+
+   ##
+   # Flush the cache from all memcache servers.
+   # A non-zero value for +delay+ will ensure that the flush
+   # is propagated slowly through your memcached server farm.
+   # The Nth server will be flushed N*delay seconds from now,
+   # asynchronously, so this method returns quickly.
+   # This prevents a huge database spike due to a total
+   # flush all at once.
+
+   def flush_all(delay = 0)
+     raise MemCacheError, 'No active servers' unless active?
+     raise MemCacheError, "Update of readonly cache" if @readonly
+
+     begin
+       delay_time = 0
+       @servers.each do |server|
+         with_socket_management(server) do |socket|
+           logger.debug { "flush_all #{delay_time} on #{server}" } if logger
+           socket.write "flush_all #{delay_time}#{noreply}\r\n"
+           break nil if @no_reply
+           result = socket.gets
+           raise_on_error_response! result
+           result
+         end
+         delay_time += delay
+       end
+     rescue IndexError => err
+       handle_error nil, err
+     end
+   end
+
+   ##
+   # Reset the connection to all memcache servers.  This should be called if
+   # there is a problem with a cache lookup that might have left the connection
+   # in a corrupted state.
+
+   def reset
+     @servers.each { |server| server.close }
+   end
+
+   ##
+   # Returns statistics for each memcached server.  An explanation of the
+   # statistics can be found in the memcached docs:
+   #
+   # http://code.sixapart.com/svn/memcached/trunk/server/doc/protocol.txt
+   #
+   # Example:
+   #
+   #   >> pp CACHE.stats
+   #   {"localhost:11211"=>
+   #     {"bytes"=>4718,
+   #      "pid"=>20188,
+   #      "connection_structures"=>4,
+   #      "time"=>1162278121,
+   #      "pointer_size"=>32,
+   #      "limit_maxbytes"=>67108864,
+   #      "cmd_get"=>14532,
+   #      "version"=>"1.2.0",
+   #      "bytes_written"=>432583,
+   #      "cmd_set"=>32,
+   #      "get_misses"=>0,
+   #      "total_connections"=>19,
+   #      "curr_connections"=>3,
+   #      "curr_items"=>4,
+   #      "uptime"=>1557,
+   #      "get_hits"=>14532,
+   #      "total_items"=>32,
+   #      "rusage_system"=>0.313952,
+   #      "rusage_user"=>0.119981,
+   #      "bytes_read"=>190619}}
+   #   => nil
+
580
+ def stats
581
+ raise MemCacheError, "No active servers" unless active?
582
+ server_stats = {}
583
+
584
+ @servers.each do |server|
585
+ next unless server.alive?
586
+
587
+ with_socket_management(server) do |socket|
588
+ value = nil
589
+ socket.write "stats\r\n"
590
+ stats = {}
591
+ while line = socket.gets do
592
+ raise_on_error_response! line
593
+ break if line == "END\r\n"
594
+ if line =~ /\ASTAT ([\S]+) ([\w\.\:]+)/ then
595
+ name, value = $1, $2
596
+ stats[name] = case name
597
+ when 'version'
598
+ value
599
+ when 'rusage_user', 'rusage_system' then
600
+ seconds, microseconds = value.split(/:/, 2)
601
+ microseconds ||= 0
602
+ Float(seconds) + (Float(microseconds) / 1_000_000)
603
+ else
604
+ if value =~ /\A\d+\Z/ then
605
+ value.to_i
606
+ else
607
+ value
608
+ end
609
+ end
610
+ end
611
+ end
612
+ server_stats["#{server.host}:#{server.port}"] = stats
613
+ end
614
+ end
615
+
616
+ raise MemCacheError, "No active servers" if server_stats.empty?
617
+ server_stats
618
+ end
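The STAT-line handling inside +stats+ can be exercised on its own. The sketch below (with a hypothetical `parse_stat` helper, not part of the library) mirrors the parsing above: `rusage_*` values of the form `seconds:microseconds` become floats, plain integers are coerced, and `version` stays a string.

```ruby
# Hypothetical helper mirroring the STAT parsing inside MemCache#stats.
def parse_stat(line)
  return nil unless line =~ /\ASTAT ([\S]+) ([\w\.\:]+)/
  name, value = $1, $2
  parsed = case name
           when 'version'
             value # keep "1.2.0" as a string
           when 'rusage_user', 'rusage_system'
             seconds, microseconds = value.split(/:/, 2)
             microseconds ||= 0
             Float(seconds) + (Float(microseconds) / 1_000_000)
           else
             value =~ /\A\d+\Z/ ? value.to_i : value
           end
  [name, parsed]
end

p parse_stat("STAT rusage_user 0:119981\r\n") # => ["rusage_user", 0.119981]
p parse_stat("STAT curr_items 4\r\n")         # => ["curr_items", 4]
p parse_stat("STAT version 1.2.0\r\n")        # => ["version", "1.2.0"]
```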
619
+
+   ##
+   # Shortcut to get a value from the cache.
+
+   alias [] get
+
+   ##
+   # Shortcut to save a value in the cache. This method does not set an
+   # expiration on the entry. Use set to specify an explicit expiry.
+
+   def []=(key, value)
+     set key, value
+   end
+
+   protected unless $TESTING
+
+   ##
+   # Create a key for the cache, incorporating the namespace qualifier if
+   # requested.
+
+   def make_cache_key(key)
+     if namespace.nil? then
+       key
+     else
+       "#{@namespace}:#{key}"
+     end
+   end
+
+   ##
+   # Returns an interoperable hash value for +key+, using CRC32 so keys map
+   # to the same servers as other memcached client libraries.
+
+   def hash_for(key)
+     Zlib.crc32(key)
+   end
+
+   ##
+   # Pick a server to handle the request based on a hash of the key.
+
+   def get_server_for_key(key, options = {})
+     raise ArgumentError, "illegal character in key #{key.inspect}" if
+       key =~ /\s/
+     raise ArgumentError, "key too long #{key.inspect}" if key.length > 250
+     raise MemCacheError, "No servers available" if @servers.empty?
+     return @servers.first if @servers.length == 1
+
+     hkey = hash_for(key)
+
+     20.times do |try|
+       entryidx = Continuum.binary_search(@continuum, hkey)
+       server = @continuum[entryidx].server
+       return server if server.alive?
+       break unless failover
+       hkey = hash_for "#{try}#{key}"
+     end
+
+     raise MemCacheError, "No servers available"
+   end
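The failover loop above rehashes the key with the attempt number prefixed, so each retry lands on a different point of the continuum. A minimal sketch of that rehashing, using a hypothetical key:

```ruby
require 'zlib'

# Sketch of the failover rehash in get_server_for_key: each retry hashes
# the attempt number prefixed to the key, moving it to a new continuum point.
key  = 'session:123' # hypothetical key
hkey = Zlib.crc32(key)
rehashes = (0...3).map { |try| Zlib.crc32("#{try}#{key}") }
p hkey
p rehashes # three fresh positions, one per retry
```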
677
+
+   ##
+   # Performs a raw decr for +cache_key+ from +server+. Returns nil if not
+   # found.
+
+   def cache_decr(server, cache_key, amount)
+     with_socket_management(server) do |socket|
+       socket.write "decr #{cache_key} #{amount}#{noreply}\r\n"
+       break nil if @no_reply
+       text = socket.gets
+       raise_on_error_response! text
+       return nil if text == "NOT_FOUND\r\n"
+       return text.to_i
+     end
+   end
+
+   ##
+   # Fetches the raw data for +cache_key+ from +server+. Returns nil on cache
+   # miss.
+
+   def cache_get(server, cache_key)
+     with_socket_management(server) do |socket|
+       socket.write "get #{cache_key}\r\n"
+       keyline = socket.gets # "VALUE <key> <flags> <bytes>\r\n"
+
+       if keyline.nil? then
+         server.close
+         raise MemCacheError, "lost connection to #{server.host}:#{server.port}"
+       end
+
+       raise_on_error_response! keyline
+       return nil if keyline == "END\r\n"
+
+       unless keyline =~ /(\d+)\r/ then
+         server.close
+         raise MemCacheError, "unexpected response #{keyline.inspect}"
+       end
+       value = socket.read $1.to_i
+       socket.read 2 # "\r\n"
+       socket.gets   # "END\r\n"
+       return value
+     end
+   end
+
+   ##
+   # Fetches the value and CAS token for +key+, using the memcached +gets+
+   # command. Returns nil on cache miss.
+
+   def gets(key, raw = false)
+     with_server(key) do |server, cache_key|
+       logger.debug { "gets #{key} from #{server.inspect}" } if logger
+       result = with_socket_management(server) do |socket|
+         socket.write "gets #{cache_key}\r\n"
+         keyline = socket.gets # "VALUE <key> <flags> <bytes> <cas token>\r\n"
+
+         if keyline.nil? then
+           server.close
+           raise MemCacheError, "lost connection to #{server.host}:#{server.port}"
+         end
+
+         raise_on_error_response! keyline
+         return nil if keyline == "END\r\n"
+
+         unless keyline =~ /(\d+) (\w+)\r/ then
+           server.close
+           raise MemCacheError, "unexpected response #{keyline.inspect}"
+         end
+         value = socket.read $1.to_i
+         socket.read 2 # "\r\n"
+         socket.gets   # "END\r\n"
+         [value, $2]
+       end
+       result[0] = Marshal.load result[0] unless raw
+       result
+     end
+   rescue TypeError => err
+     handle_error nil, err
+   end
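The +gets+ reply header carries both the byte count and the CAS token, and the regex above pulls out the last two fields. A standalone sketch with a hypothetical server reply:

```ruby
# Hypothetical reply header for a `gets`; the regex is the one gets uses
# to extract the byte count and CAS token from the last two fields.
keyline = "VALUE foo 0 3 42\r\n" # "VALUE <key> <flags> <bytes> <cas token>\r\n"
if keyline =~ /(\d+) (\w+)\r/
  bytes, cas_token = $1.to_i, $2
  p bytes     # => 3
  p cas_token # => "42"
end
```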
751
+
+   ##
+   # Fetches +cache_keys+ from +server+ using a multi-get.
+
+   def cache_get_multi(server, cache_keys)
+     with_socket_management(server) do |socket|
+       values = {}
+       socket.write "get #{cache_keys}\r\n"
+
+       while keyline = socket.gets do
+         return values if keyline == "END\r\n"
+         raise_on_error_response! keyline
+
+         unless keyline =~ /\AVALUE (.+) (.+) (.+)/ then
+           server.close
+           raise MemCacheError, "unexpected response #{keyline.inspect}"
+         end
+
+         key, data_length = $1, $3
+         values[key] = socket.read data_length.to_i
+         socket.read(2) # "\r\n"
+       end
+
+       server.close
+       raise MemCacheError, "lost connection to #{server.host}:#{server.port}" # TODO: retry here too
+     end
+   end
+
+   ##
+   # Performs a raw incr for +cache_key+ from +server+. Returns nil if not
+   # found.
+
+   def cache_incr(server, cache_key, amount)
+     with_socket_management(server) do |socket|
+       socket.write "incr #{cache_key} #{amount}#{noreply}\r\n"
+       break nil if @no_reply
+       text = socket.gets
+       raise_on_error_response! text
+       return nil if text == "NOT_FOUND\r\n"
+       return text.to_i
+     end
+   end
+
+   ##
+   # Gets or creates a socket connected to the given server, and yields it
+   # to the block, wrapped in a mutex synchronization if @multithread is true.
+   #
+   # If a socket error (SocketError, SystemCallError, IOError) or protocol error
+   # (MemCacheError) is raised by the block, closes the socket, attempts to
+   # connect again, and retries the block (once). If an error is again raised,
+   # reraises it as MemCacheError.
+   #
+   # If unable to connect to the server (or if in the reconnect wait period),
+   # raises MemCacheError. Note that the socket connect code marks a server
+   # dead for a timeout period, so retrying does not apply to connection attempt
+   # failures (but does still apply to unexpectedly lost connections etc.).
+
+   def with_socket_management(server, &block)
+     check_multithread_status!
+
+     @mutex.lock if @multithread
+     retried = false
+
+     begin
+       socket = server.socket
+
+       # Raise an IndexError to show this server is out of whack. If we're
+       # inside a with_server block, we'll catch it and attempt to restart
+       # the operation.
+
+       raise IndexError, "No connection to server (#{server.status})" if socket.nil?
+
+       block.call(socket)
+
+     rescue SocketError, Timeout::Error => err
+       logger.warn { "Socket failure: #{err.message}" } if logger
+       server.mark_dead(err)
+       handle_error(server, err)
+
+     rescue MemCacheError, SystemCallError, IOError => err
+       logger.warn { "Generic failure: #{err.class.name}: #{err.message}" } if logger
+       handle_error(server, err) if retried || socket.nil?
+       retried = true
+       retry
+     end
+   ensure
+     @mutex.unlock if @multithread
+   end
+
+   def with_server(key)
+     retried = false
+     begin
+       server, cache_key = request_setup(key)
+       yield server, cache_key
+     rescue IndexError => e
+       logger.warn { "Server failed: #{e.class.name}: #{e.message}" } if logger
+       if !retried && @servers.size > 1
+         logger.info { "Connection to server #{server.inspect} DIED! Retrying operation..." } if logger
+         retried = true
+         retry
+       end
+       handle_error(nil, e)
+     end
+   end
+
+   ##
+   # Handles +error+ from +server+.
+
+   def handle_error(server, error)
+     raise error if error.is_a?(MemCacheError)
+     server.close if server
+     new_error = MemCacheError.new error.message
+     new_error.set_backtrace error.backtrace
+     raise new_error
+   end
+
+   def noreply
+     @no_reply ? ' noreply' : ''
+   end
+
+   ##
+   # Performs setup for making a request with +key+ from memcached. Returns
+   # the server to fetch the key from and the complete key to use.
+
+   def request_setup(key)
+     raise MemCacheError, 'No active servers' unless active?
+     cache_key = make_cache_key key
+     server = get_server_for_key cache_key
+     return server, cache_key
+   end
+
+   def raise_on_error_response!(response)
+     if response =~ /\A(?:CLIENT_|SERVER_)?ERROR(.*)/
+       raise MemCacheError, $1.strip
+     end
+   end
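The error-line regex accepts the bare `ERROR`, `CLIENT_ERROR`, and `SERVER_ERROR` forms and raises with the prefix and trailing whitespace stripped. A standalone sketch, with RuntimeError standing in for MemCacheError:

```ruby
# Standalone sketch of raise_on_error_response!, with RuntimeError
# standing in for MemCacheError.
def raise_on_error_response!(response)
  if response =~ /\A(?:CLIENT_|SERVER_)?ERROR(.*)/
    raise RuntimeError, $1.strip
  end
end

begin
  raise_on_error_response! "CLIENT_ERROR bad data chunk\r\n"
rescue RuntimeError => e
  p e.message # => "bad data chunk"
end
```

Non-error lines such as `STORED\r\n` fall through and return nil.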
887
+
+   def create_continuum_for(servers)
+     total_weight = servers.inject(0) { |memo, srv| memo + srv.weight }
+     continuum = []
+
+     servers.each do |server|
+       entry_count_for(server, servers.size, total_weight).times do |idx|
+         hash = Digest::SHA1.hexdigest("#{server.host}:#{server.port}:#{idx}")
+         value = Integer("0x#{hash[0..7]}")
+         continuum << Continuum::Entry.new(value, server)
+       end
+     end
+
+     continuum.sort { |a, b| a.value <=> b.value }
+   end
+
+   def entry_count_for(server, total_servers, total_weight)
+     ((total_servers * Continuum::POINTS_PER_SERVER * server.weight) / Float(total_weight)).floor
+   end
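Point counts on the continuum are split in proportion to server weight while the total stays near `total_servers * POINTS_PER_SERVER`. A sketch of the arithmetic with plain integers instead of Server objects, for a hypothetical pair of servers where one is weighted 3x:

```ruby
POINTS_PER_SERVER = 160 # same default as Continuum::POINTS_PER_SERVER

# Sketch of entry_count_for: 2 servers, weights 1 and 3, total_weight 4.
def entry_count_for(weight, total_servers, total_weight)
  ((total_servers * POINTS_PER_SERVER * weight) / Float(total_weight)).floor
end

p entry_count_for(1, 2, 4) # => 80
p entry_count_for(3, 2, 4) # => 240
```

Together the two servers get 320 points, i.e. 2 * 160, with the heavier server owning three quarters of the ring.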
906
+
+   def check_multithread_status!
+     return if @multithread
+
+     if Thread.current[:memcache_client] != self.object_id
+       raise MemCacheError, <<-EOM
+         You are accessing this memcache-client instance from multiple threads but have not enabled multithread support.
+         Normally: MemCache.new(['localhost:11211'], :multithread => true)
+         In Rails: config.cache_store = [:mem_cache_store, 'localhost:11211', { :multithread => true }]
+       EOM
+     end
+   end
+
+   ##
+   # This class represents a memcached server instance.
+
+   class Server
+
+     ##
+     # The amount of time to wait to establish a connection with a memcached
+     # server. If a connection cannot be established within this time limit,
+     # the server will be marked as down.
+
+     CONNECT_TIMEOUT = 0.25
+
+     ##
+     # The amount of time to wait before attempting to re-establish a
+     # connection with a server that is marked dead.
+
+     RETRY_DELAY = 30.0
+
+     ##
+     # The host the memcached server is running on.
+
+     attr_reader :host
+
+     ##
+     # The port the memcached server is listening on.
+
+     attr_reader :port
+
+     ##
+     # The weight given to the server.
+
+     attr_reader :weight
+
+     ##
+     # The time of next retry if the connection is dead.
+
+     attr_reader :retry
+
+     ##
+     # A text status string describing the state of the server.
+
+     attr_reader :status
+
+     attr_reader :logger
+
+     ##
+     # Create a new MemCache::Server object for the memcached instance
+     # listening on the given host and port, weighted by the given weight.
+
+     def initialize(memcache, host, port = DEFAULT_PORT, weight = DEFAULT_WEIGHT)
+       raise ArgumentError, "No host specified" if host.nil? or host.empty?
+       raise ArgumentError, "No port specified" if port.nil? or port.to_i.zero?
+
+       @host   = host
+       @port   = port.to_i
+       @weight = weight.to_i
+
+       @sock    = nil
+       @retry   = nil
+       @status  = 'NOT CONNECTED'
+       @timeout = memcache.timeout
+       @logger  = memcache.logger
+     end
+
+     ##
+     # Return a string representation of the server object.
+
+     def inspect
+       "<MemCache::Server: %s:%d [%d] (%s)>" % [@host, @port, @weight, @status]
+     end
+
+     ##
+     # Check whether the server connection is alive. This will cause the
+     # socket to attempt to connect if it isn't already connected, or if
+     # the server was previously marked as down and the retry time has
+     # been exceeded.
+
+     def alive?
+       !!socket
+     end
+
+     ##
+     # Try to connect to the memcached server targeted by this object.
+     # Returns the connected socket object on success or nil on failure.
+
+     def socket
+       return @sock if @sock and not @sock.closed?
+
+       @sock = nil
+
+       # If the host was dead, don't retry for a while.
+       return if @retry and @retry > Time.now
+
+       # Attempt to connect if not already connected.
+       begin
+         @sock = @timeout ? TCPTimeoutSocket.new(@host, @port, @timeout) : TCPSocket.new(@host, @port)
+
+         if Socket.constants.include? 'TCP_NODELAY' then
+           @sock.setsockopt Socket::IPPROTO_TCP, Socket::TCP_NODELAY, 1
+         end
+         @retry  = nil
+         @status = 'CONNECTED'
+       rescue SocketError, SystemCallError, IOError, Timeout::Error => err
+         logger.warn { "Unable to open socket: #{err.class.name}, #{err.message}" } if logger
+         mark_dead err
+       end
+
+       return @sock
+     end
+
+     ##
+     # Close the connection to the memcached server targeted by this
+     # object. The server is not considered dead.
+
+     def close
+       @sock.close if @sock && !@sock.closed?
+       @sock   = nil
+       @retry  = nil
+       @status = "NOT CONNECTED"
+     end
+
+     ##
+     # Mark the server as dead and close its socket.
+
+     def mark_dead(error)
+       @sock.close if @sock && !@sock.closed?
+       @sock  = nil
+       @retry = Time.now + RETRY_DELAY
+
+       reason  = "#{error.class.name}: #{error.message}"
+       @status = sprintf "%s:%s DEAD (%s), will retry at %s", @host, @port, reason, @retry
+       @logger.info { @status } if @logger
+     end
+
+   end
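The dead-server handling hinges on the +@retry+ timestamp: +mark_dead+ sets it RETRY_DELAY seconds out, and +socket+ bails early while the current time is still inside that window. A sketch of the window arithmetic with plain variables:

```ruby
RETRY_DELAY = 30.0 # same value as MemCache::Server::RETRY_DELAY

# Sketch of the dead-server window: after mark_dead, Server#socket returns
# nil while Time.now is still before the retry timestamp.
marked_at = Time.now
retry_at  = marked_at + RETRY_DELAY
in_window = retry_at > Time.now # socket would skip reconnecting here
p in_window
p(retry_at - marked_at) # => 30.0
```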
1054
+
+   ##
+   # Base MemCache exception class.
+
+   class MemCacheError < RuntimeError; end
+
+ end
+
+ # TCPSocket facade class which implements timeouts.
+ class TCPTimeoutSocket
+
+   def initialize(host, port, timeout)
+     MemCacheTimer.timeout(MemCache::Server::CONNECT_TIMEOUT) do
+       @sock = TCPSocket.new(host, port)
+       @len  = timeout
+     end
+   end
+
+   def write(*args)
+     MemCacheTimer.timeout(@len) do
+       @sock.write(*args)
+     end
+   end
+
+   def gets(*args)
+     MemCacheTimer.timeout(@len) do
+       @sock.gets(*args)
+     end
+   end
+
+   def read(*args)
+     MemCacheTimer.timeout(@len) do
+       @sock.read(*args)
+     end
+   end
+
+   def _socket
+     @sock
+   end
+
+   def method_missing(meth, *args)
+     @sock.__send__(meth, *args)
+   end
+
+   def closed?
+     @sock.closed?
+   end
+
+   def close
+     @sock.close
+   end
+ end
+
+ module Continuum
+   POINTS_PER_SERVER = 160 # this is the default in libmemcached
+
+   # Find the closest index in the continuum with a value <= the given value.
+   def self.binary_search(ary, value)
+     upper = ary.size - 1
+     lower = 0
+     idx   = 0
+
+     while lower <= upper do
+       idx  = (lower + upper) / 2
+       comp = ary[idx].value <=> value
+
+       if comp == 0
+         return idx
+       elsif comp > 0
+         upper = idx - 1
+       else
+         lower = idx + 1
+       end
+     end
+     return upper
+   end
+
+   class Entry
+     attr_reader :value
+     attr_reader :server
+
+     def initialize(val, srv)
+       @value  = val
+       @server = srv
+     end
+
+     def inspect
+       "<#{value}, #{server.host}:#{server.port}>"
+     end
+   end
+ end
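One subtlety of the continuum lookup: when the hash is smaller than every point on the ring, the search returns -1, which wraps to the last entry via Ruby's negative indexing. A self-contained sketch, assuming small integer points stand in for the SHA1-derived hash values:

```ruby
# Minimal sketch of the Continuum lookup with integer points standing in
# for SHA1-derived hash values.
Entry = Struct.new(:value, :server)

def binary_search(ary, value)
  upper = ary.size - 1
  lower = 0
  while lower <= upper
    idx  = (lower + upper) / 2
    comp = ary[idx].value <=> value
    return idx if comp == 0
    if comp > 0
      upper = idx - 1
    else
      lower = idx + 1
    end
  end
  upper # may be -1, which wraps to the last entry via negative indexing
end

ring = [10, 20, 30].map { |v| Entry.new(v, "server-#{v}") }
p binary_search(ring, 25) # => 1  (entry 20, closest point <= 25)
p binary_search(ring, 5)  # => -1 (ring[-1] is entry 30: wrap-around)
```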