yury-memcache-client 1.7.1.3

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,24 @@
+ = Memcache-client FAQ
+
+ == Does memcache-client work with Ruby 1.9?
+
+ Yes, Ruby 1.9 is supported. The test suite should pass completely on 1.8.6 and 1.9.1.
+
+
+ == I'm seeing "execution expired" or "time's up!" errors, what's that all about?
+
+ memcache-client 1.6.x+ now times out socket operations by default. This is to prevent
+ the Ruby process from hanging if memcached or starling gets into a bad state, which has been
+ seen in production by both 37signals and FiveRuns. The default timeout is 0.5 seconds, which
+ should be more than enough time under normal circumstances. It's possible to hit a storm of
+ concurrent events which cause this timer to expire: a large Ruby VM can cause GC to take
+ a while, while also storing a large (500KB-1MB) value, for example.
+
+ You can increase the timeout or disable it completely with the following configuration:
+
+ Rails:
+   config.cache_store = :mem_cache_store, 'server1', 'server2', { :timeout => nil } # no timeout
+
+ native:
+   MemCache.new ['server1', 'server2'], { :timeout => 1.0 } # 1 second timeout
+
@@ -0,0 +1,207 @@
+ = 1.7.1 (2009-03-28)
+
+ * Performance optimizations:
+   * Rely on higher-performance operating system socket timeouts for low-level socket
+     reads/writes where possible, instead of the (slower) SystemTimer or (slowest,
+     unreliable) Timeout libraries.
+   * The native binary search is back! The recent performance tuning made the binary search
+     a bottleneck again, so it had to return. It uses RubyInline to compile the native extension and
+     silently falls back to pure Ruby if anything fails. Make sure you run
+     `gem install RubyInline` if you want ultimate performance.
+   * These changes make memcache-client 100% faster than 1.7.0 in my performance test on Ruby 1.8.6:
+     15 sec -> 8 sec.
+ * Fix several logging issues.
+
+ = 1.7.0 (2009-03-08)
+
+ * Go through the memcached protocol document and implement any commands not already implemented:
+   - cas
+   - append
+   - prepend
+   - replace
+
+   Append and prepend only work with raw data, since it makes no sense to concatenate two Marshalled
+   values together. The cas functionality should be considered a prototype. Since I don't have an
+   application which uses +cas+, I'm not sure what semantic sugar the API should provide. Should it
+   retry if the value was changed? Should it massage the returned string into true/false? Feedback
+   would be appreciated.
+
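The check-and-set flow described above can be sketched with an in-memory stand-in. The `TokenStore` class below is hypothetical, simulating how memcached pairs each value with a cas token from +gets+ and answers "STORED" or "EXISTS" to a +cas+ command:

```ruby
# Hypothetical in-memory stand-in illustrating check-and-set semantics.
# A real memcached hands back a cas token from "gets" and answers "STORED"
# or "EXISTS" to "cas"; this toy mimics that with a version counter.
class TokenStore
  def initialize
    @data   = {}
    @tokens = Hash.new(0)
  end

  def gets(key)
    [@data[key], @tokens[key]]   # value plus its current version token
  end

  def set(key, value)
    @data[key] = value
    @tokens[key] += 1
    'STORED'
  end

  def cas(key, value, token)
    return 'EXISTS' unless token == @tokens[key]  # someone updated since our fetch
    set(key, value)
  end
end

store = TokenStore.new
store.set('counter', 1)
value, token = store.gets('counter')
store.set('counter', 5)                          # a concurrent writer sneaks in
result = store.cas('counter', value + 1, token)  # => "EXISTS": our token is stale
```

This is exactly the "retry if the value was changed?" question above: on "EXISTS" a caller would re-fetch and try again.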
+ * Add a fetch method very similar to ActiveSupport::Cache::Store#fetch,
+   basically a wrapper around get and add. (djanowski)
+
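The get-then-add flow can be sketched against a plain Hash standing in for a live server. `FakeCache` below is hypothetical; the real method lives on MemCache:

```ruby
# Hypothetical in-memory stand-in showing fetch's get-then-add semantics.
class FakeCache
  def initialize
    @store = {}
  end

  def get(key)
    @store[key]
  end

  def add(key, value, _expiry = 0)
    @store[key] = value unless @store.key?(key)  # add is a no-op on existing keys
  end

  def fetch(key, expiry = 0)
    value = get(key)
    if value.nil? && block_given?
      value = yield              # compute on a miss...
      add(key, value, expiry)    # ...and store the result via add
    end
    value
  end
end

cache = FakeCache.new
first  = cache.fetch('greeting') { 'hello' }  # miss: block runs, "hello" stored
second = cache.fetch('greeting') { 'other' }  # hit: block skipped, "hello" returned
```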
+ * Implement the flush_all delay parameter, to allow a large memcached farm to be flushed gradually.
+
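The delay parameter staggers the flushes: the Nth server is told to flush N*delay seconds from now. A quick sketch of the resulting schedule (the server names are illustrative):

```ruby
# Each server receives "flush_all <delay_time>"; delay_time grows by the
# configured delay per server, spreading the cache wipe over time instead
# of invalidating the whole farm at once.
servers = %w[cache1:11211 cache2:11211 cache3:11211 cache4:11211]
delay   = 10

schedule = servers.each_with_index.map do |server, i|
  [server, i * delay]   # seconds from now that this server flushes
end
# schedule => [["cache1:11211", 0], ["cache2:11211", 10],
#              ["cache3:11211", 20], ["cache4:11211", 30]]
```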
+ * Implement the noreply flag, which tells memcached not to reply in operations which don't
+   need a reply, i.e. set/add/delete/flush_all.
+
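On the wire, the flag simply appends " noreply" to the command, after which the client skips reading a response line. A sketch using the delete command's text-protocol format (the helper below is illustrative, not the client's actual API):

```ruby
# Build a memcached text-protocol delete command, optionally fire-and-forget.
# With noreply set, the client writes the command and never waits for "DELETED".
def delete_command(key, expiry, no_reply)
  noreply = no_reply ? ' noreply' : ''
  "delete #{key} #{expiry}#{noreply}\r\n"
end

with_reply      = delete_command('session:42', 0, false)  # => "delete session:42 0\r\n"
fire_and_forget = delete_command('session:42', 0, true)   # => "delete session:42 0 noreply\r\n"
```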
+ * The only known functionality not implemented anymore is the <flags> parameter to the storage
+   commands. This would require modification of the API method signatures. If someone can come
+   up with a clean way to implement it, I would be happy to consider including it.
+
+ = 1.6.5 (2009-02-27)
+
+ * Change memcache-client to multithreaded by default. The mutex does not add significant
+   overhead and it is far too easy, now that Sinatra, Rails and Merb are all thread-safe, to
+   use memcache-client in a thread-unsafe manner. Remove some unnecessary mutexing and add
+   a test to verify heavily multithreaded usage does not act unexpectedly.
+
+ * Add optional support for the SystemTimer gem when running on Ruby 1.8.x. This gem is
+   highly recommended - it ensures timeouts actually work and halves the overhead of using
+   timeouts. Using this gem, Ruby 1.8.x is actually faster in my performance tests
+   than Ruby 1.9.x. Just "gem install SystemTimer" and it should be picked up automatically.
+
+ = 1.6.4 (2009-02-19)
+
+ * Remove native code altogether. The speedup was only 10% on Ruby 1.8.6 and did not work
+   on Ruby 1.9.1.
+
+ * Removed memcache_util.rb from the distribution. If you are using it, please copy the code
+   into your own project. The file will live in the github repository for a few more months
+   for this purpose. http://github.com/mperham/memcache-client/raw/7a276089aa3c914e47e3960f9740ac7377204970/lib/memcache_util.rb
+
+ * Roll continuum.rb into memcache.rb. The project is again a single Ruby file, with no dependencies.
+
+ = 1.6.3 (2009-02-14)
+
+ * Remove gem native extension in preference to RubyInline. This allows the gem to install
+   and work on JRuby and Ruby 1.8.5 when the native code fails to compile.
+
+ = 1.6.2 (2009-02-04)
+
+ * Validate that values are less than one megabyte in size.
+
+ * Refactor error handling in get_multi to handle server failures and return what values
+   we could successfully retrieve.
+
+ * Add optional logging parameter for debugging and tracing.
+
+ * First official release since 1.5.0. Thanks to Eric Hodel for turning over the project to me!
+   New project home page: http://github.com/mperham/memcache-client
+
+ = 1.6.1 (2009-01-28)
+
+ * Add option to disable socket timeout support. Socket timeout has a significant performance
+   penalty (approx 3x slower than without in Ruby 1.8.6). You can turn off the timeouts if you
+   need absolute performance, but by default timeouts are enabled. The performance
+   penalty is much lower in Ruby 1.8.7, 1.9 and JRuby. (mperham)
+
+ * Add option to disable server failover. Failover can lead to "split-brain" caches that
+   return stale data. (mperham)
+
+ * Implement continuum binary search in native code for performance reasons. Pure Ruby
+   is available for platforms like JRuby or Rubinius which can't use C extensions. (mperham)
+
+ * Fix #add with raw=true (iamaleksey)
+
+ = 1.6.0
+
+ * Implement a consistent hashing algorithm, as described in libketama.
+   This dramatically reduces the cost of adding or removing servers dynamically
+   as keys are much more likely to map to the same server.
+
+   Take a scenario where we add a fourth server. With a naive modulo algorithm, about
+   25% of the keys will map to the same server. In other words, 75% of your memcached
+   content suddenly becomes invalid. With a consistent algorithm, 75% of the keys
+   will map to the same server as before - only 25% will be invalidated. (mperham)
+
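The 25%/75% figures are easy to check with a quick sketch: under modulo hashing, a key keeps its server only when its hash yields the same index mod 3 (old farm) and mod 4 (new farm), which happens for about a quarter of keys. This uses Zlib's crc32 purely for illustration, not the client's actual key-mapping code:

```ruby
require 'zlib'

# Count keys that land on the same bucket index when a 4th server is added
# under naive modulo hashing. Roughly 25% survive; the other ~75% remap.
keys = (1..1000).map { |i| "user:#{i}" }
unchanged = keys.count do |key|
  h = Zlib.crc32(key)
  (h % 3) == (h % 4)   # same index before (3 servers) and after (4 servers)
end
fraction = unchanged / 1000.0   # ~0.25
```

By the Chinese remainder theorem, `h % 3 == h % 4` holds for exactly 3 of the 12 residues mod 12, hence the 25%.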
+ * Implement socket timeouts, should fix rare cases of very bad things happening
+   in production at 37signals and FiveRuns. (jseirles)
+
+ = 1.5.0.5
+
+ * Remove native C CRC32_ITU_T extension in favor of Zlib's crc32 method.
+   memcache-client is now pure Ruby again and will work with JRuby and Rubinius.
+
+ = 1.5.0.4
+
+ * Get test suite working again (packagethief)
+ * Ruby 1.9 compatibility fixes (packagethief, mperham)
+ * Consistently return server responses and check for errors (packagethief)
+ * Properly calculate CRC in Ruby 1.9 strings (mperham)
+ * Drop rspec in favor of test/unit, for 1.9 compat (mperham)
+
+ = 1.5.0.3 (FiveRuns fork)
+
+ * Integrated ITU-T CRC32 operation in native C extension for speed. Thanks to Justin Balthrop!
+
+ = 1.5.0.2 (FiveRuns fork)
+
+ * Add support for seamless failover between servers. If one server connection dies,
+   the client will retry the operation on another server before giving up.
+
+ * Merge Will Bryant's socket retry patch.
+   http://willbryant.net/software/2007/12/21/ruby-memcache-client-reconnect-and-retry
+
+ = 1.5.0.1 (FiveRuns fork)
+
+ * Fix set not handling client disconnects.
+   http://dev.twitter.com/2008/02/solving-case-of-missing-updates.html
+
+ = 1.5.0
+
+ * Add MemCache#flush_all command. Patch #13019 and bug #10503. Patches
+   submitted by Sebastian Delmont and Rick Olson.
+ * Type-cast data returned by MemCache#stats. Patch #10505 submitted by
+   Sebastian Delmont.
+
+ = 1.4.0
+
+ * Fix bug #10371, #set does not check response for server errors.
+   Submitted by Ben VandenBos.
+ * Fix bug #12450, set TCP_NODELAY socket option. Patch by Chris
+   McGrath.
+ * Fix bug #10704, missing #add method. Patch by Jamie Macey.
+ * Fix bug #10371, handle socket EOF in cache_get. Submitted by Ben
+   VandenBos.
+
+ = 1.3.0
+
+ * Apply patch #6507, add stats command. Submitted by Tyler Kovacs.
+ * Apply patch #6509, parallel implementation of #get_multi. Submitted
+   by Tyler Kovacs.
+ * Validate keys. Disallow spaces in keys or keys that are too long.
+ * Perform more validation of server responses. MemCache now reports
+   errors if the socket was not in an expected state. (Please file
+   bugs if you find some.)
+ * Add #incr and #decr.
+ * Add raw argument to #set and #get to retrieve #incr and #decr
+   values.
+ * Also put on MemCacheError when using Cache::get with block.
+ * memcache.rb no longer sets $TESTING to a true value if it was
+   previously defined. Bug #8213 by Matijs van Zuijlen.
+
+ = 1.2.1
+
+ * Fix bug #7048, MemCache#servers= referenced changed local variable.
+   Submitted by Justin Dossey.
+ * Fix bug #7049, MemCache#initialize resets @buckets. Submitted by
+   Justin Dossey.
+ * Fix bug #6232, Make Cache::Get work with a block only when nil is
+   returned. Submitted by Jon Evans.
+ * Moved to the seattlerb project.
+
+ = 1.2.0
+
+ NOTE: This version will store keys in different places than previous
+ versions! Be prepared for some thrashing while memcached sorts itself
+ out!
+
+ * Fixed multithreaded operations, bugs 5994 and 5989.
+   Thanks to Blaine Cook, Erik Hetzner, Elliot Smith, Dave Myron (and
+   possibly others I have forgotten).
+ * Made memcached interoperable with other memcached libraries, bug
+   4509. Thanks to anonymous.
+ * Added get_multi to match Perl/etc APIs.
+
+ = 1.1.0
+
+ * Added some tests
+ * Sped up non-multithreaded and multithreaded operation
+ * More Ruby-memcache compatibility
+ * More RDoc
+ * Switched to Hoe
+
+ = 1.0.0
+
+ Birthday!
+
@@ -0,0 +1,28 @@
+ Copyright 2005-2009 Bob Cottrell, Eric Hodel, Mike Perham.
+ All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+
+ 1. Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+ 2. Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in the
+    documentation and/or other materials provided with the distribution.
+ 3. Neither the names of the authors nor the names of their contributors
+    may be used to endorse or promote products derived from this software
+    without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE AUTHORS ``AS IS'' AND ANY EXPRESS
+ OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+ WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHORS OR CONTRIBUTORS BE
+ LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY,
+ OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT
+ OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
+ BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
+ WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
+ OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE,
+ EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
@@ -0,0 +1,50 @@
+ = memcache-client
+
+ A Ruby library for accessing memcached.
+
+ Source:
+
+ http://github.com/mperham/memcache-client
+
+ == Installing memcache-client
+
+ Just install the gem:
+
+   $ sudo gem install memcache-client
+
+ == Using memcache-client
+
+ With one server:
+
+   CACHE = MemCache.new 'localhost:11211'
+
+ Or with multiple servers:
+
+   CACHE = MemCache.new %w[one.example.com:11211 two.example.com:11211]
+
+
+ == Tuning memcache-client
+
+ The MemCache.new method takes a number of options which can be useful at times. Please
+ read the source comments there for an overview. If you are using Ruby 1.8.x with
+ multiple memcached servers, you should install the RubyInline gem for ultimate performance.
+
+
+ == Using memcache-client with Rails
+
+ Rails 2.1+ includes memcache-client out of the box. See ActiveSupport::Cache::MemCacheStore
+ and the Rails.cache method for more details.
+
+
+ == Questions?
+
+ memcache-client is maintained by Mike Perham and was originally written by Bob Cottrell,
+ Eric Hodel and the seattle.rb crew.
+
+ Email:: mailto:mperham@gmail.com
+ Twitter:: mperham[http://twitter.com/mperham]
+ WWW:: http://mikeperham.com
+
+ If my work on memcache-client is something you support, please take a moment to
+ recommend me at WWR[http://workingwithrails.com/person/10797-mike-perham]. I'm not
+ asking for money, just an electronic "thumbs up".
@@ -0,0 +1,35 @@
+ # vim: syntax=Ruby
+ require 'rubygems'
+ require 'rake/rdoctask'
+ require 'rake/testtask'
+
+ task :gem do
+   sh "gem build memcache-client.gemspec"
+ end
+
+ task :install => [:gem] do
+   sh "sudo gem install memcache-client-*.gem"
+ end
+
+ task :clean do
+   sh "rm -f memcache-client-*.gem"
+ end
+
+ task :publish => [:clean, :gem, :install] do
+   require 'lib/memcache'
+   sh "rubyforge add_release seattlerb memcache-client #{MemCache::VERSION} memcache-client-#{MemCache::VERSION}.gem"
+ end
+
+ Rake::RDocTask.new do |rd|
+   rd.main = "README.rdoc"
+   rd.rdoc_files.include("README.rdoc", "FAQ.rdoc", "History.rdoc", "lib/memcache.rb")
+   rd.rdoc_dir = 'doc'
+ end
+
+ Rake::TestTask.new
+
+ task :default => :test
+
+ task :rcov do
+   `rcov -Ilib test/*.rb`
+ end
@@ -0,0 +1,41 @@
+ module Continuum
+
+   class << self
+
+     # Native extension to perform the binary search within the continuum
+     # space. There's a pure ruby version in memcache.rb so this is purely
+     # optional for performance and only necessary if you are using multiple
+     # memcached servers.
+     begin
+       require 'inline'
+       inline do |builder|
+         builder.c <<-EOM
+           int binary_search(VALUE ary, unsigned int r) {
+             int upper = RARRAY_LEN(ary) - 1;
+             int lower = 0;
+             int idx = 0;
+             ID value = rb_intern("value");
+
+             while (lower <= upper) {
+               idx = (lower + upper) / 2;
+
+               VALUE continuumValue = rb_funcall(RARRAY_PTR(ary)[idx], value, 0);
+               unsigned int l = NUM2UINT(continuumValue);
+               if (l == r) {
+                 return idx;
+               }
+               else if (l > r) {
+                 upper = idx - 1;
+               }
+               else {
+                 lower = idx + 1;
+               }
+             }
+             return upper;
+           }
+         EOM
+       end
+     rescue Exception => e
+       # Compilation or RubyInline load failed; fall back silently to the
+       # pure Ruby search in memcache.rb.
+     end
+   end
+ end
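For platforms where RubyInline or a C compiler is unavailable, the same search can be expressed in pure Ruby. This is a sketch mirroring the C code above (the actual fallback lives in memcache.rb; the `Entry` struct here is illustrative):

```ruby
# Pure-Ruby mirror of the inlined C binary_search: returns the index of the
# matching continuum entry, or the index of the largest entry below +target+
# (-1 if every entry is above it), matching the C version's "return upper".
Entry = Struct.new(:value)

def binary_search(ary, target)
  lower = 0
  upper = ary.length - 1
  while lower <= upper
    idx = (lower + upper) / 2
    v = ary[idx].value
    return idx if v == target
    if v > target
      upper = idx - 1
    else
      lower = idx + 1
    end
  end
  upper
end

ring = [150, 3000, 45_000, 900_000].map { |v| Entry.new(v) }
exact   = binary_search(ring, 45_000)  # => 2
between = binary_search(ring, 10_000)  # => 1 (largest entry below 10_000)
below   = binary_search(ring, 5)       # => -1
```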
@@ -0,0 +1,1120 @@
+ $TESTING = defined?($TESTING) && $TESTING
+
+ require 'socket'
+ require 'thread'
+ require 'zlib'
+ require 'digest/sha1'
+
+ begin
+   # Try to use the SystemTimer gem instead of Ruby's timeout library
+   # when running on something that looks like Ruby 1.8.x. See:
+   # http://ph7spot.com/articles/system_timer
+   # We don't want to bother trying to load SystemTimer on jruby and
+   # ruby 1.9+.
+   if !defined?(RUBY_ENGINE)
+     require 'system_timer'
+     MemCacheTimer = SystemTimer
+   else
+     require 'timeout'
+     MemCacheTimer = Timeout
+   end
+ rescue LoadError => e
+   puts "[memcache-client] Could not load SystemTimer gem, falling back to Ruby's slower/unsafe timeout library: #{e.message}"
+   require 'timeout'
+   MemCacheTimer = Timeout
+ end
+
+ ##
+ # A Ruby client library for memcached.
+ #
+
+ class MemCache
+
+   ##
+   # The version of MemCache you are using.
+
+   VERSION = '1.7.1.2'
+
+   ##
+   # Default options for the cache object.
+
+   DEFAULT_OPTIONS = {
+     :namespace => nil,
+     :readonly => false,
+     :multithread => true,
+     :failover => true,
+     :timeout => 0.5,
+     :logger => nil,
+     :raw => false,
+     :persistent_hashing => true,
+     :no_reply => false,
+   }
+
+   ##
+   # Default memcached port.
+
+   DEFAULT_PORT = 11211
+
+   ##
+   # Default memcached server weight.
+
+   DEFAULT_WEIGHT = 1
+
+   ##
+   # The namespace for this instance
+
+   attr_reader :namespace
+
+   ##
+   # The multithread setting for this instance
+
+   attr_reader :multithread
+
+   ##
+   # The servers this client talks to. Play at your own peril.
+
+   attr_reader :servers
+
+   ##
+   # Socket timeout limit with this client, defaults to 0.5 sec.
+   # Set to nil to disable timeouts.
+
+   attr_reader :timeout
+
+   ##
+   # Should the client try to failover to another server if the
+   # first server is down? Defaults to true.
+
+   attr_reader :failover
+
+   ##
+   # Log debug/info/warn/error to the given Logger, defaults to nil.
+
+   attr_reader :logger
+
+   ##
+   # Don't send or look for a reply from the memcached server for write operations.
+   # Please note this feature only works in memcached 1.2.5 and later. Earlier
+   # versions will reply with "ERROR".
+
+   attr_reader :no_reply
+
+   ##
+   # Accepts a list of +servers+ and a list of +opts+. +servers+ may be
+   # omitted. See +servers=+ for acceptable server list arguments.
+   #
+   # Valid options for +opts+ are:
+   #
+   # [:namespace]   Prepends this value to all keys added or retrieved.
+   # [:readonly]    Raises an exception on cache writes when true.
+   # [:multithread] Wraps cache access in a Mutex for thread safety. Defaults to true.
+   # [:failover]    Should the client try to failover to another server if the
+   #                first server is down? Defaults to true.
+   # [:timeout]     Time to use as the socket read timeout. Defaults to 0.5 sec;
+   #                set to nil to disable timeouts (this is a major performance penalty in Ruby 1.8,
+   #                "gem install SystemTimer" to remove most of the penalty).
+   # [:logger]      Logger to use for info/debug output, defaults to nil.
+   # [:raw]         If true, the value(s) will be returned unaltered.
+   # [:persistent_hashing]
+   #                If true, use consistent (persistent) hashing; otherwise use
+   #                modulo hashing. Defaults to true.
+   # [:no_reply]    Don't bother looking for a reply for write operations (i.e. they
+   #                become 'fire and forget'), memcached 1.2.5 and later only, speeds up
+   #                set/add/delete/incr/decr significantly.
+   #
+   # Other options are ignored.
+
+   def initialize(*args)
+     servers = []
+     opts = {}
+
+     case args.length
+     when 0 then # NOP
+     when 1 then
+       arg = args.shift
+       case arg
+       when Hash then opts = arg
+       when Array then servers = arg
+       when String then servers = [arg]
+       else raise ArgumentError, 'first argument must be Array, Hash or String'
+       end
+     when 2 then
+       servers, opts = args
+     else
+       raise ArgumentError, "wrong number of arguments (#{args.length} for 2)"
+     end
+
+     opts = DEFAULT_OPTIONS.merge opts
+     @namespace = opts[:namespace]
+     @readonly = opts[:readonly]
+     @multithread = opts[:multithread]
+     @timeout = opts[:timeout]
+     @failover = opts[:failover]
+     @logger = opts[:logger]
+     @raw = opts[:raw]
+     @persistent_hashing = opts[:persistent_hashing]
+     @no_reply = opts[:no_reply]
+     @mutex = Mutex.new if @multithread
+
+     logger.info { "memcache-client #{VERSION} #{Array(servers).inspect}" } if logger
+
+     self.servers = servers
+   end
+
+   ##
+   # Returns a string representation of the cache object.
+
+   def inspect
+     "<MemCache: %d servers, ns: %p, ro: %p, raw: %p>" %
+       [@servers.length, @namespace, @readonly, @raw]
+   end
+
+   ##
+   # Returns whether there is at least one active server for the object.
+
+   def active?
+     not @servers.empty?
+   end
+
+   ##
+   # Returns whether or not the cache object was created read only.
+
+   def readonly?
+     @readonly
+   end
+
+   ##
+   # Set the servers that the requests will be distributed between. Entries
+   # can be either strings of the form "hostname:port" or
+   # "hostname:port:weight" or MemCache::Server objects.
+   #
+   def servers=(servers)
+     # Create the server objects.
+     @servers = Array(servers).collect do |server|
+       case server
+       when String
+         host, port, weight = server.split ':', 3
+         port ||= DEFAULT_PORT
+         weight ||= DEFAULT_WEIGHT
+         Server.new self, host, port, weight
+       else
+         server
+       end
+     end
+
+     logger.debug { "Servers now: #{@servers.inspect}" } if logger
+
+     # There's no point in doing this if there's only one server
+     if @servers.size > 1
+       if @persistent_hashing
+         @continuum = create_continuum_for(@servers)
+       else
+         @buckets = []
+         @servers.each do |server|
+           server.weight.times { @buckets.push(server) }
+         end
+       end
+     end
+
+     @servers
+   end
+
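The accepted string forms can be seen by splitting a sample entry the way servers= does. The hostnames below are illustrative; note that the parts come back as strings, with conversion left to MemCache::Server:

```ruby
# "hostname:port:weight" splits into at most three parts; missing parts
# fall back to the defaults (port 11211, weight 1), as in MemCache#servers=.
DEFAULT_PORT = 11211
DEFAULT_WEIGHT = 1

host, port, weight = 'cache1.example.com:11212:3'.split ':', 3
port ||= DEFAULT_PORT
weight ||= DEFAULT_WEIGHT
# => ["cache1.example.com", "11212", "3"]  (strings when given explicitly)

host2, port2, weight2 = 'cache2.example.com'.split ':', 3
port2 ||= DEFAULT_PORT     # => 11211 (integer default)
weight2 ||= DEFAULT_WEIGHT # => 1
```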
220
+ ##
221
+ # Decrements the value for +key+ by +amount+ and returns the new value.
222
+ # +key+ must already exist. If +key+ is not an integer, it is assumed to be
223
+ # 0. +key+ can not be decremented below 0.
224
+
225
+ def decr(key, amount = 1)
226
+ raise MemCacheError, "Update of readonly cache" if @readonly
227
+ with_server(key) do |server, cache_key|
228
+ cache_decr server, cache_key, amount
229
+ end
230
+ rescue TypeError => err
231
+ handle_error nil, err
232
+ end
233
+
234
+ ##
235
+ # Retrieves +key+ from memcache. If +raw+ is false, the value will be
236
+ # unmarshalled.
237
+
238
+ def get(key, raw = nil)
239
+ with_server(key) do |server, cache_key|
240
+ value = cache_get server, cache_key
241
+ logger.debug { "get #{key} from #{server.inspect}: #{value ? value.to_s.size : 'nil'}" } if logger
242
+ return nil if value.nil?
243
+ value = Marshal.load value unless (raw == nil && @raw) || raw
244
+ return value
245
+ end
246
+ rescue TypeError => err
247
+ handle_error nil, err
248
+ end
249
+
250
+ ##
251
+ # Performs a +get+ with the given +key+. If
252
+ # the value does not exist and a block was given,
253
+ # the block will be called and the result saved via +add+.
254
+ #
255
+ # If you do not provide a block, using this
256
+ # method is the same as using +get+.
257
+ #
258
+ def fetch(key, expiry = 0, raw = false)
259
+ value = get(key, raw)
260
+
261
+ if value.nil? && block_given?
262
+ value = yield
263
+ add(key, value, expiry, raw)
264
+ end
265
+
266
+ value
267
+ end
268
+
269
+ ##
270
+ # Retrieves multiple values from memcached in parallel, if possible.
271
+ #
272
+ # The memcached protocol supports the ability to retrieve multiple
273
+ # keys in a single request. Pass in an array of keys to this method
274
+ # and it will:
275
+ #
276
+ # 1. map the key to the appropriate memcached server
277
+ # 2. send a single request to each server that has one or more key values
278
+ #
279
+ # Returns a hash of values.
280
+ #
281
+ # cache["a"] = 1
282
+ # cache["b"] = 2
283
+ # cache.get_multi "a", "b" # => { "a" => 1, "b" => 2 }
284
+ #
285
+ # Note that get_multi assumes the values are marshalled.
286
+
287
+ def get_multi(*keys)
288
+ raise MemCacheError, 'No active servers' unless active?
289
+
290
+ keys.flatten!
291
+ key_count = keys.length
292
+ cache_keys = {}
293
+ server_keys = Hash.new { |h,k| h[k] = [] }
294
+
295
+ # map keys to servers
296
+ keys.each do |key|
297
+ server, cache_key = request_setup key
298
+ cache_keys[cache_key] = key
299
+ server_keys[server] << cache_key
300
+ end
301
+
302
+ results = {}
303
+
304
+ server_keys.each do |server, keys_for_server|
305
+ keys_for_server_str = keys_for_server.join ' '
306
+ begin
307
+ values = cache_get_multi server, keys_for_server_str
308
+ values.each do |key, value|
309
+ results[cache_keys[key]] = if @raw then value else Marshal.load value end
310
+ end
311
+ rescue IndexError => e
312
+ # Ignore this server and try the others
313
+ logger.warn { "Unable to retrieve #{keys_for_server.size} elements from #{server.inspect}: #{e.message}"} if logger
314
+ end
315
+ end
316
+
317
+ return results
318
+ rescue TypeError => err
319
+ handle_error nil, err
320
+ end
321
+
322
+ ##
323
+ # Increments the value for +key+ by +amount+ and returns the new value.
324
+ # +key+ must already exist. If +key+ is not an integer, it is assumed to be
325
+ # 0.
326
+
327
+ def incr(key, amount = 1)
328
+ raise MemCacheError, "Update of readonly cache" if @readonly
329
+ with_server(key) do |server, cache_key|
330
+ cache_incr server, cache_key, amount
331
+ end
332
+ rescue TypeError => err
333
+ handle_error nil, err
334
+ end
335
+
336
+ ##
337
+ # Add +key+ to the cache with value +value+ that expires in +expiry+
338
+ # seconds. If +raw+ is true, +value+ will not be Marshalled.
339
+ #
340
+ # Warning: Readers should not call this method in the event of a cache miss;
341
+ # see MemCache#add.
342
+
343
+ ONE_MB = 1024 * 1024
344
+
345
+ def set(key, value, expiry = 0, raw = nil)
346
+ raise MemCacheError, "Update of readonly cache" if @readonly
347
+ with_server(key) do |server, cache_key|
348
+
349
+ value = Marshal.dump value unless (raw == nil && @raw) || raw
350
+ logger.debug { "set #{key} to #{server.inspect}: #{value.to_s.size}" } if logger
351
+
352
+ raise MemCacheError, "Value too large, memcached can only store 1MB of data per key" if value.to_s.size > ONE_MB
353
+
354
+ command = "set #{cache_key} 0 #{expiry} #{value.to_s.size}#{noreply}\r\n#{value}\r\n"
355
+
356
+ with_socket_management(server) do |socket|
357
+ socket.write command
358
+ break nil if @no_reply
359
+ result = socket.gets
360
+ raise_on_error_response! result
361
+
362
+ if result.nil?
363
+ server.close
364
+ raise MemCacheError, "lost connection to #{server.host}:#{server.port}"
365
+ end
366
+
367
+ result
368
+ end
369
+ end
370
+ end
371
+
372
+ ##
373
+ # "cas" is a check and set operation which means "store this data but
374
+ # only if no one else has updated since I last fetched it." This can
375
+ # be used as a form of optimistic locking.
376
+ #
377
+ # Works in block form like so:
378
+ # cache.cas('some-key') do |value|
379
+ # value + 1
380
+ # end
381
+ #
382
+ # Returns:
383
+ # +nil+ if the value was not found on the memcached server.
384
+ # +STORED+ if the value was updated successfully
385
+ # +EXISTS+ if the value was updated by someone else since last fetch
386
+
387
+ def cas(key, expiry=0, raw=nil)
388
+ raise MemCacheError, "Update of readonly cache" if @readonly
389
+ raise MemCacheError, "A block is required" unless block_given?
390
+
391
+ (value, token) = gets(key, raw)
392
+ return nil unless value
393
+ value = yield value
394
+
395
+ with_server(key) do |server, cache_key|
396
+ value = Marshal.dump value unless (raw == nil && @raw) || raw
397
+ logger.debug { "cas #{key} to #{server.inspect}: #{value.to_s.size}" } if logger
398
+ command = "cas #{cache_key} 0 #{expiry} #{value.to_s.size} #{token}#{noreply}\r\n#{value}\r\n"
399
+
400
+ with_socket_management(server) do |socket|
401
+ socket.write command
402
+ break nil if @no_reply
403
+ result = socket.gets
404
+ raise_on_error_response! result
405
+
406
+ if result.nil?
407
+ server.close
408
+ raise MemCacheError, "lost connection to #{server.host}:#{server.port}"
409
+ end
410
+
411
+ result
412
+ end
413
+ end
414
+ end
415
+
416
+ ##
417
+ # Add +key+ to the cache with value +value+ that expires in +expiry+
418
+ # seconds, but only if +key+ does not already exist in the cache.
419
+ # If +raw+ is true, +value+ will not be Marshalled.
420
+ #
421
+ # Readers should call this method in the event of a cache miss, not
422
+ # MemCache#set.
423
+
424
+ def add(key, value, expiry = 0, raw = nil)
425
+ raise MemCacheError, "Update of readonly cache" if @readonly
426
+ with_server(key) do |server, cache_key|
427
+ value = Marshal.dump value unless (raw == nil && @raw) || raw
428
+ logger.debug { "add #{key} to #{server}: #{value ? value.to_s.size : 'nil'}" } if logger
429
+ command = "add #{cache_key} 0 #{expiry} #{value.to_s.size}#{noreply}\r\n#{value}\r\n"
430
+
431
+ with_socket_management(server) do |socket|
432
+ socket.write command
433
+ break nil if @no_reply
434
+ result = socket.gets
435
+ raise_on_error_response! result
436
+ result
437
+ end
438
+ end
439
+ end
440
+
441
+ ##
442
+ # Add +key+ to the cache with value +value+ that expires in +expiry+
443
+ # seconds, but only if +key+ already exists in the cache.
444
+ # If +raw+ is true, +value+ will not be Marshalled.
445
+ def replace(key, value, expiry = 0, raw = false)
446
+ raise MemCacheError, "Update of readonly cache" if @readonly
447
+ with_server(key) do |server, cache_key|
448
+       value = Marshal.dump value unless raw
+       logger.debug { "replace #{key} to #{server}: #{value ? value.to_s.size : 'nil'}" } if logger
+       command = "replace #{cache_key} 0 #{expiry} #{value.to_s.size}#{noreply}\r\n#{value}\r\n"
+
+       with_socket_management(server) do |socket|
+         socket.write command
+         break nil if @no_reply
+         result = socket.gets
+         raise_on_error_response! result
+         result
+       end
+     end
+   end
+
+   ##
+   # Append - 'add this data to an existing key after existing data'
+   # Please note the value is always passed to memcached as raw since it
+   # doesn't make a lot of sense to concatenate marshalled data together.
+   def append(key, value)
+     raise MemCacheError, "Update of readonly cache" if @readonly
+     with_server(key) do |server, cache_key|
+       logger.debug { "append #{key} to #{server}: #{value ? value.to_s.size : 'nil'}" } if logger
+       command = "append #{cache_key} 0 0 #{value.to_s.size}#{noreply}\r\n#{value}\r\n"
+
+       with_socket_management(server) do |socket|
+         socket.write command
+         break nil if @no_reply
+         result = socket.gets
+         raise_on_error_response! result
+         result
+       end
+     end
+   end
+
+   ##
+   # Prepend - 'add this data to an existing key before existing data'
+   # Please note the value is always passed to memcached as raw since it
+   # doesn't make a lot of sense to concatenate marshalled data together.
+   def prepend(key, value)
+     raise MemCacheError, "Update of readonly cache" if @readonly
+     with_server(key) do |server, cache_key|
+       logger.debug { "prepend #{key} to #{server}: #{value ? value.to_s.size : 'nil'}" } if logger
+       command = "prepend #{cache_key} 0 0 #{value.to_s.size}#{noreply}\r\n#{value}\r\n"
+
+       with_socket_management(server) do |socket|
+         socket.write command
+         break nil if @no_reply
+         result = socket.gets
+         raise_on_error_response! result
+         result
+       end
+     end
+   end
+
+   ##
+   # Removes +key+ from the cache in +expiry+ seconds.
+
+   def delete(key, expiry = 0)
+     raise MemCacheError, "Update of readonly cache" if @readonly
+     with_server(key) do |server, cache_key|
+       with_socket_management(server) do |socket|
+         logger.debug { "delete #{cache_key} on #{server}" } if logger
+         socket.write "delete #{cache_key} #{expiry}#{noreply}\r\n"
+         break nil if @no_reply
+         result = socket.gets
+         raise_on_error_response! result
+         result
+       end
+     end
+   end
+
+   ##
+   # Flush the cache from all memcache servers.
+   # A non-zero value for +delay+ will ensure that the flush
+   # is propagated slowly through your memcached server farm.
+   # The Nth server will be flushed (N-1)*delay seconds from now,
+   # asynchronously, so this method returns quickly.
+   # This prevents a huge database spike due to a total
+   # flush all at once.
+
+   def flush_all(delay=0)
+     raise MemCacheError, 'No active servers' unless active?
+     raise MemCacheError, "Update of readonly cache" if @readonly
+
+     begin
+       delay_time = 0
+       @servers.each do |server|
+         with_socket_management(server) do |socket|
+           logger.debug { "flush_all #{delay_time} on #{server}" } if logger
+           socket.write "flush_all #{delay_time}#{noreply}\r\n"
+           break nil if @no_reply
+           result = socket.gets
+           raise_on_error_response! result
+           result
+         end
+         delay_time += delay
+       end
+     rescue IndexError => err
+       handle_error nil, err
+     end
+   end
+
+   ##
+   # Reset the connection to all memcache servers. This should be called if
+   # there is a problem with a cache lookup that might have left the connection
+   # in a corrupted state.
+
+   def reset
+     @servers.each { |server| server.close }
+   end
+
+   ##
+   # Returns statistics for each memcached server. An explanation of the
+   # statistics can be found in the memcached docs:
+   #
+   # http://code.sixapart.com/svn/memcached/trunk/server/doc/protocol.txt
+   #
+   # Example:
+   #
+   #   >> pp CACHE.stats
+   #   {"localhost:11211"=>
+   #     {"bytes"=>4718,
+   #      "pid"=>20188,
+   #      "connection_structures"=>4,
+   #      "time"=>1162278121,
+   #      "pointer_size"=>32,
+   #      "limit_maxbytes"=>67108864,
+   #      "cmd_get"=>14532,
+   #      "version"=>"1.2.0",
+   #      "bytes_written"=>432583,
+   #      "cmd_set"=>32,
+   #      "get_misses"=>0,
+   #      "total_connections"=>19,
+   #      "curr_connections"=>3,
+   #      "curr_items"=>4,
+   #      "uptime"=>1557,
+   #      "get_hits"=>14532,
+   #      "total_items"=>32,
+   #      "rusage_system"=>0.313952,
+   #      "rusage_user"=>0.119981,
+   #      "bytes_read"=>190619}}
+   #   => nil
+
+   def stats
+     raise MemCacheError, "No active servers" unless active?
+     server_stats = {}
+
+     @servers.each do |server|
+       next unless server.alive?
+
+       with_socket_management(server) do |socket|
+         value = nil
+         socket.write "stats\r\n"
+         stats = {}
+         while line = socket.gets do
+           raise_on_error_response! line
+           break if line == "END\r\n"
+           if line =~ /\ASTAT ([\S]+) ([\w\.\:]+)/ then
+             name, value = $1, $2
+             stats[name] = case name
+                           when 'version'
+                             value
+                           when 'rusage_user', 'rusage_system' then
+                             seconds, microseconds = value.split(/:/, 2)
+                             microseconds ||= 0
+                             Float(seconds) + (Float(microseconds) / 1_000_000)
+                           else
+                             if value =~ /\A\d+\Z/ then
+                               value.to_i
+                             else
+                               value
+                             end
+                           end
+           end
+         end
+         server_stats["#{server.host}:#{server.port}"] = stats
+       end
+     end
+
+     raise MemCacheError, "No active servers" if server_stats.empty?
+     server_stats
+   end
+
+   ##
+   # Shortcut to get a value from the cache.
+
+   alias [] get
+
+   ##
+   # Shortcut to save a value in the cache. This method does not set an
+   # expiration on the entry. Use set to specify an explicit expiry.
+
+   def []=(key, value)
+     set key, value
+   end
+
+   protected unless $TESTING
+
+   ##
+   # Create a key for the cache, incorporating the namespace qualifier if
+   # requested.
+
+   def make_cache_key(key)
+     if namespace.nil? then
+       key
+     else
+       "#{@namespace}:#{key}"
+     end
+   end
+
+   ##
+   # Pick a server to handle the request based on a hash of the key.
+
+   def get_server_for_key(key, options = {})
+     raise ArgumentError, "illegal character in key #{key.inspect}" if
+       key =~ /\s/
+     raise ArgumentError, "key too long #{key.inspect}" if key.length > 250
+     raise MemCacheError, "No servers available" if @servers.empty?
+     return @servers.first if @servers.length == 1
+
+     # For an unknown reason, the hashing differs between memcache-client.rb
+     # and the original Cache::Memcached.pm.
+
+     if @persistent_hashing
+       hkey = Zlib.crc32 key
+
+       20.times do |try|
+         entryidx = Continuum.binary_search(@continuum, hkey)
+         server = @continuum[entryidx].server
+         return server if server.alive?
+         break unless failover
+         hkey = Zlib.crc32 "#{try}#{key}"
+       end
+     else
+       hkey = (Zlib.crc32(key) >> 16) & 0x7fff
+
+       20.times do |try|
+         server = @buckets[hkey % @buckets.length]
+         return server if server.alive?
+         hkey += (Zlib.crc32("#{try}#{key}") >> 16) & 0x7fff
+       end
+     end
+
+     raise MemCacheError, "No servers available"
+   end
+
+   ##
+   # Performs a raw decr for +cache_key+ from +server+. Returns nil if not
+   # found.
+
+   def cache_decr(server, cache_key, amount)
+     with_socket_management(server) do |socket|
+       socket.write "decr #{cache_key} #{amount}#{noreply}\r\n"
+       break nil if @no_reply
+       text = socket.gets
+       raise_on_error_response! text
+       return nil if text == "NOT_FOUND\r\n"
+       return text.to_i
+     end
+   end
+
+   ##
+   # Fetches the raw data for +cache_key+ from +server+. Returns nil on cache
+   # miss.
+
+   def cache_get(server, cache_key)
+     with_socket_management(server) do |socket|
+       socket.write "get #{cache_key}\r\n"
+       keyline = socket.gets # "VALUE <key> <flags> <bytes>\r\n"
+
+       if keyline.nil? then
+         server.close
+         raise MemCacheError, "lost connection to #{server.host}:#{server.port}"
+       end
+
+       raise_on_error_response! keyline
+       return nil if keyline == "END\r\n"
+
+       unless keyline =~ /(\d+)\r/ then
+         server.close
+         raise MemCacheError, "unexpected response #{keyline.inspect}"
+       end
+       value = socket.read $1.to_i
+       socket.read 2 # "\r\n"
+       socket.gets   # "END\r\n"
+       return value
+     end
+   end
+
+   ##
+   # Fetches +key+ and returns [value, cas_token] for use with a later
+   # compare-and-swap. The value is unmarshalled unless +raw+ is true.
+
+   def gets(key, raw = nil)
+     with_server(key) do |server, cache_key|
+       logger.debug { "gets #{key} from #{server.inspect}" } if logger
+       result = with_socket_management(server) do |socket|
+         socket.write "gets #{cache_key}\r\n"
+         keyline = socket.gets # "VALUE <key> <flags> <bytes> <cas token>\r\n"
+
+         if keyline.nil? then
+           server.close
+           raise MemCacheError, "lost connection to #{server.host}:#{server.port}"
+         end
+
+         raise_on_error_response! keyline
+         return nil if keyline == "END\r\n"
+
+         unless keyline =~ /(\d+) (\w+)\r/ then
+           server.close
+           raise MemCacheError, "unexpected response #{keyline.inspect}"
+         end
+         value = socket.read $1.to_i
+         socket.read 2 # "\r\n"
+         socket.gets   # "END\r\n"
+         logger.debug { "gets #{key} from #{server.inspect}: #{value ? value.to_s.size : 'nil'}" } if logger
+         [value, $2]
+       end
+       result[0] = Marshal.load result[0] unless (raw == nil && @raw) || raw
+       result
+     end
+   rescue TypeError => err
+     handle_error nil, err
+   end
+
+   ##
+   # Fetches +cache_keys+ from +server+ using a multi-get.
+
+   def cache_get_multi(server, cache_keys)
+     with_socket_management(server) do |socket|
+       values = {}
+       socket.write "get #{cache_keys}\r\n"
+
+       while keyline = socket.gets do
+         return values if keyline == "END\r\n"
+         raise_on_error_response! keyline
+
+         unless keyline =~ /\AVALUE (.+) (.+) (.+)/ then
+           server.close
+           raise MemCacheError, "unexpected response #{keyline.inspect}"
+         end
+
+         key, data_length = $1, $3
+         values[$1] = socket.read data_length.to_i
+         socket.read(2) # "\r\n"
+       end
+
+       server.close
+       raise MemCacheError, "lost connection to #{server.host}:#{server.port}" # TODO: retry here too
+     end
+   end
+
+   ##
+   # Performs a raw incr for +cache_key+ from +server+. Returns nil if not
+   # found.
+
+   def cache_incr(server, cache_key, amount)
+     with_socket_management(server) do |socket|
+       socket.write "incr #{cache_key} #{amount}#{noreply}\r\n"
+       break nil if @no_reply
+       text = socket.gets
+       raise_on_error_response! text
+       return nil if text == "NOT_FOUND\r\n"
+       return text.to_i
+     end
+   end
+
+   ##
+   # Gets or creates a socket connected to the given server, and yields it
+   # to the block, wrapped in a mutex synchronization if @multithread is true.
+   #
+   # If a socket error (SocketError, SystemCallError, IOError) or protocol error
+   # (MemCacheError) is raised by the block, closes the socket, attempts to
+   # connect again, and retries the block (once). If an error is again raised,
+   # reraises it as MemCacheError.
+   #
+   # If unable to connect to the server (or if in the reconnect wait period),
+   # raises MemCacheError. Note that the socket connect code marks a server
+   # dead for a timeout period, so retrying does not apply to connection attempt
+   # failures (but does still apply to unexpectedly lost connections etc.).
+
+   def with_socket_management(server, &block)
+
+     @mutex.lock if @multithread
+     retried = false
+
+     begin
+       socket = server.socket
+
+       # Raise an IndexError to show this server is out of whack. If we're inside
+       # a with_server block, we'll catch it and attempt to restart the operation.
+
+       raise IndexError, "No connection to server (#{server.status})" if socket.nil?
+
+       block.call(socket)
+
+     rescue SocketError, Errno::EAGAIN, Timeout::Error => err
+       logger.warn { "Socket failure: #{err.message}" } if logger
+       server.mark_dead(err)
+       handle_error(server, err)
+
+     rescue MemCacheError, SystemCallError, IOError => err
+       logger.warn { "Generic failure: #{err.class.name}: #{err.message}" } if logger
+       handle_error(server, err) if retried || socket.nil?
+       retried = true
+       retry
+     end
+   ensure
+     @mutex.unlock if @multithread
+   end
+
+   def with_server(key)
+     retried = false
+     begin
+       server, cache_key = request_setup(key)
+       yield server, cache_key
+     rescue IndexError => e
+       logger.warn { "Server failed: #{e.class.name}: #{e.message}" } if logger
+       if !retried && @servers.size > 1
+         logger.info { "Connection to server #{server.inspect} DIED! Retrying operation..." } if logger
+         retried = true
+         retry
+       end
+       handle_error(nil, e)
+     end
+   end
+
+   ##
+   # Handles +error+ from +server+.
+
+   def handle_error(server, error)
+     raise error if error.is_a?(MemCacheError)
+     server.close if server
+     new_error = MemCacheError.new error.message
+     new_error.set_backtrace error.backtrace
+     raise new_error
+   end
+
+   def noreply
+     @no_reply ? ' noreply' : ''
+   end
+
+   ##
+   # Performs setup for making a request with +key+ from memcached. Returns
+   # the server to fetch the key from and the complete key to use.
+
+   def request_setup(key)
+     raise MemCacheError, 'No active servers' unless active?
+     cache_key = make_cache_key key
+     server = get_server_for_key cache_key
+     return server, cache_key
+   end
+
+   def raise_on_error_response!(response)
+     if response =~ /\A(?:CLIENT_|SERVER_)?ERROR(.*)/
+       raise MemCacheError, $1.strip
+     end
+   end
+
+   def create_continuum_for(servers)
+     total_weight = servers.inject(0) { |memo, srv| memo + srv.weight }
+     continuum = []
+
+     servers.each do |server|
+       entry_count_for(server, servers.size, total_weight).times do |idx|
+         hash = Digest::SHA1.hexdigest("#{server.host}:#{server.port}:#{idx}")
+         value = Integer("0x#{hash[0..7]}")
+         continuum << Continuum::Entry.new(value, server)
+       end
+     end
+
+     continuum.sort { |a, b| a.value <=> b.value }
+   end
+
+   def entry_count_for(server, total_servers, total_weight)
+     ((total_servers * Continuum::POINTS_PER_SERVER * server.weight) / Float(total_weight)).floor
+   end
+
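As a side note on the weighting arithmetic in entry_count_for above: each server receives a share of `total_servers * POINTS_PER_SERVER` continuum points proportional to its weight. The following self-contained sketch (the method takes a bare weight instead of a server object, purely for illustration) makes the math concrete:

```ruby
# Standalone sketch of the continuum point allocation (illustrative names).
POINTS_PER_SERVER = 160 # libmemcached's default, as noted in the source

def entry_count_for(weight, total_servers, total_weight)
  # This server's share of (total_servers * 160) points, proportional to weight.
  ((total_servers * POINTS_PER_SERVER * weight) / Float(total_weight)).floor
end

entry_count_for(1, 2, 4) # weight 1 of 4 across 2 servers => 80 points
entry_count_for(3, 2, 4) # weight 3 of 4 across 2 servers => 240 points
```

With equal weights every server gets exactly POINTS_PER_SERVER points; heavier servers claim proportionally more of the ring and therefore more keys.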
+   ##
+   # This class represents a memcached server instance.
+
+   class Server
+
+     ##
+     # The amount of time to wait before attempting to re-establish a
+     # connection with a server that is marked dead.
+
+     RETRY_DELAY = 30.0
+
+     ##
+     # The host the memcached server is running on.
+
+     attr_reader :host
+
+     ##
+     # The port the memcached server is listening on.
+
+     attr_reader :port
+
+     ##
+     # The weight given to the server.
+
+     attr_reader :weight
+
+     ##
+     # The time of next retry if the connection is dead.
+
+     attr_reader :retry
+
+     ##
+     # A text status string describing the state of the server.
+
+     attr_reader :status
+
+     attr_reader :logger
+
+     ##
+     # Create a new MemCache::Server object for the memcached instance
+     # listening on the given host and port, weighted by the given weight.
+
+     def initialize(memcache, host, port = DEFAULT_PORT, weight = DEFAULT_WEIGHT)
+       raise ArgumentError, "No host specified" if host.nil? or host.empty?
+       raise ArgumentError, "No port specified" if port.nil? or port.to_i.zero?
+
+       @host = host
+       @port = port.to_i
+       @weight = weight.to_i
+
+       @sock = nil
+       @retry = nil
+       @status = 'NOT CONNECTED'
+       @timeout = memcache.timeout
+       @logger = memcache.logger
+     end
+
+     ##
+     # Return a string representation of the server object.
+
+     def inspect
+       "<MemCache::Server: %s:%d [%d] (%s)>" % [@host, @port, @weight, @status]
+     end
+
+     ##
+     # Check whether the server connection is alive. This will cause the
+     # socket to attempt to connect if it isn't already connected, or if
+     # the server was previously marked as down and the retry time has
+     # been exceeded.
+
+     def alive?
+       !!socket
+     end
+
+     ##
+     # Try to connect to the memcached server targeted by this object.
+     # Returns the connected socket object on success or nil on failure.
+
+     def socket
+       return @sock if @sock and not @sock.closed?
+
+       @sock = nil
+
+       # If the host was dead, don't retry for a while.
+       return if @retry and @retry > Time.now
+
+       # Attempt to connect if not already connected.
+       begin
+         @sock = connect_to(@host, @port, @timeout)
+         @sock.setsockopt Socket::IPPROTO_TCP, Socket::TCP_NODELAY, 1
+         @retry = nil
+         @status = 'CONNECTED'
+       rescue SocketError, SystemCallError, IOError => err
+         logger.warn { "Unable to open socket: #{err.class.name}, #{err.message}" } if logger
+         mark_dead err
+       end
+
+       return @sock
+     end
+
+     def connect_to(host, port, timeout=nil)
+       addr = Socket.getaddrinfo(host, nil)
+       sock = Socket.new(Socket.const_get(addr[0][0]), Socket::SOCK_STREAM, 0)
+
+       if timeout
+         secs = Integer(timeout)
+         usecs = Integer((timeout - secs) * 1_000_000)
+         optval = [secs, usecs].pack("l_2")
+         sock.setsockopt Socket::SOL_SOCKET, Socket::SO_RCVTIMEO, optval
+         sock.setsockopt Socket::SOL_SOCKET, Socket::SO_SNDTIMEO, optval
+
+         # Socket timeouts don't work for more complex IO operations
+         # like gets, which sit on top of read. We need to fall back to
+         # the standard Timeout mechanism.
+         sock.instance_eval <<-EOR
+           alias :blocking_gets :gets
+           def gets
+             MemCacheTimer.timeout(#{timeout}) do
+               self.blocking_gets
+             end
+           end
+         EOR
+       end
+       sock.connect(Socket.pack_sockaddr_in(port, addr[0][3]))
+       sock
+     end
+
+     ##
+     # Close the connection to the memcached server targeted by this
+     # object. The server is not considered dead.
+
+     def close
+       @sock.close if @sock && !@sock.closed?
+       @sock = nil
+       @retry = nil
+       @status = "NOT CONNECTED"
+     end
+
+     ##
+     # Mark the server as dead and close its socket.
+
+     def mark_dead(error)
+       @sock.close if @sock && !@sock.closed?
+       @sock = nil
+       @retry = Time.now + RETRY_DELAY
+
+       reason = "#{error.class.name}: #{error.message}"
+       @status = sprintf "%s:%s DEAD (%s), will retry at %s", @host, @port, reason, @retry
+       @logger.info { @status } if @logger
+     end
+
+   end
+
+   ##
+   # Base MemCache exception class.
+
+   class MemCacheError < RuntimeError; end
+
+ end
+
+ module Continuum
+   POINTS_PER_SERVER = 160 # this is the default in libmemcached
+
+   # Find the closest index in the continuum with a value <= the given value.
+   def self.binary_search(ary, value, &block)
+     upper = ary.size - 1
+     lower = 0
+     idx = 0
+
+     while (lower <= upper) do
+       idx = (lower + upper) / 2
+       comp = ary[idx].value <=> value
+
+       if comp == 0
+         return idx
+       elsif comp > 0
+         upper = idx - 1
+       else
+         lower = idx + 1
+       end
+     end
+     return upper
+   end
+
+   class Entry
+     attr_reader :value
+     attr_reader :server
+
+     def initialize(val, srv)
+       @value = val
+       @server = srv
+     end
+
+     def inspect
+       "<#{value}, #{server.host}:#{server.port}>"
+     end
+   end
+ end
+ require 'continuum_native'
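For reference, Continuum.binary_search above is a lower-bound lookup over the sorted hash ring: it returns the index of the last entry whose value is <= the target, or -1 when the target precedes every entry, which Ruby's negative indexing then wraps to the last entry of the ring. A self-contained sketch of the same algorithm (method and struct names here are illustrative, not the library's API):

```ruby
# Self-contained copy of the ring's lower-bound binary search (illustrative names).
Entry = Struct.new(:value, :server)

def ring_binary_search(ary, value)
  upper = ary.size - 1
  lower = 0
  while lower <= upper
    idx = (lower + upper) / 2
    comp = ary[idx].value <=> value
    if comp == 0
      return idx
    elsif comp > 0
      upper = idx - 1
    else
      lower = idx + 1
    end
  end
  upper # last index with value <= target; -1 wraps to the end of the ring
end

ring = [10, 20, 30, 40].map { |v| Entry.new(v, "server-#{v}") }
ring_binary_search(ring, 25) # => 1 (the entry with value 20)
ring_binary_search(ring, 30) # => 2 (exact match)
ring_binary_search(ring, 5)  # => -1 (ring[-1] is the entry with value 40)
```

The -1 wrap-around is what makes the continuum behave as a ring: a key that hashes below the smallest point is served by the server owning the largest point.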