brontes3d-memcache-client 1.7.4.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
data/FAQ.rdoc ADDED
@@ -0,0 +1,31 @@
1
+ = Memcache-client FAQ
2
+
3
+ == Does memcache-client work with Ruby 1.9?
4
+
5
+ Yes, Ruby 1.9 is supported. The test suite should pass completely on 1.8.6 and 1.9.1.
6
+
7
+
8
+ == I'm seeing "execution expired" or "time's up!" errors, what's that all about?
9
+
10
+ memcache-client 1.6.x+ now has socket operations timed out by default. This is to prevent
11
+ the Ruby process from hanging if memcached or starling get into a bad state, which has been
12
+ seen in production by both 37signals and FiveRuns. The default timeout is 0.5 seconds, which
13
+ should be more than enough time under normal circumstances. It's possible to hit a storm of
14
+ concurrent events which cause this timer to expire: a large Ruby VM can cause the GC to take
15
+ a while, while also storing a large (500KB-1MB) value, for example.
16
+
17
+ You can increase the timeout or disable it completely with the following configuration:
18
+
19
+ Rails:
20
+ config.cache_store = :mem_cache_store, 'server1', 'server2', { :timeout => nil } # no timeout
21
+
22
+ native:
23
+ MemCache.new ['server1', 'server2'], { :timeout => 1.0 } # 1 second timeout
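+
+ If you leave timeouts enabled, one reasonable pattern (a sketch only; the key below is a
+ placeholder) is to rescue MemCache::MemCacheError around cache calls and treat a timed-out
+ read as a cache miss:
+
+   begin
+     value = CACHE.get 'some-key'
+   rescue MemCache::MemCacheError => e
+     # timeouts and socket failures are re-raised as MemCacheError
+     value = nil
+   end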
24
+
25
+
26
+ == Isn't Evan Weaver's memcached gem faster?
27
+
28
+ The latest version of memcache-client is anywhere from 33% to 100% slower than memcached in various benchmarks. Keep in mind this means that 10,000 get requests take 1.8 sec instead of 1.2 seconds.
29
+ In practice, memcache-client is unlikely to be a bottleneck in your system but there is always going
30
+ to be an overhead to pure Ruby. Evan's memcached gem is a thin wrapper around a C library, which
31
+ makes it very fast but also difficult to install for some people.
data/History.rdoc ADDED
@@ -0,0 +1,230 @@
1
+
2
+ = 1.7.4.1 (2009-08-06)
3
+
4
+ * Move calls to Marshal.dump outside with_server(). This fixes an issue where,
5
+ when running with multiple servers and one of those servers fails, data being
6
+ put into memcache gets marshaled twice. (Rajiv Aaron Manglani rajiv@brontes3d.com)
7
+
8
+ = 1.7.4 (2009-06-09)
9
+
10
+ * Fix issue with raising timeout errors.
11
+
12
+ = 1.7.3 (2009-06-06)
13
+
14
+ * Remove SystemTimer support, refactor I/O to use nonblocking operations. Speeds up
15
+ performance approx 100%. Timeouts basically have no overhead now! (tenderlove)
16
+ * Update load logic to support SystemTimer running in Ruby Enterprise Edition. Thanks
17
+ to splattael on github for the comment.
18
+
19
+ = 1.7.2 (2009-04-12)
20
+
21
+ * Rollback socket timeout optimization. It does not work on all operating systems
22
+ and was a support headache.
23
+
24
+ = 1.7.1 (2009-03-28)
25
+
26
+ * Performance optimizations:
27
+ * Rely on higher performance operating system socket timeouts for low-level socket
28
+ read/writes where possible, instead of the (slower) SystemTimer or (slowest,
29
+ unreliable) Timeout libraries.
30
+ * The native binary search is back! The recent performance tuning made the binary search
31
+ a bottleneck again so it had to return. It uses RubyInline to compile the native extension and
32
+ silently falls back to pure Ruby if anything fails. Make sure you run:
33
+ `gem install RubyInline` if you want ultimate performance.
34
+ * The changes make memcache-client 100% faster than 1.7.0 in my performance test on Ruby 1.8.6:
35
+ 15 sec -> 8 sec.
36
+ * Fix several logging issues.
37
+
38
+ = 1.7.0 (2009-03-08)
39
+
40
+ * Go through the memcached protocol document and implement any commands not already implemented:
41
+ - cas
42
+ - append
43
+ - prepend
44
+ - replace
45
+
46
+ Append and prepend only work with raw data since it makes no sense to concatenate two Marshalled
47
+ values together. The cas functionality should be considered a prototype. Since I don't have an
48
+ application which uses +cas+, I'm not sure what semantic sugar the API should provide. Should it
49
+ retry if the value was changed? Should it massage the returned string into true/false? Feedback
50
+ would be appreciated.
51
+
52
+ * Add fetch method which provides a method very similar to ActiveSupport::Cache::Store#fetch,
53
+ basically a wrapper around get and add. (djanowski)
54
+
55
+ * Implement the flush_all delay parameter, to allow a large memcached farm to be flushed gradually.
56
+
57
+ * Implement the noreply flag, which tells memcached not to reply in operations which don't
58
+ need a reply, i.e. set/add/delete/flush_all.
59
+
60
+ * The only known functionality not implemented anymore is the <flags> parameter to the storage
61
+ commands. This would require modification of the API method signatures. If someone can come
62
+ up with a clean way to implement it, I would be happy to consider including it.
63
+
64
+ = 1.6.5 (2009-02-27)
65
+
66
+ * Change memcache-client to multithreaded by default. The mutex does not add significant
67
+ overhead and it is far too easy, now that Sinatra, Rails and Merb are all thread-safe, to
68
+ use memcache-client in a thread-unsafe manner. Remove some unnecessary mutexing and add
69
+ a test to verify heavily multithreaded usage does not act unexpectedly.
70
+
71
+ * Add optional support for the SystemTimer gem when running on Ruby 1.8.x. This gem is
72
+ highly recommended - it ensures timeouts actually work and halves the overhead of using
73
+ timeouts. Using this gem, Ruby 1.8.x is actually faster in my performance tests
74
+ than Ruby 1.9.x. Just "gem install SystemTimer" and it should be picked up automatically.
75
+
76
+ = 1.6.4 (2009-02-19)
77
+
78
+ * Remove native code altogether. The speedup was only 10% on Ruby 1.8.6 and did not work
79
+ on Ruby 1.9.1.
80
+
81
+ * Removed memcache_util.rb from the distribution. If you are using it, please copy the code
82
+ into your own project. The file will live in the github repository for a few more months
83
+ for this purpose. http://github.com/mperham/memcache-client/raw/7a276089aa3c914e47e3960f9740ac7377204970/lib/memcache_util.rb
84
+
85
+ * Roll continuum.rb into memcache.rb. The project is again a single Ruby file, with no dependencies.
86
+
87
+ = 1.6.3 (2009-02-14)
88
+
89
+ * Remove gem native extension in preference to RubyInline. This allows the gem to install
90
+ and work on JRuby and Ruby 1.8.5 when the native code fails to compile.
91
+
92
+ = 1.6.2 (2009-02-04)
93
+
94
+ * Validate that values are less than one megabyte in size.
95
+
96
+ * Refactor error handling in get_multi to handle server failures and return what values
97
+ we could successfully retrieve.
98
+
99
+ * Add optional logging parameter for debugging and tracing.
100
+
101
+ * First official release since 1.5.0. Thanks to Eric Hodel for turning over the project to me!
102
+ New project home page: http://github.com/mperham/memcache-client
103
+
104
+ = 1.6.1 (2009-01-28)
105
+
106
+ * Add option to disable socket timeout support. Socket timeout has a significant performance
107
+ penalty (approx 3x slower than without in Ruby 1.8.6). You can turn off the timeouts if you
108
+ need absolute performance, but by default timeouts are enabled. The performance
109
+ penalty is much lower in Ruby 1.8.7, 1.9 and JRuby. (mperham)
110
+
111
+ * Add option to disable server failover. Failover can lead to "split-brain" caches that
112
+ return stale data. (mperham)
113
+
114
+ * Implement continuum binary search in native code for performance reasons. Pure Ruby
115
+ is available for platforms like JRuby or Rubinius which can't use C extensions. (mperham)
116
+
117
+ * Fix #add with raw=true (iamaleksey)
118
+
119
+ = 1.6.0
120
+
121
+ * Implement a consistent hashing algorithm, as described in libketama.
122
+ This dramatically reduces the cost of adding or removing servers dynamically
123
+ as keys are much more likely to map to the same server.
124
+
125
+ Take a scenario where we add a fourth server. With a naive modulo algorithm, about
126
+ 25% of the keys will map to the same server. In other words, 75% of your memcached
127
+ content suddenly becomes invalid. With a consistent algorithm, 75% of the keys
128
+ will map to the same server as before - only 25% will be invalidated. (mperham)
129
+
130
+ * Implement socket timeouts, should fix rare cases of very bad things happening
131
+ in production at 37signals and FiveRuns. (jseirles)
132
+
133
+ = 1.5.0.5
134
+
135
+ * Remove native C CRC32_ITU_T extension in favor of Zlib's crc32 method.
136
+ memcache-client is now pure Ruby again and will work with JRuby and Rubinius.
137
+
138
+ = 1.5.0.4
139
+
140
+ * Get test suite working again (packagethief)
141
+ * Ruby 1.9 compatibility fixes (packagethief, mperham)
142
+ * Consistently return server responses and check for errors (packagethief)
143
+ * Properly calculate CRC in Ruby 1.9 strings (mperham)
144
+ * Drop rspec in favor of test/unit, for 1.9 compat (mperham)
145
+
146
+ = 1.5.0.3 (FiveRuns fork)
147
+
148
+ * Integrated ITU-T CRC32 operation in native C extension for speed. Thanks to Justin Balthrop!
149
+
150
+ = 1.5.0.2 (FiveRuns fork)
151
+
152
+ * Add support for seamless failover between servers. If one server connection dies,
153
+ the client will retry the operation on another server before giving up.
154
+
155
+ * Merge Will Bryant's socket retry patch.
156
+ http://willbryant.net/software/2007/12/21/ruby-memcache-client-reconnect-and-retry
157
+
158
+ = 1.5.0.1 (FiveRuns fork)
159
+
160
+ * Fix set not handling client disconnects.
161
+ http://dev.twitter.com/2008/02/solving-case-of-missing-updates.html
162
+
163
+ = 1.5.0
164
+
165
+ * Add MemCache#flush_all command. Patch #13019 and bug #10503. Patches
166
+ submitted by Sebastian Delmont and Rick Olson.
167
+ * Type-cast data returned by MemCache#stats. Patch #10505 submitted by
168
+ Sebastian Delmont.
169
+
170
+ = 1.4.0
171
+
172
+ * Fix bug #10371, #set does not check response for server errors.
173
+ Submitted by Ben VandenBos.
174
+ * Fix bug #12450, set TCP_NODELAY socket option. Patch by Chris
175
+ McGrath.
176
+ * Fix bug #10704, missing #add method. Patch by Jamie Macey.
177
+ * Fix bug #10371, handle socket EOF in cache_get. Submitted by Ben
178
+ VandenBos.
179
+
180
+ = 1.3.0
181
+
182
+ * Apply patch #6507, add stats command. Submitted by Tyler Kovacs.
183
+ * Apply patch #6509, parallel implementation of #get_multi. Submitted
184
+ by Tyler Kovacs.
185
+ * Validate keys. Disallow spaces in keys or keys that are too long.
186
+ * Perform more validation of server responses. MemCache now reports
187
+ errors if the socket was not in an expected state. (Please file
188
+ bugs if you find some.)
189
+ * Add #incr and #decr.
190
+ * Add raw argument to #set and #get to retrieve #incr and #decr
191
+ values.
192
+ * Also put on MemCacheError when using Cache::get with block.
193
+ * memcache.rb no longer sets $TESTING to a true value if it was
194
+ previously defined. Bug #8213 by Matijs van Zuijlen.
195
+
196
+ = 1.2.1
197
+
198
+ * Fix bug #7048, MemCache#servers= referenced changed local variable.
199
+ Submitted by Justin Dossey.
200
+ * Fix bug #7049, MemCache#initialize resets @buckets. Submitted by
201
+ Justin Dossey.
202
+ * Fix bug #6232, Make Cache::Get work with a block only when nil is
203
+ returned. Submitted by Jon Evans.
204
+ * Moved to the seattlerb project.
205
+
206
+ = 1.2.0
207
+
208
+ NOTE: This version will store keys in different places than previous
209
+ versions! Be prepared for some thrashing while memcached sorts itself
210
+ out!
211
+
212
+ * Fixed multithreaded operations, bugs 5994 and 5989.
213
+ Thanks to Blaine Cook, Erik Hetzner, Elliot Smith, Dave Myron (and
214
+ possibly others I have forgotten).
215
+ * Made memcached interoperable with other memcached libraries, bug
216
+ 4509. Thanks to anonymous.
217
+ * Added get_multi to match Perl/etc APIs
218
+
219
+ = 1.1.0
220
+
221
+ * Added some tests
222
+ * Sped up non-multithreaded and multithreaded operation
223
+ * More Ruby-memcache compatibility
224
+ * More RDoc
225
+ * Switched to Hoe
226
+
227
+ = 1.0.0
228
+
229
+ Birthday!
230
+
data/LICENSE.txt ADDED
@@ -0,0 +1,28 @@
1
+ Copyright 2005-2009 Bob Cottrell, Eric Hodel, Mike Perham.
2
+ All rights reserved.
3
+
4
+ Redistribution and use in source and binary forms, with or without
5
+ modification, are permitted provided that the following conditions
6
+ are met:
7
+
8
+ 1. Redistributions of source code must retain the above copyright
9
+ notice, this list of conditions and the following disclaimer.
10
+ 2. Redistributions in binary form must reproduce the above copyright
11
+ notice, this list of conditions and the following disclaimer in the
12
+ documentation and/or other materials provided with the distribution.
13
+ 3. Neither the names of the authors nor the names of their contributors
14
+ may be used to endorse or promote products derived from this software
15
+ without specific prior written permission.
16
+
17
+ THIS SOFTWARE IS PROVIDED BY THE AUTHORS ``AS IS'' AND ANY EXPRESS
18
+ OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
19
+ WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
20
+ ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHORS OR CONTRIBUTORS BE
21
+ LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY,
22
+ OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT
23
+ OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
24
+ BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
25
+ WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
26
+ OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE,
27
+ EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
28
+
data/README.rdoc ADDED
@@ -0,0 +1,51 @@
1
+ = memcache-client
2
+
3
+ A Ruby library for accessing memcached.
4
+
5
+ Source:
6
+
7
+ http://github.com/mperham/memcache-client
8
+
9
+ == Installing memcache-client
10
+
11
+ Just install the gem:
12
+
13
+ $ sudo gem install memcache-client
14
+
15
+ == Using memcache-client
16
+
17
+ With one server:
18
+
19
+ CACHE = MemCache.new 'localhost:11211'
20
+
21
+ Or with multiple servers:
22
+
23
+ CACHE = MemCache.new %w[one.example.com:11211 two.example.com:11211]
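+
+ Once a client is created, values can be stored and read back through the MemCache API.
+ A minimal sketch (the keys, values and expiry times below are placeholders):
+
+   CACHE.set 'greeting', 'hello', 60         # store for 60 seconds
+   CACHE.get 'greeting'                      # => "hello"
+   CACHE.fetch('counter', 300) { 0 }         # get, or add the block's result on a miss
+   CACHE.delete 'greeting'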
24
+
25
+
26
+ == Tuning memcache-client
27
+
28
+ The MemCache.new method takes a number of options which can be useful at times. Please
29
+ read the source comments there for an overview. If you are using Ruby 1.8.x and using
30
+ multiple memcached servers, you should install the RubyInline gem for ultimate performance.
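+
+ For example, two commonly used options (the values shown are illustrative, not recommendations):
+
+   CACHE = MemCache.new %w[one.example.com:11211 two.example.com:11211],
+                        :namespace => 'my_app',  # prepended to every key
+                        :timeout   => 0.25       # socket timeout in seconds, nil disables it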
31
+
32
+
33
+ == Using memcache-client with Rails
34
+
35
+ Rails 2.1+ includes memcache-client 1.5.0 out of the box. See ActiveSupport::Cache::MemCacheStore
36
+ and the Rails.cache method for more details. Rails 2.3+ will use the latest memcache-client
37
+ gem installed.
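+
+ For example, in a Rails 2.x configuration file (the hosts and options here are placeholders):
+
+   config.cache_store = :mem_cache_store, 'one.example.com:11211', 'two.example.com:11211',
+                        { :namespace => 'my_app', :timeout => 0.25 }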
38
+
39
+
40
+ == Questions?
41
+
42
+ memcache-client is maintained by Mike Perham and was originally written by Bob Cottrell,
43
+ Eric Hodel and the seattle.rb crew.
44
+
45
+ Email:: mailto:mperham@gmail.com
46
+ Twitter:: mperham[http://twitter.com/mperham]
47
+ WWW:: http://mikeperham.com
48
+
49
+ If my work on memcache-client is something you support, please take a moment to
50
+ recommend me at WWR[http://workingwithrails.com/person/10797-mike-perham]. I'm not
51
+ asking for money, just an electronic "thumbs up".
data/Rakefile ADDED
@@ -0,0 +1,35 @@
1
+ # vim: syntax=Ruby
2
+ require 'rubygems'
3
+ require 'rake/rdoctask'
4
+ require 'rake/testtask'
5
+
6
+ task :gem do
7
+ sh "gem build memcache-client.gemspec"
8
+ end
9
+
10
+ task :install => [:gem] do
11
+ sh "sudo gem install memcache-client-*.gem"
12
+ end
13
+
14
+ task :clean do
15
+ sh "rm -f memcache-client-*.gem"
16
+ end
17
+
18
+ task :publish => [:clean, :gem, :install] do
19
+ require 'lib/memcache'
20
+ sh "rubyforge add_release seattlerb memcache-client #{MemCache::VERSION} memcache-client-#{MemCache::VERSION}.gem"
21
+ end
22
+
23
+ Rake::RDocTask.new do |rd|
24
+ rd.main = "README.rdoc"
25
+ rd.rdoc_files.include("README.rdoc", "FAQ.rdoc", "History.rdoc", "lib/memcache.rb")
26
+ rd.rdoc_dir = 'doc'
27
+ end
28
+
29
+ Rake::TestTask.new
30
+
31
+ task :default => :test
32
+
33
+ task :rcov do
34
+ `rcov -Ilib test/*.rb`
35
+ end
@@ -0,0 +1,41 @@
1
+ module Continuum
2
+
3
+ class << self
4
+
5
+ # Native extension to perform the binary search within the continuum
6
+ # space. There's a pure ruby version in memcache.rb so this is purely
7
+ # optional for performance and only necessary if you are using multiple
8
+ # memcached servers.
9
+ begin
10
+ require 'inline'
11
+ inline do |builder|
12
+ builder.c <<-EOM
13
+ int binary_search(VALUE ary, unsigned int r) {
14
+ int upper = RARRAY_LEN(ary) - 1;
15
+ int lower = 0;
16
+ int idx = 0;
17
+ ID value = rb_intern("value");
18
+
19
+ while (lower <= upper) {
20
+ idx = (lower + upper) / 2;
21
+
22
+ VALUE continuumValue = rb_funcall(RARRAY_PTR(ary)[idx], value, 0);
23
+ unsigned int l = NUM2UINT(continuumValue);
24
+ if (l == r) {
25
+ return idx;
26
+ }
27
+ else if (l > r) {
28
+ upper = idx - 1;
29
+ }
30
+ else {
31
+ lower = idx + 1;
32
+ }
33
+ }
34
+ return upper;
35
+ }
36
+ EOM
37
+ end
38
+ rescue Exception => e
39
+ end
40
+ end
41
+ end
data/lib/memcache.rb ADDED
@@ -0,0 +1,1112 @@
1
+ $TESTING = defined?($TESTING) && $TESTING
2
+
3
+ require 'socket'
4
+ require 'thread'
5
+ require 'zlib'
6
+ require 'digest/sha1'
7
+ require 'net/protocol'
8
+
9
+ ##
10
+ # A Ruby client library for memcached.
11
+ #
12
+
13
+ class MemCache
14
+
15
+ ##
16
+ # The version of MemCache you are using.
17
+
18
+ VERSION = '1.7.4'
19
+
20
+ ##
21
+ # Default options for the cache object.
22
+
23
+ DEFAULT_OPTIONS = {
24
+ :namespace => nil,
25
+ :readonly => false,
26
+ :multithread => true,
27
+ :failover => true,
28
+ :timeout => 0.5,
29
+ :logger => nil,
30
+ :no_reply => false,
31
+ :check_size => true
32
+ }
33
+
34
+ ##
35
+ # Default memcached port.
36
+
37
+ DEFAULT_PORT = 11211
38
+
39
+ ##
40
+ # Default memcached server weight.
41
+
42
+ DEFAULT_WEIGHT = 1
43
+
44
+ ##
45
+ # The namespace for this instance
46
+
47
+ attr_reader :namespace
48
+
49
+ ##
50
+ # The multithread setting for this instance
51
+
52
+ attr_reader :multithread
53
+
54
+ ##
55
+ # The servers this client talks to. Play at your own peril.
56
+
57
+ attr_reader :servers
58
+
59
+ ##
60
+ # Socket timeout limit with this client, defaults to 0.5 sec.
61
+ # Set to nil to disable timeouts.
62
+
63
+ attr_reader :timeout
64
+
65
+ ##
66
+ # Should the client try to failover to another server if the
67
+ # first server is down? Defaults to true.
68
+
69
+ attr_reader :failover
70
+
71
+ ##
72
+ # Log debug/info/warn/error to the given Logger, defaults to nil.
73
+
74
+ attr_reader :logger
75
+
76
+ ##
77
+ # Don't send or look for a reply from the memcached server for write operations.
78
+ # Please note this feature only works in memcached 1.2.5 and later. Earlier
79
+ # versions will reply with "ERROR".
80
+ attr_reader :no_reply
81
+
82
+ ##
83
+ # Accepts a list of +servers+ and a list of +opts+. +servers+ may be
84
+ # omitted. See +servers=+ for acceptable server list arguments.
85
+ #
86
+ # Valid options for +opts+ are:
87
+ #
88
+ # [:namespace] Prepends this value to all keys added or retrieved.
89
+ # [:readonly] Raises an exception on cache writes when true.
90
+ # [:multithread] Wraps cache access in a Mutex for thread safety. Defaults to true.
91
+ # [:failover] Should the client try to failover to another server if the
92
+ # first server is down? Defaults to true.
93
+ # [:timeout] Time to use as the socket read timeout. Defaults to 0.5 sec,
94
+ # set to nil to disable timeouts.
95
+ # [:logger] Logger to use for info/debug output, defaults to nil
96
+ # [:no_reply] Don't bother looking for a reply for write operations (i.e. they
97
+ # become 'fire and forget'), memcached 1.2.5 and later only, speeds up
98
+ # set/add/delete/incr/decr significantly.
99
+ # [:check_size] Raises a MemCacheError if the value to be set is greater than 1 MB, which
100
+ # is the maximum value size for the standard memcached server. Defaults to true.
101
+ #
102
+ # Other options are ignored.
103
+
104
+ def initialize(*args)
105
+ servers = []
106
+ opts = {}
107
+
108
+ case args.length
109
+ when 0 then # NOP
110
+ when 1 then
111
+ arg = args.shift
112
+ case arg
113
+ when Hash then opts = arg
114
+ when Array then servers = arg
115
+ when String then servers = [arg]
116
+ else raise ArgumentError, 'first argument must be Array, Hash or String'
117
+ end
118
+ when 2 then
119
+ servers, opts = args
120
+ else
121
+ raise ArgumentError, "wrong number of arguments (#{args.length} for 2)"
122
+ end
123
+
124
+ opts = DEFAULT_OPTIONS.merge opts
125
+ @namespace = opts[:namespace]
126
+ @readonly = opts[:readonly]
127
+ @multithread = opts[:multithread]
128
+ @timeout = opts[:timeout]
129
+ @failover = opts[:failover]
130
+ @logger = opts[:logger]
131
+ @no_reply = opts[:no_reply]
132
+ @check_size = opts[:check_size]
133
+ @mutex = Mutex.new if @multithread
134
+
135
+ logger.info { "memcache-client #{VERSION} #{Array(servers).inspect}" } if logger
136
+
137
+ Thread.current[:memcache_client] = self.object_id if !@multithread
138
+
139
+ self.servers = servers
140
+ end
141
+
142
+ ##
143
+ # Returns a string representation of the cache object.
144
+
145
+ def inspect
146
+ "<MemCache: %d servers, ns: %p, ro: %p>" %
147
+ [@servers.length, @namespace, @readonly]
148
+ end
149
+
150
+ ##
151
+ # Returns whether there is at least one active server for the object.
152
+
153
+ def active?
154
+ not @servers.empty?
155
+ end
156
+
157
+ ##
158
+ # Returns whether or not the cache object was created read only.
159
+
160
+ def readonly?
161
+ @readonly
162
+ end
163
+
164
+ ##
165
+ # Set the servers that the requests will be distributed between. Entries
166
+ # can be either strings of the form "hostname:port" or
167
+ # "hostname:port:weight" or MemCache::Server objects.
168
+ #
169
+ def servers=(servers)
170
+ # Create the server objects.
171
+ @servers = Array(servers).collect do |server|
172
+ case server
173
+ when String
174
+ host, port, weight = server.split ':', 3
175
+ port ||= DEFAULT_PORT
176
+ weight ||= DEFAULT_WEIGHT
177
+ Server.new self, host, port, weight
178
+ else
179
+ server
180
+ end
181
+ end
182
+
183
+ logger.debug { "Servers now: #{@servers.inspect}" } if logger
184
+
185
+ # There's no point in doing this if there's only one server
186
+ @continuum = create_continuum_for(@servers) if @servers.size > 1
187
+
188
+ @servers
189
+ end
190
+
191
+ ##
192
+ # Decrements the value for +key+ by +amount+ and returns the new value.
193
+ # +key+ must already exist. If the current value is not an integer, it is assumed to be
194
+ # 0. The value cannot be decremented below 0.
195
+
196
+ def decr(key, amount = 1)
197
+ raise MemCacheError, "Update of readonly cache" if @readonly
198
+ with_server(key) do |server, cache_key|
199
+ cache_decr server, cache_key, amount
200
+ end
201
+ rescue TypeError => err
202
+ handle_error nil, err
203
+ end
204
+
205
+ ##
206
+ # Retrieves +key+ from memcache. If +raw+ is false, the value will be
207
+ # unmarshalled.
208
+
209
+ def get(key, raw = false)
210
+ with_server(key) do |server, cache_key|
211
+ logger.debug { "get #{key} from #{server.inspect}" } if logger
212
+ value = cache_get server, cache_key
213
+ return nil if value.nil?
214
+ value = Marshal.load value unless raw
215
+ return value
216
+ end
217
+ rescue TypeError => err
218
+ handle_error nil, err
219
+ end
220
+
221
+ ##
222
+ # Performs a +get+ with the given +key+. If
223
+ # the value does not exist and a block was given,
224
+ # the block will be called and the result saved via +add+.
225
+ #
226
+ # If you do not provide a block, using this
227
+ # method is the same as using +get+.
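+ #
+ # A hypothetical example (the key, expiry and block are placeholders):
+ #
+ #   cache.fetch('greeting', 60) { 'hello' }  # on a miss, stores the block's result for 60 seconds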
228
+ #
229
+ def fetch(key, expiry = 0, raw = false)
230
+ value = get(key, raw)
231
+
232
+ if value.nil? && block_given?
233
+ value = yield
234
+ add(key, value, expiry, raw)
235
+ end
236
+
237
+ value
238
+ end
239
+
240
+ ##
241
+ # Retrieves multiple values from memcached in parallel, if possible.
242
+ #
243
+ # The memcached protocol supports the ability to retrieve multiple
244
+ # keys in a single request. Pass in an array of keys to this method
245
+ # and it will:
246
+ #
247
+ # 1. map the key to the appropriate memcached server
248
+ # 2. send a single request to each server that has one or more key values
249
+ #
250
+ # Returns a hash of values.
251
+ #
252
+ # cache["a"] = 1
253
+ # cache["b"] = 2
254
+ # cache.get_multi "a", "b" # => { "a" => 1, "b" => 2 }
255
+ #
256
+ # Note that get_multi assumes the values are marshalled.
257
+
258
+ def get_multi(*keys)
259
+ raise MemCacheError, 'No active servers' unless active?
260
+
261
+ keys.flatten!
262
+ key_count = keys.length
263
+ cache_keys = {}
264
+ server_keys = Hash.new { |h,k| h[k] = [] }
265
+
266
+ # map keys to servers
267
+ keys.each do |key|
268
+ server, cache_key = request_setup key
269
+ cache_keys[cache_key] = key
270
+ server_keys[server] << cache_key
271
+ end
272
+
273
+ results = {}
274
+
275
+ server_keys.each do |server, keys_for_server|
276
+ keys_for_server_str = keys_for_server.join ' '
277
+ begin
278
+ values = cache_get_multi server, keys_for_server_str
279
+ values.each do |key, value|
280
+ results[cache_keys[key]] = Marshal.load value
281
+ end
282
+ rescue IndexError => e
283
+ # Ignore this server and try the others
284
+ logger.warn { "Unable to retrieve #{keys_for_server.size} elements from #{server.inspect}: #{e.message}"} if logger
285
+ end
286
+ end
287
+
288
+ return results
289
+ rescue TypeError => err
290
+ handle_error nil, err
291
+ end
292
+
293
+ ##
294
+ # Increments the value for +key+ by +amount+ and returns the new value.
295
+ # +key+ must already exist. If the current value is not an integer, it is assumed to be
296
+ # 0.
297
+
298
+ def incr(key, amount = 1)
299
+ raise MemCacheError, "Update of readonly cache" if @readonly
300
+ with_server(key) do |server, cache_key|
301
+ cache_incr server, cache_key, amount
302
+ end
303
+ rescue TypeError => err
304
+ handle_error nil, err
305
+ end
306
+
307
+ ##
308
+ # Add +key+ to the cache with value +value+ that expires in +expiry+
309
+ # seconds. If +raw+ is true, +value+ will not be Marshalled.
310
+ #
311
+ # Warning: Readers should not call this method in the event of a cache miss;
312
+ # see MemCache#add.
313
+
314
+ ONE_MB = 1024 * 1024
315
+
316
+ def set(key, value, expiry = 0, raw = false)
317
+ raise MemCacheError, "Update of readonly cache" if @readonly
318
+
319
+ value = Marshal.dump value unless raw
320
+ with_server(key) do |server, cache_key|
321
+ logger.debug { "set #{key} to #{server.inspect}: #{value.to_s.size}" } if logger
322
+
323
+ if @check_size && value.to_s.size > ONE_MB
324
+ raise MemCacheError, "Value too large, memcached can only store 1MB of data per key"
325
+ end
326
+
327
+ command = "set #{cache_key} 0 #{expiry} #{value.to_s.size}#{noreply}\r\n#{value}\r\n"
328
+
329
+ with_socket_management(server) do |socket|
330
+ socket.write command
331
+ break nil if @no_reply
332
+ result = socket.gets
333
+ raise_on_error_response! result
334
+
335
+ if result.nil?
336
+ server.close
337
+ raise MemCacheError, "lost connection to #{server.host}:#{server.port}"
338
+ end
339
+
340
+ result
341
+ end
342
+ end
343
+ end
344
+
345
+ ##
346
+ # "cas" is a check and set operation which means "store this data but
347
+ # only if no one else has updated since I last fetched it." This can
348
+ # be used as a form of optimistic locking.
349
+ #
350
+ # Works in block form like so:
351
+ # cache.cas('some-key') do |value|
352
+ # value + 1
353
+ # end
354
+ #
355
+ # Returns:
356
+ # +nil+ if the value was not found on the memcached server.
357
+ # +STORED+ if the value was updated successfully
358
+ # +EXISTS+ if the value was updated by someone else since last fetch
359
+
360
+ def cas(key, expiry=0, raw=false)
361
+ raise MemCacheError, "Update of readonly cache" if @readonly
362
+ raise MemCacheError, "A block is required" unless block_given?
363
+
364
+ (value, token) = gets(key, raw)
365
+ return nil unless value
366
+ updated = yield value
367
+ value = Marshal.dump updated unless raw
368
+
369
+ with_server(key) do |server, cache_key|
370
+ logger.debug { "cas #{key} to #{server.inspect}: #{value.to_s.size}" } if logger
371
+ command = "cas #{cache_key} 0 #{expiry} #{value.to_s.size} #{token}#{noreply}\r\n#{value}\r\n"
372
+
373
+ with_socket_management(server) do |socket|
374
+ socket.write command
375
+ break nil if @no_reply
376
+ result = socket.gets
377
+ raise_on_error_response! result
378
+
379
+ if result.nil?
380
+ server.close
381
+ raise MemCacheError, "lost connection to #{server.host}:#{server.port}"
382
+ end
383
+
384
+ result
385
+ end
386
+ end
387
+ end
388
+
389
+ ##
390
+ # Add +key+ to the cache with value +value+ that expires in +expiry+
391
+ # seconds, but only if +key+ does not already exist in the cache.
392
+ # If +raw+ is true, +value+ will not be Marshalled.
393
+ #
394
+ # Readers should call this method in the event of a cache miss, not
395
+ # MemCache#set.
396
+
397
+ def add(key, value, expiry = 0, raw = false)
398
+ raise MemCacheError, "Update of readonly cache" if @readonly
399
+ value = Marshal.dump value unless raw
400
+ with_server(key) do |server, cache_key|
401
+ logger.debug { "add #{key} to #{server}: #{value ? value.to_s.size : 'nil'}" } if logger
402
+ command = "add #{cache_key} 0 #{expiry} #{value.to_s.size}#{noreply}\r\n#{value}\r\n"
403
+
404
+ with_socket_management(server) do |socket|
405
+ socket.write command
406
+ break nil if @no_reply
407
+ result = socket.gets
408
+ raise_on_error_response! result
409
+ result
410
+ end
411
+ end
412
+ end
413
+
414
+ ##
415
+ # Add +key+ to the cache with value +value+ that expires in +expiry+
416
+ # seconds, but only if +key+ already exists in the cache.
417
+ # If +raw+ is true, +value+ will not be Marshalled.
418
+ def replace(key, value, expiry = 0, raw = false)
419
+ raise MemCacheError, "Update of readonly cache" if @readonly
420
+ value = Marshal.dump value unless raw
421
+ with_server(key) do |server, cache_key|
422
+ logger.debug { "replace #{key} to #{server}: #{value ? value.to_s.size : 'nil'}" } if logger
423
+ command = "replace #{cache_key} 0 #{expiry} #{value.to_s.size}#{noreply}\r\n#{value}\r\n"
424
+
425
+ with_socket_management(server) do |socket|
426
+ socket.write command
427
+ break nil if @no_reply
428
+ result = socket.gets
429
+ raise_on_error_response! result
430
+ result
431
+ end
432
+ end
433
+ end
434
+
435
+ ##
436
+ # Append - 'add this data to an existing key after existing data'
437
+ # Please note the value is always passed to memcached as raw since it
438
+ # doesn't make a lot of sense to concatenate marshalled data together.
439
+ def append(key, value)
440
+ raise MemCacheError, "Update of readonly cache" if @readonly
441
+ with_server(key) do |server, cache_key|
442
+ logger.debug { "append #{key} to #{server}: #{value ? value.to_s.size : 'nil'}" } if logger
443
+ command = "append #{cache_key} 0 0 #{value.to_s.size}#{noreply}\r\n#{value}\r\n"
444
+
445
+ with_socket_management(server) do |socket|
446
+ socket.write command
447
+ break nil if @no_reply
448
+ result = socket.gets
449
+ raise_on_error_response! result
450
+ result
451
+ end
452
+ end
453
+ end
454
+
455
+ ##
456
+ # Prepend - 'add this data to an existing key before existing data'
457
+ # Please note the value is always passed to memcached as raw since it
458
+ # doesn't make a lot of sense to concatenate marshalled data together.
459
+ def prepend(key, value)
460
+ raise MemCacheError, "Update of readonly cache" if @readonly
461
+ with_server(key) do |server, cache_key|
462
+ logger.debug { "prepend #{key} to #{server}: #{value ? value.to_s.size : 'nil'}" } if logger
463
+ command = "prepend #{cache_key} 0 0 #{value.to_s.size}#{noreply}\r\n#{value}\r\n"
464
+
465
+ with_socket_management(server) do |socket|
466
+ socket.write command
467
+ break nil if @no_reply
468
+ result = socket.gets
469
+ raise_on_error_response! result
470
+ result
471
+ end
472
+ end
473
+ end
474
+
475
+ ##
476
+ # Removes +key+ from the cache in +expiry+ seconds.
477
+
478
+ def delete(key, expiry = 0)
479
+ raise MemCacheError, "Update of readonly cache" if @readonly
480
+ with_server(key) do |server, cache_key|
481
+ with_socket_management(server) do |socket|
482
+ logger.debug { "delete #{cache_key} on #{server}" } if logger
483
+ socket.write "delete #{cache_key} #{expiry}#{noreply}\r\n"
484
+ break nil if @no_reply
485
+ result = socket.gets
486
+ raise_on_error_response! result
487
+ result
488
+ end
489
+ end
490
+ end
491
+
492
+ ##
493
+ # Flush the cache from all memcache servers.
494
+ # A non-zero value for +delay+ will ensure that the flush
495
+ # is propagated slowly through your memcached server farm.
496
+ # The Nth server will be flushed N*delay seconds from now,
497
+ # asynchronously so this method returns quickly.
498
+ # This prevents a huge database spike due to a total
499
+ # flush all at once.
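+ #
+ # For example, a staggered flush across the farm (the delay value is illustrative):
+ #
+ #   cache.flush_all(60)  # the Nth server is flushed N*60 seconds from now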
500
+
501
+ def flush_all(delay=0)
502
+ raise MemCacheError, 'No active servers' unless active?
503
+ raise MemCacheError, "Update of readonly cache" if @readonly
504
+
505
+ begin
506
+ delay_time = 0
507
+ @servers.each do |server|
508
+ with_socket_management(server) do |socket|
509
+ logger.debug { "flush_all #{delay_time} on #{server}" } if logger
510
+ if delay == 0 # older versions of memcached will fail silently otherwise
511
+ socket.write "flush_all#{noreply}\r\n"
512
+ else
513
+ socket.write "flush_all #{delay_time}#{noreply}\r\n"
514
+ end
515
+ break nil if @no_reply
516
+ result = socket.gets
517
+ raise_on_error_response! result
518
+ result
519
+ end
520
+ delay_time += delay
521
+ end
522
+ rescue IndexError => err
523
+ handle_error nil, err
524
+ end
525
+ end
526
+
527
+ ##
528
+ # Reset the connection to all memcache servers. This should be called if
529
+ # there is a problem with a cache lookup that might have left the connection
530
+ # in a corrupted state.
531
+
532
+ def reset
533
+ @servers.each { |server| server.close }
534
+ end
535
+
536
+ ##
537
+ # Returns statistics for each memcached server. An explanation of the
538
+ # statistics can be found in the memcached docs:
539
+ #
540
+ # http://code.sixapart.com/svn/memcached/trunk/server/doc/protocol.txt
541
+ #
542
+ # Example:
543
+ #
544
+ # >> pp CACHE.stats
545
+ # {"localhost:11211"=>
546
+ # {"bytes"=>4718,
547
+ # "pid"=>20188,
548
+ # "connection_structures"=>4,
549
+ # "time"=>1162278121,
550
+ # "pointer_size"=>32,
551
+ # "limit_maxbytes"=>67108864,
552
+ # "cmd_get"=>14532,
553
+ # "version"=>"1.2.0",
554
+ # "bytes_written"=>432583,
555
+ # "cmd_set"=>32,
556
+ # "get_misses"=>0,
557
+ # "total_connections"=>19,
558
+ # "curr_connections"=>3,
559
+ # "curr_items"=>4,
560
+ # "uptime"=>1557,
561
+ # "get_hits"=>14532,
562
+ # "total_items"=>32,
563
+ # "rusage_system"=>0.313952,
564
+ # "rusage_user"=>0.119981,
565
+ # "bytes_read"=>190619}}
566
+ # => nil
567
+
568
+ def stats
569
+ raise MemCacheError, "No active servers" unless active?
570
+ server_stats = {}
571
+
572
+ @servers.each do |server|
573
+ next unless server.alive?
574
+
575
+ with_socket_management(server) do |socket|
576
+ value = nil
577
+ socket.write "stats\r\n"
578
+ stats = {}
579
+ while line = socket.gets do
580
+ raise_on_error_response! line
581
+ break if line == "END\r\n"
582
+ if line =~ /\ASTAT ([\S]+) ([\w\.\:]+)/ then
583
+ name, value = $1, $2
584
+ stats[name] = case name
585
+ when 'version'
586
+ value
587
+ when 'rusage_user', 'rusage_system' then
588
+ seconds, microseconds = value.split(/:/, 2)
589
+ microseconds ||= 0
590
+ Float(seconds) + (Float(microseconds) / 1_000_000)
591
+ else
592
+ if value =~ /\A\d+\Z/ then
593
+ value.to_i
594
+ else
595
+ value
596
+ end
597
+ end
598
+ end
599
+ end
600
+ server_stats["#{server.host}:#{server.port}"] = stats
601
+ end
602
+ end
603
+
604
+ raise MemCacheError, "No active servers" if server_stats.empty?
605
+ server_stats
606
+ end
607
+
608
+ ##
609
+ # Shortcut to get a value from the cache.
610
+
611
+ alias [] get
612
+
613
+ ##
614
+ # Shortcut to save a value in the cache. This method does not set an
615
+ # expiration on the entry. Use set to specify an explicit expiry.
616
+
617
+ def []=(key, value)
618
+ set key, value
619
+ end
620
+
621
+ protected unless $TESTING
622
+
623
+ ##
624
+ # Create a key for the cache, incorporating the namespace qualifier if
625
+ # requested.
626
+
627
+ def make_cache_key(key)
628
+ if namespace.nil? then
629
+ key
630
+ else
631
+ "#{@namespace}:#{key}"
632
+ end
633
+ end
634
+
635
+ ##
636
+ # Returns an interoperable hash value for +key+. (I think, docs are
637
+ # sketchy for down servers).
638
+
639
+ def hash_for(key)
640
+ Zlib.crc32(key)
641
+ end
642
+
643
+ ##
644
+ # Pick a server to handle the request based on a hash of the key.
645
+
646
+ def get_server_for_key(key, options = {})
647
+ raise ArgumentError, "illegal character in key #{key.inspect}" if
648
+ key =~ /\s/
649
+ raise ArgumentError, "key too long #{key.inspect}" if key.length > 250
650
+ raise MemCacheError, "No servers available" if @servers.empty?
651
+ return @servers.first if @servers.length == 1
652
+
653
+ hkey = hash_for(key)
654
+
655
+ 20.times do |try|
656
+ entryidx = Continuum.binary_search(@continuum, hkey)
657
+ server = @continuum[entryidx].server
658
+ return server if server.alive?
659
+ break unless failover
660
+ hkey = hash_for "#{try}#{key}"
661
+ end
662
+
663
+ raise MemCacheError, "No servers available"
664
+ end
665
+
666
+ ##
667
+ # Performs a raw decr for +cache_key+ from +server+. Returns nil if not
668
+ # found.
669
+
670
+ def cache_decr(server, cache_key, amount)
671
+ with_socket_management(server) do |socket|
672
+ socket.write "decr #{cache_key} #{amount}#{noreply}\r\n"
673
+ break nil if @no_reply
674
+ text = socket.gets
675
+ raise_on_error_response! text
676
+ return nil if text == "NOT_FOUND\r\n"
677
+ return text.to_i
678
+ end
679
+ end
680
+
681
+ ##
682
+ # Fetches the raw data for +cache_key+ from +server+. Returns nil on cache
683
+ # miss.
684
+
685
+ def cache_get(server, cache_key)
686
+ with_socket_management(server) do |socket|
687
+ socket.write "get #{cache_key}\r\n"
688
+ keyline = socket.gets # "VALUE <key> <flags> <bytes>\r\n"
689
+
690
+ if keyline.nil? then
691
+ server.close
692
+ raise MemCacheError, "lost connection to #{server.host}:#{server.port}"
693
+ end
694
+
695
+ raise_on_error_response! keyline
696
+ return nil if keyline == "END\r\n"
697
+
698
+ unless keyline =~ /(\d+)\r/ then
699
+ server.close
700
+ raise MemCacheError, "unexpected response #{keyline.inspect}"
701
+ end
702
+ value = socket.read $1.to_i
703
+ socket.read 2 # "\r\n"
704
+ socket.gets # "END\r\n"
705
+ return value
706
+ end
707
+ end
708
+
709
+ def gets(key, raw = false)
710
+ with_server(key) do |server, cache_key|
711
+ logger.debug { "gets #{key} from #{server.inspect}" } if logger
712
+ result = with_socket_management(server) do |socket|
713
+ socket.write "gets #{cache_key}\r\n"
714
+ keyline = socket.gets # "VALUE <key> <flags> <bytes> <cas token>\r\n"
715
+
716
+ if keyline.nil? then
717
+ server.close
718
+ raise MemCacheError, "lost connection to #{server.host}:#{server.port}"
719
+ end
720
+
721
+ raise_on_error_response! keyline
722
+ return nil if keyline == "END\r\n"
723
+
724
+ unless keyline =~ /(\d+) (\w+)\r/ then
725
+ server.close
726
+ raise MemCacheError, "unexpected response #{keyline.inspect}"
727
+ end
728
+ value = socket.read $1.to_i
729
+ socket.read 2 # "\r\n"
730
+ socket.gets # "END\r\n"
731
+ [value, $2]
732
+ end
733
+ result[0] = Marshal.load result[0] unless raw
734
+ result
735
+ end
736
+ rescue TypeError => err
737
+ handle_error nil, err
738
+ end
739
+
740
+
741
+ ##
742
+ # Fetches +cache_keys+ from +server+ using a multi-get.
743
+
744
+ def cache_get_multi(server, cache_keys)
745
+ with_socket_management(server) do |socket|
746
+ values = {}
747
+ socket.write "get #{cache_keys}\r\n"
748
+
749
+ while keyline = socket.gets do
750
+ return values if keyline == "END\r\n"
751
+ raise_on_error_response! keyline
752
+
753
+ unless keyline =~ /\AVALUE (.+) (.+) (.+)/ then
754
+ server.close
755
+ raise MemCacheError, "unexpected response #{keyline.inspect}"
756
+ end
757
+
758
+ key, data_length = $1, $3
759
+ values[$1] = socket.read data_length.to_i
760
+ socket.read(2) # "\r\n"
761
+ end
762
+
763
+ server.close
764
+ raise MemCacheError, "lost connection to #{server.host}:#{server.port}" # TODO: retry here too
765
+ end
766
+ end
767
+
768
+ ##
769
+ # Performs a raw incr for +cache_key+ from +server+. Returns nil if not
770
+ # found.
771
+
772
+ def cache_incr(server, cache_key, amount)
773
+ with_socket_management(server) do |socket|
774
+ socket.write "incr #{cache_key} #{amount}#{noreply}\r\n"
775
+ break nil if @no_reply
776
+ text = socket.gets
777
+ raise_on_error_response! text
778
+ return nil if text == "NOT_FOUND\r\n"
779
+ return text.to_i
780
+ end
781
+ end
782
+
783
+ ##
784
+ # Gets or creates a socket connected to the given server, and yields it
785
+ # to the block, wrapped in a mutex synchronization if @multithread is true.
786
+ #
787
+ # If a socket error (SocketError, SystemCallError, IOError) or protocol error
788
+ # (MemCacheError) is raised by the block, closes the socket, attempts to
789
+ # connect again, and retries the block (once). If an error is again raised,
790
+ # reraises it as MemCacheError.
791
+ #
792
+ # If unable to connect to the server (or if in the reconnect wait period),
793
+ # raises MemCacheError. Note that the socket connect code marks a server
794
+ # dead for a timeout period, so retrying does not apply to connection attempt
795
+ # failures (but does still apply to unexpectedly lost connections etc.).
796
+
797
+ def with_socket_management(server, &block)
798
+ check_multithread_status!
799
+
800
+ @mutex.lock if @multithread
801
+ retried = false
802
+
803
+ begin
804
+ socket = server.socket
805
+
806
+ # Raise an IndexError to show this server is out of whack. If we're inside
807
+ # a with_server block, we'll catch it and attempt to restart the operation.
808
+
809
+ raise IndexError, "No connection to server (#{server.status})" if socket.nil?
810
+
811
+ block.call(socket)
812
+
813
+ rescue SocketError, Errno::EAGAIN, Timeout::Error => err
814
+ logger.warn { "Socket failure: #{err.message}" } if logger
815
+ server.mark_dead(err)
816
+ handle_error(server, err)
817
+
818
+ rescue MemCacheError, SystemCallError, IOError => err
819
+ logger.warn { "Generic failure: #{err.class.name}: #{err.message}" } if logger
820
+ handle_error(server, err) if retried || socket.nil?
821
+ retried = true
822
+ retry
823
+ end
824
+ ensure
825
+ @mutex.unlock if @multithread
826
+ end
827
+
828
+ def with_server(key)
829
+ retried = false
830
+ begin
831
+ server, cache_key = request_setup(key)
832
+ yield server, cache_key
833
+ rescue IndexError => e
834
+ logger.warn { "Server failed: #{e.class.name}: #{e.message}" } if logger
835
+ if !retried && @servers.size > 1
836
+ logger.info { "Connection to server #{server.inspect} DIED! Retrying operation..." } if logger
837
+ retried = true
838
+ retry
839
+ end
840
+ handle_error(nil, e)
841
+ end
842
+ end
843
+
844
+ ##
845
+ # Handles +error+ from +server+.
846
+
847
+ def handle_error(server, error)
848
+ raise error if error.is_a?(MemCacheError)
849
+ server.close if server
850
+ new_error = MemCacheError.new error.message
851
+ new_error.set_backtrace error.backtrace
852
+ raise new_error
853
+ end
854
+
855
+ def noreply
856
+ @no_reply ? ' noreply' : ''
857
+ end
858
+
859
+ ##
860
+ # Performs setup for making a request with +key+ from memcached. Returns
861
+ # the server to fetch the key from and the complete key to use.
862
+
863
+ def request_setup(key)
864
+ raise MemCacheError, 'No active servers' unless active?
865
+ cache_key = make_cache_key key
866
+ server = get_server_for_key cache_key
867
+ return server, cache_key
868
+ end
869
+
870
+ def raise_on_error_response!(response)
871
+ if response =~ /\A(?:CLIENT_|SERVER_)?ERROR(.*)/
872
+ raise MemCacheError, $1.strip
873
+ end
874
+ end
875
+
876
+ def create_continuum_for(servers)
877
+ total_weight = servers.inject(0) { |memo, srv| memo + srv.weight }
878
+ continuum = []
879
+
880
+ servers.each do |server|
881
+ entry_count_for(server, servers.size, total_weight).times do |idx|
882
+ hash = Digest::SHA1.hexdigest("#{server.host}:#{server.port}:#{idx}")
883
+ value = Integer("0x#{hash[0..7]}")
884
+ continuum << Continuum::Entry.new(value, server)
885
+ end
886
+ end
887
+
888
+ continuum.sort { |a, b| a.value <=> b.value }
889
+ end
890
+
891
+ def entry_count_for(server, total_servers, total_weight)
892
+ ((total_servers * Continuum::POINTS_PER_SERVER * server.weight) / Float(total_weight)).floor
893
+ end
894
+
895
+ def check_multithread_status!
896
+ return if @multithread
897
+
898
+ if Thread.current[:memcache_client] != self.object_id
899
+ raise MemCacheError, <<-EOM
900
+ You are accessing this memcache-client instance from multiple threads but have not enabled multithread support.
901
+ Normally: MemCache.new(['localhost:11211'], :multithread => true)
902
+ In Rails: config.cache_store = [:mem_cache_store, 'localhost:11211', { :multithread => true }]
903
+ EOM
904
+ end
905
+ end
906
+
907
+ ##
908
+ # This class represents a memcached server instance.
909
+
910
+ class Server
911
+
912
+ ##
913
+ # The amount of time to wait before attempting to re-establish a
914
+ # connection with a server that is marked dead.
915
+
916
+ RETRY_DELAY = 30.0
917
+
918
+ ##
919
+ # The host the memcached server is running on.
920
+
921
+ attr_reader :host
922
+
923
+ ##
924
+ # The port the memcached server is listening on.
925
+
926
+ attr_reader :port
927
+
928
+ ##
929
+ # The weight given to the server.
930
+
931
+ attr_reader :weight
932
+
933
+ ##
934
+ # The time of next retry if the connection is dead.
935
+
936
+ attr_reader :retry
937
+
938
+ ##
939
+ # A text status string describing the state of the server.
940
+
941
+ attr_reader :status
942
+
943
+ attr_reader :logger
944
+
945
+ ##
946
+ # Create a new MemCache::Server object for the memcached instance
947
+ # listening on the given host and port, weighted by the given weight.
948
+
949
+ def initialize(memcache, host, port = DEFAULT_PORT, weight = DEFAULT_WEIGHT)
950
+ raise ArgumentError, "No host specified" if host.nil? or host.empty?
951
+ raise ArgumentError, "No port specified" if port.nil? or port.to_i.zero?
952
+
953
+ @host = host
954
+ @port = port.to_i
955
+ @weight = weight.to_i
956
+
957
+ @sock = nil
958
+ @retry = nil
959
+ @status = 'NOT CONNECTED'
960
+ @timeout = memcache.timeout
961
+ @logger = memcache.logger
962
+ end
963
+
964
+ ##
965
+ # Return a string representation of the server object.
966
+
967
+ def inspect
968
+ "<MemCache::Server: %s:%d [%d] (%s)>" % [@host, @port, @weight, @status]
969
+ end
970
+
971
+ ##
972
+ # Check whether the server connection is alive. This will cause the
973
+ # socket to attempt to connect if it isn't already connected and or if
974
+ # the server was previously marked as down and the retry time has
975
+ # been exceeded.
976
+
977
+ def alive?
978
+ !!socket
979
+ end
980
+
981
+ ##
982
+ # Try to connect to the memcached server targeted by this object.
983
+ # Returns the connected socket object on success or nil on failure.
984
+
985
+ def socket
986
+ return @sock if @sock and not @sock.closed?
987
+
988
+ @sock = nil
989
+
990
+ # If the host was dead, don't retry for a while.
991
+ return if @retry and @retry > Time.now
992
+
993
+ # Attempt to connect if not already connected.
994
+ begin
995
+ @sock = connect_to(@host, @port, @timeout)
996
+ @sock.setsockopt Socket::IPPROTO_TCP, Socket::TCP_NODELAY, 1
997
+ @retry = nil
998
+ @status = 'CONNECTED'
999
+ rescue SocketError, SystemCallError, IOError => err
1000
+ logger.warn { "Unable to open socket: #{err.class.name}, #{err.message}" } if logger
1001
+ mark_dead err
1002
+ end
1003
+
1004
+ return @sock
1005
+ end
1006
+
1007
+ def connect_to(host, port, timeout=nil)
1008
+ io = MemCache::BufferedIO.new(TCPSocket.new(host, port))
1009
+ io.read_timeout = timeout
1010
+ io
1011
+ end
1012
+
1013
+ ##
1014
+ # Close the connection to the memcached server targeted by this
1015
+ # object. The server is not considered dead.
1016
+
1017
+ def close
1018
+ @sock.close if @sock && !@sock.closed?
1019
+ @sock = nil
1020
+ @retry = nil
1021
+ @status = "NOT CONNECTED"
1022
+ end
1023
+
1024
+ ##
1025
+ # Mark the server as dead and close its socket.
1026
+
1027
+ def mark_dead(error)
1028
+ @sock.close if @sock && !@sock.closed?
1029
+ @sock = nil
1030
+ @retry = Time.now + RETRY_DELAY
1031
+
1032
+ reason = "#{error.class.name}: #{error.message}"
1033
+ @status = sprintf "%s:%s DEAD (%s), will retry at %s", @host, @port, reason, @retry
1034
+ @logger.info { @status } if @logger
1035
+ end
1036
+
1037
+ end
1038
+
1039
+ ##
1040
+ # Base MemCache exception class.
1041
+
1042
+ class MemCacheError < RuntimeError; end
1043
+
1044
+ class BufferedIO < Net::BufferedIO # :nodoc:
1045
+ BUFSIZE = 1024 * 16
1046
+
1047
+ if RUBY_VERSION < '1.9.1'
1048
+ def rbuf_fill
1049
+ begin
1050
+ @rbuf << @io.read_nonblock(BUFSIZE)
1051
+ rescue Errno::EWOULDBLOCK
1052
+ retry unless @read_timeout
1053
+ if IO.select([@io], nil, nil, @read_timeout)
1054
+ retry
1055
+ else
1056
+ raise Timeout::Error, 'IO timeout'
1057
+ end
1058
+ end
1059
+ end
1060
+ end
1061
+
1062
+ def setsockopt *args
1063
+ @io.setsockopt *args
1064
+ end
1065
+
1066
+ def gets
1067
+ readuntil("\n")
1068
+ end
1069
+ end
1070
+
1071
+ end
1072
+
1073
+ module Continuum
1074
+ POINTS_PER_SERVER = 160 # this is the default in libmemcached
1075
+
1076
+ # Find the closest index in Continuum with value <= the given value
1077
+ def self.binary_search(ary, value, &block)
1078
+ upper = ary.size - 1
1079
+ lower = 0
1080
+ idx = 0
1081
+
1082
+ while(lower <= upper) do
1083
+ idx = (lower + upper) / 2
1084
+ comp = ary[idx].value <=> value
1085
+
1086
+ if comp == 0
1087
+ return idx
1088
+ elsif comp > 0
1089
+ upper = idx - 1
1090
+ else
1091
+ lower = idx + 1
1092
+ end
1093
+ end
1094
+ return upper
1095
+ end
1096
+
1097
+ class Entry
1098
+ attr_reader :value
1099
+ attr_reader :server
1100
+
1101
+ def initialize(val, srv)
1102
+ @value = val
1103
+ @server = srv
1104
+ end
1105
+
1106
+ def inspect
1107
+ "<#{value}, #{server.host}:#{server.port}>"
1108
+ end
1109
+ end
1110
+
1111
+ end
1112
+ require 'continuum_native'