memcache-client 1.5.0 → 1.6.2
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- data/History.txt +71 -0
- data/LICENSE.txt +2 -2
- data/README.rdoc +43 -0
- data/Rakefile +20 -13
- data/ext/memcache/binary_search.c +54 -0
- data/ext/memcache/extconf.rb +5 -0
- data/lib/continuum.rb +46 -0
- data/lib/memcache.rb +335 -244
- data/lib/memcache_util.rb +20 -8
- data/test/test_mem_cache.rb +259 -53
- metadata +46 -87
- data.tar.gz.sig +0 -0
- data/Manifest.txt +0 -8
- data/README.txt +0 -54
- metadata.gz.sig +0 -0
data/History.txt
CHANGED

@@ -1,3 +1,74 @@
+= 1.6.2 (2009-02-04)
+
+* Validate that values are less than one megabyte in size.
+
+* Refactor error handling in get_multi to handle server failures and return what values
+  we could successfully retrieve.
+
+* Add optional logging parameter for debugging and tracing.
+
+* First official release since 1.5.0. Thanks to Eric Hodel for turning over the project to me!
+  New project home page: http://github.com/mperham/memcache-client
+
+= 1.6.1 (2009-01-28)
+
+* Add option to disable socket timeout support. Socket timeout has a significant performance
+  penalty (approx 3x slower than without in Ruby 1.8.6). You can turn off the timeouts if you
+  need absolute performance, but by default timeouts are enabled. The performance
+  penalty is much lower in Ruby 1.8.7, 1.9 and JRuby. (mperham)
+
+* Add option to disable server failover. Failover can lead to "split-brain" caches that
+  return stale data. (mperham)
+
+* Implement continuum binary search in native code for performance reasons. Pure ruby
+  is available for platforms like JRuby or Rubinius which can't use C extensions. (mperham)
+
+* Fix #add with raw=true (iamaleksey)
+
+= 1.6.0
+
+* Implement a consistent hashing algorithm, as described in libketama.
+  This dramatically reduces the cost of adding or removing servers dynamically
+  as keys are much more likely to map to the same server.
+
+  Take a scenario where we add a fourth server. With a naive modulo algorithm, about
+  25% of the keys will map to the same server. In other words, 75% of your memcached
+  content suddenly becomes invalid. With a consistent algorithm, 75% of the keys
+  will map to the same server as before - only 25% will be invalidated. (mperham)
+
+* Implement socket timeouts, should fix rare cases of very bad things happening
+  in production at 37signals and FiveRuns. (jseirles)
+
+= 1.5.0.5
+
+* Remove native C CRC32_ITU_T extension in favor of Zlib's crc32 method.
+  memcache-client is now pure Ruby again and will work with JRuby and Rubinius.
+
+= 1.5.0.4
+
+* Get test suite working again (packagethief)
+* Ruby 1.9 compatiblity fixes (packagethief, mperham)
+* Consistently return server responses and check for errors (packagethief)
+* Properly calculate CRC in Ruby 1.9 strings (mperham)
+* Drop rspec in favor of test/unit, for 1.9 compat (mperham)
+
+= 1.5.0.3 (FiveRuns fork)
+
+* Integrated ITU-T CRC32 operation in native C extension for speed. Thanks to Justin Balthrop!
+
+= 1.5.0.2 (FiveRuns fork)
+
+* Add support for seamless failover between servers. If one server connection dies,
+  the client will retry the operation on another server before giving up.
+
+* Merge Will Bryant's socket retry patch.
+  http://willbryant.net/software/2007/12/21/ruby-memcache-client-reconnect-and-retry
+
+= 1.5.0.1 (FiveRuns fork)
+
+* Fix set not handling client disconnects.
+  http://dev.twitter.com/2008/02/solving-case-of-missing-updates.html
+
 = 1.5.0

 * Add MemCache#flush_all command. Patch #13019 and bug #10503. Patches
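
The 1.6.0 entry above claims that adding a fourth server leaves roughly 75% of keys on their original server under consistent hashing, while a naive modulo scheme moves roughly 75% of them. A rough, self-contained sketch of that comparison (the host names, the use of Zlib.crc32 as the key hash, and the 160 points per server mirror the gem's defaults, but this is an illustration, not the gem's actual lookup code):

  require 'zlib'
  require 'digest/sha1'

  servers3 = %w[a:11211 b:11211 c:11211]
  servers4 = servers3 + %w[d:11211]
  keys     = (1..10_000).map { |i| "key#{i}" }

  # Naive modulo: adding a fourth server remaps roughly three quarters of the keys.
  mod   = lambda { |srvs, key| srvs[Zlib.crc32(key) % srvs.size] }
  moved = keys.count { |k| mod.call(servers3, k) != mod.call(servers4, k) }
  puts "modulo:    ~#{100 * moved / keys.size}% of keys moved"

  # Ketama-style continuum: only keys landing on the new server's points move.
  build  = lambda do |srvs|
    srvs.flat_map { |s| (0...160).map { |i| [Digest::SHA1.hexdigest("#{s}:#{i}")[0, 8].to_i(16), s] } }.sort
  end
  lookup = lambda do |ring, key|
    h = Zlib.crc32(key)
    (ring.find { |point, _| point >= h } || ring.first).last
  end
  ring3, ring4 = build.call(servers3), build.call(servers4)
  moved = keys.count { |k| lookup.call(ring3, k) != lookup.call(ring4, k) }
  puts "continuum: ~#{100 * moved / keys.size}% of keys moved"

Run over a few thousand keys, the modulo figure should come out near 75% moved and the continuum figure near 25%, which is the whole argument for the ketama-style continuum introduced in 1.6.0.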
data/LICENSE.txt
CHANGED

@@ -1,5 +1,5 @@
-
-
+Copyright 2005-2009 Bob Cottrell, Eric Hodel, Mike Perham.
+All rights reserved.

 Redistribution and use in source and binary forms, with or without
 modification, are permitted provided that the following conditions
data/README.rdoc
ADDED

@@ -0,0 +1,43 @@
+= memcache-client
+
+A pure ruby library for accessing memcached.
+
+Source:
+
+http://github.com/mperham/memcache-client
+
+== Installing memcache-client
+
+Just install the gem:
+
+  $ sudo gem install memcache-client
+
+== Using memcache-client
+
+With one server:
+
+  CACHE = MemCache.new 'localhost:11211', :namespace => 'my_namespace'
+
+Or with multiple servers:
+
+  CACHE = MemCache.new %w[one.example.com:11211 two.example.com:11211],
+                       :namespace => 'my_namespace'
+
+See MemCache.new for details. Please note memcache-client is not thread-safe
+by default. You should create a separate instance for each thread in your
+process.
+
+== Using memcache-client with Rails
+
+There's no need to use memcache-client in a Rails application. Rails 2.1+ includes
+a basic caching library which can be used with memcached. See ActiveSupport::Cache::Store
+for more details.
+
+== Questions?
+
+memcache-client is maintained by Mike Perham and was originally written by Bob Cottrell,
+Eric Hodel and the seattle.rb crew.
+
+Email:: mailto:mperham@gmail.com
+Twitter:: mperham[http://twitter.com/mperham]
+WWW:: http://mikeperham.com
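
The thread-safety note in the new README usually translates into one of two patterns in practice. Both are sketches built only from the options shown in this README and in lib/memcache.rb; the thread-local key name is an arbitrary choice:

  require 'memcache'

  # One client per thread via thread-local storage.
  def cache
    Thread.current[:memcache_client] ||=
      MemCache.new('localhost:11211', :namespace => 'my_namespace')
  end

  # Alternatively, a single shared instance can serialize access internally
  # with a Mutex by passing :multithread => true.
  SHARED_CACHE = MemCache.new 'localhost:11211', :multithread => true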
data/Rakefile
CHANGED

@@ -1,19 +1,26 @@
 # vim: syntax=Ruby
+require 'rubygems'
+require 'rake/rdoctask'
+require 'rake/testtask'

-
-
-
-require 'memcache'
+task :gem do
+  sh "gem build memcache-client.gemspec"
+end

-
-
-
-  p.author = ['Eric Hodel', 'Robert Cottrell']
-  p.email = 'drbrain@segment7.net'
-  p.url = p.paragraphs_of('README.txt', 6).first
-  p.changes = File.read('History.txt').scan(/\A(=.*?)^=/m).first.first
+task :install => [:gem] do
+  sh "sudo gem install memcache-client-*.gem"
+end

-
-
+Rake::RDocTask.new do |rd|
+  rd.main = "README.rdoc"
+  rd.rdoc_files.include("README.rdoc", "lib/**/*.rb")
+  rd.rdoc_dir = 'doc'
 end

+Rake::TestTask.new
+
+task :default => :test
+
+task :rcov do
+  `rcov -Ilib test/*.rb`
+end
data/ext/memcache/binary_search.c
ADDED

@@ -0,0 +1,54 @@
+#include "ruby.h"
+#include "stdio.h"
+
+/*
+def binary_search(ary, value)
+  upper = ary.size - 1
+  lower = 0
+  idx = 0
+
+  while(lower <= upper) do
+    idx = (lower + upper) / 2
+    comp = ary[idx].value <=> value
+
+    if comp == 0
+      return idx
+    elsif comp > 0
+      upper = idx - 1
+    else
+      lower = idx + 1
+    end
+  end
+  return upper
+end
+*/
+static VALUE binary_search(VALUE self, VALUE ary, VALUE number) {
+  int upper = RARRAY_LEN(ary) - 1;
+  int lower = 0;
+  int idx = 0;
+  unsigned int r = NUM2UINT(number);
+  ID value = rb_intern("value");
+
+  while (lower <= upper) {
+    idx = (lower + upper) / 2;
+
+    VALUE continuumValue = rb_funcall(RARRAY_PTR(ary)[idx], value, 0);
+    unsigned int l = NUM2UINT(continuumValue);
+    if (l == r) {
+      return INT2FIX(idx);
+    }
+    else if (l > r) {
+      upper = idx - 1;
+    }
+    else {
+      lower = idx + 1;
+    }
+  }
+  return INT2FIX(upper);
+}
+
+VALUE cContinuum;
+void Init_binary_search() {
+  cContinuum = rb_define_module("Continuum");
+  rb_define_module_function(cContinuum, "binary_search", binary_search, 2);
+}
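
The extension registers Continuum.binary_search(ary, value), mirroring the pure-Ruby version quoted in the comment block: it returns the index of an exact match, otherwise the index of the largest entry value below the target, and -1 when the target is below every entry. A small sketch of that behaviour, using a stand-in Struct in place of Continuum::Entry:

  require 'continuum'

  # Entry here is a stand-in Struct that exposes #value like Continuum::Entry.
  Entry = Struct.new(:value)
  ring  = [5, 10, 20, 40].map { |v| Entry.new(v) }

  Continuum.binary_search(ring, 10)  # => 1   (exact match)
  Continuum.binary_search(ring, 25)  # => 2   (index of the largest value <= 25)
  Continuum.binary_search(ring, 1)   # => -1  (below every entry; the caller's ring[-1] wraps to the last point)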
data/lib/continuum.rb
ADDED

@@ -0,0 +1,46 @@
+module Continuum
+  POINTS_PER_SERVER = 160 # this is the default in libmemcached
+
+  begin
+    require 'binary_search' # try to load native extension
+  rescue LoadError => e
+    puts "Unable to load fast binary search, falling back to pure Ruby: #{e.message}"
+
+    # slow but pure ruby version
+    # Find the closest index in Continuum with value <= the given value
+    def self.binary_search(ary, value, &block)
+      upper = ary.size - 1
+      lower = 0
+      idx = 0
+
+      while(lower <= upper) do
+        idx = (lower + upper) / 2
+        comp = ary[idx].value <=> value
+
+        if comp == 0
+          return idx
+        elsif comp > 0
+          upper = idx - 1
+        else
+          lower = idx + 1
+        end
+      end
+      return upper
+    end
+  end
+
+
+  class Entry
+    attr_reader :value
+    attr_reader :server
+
+    def initialize(val, srv)
+      @value = val
+      @server = srv
+    end
+
+    def inspect
+      "<#{value}, #{server.host}:#{server.port}>"
+    end
+  end
+end
data/lib/memcache.rb
CHANGED

@@ -3,46 +3,21 @@ $TESTING = defined?($TESTING) && $TESTING
 require 'socket'
 require 'thread'
 require 'timeout'
-require '
+require 'zlib'
+require 'digest/sha1'

-
-
-##
-# Uses the ITU-T polynomial in the CRC32 algorithm.
-
-  def crc32_ITU_T
-    n = length
-    r = 0xFFFFFFFF
-
-    n.times do |i|
-      r ^= self[i]
-      8.times do
-        if (r & 1) != 0 then
-          r = (r>>1) ^ 0xEDB88320
-        else
-          r >>= 1
-        end
-      end
-    end
-
-    r ^ 0xFFFFFFFF
-  end
-
-end
+require 'continuum'

 ##
 # A Ruby client library for memcached.
 #
-# This is intended to provide access to basic memcached functionality. It
-# does not attempt to be complete implementation of the entire API, but it is
-# approaching a complete implementation.

 class MemCache

   ##
   # The version of MemCache you are using.

-  VERSION = '1.
+  VERSION = '1.6.2'

   ##
   # Default options for the cache object.
@@ -51,6 +26,9 @@ class MemCache
     :namespace => nil,
     :readonly => false,
     :multithread => false,
+    :failover => true,
+    :timeout => 0.5,
+    :logger => nil,
   }

   ##
@@ -63,13 +41,6 @@ class MemCache

   DEFAULT_WEIGHT = 1

-  ##
-  # The amount of time to wait for a response from a memcached server. If a
-  # response is not completed within this time, the connection to the server
-  # will be closed and an error will be raised.
-
-  attr_accessor :request_timeout
-
   ##
   # The namespace for this instance

@@ -85,6 +56,23 @@ class MemCache

   attr_reader :servers

+  ##
+  # Socket timeout limit with this client, defaults to 0.25 sec.
+  # Set to nil to disable timeouts.
+
+  attr_reader :timeout
+
+  ##
+  # Should the client try to failover to another server if the
+  # first server is down? Defaults to true.
+
+  attr_reader :failover
+
+  ##
+  # Log debug/info/warn/error to the given Logger, defaults to nil.
+
+  attr_reader :logger
+
   ##
   # Accepts a list of +servers+ and a list of +opts+. +servers+ may be
   # omitted. See +servers=+ for acceptable server list arguments.
@@ -92,9 +80,13 @@ class MemCache
   # Valid options for +opts+ are:
   #
   # [:namespace] Prepends this value to all keys added or retrieved.
-  # [:readonly] Raises an
+  # [:readonly] Raises an exception on cache writes when true.
   # [:multithread] Wraps cache access in a Mutex for thread safety.
-  #
+  # [:failover] Should the client try to failover to another server if the
+  #             first server is down? Defaults to true.
+  # [:timeout] Time to use as the socket read timeout. Defaults to 0.25 sec,
+  #            set to nil to disable timeouts (this is a major performance penalty in Ruby 1.8).
+  # [:logger] Logger to use for info/debug output, defaults to nil
   # Other options are ignored.

   def initialize(*args)
@@ -121,8 +113,13 @@ class MemCache
     @namespace = opts[:namespace]
     @readonly = opts[:readonly]
     @multithread = opts[:multithread]
+    @timeout = opts[:timeout]
+    @failover = opts[:failover]
+    @logger = opts[:logger]
     @mutex = Mutex.new if @multithread
-
+
+    logger.info { "memcache-client #{VERSION} #{Array(servers).inspect}" } if logger
+
     self.servers = servers
   end

@@ -130,8 +127,8 @@ class MemCache
   # Returns a string representation of the cache object.

   def inspect
-    "<MemCache: %d servers,
-      [@servers.length, @
+    "<MemCache: %d servers, ns: %p, ro: %p>" %
+      [@servers.length, @namespace, @readonly]
   end

   ##
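
Putting the options documented above together, a client configured with the new 1.6.x knobs might look like this sketch (host names and values are illustrative):

  require 'logger'
  require 'memcache'

  cache = MemCache.new %w[one.example.com:11211 two.example.com:11211],
                       :namespace => 'my_app',
                       :timeout   => 0.5,   # socket read timeout in seconds; nil disables it
                       :failover  => false, # don't rehash keys to another server when one is down
                       :logger    => Logger.new($stdout)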
@@ -152,48 +149,44 @@ class MemCache
   # Set the servers that the requests will be distributed between. Entries
   # can be either strings of the form "hostname:port" or
   # "hostname:port:weight" or MemCache::Server objects.
-
+  #
   def servers=(servers)
     # Create the server objects.
-    @servers = servers.collect do |server|
+    @servers = Array(servers).collect do |server|
       case server
       when String
         host, port, weight = server.split ':', 3
         port ||= DEFAULT_PORT
         weight ||= DEFAULT_WEIGHT
         Server.new self, host, port, weight
-
-        if server.
+      else
+        if server.multithread != @multithread then
           raise ArgumentError, "can't mix threaded and non-threaded servers"
         end
         server
-      else
-        raise TypeError, "cannot convert #{server.class} into MemCache::Server"
       end
     end

-
-
-
-
-
+    logger.debug { "Servers now: #{@servers.inspect}" } if logger
+
+    # There's no point in doing this if there's only one server
+    @continuum = create_continuum_for(@servers) if @servers.size > 1
+
+    @servers
   end

   ##
-  #
+  # Decrements the value for +key+ by +amount+ and returns the new value.
   # +key+ must already exist. If +key+ is not an integer, it is assumed to be
   # 0. +key+ can not be decremented below 0.

   def decr(key, amount = 1)
-
-
-    if @multithread then
-      threadsafe_cache_decr server, cache_key, amount
-    else
+    raise MemCacheError, "Update of readonly cache" if @readonly
+    with_server(key) do |server, cache_key|
       cache_decr server, cache_key, amount
     end
-  rescue TypeError
-    handle_error
+  rescue TypeError => err
+    handle_error nil, err
   end

   ##
@@ -201,21 +194,15 @@ class MemCache
   # unmarshalled.

   def get(key, raw = false)
-    server, cache_key
-
-
-
-
-
-
-
-
-
-    value = Marshal.load value unless raw
-
-    return value
-  rescue TypeError, SocketError, SystemCallError, IOError => err
-    handle_error server, err
+    with_server(key) do |server, cache_key|
+      value = cache_get server, cache_key
+      logger.debug { "GET #{key} from #{server.inspect}: #{value ? value.to_s.size : 'nil'}" } if logger
+      return nil if value.nil?
+      value = Marshal.load value unless raw
+      return value
+    end
+  rescue TypeError => err
+    handle_error nil, err
   end

   ##
@@ -251,38 +238,36 @@ class MemCache

     results = {}

-    server_keys.each do |server,
-
-
-
-
-
-
-
+    server_keys.each do |server, keys_for_server|
+      keys_for_server_str = keys_for_server.join ' '
+      begin
+        values = cache_get_multi server, keys_for_server_str
+        values.each do |key, value|
+          results[cache_keys[key]] = Marshal.load value
+        end
+      rescue IndexError => e
+        # Ignore this server and try the others
+        logger.warn { "Unable to retrieve #{keys_for_server.size} elements from #{server.inspect}: #{e.message}"} if logger
       end
     end

     return results
-  rescue TypeError
-    handle_error
+  rescue TypeError => err
+    handle_error nil, err
   end

   ##
-  # Increments the value for +key+ by +amount+ and
+  # Increments the value for +key+ by +amount+ and returns the new value.
   # +key+ must already exist. If +key+ is not an integer, it is assumed to be
   # 0.

   def incr(key, amount = 1)
-
-
-    if @multithread then
-      threadsafe_cache_incr server, cache_key, amount
-    else
+    raise MemCacheError, "Update of readonly cache" if @readonly
+    with_server(key) do |server, cache_key|
       cache_incr server, cache_key, amount
     end
-  rescue TypeError
-    handle_error
+  rescue TypeError => err
+    handle_error nil, err
   end

   ##
@@ -292,24 +277,32 @@ class MemCache
   # Warning: Readers should not call this method in the event of a cache miss;
   # see MemCache#add.

+  ONE_MB = 1024 * 1024
+
   def set(key, value, expiry = 0, raw = false)
     raise MemCacheError, "Update of readonly cache" if @readonly
-    server, cache_key
-    socket = server.socket
+    with_server(key) do |server, cache_key|

-
-
+      value = Marshal.dump value unless raw
+      logger.debug { "SET #{key} to #{server.inspect}: #{value ? value.to_s.size : 'nil'}" } if logger

-
-
-
-
-
-
-
-
-
+      data = value.to_s
+      raise MemCacheError, "Value too large, memcached can only store 1MB of data per key" if data.size > ONE_MB
+
+      command = "set #{cache_key} 0 #{expiry} #{data.size}\r\n#{data}\r\n"
+
+      with_socket_management(server) do |socket|
+        socket.write command
+        result = socket.gets
+        raise_on_error_response! result
+
+        if result.nil?
+          server.close
+          raise MemCacheError, "lost connection to #{server.host}:#{server.port}"
+        end
+
+        result
+      end
     end
   end

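
One consequence of the ONE_MB check above is that oversized writes now fail fast on the client instead of being rejected by the server. A caller-side sketch, assuming MemCache::MemCacheError is the error class the gem raises (it is defined elsewhere in this file, outside the hunks shown here):

  require 'memcache'

  cache = MemCache.new 'localhost:11211'
  begin
    cache.set 'big', 'x' * (2 * 1024 * 1024)   # marshalled size is well over ONE_MB
  rescue MemCache::MemCacheError => e
    # => "Value too large, memcached can only store 1MB of data per key"
    warn e.message
  end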
@@ -323,21 +316,17 @@ class MemCache

   def add(key, value, expiry = 0, raw = false)
     raise MemCacheError, "Update of readonly cache" if @readonly
-    server, cache_key
-
-
-
-
-
-
-
-
-
-
-      server.close
-      raise MemCacheError, err.message
-    ensure
-      @mutex.unlock if @multithread
+    with_server(key) do |server, cache_key|
+      value = Marshal.dump value unless raw
+      logger.debug { "ADD #{key} to #{server}: #{value ? value.to_s.size : 'nil'}" } if logger
+      command = "add #{cache_key} 0 #{expiry} #{value.to_s.size}\r\n#{value}\r\n"
+
+      with_socket_management(server) do |socket|
+        socket.write command
+        result = socket.gets
+        raise_on_error_response! result
+        result
+      end
     end
   end

@@ -345,24 +334,15 @@ class MemCache
   # Removes +key+ from the cache in +expiry+ seconds.

   def delete(key, expiry = 0)
-
-
-
-
-
-
-
-
-
-    begin
-      sock.write "delete #{cache_key} #{expiry}\r\n"
-      sock.gets
-    rescue SocketError, SystemCallError, IOError => err
-      server.close
-      raise MemCacheError, err.message
+    raise MemCacheError, "Update of readonly cache" if @readonly
+    with_server(key) do |server, cache_key|
+      with_socket_management(server) do |socket|
+        socket.write "delete #{cache_key} #{expiry}\r\n"
+        result = socket.gets
+        raise_on_error_response! result
+        result
+      end
     end
-  ensure
-    @mutex.unlock if @multithread
   end

   ##
@@ -371,20 +351,19 @@ class MemCache
   def flush_all
     raise MemCacheError, 'No active servers' unless active?
     raise MemCacheError, "Update of readonly cache" if @readonly
+
     begin
       @mutex.lock if @multithread
       @servers.each do |server|
-
-
-
-
-        result
-        raise MemCacheError, $2.strip if result =~ /^(SERVER_)?ERROR(.*)/
-      rescue SocketError, SystemCallError, IOError => err
-        server.close
-        raise MemCacheError, err.message
+        with_socket_management(server) do |socket|
+          socket.write "flush_all\r\n"
+          result = socket.gets
+          raise_on_error_response! result
+          result
+        end
       end
+    rescue IndexError => err
+      handle_error nil, err
     ensure
       @mutex.unlock if @multithread
     end
@@ -436,16 +415,16 @@ class MemCache
     server_stats = {}

     @servers.each do |server|
-
-      raise MemCacheError, "No connection to server" if sock.nil?
+      next unless server.alive?

-
-
-
+      with_socket_management(server) do |socket|
+        value = nil
+        socket.write "stats\r\n"
         stats = {}
-        while line =
+        while line = socket.gets do
+          raise_on_error_response! line
           break if line == "END\r\n"
-          if line =~
+          if line =~ /\ASTAT ([\S]+) ([\w\.\:]+)/ then
             name, value = $1, $2
             stats[name] = case name
                           when 'version'
@@ -455,7 +434,7 @@ class MemCache
                             microseconds ||= 0
                             Float(seconds) + (Float(microseconds) / 1_000_000)
                           else
-                            if value =~
+                            if value =~ /\A\d+\Z/ then
                               value.to_i
                             else
                               value
@@ -464,12 +443,10 @@ class MemCache
           end
         end
         server_stats["#{server.host}:#{server.port}"] = stats
-      rescue SocketError, SystemCallError, IOError => err
-        server.close
-        raise MemCacheError, err.message
       end
     end

+    raise MemCacheError, "No active servers" if server_stats.empty?
     server_stats
   end

@@ -500,45 +477,49 @@ class MemCache
     end
   end

+  ##
+  # Returns an interoperable hash value for +key+. (I think, docs are
+  # sketchy for down servers).
+
+  def hash_for(key)
+    Zlib.crc32(key)
+  end
+
   ##
   # Pick a server to handle the request based on a hash of the key.

-  def get_server_for_key(key)
+  def get_server_for_key(key, options = {})
     raise ArgumentError, "illegal character in key #{key.inspect}" if
       key =~ /\s/
     raise ArgumentError, "key too long #{key.inspect}" if key.length > 250
     raise MemCacheError, "No servers available" if @servers.empty?
     return @servers.first if @servers.length == 1

-    hkey = hash_for
+    hkey = hash_for(key)

     20.times do |try|
-
+      entryidx = Continuum.binary_search(@continuum, hkey)
+      server = @continuum[entryidx].server
       return server if server.alive?
-
+      break unless failover
+      hkey = hash_for "#{try}#{key}"
     end
-
+
     raise MemCacheError, "No servers available"
   end

-  ##
-  # Returns an interoperable hash value for +key+. (I think, docs are
-  # sketchy for down servers).
-
-  def hash_for(key)
-    (key.crc32_ITU_T >> 16) & 0x7fff
-  end
-
   ##
   # Performs a raw decr for +cache_key+ from +server+. Returns nil if not
   # found.

   def cache_decr(server, cache_key, amount)
-
-
-
-
-
+    with_socket_management(server) do |socket|
+      socket.write "decr #{cache_key} #{amount}\r\n"
+      text = socket.gets
+      raise_on_error_response! text
+      return nil if text == "NOT_FOUND\r\n"
+      return text.to_i
+    end
   end

   ##
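
The failover path in get_server_for_key above re-hashes the key with the attempt number prepended rather than scanning servers in order, so a dead server's keys redistribute across the whole ring. Sketched with the same Zlib.crc32 hash that hash_for now uses:

  require 'zlib'

  # First attempt hashes the raw key; each retry prepends the attempt number.
  key  = 'session:42'
  hkey = Zlib.crc32(key)        # try 0
  hkey = Zlib.crc32("0#{key}")  # try 1 ("#{try}#{key}" with try == 0)
  hkey = Zlib.crc32("1#{key}")  # try 2, and so on, up to 20 attempts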
@@ -546,50 +527,54 @@ class MemCache
   # miss.

   def cache_get(server, cache_key)
-
-
-
+    with_socket_management(server) do |socket|
+      socket.write "get #{cache_key}\r\n"
+      keyline = socket.gets # "VALUE <key> <flags> <bytes>\r\n"

-
-
-
-
+      if keyline.nil? then
+        server.close
+        raise MemCacheError, "lost connection to #{server.host}:#{server.port}"
+      end

-
+      raise_on_error_response! keyline
+      return nil if keyline == "END\r\n"

-
-
-
+      unless keyline =~ /(\d+)\r/ then
+        server.close
+        raise MemCacheError, "unexpected response #{keyline.inspect}"
+      end
+      value = socket.read $1.to_i
+      socket.read 2 # "\r\n"
+      socket.gets   # "END\r\n"
+      return value
     end
-    value = socket.read $1.to_i
-    socket.read 2 # "\r\n"
-    socket.gets   # "END\r\n"
-    return value
   end

   ##
   # Fetches +cache_keys+ from +server+ using a multi-get.

   def cache_get_multi(server, cache_keys)
-
-
-
+    with_socket_management(server) do |socket|
+      values = {}
+      socket.write "get #{cache_keys}\r\n"

-
-
+      while keyline = socket.gets do
+        return values if keyline == "END\r\n"
+        raise_on_error_response! keyline

-
-
-
+        unless keyline =~ /\AVALUE (.+) (.+) (.+)/ then
+          server.close
+          raise MemCacheError, "unexpected response #{keyline.inspect}"
+        end
+
+        key, data_length = $1, $3
+        values[$1] = socket.read data_length.to_i
+        socket.read(2) # "\r\n"
       end

-
-
-      socket.read(2) # "\r\n"
+      server.close
+      raise MemCacheError, "lost connection to #{server.host}:#{server.port}" # TODO: retry here too
     end
-
-    server.close
-    raise MemCacheError, "lost connection to #{server.host}:#{server.port}"
   end

   ##
@@ -597,17 +582,79 @@ class MemCache
   # found.

   def cache_incr(server, cache_key, amount)
-
-
-
-
-
+    with_socket_management(server) do |socket|
+      socket.write "incr #{cache_key} #{amount}\r\n"
+      text = socket.gets
+      raise_on_error_response! text
+      return nil if text == "NOT_FOUND\r\n"
+      return text.to_i
+    end
+  end
+
+  ##
+  # Gets or creates a socket connected to the given server, and yields it
+  # to the block, wrapped in a mutex synchronization if @multithread is true.
+  #
+  # If a socket error (SocketError, SystemCallError, IOError) or protocol error
+  # (MemCacheError) is raised by the block, closes the socket, attempts to
+  # connect again, and retries the block (once). If an error is again raised,
+  # reraises it as MemCacheError.
+  #
+  # If unable to connect to the server (or if in the reconnect wait period),
+  # raises MemCacheError. Note that the socket connect code marks a server
+  # dead for a timeout period, so retrying does not apply to connection attempt
+  # failures (but does still apply to unexpectedly lost connections etc.).
+
+  def with_socket_management(server, &block)
+    @mutex.lock if @multithread
+    retried = false
+
+    begin
+      socket = server.socket
+
+      # Raise an IndexError to show this server is out of whack. If were inside
+      # a with_server block, we'll catch it and attempt to restart the operation.
+
+      raise IndexError, "No connection to server (#{server.status})" if socket.nil?
+
+      block.call(socket)
+
+    rescue SocketError => err
+      logger.warn { "Socket failure: #{err.message}" } if logger
+      server.mark_dead(err)
+      handle_error(server, err)
+
+    rescue MemCacheError, SystemCallError, IOError => err
+      logger.warn { "Generic failure: #{err.class.name}: #{err.message}" } if logger
+      handle_error(server, err) if retried || socket.nil?
+      retried = true
+      retry
+    end
+  ensure
+    @mutex.unlock if @multithread
+  end
+
+  def with_server(key)
+    retried = false
+    begin
+      server, cache_key = request_setup(key)
+      yield server, cache_key
+    rescue IndexError => e
+      logger.warn { "Server failed: #{e.class.name}: #{e.message}" } if logger
+      if !retried && @servers.size > 1
+        logger.info { "Connection to server #{server.inspect} DIED! Retrying operation..." } if logger
+        retried = true
+        retry
+      end
+      handle_error(nil, e)
+    end
   end

   ##
   # Handles +error+ from +server+.

   def handle_error(server, error)
+    raise error if error.is_a?(MemCacheError)
     server.close if server
     new_error = MemCacheError.new error.message
     new_error.set_backtrace error.backtrace
@@ -622,36 +669,32 @@ class MemCache
     raise MemCacheError, 'No active servers' unless active?
     cache_key = make_cache_key key
     server = get_server_for_key cache_key
-    raise MemCacheError, 'No connection to server' if server.socket.nil?
     return server, cache_key
   end

-  def
-
-
-
-    @mutex.unlock
+  def raise_on_error_response!(response)
+    if response =~ /\A(?:CLIENT_|SERVER_)?ERROR(.*)/
+      raise MemCacheError, $1.strip
+    end
   end

-  def
-
-
-  ensure
-    @mutex.unlock
-  end
+  def create_continuum_for(servers)
+    total_weight = servers.inject(0) { |memo, srv| memo + srv.weight }
+    continuum = []

-
-
-
-
-
+    servers.each do |server|
+      entry_count_for(server, servers.size, total_weight).times do |idx|
+        hash = Digest::SHA1.hexdigest("#{server.host}:#{server.port}:#{idx}")
+        value = Integer("0x#{hash[0..7]}")
+        continuum << Continuum::Entry.new(value, server)
+      end
+    end
+
+    continuum.sort { |a, b| a.value <=> b.value }
   end

-  def
-
-    cache_incr server, cache_key, amount
-  ensure
-    @mutex.unlock
+  def entry_count_for(server, total_servers, total_weight)
+    ((total_servers * Continuum::POINTS_PER_SERVER * server.weight) / Float(total_weight)).floor
   end

   ##
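
entry_count_for above sizes each server's share of the continuum by weight. Worked through for two servers with weights 1 and 2 (a sketch; the numbers are illustrative):

  points_per_server = 160          # Continuum::POINTS_PER_SERVER
  total_servers     = 2
  total_weight      = 3            # server A weight 1 + server B weight 2

  ((total_servers * points_per_server * 1) / Float(total_weight)).floor  # => 106 points for A
  ((total_servers * points_per_server * 2) / Float(total_weight)).floor  # => 213 points for B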
@@ -697,6 +740,9 @@ class MemCache

     attr_reader :status

+    attr_reader :multithread
+    attr_reader :logger
+
     ##
     # Create a new MemCache::Server object for the memcached instance
     # listening on the given host and port, weighted by the given weight.
@@ -705,17 +751,18 @@ class MemCache
       raise ArgumentError, "No host specified" if host.nil? or host.empty?
       raise ArgumentError, "No port specified" if port.nil? or port.to_i.zero?

-      @memcache = memcache
       @host   = host
       @port   = port.to_i
       @weight = weight.to_i

-      @multithread =
+      @multithread = memcache.multithread
       @mutex = Mutex.new

       @sock   = nil
       @retry  = nil
       @status = 'NOT CONNECTED'
+      @timeout = memcache.timeout
+      @logger = memcache.logger
     end

     ##
@@ -750,16 +797,16 @@ class MemCache

       # Attempt to connect if not already connected.
       begin
-        @sock = timeout
-
-        end
+        @sock = @timeout ? TCPTimeoutSocket.new(@host, @port, @timeout) : TCPSocket.new(@host, @port)
+
         if Socket.constants.include? 'TCP_NODELAY' then
           @sock.setsockopt Socket::IPPROTO_TCP, Socket::TCP_NODELAY, 1
         end
         @retry  = nil
         @status = 'CONNECTED'
       rescue SocketError, SystemCallError, IOError, Timeout::Error => err
-
+        logger.warn { "Unable to open socket: #{err.class.name}, #{err.message}" } if logger
+        mark_dead err
       end

       return @sock
@@ -781,17 +828,17 @@ class MemCache
       @mutex.unlock if @multithread
     end

-    private
-
     ##
     # Mark the server as dead and close its socket.

-    def mark_dead(
+    def mark_dead(error)
       @sock.close if @sock && !@sock.closed?
       @sock   = nil
       @retry  = Time.now + RETRY_DELAY

-
+      reason = "#{error.class.name}: #{error.message}"
+      @status = sprintf "%s:%s DEAD (%s), will retry at %s", @host, @port, reason, @retry
+      @logger.info { @status } if @logger
     end

   end
@@ -803,3 +850,47 @@ class MemCache

 end

+# TCPSocket facade class which implements timeouts.
+class TCPTimeoutSocket
+
+  def initialize(host, port, timeout)
+    Timeout::timeout(MemCache::Server::CONNECT_TIMEOUT, SocketError) do
+      @sock = TCPSocket.new(host, port)
+      @len = timeout
+    end
+  end
+
+  def write(*args)
+    Timeout::timeout(@len, SocketError) do
+      @sock.write(*args)
+    end
+  end
+
+  def gets(*args)
+    Timeout::timeout(@len, SocketError) do
+      @sock.gets(*args)
+    end
+  end
+
+  def read(*args)
+    Timeout::timeout(@len, SocketError) do
+      @sock.read(*args)
+    end
+  end
+
+  def _socket
+    @sock
+  end
+
+  def method_missing(meth, *args)
+    @sock.__send__(meth, *args)
+  end
+
+  def closed?
+    @sock.closed?
+  end
+
+  def close
+    @sock.close
+  end
+end