syncache 1.0.0
- data/.document +5 -0
- data/.gitignore +21 -0
- data/LICENSE +20 -0
- data/README.rdoc +91 -0
- data/Rakefile +63 -0
- data/VERSION +1 -0
- data/lib/syncache.rb +258 -0
- data/lib/syncache_sync_patch.rb +36 -0
- data/syncache.gemspec +63 -0
- data/test/helper.rb +9 -0
- data/test/test_syncache.rb +66 -0
- metadata +75 -0
data/.document
ADDED
data/.gitignore
ADDED
data/LICENSE
ADDED
@@ -0,0 +1,20 @@
Copyright (c) 2009 David Czarnecki

Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:

The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
data/README.rdoc
ADDED
@@ -0,0 +1,91 @@
= SynCache - thread-safe time-limited cache with flexible replacement policy

== Synopsis

  require 'syncache'

  @cache = SynCache::Cache.new

  @cache.fetch_or_add('key:1') do
    # expensive operation
  end

  @cache.flush(/^key:/)


== Description

SynCache::Cache stores cached objects in a Hash that is protected by an
advanced two-level locking mechanism. Two-level locking ensures that:

* Multiple threads can add and fetch objects in parallel without
  stepping on each other's toes.

* While one thread is working on a cache entry, other threads can access
  the rest of the cache with no waiting on the global lock, no race
  conditions, and no deadlock or livelock situations.

* While one thread is performing a long and resource-intensive
  operation, other threads that request the same data with the
  #fetch_or_add method are put on hold; as soon as the first thread
  completes the operation, the result is returned to all of them.
  Without this feature, a steady stream of requests arriving faster
  than a single request can be completed can easily bury a server under
  an avalanche of threads all wasting resources on the same expensive
  operation.

When the number of cache entries exceeds the size limit, the least
recently accessed entries are replaced with new data. This replacement
strategy is controlled by the SynCache::CacheEntry class and can be
changed by overriding its #replacement_index method.

Cache entries are automatically invalidated when their +ttl+ (time to
live) is exceeded. Entries can be explicitly invalidated with the
#flush method. The method uses the <tt>===</tt> operator to compare
cache keys against the flush base (so the base can be e.g. a Regexp),
and invalidates all entries when invoked without the +base+ parameter.

The +flush_delay+ initialization option allows limiting the cache's
flush rate. When this option is set, SynCache makes sure that at least
this many seconds (it can also be a fraction) pass between two flushes.
When extra flushes are requested, invalidation of the flushed entries
is postponed until the earliest time the next flush is allowed.

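As an illustrative sketch only (the parameter values and the
render_front_page helper below are placeholders, not part of this gem),
a cache with a one-minute TTL, a 1000-entry limit, and a rate-limited
flush could be set up like this:

  require 'syncache'

  # ttl = 60 seconds, max_size = 1000 entries, flush_delay = 0.5 seconds
  # (see Cache#initialize in lib/syncache.rb for the parameter order)
  @cache = SynCache::Cache.new(60, 1000, 0.5)

  # placeholder for an expensive computation
  @cache.fetch_or_add('page:front') { render_front_page }

  # Regexp base: only keys matching /^page:/ are invalidated
  @cache.flush(/^page:/)
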

== SynCache DRb Server

A SynCache::Cache object can be shared between multiple Ruby processes,
even across different computers. All you need is the
<tt>syncache-drb</tt> script shipped with this module. This script
starts a daemon that serves a SynCache::Cache object over the dRuby
protocol, with $SAFE set to 1 for security.

To access a remote cache, use the DRb library:

  require 'drb'

  # connect to the remote cache
  @cache = DRbObject.new_with_uri('druby://localhost:9000')

  # allow remote cache to access local objects from fetch_or_add blocks
  DRb.start_service('druby://localhost:0')

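The <tt>syncache-drb</tt> script itself does not appear in this gem's
file list. Purely as an illustrative sketch (assuming the syncache
library is on the load path; the port is an example), a minimal server
of this kind could look like:

  #!/usr/bin/env ruby
  require 'drb'
  require 'syncache'

  # restrict tainted operations in DRb handler threads, as described above
  $SAFE = 1

  # serve a shared cache object on a fixed port and block until shutdown
  DRb.start_service('druby://localhost:9000', SynCache::Cache.new)
  DRb.thread.join
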

== Copying

Copyright (c) 2002-2009 Dmitry Borodaenko <angdraug@debian.org>

This program is free software.
You can distribute/modify this program under the terms of the GNU
General Public License version 3 or later.


== Note on Patches/Pull Requests

* Fork the project.
* Make your feature addition or bug fix.
* Add tests for it. This is important so I don't break it in a
  future version unintentionally.
* Commit, but do not mess with the rakefile, version, or history.
  (If you want to have your own version, that is fine, but bump the
  version in a commit by itself so I can ignore it when I pull.)
* Send me a pull request. Bonus points for topic branches.
data/Rakefile
ADDED
@@ -0,0 +1,63 @@
require 'rubygems'
require 'rake'

begin
  require 'jeweler'
  Jeweler::Tasks.new do |gem|
    gem.name = "syncache"
    gem.summary = %Q{SynCache is a thread-safe time-limited cache with flexible replacement policy and ability to wrap generation of expensive cache entries in synchronized blocks. SynCache was used in the Samizdat open publishing engine since 2005, and now it's released as a stand-alone module ready for use in other applications.}
    gem.description = %Q{SynCache::Cache stores cached objects in a Hash that is protected by an advanced two-level locking mechanism. Two-level locking ensures that:

* Multiple threads can add and fetch objects in parallel without stepping on each other's toes.
* While one thread is working on a cache entry, other threads can access the rest of the cache with no waiting on the global lock, no race conditions nor deadlock or livelock situations.
* While one thread is performing a long and resource-intensive operation, other threads that request the same data with fetch_or_add() method will be put on hold, and as soon as the first thread completes the operation, the result will be returned to all threads. Without this feature, a steady stream of requests with less time between them than it takes to complete one request can easily bury a server under an avalanche of threads all wasting resources on the same expensive operation.

When number of cache entries exceeds the size limit, the least recently accessed entries are replaced with new data. This replacement strategy is controlled by the SynCache::CacheEntry class and can be changed by overriding its replacement_index() method.

Cache entries are automatically invalidated when their ttl (time to live) is exceeded. Entries can be explicitly invalidated by flush() method. The method can use === operator to compare cache keys against flush base (so that base can be e.g. a Regexp), and invalidates all entries when invoked without the base parameter.

The flush_delay initialization option allows to limit cache's flush rate. When this option is set, SynCache will make sure that at least this many seconds (it can also be a fraction) pass between two flushes. When extra flushes are requested, invalidation of flushed entries is postponed until earliest time when next flush is allowed.
}
    gem.email = "angdraug@debian.org"
    gem.homepage = "http://github.com/czarneckid/syncache"
    gem.authors = ["Dmitry Borodaenko", "David Czarnecki"]
    # gem is a Gem::Specification... see http://www.rubygems.org/read/chapter/20 for additional settings
  end
  Jeweler::GemcutterTasks.new
rescue LoadError
  puts "Jeweler (or a dependency) not available. Install it with: gem install jeweler"
end

require 'rake/testtask'
Rake::TestTask.new(:test) do |test|
  test.libs << 'lib' << 'test'
  test.pattern = 'test/**/test_*.rb'
  test.verbose = true
end

begin
  require 'rcov/rcovtask'
  Rcov::RcovTask.new do |test|
    test.libs << 'test'
    test.pattern = 'test/**/test_*.rb'
    test.verbose = true
  end
rescue LoadError
  task :rcov do
    abort "RCov is not available. In order to run rcov, you must: sudo gem install spicycode-rcov"
  end
end

task :test => :check_dependencies

task :default => :test

require 'rake/rdoctask'
Rake::RDocTask.new do |rdoc|
  version = File.exist?('VERSION') ? File.read('VERSION') : ""

  rdoc.rdoc_dir = 'rdoc'
  rdoc.title = "syncache #{version}"
  rdoc.rdoc_files.include('README*')
  rdoc.rdoc_files.include('lib/**/*.rb')
end
data/VERSION
ADDED
@@ -0,0 +1 @@
1.0.0
data/lib/syncache.rb
ADDED
@@ -0,0 +1,258 @@
# SynCache: thread-safe time-limited cache with flexible replacement policy
# (originally written for Samizdat project)
#
# Copyright (c) 2002-2009 Dmitry Borodaenko <angdraug@debian.org>
#
# This program is free software.
# You can distribute/modify this program under the terms of
# the GNU General Public License version 3 or later.
#
# vim: et sw=2 sts=2 ts=8 tw=0

require 'sync'
require 'syncache_sync_patch'

module SynCache

FOREVER = 60 * 60 * 24 * 365 * 5   # 5 years

class CacheEntry
  def initialize(ttl = nil, value = nil)
    @value = value
    @ttl = ttl
    @dirty = false
    record_access

    @sync = Sync.new
  end

  # stores the value object
  attr_accessor :value

  # change this to make the entry expire sooner
  attr_accessor :ttl

  # use this to synchronize access to +value+
  attr_reader :sync

  # record the fact that the entry was accessed
  #
  def record_access
    return if @dirty
    @expires = Time.now + (@ttl or FOREVER)
  end

  # entries with lowest index will be replaced first
  #
  def replacement_index
    @expires
  end

  # check if entry is stale
  #
  def stale?
    @expires < Time.now
  end

  # mark entry as dirty and schedule it to expire at given time
  #
  def expire_at(time)
    @expires = time if @expires > time
    @dirty = true
  end
end

class Cache

  # set to _true_ to report every single cache operation to syslog
  #
  DEBUG = false

  # a float number of seconds to sleep when a race condition is detected
  # (actual delay is randomized to avoid live lock situation)
  #
  LOCK_SLEEP = 0.2

  # _ttl_ (time to live) is time in seconds from the last access until cache
  # entry is expired (set to _nil_ to disable time limit)
  #
  # _max_size_ is max number of objects in cache
  #
  # _flush_delay_ is used to rate-limit flush operations: if less than that
  # number of seconds has passed since last flush, next flush will be delayed;
  # default is no rate limit
  #
  def initialize(ttl = 60*60, max_size = 5000, flush_delay = nil)
    @ttl = ttl
    @max_size = max_size

    if @flush_delay = flush_delay
      @last_flush = Time.now
    end

    @sync = Sync.new
    @cache = {}
  end

  # remove all values from cache
  #
  # if _base_ is given, only values with keys matching the base (using
  # <tt>===</tt> operator) are removed
  #
  def flush(base = nil)
    debug('flush ' << base.to_s)

    @sync.synchronize do

      if @flush_delay
        next_flush = @last_flush + @flush_delay

        if next_flush > Time.now
          flush_at(next_flush, base)
        else
          flush_now(base)
          @last_flush = Time.now
        end

      else
        flush_now(base)
      end
    end
  end

  # remove single value from cache
  #
  def delete(key)
    debug('delete ' << key.to_s)

    @sync.synchronize do
      @cache.delete(key)
    end
  end

  # store new value in cache
  #
  # see also Cache#fetch_or_add
  #
  def []=(key, value)
    debug('[]= ' << key.to_s)

    entry = get_entry(key)
    entry.sync.synchronize do
      entry.value = value
    end
    value
  end

  # retrieve value from cache if it's still fresh
  #
  # see also Cache#fetch_or_add
  #
  def [](key)
    debug('[] ' << key.to_s)

    entry = get_entry(key)
    entry.sync.synchronize(:SH) do
      entry.value
    end
  end

  # initialize missing cache entry from supplied block
  #
  # this is the preferred method of adding values to the cache as it locks the
  # key for the duration of computation of the supplied block to prevent
  # parallel execution of resource-intensive actions
  #
  def fetch_or_add(key)
    debug('fetch_or_add ' << key.to_s)

    entry = nil   # scope fix
    entry_locked = false
    until entry_locked do
      @sync.synchronize do
        entry = get_entry(key)
        entry_locked = entry.sync.try_lock   # fixme
      end
      sleep(rand * LOCK_SLEEP) unless entry_locked
    end

    begin
      entry.record_access
      entry.value ||= yield
    ensure
      entry.sync.unlock
    end
  end

  private

  # immediate flush (delete all entries matching _base_)
  #
  # must be run from inside global lock, see #flush
  #
  def flush_now(base = nil)
    if base
      @cache.delete_if {|key, entry| base === key }
    else
      @cache = {}
    end
  end

  # delayed flush (ensure all entries matching _base_ expire no later than _next_flush_)
  #
  # must be run from inside global lock, see #flush
  #
  def flush_at(next_flush, base = nil)
    @cache.each do |key, entry|
      next if base and not base === key
      entry.expire_at(next_flush)
    end
  end

  def get_entry(key)
    debug('get_entry ' << key.to_s)

    @sync.synchronize do
      entry = @cache[key]

      if entry.kind_of?(CacheEntry)
        if entry.stale?
          @cache[key] = entry = CacheEntry.new(@ttl)
        end
      else
        @cache[key] = entry = CacheEntry.new(@ttl)
        check_size
      end

      entry.record_access
      entry
    end
  end

  # remove oldest item from cache if size limit reached
  #
  def check_size
    debug('check_size')

    return unless @max_size.kind_of? Numeric

    @sync.synchronize do
      while @cache.size > @max_size do
        # optimize: supplement hash with queue
        oldest = @cache.keys.min {|a, b| @cache[a].replacement_index <=> @cache[b].replacement_index }

        @cache.delete(oldest)
      end
    end
  end

  # send debug output to syslog if enabled
  #
  def debug(message)
    if DEBUG and defined?(Syslog) and Syslog.opened?
      Syslog.debug(Thread.current.to_s << ' ' << message)
    end
  end
end

end # module SynCache
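The README states that the replacement strategy is controlled by
SynCache::CacheEntry and can be changed by overriding its
#replacement_index method. Purely as an illustrative sketch (not part of
the gem's files), reopening the class to count accesses would switch
eviction from least-recently-used to least-frequently-used:

  require 'syncache'

  module SynCache
    class CacheEntry
      # also count accesses, in addition to refreshing the expiry time
      def record_access
        return if @dirty
        @hits = (@hits || 0) + 1
        @expires = Time.now + (@ttl or FOREVER)
      end

      # entries with the lowest index are replaced first;
      # returning the hit count evicts the least-frequently-used entry
      def replacement_index
        @hits.to_i
      end
    end
  end
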
data/lib/syncache_sync_patch.rb
ADDED
@@ -0,0 +1,36 @@
# Monkey patch for standard sync.rb (see bug #11680 on RubyForge).

if RUBY_VERSION < "1.8.7" or (RUBY_VERSION == "1.8.7" and RUBY_PATCHLEVEL < 173)

  module Sync_m
    class Err < StandardError
      def Err.Fail(*opt)
        Thread.critical = false
        fail self, sprintf(self::Message, *opt)
      end
    end

    def sync_try_lock(mode = EX)
      return unlock if mode == UN

      Thread.critical = true
      ret = sync_try_lock_sub(mode)
      Thread.critical = false
      ret
    end
  end

elsif RUBY_VERSION == "1.9.0"

  module Sync_m
    def sync_try_lock(mode = EX)
      return unlock if mode == UN
      ret = nil
      @sync_mutex.synchronize do
        ret = sync_try_lock_sub(mode)
      end
      ret
    end
  end

end
data/syncache.gemspec
ADDED
@@ -0,0 +1,63 @@
# Generated by jeweler
# DO NOT EDIT THIS FILE DIRECTLY
# Instead, edit Jeweler::Tasks in Rakefile, and run the gemspec command
# -*- encoding: utf-8 -*-

Gem::Specification.new do |s|
  s.name = %q{syncache}
  s.version = "1.0.0"

  s.required_rubygems_version = Gem::Requirement.new(">= 0") if s.respond_to? :required_rubygems_version=
  s.authors = ["Dmitry Borodaenko", "David Czarnecki"]
  s.date = %q{2010-05-06}
  s.description = %q{SynCache::Cache stores cached objects in a Hash that is protected by an advanced two-level locking mechanism. Two-level locking ensures that:

* Multiple threads can add and fetch objects in parallel without stepping on each other's toes.
* While one thread is working on a cache entry, other threads can access the rest of the cache with no waiting on the global lock, no race conditions nor deadlock or livelock situations.
* While one thread is performing a long and resource-intensive operation, other threads that request the same data with fetch_or_add() method will be put on hold, and as soon as the first thread completes the operation, the result will be returned to all threads. Without this feature, a steady stream of requests with less time between them than it takes to complete one request can easily bury a server under an avalanche of threads all wasting resources on the same expensive operation.

When number of cache entries exceeds the size limit, the least recently accessed entries are replaced with new data. This replacement strategy is controlled by the SynCache::CacheEntry class and can be changed by overriding its replacement_index() method.

Cache entries are automatically invalidated when their ttl (time to live) is exceeded. Entries can be explicitly invalidated by flush() method. The method can use === operator to compare cache keys against flush base (so that base can be e.g. a Regexp), and invalidates all entries when invoked without the base parameter.

The flush_delay initialization option allows to limit cache's flush rate. When this option is set, SynCache will make sure that at least this many seconds (it can also be a fraction) pass between two flushes. When extra flushes are requested, invalidation of flushed entries is postponed until earliest time when next flush is allowed.
}
  s.email = %q{angdraug@debian.org}
  s.extra_rdoc_files = [
    "LICENSE",
    "README.rdoc"
  ]
  s.files = [
    ".document",
    ".gitignore",
    "LICENSE",
    "README.rdoc",
    "Rakefile",
    "VERSION",
    "lib/syncache.rb",
    "lib/syncache_sync_patch.rb",
    "syncache.gemspec",
    "test/helper.rb",
    "test/test_syncache.rb"
  ]
  s.homepage = %q{http://github.com/czarneckid/syncache}
  s.rdoc_options = ["--charset=UTF-8"]
  s.require_paths = ["lib"]
  s.rubygems_version = %q{1.3.6}
  s.summary = %q{SynCache is a thread-safe time-limited cache with flexible replacement policy and ability to wrap generation of expensive cache entries in synchronized blocks. SynCache was used in the Samizdat open publishing engine since 2005, and now it's released as a stand-alone module ready for use in other applications.}
  s.test_files = [
    "test/helper.rb",
    "test/test_syncache.rb"
  ]

  if s.respond_to? :specification_version then
    current_version = Gem::Specification::CURRENT_SPECIFICATION_VERSION
    s.specification_version = 3

    if Gem::Version.new(Gem::RubyGemsVersion) >= Gem::Version.new('1.2.0') then
    else
    end
  else
  end
end
data/test/helper.rb
ADDED
data/test/test_syncache.rb
ADDED
@@ -0,0 +1,66 @@
#!/usr/bin/env ruby
#
# SynCache tests
#
# Copyright (c) 2002-2009 Dmitry Borodaenko <angdraug@debian.org>
#
# This program is free software.
# You can distribute/modify this program under the terms of
# the GNU General Public License version 3 or later.
#
# vim: et sw=2 sts=2 ts=8 tw=0

require 'helper'

include SynCache

class TC_Cache < Test::Unit::TestCase

  def test_initialize
    cache = Cache.new(3, 5)
  end

  def test_flush
    cache = Cache.new(3, 5)
    cache['t'] = 'test'
    cache.flush
    assert_equal nil, cache['t']
  end

  def test_add_fetch
    cache = Cache.new(3, 5)
    cache['t'] = 'test'
    assert_equal 'test', cache['t']
  end

  def test_fetch_or_add
    cache = Cache.new(3, 5)
    assert_equal nil, cache['t']
    cache.fetch_or_add('t') { 'test' }
    assert_equal 'test', cache['t']
  end

  def test_truncate
    cache = Cache.new(3, 5)
    1.upto(5) {|i| cache[i] = i }
    1.upto(5) do |i|
      assert_equal i, cache[i]
    end
    6.upto(10) {|i| cache[i] = i }
    1.upto(5) do |i|
      assert_equal nil, cache[i]
    end
  end

  def test_timeout
    cache = Cache.new(0.01, 5)
    1.upto(5) {|i| cache[i] = i }
    1.upto(5) do |i|
      assert_equal i, cache[i]
    end
    sleep(0.02)
    1.upto(5) do |i|
      assert_equal nil, cache[i]
    end
  end
end
metadata
ADDED
@@ -0,0 +1,75 @@
--- !ruby/object:Gem::Specification
name: syncache
version: !ruby/object:Gem::Version
  prerelease: false
  segments:
  - 1
  - 0
  - 0
  version: 1.0.0
platform: ruby
authors:
- Dmitry Borodaenko
- David Czarnecki
autorequire:
bindir: bin
cert_chain: []

date: 2010-05-06 00:00:00 -04:00
default_executable:
dependencies: []

description: "SynCache::Cache stores cached objects in a Hash that is protected by an advanced two-level locking mechanism. Two-level locking ensures that:\n\n * Multiple threads can add and fetch objects in parallel without stepping on each other's toes.\n * While one thread is working on a cache entry, other threads can access the rest of the cache with no waiting on the global lock, no race conditions nor deadlock or livelock situations.\n * While one thread is performing a long and resource-intensive operation, other threads that request the same data with fetch_or_add() method will be put on hold, and as soon as the first thread completes the operation, the result will be returned to all threads. Without this feature, a steady stream of requests with less time between them than it takes to complete one request can easily bury a server under an avalanche of threads all wasting resources on the same expensive operation.\n\n When number of cache entries exceeds the size limit, the least recently accessed entries are replaced with new data. This replacement strategy is controlled by the SynCache::CacheEntry class and can be changed by overriding its replacement_index() method.\n\n Cache entries are automatically invalidated when their ttl (time to live) is exceeded. Entries can be explicitly invalidated by flush() method. The method can use === operator to compare cache keys against flush base (so that base can be e.g. a Regexp), and invalidates all entries when invoked without the base parameter.\n\n The flush_delay initialization option allows to limit cache's flush rate. When this option is set, SynCache will make sure that at least this many seconds (it can also be a fraction) pass between two flushes. When extra flushes are requested, invalidation of flushed entries is postponed until earliest time when next flush is allowed.\n "
email: angdraug@debian.org
executables: []

extensions: []

extra_rdoc_files:
- LICENSE
- README.rdoc
files:
- .document
- .gitignore
- LICENSE
- README.rdoc
- Rakefile
- VERSION
- lib/syncache.rb
- lib/syncache_sync_patch.rb
- syncache.gemspec
- test/helper.rb
- test/test_syncache.rb
has_rdoc: true
homepage: http://github.com/czarneckid/syncache
licenses: []

post_install_message:
rdoc_options:
- --charset=UTF-8
require_paths:
- lib
required_ruby_version: !ruby/object:Gem::Requirement
  requirements:
  - - ">="
    - !ruby/object:Gem::Version
      segments:
      - 0
      version: "0"
required_rubygems_version: !ruby/object:Gem::Requirement
  requirements:
  - - ">="
    - !ruby/object:Gem::Version
      segments:
      - 0
      version: "0"
requirements: []

rubyforge_project:
rubygems_version: 1.3.6
signing_key:
specification_version: 3
summary: SynCache is a thread-safe time-limited cache with flexible replacement policy and ability to wrap generation of expensive cache entries in synchronized blocks. SynCache was used in the Samizdat open publishing engine since 2005, and now it's released as a stand-alone module ready for use in other applications.
test_files:
- test/helper.rb
- test/test_syncache.rb