fast_cache 1.0.0 → 1.0.1
- data/.rspec +1 -0
- data/.travis.yml +8 -0
- data/README.md +20 -24
- data/Rakefile +14 -1
- data/fast_cache.gemspec +2 -0
- data/lib/fast_cache/version.rb +1 -1
- metadata +35 -2
data/.rspec
CHANGED
data/.travis.yml
ADDED
data/README.md
CHANGED
@@ -1,17 +1,16 @@
-# FastCache
+# FastCache [![Build Status](https://secure.travis-ci.org/swoop-inc/fast_cache.png)](http://travis-ci.org/swoop-inc/fast_cache?branch=master) [![Dependency Status](https://gemnasium.com/swoop-inc/fast_cache.png)](https://gemnasium.com/swoop-inc/fast_cache)
 
 
-There are two reasons why you may want to
+There are two reasons why you may want to skip this:
 
 1. This is yet another caching gem, which is grounds for extreme suspicion.
-
 2. Many Ruby developers don't care about performance.
 
-If you're still reading, there are three reasons why
+If you're still reading, there are three reasons why this is worth checking out:
 
-1. Performance is a feature users love. Products from 37signals' to Google's have proven this time and time again. Performance almost never matters if you are not successful but almost always does if you are. At [Swoop](http://swoop.com) we have tens of millions of users. We care about correctness, simplicity and maintainability but also
+1. Performance is a feature users love. Products from 37signals' to Google's have proven this time and time again. Performance almost never matters if you are not successful but almost always does if you are. At [Swoop](http://swoop.com) we have tens of millions of users. We care about correctness, simplicity and maintainability but also about performance.
 
-2. This cache benchmarks 10-100x faster than ActiveSupport::Cache::MemoryStore without breaking a sweat. You can switch to FastCache in a couple minutes and, most likely, you won't have to refactor your tests. FastCache has 100% test coverage at 20+ hits/line. There are no third party runtime dependencies so you can use this anywhere with Ruby 1.9+.
+2. This cache benchmarks 10-100x faster than [ActiveSupport::Cache::MemoryStore](http://api.rubyonrails.org/classes/ActiveSupport/Cache/MemoryStore.html) without breaking a sweat. You can switch to FastCache in a couple minutes and, most likely, you won't have to refactor your tests. FastCache has 100% test coverage at 20+ hits/line. There are no third party runtime dependencies so you can use this anywhere with Ruby 1.9+.
 
 3. The implementation exploits some neat features of Ruby's native data structures that could be useful and fun to learn about.
 
@@ -35,7 +34,7 @@ Or install it yourself as:
 
 FastCache::Cache is an in-process cache with least recently used (LRU) and time to live (TTL) expiration semantics, which makes it an easy replacement for ActiveSupport::Cache::MemoryStore as well as a great candidate for the in-process portion of a hierarchical caching system (FastCache sitting in front of, say, memcached or Redis).
 
-The current implementation is not thread-safe because at [Swoop](http://swoop.com) we prefer to handle simple concurrency in MRI Ruby via the [reactor pattern](http://en.wikipedia.org/wiki/Reactor_pattern) with [eventmachine](https://github.com/eventmachine/eventmachine). An easy way to add thread safety would be via a [synchronizing subclass](https://github.com/SamSaffron/lru_redux/blob/master/lib/lru_redux/thread_safe_cache.rb) or decorator.
+The current implementation is not thread-safe because at [Swoop](http://swoop.com) we prefer to handle simple concurrency in MRI Ruby via the [reactor pattern](http://en.wikipedia.org/wiki/Reactor_pattern) with [eventmachine](https://github.com/eventmachine/eventmachine). An easy way to add thread safety would be via a [synchronizing subclass](https://github.com/SamSaffron/lru_redux/blob/master/lib/lru_redux/thread_safe_cache.rb) or decorator.
 
 The implementation does not use a separate thread for expiring stale cached values. Instead, before a value is returned from the cache, its expiration time is checked. In order to avoid the case where a value that is never accessed cannot be removed, every _N_ operations the cache removes all expired values.
 
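The synchronizing decorator suggested in the diff above can be sketched in a few lines. This is an illustrative sketch, not FastCache's actual API: the wrapped object and the method names (`fetch`, `[]=`, `delete`) are assumptions chosen for the example.

```ruby
# A minimal sketch of a synchronizing decorator: it wraps any
# cache-like object and serializes every access through one Mutex.
# Method names here are illustrative, not FastCache's exact interface.
class ThreadSafeCache
  def initialize(cache)
    @cache = cache
    @lock  = Mutex.new
  end

  # Read (or compute-and-read) under the lock.
  def fetch(key, &block)
    @lock.synchronize { @cache.fetch(key, &block) }
  end

  # Write under the lock.
  def []=(key, value)
    @lock.synchronize { @cache[key] = value }
  end

  # Remove under the lock.
  def delete(key)
    @lock.synchronize { @cache.delete(key) }
  end
end
```

Because the decorator only forwards calls, it works equally well around a plain Hash during testing, e.g. `ThreadSafeCache.new({})`.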
@@ -61,7 +60,7 @@ cache.expire!
 
 ## Performance
 
-If you are looking for an in-process cache with LRU and time-to-live expiration semantics the go-to implementation is ActiveSupport::Cache::MemoryStore, which as of Rails 3.1 [started marshaling](http://apidock.com/rails/v3.2.13/ActiveSupport/Cache/Entry/value) the data even though the keys and values never leave the process boundary. The performance of the cache is dominated by marshaling and loading, i.e., by the size and complexity of keys and values. The better job you do of finding large, complex, cacheable data structures, the slower it will run. That doesn't feel right for an in-process cache.
+If you are looking for an in-process cache with LRU and time-to-live expiration semantics the go-to implementation is [ActiveSupport::Cache::MemoryStore](http://api.rubyonrails.org/classes/ActiveSupport/Cache/MemoryStore.html), which as of Rails 3.1 [started marshaling](http://apidock.com/rails/v3.2.13/ActiveSupport/Cache/Entry/value) the data even though the keys and values never leave the process boundary. The performance of the cache is dominated by marshaling and loading, i.e., by the size and complexity of keys and values. The better job you do of finding large, complex, cacheable data structures, the slower it will run. That doesn't feel right for an in-process cache.
 
 We benchmark against [LruRedux::Cache](https://github.com/SamSaffron/lru_redux), which was the inspiration behind FastCache::Cache and, of course, ActiveSupport::Cache::MemoryStore.
 
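The marshaling cost described in the diff above is easy to demonstrate directly. This is an illustrative sketch (the sample value and iteration count are arbitrary): it compares returning an object reference, which is all an in-process cache needs, against the `Marshal.dump`/`Marshal.load` round trip MemoryStore performs on every access.

```ruby
require 'benchmark'

# A moderately nested value, standing in for a cached data structure.
value = { 'users' => (1..100).map { |i| { 'id' => i, 'name' => "user#{i}" } } }

# Returning the object reference directly: essentially free.
direct = Benchmark.realtime { 10_000.times { value } }

# Round-tripping through Marshal, as a marshaling cache does per read:
# cost grows with the size and complexity of the value.
marshaled = Benchmark.realtime do
  10_000.times { Marshal.load(Marshal.dump(value)) }
end

puts format('direct: %.4fs, marshal round-trip: %.4fs', direct, marshaled)
```

The exact numbers depend on the machine and the value, but the marshaled loop is consistently orders of magnitude slower, which matches the README's point that MemoryStore penalizes exactly the large, complex values you most want to cache.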
@@ -74,7 +73,7 @@ bin/fast-cache-benchmark
 The [benchmark](bin/fast-cache-benchmark) includes a simple value test (caching just the Symbol `:value`) and a more complex value test (caching a [medium-size data structure](bench/caching_sample.json)). Both tests run for one million iterations with an expected cache hit rate of 50%.
 
 ```
-
+$ bin/fast-cache-benchmark
 Simple value benchmark
 Rehearsal ------------------------------------------------
 lru_redux 2.200000 0.020000 2.220000 ( 2.213863)
@@ -100,41 +99,38 @@ fast_cache 19.620000 0.030000 19.650000 ( 19.650379)
 memory_store 1286.790000 1.850000 1288.640000 (1289.115472)
 ```
 
-In both tests FastCache::Cache is 2-3x slower than LruRedux::Cache, which only provides LRU semantics. For small values, FastCache::Cache is 5x faster than ActiveSupport::Cache::MemoryStore. For more complex values the difference grows to 50
+In both tests FastCache::Cache is 2-3x slower than LruRedux::Cache, which only provides LRU expiration semantics. For small values, FastCache::Cache is 5x faster than ActiveSupport::Cache::MemoryStore. For more complex values the difference grows to 50-100x (67x in the particular benchmark).
 
-In one case
+In one case of CSV generation where every row involved looking up model attributes FastCache was more than 100 times faster. Operations that took many minutes now happen in seconds.
 
 
 ## Implementation
 
-[Sam Saffron](https://github.com/SamSaffron) noticed that Ruby 1.9 Hash's property to preserve insertion order can be used as a second index
+[Sam Saffron](https://github.com/SamSaffron) noticed that Ruby 1.9 Hash's property to preserve insertion order can be used as a second index into the hash, in addition to indexing by a key. That led Sam to create the [lru_redux](https://github.com/SamSaffron/lru_redux) gem, whose cache behaves in a very non-intuitive way at first glance. For example, the simplified pseudocode for the cache get operation is:
 
 ```
 cache[key]:
-value = @
-@
+value = @data.delete(key)
+@data[key] = value
 value
 ```
 
-In other words, the code performs two mutating operations (delete and insert) in order to satisfy a single non-mutating operation (get). Why? The reason is that this is how the cache maintains its least recently used removal property. The picture below shows the get operation step-by-step using a fictitious cache of names against some difficult-to-compute
+In other words, the code performs two mutating operations (delete and insert) in order to satisfy a single non-mutating operation (get). Why? The reason is that this is how the cache maintains its least recently used removal property. The picture below shows the get operation step-by-step using a fictitious cache of names against some difficult-to-compute scores.
 
 ![lru](https://www.lucidchart.com/publicSegments/view/525be92f-6034-40f7-b3b6-377d0a005604/image.png)
-If the cache gets full, it can create space by removing elements from the
+If the cache gets full, it can create space by removing elements from the front of its insertion order data structure using [Hash#shift](http://www.ruby-doc.org/core-2.0.0/Hash.html#method-i-shift).
+
+For those of you familiar with [Redis](http://redis.io), this approach to using a Ruby Hash may remind you of [sorted sets](http://redis.io/commands#sorted_set).
 
 To add time-based expiration, we need to:
 
 1. Keep track of expiration times.
-
 2. Index by expiration time, to clean up in `expire!`.
-
 3. Efficiently remove items from the expiration index when a stale item is detected.
 
-By exploiting the dual index property of Hash we can achieve this with just one extra hash. The diagram below shows the object relationships.
-
-![lru-and-ttl](https://www.lucidchart.com/publicSegments/view/525be9d7-fe08-40fb-9dd2-37850a005603/image.png)
-
-### A note about Time
+By exploiting the dual index property of Hash we can achieve this with just one extra "expires" hash, which is the inverse of our "data" hash. We keep the data hash ordered by recency of use and the expires hash ordered by insertion order, which is also the removal order because the time to live is constant. The diagram below shows the object relationships.
 
-
+![lru-and-ttl](https://www.lucidchart.com/publicSegments/view/525c994d-5b2c-48d3-aaf6-3fcb0a00d3e5/image.png)
 
 
 ## Contributing
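The delete-and-reinsert trick and `Hash#shift` eviction described in the diff above can be condensed into a small self-contained sketch. This is illustrative, not FastCache's code: for brevity it stores each expiration time alongside the value in the single ordered hash, rather than keeping the separate "expires" hash the README describes.

```ruby
# A condensed sketch of the technique: a Ruby >= 1.9 Hash preserves
# insertion order, so deleting and re-inserting a key on every get
# keeps @data ordered by recency of use, and Hash#shift evicts the
# least recently used entry from the front in O(1).
class TinyLruTtlCache
  def initialize(max_size, ttl_seconds)
    @max_size = max_size
    @ttl      = ttl_seconds
    @data     = {} # key => [value, expires_at], ordered by recency of use
  end

  def [](key)
    entry = @data.delete(key)           # remove from current position...
    return nil unless entry
    value, expires_at = entry
    return nil if Time.now > expires_at # stale entry stays deleted
    @data[key] = entry                  # ...re-insert at the back (MRU)
    value
  end

  def []=(key, value)
    @data.delete(key)                     # avoid double-counting an update
    @data.shift if @data.size >= @max_size # evict the LRU entry (front)
    @data[key] = [value, Time.now + @ttl]
    value
  end
end
```

Even in this toy form the two defining behaviors are visible: a get mutates the hash twice to refresh recency, and eviction never searches because the victim is always at the front.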
@@ -150,7 +146,7 @@ Please don't change the version and add solid tests: [simplecov](https://github.
 
 ## Credits
 
-
+[Sam Saffron](https://github.com/SamSaffron) for his guiding insight as well as [Richard Schneeman](https://github.com/schneems) and [Piotr Sarnacki](https://github.com/drogus) for [helping improve](https://github.com/rails/rails/issues/11512) ActiveSupport::Cache::MemoryStore.
 
 Who says Ruby can't be fun **and** fast?
 
data/Rakefile
CHANGED
@@ -1 +1,14 @@
-require
+require 'bundler/gem_tasks'
+require 'rspec/core/rake_task'
+require 'yard'
+
+desc 'Default: run the specs'
+task :default do
+  system("bundle exec rspec")
+end
+
+desc 'Run the specs'
+task :spec => :default
+
+YARD::Rake::YardocTask.new do |t|
+end
data/fast_cache.gemspec
CHANGED
@@ -24,4 +24,6 @@ Gem::Specification.new do |spec|
   spec.add_development_dependency "simplecov"
   spec.add_development_dependency "awesome_print"
   spec.add_development_dependency "timecop"
+  spec.add_development_dependency "yard"
+  spec.add_development_dependency "redcarpet"
 end
|
data/lib/fast_cache/version.rb
CHANGED
metadata
CHANGED
@@ -1,7 +1,7 @@
 --- !ruby/object:Gem::Specification
 name: fast_cache
 version: !ruby/object:Gem::Version
-  version: 1.0.0
+  version: 1.0.1
 prerelease:
 platform: ruby
 authors:
@@ -9,7 +9,7 @@ authors:
 autorequire:
 bindir: bin
 cert_chain: []
-date: 2013-10-
+date: 2013-10-15 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: bundler
@@ -107,6 +107,38 @@ dependencies:
     - - ! '>='
       - !ruby/object:Gem::Version
         version: '0'
+- !ruby/object:Gem::Dependency
+  name: yard
+  requirement: !ruby/object:Gem::Requirement
+    none: false
+    requirements:
+    - - ! '>='
+      - !ruby/object:Gem::Version
+        version: '0'
+  type: :development
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    none: false
+    requirements:
+    - - ! '>='
+      - !ruby/object:Gem::Version
+        version: '0'
+- !ruby/object:Gem::Dependency
+  name: redcarpet
+  requirement: !ruby/object:Gem::Requirement
+    none: false
+    requirements:
+    - - ! '>='
+      - !ruby/object:Gem::Version
+        version: '0'
+  type: :development
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    none: false
+    requirements:
+    - - ! '>='
+      - !ruby/object:Gem::Version
+        version: '0'
 description: Very fast LRU + TTL cache
 email:
 - sim@swoop.com
@@ -118,6 +150,7 @@ files:
 - .gitignore
 - .rspec
 - .simplecov
+- .travis.yml
 - .yardopts
 - Gemfile
 - LICENSE.txt