composite_cache_store 0.0.3 → 0.0.4
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +4 -4
- data/README.md +117 -47
- data/Rakefile +30 -1
- data/lib/composite_cache_store/version.rb +1 -1
- data/lib/composite_cache_store.rb +122 -96
- metadata +59 -3
checksums.yaml
CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 8e8b3140f8d6d37fd4b5c876b550863f13a4dc134142e1cbc7115c63d167a3d4
+  data.tar.gz: b16ced0803e2be6075d8fe80a6e532285a0f9aed1fef63a6d8cd0092563a1c9d
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: cdf5f5918114b2f7f148be7518131ed141df10ee1d8d6f0c5096859e3185b0f0698c9df1b47e9dd0525aeb9a8feb4113c28dff0538d281b417621711cc48f1a3
+  data.tar.gz: 2ed7a6b022cedcb8fa079e9d8d70a02b42c934b1d1b2d15bcc208c3480e422cc79e6ecdd039420f445103e08e4ca1f98b43f821054f6cd9624b88ac9766c5386
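These checksums cover the `metadata.gz` and `data.tar.gz` archives packed inside the published `.gem` file. A minimal verification sketch (not part of the package; it assumes you have fetched and untarred `composite_cache_store-0.0.4.gem` into the current directory):

```ruby
# Fetch and unpack first, e.g.:
#   gem fetch composite_cache_store -v 0.0.4
#   tar -xf composite_cache_store-0.0.4.gem
require "digest"
require "yaml"
require "zlib"

# checksums.yaml ships gzipped inside the .gem archive
gz = Zlib::GzipReader.open("checksums.yaml.gz")
expected = YAML.safe_load(gz.read)
gz.close

%w[metadata.gz data.tar.gz].each do |file|
  actual = Digest::SHA256.file(file).hexdigest
  status = actual == expected["SHA256"][file] ? "OK" : "MISMATCH"
  puts "#{file}: SHA256 #{status}"
end
```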
data/README.md
CHANGED
@@ -1,39 +1,53 @@
-
-
-
+<p align="center">
+  <h1 align="center">CompositeCacheStore 🚀</h1>
+  <p align="center">
+    <a href="http://blog.codinghorror.com/the-best-code-is-no-code-at-all/">
+      <img alt="Lines of Code" src="https://img.shields.io/badge/loc-137-47d299.svg" />
+    </a>
+    <a href="https://codeclimate.com/github/hopsoft/composite_cache_store/maintainability">
+      <img src="https://api.codeclimate.com/v1/badges/80bcd3acced072534a3a/maintainability" />
+    </a>
+    <a href="https://rubygems.org/gems/composite_cache_store">
+      <img alt="GEM Version" src="https://img.shields.io/gem/v/composite_cache_store?color=168AFE&include_prereleases&logo=ruby&logoColor=FE1616">
+    </a>
+    <a href="https://rubygems.org/gems/composite_cache_store">
+      <img alt="GEM Downloads" src="https://img.shields.io/gem/dt/composite_cache_store?color=168AFE&logo=ruby&logoColor=FE1616">
+    </a>
+    <a href="https://github.com/testdouble/standard">
+      <img alt="Ruby Style" src="https://img.shields.io/badge/style-standard-168AFE?logo=ruby&logoColor=FE1616" />
+    </a>
+    <a href="https://github.com/hopsoft/composite_cache_store/actions/workflows/tests.yml">
+      <img alt="Tests" src="https://github.com/hopsoft/composite_cache_store/actions/workflows/tests.yml/badge.svg" />
+    </a>
+    <a href="https://github.com/sponsors/hopsoft">
+      <img alt="Sponsors" src="https://img.shields.io/github/sponsors/hopsoft?color=eb4aaa&logo=GitHub%20Sponsors" />
+    </a>
+    <br>
+    <a href="https://ruby.social/@hopsoft">
+      <img alt="Ruby.Social Follow" src="https://img.shields.io/mastodon/follow/000008274?domain=https%3A%2F%2Fruby.social&label=%40hopsoft&style=social">
+    </a>
+    <a href="https://twitter.com/hopsoft">
+      <img alt="Twitter Follow" src="https://img.shields.io/twitter/url?label=%40hopsoft&style=social&url=https%3A%2F%2Ftwitter.com%2Fhopsoft">
+    </a>
+  </p>
+  <h2 align="center">Boost application speed and maximize user satisfaction with layered caching</h2>
+</p>
 
 <!-- Tocer[start]: Auto-generated, don't remove. -->
 
 ## Table of Contents
 
-- [Why a composite cache?](#why-a-composite-cache)
 - [Sponsors](#sponsors)
+- [Why a composite cache?](#why-a-composite-cache)
+- [Eventual consistency](#eventual-consistency)
 - [Dependencies](#dependencies)
 - [Installation](#installation)
 - [Setup](#setup)
-- [Ruby on Rails](#ruby-on-rails)
 - [Usage](#usage)
 - [License](#license)
 
 <!-- Tocer[finish]: Auto-generated, don't remove. -->
 
-## Why a composite cache?
-
-Most web applications implement some form of caching mechanics to improve performance.
-Sufficiently large applications often employ a persistence service to back the cache.
-_(Redis, Memcache, etc.)_ These services make it possible to use a shared cache between multiple machines/processes.
-
-While these services are robust and performant, they can also be a source of latency and are potential bottlenecks.
-__A composite (or layered) cache can mitigate these risks__
-by reducing traffic and backpressure on the persistence service.
-
-Consider a composite cache that wraps a remote Redis-backed "layer 2 cache" with a local in-memory "layer 1 cache".
-When both caches are warm, a read hit on the local in-memory "layer 1 cache" returns instantly and avoids the overhead of
-inter-process communication (IPC) and/or network traffic _(with its attendant data marshaling and socket/wire noise)_
-associated with accessing the remote Redis-backed "layer 2 cache".
-
-To summarize: __Reads prioritize the inner cache and fall back to the outer cache.__
-
 ## Sponsors
 
 <p align="center">
@@ -45,6 +59,48 @@ To summarize: __Reads prioritize the inner cache and fall back to the outer cach
   </a>
 </p>
 
+## Why a composite cache?
+
+Layered caching allows you to stack multiple caches with different scopes, lifetimes, and levels of reliability.
+It's a technique that yields several benefits.
+
+- __Improved performance__
+- __Higher throughput__
+- __Reduced load__
+- __Enhanced capacity/scalability__
+
+Inner cache layer(s) provide the fastest reads as they're close to the application, _typically in-memory within the same process_.
+Outer layers are slower _(still fast)_ but are shared by multiple processes and servers.
+
+<img height="250" src="https://ik.imagekit.io/hopsoft/composite_cache_store_jnHZcjAuK.svg?updatedAt=1679445477496" />
+
+You can configure each layer with different expiration times, eviction policies, and storage mechanisms.
+You're in control of balancing the trade-offs between performance and data freshness.
+
+__Inner layers are supersonic while outer layers are speedy.__
+
+The difference between a cache hit on a local in-memory store versus a cache hit on a remote store
+is similar to making a grocery run in a
+[Bugatti Chiron Super Sport 300+](https://www.bugatti.com/models/chiron-models/chiron-super-sport-300/)
+compared to making the same trip on a bicycle, but all cache layers will be much faster than the underlying operations.
+For example, a complete cache miss _(that triggers database queries and view rendering)_ would be equivalent to making this trip riding a sloth.
+
+## Eventual consistency
+
+Layered caching techniques exhibit some of the same traits as [distributed systems](https://en.wikipedia.org/wiki/Eventual_consistency)
+because inner layers may hold onto __stale data__ until their entries expire.
+__Be sure to configure inner layers appropriately with shorter lifetimes__.
+
+This behavior is similar to the
+[`race_condition_ttl`](https://api.rubyonrails.org/classes/ActiveSupport/Cache/Store.html#method-i-fetch-label-Options)
+option in `ActiveSupport::Cache::Store`, which helps to avoid race conditions whenever multiple threads/processes try to write to the same cache entry simultaneously.
+
+__Be mindful of the potential gotchas.__
+
+- __Data consistency__ - it's possible to end up with inconsistent or stale data
+- __Over-caching__ - caching too much can lead to increased memory usage and even slower performance
+- __Bugs/Testing__ - difficult bugs can be introduced with sophisticated caching techniques
+
 ## Dependencies
 
 - [ActiveSupport `>= 6.0`](https://github.com/rails/rails/tree/main/activesupport)
@@ -57,31 +113,39 @@ bundle add "composite_cache_store"
 
 ## Setup
 
-
-
-```ruby
-# config/environments/production.rb
-module Example
-  class Application < Rails::Application
-    config.cache_store = :redis_cache_store, { url: "redis://example.com:6379/1" }
-  end
-end
-```
+Here's an example of how you might set up layered caching in a Rails application.
 
 ```ruby
 # config/initializers/composite_cache_store.rb
 def Rails.composite_cache
   @composite_cache ||= CompositeCacheStore.new(
-
-
-
-
-
-
-
-
-
-
+    layers: [
+      # Layer 1 cache (fastest)
+      # Most beneficial for high traffic volume
+      # Isolated to the process running an application instance
+      ActiveSupport::Cache::MemoryStore.new(
+        expires_in: 15.minutes,
+        size: 32.megabytes
+      ),
+
+      # Layer 2 cache (faster)
+      # Most beneficial for moderate traffic volume
+      # Isolated to the machine running N-number of application instances,
+      # and shared by all application processes on the machine
+      ActiveSupport::Cache::RedisCacheStore.new(
+        url: "redis://localhost:6379/0",
+        expires_in: 2.hours
+      ),
+
+      # Layer 3 cache (fast)
+      # Global cache shared by all application processes on all machines
+      ActiveSupport::Cache::RedisCacheStore.new(
+        url: "redis://remote.example.com:6379/0",
+        expires_in: 7.days
+      ),
+
+      # additional layers are optional
+    ]
   )
 end
 ```
@@ -91,12 +155,18 @@ end
 A composite cache is ideal for mitigating hot spot latency in frequently invoked areas of the codebase.
 
 ```ruby
-# method that's invoked frequently by multiple processes
+# method that's invoked frequently by multiple processes/machines
 def hotspot
-
-
-
-  #
+  Rails.composite_cache.fetch("example", expires_in: 12.hours) do
+    # reserve for high frequency access of slow operations
+    #
+    # examples:
+    # - api invocations
+    # - database queries
+    # - template renders
+    # - etc.
+
+    frequently_accessed_slow_operation
   end
 end
 ```
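To make the new "Eventual consistency" warning concrete, here's an illustrative sketch (not from the README) that uses two in-memory stores so it runs standalone; the key and values are made up:

```ruby
require "active_support/all"
require "composite_cache_store"

store = CompositeCacheStore.new(
  layers: [
    ActiveSupport::Cache::MemoryStore.new(expires_in: 1.minute), # inner layer (short lifetime)
    ActiveSupport::Cache::MemoryStore.new(expires_in: 1.hour)    # outer layer (longer lifetime)
  ]
)

store.fetch("greeting") { "hello" } # cold read; warms both layers with "hello"

# Another process updates the shared outer layer directly...
store.layers.last.write("greeting", "hola")

# ...but this process keeps serving the stale inner entry until it expires,
# which is why inner layers should be configured with shorter lifetimes.
store.read("greeting") # => "hello" (stale for up to 1 minute)
```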
data/Rakefile
CHANGED
@@ -2,9 +2,38 @@
 
 require "bundler/gem_tasks"
 require "minitest/test_task"
+require "paint"
+
+# versions of rails to test against
+rails_versions = %w[
+  v5.2.8.1
+  v6.1.7.3
+  v7.0.4.3
+  edge
+]
 
 task default: :test
 
-Minitest::TestTask.create(:
+Minitest::TestTask.create(:minitest) do |t|
   t.test_globs = ["test/**/*_test.rb"]
 end
+
+task :test do
+  ENV["COMPOSITE_CACHE_STORE_ENV"] = "test"
+  rails_versions.each do |rails_version|
+    ENV["RAILS_VERSION"] = (rails_version == "edge") ? nil : rails_version
+    puts Paint % ["Bundling activesupport %{version} from github ", :blue, :underline, version: [rails_version, "sky blue", :underline]]
+    print Paint["required for tests provided by rails... ", "slate gray"]
+    `bundle update activesupport`
+    puts "done!\n\n"
+    Rake::Task["minitest"].invoke
+    Rake::Task["minitest"].reenable unless rails_version == rails_versions.last
+  end
+ensure
+  if ENV["GITHUB_ACTIONS"] != "true"
+    ENV["COMPOSITE_CACHE_STORE_ENV"] = nil
+    print Paint["Restoring bundle with activesupport from rubygems... ", :blue]
+    `bundle update activesupport`
+    puts "done!"
+  end
+end
data/lib/composite_cache_store.rb
CHANGED

@@ -4,146 +4,172 @@ require "active_support/all"
 require_relative "composite_cache_store/version"
 
 class CompositeCacheStore
-
-
-    size: 16.megabytes
-  }
-
-  DEFAULT_LAYER_2_OPTIONS = {
-    expires_in: 1.day,
-    size: 32.megabytes
-  }
-
-  attr_reader :layers
+  attr_reader :options, :layers
+  attr_accessor :logger
 
   # Returns a new CompositeCacheStore instance
-  def initialize(
-
-
-
-
+  def initialize(options = {})
+    options = options.dup || {}
+    layers = options.delete(:layers) || []
+
+    raise ArgumentError.new("A layered cache requires more than 1 layer!") unless layers.size > 1
 
-
-
-      raise ArgumentError.new(message) unless layer.is_a?(ActiveSupport::Cache::Store)
+    unless layers.all? { |layer| layer.is_a? ActiveSupport::Cache::Store }
+      raise ArgumentError.new("All layers must be instances of ActiveSupport::Cache::Store!")
     end
 
-    layers.freeze
-    @
+    @layers = layers.freeze
+    @logger = options[:logger]
+    @options = options
   end
 
-  def
-
+  def read(name, options = nil)
+    value = nil
+    warm_layer = layers.find { |layer| layer_read?(layer, name, options) { |val| value = val } }
+    yield(value, warm_layer) if block_given?
+    value
   end
 
-  def
-
+  def read_multi(*names)
+    value = {}
+    warm_layer = layers.find { |layer| layer_read_multi?(layer, *names) { |val| value.merge!(val) } }
+    yield(value, warm_layer) if block_given?
+    value
   end
 
-  def
-
+  def fetch(name, options = nil, &block)
+    options ||= {}
+
+    if options[:force]
+      raise ArgumentError, "Missing block: Calling `Cache#fetch` with `force: true` requires a block." unless block
+      value = block&.call(name)
+      layers.each { |layer| layer.write(name, value, options) }
+      return value
+    end
+
+    read(name, options) do |value, warm_layer|
+      value ||= block&.call(name) unless warm_layer
+
+      layers.each do |layer|
+        break if layer == warm_layer
+        layer.write(name, value, options) unless value.nil? && options[:skip_nil]
+      end
+
+      return value
+    end
   end
 
-  def
-
+  def fetch_multi(*names, &block)
+    raise ArgumentError, "Missing block: `Cache#fetch_multi` requires a block." unless block
+
+    keys = names.dup
+    options = keys.extract_options!
+
+    if options[:force]
+      value = keys.each_with_object({}) { |key, memo| memo[key] = block&.call(key) }
+      layers.each { |layer| layer.write_multi(value, options) }
+      return value
+    end
+
+    read_multi(*names) do |value, warm_layer|
+      unless warm_layer
+        missing_keys = keys - value.keys
+        missing_keys.each { |key| value[key] = block&.call(key) }
+      end
+
+      value.compact! if options[:skip_nil]
+
+      layers.each do |layer|
+        break if layer == warm_layer
+        layer.write_multi(value, options)
+      end
+
+      # return ordered hash value
+      return keys.each_with_object({}) { |key, memo| memo[key] = value[key] }
+    end
   end
 
-  def
-    layers.
+  def write(name, value, options = nil)
+    layers.map { |layer| layer.write(name, value, options) }.last
   end
 
-  def
-    layers.
+  def write_multi(hash, options = nil)
+    layers.map { |layer| layer.write_multi(hash, options) }.last
   end
 
-  def
-    layers.
-      return true if store.exist?(...)
-    end
-    false
+  def delete(...)
+    layers.map { |layer| layer.delete(...) }.last
   end
 
-  def
-
-      return store.fetch(*args, &block) if store == layers.last
-      store.fetch(*args) { f.call(layers[layers.index(store) + 1]) }
-    end
-    f.call(layers.first)
+  def delete_multi(...)
+    layers.map { |layer| layer.delete_multi(...) }.last
  end
 
-  def
-
-      return store.fetch_multi(*args, &block) if store == layers.last
-      store.fetch_multi(*args) { fm.call(layers[layers.index(store) + 1]) }
-    end
-    fm.call(layers.first)
+  def delete_matched(...)
+    layers.map { |layer| layer.delete_matched(...) }.last
   end
 
-  def increment(
-
+  def increment(name, amount = 1, options = nil)
+    provisional_layers.each { |layer| layer.delete(name, options) }
+    layers.last.increment(name, amount, options)
   end
 
-  def
-
-
-      store.mute { m.call(layers[layers.index(store) + 1]) }
-    end
-    m.call(layers.first)
+  def decrement(name, amount = 1, options = nil)
+    provisional_layers.each { |layer| layer.delete(name, options) }
+    layers.last.decrement(name, amount, options)
   end
 
-  def
-
-      return store.read(*args) if store == layers.last
-      store.fetch(*args) { r.call(layers[layers.index(store) + 1]) }
-    end
-    r.call(layers.first)
+  def cleanup(...)
+    layers.map { |layer| layer.cleanup(...) }.last
   end
 
-  def
-
-    layers.each do |store|
-      hash = store.read_multi(...)
-      if hash.present?
-        missed_layers.each { |s| s.write_multi(hash) }
-        return hash
-      end
-      missed_layers << store
-    end
-    {}
+  def clear(...)
+    layers.map { |layer| layer.clear(...) }.last
   end
 
-  def
-    layers.
+  def exist?(...)
+    layers.any? { |layer| layer.exist?(...) }
   end
 
-  def
-    layers.
-      store.write name, value, permitted_options(store, options)
-    end
+  def mute
+    layers.map { |layer| layer.mute { yield } }.last
   end
 
-  def
-    layers.
-      store.write_multi hash, permitted_options(store, options)
-    end
+  def silence!
+    layers.map { |layer| layer.silence! }.last
   end
 
   private
 
-  def
-
-
-
+  def provisional_layers
+    layers.take layers.size - 1
+  end
+
+  def layer_read?(layer, name, options)
+    if layer.respond_to?(:with_local_cache)
+      layer.with_local_cache do
+        value = layer.read(name, options)
+        yield value
+        value || layer.exist?(name, options)
+      end
+    else
+      value = layer.read(name, options)
+      yield value
+      value || layer.exist?(name, options)
+    end
   end
 
-  def
-
-
+  def layer_read_multi?(layer, *names)
+    keys = names.dup
+    keys.extract_options!
 
-
-
-
+    value = if layer.respond_to?(:with_local_cache)
+      layer.with_local_cache { layer.read_multi(*names) }
+    else
+      layer.read_multi(*names)
+    end
 
-
+    yield value
+    value.size == keys.size
   end
 end
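Two behaviors of the rewritten store are easy to miss when scanning the diff: `fetch` back-fills every layer sitting in front of the first warm layer, while `increment`/`decrement` evict the key from the provisional (inner) layers and only bump the counter on the outermost layer. A small illustrative sketch of the counter behavior, using two in-memory stores (not taken from the gem's tests):

```ruby
require "active_support/all"
require "composite_cache_store"

store = CompositeCacheStore.new(
  layers: [
    ActiveSupport::Cache::MemoryStore.new(expires_in: 1.minute),
    ActiveSupport::Cache::MemoryStore.new(expires_in: 1.hour)
  ]
)

store.write("visits", 1)          # writes the value to every layer
store.increment("visits")         # deletes the inner entry, increments the outer one

store.layers.first.read("visits") # => nil (evicted from the inner layer)
store.layers.last.read("visits")  # => 2
store.read("visits")              # => 2 (served by the outermost layer)
```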
metadata
CHANGED
@@ -1,14 +1,14 @@
 --- !ruby/object:Gem::Specification
 name: composite_cache_store
 version: !ruby/object:Gem::Version
-  version: 0.0.3
+  version: 0.0.4
 platform: ruby
 authors:
 - Nate Hopkins (hopsoft)
 autorequire:
 bindir: bin
 cert_chain: []
-date: 2023-03-
+date: 2023-03-31 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: activesupport
@@ -52,6 +52,20 @@ dependencies:
     - - ">="
       - !ruby/object:Gem::Version
         version: '0'
+- !ruby/object:Gem::Dependency
+  name: paint
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - ">="
+      - !ruby/object:Gem::Version
+        version: '0'
+  type: :development
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - ">="
+      - !ruby/object:Gem::Version
+        version: '0'
 - !ruby/object:Gem::Dependency
   name: pry-byebug
   requirement: !ruby/object:Gem::Requirement
@@ -67,7 +81,49 @@ dependencies:
       - !ruby/object:Gem::Version
         version: '0'
 - !ruby/object:Gem::Dependency
-  name:
+  name: pry-doc
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - ">="
+      - !ruby/object:Gem::Version
+        version: '0'
+  type: :development
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - ">="
+      - !ruby/object:Gem::Version
+        version: '0'
+- !ruby/object:Gem::Dependency
+  name: rake
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - ">="
+      - !ruby/object:Gem::Version
+        version: '0'
+  type: :development
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - ">="
+      - !ruby/object:Gem::Version
+        version: '0'
+- !ruby/object:Gem::Dependency
+  name: standard
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - ">="
+      - !ruby/object:Gem::Version
+        version: '0'
+  type: :development
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - ">="
+      - !ruby/object:Gem::Version
+        version: '0'
+- !ruby/object:Gem::Dependency
+  name: tocer
   requirement: !ruby/object:Gem::Requirement
     requirements:
     - - ">="