atomic_cache 0.2.2.rc1 → 0.4.0.rc1

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: bb65136822683212f95725ed2a56753b369cabc3ff68335310755f465e997dc7
- data.tar.gz: 891410bc30fcb78b0b8850913b7d3e2c70eff9d2063df3e76c233a3685a6c55e
+ metadata.gz: 0f8d7906f63f08bf7e8d7b9f9f88d05a209d65c37375c464f60448ffd9db427e
+ data.tar.gz: ac2b7c09f3299ccbb4dcf588cbe163134c8e81b94d79c08fe908afb6200911a7
  SHA512:
- metadata.gz: 13b9b39ea19ad6e157e6015a4e2670df741f9106d36b0f64931da60d651ecd61abd39c24183fb648498f3d4d54ab92684c389ced188650a61f6c2dcc7170748d
- data.tar.gz: e33b8ded8491885eb96508e678ddb8bbc7f62f691c795fbcbaa9303c0a6e578f5666793c64ca1db8b0fbf98a0d9257f1eb68653c0f862d416bf3d6b65fa012b3
+ metadata.gz: fa4f82e7cd8b729461b5b122a9f9cb92c2c36898c295a63bdd18a510133920ca5694d3e44b9cceceb7d5e2a078b2781f75481b32bda1e21804f720a00287eacb
+ data.tar.gz: 18ca04880e1f4c57308d3442fa4bbb50318121ba9d663ca2858819838ddcd27c118ad79b4767c60708872778deca668b7315f7cad32fdcb951cec2a18dc168ac
data/README.md CHANGED
@@ -23,7 +23,7 @@ In a nutshell:
  class Foo < ActiveRecord::Base
  include AtomicCache::GlobalLMTCacheConcern
 
- cache_class(:custom_foo) # optional
+ force_cache_class(:custom_foo) # optional
  cache_version(5) # optional
 
  def active_foos(ids)
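For orientation, a rough sketch of how the identifier set by `force_cache_class` and the version set by `cache_version` combine into the last-modified-time key. The helper below is hypothetical (not the gem's internals); the `foo:v3:lmt` shape it produces matches the key asserted in the concern specs later in this diff.

```ruby
# Hypothetical illustration only: how a cache identifier and version
# might compose into an LMT key of the form "<class>:v<version>:lmt".
def lmt_key(cache_class, version)
  "#{cache_class}:v#{version}:lmt"
end

lmt_key('custom_foo', 5) # => "custom_foo:v5:lmt"
```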
data/docs/MODEL_SETUP.md CHANGED
@@ -7,13 +7,13 @@ class Foo < ActiveRecord::Base
  end
  ```
 
- ### cache_class
+ ### force_cache_class
  By default the cache identifier for a class is set to the name of the class (i.e. `self.to_s`). In some cases it makes sense to set a custom value for the cache identifier. In cases where a custom cache identifier is set, it's important that the identifier remain unique across the project.
 
  ```ruby
  class SuperDescriptiveDomainModelAbstractFactoryImplManager < ActiveRecord::Base
  include AtomicCache::GlobalLMTCacheConcern
- cache_class('sddmafim')
+ force_cache_class('sddmafim')
  end
  ```
 
@@ -23,9 +23,10 @@ AtomicCache::DefaultConfig.configure do |config|
  config.metrics = Datadog::Statsd.new('localhost', 8125, namespace: 'cache.atomic')
 
  # note: these values can also be set in an env file for env-specific settings
- config.namespace = 'atom'
- config.cache_storage = AtomicCache::Storage::SharedMemory.new
- config.key_storage = AtomicCache::Storage::SharedMemory.new
+ config.namespace = 'atom'
+ config.default_options = { generate_ttl_ms: 500 }
+ config.cache_storage = AtomicCache::Storage::SharedMemory.new
+ config.key_storage = AtomicCache::Storage::SharedMemory.new
  end
  ```
 
@@ -36,7 +37,7 @@ Note that `Datadog::Statsd` is not _required_. Adding it, however, will enable
  * `key_storage` - Storage adapter for key manager (see below)
 
  #### Optional
- * `default_options` - Default options for every fetch call. See [fetch options](/Ibotta/atomic_cache/blob/main/docs/USAGE.md#fetch).
+ * `default_options` - Override default options for every fetch call, unless specified at call site. See [fetch options](/Ibotta/atomic_cache/blob/main/docs/USAGE.md#fetch).
  * `logger` - Logger instance. Used for debug and warn logs. Defaults to nil.
  * `timestamp_formatter` - Proc to format last modified time for storage. Defaults to timestamp (`Time.to_i`)
  * `metrics` - Metrics instance. Defaults to nil.
@@ -45,6 +46,49 @@ Note that `Datadog::Statsd` is not _required_. Adding it, however, will enable
  #### ★ Best Practice ★
  Keep the global namespace short. For example, memcached has a limit of 250 characters for key length.
 
+ #### More Complex Rails Configuration
+
+ In any real-world project, the need to run multiple caching strategies or setups is likely to arise. In those cases, it's often advantageous
+ to keep a DRY setup, with multiple caching clients sharing the same config. Because Rails initializers run after the environment-specific
+ config files, a sane way to manage this is to keep client network settings in the config files, then reference them from the initializer.
+
+ ```ruby
+ # config/environments/staging
+ config.memcache_hosts = [ "staging.host.cache.amazonaws.com" ]
+ config.cache_store_options = {
+ expires_in: 15.minutes,
+ compress: true,
+ # ...
+ }
+
+ # config/environments/production
+ config.memcache_hosts = [ "prod1.host.cache.amazonaws.com", "prod2.host.cache.amazonaws.com" ]
+ config.cache_store_options = {
+ expires_in: 1.hour,
+ compress: true,
+ # ...
+ }
+
+ # config/initializers/cache.rb
+ AtomicCache::DefaultConfig.configure do |config|
+ if Rails.env.development? || Rails.env.test?
+ config.cache_storage = AtomicCache::Storage::SharedMemory.new
+ config.key_storage = AtomicCache::Storage::SharedMemory.new
+
+ elsif Rails.env.staging? || Rails.env.production?
+ # Your::Application.config will be loaded by config/environments/*
+ memcache_hosts = Your::Application.config.memcache_hosts
+ options = Your::Application.config.cache_store_options
+
+ dc = Dalli::Client.new(memcache_hosts, options)
+ config.cache_storage = AtomicCache::Storage::Dalli.new(dc)
+ config.key_storage = AtomicCache::Storage::Dalli.new(dc)
+ end
+
+ # other AtomicCache configuration...
+ end
+ ```
+
  ## Storage Adapters
 
  ### InstanceMemory & SharedMemory
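The `default_options` behavior described above (configured defaults apply to every fetch unless the call site overrides them) can be pictured as a plain hash merge. This is an illustrative sketch, not the gem's implementation:

```ruby
# Illustrative only: call-site options win over configured defaults.
configured_defaults = { generate_ttl_ms: 500 }
call_site_options   = { generate_ttl_ms: 1000, max_retries: 3 }

effective = configured_defaults.merge(call_site_options)
# => { generate_ttl_ms: 1000, max_retries: 3 }
```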
data/docs/USAGE.md CHANGED
@@ -38,17 +38,6 @@ The ideal `generate_ttl_ms` time is just slightly longer than the average genera
 
  If metrics are enabled, the `<namespace>.generate.run` can be used to determine the min/max/average generate time for a particular cache and the `generate_ttl_ms` tuned using that.
 
- #### `quick_retry_ms`
- _`false` to disable. Defaults to false._
-
- In the case where another process is computing the new cache value, before falling back to the last known value, if `quick_retry_ms` has a value the atomic client will check the new cache once after the given duration (in milliseconds).
-
- The danger with `quick_retry_ms` is that when enabled it applies a delay to all fall-through requests at the cost of only benefitting some customers. As the average generate block duration increases, the effectiveness of `quick_retry_ms` decreases because there is less of a likelihood that a customer will get a fresh value. Consider the graph below. For example, a cache with an average generate duration of 200ms, configured with a `quick_retry_ms` of 50ms (red) will only likely get a fresh value for 25% of customers.
-
- `quick_retry_ms` is most effective for caches that are quick to generate but whose values are slow to change. `quick_retry_ms` is least effective for caches that are slow to update but quick to change.
-
- ![quick_retry_ms graph](https://github.com/Ibotta/atomic_cache/raw/ca473f28e179da8c24f638eeeeb48750bc8cbe64/docs/img/quick_retry_graph.png)
-
  #### `max_retries` & `backoff_duration_ms`
  _`max_retries` defaults to 5._
  _`backoff_duration_ms` defaults to 50ms._
data/lib/atomic_cache/atomic_cache_client.rb CHANGED
@@ -6,7 +6,6 @@ require 'active_support/core_ext/hash'
  module AtomicCache
  class AtomicCacheClient
 
- DEFAULT_quick_retry_ms = false
  DEFAULT_MAX_RETRIES = 5
  DEFAULT_GENERATE_TIME_MS = 30000 # 30 seconds
  BACKOFF_DURATION_MS = 50
@@ -27,13 +26,11 @@ module AtomicCache
  raise ArgumentError.new("`storage` required but none given") unless @storage.present?
  end
 
-
  # Attempts to fetch the given keyspace, using an optional block to generate
  # a new value when the cache is expired
  #
  # @param keyspace [AtomicCache::Keyspace] the keyspace to fetch
  # @option options [Numeric] :generate_ttl_ms (30000) Max generate duration in ms
- # @option options [Numeric] :quick_retry_ms (false) Short duration to check back before using last known value
  # @option options [Numeric] :max_retries (5) Max times to retry in waiting case
  # @option options [Numeric] :backoff_duration_ms (50) Duration in ms to wait between retries
  # @yield Generates a new value when cache is expired
@@ -45,6 +42,7 @@ module AtomicCache
  value = @storage.read(key, options) if key.present?
  if !value.nil?
  metrics(:increment, 'read.present', tags: tags)
+ log(:debug, "Read value from key: '#{key}'")
  return value
  end
 
@@ -57,15 +55,14 @@ module AtomicCache
  return new_value unless new_value.nil?
  end
 
- # quick check back to see if the other process has finished
- # or fall back to the last known value
- value = quick_retry(keyspace, options, tags) || last_known_value(keyspace, options, tags)
+ # attempt to fall back to the last known value
+ value = last_known_value(keyspace, options, tags)
  return value if value.present?
 
  # wait for the other process if a last known value isn't there
  if key.present?
  return time('wait.run', tags: tags) do
- wait_for_new_value(key, options, tags)
+ wait_for_new_value(keyspace, options, tags)
  end
  end
 
@@ -109,24 +106,6 @@ module AtomicCache
  nil
  end
 
- def quick_retry(keyspace, options, tags)
- key = @timestamp_manager.current_key(keyspace)
- duration = option(:quick_retry_ms, options, DEFAULT_quick_retry_ms)
-
- if duration.present? and key.present?
- sleep(duration.to_f / 1000)
- value = @storage.read(key, options)
-
- if !value.nil?
- metrics(:increment, 'empty-cache-retry.present', tags: tags)
- return value
- end
- metrics(:increment, 'empty-cache-retry.not-present', tags: tags)
- end
-
- nil
- end
-
  def last_known_value(keyspace, options, tags)
  lkk = @timestamp_manager.last_known_key(keyspace)
 
@@ -136,6 +115,7 @@ module AtomicCache
  # last known key may have expired
  if !lkv.nil?
  metrics(:increment, 'last-known-value.present', tags: tags)
+ log(:debug, "Read value from last known value key: '#{lkk}'")
  return lkv
  end
 
@@ -149,7 +129,7 @@ module AtomicCache
  nil
  end
 
- def wait_for_new_value(key, options, tags)
+ def wait_for_new_value(keyspace, options, tags)
  max_retries = option(:max_retries, options, DEFAULT_MAX_RETRIES)
  max_retries.times do |attempt|
  metrics_tags = tags.clone.push("attempt:#{attempt}")
@@ -160,6 +140,8 @@ module AtomicCache
  backoff_duration_ms = option(:backoff_duration_ms, options, backoff_duration_ms)
  sleep((backoff_duration_ms.to_f / 1000) * attempt)
 
+ # re-fetch the key each time, to make sure we're actually getting the latest key with the correct LMT
+ key = @timestamp_manager.current_key(keyspace)
  value = @storage.read(key, options)
  if !value.nil?
  metrics(:increment, 'wait.present', tags: metrics_tags)
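As a quick illustration of the linear backoff in the hunk above (the sleep grows with the attempt index, so attempt 0 does not sleep), the schedule for the defaults of 5 retries at 50ms works out as follows. Plain Ruby, independent of the gem:

```ruby
# Sleep schedule in milliseconds for the waiting loop, assuming the
# defaults: max_retries = 5, backoff_duration_ms = 50.
backoff_duration_ms = 50
max_retries = 5

schedule_ms = (0...max_retries).map { |attempt| backoff_duration_ms * attempt }
# => [0, 50, 100, 150, 200]
# the client sleeps (ms / 1000.0) seconds between reads
```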
@@ -168,7 +150,7 @@ module AtomicCache
  end
 
  metrics(:increment, 'wait.give-up')
- log(:warn, "Giving up fetching cache key `#{key}`. Exceeded max retries (#{max_retries}).")
+ log(:warn, "Giving up waiting. Exceeded max retries (#{max_retries}).")
  nil
  end
 
data/lib/atomic_cache/concerns/global_lmt_cache_concern.rb CHANGED
@@ -26,7 +26,7 @@ module AtomicCache
  end
  end
 
- def cache_class(kls)
+ def force_cache_class(kls)
  ATOMIC_CACHE_CONCERN_MUTEX.synchronize do
  @atomic_cache_class = kls
  end
data/lib/atomic_cache/storage/dalli.rb CHANGED
@@ -10,10 +10,6 @@ module AtomicCache
  class Dalli < Store
  extend Forwardable
 
- ADD_SUCCESS = 'STORED'
- ADD_UNSUCCESSFUL = 'NOT_STORED'
- ADD_EXISTS = 'EXISTS'
-
  def_delegators :@dalli_client, :delete
 
  def initialize(dalli_client)
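Context for the hunk below: the string-protocol constants were dropped because newer Dalli clients return a truthy value (such as a CAS-style token) from `add` on success and `false`/`nil` on failure, so double negation is enough to normalize the result to a strict boolean. The helper here is purely illustrative, not part of the gem:

```ruby
# Illustrative helper (not the gem's code): normalize a Dalli-style
# response to true/false with double negation.
def add_succeeded?(response)
  !!response
end

add_succeeded?(12339031748204560384) # => true  (CAS-style token)
add_succeeded?(false)                # => false
add_succeeded?(nil)                  # => false
```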
@@ -27,8 +23,7 @@ module AtomicCache
  # dalli expects time in seconds
  # https://github.com/petergoldstein/dalli/blob/b8f4afe165fb3e07294c36fb1c63901b0ed9ce10/lib/dalli/client.rb#L27
  # TODO: verify this unit is being treated correctly through the system
- response = @dalli_client.add(key, new_value, ttl, opts)
- response.start_with?(ADD_SUCCESS)
+ !!@dalli_client.add(key, new_value, ttl, opts)
  end
 
  def read(key, user_options={})
@@ -38,7 +33,7 @@ module AtomicCache
  def set(key, value, user_options={})
  ttl = user_options[:ttl]
  user_options.delete(:ttl)
- @dalli_client.set(key, value, ttl, user_options)
+ !!@dalli_client.set(key, value, ttl, user_options)
  end
 
  end
data/lib/atomic_cache/version.rb CHANGED
@@ -1,5 +1,5 @@
  # frozen_string_literal: true
 
  module AtomicCache
- VERSION = "0.2.2.rc1"
+ VERSION = "0.4.0.rc1"
  end
data/spec/atomic_cache/atomic_cache_client_spec.rb CHANGED
@@ -138,17 +138,6 @@ describe 'AtomicCacheClient' do
  timestamp_manager.lock(keyspace, 100)
  end
 
- it 'waits for a short duration to see if the other thread generated the value' do
- timestamp_manager.promote(keyspace, last_known_key: 'lkk', timestamp: 1420090000)
- key_storage.set('lkk', 'old:value')
- new_value = 'value from another thread'
- allow(cache_storage).to receive(:read)
- .with(timestamp_manager.current_key(keyspace), anything)
- .and_return(nil, new_value)
-
- expect(subject.fetch(keyspace, quick_retry_ms: 5) { 'value' }).to eq(new_value)
- end
-
  context 'when the last known value is present' do
  it 'returns the last known value' do
  timestamp_manager.promote(keyspace, last_known_key: 'lkk', timestamp: 1420090000)
@@ -191,17 +180,6 @@ describe 'AtomicCacheClient' do
  end
 
  context 'and when a block is NOT given' do
- it 'waits for a short duration to see if the other thread generated the value' do
- timestamp_manager.promote(keyspace, last_known_key: 'asdf', timestamp: 1420090000)
- new_value = 'value from another thread'
- allow(cache_storage).to receive(:read)
- .with(timestamp_manager.current_key(keyspace), anything)
- .and_return(nil, new_value)
-
- result = subject.fetch(keyspace, quick_retry_ms: 50)
- expect(result).to eq(new_value)
- end
-
  it 'returns nil if nothing is present' do
  expect(subject.fetch(keyspace)).to eq(nil)
  end
data/spec/atomic_cache/concerns/global_lmt_cache_concern_spec.rb CHANGED
@@ -104,12 +104,12 @@ describe 'AtomicCacheConcern' do
  class Foo2
  include AtomicCache::GlobalLMTCacheConcern
  cache_version(3)
- cache_class('foo')
+ force_cache_class('foo')
  end
  Foo2
  end
 
- it 'uses the given version and cache_class become part of the cache keyspace' do
+ it 'uses the given version and force_cache_class become part of the cache keyspace' do
  subject.expire_cache
  expect(key_storage.store).to have_key(:'foo:v3:lmt')
  end
data/spec/atomic_cache/integration/integration_spec.rb ADDED
@@ -0,0 +1,137 @@
+ # frozen_string_literal: true
+
+ require 'spec_helper'
+
+ describe 'Integration -' do
+ let(:key_storage) { AtomicCache::Storage::SharedMemory.new }
+ let(:cache_storage) { AtomicCache::Storage::SharedMemory.new }
+ let(:keyspace) { AtomicCache::Keyspace.new(namespace: 'int.waiting') }
+ let(:timestamp_manager) { AtomicCache::LastModTimeKeyManager.new(keyspace: keyspace, storage: key_storage) }
+
+ before(:each) do
+ key_storage.reset
+ cache_storage.reset
+ end
+
+ describe 'fallback:' do
+ let(:generating_client) { AtomicCache::AtomicCacheClient.new(storage: cache_storage, timestamp_manager: timestamp_manager) }
+ let(:fallback_client) { AtomicCache::AtomicCacheClient.new(storage: cache_storage, timestamp_manager: timestamp_manager) }
+
+ it 'falls back to the old value when a lock is present' do
+ old_time = Time.local(2021, 1, 1, 15, 30, 0)
+ new_time = Time.local(2021, 1, 1, 16, 30, 0)
+
+ # prime cache with an old value
+
+ Timecop.freeze(old_time) do
+ generating_client.fetch(keyspace) { "old value" }
+ end
+ timestamp_manager.last_modified_time = new_time
+
+ # start generating process for new time
+ generating_thread = ClientThread.new(generating_client, keyspace)
+ generating_thread.start
+ sleep 0.05
+
+ value = fallback_client.fetch(keyspace)
+ generating_thread.terminate
+
+ expect(value).to eq("old value")
+ end
+ end
+
+ describe 'waiting:' do
+ let(:generating_client) { AtomicCache::AtomicCacheClient.new(storage: cache_storage, timestamp_manager: timestamp_manager) }
+ let(:waiting_client) { AtomicCache::AtomicCacheClient.new(storage: cache_storage, timestamp_manager: timestamp_manager) }
+
+ it 'waits for a key when no last known value is available' do
+ generating_thread = ClientThread.new(generating_client, keyspace)
+ generating_thread.start
+ waiting_thread = ClientThread.new(waiting_client, keyspace)
+ waiting_thread.start
+
+ generating_thread.generate
+ sleep 0.05
+ waiting_thread.fetch
+ sleep 0.05
+ generating_thread.complete
+ sleep 0.05
+
+ generating_thread.terminate
+ waiting_thread.terminate
+
+ expect(generating_thread.result).to eq([1, 2, 3])
+ expect(waiting_thread.result).to eq([1, 2, 3])
+ end
+ end
+ end
+
+
+ # Avert your eyes:
+ # this class allows atomic client interaction to happen asynchronously so that
+ # the waiting behavior of the client can be tested simultaneous to controlling how
+ # long the 'generate' behavior takes
+ #
+ # It works by accepting an incoming 'message' which it places onto one of two queues
+ class ClientThread
+ attr_reader :result
+
+ # idea: maybe make the return value set when the thread is initialized
+ def initialize(client, keyspace)
+ @keyspace = keyspace
+ @client = client
+ @msg_queue = Queue.new
+ @generate_queue = Queue.new
+ @result = nil
+ end
+
+ def start
+ @thread = Thread.new(&method(:run))
+ end
+
+ def fetch
+ @msg_queue << :fetch
+ end
+
+ def generate
+ @msg_queue << :generate
+ end
+
+ def complete
+ @generate_queue << :complete
+ end
+
+ def terminate
+ @msg_queue << :terminate
+ end
+
+ private
+
+ def run
+ loop do
+ msg = @msg_queue.pop
+ sleep 0.001; next unless msg
+
+ case msg
+ when :terminate
+ Thread.stop
+ when :generate
+ do_generate
+ when :fetch
+ @result = @client.fetch(@keyspace)
+ end
+ end
+ end
+
+ def do_generate
+ @client.fetch(@keyspace) do
+ loop do
+ msg = @generate_queue.pop
+ sleep 0.001; next unless msg
+ break if msg == :complete
+ end
+ @result = [1, 2, 3] # generated value
+ @result
+ end
+ end
+ end
data/spec/atomic_cache/storage/dalli_spec.rb CHANGED
@@ -37,7 +37,7 @@ describe 'Dalli' do
 
  context '#add' do
  before(:each) do
- allow(dalli_client).to receive(:add).and_return('NOT_STORED\r\n')
+ allow(dalli_client).to receive(:add).and_return(false)
  end
 
  it 'delegates to #add with the raw option set' do
@@ -47,22 +47,17 @@ describe 'Dalli' do
  end
 
  it 'returns true when the add is successful' do
- expect(dalli_client).to receive(:add).and_return('STORED\r\n')
+ expect(dalli_client).to receive(:add).and_return(12339031748204560384)
  result = subject.add('key', 'value', 100)
  expect(result).to eq(true)
  end
 
  it 'returns false if the key already exists' do
- expect(dalli_client).to receive(:add).and_return('EXISTS\r\n')
+ expect(dalli_client).to receive(:add).and_return(false)
  result = subject.add('key', 'value', 100)
  expect(result).to eq(false)
  end
 
- it 'returns false if the add fails' do
- expect(dalli_client).to receive(:add).and_return('NOT_STORED\r\n')
- result = subject.add('key', 'value', 100)
- expect(result).to eq(false)
- end
  end
 
  end
metadata CHANGED
@@ -1,7 +1,7 @@
  --- !ruby/object:Gem::Specification
  name: atomic_cache
  version: !ruby/object:Gem::Version
- version: 0.2.2.rc1
+ version: 0.4.0.rc1
  platform: ruby
  authors:
  - Ibotta Developers
@@ -9,7 +9,7 @@ authors:
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2021-06-23 00:00:00.000000000 Z
+ date: 2021-07-07 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
  name: bundler
@@ -171,7 +171,8 @@ dependencies:
  - - "~>"
  - !ruby/object:Gem::Version
  version: '0.1'
- description: desc
+ description: A gem which prevents the thundering herd problem through a distributed
+ lock
  email: osscompliance@ibotta.com
  executables: []
  extensions: []
@@ -184,7 +185,6 @@ files:
  - docs/MODEL_SETUP.md
  - docs/PROJECT_SETUP.md
  - docs/USAGE.md
- - docs/img/quick_retry_graph.png
  - lib/atomic_cache.rb
  - lib/atomic_cache/atomic_cache_client.rb
  - lib/atomic_cache/concerns/global_lmt_cache_concern.rb
@@ -200,6 +200,7 @@ files:
  - spec/atomic_cache/atomic_cache_client_spec.rb
  - spec/atomic_cache/concerns/global_lmt_cache_concern_spec.rb
  - spec/atomic_cache/default_config_spec.rb
+ - spec/atomic_cache/integration/integration_spec.rb
  - spec/atomic_cache/key/keyspace_spec.rb
  - spec/atomic_cache/key/last_mod_time_key_manager_spec.rb
  - spec/atomic_cache/storage/dalli_spec.rb
@@ -229,5 +230,9 @@ requirements: []
  rubygems_version: 3.0.8
  signing_key:
  specification_version: 4
- summary: summary
+ summary: In a nutshell:* The key of every cached value includes a timestamp* Once
+ a cache key is written to, it is never written over* When a newer version of a cached
+ value is available, it is written to a new key* When a new value is being generated
+ for a new key only 1 process is allowed to do so at a time* While the new value
+ is being generated, other processes read one key older than most recent
  test_files: []
data/docs/img/quick_retry_graph.png DELETED
Binary file