atomic_cache 0.2.3.rc1 → 0.4.1.rc1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: da4480539afcbe1c7c15e34dfbd77c7da40761853fc270bab1a0ad6d778b253e
-  data.tar.gz: c0deb89b6a828430252b6ae85d1727aa5282837b355013f661be6f100406125b
+  metadata.gz: db2b157ed872e6cb90af8cf2ef61a97f237485bc763998464eb4faf488d26c11
+  data.tar.gz: 48e13e212479454f2ece192c81c8f755a4aa039a2cf5930cf1cdfea2eaae438d
 SHA512:
-  metadata.gz: 1aacd6f8ac2f420ac3b87e3d3031533f0cd3268cd6cb567a22e990c860274a6640800c4cfd4a88481b90af5c40ef60101d699a2a2df348fbe60e8e4639dd6901
-  data.tar.gz: c405dbd590e3f3f095a25ca1189687548cdf09380bba0b390060560b555730eade8528f897665d99d50a2ac0783ca00bd96e4029d852fbd79ffe14cd38daee0c
+  metadata.gz: 5fb9aba2eaeb20e9e3206c80039941b005c7f6e82c05bfddd59482783b028706f706a6843a8ea8261d18895732035d1deeabda178e5437620f33ca9abf88b54e
+  data.tar.gz: 06511e8a0ec9b33de005994f6938877f85b053c03072f52b6cc0dbd22a99389dfb36245f7002d59ea364d1ea583eec0b440c6bd2a5e1b84604434474275d6fe2
data/README.md CHANGED
@@ -23,7 +23,7 @@ In a nutshell:
 class Foo < ActiveRecord::Base
   include AtomicCache::GlobalLMTCacheConcern
 
-  cache_class(:custom_foo) # optional
+  force_cache_class(:custom_foo) # optional
   cache_version(5) # optional
 
   def active_foos(ids)
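The hunk above renames the `cache_class` macro to `force_cache_class`. As a rough standalone sketch of what the macro means conceptually — overriding the cache identifier that otherwise defaults to the class name — the following uses illustrative names only, not the gem's implementation:

```ruby
# Hypothetical stand-in for the renamed macro: a class-level setting that
# overrides the cache identifier, which otherwise defaults to self.to_s.
module CacheIdentifier
  def force_cache_class(id)
    @atomic_cache_class = id.to_s
  end

  def cache_identifier
    @atomic_cache_class || to_s
  end
end

class CustomFoo
  extend CacheIdentifier
  force_cache_class(:custom_foo)
end

class PlainBar
  extend CacheIdentifier
end

CustomFoo.cache_identifier # => "custom_foo" (overridden)
PlainBar.cache_identifier  # => "PlainBar"   (default: class name)
```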
data/docs/MODEL_SETUP.md CHANGED
@@ -7,13 +7,13 @@ class Foo < ActiveRecord::Base
 end
 ```
 
-### cache_class
+### force_cache_class
 By default the cache identifier for a class is set to the name of a class (ie. `self.to_s`). In some cases it makes sense to set a custom value for the cache identifier. In cases where a custom cache identifier is set, it's important that the identifier remain unique across the project.
 
 ```ruby
 class SuperDescriptiveDomainModelAbstractFactoryImplManager < ActiveRecord::Base
   include AtomicCache::GlobalLMTCacheConcern
-  cache_class('sddmafim')
+  force_cache_class('sddmafim')
 end
 ```
 
@@ -23,9 +23,10 @@ AtomicCache::DefaultConfig.configure do |config|
   config.metrics = Datadog::Statsd.new('localhost', 8125, namespace: 'cache.atomic')
 
   # note: these values can also be set in an env file for env-specific settings
-  config.namespace = 'atom'
-  config.cache_storage = AtomicCache::Storage::SharedMemory.new
-  config.key_storage = AtomicCache::Storage::SharedMemory.new
+  config.namespace = 'atom'
+  config.default_options = { generate_ttl_ms: 500 }
+  config.cache_storage = AtomicCache::Storage::SharedMemory.new
+  config.key_storage = AtomicCache::Storage::SharedMemory.new
 end
 ```
 
@@ -36,7 +37,7 @@ Note that `Datadog::Statsd` is not _required_. Adding it, however, will enable
 * `key_storage` - Storage adapter for key manager (see below)
 
 #### Optional
-* `default_options` - Default options for every fetch call. See [fetch options](/Ibotta/atomic_cache/blob/main/docs/USAGE.md#fetch).
+* `default_options` - Override default options for every fetch call, unless specified at call site. See [fetch options](/Ibotta/atomic_cache/blob/main/docs/USAGE.md#fetch).
 * `logger` - Logger instance. Used for debug and warn logs. Defaults to nil.
 * `timestamp_formatter` - Proc to format last modified time for storage. Defaults to timestamp (`Time.to_i`)
 * `metrics` - Metrics instance. Defaults to nil.
@@ -45,6 +46,49 @@ Note that `Datadog::Statsd` is not _required_. Adding it, however, will enable
 #### ★ Best Practice ★
 Keep the global namespace short. For example, memcached has a limit of 250 characters for key length.
 
+#### More Complex Rails Configuration
+
+In any real-world project, the need to run multiple caching strategies or setups is likely to arise. In those cases, it's often advantageous
+to keep a DRY setup, with multiple caching clients sharing the same config. Because Rails initializers run after the environment-specific
+config files, a sane way to manage this is to keep client network settings in the config files, then reference them from the initializer.
+
+```ruby
+# config/environments/staging
+config.memcache_hosts = [ "staging.host.cache.amazonaws.com" ]
+config.cache_store_options = {
+  expires_in: 15.minutes,
+  compress: true,
+  # ...
+}
+
+# config/environments/production
+config.memcache_hosts = [ "prod1.host.cache.amazonaws.com", "prod2.host.cache.amazonaws.com" ]
+config.cache_store_options = {
+  expires_in: 1.hour,
+  compress: true,
+  # ...
+}
+
+# config/initializers/cache.rb
+AtomicCache::DefaultConfig.configure do |config|
+  if Rails.env.development? || Rails.env.test?
+    config.cache_storage = AtomicCache::Storage::SharedMemory.new
+    config.key_storage = AtomicCache::Storage::SharedMemory.new
+
+  elsif Rails.env.staging? || Rails.env.production?
+    # Your::Application.config will be loaded by config/environments/*
+    memcache_hosts = Your::Application.config.memcache_hosts
+    options = Your::Application.config.cache_store_options
+
+    dc = Dalli::Client.new(memcache_hosts, options)
+    config.cache_storage = AtomicCache::Storage::Dalli.new(dc)
+    config.key_storage = AtomicCache::Storage::Dalli.new(dc)
+  end
+
+  # other AtomicCache configuration...
+end
+```
+
 ## Storage Adapters
 
 ### InstanceMemory & SharedMemory
data/docs/USAGE.md CHANGED
@@ -38,17 +38,6 @@ The ideal `generate_ttl_ms` time is just slightly longer than the average genera
 
 If metrics are enabled, the `<namespace>.generate.run` can be used to determine the min/max/average generate time for a particular cache and the `generate_ttl_ms` tuned using that.
 
-#### `quick_retry_ms`
-_`false` to disable. Defaults to false._
-
-In the case where another process is computing the new cache value, before falling back to the last known value, if `quick_retry_ms` has a value the atomic client will check the new cache once after the given duration (in milliseconds).
-
-The danger with `quick_retry_ms` is that when enabled it applies a delay to all fall-through requests at the cost of only benefitting some customers. As the average generate block duration increases, the effectiveness of `quick_retry_ms` decreases because there is less of a likelihood that a customer will get a fresh value. Consider the graph below. For example, a cache with an average generate duration of 200ms, configured with a `quick_retry_ms` of 50ms (red) will only likely get a fresh value for 25% of customers.
-
-`quick_retry_ms` is most effective for caches that are quick to generate but whose values are slow to change. `quick_retry_ms` is least effective for caches that are slow to update but quick to change.
-
-![quick_retry_ms graph](https://github.com/Ibotta/atomic_cache/raw/ca473f28e179da8c24f638eeeeb48750bc8cbe64/docs/img/quick_retry_graph.png)
-
 #### `max_retries` & `backoff_duration_ms`
 _`max_retries` defaults to 5._
 _`backoff_duration_ms` defaults to 50ms._
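With `quick_retry_ms` removed, the waiting path is governed by `max_retries` and `backoff_duration_ms` alone. Assuming the linear backoff that the client code elsewhere in this diff suggests (`sleep((backoff_duration_ms.to_f / 1000) * attempt)`), the worst-case total wait can be sketched as:

```ruby
# Worst-case total sleep for the waiting path, assuming a linear backoff of
# backoff_duration_ms * attempt per attempt, with attempt counting from 0.
# This is a back-of-the-envelope sketch, not the gem's code.
def worst_case_wait_ms(max_retries: 5, backoff_duration_ms: 50)
  (0...max_retries).sum { |attempt| backoff_duration_ms * attempt }
end

worst_case_wait_ms # => 500, i.e. 0 + 50 + 100 + 150 + 200 ms with the defaults
```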
data/lib/atomic_cache/atomic_cache_client.rb CHANGED
@@ -6,7 +6,6 @@ require 'active_support/core_ext/hash'
 module AtomicCache
   class AtomicCacheClient
 
-    DEFAULT_QUICK_RETRY_MS = false
     DEFAULT_MAX_RETRIES = 5
     DEFAULT_GENERATE_TIME_MS = 30000 # 30 seconds
     BACKOFF_DURATION_MS = 50
@@ -27,13 +26,11 @@ module AtomicCache
       raise ArgumentError.new("`storage` required but none given") unless @storage.present?
     end
 
-
     # Attempts to fetch the given keyspace, using an optional block to generate
     # a new value when the cache is expired
     #
     # @param keyspace [AtomicCache::Keyspace] the keyspace to fetch
     # @option options [Numeric] :generate_ttl_ms (30000) Max generate duration in ms
-    # @option options [Numeric] :quick_retry_ms (false) Short duration to check back before using last known value
     # @option options [Numeric] :max_retries (5) Max times to retry in waiting case
     # @option options [Numeric] :backoff_duration_ms (50) Duration in ms to wait between retries
     # @yield Generates a new value when cache is expired
@@ -45,6 +42,7 @@ module AtomicCache
       value = @storage.read(key, options) if key.present?
       if !value.nil?
         metrics(:increment, 'read.present', tags: tags)
+        log(:debug, "Read value from key: '#{key}'")
         return value
       end
 
@@ -57,15 +55,14 @@ module AtomicCache
         return new_value unless new_value.nil?
       end
 
-      # quick check back to see if the other process has finished
-      # or fall back to the last known value
-      value = quick_retry(keyspace, options, tags) || last_known_value(keyspace, options, tags)
+      # attempt to fall back to the last known value
+      value = last_known_value(keyspace, options, tags)
       return value if value.present?
 
       # wait for the other process if a last known value isn't there
       if key.present?
         return time('wait.run', tags: tags) do
-          wait_for_new_value(key, options, tags)
+          wait_for_new_value(keyspace, options, tags)
         end
       end
 
@@ -109,24 +106,6 @@ module AtomicCache
       nil
     end
 
-    def quick_retry(keyspace, options, tags)
-      key = @timestamp_manager.current_key(keyspace)
-      duration = option(:quick_retry_ms, options, DEFAULT_QUICK_RETRY_MS)
-
-      if duration.present? and key.present?
-        sleep(duration.to_f / 1000)
-        value = @storage.read(key, options)
-
-        if !value.nil?
-          metrics(:increment, 'empty-cache-retry.present', tags: tags)
-          return value
-        end
-        metrics(:increment, 'empty-cache-retry.not-present', tags: tags)
-      end
-
-      nil
-    end
-
     def last_known_value(keyspace, options, tags)
       lkk = @timestamp_manager.last_known_key(keyspace)
@@ -136,6 +115,7 @@ module AtomicCache
       # last known key may have expired
       if !lkv.nil?
         metrics(:increment, 'last-known-value.present', tags: tags)
+        log(:debug, "Read value from last known value key: '#{lkk}'")
         return lkv
       end
 
@@ -149,7 +129,7 @@
       nil
     end
 
-    def wait_for_new_value(key, options, tags)
+    def wait_for_new_value(keyspace, options, tags)
       max_retries = option(:max_retries, options, DEFAULT_MAX_RETRIES)
       max_retries.times do |attempt|
         metrics_tags = tags.clone.push("attempt:#{attempt}")
@@ -160,6 +140,8 @@
         backoff_duration_ms = option(:backoff_duration_ms, options, backoff_duration_ms)
         sleep((backoff_duration_ms.to_f / 1000) * attempt)
 
+        # re-fetch the key each time, to make sure we're actually getting the latest key with the correct LMT
+        key = @timestamp_manager.current_key(keyspace)
         value = @storage.read(key, options)
         if !value.nil?
           metrics(:increment, 'wait.present', tags: metrics_tags)
@@ -168,7 +150,7 @@
       end
 
       metrics(:increment, 'wait.give-up')
-      log(:warn, "Giving up fetching cache key `#{key}`. Exceeded max retries (#{max_retries}).")
+      log(:warn, "Giving up waiting. Exceeded max retries (#{max_retries}).")
       nil
     end
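The change above moves the `current_key` lookup inside the retry loop. A minimal standalone sketch (hypothetical names, with the real client's sleeping and metrics omitted) of why re-fetching the key on every attempt matters:

```ruby
# Sketch of the wait loop's fix: the storage key is re-read on each attempt,
# so a generating process that promotes a new last-modified-time mid-wait
# (changing the current key) is still observed by the waiter.
def wait_for_new_value(current_key, storage, max_retries: 5)
  max_retries.times do
    key = current_key.call          # re-fetched each attempt (the fix)
    value = storage[key]
    return value unless value.nil?
  end
  nil                               # give up after max_retries
end

storage = { "ns:200" => "fresh" }   # value exists only under the newer key
keys = ["ns:100", "ns:100", "ns:200"]
attempts = -1
current_key = -> { attempts += 1; keys[attempts] }

result = wait_for_new_value(current_key, storage)
result # => "fresh", found on the third attempt once the key changes
```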
 
data/lib/atomic_cache/concerns/global_lmt_cache_concern.rb CHANGED
@@ -26,7 +26,7 @@
       end
     end
 
-    def cache_class(kls)
+    def force_cache_class(kls)
       ATOMIC_CACHE_CONCERN_MUTEX.synchronize do
         @atomic_cache_class = kls
       end
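The renamed `force_cache_class` writes class-level state under a shared mutex, as the hunk above shows. A standalone sketch of that pattern, with illustrative names rather than the gem's:

```ruby
# Sketch of mutex-guarded class-level configuration: concurrent class
# definitions calling the macro don't race on the shared state.
SETTINGS_MUTEX = Mutex.new

class CacheSettings
  class << self
    def force_cache_class(kls)
      SETTINGS_MUTEX.synchronize { @atomic_cache_class = kls }
    end

    def atomic_cache_class
      SETTINGS_MUTEX.synchronize { @atomic_cache_class }
    end
  end
end

CacheSettings.force_cache_class('sddmafim')
CacheSettings.atomic_cache_class # => "sddmafim"
```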
@@ -16,7 +16,7 @@ module AtomicCache
 
     def add(raw_key, new_value, ttl, user_options={})
       store_op(raw_key, user_options) do |key, options|
-        return false if store.has_key?(key)
+        return false if store.has_key?(key) && !ttl_expired?(store[key])
         write(key, new_value, ttl, user_options)
       end
     end
@@ -29,8 +29,7 @@
       unmarshaled = unmarshal(entry[:value], user_options)
       return unmarshaled if entry[:ttl].nil? or entry[:ttl] == false
 
-      life = Time.now - entry[:written_at]
-      if (life >= entry[:ttl])
+      if ttl_expired?(entry)
         store.delete(key)
         nil
       else
@@ -54,6 +53,12 @@
 
     protected
 
+    def ttl_expired?(entry)
+      return false unless entry
+      life = Time.now - entry[:written_at]
+      life >= entry[:ttl]
+    end
+
     def write(key, value, ttl=nil, user_options)
       store[key] = {
         value: marshal(value, user_options),
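The new `ttl_expired?` helper centralizes the age check now shared by `add` and `read`. A standalone sketch of its semantics (a `now:` parameter is added here for testability; the helper in the diff reads `Time.now` directly):

```ruby
# Sketch of ttl_expired?: an entry records when it was written and its ttl;
# once its age (in the same units as ttl) reaches the ttl, add is allowed
# to overwrite it and read treats it as gone. A missing entry never expires.
def ttl_expired?(entry, now: Time.now)
  return false unless entry
  (now - entry[:written_at]) >= entry[:ttl]
end

t0 = Time.at(1_000_000)
entry = { value: "v", ttl: 60, written_at: t0 }

ttl_expired?(entry, now: t0 + 30)  # => false (still live)
ttl_expired?(entry, now: t0 + 60)  # => true  (age == ttl)
ttl_expired?(nil)                  # => false (no entry yet)
```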
@@ -1,5 +1,5 @@
 # frozen_string_literal: true
 
 module AtomicCache
-  VERSION = "0.2.3.rc1"
+  VERSION = "0.4.1.rc1"
 end
data/spec/atomic_cache/atomic_cache_client_spec.rb CHANGED
@@ -138,17 +138,6 @@ describe 'AtomicCacheClient' do
       timestamp_manager.lock(keyspace, 100)
     end
 
-    it 'waits for a short duration to see if the other thread generated the value' do
-      timestamp_manager.promote(keyspace, last_known_key: 'lkk', timestamp: 1420090000)
-      key_storage.set('lkk', 'old:value')
-      new_value = 'value from another thread'
-      allow(cache_storage).to receive(:read)
-        .with(timestamp_manager.current_key(keyspace), anything)
-        .and_return(nil, new_value)
-
-      expect(subject.fetch(keyspace, quick_retry_ms: 5) { 'value' }).to eq(new_value)
-    end
-
     context 'when the last known value is present' do
       it 'returns the last known value' do
         timestamp_manager.promote(keyspace, last_known_key: 'lkk', timestamp: 1420090000)
@@ -191,17 +180,6 @@ describe 'AtomicCacheClient' do
     end
 
     context 'and when a block is NOT given' do
-      it 'waits for a short duration to see if the other thread generated the value' do
-        timestamp_manager.promote(keyspace, last_known_key: 'asdf', timestamp: 1420090000)
-        new_value = 'value from another thread'
-        allow(cache_storage).to receive(:read)
-          .with(timestamp_manager.current_key(keyspace), anything)
-          .and_return(nil, new_value)
-
-        result = subject.fetch(keyspace, quick_retry_ms: 50)
-        expect(result).to eq(new_value)
-      end
-
       it 'returns nil if nothing is present' do
         expect(subject.fetch(keyspace)).to eq(nil)
       end
data/spec/atomic_cache/concerns/global_lmt_cache_concern_spec.rb CHANGED
@@ -104,12 +104,12 @@ describe 'AtomicCacheConcern' do
       class Foo2
         include AtomicCache::GlobalLMTCacheConcern
         cache_version(3)
-        cache_class('foo')
+        force_cache_class('foo')
       end
       Foo2
     end
 
-    it 'uses the given version and cache_class become part of the cache keyspace' do
+    it 'uses the given version and force_cache_class become part of the cache keyspace' do
       subject.expire_cache
       expect(key_storage.store).to have_key(:'foo:v3:lmt')
     end
@@ -0,0 +1,137 @@
+# frozen_string_literal: true
+
+require 'spec_helper'
+
+describe 'Integration -' do
+  let(:key_storage) { AtomicCache::Storage::SharedMemory.new }
+  let(:cache_storage) { AtomicCache::Storage::SharedMemory.new }
+  let(:keyspace) { AtomicCache::Keyspace.new(namespace: 'int.waiting') }
+  let(:timestamp_manager) { AtomicCache::LastModTimeKeyManager.new(keyspace: keyspace, storage: key_storage) }
+
+  before(:each) do
+    key_storage.reset
+    cache_storage.reset
+  end
+
+  describe 'fallback:' do
+    let(:generating_client) { AtomicCache::AtomicCacheClient.new(storage: cache_storage, timestamp_manager: timestamp_manager) }
+    let(:fallback_client) { AtomicCache::AtomicCacheClient.new(storage: cache_storage, timestamp_manager: timestamp_manager) }
+
+    it 'falls back to the old value when a lock is present' do
+      old_time = Time.local(2021, 1, 1, 15, 30, 0)
+      new_time = Time.local(2021, 1, 1, 16, 30, 0)
+
+      # prime cache with an old value
+
+      Timecop.freeze(old_time) do
+        generating_client.fetch(keyspace) { "old value" }
+      end
+      timestamp_manager.last_modified_time = new_time
+
+      # start generating process for new time
+      generating_thread = ClientThread.new(generating_client, keyspace)
+      generating_thread.start
+      sleep 0.05
+
+      value = fallback_client.fetch(keyspace)
+      generating_thread.terminate
+
+      expect(value).to eq("old value")
+    end
+  end
+
+  describe 'waiting:' do
+    let(:generating_client) { AtomicCache::AtomicCacheClient.new(storage: cache_storage, timestamp_manager: timestamp_manager) }
+    let(:waiting_client) { AtomicCache::AtomicCacheClient.new(storage: cache_storage, timestamp_manager: timestamp_manager) }
+
+    it 'waits for a key when no last known value is available' do
+      generating_thread = ClientThread.new(generating_client, keyspace)
+      generating_thread.start
+      waiting_thread = ClientThread.new(waiting_client, keyspace)
+      waiting_thread.start
+
+      generating_thread.generate
+      sleep 0.05
+      waiting_thread.fetch
+      sleep 0.05
+      generating_thread.complete
+      sleep 0.05
+
+      generating_thread.terminate
+      waiting_thread.terminate
+
+      expect(generating_thread.result).to eq([1, 2, 3])
+      expect(waiting_thread.result).to eq([1, 2, 3])
+    end
+  end
+end
+
+
+# Avert your eyes:
+# this class allows atomic client interaction to happen asynchronously so that
+# the waiting behavior of the client can be tested simultaneous to controlling how
+# long the 'generate' behavior takes
+#
+# It works by accepting an incoming 'message' which it places onto one of two queues
+class ClientThread
+  attr_reader :result

+  # idea: maybe make the return value set when the thread is initialized
+  def initialize(client, keyspace)
+    @keyspace = keyspace
+    @client = client
+    @msg_queue = Queue.new
+    @generate_queue = Queue.new
+    @result = nil
+  end
+
+  def start
+    @thread = Thread.new(&method(:run))
+  end
+
+  def fetch
+    @msg_queue << :fetch
+  end
+
+  def generate
+    @msg_queue << :generate
+  end
+
+  def complete
+    @generate_queue << :complete
+  end
+
+  def terminate
+    @msg_queue << :terminate
+  end
+
+  private
+
+  def run
+    loop do
+      msg = @msg_queue.pop
+      sleep 0.001; next unless msg
+
+      case msg
+      when :terminate
+        Thread.stop
+      when :generate
+        do_generate
+      when :fetch
+        @result = @client.fetch(@keyspace)
+      end
+    end
+  end
+
+  def do_generate
+    @client.fetch(@keyspace) do
+      loop do
+        msg = @generate_queue.pop
+        sleep 0.001; next unless msg
+        break if msg == :complete
+      end
+      @result = [1, 2, 3] # generated value
+      @result
+    end
+  end
+end
@@ -17,17 +17,34 @@ shared_examples 'memory storage' do
     expect(result).to eq(true)
   end
 
-  it 'does not write the key if it exists' do
-    entry = { value: Marshal.dump('foo'), ttl: 100, written_at: 100 }
+  # SharedMemory.new.add("foo", ttl: 100)
+
+  it 'does not write the key if it exists but expiration time is NOT up' do
+    entry = { value: Marshal.dump('foo'), ttl: 5000, written_at: Time.local(2021, 1, 1, 12, 0, 0) }
     subject.store[:key] = entry
 
-    result = subject.add('key', 'value', 200)
-    expect(result).to eq(false)
+    Timecop.freeze(Time.local(2021, 1, 1, 12, 0, 1)) do
+      result = subject.add('key', 'value', 5000)
+      expect(result).to eq(false)
+    end
 
     # stored values should not have changed
     expect(subject.store).to have_key(:key)
     expect(Marshal.load(subject.store[:key][:value])).to eq('foo')
-    expect(subject.store[:key][:ttl]).to eq(100)
+  end
+
+  it 'does write the key if it exists and expiration time IS up' do
+    entry = { value: Marshal.dump('foo'), ttl: 50, written_at: Time.local(2021, 1, 1, 12, 0, 0) }
+    subject.store[:key] = entry
+
+    Timecop.freeze(Time.local(2021, 1, 1, 12, 30, 0)) do
+      result = subject.add('key', 'value', 50)
+      expect(result).to eq(true)
+    end
+
+    # stored value should now be the newly written value
+    expect(subject.store).to have_key(:key)
+    expect(Marshal.load(subject.store[:key][:value])).to eq('value')
   end
 end
 
@@ -3,7 +3,7 @@
 require 'spec_helper'
 require_relative 'memory_spec'
 
-describe 'InstanceMemory' do
+describe 'SharedMemory' do
   subject { AtomicCache::Storage::SharedMemory.new }
   it_behaves_like 'memory storage'
 end
metadata CHANGED
@@ -1,7 +1,7 @@
 --- !ruby/object:Gem::Specification
 name: atomic_cache
 version: !ruby/object:Gem::Version
-  version: 0.2.3.rc1
+  version: 0.4.1.rc1
 platform: ruby
 authors:
 - Ibotta Developers
@@ -9,7 +9,7 @@ authors:
 autorequire:
 bindir: bin
 cert_chain: []
-date: 2021-06-23 00:00:00.000000000 Z
+date: 2021-07-07 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: bundler
@@ -171,7 +171,8 @@ dependencies:
   - - "~>"
     - !ruby/object:Gem::Version
       version: '0.1'
-description: desc
+description: A gem which prevents the thundering herd problem through a distributed
+  lock
 email: osscompliance@ibotta.com
 executables: []
 extensions: []
@@ -184,7 +185,6 @@ files:
 - docs/MODEL_SETUP.md
 - docs/PROJECT_SETUP.md
 - docs/USAGE.md
-- docs/img/quick_retry_graph.png
 - lib/atomic_cache.rb
 - lib/atomic_cache/atomic_cache_client.rb
 - lib/atomic_cache/concerns/global_lmt_cache_concern.rb
@@ -200,6 +200,7 @@ files:
 - spec/atomic_cache/atomic_cache_client_spec.rb
 - spec/atomic_cache/concerns/global_lmt_cache_concern_spec.rb
 - spec/atomic_cache/default_config_spec.rb
+- spec/atomic_cache/integration/integration_spec.rb
 - spec/atomic_cache/key/keyspace_spec.rb
 - spec/atomic_cache/key/last_mod_time_key_manager_spec.rb
 - spec/atomic_cache/storage/dalli_spec.rb
@@ -229,5 +230,9 @@ requirements: []
 rubygems_version: 3.0.8
 signing_key:
 specification_version: 4
-summary: summary
+summary: In a nutshell:* The key of every cached value includes a timestamp* Once
+  a cache key is written to, it is never written over* When a newer version of a cached
+  value is available, it is written to a new key* When a new value is being generated
+  for a new key only 1 process is allowed to do so at a time* While the new value
+  is being generated, other processes read one key older than most recent
 test_files: []