atomic_cache 0.2.4.rc1 → 0.5.0.rc1

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: 372963cc5a7e83a92d2dd2ca4336cac86f9664efc0a8cabdffcf7dfb9ba86b17
-  data.tar.gz: 2db0bb7c54a07280c70349ce6527836d80898e671c0b49eb427f86340094ed6f
+  metadata.gz: 7362eab306c19b619c6eb4bd30501b2b18194cd9e870a344e45b39c6ff18b4ae
+  data.tar.gz: cce7666dec7a66bca3717cb7278bcdb9d7b81aaffc8c68fb020e15e02820f921
 SHA512:
-  metadata.gz: 32a9f8189dcdbb5793dc785d0521fd39c42743d017e8763d0ac20cdf030fea10bbe28d7365cdaf2386c3b0cc121db954b73f053f691d8450ca170fdd9eae9f19
-  data.tar.gz: f17e1bd01c14c07d43be161edb08f0601e85c526b4c2354ae2bd06e26ae5441f7437900e5af6afe8d83a618bbe434a231a77810201865ee6eb237a06239911fc
+  metadata.gz: b418f307870a20aa99b326644c14066d3baba30287138ce2a4b533582d788231058c0f3f66737fcf19225cdadbcf5dbb78653d7969ade7eb7555a7c3f849b4c2
+  data.tar.gz: c9f48abddc3b0f9d623db83bd493df4fd555ae468710251dd13db6ed684399052b52a48ffec516f96ae17a1ccc747416688b2234f514ef8278bba0fd4ed550d2
data/README.md CHANGED
@@ -23,7 +23,7 @@ In a nutshell:
 class Foo < ActiveRecord::Base
   include AtomicCache::GlobalLMTCacheConcern
 
-  cache_class(:custom_foo) # optional
+  force_cache_class(:custom_foo) # optional
   cache_version(5) # optional
 
   def active_foos(ids)
data/docs/MODEL_SETUP.md CHANGED
@@ -7,13 +7,13 @@ class Foo < ActiveRecord::Base
 end
 ```
 
-### cache_class
+### force_cache_class
 By default the cache identifier for a class is set to the name of a class (ie. `self.to_s`). In some cases it makes sense to set a custom value for the cache identifier. In cases where a custom cache identifier is set, it's important that the identifier remain unique across the project.
 
 ```ruby
 class SuperDescriptiveDomainModelAbstractFactoryImplManager < ActiveRecord::Base
   include AtomicCache::GlobalLMTCacheConcern
-  cache_class('sddmafim')
+  force_cache_class('sddmafim')
 end
 ```
 
@@ -23,9 +23,10 @@ AtomicCache::DefaultConfig.configure do |config|
   config.metrics = Datadog::Statsd.new('localhost', 8125, namespace: 'cache.atomic')
 
   # note: these values can also be set in an env file for env-specific settings
-  config.namespace = 'atom'
-  config.cache_storage = AtomicCache::Storage::SharedMemory.new
-  config.key_storage = AtomicCache::Storage::SharedMemory.new
+  config.namespace = 'atom'
+  config.default_options = { generate_ttl_ms: 500 }
+  config.cache_storage = AtomicCache::Storage::SharedMemory.new
+  config.key_storage = AtomicCache::Storage::SharedMemory.new
 end
 ```
 
@@ -36,7 +37,7 @@ Note that `Datadog::Statsd` is not _required_. Adding it, however, will enable
 * `key_storage` - Storage adapter for key manager (see below)
 
 #### Optional
-* `default_options` - Default options for every fetch call. See [fetch options](/Ibotta/atomic_cache/blob/main/docs/USAGE.md#fetch).
+* `default_options` - Override default options for every fetch call, unless specified at call site. See [fetch options](/Ibotta/atomic_cache/blob/main/docs/USAGE.md#fetch).
 * `logger` - Logger instance. Used for debug and warn logs. Defaults to nil.
 * `timestamp_formatter` - Proc to format last modified time for storage. Defaults to timestamp (`Time.to_i`)
 * `metrics` - Metrics instance. Defaults to nil.
@@ -45,6 +46,49 @@ Note that `Datadog::Statsd` is not _required_. Adding it, however, will enable
 #### ★ Best Practice ★
 Keep the global namespace short. For example, memcached has a limit of 250 characters for key length.
 
+#### More Complex Rails Configuration
+
+In any real-world project, the need to run multiple caching strategies or setups is likely to arise. In those cases, it's often advantageous
+to keep a DRY setup, with multiple caching clients sharing the same config. Because Rails initializers run after the environment-specific
+config files, a sane way to manage this is to keep client network settings in the config files, then reference them from the initializer.
+
+```ruby
+# config/environments/staging
+config.memcache_hosts = [ "staging.host.cache.amazonaws.com" ]
+config.cache_store_options = {
+  expires_in: 15.minutes,
+  compress: true,
+  # ...
+}
+
+# config/environments/production
+config.memcache_hosts = [ "prod1.host.cache.amazonaws.com", "prod2.host.cache.amazonaws.com" ]
+config.cache_store_options = {
+  expires_in: 1.hour,
+  compress: true,
+  # ...
+}
+
+# config/initializers/cache.rb
+AtomicCache::DefaultConfig.configure do |config|
+  if Rails.env.development? || Rails.env.test?
+    config.cache_storage = AtomicCache::Storage::SharedMemory.new
+    config.key_storage = AtomicCache::Storage::SharedMemory.new
+
+  elsif Rails.env.staging? || Rails.env.production?
+    # Your::Application.config will be loaded by config/environments/*
+    memcache_hosts = Your::Application.config.memcache_hosts
+    options = Your::Application.config.cache_store_options
+
+    dc = Dalli::Client.new(memcache_hosts, options)
+    config.cache_storage = AtomicCache::Storage::Dalli.new(dc)
+    config.key_storage = AtomicCache::Storage::Dalli.new(dc)
+  end
+
+  # other AtomicCache configuration...
+end
+```
+
 ## Storage Adapters
 
 ### InstanceMemory & SharedMemory
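The precedence described above for `default_options` — a value passed at the fetch call site wins over `config.default_options`, which wins over the library's built-in default — can be sketched standalone. `resolve_option` and the constants below are illustrative only, not the gem's internals:

```ruby
# Illustrative option-precedence lookup: call site > default_options > built-in.
# (Hypothetical helper; not atomic_cache's actual implementation.)
BUILT_IN_DEFAULTS = { generate_ttl_ms: 30_000, max_retries: 5 }

def resolve_option(name, call_site_options, default_options)
  call_site_options.fetch(name) do
    default_options.fetch(name) { BUILT_IN_DEFAULTS[name] }
  end
end

default_options = { generate_ttl_ms: 500 }  # as in the config example above

resolve_option(:generate_ttl_ms, {}, default_options)                        # => 500
resolve_option(:generate_ttl_ms, { generate_ttl_ms: 100 }, default_options)  # => 100
resolve_option(:max_retries, {}, default_options)                            # => 5
```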
data/docs/USAGE.md CHANGED
@@ -38,17 +38,6 @@ The ideal `generate_ttl_ms` time is just slightly longer than the average genera
 
 If metrics are enabled, the `<namespace>.generate.run` can be used to determine the min/max/average generate time for a particular cache and the `generate_ttl_ms` tuned using that.
 
-#### `quick_retry_ms`
-_`false` to disable. Defaults to false._
-
-In the case where another process is computing the new cache value, before falling back to the last known value, if `quick_retry_ms` has a value the atomic client will check the new cache once after the given duration (in milliseconds).
-
-The danger with `quick_retry_ms` is that when enabled it applies a delay to all fall-through requests at the cost of only benefitting some customers. As the average generate block duration increases, the effectiveness of `quick_retry_ms` decreases because there is less of a likelihood that a customer will get a fresh value. Consider the graph below. For example, a cache with an average generate duration of 200ms, configured with a `quick_retry_ms` of 50ms (red) will only likely get a fresh value for 25% of customers.
-
-`quick_retry_ms` is most effective for caches that are quick to generate but whose values are slow to change. `quick_retry_ms` is least effective for caches that are slow to update but quick to change.
-
-![quick_retry_ms graph](https://github.com/Ibotta/atomic_cache/raw/ca473f28e179da8c24f638eeeeb48750bc8cbe64/docs/img/quick_retry_graph.png)
-
 #### `max_retries` & `backoff_duration_ms`
 _`max_retries` defaults to 5._
 _`backoff_duration_ms` defaults to 50ms._
@@ -83,6 +72,13 @@ All incoming keys are normalized to symbols. All values are stored with a `valu
 
 It's likely preferable to use an environments file to configure the `key_storage` and `cache_storage` to always be an in-memory adapter when running in the test environment instead of manually configuring the storage adapter per spec.
 
+#### TTL in Tests
+In a test environment, unlike in a production environment, database queries are fast, and time doesn't elapse quite like it does in the real world. As tests get more complex, they make changes for which they expect the cache to expire. However, because of the synthetic nature of testing, TTLs, particularly those on locks, don't quite work the same either.
+
+There are a few approaches to address this, for example, using `sleep` to cause real time to pass (not preferable) or wrapping each test in Timecop, forcing time to pass (works but quite manual).
+
+Since this situation is highly likely to arise, `atomic_cache` provides a feature to globally disable TTL enforcement on locks for the `SharedMemory` implementation. Set `enforce_ttl = false` to disable TTL checking on locks within `SharedMemory` in a test context. This will prevent tests from failing due to unexpired TTLs on locks.
+
 #### ★ Testing Tip ★
 If using `SharedMemory` for integration style tests, a global `before(:each)` can be configured in `spec_helper.rb`.
 
@@ -90,9 +86,10 @@ If using `SharedMemory` for integration style tests, a global `before(:each)` ca
 # spec/spec_helper.rb
 RSpec.configure do |config|
 
-  #your other config
+  # your other config
 
   config.before(:each) do
+    AtomicCache::Storage::SharedMemory.enforce_ttl = false
     AtomicCache::Storage::SharedMemory.reset
   end
 end
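The fetch flow this document describes — serve the current key if present, otherwise let exactly one caller generate under a lock, otherwise fall back to the last known value — can be modeled in a few lines. `ToyCache` below is a deliberately simplified, single-threaded stand-in for illustration, not the gem's client:

```ruby
# Toy model of the fetch fallback chain (not atomic_cache's API):
# 1. serve the cached key, 2. let one caller generate under a lock,
# 3. fall back to a last known value, 4. otherwise give up (real client waits).
class ToyCache
  def initialize
    @store = {}      # simulated cache storage
    @locked = false  # simulated generate lock
  end

  def fetch(key, last_known: nil)
    return @store[key] if @store.key?(key)

    unless @locked
      @locked = true
      @store[key] = yield  # only one caller runs the generate block
      @locked = false
      return @store[key]
    end

    # another process holds the lock: prefer the last known value
    return last_known unless last_known.nil?
    nil  # (the real client would wait and re-check here)
  end
end

cache = ToyCache.new
cache.fetch(:k) { 'fresh' }    # => 'fresh' (generated)
cache.fetch(:k) { 'ignored' }  # => 'fresh' (cache hit, block not run)
```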
data/lib/atomic_cache/atomic_cache_client.rb CHANGED
@@ -6,7 +6,6 @@ require 'active_support/core_ext/hash'
 module AtomicCache
   class AtomicCacheClient
 
-    DEFAULT_quick_retry_ms = false
     DEFAULT_MAX_RETRIES = 5
     DEFAULT_GENERATE_TIME_MS = 30000 # 30 seconds
     BACKOFF_DURATION_MS = 50
@@ -27,13 +26,11 @@ module AtomicCache
       raise ArgumentError.new("`storage` required but none given") unless @storage.present?
     end
 
-
     # Attempts to fetch the given keyspace, using an optional block to generate
     # a new value when the cache is expired
     #
     # @param keyspace [AtomicCache::Keyspace] the keyspace to fetch
     # @option options [Numeric] :generate_ttl_ms (30000) Max generate duration in ms
-    # @option options [Numeric] :quick_retry_ms (false) Short duration to check back before using last known value
     # @option options [Numeric] :max_retries (5) Max times to retry in waiting case
     # @option options [Numeric] :backoff_duration_ms (50) Duration in ms to wait between retries
     # @yield Generates a new value when cache is expired
@@ -58,15 +55,14 @@ module AtomicCache
         return new_value unless new_value.nil?
       end
 
-      # quick check back to see if the other process has finished
-      # or fall back to the last known value
-      value = quick_retry(keyspace, options, tags) || last_known_value(keyspace, options, tags)
+      # attempt to fall back to the last known value
+      value = last_known_value(keyspace, options, tags)
       return value if value.present?
 
       # wait for the other process if a last known value isn't there
       if key.present?
         return time('wait.run', tags: tags) do
-          wait_for_new_value(key, options, tags)
+          wait_for_new_value(keyspace, options, tags)
         end
       end
 
@@ -110,24 +106,6 @@ module AtomicCache
       nil
     end
 
-    def quick_retry(keyspace, options, tags)
-      key = @timestamp_manager.current_key(keyspace)
-      duration = option(:quick_retry_ms, options, DEFAULT_quick_retry_ms)
-
-      if duration.present? and key.present?
-        sleep(duration.to_f / 1000)
-        value = @storage.read(key, options)
-
-        if !value.nil?
-          metrics(:increment, 'empty-cache-retry.present', tags: tags)
-          return value
-        end
-        metrics(:increment, 'empty-cache-retry.not-present', tags: tags)
-      end
-
-      nil
-    end
-
     def last_known_value(keyspace, options, tags)
       lkk = @timestamp_manager.last_known_key(keyspace)
 
@@ -151,7 +129,7 @@ module AtomicCache
       nil
     end
 
-    def wait_for_new_value(key, options, tags)
+    def wait_for_new_value(keyspace, options, tags)
       max_retries = option(:max_retries, options, DEFAULT_MAX_RETRIES)
       max_retries.times do |attempt|
         metrics_tags = tags.clone.push("attempt:#{attempt}")
@@ -162,6 +140,8 @@ module AtomicCache
         backoff_duration_ms = option(:backoff_duration_ms, options, BACKOFF_DURATION_MS)
         sleep((backoff_duration_ms.to_f / 1000) * attempt)
 
+        # re-fetch the key each time, to make sure we're actually getting the latest key with the correct LMT
+        key = @timestamp_manager.current_key(keyspace)
         value = @storage.read(key, options)
         if !value.nil?
           metrics(:increment, 'wait.present', tags: metrics_tags)
@@ -170,7 +150,7 @@ module AtomicCache
       end
 
       metrics(:increment, 'wait.give-up')
-      log(:warn, "Giving up fetching cache key `#{key}`. Exceeded max retries (#{max_retries}).")
+      log(:warn, "Giving up waiting. Exceeded max retries (#{max_retries}).")
       nil
     end
 
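The wait loop above sleeps a linearly increasing backoff, `(backoff_duration_ms / 1000) * attempt` seconds, between checks, so the first re-check happens immediately and later ones spread out. The schedule in isolation (a sketch whose defaults mirror the constants in this file, not the gem's code):

```ruby
# Linear backoff schedule used while waiting on another process to finish
# generating. Returns the per-attempt delay in milliseconds.
BACKOFF_DURATION_MS = 50
DEFAULT_MAX_RETRIES = 5

def backoff_delays_ms(max_retries: DEFAULT_MAX_RETRIES, backoff_duration_ms: BACKOFF_DURATION_MS)
  (0...max_retries).map { |attempt| backoff_duration_ms * attempt }
end

backoff_delays_ms  # => [0, 50, 100, 150, 200]
```

With the defaults, a waiter that never sees a value sleeps 500ms in total before the `wait.give-up` metric fires.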
data/lib/atomic_cache/concerns/global_lmt_cache_concern.rb CHANGED
@@ -26,7 +26,7 @@ module AtomicCache
       end
     end
 
-    def cache_class(kls)
+    def force_cache_class(kls)
       ATOMIC_CACHE_CONCERN_MUTEX.synchronize do
         @atomic_cache_class = kls
       end
data/lib/atomic_cache/storage/memory.rb CHANGED
@@ -16,7 +16,7 @@ module AtomicCache
 
       def add(raw_key, new_value, ttl, user_options={})
         store_op(raw_key, user_options) do |key, options|
-          return false if store.has_key?(key)
+          return false if store.has_key?(key) && !ttl_expired?(store[key])
           write(key, new_value, ttl, user_options)
         end
       end
@@ -29,8 +29,7 @@ module AtomicCache
         unmarshaled = unmarshal(entry[:value], user_options)
         return unmarshaled if entry[:ttl].nil? or entry[:ttl] == false
 
-        life = Time.now - entry[:written_at]
-        if (life >= entry[:ttl])
+        if ttl_expired?(entry)
           store.delete(key)
           nil
         else
@@ -54,6 +53,12 @@ module AtomicCache
 
       protected
 
+      def ttl_expired?(entry)
+        return false unless entry
+        life = Time.now - entry[:written_at]
+        life >= entry[:ttl]
+      end
+
       def write(key, value, ttl=nil, user_options)
         store[key] = {
           value: marshal(value, user_options),
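The extracted `ttl_expired?` helper compares an entry's age against its ttl using the adapter's `{ value:, ttl:, written_at: }` entry shape. The same predicate as a standalone sketch (assuming, as in the adapter, that ttl and age are expressed in the same units; the `now:` keyword is added here for testability):

```ruby
# Standalone sketch of the TTL-expiry predicate over a memory-store entry.
def ttl_expired?(entry, now: Time.now)
  return false unless entry                  # missing entry: nothing to expire
  (now - entry[:written_at]) >= entry[:ttl]  # entry age vs. allowed lifetime
end

written = Time.local(2021, 1, 1, 12, 0, 0)
entry = { value: 'foo', ttl: 30, written_at: written }

ttl_expired?(entry, now: written + 10)  # => false (10s old, 30s ttl)
ttl_expired?(entry, now: written + 60)  # => true  (past the ttl)
```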
data/lib/atomic_cache/storage/shared_memory.rb CHANGED
@@ -10,6 +10,21 @@ module AtomicCache
       STORE = {}
       SEMAPHORE = Mutex.new
 
+      @enforce_ttl = true
+      class << self
+        attr_accessor :enforce_ttl
+      end
+
+      def add(raw_key, new_value, ttl, user_options={})
+        if self.class.enforce_ttl
+          super(raw_key, new_value, ttl, user_options)
+        else
+          store_op(raw_key, user_options) do |key, options|
+            write(key, new_value, ttl, user_options)
+          end
+        end
+      end
+
       def self.reset
         STORE.clear
       end
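The `enforce_ttl` switch above is a class-instance variable exposed through the singleton class, so one flag is shared by every `SharedMemory` instance. The pattern in isolation (`TinyStore` is a made-up example for illustration, not part of the gem):

```ruby
# Minimal sketch of a class-level feature toggle like SharedMemory.enforce_ttl.
class TinyStore
  @enforce_ttl = true       # class-instance variable, one per class
  class << self
    attr_accessor :enforce_ttl
  end

  def initialize
    @store = {}
  end

  # add succeeds only if the key is absent -- unless enforcement is off,
  # in which case it overwrites unconditionally (handy in tests)
  def add(key, value)
    return false if self.class.enforce_ttl && @store.key?(key)
    @store[key] = value
    true
  end

  def read(key)
    @store[key]
  end
end

s = TinyStore.new
s.add(:k, 1)                  # => true
s.add(:k, 2)                  # => false, key already present
TinyStore.enforce_ttl = false
s.add(:k, 2)                  # => true, overwrite allowed
```

This mirrors why the test-environment `before(:each)` hooks toggle the flag per test run: the setting is global to the class, not per instance.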
data/lib/atomic_cache/version.rb CHANGED
@@ -1,5 +1,5 @@
 # frozen_string_literal: true
 
 module AtomicCache
-  VERSION = "0.2.4.rc1"
+  VERSION = "0.5.0.rc1"
 end
data/spec/atomic_cache/atomic_cache_client_spec.rb CHANGED
@@ -138,17 +138,6 @@ describe 'AtomicCacheClient' do
       timestamp_manager.lock(keyspace, 100)
     end
 
-    it 'waits for a short duration to see if the other thread generated the value' do
-      timestamp_manager.promote(keyspace, last_known_key: 'lkk', timestamp: 1420090000)
-      key_storage.set('lkk', 'old:value')
-      new_value = 'value from another thread'
-      allow(cache_storage).to receive(:read)
-        .with(timestamp_manager.current_key(keyspace), anything)
-        .and_return(nil, new_value)
-
-      expect(subject.fetch(keyspace, quick_retry_ms: 5) { 'value' }).to eq(new_value)
-    end
-
     context 'when the last known value is present' do
       it 'returns the last known value' do
         timestamp_manager.promote(keyspace, last_known_key: 'lkk', timestamp: 1420090000)
@@ -191,17 +180,6 @@ describe 'AtomicCacheClient' do
     end
 
     context 'and when a block is NOT given' do
-      it 'waits for a short duration to see if the other thread generated the value' do
-        timestamp_manager.promote(keyspace, last_known_key: 'asdf', timestamp: 1420090000)
-        new_value = 'value from another thread'
-        allow(cache_storage).to receive(:read)
-          .with(timestamp_manager.current_key(keyspace), anything)
-          .and_return(nil, new_value)
-
-        result = subject.fetch(keyspace, quick_retry_ms: 50)
-        expect(result).to eq(new_value)
-      end
-
       it 'returns nil if nothing is present' do
         expect(subject.fetch(keyspace)).to eq(nil)
       end
data/spec/atomic_cache/concerns/global_lmt_cache_concern_spec.rb CHANGED
@@ -104,12 +104,12 @@ describe 'AtomicCacheConcern' do
       class Foo2
         include AtomicCache::GlobalLMTCacheConcern
         cache_version(3)
-        cache_class('foo')
+        force_cache_class('foo')
       end
       Foo2
     end
 
-    it 'uses the given version and cache_class become part of the cache keyspace' do
+    it 'uses the given version and force_cache_class become part of the cache keyspace' do
       subject.expire_cache
       expect(key_storage.store).to have_key(:'foo:v3:lmt')
     end
data/spec/atomic_cache/integration/integration_spec.rb ADDED
@@ -0,0 +1,137 @@
+# frozen_string_literal: true
+
+require 'spec_helper'
+
+describe 'Integration -' do
+  let(:key_storage) { AtomicCache::Storage::SharedMemory.new }
+  let(:cache_storage) { AtomicCache::Storage::SharedMemory.new }
+  let(:keyspace) { AtomicCache::Keyspace.new(namespace: 'int.waiting') }
+  let(:timestamp_manager) { AtomicCache::LastModTimeKeyManager.new(keyspace: keyspace, storage: key_storage) }
+
+  before(:each) do
+    key_storage.reset
+    cache_storage.reset
+  end
+
+  describe 'fallback:' do
+    let(:generating_client) { AtomicCache::AtomicCacheClient.new(storage: cache_storage, timestamp_manager: timestamp_manager) }
+    let(:fallback_client) { AtomicCache::AtomicCacheClient.new(storage: cache_storage, timestamp_manager: timestamp_manager) }
+
+    it 'falls back to the old value when a lock is present' do
+      old_time = Time.local(2021, 1, 1, 15, 30, 0)
+      new_time = Time.local(2021, 1, 1, 16, 30, 0)
+
+      # prime cache with an old value
+
+      Timecop.freeze(old_time) do
+        generating_client.fetch(keyspace) { "old value" }
+      end
+      timestamp_manager.last_modified_time = new_time
+
+      # start generating process for new time
+      generating_thread = ClientThread.new(generating_client, keyspace)
+      generating_thread.start
+      sleep 0.05
+
+      value = fallback_client.fetch(keyspace)
+      generating_thread.terminate
+
+      expect(value).to eq("old value")
+    end
+  end
+
+  describe 'waiting:' do
+    let(:generating_client) { AtomicCache::AtomicCacheClient.new(storage: cache_storage, timestamp_manager: timestamp_manager) }
+    let(:waiting_client) { AtomicCache::AtomicCacheClient.new(storage: cache_storage, timestamp_manager: timestamp_manager) }
+
+    it 'waits for a key when no last known value is available' do
+      generating_thread = ClientThread.new(generating_client, keyspace)
+      generating_thread.start
+      waiting_thread = ClientThread.new(waiting_client, keyspace)
+      waiting_thread.start
+
+      generating_thread.generate
+      sleep 0.05
+      waiting_thread.fetch
+      sleep 0.05
+      generating_thread.complete
+      sleep 0.05
+
+      generating_thread.terminate
+      waiting_thread.terminate
+
+      expect(generating_thread.result).to eq([1, 2, 3])
+      expect(waiting_thread.result).to eq([1, 2, 3])
+    end
+  end
+end
+
+
+# Avert your eyes:
+# this class allows atomic client interaction to happen asynchronously so that
+# the waiting behavior of the client can be tested simultaneous to controlling how
+# long the 'generate' behavior takes
+#
+# It works by accepting an incoming 'message' which it places onto one of two queues
+class ClientThread
+  attr_reader :result
+
+  # idea: maybe make the return value set when the thread is initialized
+  def initialize(client, keyspace)
+    @keyspace = keyspace
+    @client = client
+    @msg_queue = Queue.new
+    @generate_queue = Queue.new
+    @result = nil
+  end
+
+  def start
+    @thread = Thread.new(&method(:run))
+  end
+
+  def fetch
+    @msg_queue << :fetch
+  end
+
+  def generate
+    @msg_queue << :generate
+  end
+
+  def complete
+    @generate_queue << :complete
+  end
+
+  def terminate
+    @msg_queue << :terminate
+  end
+
+  private
+
+  def run
+    loop do
+      msg = @msg_queue.pop
+      sleep 0.001; next unless msg
+
+      case msg
+      when :terminate
+        Thread.stop
+      when :generate
+        do_generate
+      when :fetch
+        @result = @client.fetch(@keyspace)
+      end
+    end
+  end
+
+  def do_generate
+    @client.fetch(@keyspace) do
+      loop do
+        msg = @generate_queue.pop
+        sleep 0.001; next unless msg
+        break if msg == :complete
+      end
+      @result = [1, 2, 3] # generated value
+      @result
+    end
+  end
+end
data/spec/atomic_cache/storage/memory_spec.rb CHANGED
@@ -17,17 +17,32 @@ shared_examples 'memory storage' do
     expect(result).to eq(true)
   end
 
-  it 'does not write the key if it exists' do
-    entry = { value: Marshal.dump('foo'), ttl: 100, written_at: 100 }
+  it 'does not write the key if it exists but expiration time is NOT up' do
+    entry = { value: Marshal.dump('foo'), ttl: 5000, written_at: Time.local(2021, 1, 1, 12, 0, 0) }
     subject.store[:key] = entry
 
-    result = subject.add('key', 'value', 200)
-    expect(result).to eq(false)
+    Timecop.freeze(Time.local(2021, 1, 1, 12, 0, 1)) do
+      result = subject.add('key', 'value', 5000)
+      expect(result).to eq(false)
+    end
 
     # stored values should not have changed
     expect(subject.store).to have_key(:key)
     expect(Marshal.load(subject.store[:key][:value])).to eq('foo')
-    expect(subject.store[:key][:ttl]).to eq(100)
+  end
+
+  it 'does write the key if it exists and expiration time IS up' do
+    entry = { value: Marshal.dump('foo'), ttl: 50, written_at: Time.local(2021, 1, 1, 12, 0, 0) }
+    subject.store[:key] = entry
+
+    Timecop.freeze(Time.local(2021, 1, 1, 12, 30, 0)) do
+      result = subject.add('key', 'value', 50)
+      expect(result).to eq(true)
+    end
+
+    # stored values should have changed
+    expect(subject.store).to have_key(:key)
+    expect(Marshal.load(subject.store[:key][:value])).to eq('value')
   end
 end
 
data/spec/atomic_cache/storage/shared_memory_spec.rb CHANGED
@@ -3,7 +3,21 @@
 require 'spec_helper'
 require_relative 'memory_spec'
 
-describe 'InstanceMemory' do
+describe 'SharedMemory' do
   subject { AtomicCache::Storage::SharedMemory.new }
   it_behaves_like 'memory storage'
+
+  context 'enforce_ttl disabled' do
+    before(:each) do
+      AtomicCache::Storage::SharedMemory.enforce_ttl = false
+    end
+
+    it 'allows instantly `add`ing keys' do
+      subject.add("foo", 1, ttl: 100000)
+      subject.add("foo", 2, ttl: 1)
+
+      expect(subject.store).to have_key(:foo)
+      expect(Marshal.load(subject.store[:foo][:value])).to eq(2)
+    end
+  end
 end
data/spec/spec_helper.rb CHANGED
@@ -17,4 +17,8 @@ RSpec.configure do |config|
     expectations.include_chain_clauses_in_custom_matcher_descriptions = true
     expectations.syntax = :expect
   end
+
+  config.before(:each) do
+    AtomicCache::Storage::SharedMemory.enforce_ttl = true
+  end
 end
metadata CHANGED
@@ -1,7 +1,7 @@
 --- !ruby/object:Gem::Specification
 name: atomic_cache
 version: !ruby/object:Gem::Version
-  version: 0.2.4.rc1
+  version: 0.5.0.rc1
 platform: ruby
 authors:
 - Ibotta Developers
@@ -9,7 +9,7 @@ authors:
 autorequire:
 bindir: bin
 cert_chain: []
-date: 2021-06-24 00:00:00.000000000 Z
+date: 2021-07-08 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: bundler
@@ -171,7 +171,8 @@ dependencies:
     - - "~>"
       - !ruby/object:Gem::Version
         version: '0.1'
-description: desc
+description: A gem which prevents the thundering herd problem through a distributed
+  lock
 email: osscompliance@ibotta.com
 executables: []
 extensions: []
@@ -184,7 +185,6 @@ files:
 - docs/MODEL_SETUP.md
 - docs/PROJECT_SETUP.md
 - docs/USAGE.md
-- docs/img/quick_retry_graph.png
 - lib/atomic_cache.rb
 - lib/atomic_cache/atomic_cache_client.rb
 - lib/atomic_cache/concerns/global_lmt_cache_concern.rb
@@ -200,6 +200,7 @@ files:
 - spec/atomic_cache/atomic_cache_client_spec.rb
 - spec/atomic_cache/concerns/global_lmt_cache_concern_spec.rb
 - spec/atomic_cache/default_config_spec.rb
+- spec/atomic_cache/integration/integration_spec.rb
 - spec/atomic_cache/key/keyspace_spec.rb
 - spec/atomic_cache/key/last_mod_time_key_manager_spec.rb
 - spec/atomic_cache/storage/dalli_spec.rb
@@ -229,5 +230,9 @@ requirements: []
 rubygems_version: 3.0.8
 signing_key:
 specification_version: 4
-summary: summary
+summary: In a nutshell:* The key of every cached value includes a timestamp* Once
+  a cache key is written to, it is never written over* When a newer version of a cached
+  value is available, it is written to a new key* When a new value is being generated
+  for a new key only 1 process is allowed to do so at a time* While the new value
+  is being generated, other processes read one key older than most recent
 test_files: []
data/docs/img/quick_retry_graph.png DELETED
Binary file