atomic_cache 0.2.5.rc1 → 0.5.1.rc1

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: 5c6e0bf96718fb99b6047009b67c12f3fb4da0b36fd18ba8eb7aded71b744cb2
-  data.tar.gz: 8443307bb59ff3e3ca393cb6401f6c30974ec97bc3729024848be74504c5abda
+  metadata.gz: f03b8f8d9294f3a40ea719ea5165d4c6789c3eebb36459768c8fbd0c57d8fc3a
+  data.tar.gz: b2a320b8323c57785e0202e29d89922b33e63fc9ebd898d6df61f76fb5e2600c
 SHA512:
-  metadata.gz: f438b868a3b60d2d64be72fa314f8cb4ca2a04d9972286d623393bf9754b488577f150d4726c26ee236a0a0cbaae9444fc892fc808b8cda3b23052e2adb8fb8d
-  data.tar.gz: d84517bed76804507f0dc395205eaad4e114e0dfda3023b33f6a1e1191e396d1643b7fb501ebd66e677d8d682ef822b5fa5329660ee5c0c2947d3adb4009c99c
+  metadata.gz: f461c2bf7d903c6f7940334b8f5d11672c7ed04d1d55ad5d25c2337c07906d24a39f8958ddf04a2dd5d5dc690c9a0920121fb99c9f25c809c08959e48ee13ae3
+  data.tar.gz: 9100ad9f5e5012c3729e3bc3a3b95ca64705c562f14e5d610df9f223436ee9351ec58cddc64b61c87c186a588f18e6158e891308aa5d431739f56aa2562744fa
data/README.md CHANGED
@@ -23,7 +23,7 @@ In a nutshell:
 class Foo < ActiveRecord::Base
   include AtomicCache::GlobalLMTCacheConcern
 
-  cache_class(:custom_foo) # optional
+  force_cache_class(:custom_foo) # optional
   cache_version(5) # optional
 
   def active_foos(ids)
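This release renames the class-level macro from `cache_class` to `force_cache_class`. Its behavior — a thread-safe, class-level override of the cache identifier that falls back to the class name — can be sketched in plain Ruby (the module, reader name, and class names below are illustrative, not the gem's actual API):

```ruby
# Sketch of the renamed macro's semantics: a mutex-guarded class-level
# setter for a cache identifier, defaulting to the class name when unset.
NAMING_MUTEX = Mutex.new

module CacheNaming
  def force_cache_class(kls)
    NAMING_MUTEX.synchronize { @atomic_cache_class = kls }
  end

  # hypothetical reader, for illustration only
  def cache_identifier
    @atomic_cache_class || to_s
  end
end

class CustomFoo
  extend CacheNaming
  force_cache_class(:custom_foo)
end

class PlainBar
  extend CacheNaming
end
```

`CustomFoo.cache_identifier` yields the forced `:custom_foo`, while `PlainBar.cache_identifier` falls back to `"PlainBar"`.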
data/docs/MODEL_SETUP.md CHANGED
@@ -7,13 +7,13 @@ class Foo < ActiveRecord::Base
 end
 ```
 
-### cache_class
+### force_cache_class
 By default the cache identifier for a class is set to the name of the class (ie. `self.to_s`). In some cases it makes sense to set a custom value for the cache identifier. In cases where a custom cache identifier is set, it's important that the identifier remain unique across the project.
 
 ```ruby
 class SuperDescriptiveDomainModelAbstractFactoryImplManager < ActiveRecord::Base
   include AtomicCache::GlobalLMTCacheConcern
-  cache_class('sddmafim')
+  force_cache_class('sddmafim')
 end
 ```
 
@@ -23,9 +23,10 @@ AtomicCache::DefaultConfig.configure do |config|
   config.metrics = Datadog::Statsd.new('localhost', 8125, namespace: 'cache.atomic')
 
   # note: these values can also be set in an env file for env-specific settings
-  config.namespace = 'atom'
-  config.cache_storage = AtomicCache::Storage::SharedMemory.new
-  config.key_storage = AtomicCache::Storage::SharedMemory.new
+  config.namespace = 'atom'
+  config.default_options = { generate_ttl_ms: 500 }
+  config.cache_storage = AtomicCache::Storage::SharedMemory.new
+  config.key_storage = AtomicCache::Storage::SharedMemory.new
 end
 ```
 
@@ -36,7 +37,7 @@ Note that `Datadog::Statsd` is not _required_. Adding it, however, will enable
 * `key_storage` - Storage adapter for key manager (see below)
 
 #### Optional
-* `default_options` - Default options for every fetch call. See [fetch options](/Ibotta/atomic_cache/blob/main/docs/USAGE.md#fetch).
+* `default_options` - Override the default options for every fetch call, unless overridden at the call site. See [fetch options](/Ibotta/atomic_cache/blob/main/docs/USAGE.md#fetch).
 * `logger` - Logger instance. Used for debug and warn logs. Defaults to nil.
 * `timestamp_formatter` - Proc to format last modified time for storage. Defaults to timestamp (`Time.to_i`)
 * `metrics` - Metrics instance. Defaults to nil.
@@ -45,6 +46,49 @@ Note that `Datadog::Statsd` is not _required_. Adding it, however, will enable
 #### ★ Best Practice ★
 Keep the global namespace short. For example, memcached has a limit of 250 characters for key length.
 
+#### More Complex Rails Configuration
+
+In any real-world project, the need to run multiple caching strategies or setups is likely to arise. In those cases, it's often advantageous
+to keep a DRY setup, with multiple caching clients sharing the same config. Because Rails initializers run after the environment-specific
+config files, a sane way to manage this is to keep client network settings in the config files, then reference them from the initializer.
+
+```ruby
+# config/environments/staging
+config.memcache_hosts = [ "staging.host.cache.amazonaws.com" ]
+config.cache_store_options = {
+  expires_in: 15.minutes,
+  compress: true,
+  # ...
+}
+
+# config/environments/production
+config.memcache_hosts = [ "prod1.host.cache.amazonaws.com", "prod2.host.cache.amazonaws.com" ]
+config.cache_store_options = {
+  expires_in: 1.hour,
+  compress: true,
+  # ...
+}
+
+# config/initializers/cache.rb
+AtomicCache::DefaultConfig.configure do |config|
+  if Rails.env.development? || Rails.env.test?
+    config.cache_storage = AtomicCache::Storage::SharedMemory.new
+    config.key_storage = AtomicCache::Storage::SharedMemory.new
+
+  elsif Rails.env.staging? || Rails.env.production?
+    # Your::Application.config will be loaded by config/environments/*
+    memcache_hosts = Your::Application.config.memcache_hosts
+    options = Your::Application.config.cache_store_options
+
+    dc = Dalli::Client.new(memcache_hosts, options)
+    config.cache_storage = AtomicCache::Storage::Dalli.new(dc)
+    config.key_storage = AtomicCache::Storage::Dalli.new(dc)
+  end
+
+  # other AtomicCache configuration...
+end
+```
+
 ## Storage Adapters
 
 ### InstanceMemory & SharedMemory
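The newly documented `default_options` setting layers between the library's built-in defaults and options passed to a specific fetch. A rough sketch of that precedence with plain hash merging (the built-in values below are taken from the client constants shown later in this diff; the helper itself is hypothetical):

```ruby
# Hypothetical illustration of option precedence:
# library built-ins < config.default_options < call-site options.
LIBRARY_DEFAULTS = { generate_ttl_ms: 30_000, max_retries: 5, backoff_duration_ms: 50 }

def effective_options(configured_defaults, call_site_options = {})
  LIBRARY_DEFAULTS.merge(configured_defaults).merge(call_site_options)
end

# generate_ttl_ms comes from config, max_retries from the call site,
# backoff_duration_ms from the library default
opts = effective_options({ generate_ttl_ms: 500 }, { max_retries: 10 })
```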
data/docs/USAGE.md CHANGED
@@ -38,17 +38,6 @@ The ideal `generate_ttl_ms` time is just slightly longer than the average genera
 
 If metrics are enabled, the `<namespace>.generate.run` can be used to determine the min/max/average generate time for a particular cache and the `generate_ttl_ms` tuned using that.
 
-#### `quick_retry_ms`
-_`false` to disable. Defaults to false._
-
-In the case where another process is computing the new cache value, before falling back to the last known value, if `quick_retry_ms` has a value the atomic client will check the new cache once after the given duration (in milliseconds).
-
-The danger with `quick_retry_ms` is that when enabled it applies a delay to all fall-through requests at the cost of only benefitting some customers. As the average generate block duration increases, the effectiveness of `quick_retry_ms` decreases because there is less of a likelihood that a customer will get a fresh value. Consider the graph below. For example, a cache with an average generate duration of 200ms, configured with a `quick_retry_ms` of 50ms (red) will only likely get a fresh value for 25% of customers.
-
-`quick_retry_ms` is most effective for caches that are quick to generate but whose values are slow to change. `quick_retry_ms` is least effective for caches that are slow to update but quick to change.
-
-![quick_retry_ms graph](https://github.com/Ibotta/atomic_cache/raw/ca473f28e179da8c24f638eeeeb48750bc8cbe64/docs/img/quick_retry_graph.png)
-
 #### `max_retries` & `backoff_duration_ms`
 _`max_retries` defaults to 5._
 _`backoff_duration_ms` defaults to 50ms._
@@ -83,6 +72,13 @@ All incoming keys are normalized to symbols. All values are stored with a `value
 
 It's likely preferable to use an environments file to configure the `key_storage` and `cache_storage` to always be an in-memory adapter when running in the test environment instead of manually configuring the storage adapter per spec.
 
+#### TTL in Tests
+In a test environment, unlike in a production environment, database queries are fast, and time doesn't elapse quite like it does in the real world. As tests get more complex, they perform changes for which they expect the cache to expire. However, because of the synthetic nature of testing, TTLs, particularly those on locks, don't quite work the same either.
+
+There are a few approaches to address this, for example, using `sleep` to cause real time to pass (not preferable) or wrapping each test in Timecop, forcing time to pass (works but quite manual).
+
+Since this situation is highly likely to arise, `atomic_cache` provides a feature to globally disable enforcing TTL on locks for the `SharedMemory` implementation. Set `enforce_ttl = false` to disable TTL checking on locks within SharedMemory in a test context. This will prevent tests from failing due to unexpired TTLs on locks.
+
 #### ★ Testing Tip ★
 If using `SharedMemory` for integration style tests, a global `before(:each)` can be configured in `spec_helper.rb`.
@@ -90,9 +86,10 @@ If using `SharedMemory` for integration style tests, a global `before(:each)` ca
 # spec/spec_helper.rb
 RSpec.configure do |config|
 
-  #your other config
+  # your other config
 
   config.before(:each) do
+    AtomicCache::Storage::SharedMemory.enforce_ttl = false
     AtomicCache::Storage::SharedMemory.reset
   end
 end
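The effect of the `enforce_ttl` switch described above can be reproduced with a stripped-down, stdlib-only store (a sketch, not the gem's implementation; `TinyLockStore` and the millisecond TTL bookkeeping are illustrative): with the flag on, `add` refuses to overwrite a still-live entry; with it off, `add` always writes, so a lingering lock never blocks a test.

```ruby
# Minimal sketch of the documented enforce_ttl toggle for lock storage.
class TinyLockStore
  class << self
    attr_accessor :enforce_ttl # class-wide switch, true by default
  end
  @enforce_ttl = true

  def initialize
    @store = {}
  end

  # add succeeds only if the key is absent or its TTL (ms) has lapsed;
  # with enforce_ttl disabled, it always writes.
  def add(key, value, ttl_ms)
    entry = @store[key]
    if self.class.enforce_ttl && entry
      life_ms = (Time.now - entry[:written_at]) * 1000
      return false if life_ms < entry[:ttl_ms]
    end
    @store[key] = { value: value, ttl_ms: ttl_ms, written_at: Time.now }
    true
  end

  def read(key)
    @store[key] && @store[key][:value]
  end
end

store = TinyLockStore.new
store.add(:lock, :owner_a, 100_000)            # long-lived "lock"
blocked  = store.add(:lock, :owner_b, 100_000) # refused: lock still live

TinyLockStore.enforce_ttl = false
replaced = store.add(:lock, :owner_b, 100_000) # succeeds once disabled
```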
data/lib/atomic_cache/atomic_cache_client.rb CHANGED
@@ -6,7 +6,6 @@ require 'active_support/core_ext/hash'
 module AtomicCache
   class AtomicCacheClient
 
-    DEFAULT_quick_retry_ms = false
     DEFAULT_MAX_RETRIES = 5
     DEFAULT_GENERATE_TIME_MS = 30000 # 30 seconds
     BACKOFF_DURATION_MS = 50
@@ -32,7 +31,6 @@ module AtomicCache
     #
     # @param keyspace [AtomicCache::Keyspace] the keyspace to fetch
     # @option options [Numeric] :generate_ttl_ms (30000) Max generate duration in ms
-    # @option options [Numeric] :quick_retry_ms (false) Short duration to check back before using last known value
     # @option options [Numeric] :max_retries (5) Max times to retry in waiting case
     # @option options [Numeric] :backoff_duration_ms (50) Duration in ms to wait between retries
     # @yield Generates a new value when cache is expired
@@ -57,9 +55,8 @@ module AtomicCache
         return new_value unless new_value.nil?
       end
 
-      # quick check back to see if the other process has finished
-      # or fall back to the last known value
-      value = quick_retry(key, options, tags) || last_known_value(keyspace, options, tags)
+      # attempt to fall back to the last known value
+      value = last_known_value(keyspace, options, tags)
       return value if value.present?
 
       # wait for the other process if a last known value isn't there
@@ -109,22 +106,6 @@ module AtomicCache
       nil
     end
 
-    def quick_retry(key, options, tags)
-      duration = option(:quick_retry_ms, options, DEFAULT_quick_retry_ms)
-      if duration.present? and key.present?
-        sleep(duration.to_f / 1000)
-        value = @storage.read(key, options)
-
-        if !value.nil?
-          metrics(:increment, 'empty-cache-retry.present', tags: tags)
-          return value
-        end
-        metrics(:increment, 'empty-cache-retry.not-present', tags: tags)
-      end
-
-      nil
-    end
-
     def last_known_value(keyspace, options, tags)
       lkk = @timestamp_manager.last_known_key(keyspace)
@@ -138,13 +119,11 @@ module AtomicCache
         return lkv
       end
 
-      # if the value of the last known key is nil, we can infer that it's
-      # most likely expired, thus remove it so other processes don't waste
-      # time trying to read it
-      @storage.delete(lkk)
+        metrics(:increment, 'last-known-value.nil', tags: tags)
+      else
+        metrics(:increment, 'last-known-value.not-present', tags: tags)
       end
 
-      metrics(:increment, 'last-known-value.not-present', tags: tags)
       nil
     end
 
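With `quick_retry_ms` removed, the client's fall-through order is: generate when the lock is acquired, otherwise serve the last known value, otherwise poll for the other process's result with backoff up to `max_retries`. A stdlib-only sketch of that control flow (all names and the linear backoff shape are illustrative, not the gem's API):

```ruby
# Hypothetical sketch of the simplified fetch fall-through after this change.
def fetch_flow(lock_acquired:, last_known:, read_current:,
               max_retries: 5, backoff_duration_ms: 50)
  # 1. the winning process generates the fresh value itself
  return :newly_generated_value if lock_acquired

  # 2. another process holds the lock: fall back to the last known value
  return last_known unless last_known.nil?

  # 3. nothing to serve: wait for the other process, backing off between reads
  max_retries.times do |attempt|
    sleep((backoff_duration_ms * (attempt + 1)) / 1000.0)
    value = read_current.call
    return value unless value.nil?
  end
  nil
end

reads = [nil, nil, :fresh]
waited = fetch_flow(lock_acquired: false, last_known: nil,
                    read_current: -> { reads.shift }, backoff_duration_ms: 1)
```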
data/lib/atomic_cache/concerns/global_lmt_cache_concern.rb CHANGED
@@ -26,7 +26,7 @@ module AtomicCache
       end
     end
 
-    def cache_class(kls)
+    def force_cache_class(kls)
       ATOMIC_CACHE_CONCERN_MUTEX.synchronize do
         @atomic_cache_class = kls
       end
data/lib/atomic_cache/key/last_mod_time_key_manager.rb CHANGED
@@ -59,6 +59,13 @@ module AtomicCache
       @storage.add(keyspace.lock_key, LOCK_VALUE, ttl, options)
     end
 
+    # check if the keyspace is locked
+    #
+    # @param keyspace [AtomicCache::Keyspace] keyspace to check
+    def lock_present?(keyspace)
+      @storage.read(keyspace.lock_key) == LOCK_VALUE
+    end
+
     # remove existing lock to allow other processes to update keyspace
     #
     # @param keyspace [AtomicCache::Keyspace] keyspace to lock
data/lib/atomic_cache/storage/memory.rb CHANGED
@@ -16,7 +16,7 @@ module AtomicCache
 
     def add(raw_key, new_value, ttl, user_options={})
       store_op(raw_key, user_options) do |key, options|
-        return false if store.has_key?(key)
+        return false if store.has_key?(key) && !ttl_expired?(store[key])
         write(key, new_value, ttl, user_options)
       end
     end
@@ -29,8 +29,7 @@ module AtomicCache
       unmarshaled = unmarshal(entry[:value], user_options)
       return unmarshaled if entry[:ttl].nil? or entry[:ttl] == false
 
-      life = Time.now - entry[:written_at]
-      if (life >= entry[:ttl])
+      if ttl_expired?(entry)
         store.delete(key)
         nil
       else
@@ -54,6 +53,12 @@ module AtomicCache
 
     protected
 
+    def ttl_expired?(entry)
+      return false unless entry
+      life = Time.now - entry[:written_at]
+      life >= entry[:ttl]
+    end
+
     def write(key, value, ttl=nil, user_options)
       store[key] = {
         value: marshal(value, user_options),
data/lib/atomic_cache/storage/shared_memory.rb CHANGED
@@ -10,6 +10,21 @@ module AtomicCache
     STORE = {}
     SEMAPHORE = Mutex.new
 
+    @enforce_ttl = true
+    class << self
+      attr_accessor :enforce_ttl
+    end
+
+    def add(raw_key, new_value, ttl, user_options={})
+      if self.class.enforce_ttl
+        super(raw_key, new_value, ttl, user_options)
+      else
+        store_op(raw_key, user_options) do |key, options|
+          write(key, new_value, ttl, user_options)
+        end
+      end
+    end
+
     def self.reset
       STORE.clear
     end
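The storage change above funnels both `add` and `read` through a single `ttl_expired?` predicate, with `read` evicting entries whose TTL has lapsed. That read path can be reproduced in isolation (a sketch using a plain hash and seconds-based TTLs; the entry layout mirrors the `value`/`ttl`/`written_at` hash in the diff, but the free-standing functions are hypothetical):

```ruby
# Sketch: shared TTL predicate, with read deleting entries that lapsed.
def ttl_expired?(entry, now)
  return false unless entry
  (now - entry[:written_at]) >= entry[:ttl]
end

def read(store, key, now: Time.now)
  entry = store[key]
  return nil if entry.nil?
  return entry[:value] if entry[:ttl].nil? # no TTL: always live

  if ttl_expired?(entry, now)
    store.delete(key) # expired: evict so later reads are cheap
    nil
  else
    entry[:value]
  end
end

t0 = Time.now
store = { greeting: { value: "hi", ttl: 10, written_at: t0 } }
fresh = read(store, :greeting, now: t0 + 1)  # within TTL
stale = read(store, :greeting, now: t0 + 60) # past TTL, entry evicted
```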
data/lib/atomic_cache/version.rb CHANGED
@@ -1,5 +1,5 @@
 # frozen_string_literal: true
 
 module AtomicCache
-  VERSION = "0.2.5.rc1"
+  VERSION = "0.5.1.rc1"
 end
data/spec/atomic_cache/atomic_cache_client_spec.rb CHANGED
@@ -138,17 +138,6 @@ describe 'AtomicCacheClient' do
         timestamp_manager.lock(keyspace, 100)
       end
 
-      it 'waits for a short duration to see if the other thread generated the value' do
-        timestamp_manager.promote(keyspace, last_known_key: 'lkk', timestamp: 1420090000)
-        key_storage.set('lkk', 'old:value')
-        new_value = 'value from another thread'
-        allow(cache_storage).to receive(:read)
-          .with(timestamp_manager.current_key(keyspace), anything)
-          .and_return(nil, new_value)
-
-        expect(subject.fetch(keyspace, quick_retry_ms: 5) { 'value' }).to eq(new_value)
-      end
-
       context 'when the last known value is present' do
         it 'returns the last known value' do
           timestamp_manager.promote(keyspace, last_known_key: 'lkk', timestamp: 1420090000)
@@ -179,29 +168,11 @@ describe 'AtomicCacheClient' do
           result = subject.fetch(keyspace, backoff_duration_ms: 5) { 'value from generate' }
           expect(result).to eq(nil)
        end
-
-        it 'deletes the last known key' do
-          key_storage.set(keyspace.last_known_key_key, :oldkey)
-          cache_storage.set(:oldkey, nil)
-          subject.fetch(keyspace, backoff_duration_ms: 5) { 'value from generate' }
-          expect(cache_storage.store).to_not have_key(:oldkey)
-        end
       end
     end
   end
 
   context 'and when a block is NOT given' do
-    it 'waits for a short duration to see if the other thread generated the value' do
-      timestamp_manager.promote(keyspace, last_known_key: 'asdf', timestamp: 1420090000)
-      new_value = 'value from another thread'
-      allow(cache_storage).to receive(:read)
-        .with(timestamp_manager.current_key(keyspace), anything)
-        .and_return(nil, new_value)
-
-      result = subject.fetch(keyspace, quick_retry_ms: 50)
-      expect(result).to eq(new_value)
-    end
-
     it 'returns nil if nothing is present' do
       expect(subject.fetch(keyspace)).to eq(nil)
     end
data/spec/atomic_cache/concerns/global_lmt_cache_concern_spec.rb CHANGED
@@ -104,12 +104,12 @@ describe 'AtomicCacheConcern' do
       class Foo2
         include AtomicCache::GlobalLMTCacheConcern
         cache_version(3)
-        cache_class('foo')
+        force_cache_class('foo')
       end
       Foo2
     end
 
-    it 'uses the given version and cache_class become part of the cache keyspace' do
+    it 'uses the given version and force_cache_class become part of the cache keyspace' do
       subject.expire_cache
       expect(key_storage.store).to have_key(:'foo:v3:lmt')
     end
data/spec/atomic_cache/integration/integration_spec.rb ADDED
@@ -0,0 +1,137 @@
+# frozen_string_literal: true
+
+require 'spec_helper'
+
+describe 'Integration -' do
+  let(:key_storage) { AtomicCache::Storage::SharedMemory.new }
+  let(:cache_storage) { AtomicCache::Storage::SharedMemory.new }
+  let(:keyspace) { AtomicCache::Keyspace.new(namespace: 'int.waiting') }
+  let(:timestamp_manager) { AtomicCache::LastModTimeKeyManager.new(keyspace: keyspace, storage: key_storage) }
+
+  before(:each) do
+    key_storage.reset
+    cache_storage.reset
+  end
+
+  describe 'fallback:' do
+    let(:generating_client) { AtomicCache::AtomicCacheClient.new(storage: cache_storage, timestamp_manager: timestamp_manager) }
+    let(:fallback_client) { AtomicCache::AtomicCacheClient.new(storage: cache_storage, timestamp_manager: timestamp_manager) }
+
+    it 'falls back to the old value when a lock is present' do
+      old_time = Time.local(2021, 1, 1, 15, 30, 0)
+      new_time = Time.local(2021, 1, 1, 16, 30, 0)
+
+      # prime cache with an old value
+
+      Timecop.freeze(old_time) do
+        generating_client.fetch(keyspace) { "old value" }
+      end
+      timestamp_manager.last_modified_time = new_time
+
+      # start generating process for new time
+      generating_thread = ClientThread.new(generating_client, keyspace)
+      generating_thread.start
+      sleep 0.05
+
+      value = fallback_client.fetch(keyspace)
+      generating_thread.terminate
+
+      expect(value).to eq("old value")
+    end
+  end
+
+  describe 'waiting:' do
+    let(:generating_client) { AtomicCache::AtomicCacheClient.new(storage: cache_storage, timestamp_manager: timestamp_manager) }
+    let(:waiting_client) { AtomicCache::AtomicCacheClient.new(storage: cache_storage, timestamp_manager: timestamp_manager) }
+
+    it 'waits for a key when no last know value is available' do
+      generating_thread = ClientThread.new(generating_client, keyspace)
+      generating_thread.start
+      waiting_thread = ClientThread.new(waiting_client, keyspace)
+      waiting_thread.start
+
+      generating_thread.generate
+      sleep 0.05
+      waiting_thread.fetch
+      sleep 0.05
+      generating_thread.complete
+      sleep 0.05
+
+      generating_thread.terminate
+      waiting_thread.terminate
+
+      expect(generating_thread.result).to eq([1, 2, 3])
+      expect(waiting_thread.result).to eq([1, 2, 3])
+    end
+  end
+end
+
+
+# Avert your eyes:
+# this class allows atomic client interaction to happen asynchronously so that
+# the waiting behavior of the client can be tested simultaneous to controlling how
+# long the 'generate' behavior takes
+#
+# It works by accepting an incoming 'message' which it places onto one of two queues
+class ClientThread
+  attr_reader :result
+
+  # idea: maybe make the return value set when the thread is initialized
+  def initialize(client, keyspace)
+    @keyspace = keyspace
+    @client = client
+    @msg_queue = Queue.new
+    @generate_queue = Queue.new
+    @result = nil
+  end
+
+  def start
+    @thread = Thread.new(&method(:run))
+  end
+
+  def fetch
+    @msg_queue << :fetch
+  end
+
+  def generate
+    @msg_queue << :generate
+  end
+
+  def complete
+    @generate_queue << :complete
+  end
+
+  def terminate
+    @msg_queue << :terminate
+  end
+
+  private
+
+  def run
+    loop do
+      msg = @msg_queue.pop
+      sleep 0.001; next unless msg
+
+      case msg
+      when :terminate
+        Thread.stop
+      when :generate
+        do_generate
+      when :fetch
+        @result = @client.fetch(@keyspace)
+      end
+    end
+  end
+
+  def do_generate
+    @client.fetch(@keyspace) do
+      loop do
+        msg = @generate_queue.pop
+        sleep 0.001; next unless msg
+        break if msg == :complete
+      end
+      @result = [1, 2, 3] # generated value
+      @result
+    end
+  end
+end
data/spec/atomic_cache/key/last_mod_time_key_manager_spec.rb CHANGED
@@ -40,6 +40,15 @@ describe 'LastModTimeKeyManager' do
     expect(storage.store).to_not have_key(:'ns:lock')
   end
 
+  it 'checks if the lock is present' do
+    subject.lock(req_keyspace, 100)
+    expect(subject.lock_present?(req_keyspace)).to eq(true)
+  end
+
+  it 'checks if the lock is not present' do
+    expect(subject.lock_present?(req_keyspace)).to eq(false)
+  end
+
   it 'promotes a timestamp and last known key' do
     subject.promote(req_keyspace, last_known_key: 'asdf', timestamp: timestamp)
     expect(storage.read(:'ns:lkk')).to eq('asdf')
data/spec/atomic_cache/storage/memory_spec.rb CHANGED
@@ -17,17 +17,32 @@ shared_examples 'memory storage' do
     expect(result).to eq(true)
   end
 
-  it 'does not write the key if it exists' do
-    entry = { value: Marshal.dump('foo'), ttl: 100, written_at: 100 }
+  it 'does not write the key if it exists but expiration time is NOT up' do
+    entry = { value: Marshal.dump('foo'), ttl: 5000, written_at: Time.local(2021, 1, 1, 12, 0, 0) }
     subject.store[:key] = entry
 
-    result = subject.add('key', 'value', 200)
-    expect(result).to eq(false)
+    Timecop.freeze(Time.local(2021, 1, 1, 12, 0, 1)) do
+      result = subject.add('key', 'value', 5000)
+      expect(result).to eq(false)
+    end
 
     # stored values should not have changed
     expect(subject.store).to have_key(:key)
     expect(Marshal.load(subject.store[:key][:value])).to eq('foo')
-    expect(subject.store[:key][:ttl]).to eq(100)
+  end
+
+  it 'does write the key if it exists and expiration time IS up' do
+    entry = { value: Marshal.dump('foo'), ttl: 50, written_at: Time.local(2021, 1, 1, 12, 0, 0) }
+    subject.store[:key] = entry
+
+    Timecop.freeze(Time.local(2021, 1, 1, 12, 30, 0)) do
+      result = subject.add('key', 'value', 50)
+      expect(result).to eq(true)
+    end
+
+    # stored value should have been replaced
+    expect(subject.store).to have_key(:key)
+    expect(Marshal.load(subject.store[:key][:value])).to eq('value')
   end
 end
 
data/spec/atomic_cache/storage/shared_memory_spec.rb CHANGED
@@ -3,7 +3,21 @@
 require 'spec_helper'
 require_relative 'memory_spec'
 
-describe 'InstanceMemory' do
+describe 'SharedMemory' do
   subject { AtomicCache::Storage::SharedMemory.new }
   it_behaves_like 'memory storage'
+
+  context 'enforce_ttl disabled' do
+    before(:each) do
+      AtomicCache::Storage::SharedMemory.enforce_ttl = false
+    end
+
+    it 'allows instantly `add`ing keys' do
+      subject.add("foo", 1, ttl: 100000)
+      subject.add("foo", 2, ttl: 1)
+
+      expect(subject.store).to have_key(:foo)
+      expect(Marshal.load(subject.store[:foo][:value])).to eq(2)
+    end
+  end
 end
data/spec/spec_helper.rb CHANGED
@@ -17,4 +17,8 @@ RSpec.configure do |config|
     expectations.include_chain_clauses_in_custom_matcher_descriptions = true
     expectations.syntax = :expect
   end
+
+  config.before(:each) do
+    AtomicCache::Storage::SharedMemory.enforce_ttl = true
+  end
 end
metadata CHANGED
@@ -1,7 +1,7 @@
 --- !ruby/object:Gem::Specification
 name: atomic_cache
 version: !ruby/object:Gem::Version
-  version: 0.2.5.rc1
+  version: 0.5.1.rc1
 platform: ruby
 authors:
 - Ibotta Developers
@@ -9,7 +9,7 @@ authors:
 autorequire:
 bindir: bin
 cert_chain: []
-date: 2021-06-28 00:00:00.000000000 Z
+date: 2021-07-12 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: bundler
@@ -185,7 +185,6 @@ files:
 - docs/MODEL_SETUP.md
 - docs/PROJECT_SETUP.md
 - docs/USAGE.md
-- docs/img/quick_retry_graph.png
 - lib/atomic_cache.rb
 - lib/atomic_cache/atomic_cache_client.rb
 - lib/atomic_cache/concerns/global_lmt_cache_concern.rb
@@ -201,7 +200,7 @@ files:
 - spec/atomic_cache/atomic_cache_client_spec.rb
 - spec/atomic_cache/concerns/global_lmt_cache_concern_spec.rb
 - spec/atomic_cache/default_config_spec.rb
-- spec/atomic_cache/integration/waiting_spec.rb
+- spec/atomic_cache/integration/integration_spec.rb
 - spec/atomic_cache/key/keyspace_spec.rb
 - spec/atomic_cache/key/last_mod_time_key_manager_spec.rb
 - spec/atomic_cache/storage/dalli_spec.rb
data/docs/img/quick_retry_graph.png DELETED
Binary file
data/spec/atomic_cache/integration/waiting_spec.rb DELETED
@@ -1,102 +0,0 @@
-# frozen_string_literal: true
-
-require 'spec_helper'
-
-describe 'Integration' do
-  let(:key_storage) { AtomicCache::Storage::SharedMemory.new }
-  let(:cache_storage) { AtomicCache::Storage::SharedMemory.new }
-  let(:keyspace) { AtomicCache::Keyspace.new(namespace: 'int.waiting') }
-  let(:timestamp_manager) { AtomicCache::LastModTimeKeyManager.new(keyspace: keyspace, storage: key_storage) }
-
-  let(:generating_client) { AtomicCache::AtomicCacheClient.new(storage: cache_storage, timestamp_manager: timestamp_manager) }
-  let(:waiting_client) { AtomicCache::AtomicCacheClient.new(storage: cache_storage, timestamp_manager: timestamp_manager) }
-
-  it 'correctly waits for a key when no last know value is available' do
-    generating_thread = ClientThread.new(generating_client, keyspace)
-    generating_thread.start
-    waiting_thread = ClientThread.new(waiting_client, keyspace)
-    waiting_thread.start
-
-    generating_thread.generate
-    sleep 0.05
-    waiting_thread.fetch
-    sleep 0.05
-    generating_thread.complete
-    sleep 0.05
-
-    generating_thread.terminate
-    waiting_thread.terminate
-
-    expect(generating_thread.result).to eq([1, 2, 3])
-    expect(waiting_thread.result).to eq([1, 2, 3])
-  end
-end
-
-
-# Avert your eyes:
-# this class allows atomic client interaction to happen asynchronously so that
-# the waiting behavior of the client can be tested simultaneous to controlling how
-# long the 'generate' behavior takes
-#
-# It works by accepting an incoming 'message' which it places onto one of two queues
-class ClientThread
-  attr_reader :result
-
-  def initialize(client, keyspace)
-    @keyspace = keyspace
-    @client = client
-    @msg_queue = Queue.new
-    @generate_queue = Queue.new
-    @result = nil
-  end
-
-  def start
-    @thread = Thread.new(&method(:run))
-  end
-
-  def fetch
-    @msg_queue << :fetch
-  end
-
-  def generate
-    @msg_queue << :generate
-  end
-
-  def complete
-    @generate_queue << :complete
-  end
-
-  def terminate
-    @msg_queue << :terminate
-  end
-
-  private
-
-  def run
-    loop do
-      msg = @msg_queue.pop
-      sleep 0.001; next unless msg
-
-      case msg
-      when :terminate
-        Thread.stop
-      when :generate
-        do_generate
-      when :fetch
-        @result = @client.fetch(@keyspace)
-      end
-    end
-  end
-
-  def do_generate
-    @client.fetch(@keyspace) do
-      loop do
-        msg = @generate_queue.pop
-        sleep 0.001; next unless msg
-        break if msg == :complete
-      end
-      @result = [1, 2, 3] # generated value
-      @result
-    end
-  end
-end