atomic_cache 0.1.0.rc1 → 0.2.1.rc2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
- SHA1:
- metadata.gz: a6de69eb6ef0c02cbe647a36487ee3e5ef051c63
- data.tar.gz: 140e5dacb3a85b7e67259b9789599f8d73c9d9d0
+ SHA256:
+ metadata.gz: 89d82475dddde4fb324d6f2187d47122916dc381883fedcfec2ff00204035e08
+ data.tar.gz: 34d5d998acee788561e3e49019d5983fed4226e4cf88ba7cb0d8fe7cb2bd5a3b
  SHA512:
- metadata.gz: 604b436749bba999493381af064d0011a6cd5197bc191267873415370e8f8484e523780ca89ad5ed302ab40bf96b784f6df505bb0ffb14a40a72e17377f049ca
- data.tar.gz: 3abb13a75e2fb3f426ac8ea9907ca7591692f7e8219b3516009522da6f5a044fe75a360a5ac876d68c45c26e8842f6326a1a90d5b38dbd4ac1bff1b3bc23cb75
+ metadata.gz: e19f7c43629d01ebeceba040c818a8ae75f183245e9171d3ab39a3d302388769f263292ae04d39c1dcc01137190181f27f5628b3f091a26808c1767ce85ba24e
+ data.tar.gz: b5d0036456849d3d882227231908da8e885e1e6772f4e0b1a4b121a3690e34acd0f0b4ce5a67816c5fc62b5e053100d8c93980b0e8252f693f8737e42bed60e7
data/README.md CHANGED
@@ -1,5 +1,6 @@
  # atomic_cache Gem
- [![Build Status](https://travis-ci.org/Ibotta/atomic_cache.svg?branch=master)](https://travis-ci.org/Ibotta/atomic_cache)
+ [![Gem Version](https://badge.fury.io/rb/atomic_cache.svg)](https://badge.fury.io/rb/atomic_cache)
+ [![Build Status](https://travis-ci.com/Ibotta/atomic_cache.svg?branch=main)](https://travis-ci.com/Ibotta/atomic_cache)
  [![Test Coverage](https://api.codeclimate.com/v1/badges/790faad5866d2a00ca6c/test_coverage)](https://codeclimate.com/github/Ibotta/atomic_cache/test_coverage)
 
  ## User Documentation
@@ -27,7 +28,7 @@ class Foo < ActiveRecord::Base
 
  def active_foos(ids)
  keyspace = cache_keyspace(:activeids, ids)
- AtomicCache.fetch(keyspace, expires_in: 5.minutes) do
+ atomic_cache.fetch(keyspace, expires_in: 5.minutes) do
  Foo.active.where(id: ids.uniq)
  end
 
@@ -42,8 +43,15 @@ For further details and examples see [Usage & Testing](docs/USAGE.md)
 
  After checking out the repo, run `bin/setup` to install dependencies. Then, run `rake spec` to run the tests. You can also run `bin/console` for an interactive prompt that will allow you to experiment.
 
- To install this gem onto your local machine, run `bundle exec rake install`. To release a new version, update the version number in `version.rb`, and then run `bundle exec rake release`, which will create a git tag for the version, push git commits and tags, and push the `.gem` file to [rubygems.org](https://rubygems.org).
-
  ## Contributing
 
  Bug reports and pull requests are welcome on GitHub at https://github.com/ibotta/atomic_cache
+
+ ## Releasing
+
+ Releases are automatically handled via the Travis CI build. When a version greater than
+ the version published on rubygems.org is pushed to the `main` branch, Travis will:
+
+ - re-generate the CHANGELOG file
+ - tag the release with GitHub
+ - release to rubygems.org
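The `active_foos` hunk above switches the README example from the `AtomicCache` constant to the `atomic_cache` helper provided by the gem's cache concern. A minimal sketch of the surrounding model from that example, assuming the concern is `AtomicCache::GlobalLMTCacheConcern` and that `Foo` defines an `active` scope (both assumptions; only the method body comes from the README excerpt):

```ruby
# Sketch reconstructed around the README excerpt above; `Foo`, the `active`
# scope, and the included concern name are illustrative assumptions.
class Foo < ActiveRecord::Base
  include AtomicCache::GlobalLMTCacheConcern

  scope :active, -> { where(active: true) }

  def active_foos(ids)
    keyspace = cache_keyspace(:activeids, ids)
    atomic_cache.fetch(keyspace, expires_in: 5.minutes) do
      Foo.active.where(id: ids.uniq)
    end
  end
end
```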
@@ -1,7 +1,4 @@
  ## Gem Installation
-
- You will need to ensure you have the correct deploy credentials
-
  Add this line to your application's Gemfile:
 
  ```ruby
@@ -22,18 +19,24 @@ require 'datadog/statsd'
  require 'atomic_cache'
 
  AtomicCache::DefaultConfig.configure do |config|
- config.logger = Rails.logger
- config.metrics = Datadog::Statsd.new('localhost', 8125, namespace: 'cache.atomic')
- config.namespace = 'atom'
+ config.logger = Rails.logger
+ config.metrics = Datadog::Statsd.new('localhost', 8125, namespace: 'cache.atomic')
+
+ # note: these values can also be set in an env file for env-specific settings
+ config.namespace = 'atom'
+ config.cache_storage = AtomicCache::Storage::SharedMemory.new
+ config.key_storage = AtomicCache::Storage::SharedMemory.new
  end
  ```
 
+ Note that `Datadog::Statsd` is not _required_. Adding it, however, will enable metrics support.
+
  #### Required
  * `cache_storage` - Storage adapter for cache (see below)
  * `key_storage` - Storage adapter for key manager (see below)
 
  #### Optional
- * `default_options` - Default options for every fetch call. See [options](TODO: LINK).
+ * `default_options` - Default options for every fetch call. See [fetch options](/Ibotta/atomic_cache/blob/main/docs/USAGE.md#fetch).
  * `logger` - Logger instance. Used for debug and warn logs. Defaults to nil.
  * `timestamp_formatter` - Proc to format last modified time for storage. Defaults to timestamp (`Time.to_i`)
  * `metrics` - Metrics instance. Defaults to nil.
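Since `default_options` is only described in prose above, here is a rough sketch of how it could be set alongside the required adapters; the specific option values are illustrative, not taken from the published docs:

```ruby
# Illustrative values only; `default_options` is documented above as the
# defaults applied to every fetch call.
AtomicCache::DefaultConfig.configure do |config|
  config.cache_storage = AtomicCache::Storage::SharedMemory.new
  config.key_storage   = AtomicCache::Storage::SharedMemory.new
  config.namespace     = 'atom'

  config.default_options = { expires_in: 5.minutes, max_retries: 3 }
end
```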
data/docs/USAGE.md CHANGED
@@ -11,10 +11,10 @@ expire_cache(Time.now - 100) # an optional time can be given
  The concern makes a `last_modified_time` method available both on the class and on the instance.
 
  ### Fetch
- The concern makes a `AtomicCache` object available both on the class and on the instance.
+ The concern makes a `atomic_cache` object available both on the class and on the instance.
 
  ```ruby
- AtomicCache.fetch(options) do
+ atomic_cache.fetch(options) do
  # generate block
  end
  ```
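Beyond `fetch`, the hunk header and prose above also mention the `expire_cache` and `last_modified_time` helpers. A minimal usage sketch, assuming a class named `Foo` that includes the gem's cache concern:

```ruby
# Illustrative only; `Foo` is a placeholder for any class including the concern.
Foo.expire_cache                   # expire the cache as of now
Foo.expire_cache(Time.now - 100)   # an optional time can be given
Foo.last_modified_time             # available on the class ...
Foo.new.last_modified_time         # ... and on the instance
```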
@@ -39,7 +39,7 @@ The danger with `quick_retry_ms` is that when enabled it applies a delay to all
 
  `quick_retry_ms` is most effective for caches that are quick to generate but whose values are slow to change. `quick_retry_ms` is least effective for caches that are slow to update but quick to change.
 
- ![quick_retry_ms graph](img/quick_retry_ms_graph.png)
+ ![quick_retry_ms graph](https://github.com/Ibotta/atomic_cache/raw/ca473f28e179da8c24f638eeeeb48750bc8cbe64/docs/img/quick_retry_graph.png)
 
  #### `max_retries` & `backoff_duration_ms`
  _`max_retries` defaults to 5._
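A sketch of how these retry options are passed at fetch time; the option names follow the documentation above, and the values shown are illustrative (they happen to match the documented defaults):

```ruby
# Illustrative values only; `keyspace` is assumed to come from cache_keyspace.
atomic_cache.fetch(keyspace,
  quick_retry_ms: 50,        # brief re-check before falling back while another process generates
  max_retries: 5,            # how many times to poll for the other process's value
  backoff_duration_ms: 50    # wait between polls, in ms
) do
  # generate block
end
```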
@@ -37,8 +37,7 @@ module AtomicCache
  # @option options [Numeric] :max_retries (5) Max times to rety in waiting case
  # @option options [Numeric] :backoff_duration_ms (50) Duration in ms to wait between retries
  # @yield Generates a new value when cache is expired
- def fetch(keyspace, options=nil)
- options ||= {}
+ def fetch(keyspace, options={})
  key = @timestamp_manager.current_key(keyspace)
  tags = ["cache_keyspace:#{keyspace.root}"]
 
@@ -15,7 +15,7 @@ module AtomicCache
 
  class_methods do
 
- def AtomicCache
+ def atomic_cache
  init_atomic_cache
  @atomic_cache
  end
@@ -91,8 +91,8 @@ module AtomicCache
  end
  end
 
- def AtomicCache
- self.class.AtomicCache
+ def atomic_cache
+ self.class.atomic_cache
  end
 
  def cache_keyspace(ns)
@@ -100,7 +100,7 @@ module AtomicCache
  end
 
  def expire_cache(at=Time.now)
- self.class.expire_cache(ns)
+ self.class.expire_cache(at)
  end
 
  def last_modified_time
@@ -20,11 +20,11 @@ module AtomicCache
  # @param separator [String] character or string to separate keyspace segments
  # @param timestamp_formatter [Proc] function to turn Time -> String
  def initialize(namespace:, root: nil, separator: nil, timestamp_formatter: nil)
+ @timestamp_formatter = timestamp_formatter || DefaultConfig.instance.timestamp_formatter
+ @separator = separator || DefaultConfig.instance.separator
  @namespace = []
  @namespace = normalize_segments(namespace) if namespace.present?
- @separator = separator || DefaultConfig.instance.separator
- @timestamp_formatter = timestamp_formatter || DefaultConfig.instance.timestamp_formatter
- @root = root || namespace.last
+ @root = root || @namespace.last
  end
 
  # Create a new Keyspace, extending the namespace with the given segments and
@@ -70,7 +70,7 @@ module AtomicCache
  def normalize_segments(segments)
  if segments.is_a? Array
  segments.map { |seg| expand_segment(seg) }
- elsif sgs.nil?
+ elsif segments.nil?
  []
  else
  [expand_segment(segments)]
@@ -54,7 +54,8 @@ module AtomicCache
  # @param keyspace [AtomicCache::Keyspace] keyspace to lock
  # @param ttl [Numeric] the duration in ms to lock (auto expires after duration is up)
  # @param options [Hash] options to pass to the storage adapter
- def lock(keyspace, ttl, options=nil)
+ def lock(keyspace, ttl, options={})
+ # returns false if the key already exists
  @storage.add(keyspace.lock_key, LOCK_VALUE, ttl, options)
  end
 
@@ -20,8 +20,8 @@ module AtomicCache
  @dalli_client = dalli_client
  end
 
- def add(key, new_value, ttl, user_options=nil)
- opts = user_options&.clone || {}
+ def add(key, new_value, ttl, user_options={})
+ opts = user_options.clone
  opts[:raw] = true
 
  # dalli expects time in seconds
@@ -31,13 +31,11 @@ module AtomicCache
  response.start_with?(ADD_SUCCESS)
  end
 
- def read(key, user_options=nil)
- user_options ||= {}
+ def read(key, user_options={})
  @dalli_client.read(key, user_options)
  end
 
- def set(key, value, user_options=nil)
- user_options ||= {}
+ def set(key, value, user_options={})
  @dalli_client.set(key, value, user_options)
  end
 
@@ -21,14 +21,13 @@ module AtomicCache
  @store
  end
 
- def store_op(key, user_options=nil)
+ def store_op(key, user_options={})
  if !key.present?
  desc = if key.nil? then 'Nil' else 'Empty' end
  raise ArgumentError.new("#{desc} key given for storage operation") unless key.present?
  end
 
  normalized_key = key.to_sym
- user_options ||= {}
  yield(normalized_key, user_options)
  end
 
@@ -12,35 +12,36 @@ module AtomicCache
  def store; raise NotImplementedError end
 
  # @abstract implement performing an operation on the store
- def store_op(key, user_options=nil); raise NotImplementedError end
+ def store_op(key, user_options={}); raise NotImplementedError end
 
- def add(raw_key, new_value, ttl, user_options=nil)
+ def add(raw_key, new_value, ttl, user_options={})
  store_op(raw_key, user_options) do |key, options|
  return false if store.has_key?(key)
- write(key, new_value, ttl)
+ write(key, new_value, ttl, user_options)
  end
  end
 
- def read(raw_key, user_options=nil)
+ def read(raw_key, user_options={})
  store_op(raw_key, user_options) do |key, options|
  entry = store[key]
  return nil unless entry.present?
 
- return entry[:value] if entry[:ttl].nil? or entry[:ttl] == false
+ unmarshaled = unmarshal(entry[:value], user_options)
+ return unmarshaled if entry[:ttl].nil? or entry[:ttl] == false
 
  life = Time.now - entry[:written_at]
  if (life >= entry[:ttl])
  store.delete(key)
  nil
  else
- entry[:value]
+ unmarshaled
  end
  end
  end
 
- def set(raw_key, new_value, user_options=nil)
+ def set(raw_key, new_value, user_options={})
  store_op(raw_key, user_options) do |key, options|
- write(key, new_value, options[:expires_in])
+ write(key, new_value, options[:expires_in], user_options)
  end
  end
 
@@ -51,12 +52,11 @@ module AtomicCache
  end
  end
 
- def write(key, value, ttl=nil)
- stored_value = value.to_s
- stored_value = nil if value.nil?
+ protected
 
+ def write(key, value, ttl=nil, user_options)
  store[key] = {
- value: stored_value,
+ value: marshal(value, user_options),
  ttl: ttl || false,
  written_at: Time.now
  }
@@ -26,10 +26,8 @@ module AtomicCache
  STORE
  end
 
- def store_op(key, user_options=nil)
+ def store_op(key, user_options={})
  normalized_key = key.to_sym
- user_options ||= {}
-
  SEMAPHORE.synchronize do
  yield(normalized_key, user_options)
  end
@@ -26,6 +26,18 @@ module AtomicCache
  # returns true if it succeeds; false otherwise
  def delete(key, user_options); raise NotImplementedError end
 
+ protected
+
+ def marshal(value, user_options={})
+ return value if user_options[:raw]
+ Marshal.dump(value)
+ end
+
+ def unmarshal(value, user_options={})
+ return value if user_options[:raw]
+ Marshal.load(value)
+ end
+
 
  end
  end
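The hunk above adds the `marshal`/`unmarshal` helpers that the memory store now routes values through, with a `:raw` option to bypass serialization. A rough sketch of the resulting behavior, using the `InstanceMemory` adapter exercised by the specs later in this diff; the keys and values are illustrative:

```ruby
require 'atomic_cache'

store = AtomicCache::Storage::InstanceMemory.new

# Values are Marshal.dump'd on write and Marshal.load'd on read by default,
# so non-string objects round-trip intact.
store.set('user:1', { name: 'Ada' })
store.read('user:1')              # => { name: "Ada" }

# Passing `raw: true` skips marshaling in both directions.
store.set('counter', '42', raw: true)
store.read('counter', raw: true)  # => "42"
```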
@@ -1,5 +1,5 @@
  # frozen_string_literal: true
 
  module AtomicCache
- VERSION = "0.1.0.rc1"
+ VERSION = "0.2.1.rc2"
  end
@@ -0,0 +1,213 @@
+ # frozen_string_literal: true
+
+ require 'spec_helper'
+
+ describe 'AtomicCacheClient' do
+ subject { AtomicCache::AtomicCacheClient.new(storage: cache_storage, timestamp_manager: timestamp_manager) }
+
+ let(:formatter) { Proc.new { |time| time.to_i } }
+ let(:keyspace) { AtomicCache::Keyspace.new(namespace: ['foo', 'bar'], root: 'bar') }
+ let(:key_storage) { AtomicCache::Storage::InstanceMemory.new }
+ let(:cache_storage) { AtomicCache::Storage::InstanceMemory.new }
+
+ let(:timestamp_manager) do
+ AtomicCache::LastModTimeKeyManager.new(
+ keyspace: keyspace,
+ storage: key_storage,
+ timestamp_formatter: formatter,
+ )
+ end
+
+ before(:each) do
+ AtomicCache::DefaultConfig.reset
+ end
+
+ describe '#fetch' do
+
+ context 'when the value is present' do
+ before(:each) do
+ timestamp_manager.last_modified_time = 1420090000
+ end
+
+ it 'returns the cached value' do
+ cache_storage.set(timestamp_manager.current_key(keyspace), 'value')
+ expect(subject.fetch(keyspace)).to eq('value')
+ end
+
+ it 'returns 0 as a cached value' do
+ cache_storage.set(timestamp_manager.current_key(keyspace), '0')
+ expect(subject.fetch(keyspace)).to eq('0')
+ end
+
+ it 'returns empty strings as a cached value' do
+ cache_storage.set(timestamp_manager.current_key(keyspace), '')
+ expect(subject.fetch(keyspace)).to eq('')
+ end
+ end
+
+ context 'when the value is NOT present' do
+ context 'and when a block is given' do
+ context 'and when another thread is NOT generating,' do
+
+ it 'returns the new value' do
+ result = subject.fetch(keyspace) { 'value from block' }
+ expect(result).to eq('value from block')
+ end
+
+ it 'returns the new value when it is an empty string' do
+ result = subject.fetch(keyspace) { '' }
+ expect(result).to eq('')
+ end
+
+ it 'does not store the value if the generator returns nil' do
+ # create a fallback value to make sure we don't use the value from the block
+ key_storage.set(keyspace.last_known_key_key, 'foo_value')
+ cache_storage.set('foo', 'last known value')
+
+ timestamp_manager.promote(keyspace, last_known_key: 'foo', timestamp: Time.now)
+ subject.fetch(keyspace) { nil }
+ expect(subject.fetch(keyspace)).to eq('last known value')
+ end
+
+ it 'unlocks if the generate block returns nil' do
+ subject.fetch(keyspace) { nil }
+ expect(key_storage.store).to_not have_key(:'foo:bar:lock')
+ end
+
+ it 'stores the new value' do
+ subject.fetch(keyspace) { 'value from block' }
+ expect(subject.fetch(keyspace)).to eq('value from block')
+ end
+
+ it 'stores the updated last mod time' do
+ time = Time.local(2018, 1, 1, 15, 30, 0)
+ timestamp_manager.promote(keyspace, timestamp: (time - 10).to_i, last_known_key: 'lkk')
+
+ Timecop.freeze(time) do
+ subject.fetch(keyspace) { 'value from block' }
+ lmt = key_storage.read(timestamp_manager.last_modified_time_key)
+ expect(lmt).to eq(time.to_i)
+ end
+ end
+
+ it 'stores the current key as the last known key' do
+ time = Time.local(2018, 1, 1, 15, 30, 0)
+ timestamp_manager.promote(keyspace, last_known_key: "test:#{(time - 10).to_i}", timestamp: time.to_i)
+
+ Timecop.freeze(time) do
+ subject.fetch(keyspace) { 'value from block' }
+ lkk = key_storage.read(keyspace.last_known_key_key)
+ new_key = timestamp_manager.next_key(keyspace, time)
+ expect(lkk).to eq(new_key)
+ end
+ end
+
+ it 'sets a TTL on the build key when a TTL is not explicitly given' do
+ subject.fetch(keyspace) { 'value from block' }
+ lock_entry = key_storage.store[keyspace.lock_key.to_sym]
+ expect(lock_entry[:ttl]).to eq(30)
+ end
+
+ it 'sets a TTL on the build key when a TTL is given at fetch time' do
+ subject.fetch(keyspace, generate_ttl_ms: 1100) { 'value from block' }
+ lock_entry = key_storage.store[keyspace.lock_key.to_sym]
+ expect(lock_entry[:ttl]).to eq(1.1)
+ end
+
+ it 'sets a TTL on the build key when a value less than a second is given' do
+ subject.fetch(keyspace, generate_ttl_ms: 500) { 'value from block' }
+ lock_entry = key_storage.store[keyspace.lock_key.to_sym]
+ expect(lock_entry[:ttl]).to eq(0.5)
+ end
+
+ it 'sets a TTL on the build key when there is a TTL in the default options' do
+ subject = AtomicCache::AtomicCacheClient.new(
+ storage: cache_storage,
+ timestamp_manager: timestamp_manager,
+ default_options: { generate_ttl_ms: 600 }
+ )
+
+ subject.fetch(keyspace) { 'value from block' }
+ lock_entry = key_storage.store[keyspace.lock_key.to_sym]
+ expect(lock_entry[:ttl]).to eq(0.6)
+ end
+ end
+
+ context 'and when another thread is generating the new value,' do
+ before(:each) do
+ timestamp_manager.lock(keyspace, 100)
+ end
+
+ it 'waits for a short duration to see if the other thread generated the value' do
+ timestamp_manager.promote(keyspace, last_known_key: 'lkk', timestamp: 1420090000)
+ key_storage.set('lkk', 'old:value')
+ new_value = 'value from another thread'
+ allow(cache_storage).to receive(:read)
+ .with(timestamp_manager.current_key(keyspace), anything)
+ .and_return(nil, new_value)
+
+ expect(subject.fetch(keyspace, quick_retry_ms: 5) { 'value' }).to eq(new_value)
+ end
+
+ context 'when the last known value is present' do
+ it 'returns the last known value' do
+ timestamp_manager.promote(keyspace, last_known_key: 'lkk', timestamp: 1420090000)
+ cache_storage.set('lkk', 'old value')
+
+ result = subject.fetch(keyspace, backoff_duration_ms: 5) { 'value from generate' }
+ expect(result).to eq('old value')
+ end
+ end
+
+ context 'when the last known value is NOT present' do
+ it 'waits for another thread to generate the new value' do
+ key_storage.set(timestamp_manager.last_modified_time_key, '1420090000')
+ new_value = 'value from another thread'
+
+ # multiple returned values here are faking what it would look like to
+ # the client if another thread suddenly wrote a value into the cache
+ allow(cache_storage).to receive(:read)
+ .with(timestamp_manager.current_key(keyspace), anything)
+ .and_return(nil, nil, nil, nil, new_value)
+
+ result = subject.fetch(keyspace, backoff_duration_ms: 5) { 'value from generate' }
+ expect(result).to eq(new_value)
+ end
+
+ it 'stops waiting when the max retry count is reached' do
+ timestamp_manager.promote(keyspace, last_known_key: 'asdf', timestamp: 1420090000)
+ result = subject.fetch(keyspace, backoff_duration_ms: 5) { 'value from generate' }
+ expect(result).to eq(nil)
+ end
+
+ it 'deletes the last known key' do
+ key_storage.set(keyspace.last_known_key_key, :oldkey)
+ cache_storage.set(:oldkey, nil)
+ subject.fetch(keyspace, backoff_duration_ms: 5) { 'value from generate' }
+ expect(cache_storage.store).to_not have_key(:oldkey)
+ end
+ end
+ end
+ end
+
+ context 'and when a block is NOT given' do
+ it 'waits for a short duration to see if the other thread generated the value' do
+ timestamp_manager.promote(keyspace, last_known_key: 'asdf', timestamp: 1420090000)
+ new_value = 'value from another thread'
+ allow(cache_storage).to receive(:read)
+ .with(timestamp_manager.current_key(keyspace), anything)
+ .and_return(nil, new_value)
+
+ result = subject.fetch(keyspace, quick_retry_ms: 50)
+ expect(result).to eq(new_value)
+ end
+
+ it 'returns nil if nothing is present' do
+ expect(subject.fetch(keyspace)).to eq(nil)
+ end
+ end
+ end
+
+ end
+
+ end