atomic_cache 0.1.0.rc1
- checksums.yaml +7 -0
- data/.gitignore +51 -0
- data/.ruby_version +1 -0
- data/.travis.yml +26 -0
- data/CODE_OF_CONDUCT.md +46 -0
- data/Gemfile +6 -0
- data/LICENSE +201 -0
- data/README.md +49 -0
- data/Rakefile +6 -0
- data/atomic_cache.gemspec +36 -0
- data/bin/console +14 -0
- data/bin/setup +8 -0
- data/docs/ARCH.md +34 -0
- data/docs/INTERFACES.md +45 -0
- data/docs/MODEL_SETUP.md +31 -0
- data/docs/PROJECT_SETUP.md +68 -0
- data/docs/USAGE.md +106 -0
- data/docs/img/quick_retry_graph.png +0 -0
- data/lib/atomic_cache.rb +11 -0
- data/lib/atomic_cache/atomic_cache_client.rb +197 -0
- data/lib/atomic_cache/concerns/global_lmt_cache_concern.rb +111 -0
- data/lib/atomic_cache/default_config.rb +62 -0
- data/lib/atomic_cache/key/keyspace.rb +98 -0
- data/lib/atomic_cache/key/last_mod_time_key_manager.rb +95 -0
- data/lib/atomic_cache/storage/dalli.rb +46 -0
- data/lib/atomic_cache/storage/instance_memory.rb +37 -0
- data/lib/atomic_cache/storage/memory.rb +67 -0
- data/lib/atomic_cache/storage/shared_memory.rb +40 -0
- data/lib/atomic_cache/storage/store.rb +31 -0
- data/lib/atomic_cache/version.rb +5 -0
- metadata +185 -0
data/bin/setup
ADDED
data/docs/ARCH.md
ADDED
@@ -0,0 +1,34 @@
## Overview
The problem of handling the scope of timestamps for multiple caches within a single context is more nuanced than it first appears. The most common context is a model class; that is the example used throughout this documentation, but this gem could support other contexts as well.

Any single model class may have multiple caches associated with it, for example a cache of all active or inactive instances of the model. When any instance of that class changes, which caches become invalidated? A simple solution is to keep one last modified time that is in scope for all instances of the class, where a change to any instance updates the last modified time. Likewise, a change to the last modified time of the model must invalidate all collection caches. Thus, the last modified time has a broader scope than any individual cache. In addition, what is often viewed as a single key or an individual cache is actually a collection of similar keys oriented around storing one logical value. This is because the cache client has a fall-through stack: it may need to look into several cache keys before finding the best value, so it needs to understand the namespace (the collection of sub-keys), not just a single string.

This gem handles this by separating management of the last modified time into a "timestamp manager" and encapsulating all the sub-keys for a given cache in a "keyspace". Because the timestamp manager maintains a timestamp whose scope is larger than any single logical value being stored, it stores this time in a parent keyspace. Additional caches for that model are then child keyspaces, which namespace themselves relative to the parent and their specific concern.

To keep things simple, when using a concern there is a one-to-one correlation between a cache client instance and a timestamp manager. In the common case this removes the need to know about these individual parts and lets users get on with fetching and writing caches. At runtime the cache client only requires the keyspace in order to operate, and automatically uses the last modified time from its timestamp manager.

#### Terms
* *Keyspace* - Responsible for knowing the namespace and generating all the sub-keys for a logical cache location
* *TimestampManager* - Responsible for managing and storing the last modified time. Represents a logical scope of cache invalidation.
* *CacheClient* - The distributed lock implementation. Responsible for fetching the best value for a keyspace.
* *StorageAdapter* - Interface to a storage facility

#### Storage Locations
The gem stores data in two locations: a key store and a cache store.

##### Stored in the Atomic Cache Client's storage:
* cached value

##### Stored in the Key Manager's storage:
* atomic lock
* last known key
* last modified time

### Keyspace Keys
Example keys assume use of the concern. `id` in this context is whatever is given when `cache_keyspace` is run.

* *last modified time* - `<namespace>:<class name>:<version>:lmt`
* *value* - `<namespace>:<class name>:<version>:<id>:<timestamp>`
* *last known key* - `<namespace>:<class name>:<version>:<id>:lkk`
* *lock* - `<namespace>:<class name>:<version>:<id>:lock`
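For illustration, assuming a global namespace of `atom`, a class named `Foo`, an explicit `cache_version` of `2`, a keyspace id of `status`, and a last modified timestamp of `1640995200` (all hypothetical values, and ignoring any casing the key manager may apply), the templates above would render roughly as:

* *last modified time* - `atom:Foo:2:lmt`
* *value* - `atom:Foo:2:status:1640995200`
* *last known key* - `atom:Foo:2:status:lkk`
* *lock* - `atom:Foo:2:status:lock`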
data/docs/INTERFACES.md
ADDED
@@ -0,0 +1,45 @@
## Storage Adapter
Any options passed in by the user at fetch time will be passed through to the storage adapter.

```ruby
class StorageAdapter
  # (String, Object, Integer, Hash) -> Boolean
  # ttl is in millis
  # operation must be atomic
  # returns true when the key doesn't exist and was written successfully
  # returns false in all other cases
  def add(key, new_value, ttl, user_options); end

  # (String, Hash) -> String
  # returns the `value` at `key`
  def read(key, user_options); end

  # (String, Object, Hash) -> Boolean
  # returns true if it succeeds; false otherwise
  def set(key, new_value, user_options); end

  # (String, Hash) -> Boolean
  # returns true if it succeeds; false otherwise
  def delete(key, user_options); end
end
```
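The contract can be made concrete with a hypothetical, minimal in-memory adapter. This is a sketch only; the class name and internals are illustrative, and the gem's bundled `InstanceMemory`/`SharedMemory` adapters are the real implementations.

```ruby
class HashStorageAdapter
  def initialize
    @store = {}         # key => [value, expires_at]
    @mutex = Mutex.new  # `add` must be atomic
  end

  # returns true only when the key was absent and was written
  def add(key, new_value, ttl, user_options = {})
    @mutex.synchronize do
      return false unless read(key).nil?
      @store[key] = [new_value, Time.now + (ttl / 1000.0)] # ttl is in millis
      true
    end
  end

  # returns the value at `key`, or nil when absent or expired
  def read(key, user_options = {})
    value, expires_at = @store[key]
    return nil if value.nil? || (expires_at && Time.now > expires_at)
    value
  end

  def set(key, new_value, user_options = {})
    @store[key] = [new_value, nil]
    true
  end

  def delete(key, user_options = {})
    @store.delete(key)
    true
  end
end
```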
## Metrics
```ruby
class Metrics
  # (String, Hash) -> Nil
  def increment(key, options); end

  # (String, Hash, Block) -> Object
  # must return the block's result; the client returns fetched values through it
  def time(key, options, &block); end
end
```
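Note that the cache client returns fetched values through `time`, so an implementation must return the block's result (as `Datadog::Statsd#time` does). A hypothetical minimal implementation, handy for tests:

```ruby
class CountingMetrics
  attr_reader :counts

  def initialize
    @counts = Hash.new(0)
  end

  # count how many times each metric key was incremented
  def increment(key, options = {})
    @counts[key] += 1
    nil
  end

  # record elapsed milliseconds and pass the block's result back to the caller
  def time(key, options = {}, &block)
    started = Time.now
    result = block.call
    @counts["#{key}.ms"] += ((Time.now - started) * 1000).round
    result
  end
end
```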
## Logger
```ruby
class Logger
  # (Object) -> Nil
  def warn(msg); end
  def info(msg); end
  def debug(msg); end
end
```
data/docs/MODEL_SETUP.md
ADDED
@@ -0,0 +1,31 @@
## Model Setup
Include the `GlobalLMTCacheConcern`.

```ruby
class Foo < ActiveRecord::Base
  include AtomicCache::GlobalLMTCacheConcern
end
```

### cache_class
By default the cache identifier for a class is set to the name of the class (i.e. `self.to_s`). In some cases it makes sense to set a custom value for the cache identifier. When a custom cache identifier is set, it's important that the identifier remain unique across the project.

```ruby
class SuperDescriptiveDomainModelAbstractFactoryImplManager < ActiveRecord::Base
  include AtomicCache::GlobalLMTCacheConcern
  cache_class('sddmafim')
end
```

#### ★ Best Practice ★
Generally it should only be necessary to explicitly set a `cache_class` when the class name is extremely long and causes the max key length to be hit. In such a case the `cache_class` can be set to an abbreviation of the class name.

### cache_version
When a deploy includes a code change that is incompatible with already-written cached values, a cache version can be set. The version further sub-divides the cache namespace, preventing old values from being read. When the version is `nil` (the default), no version is added to the cache key.

```ruby
class Foo < ActiveRecord::Base
  include AtomicCache::GlobalLMTCacheConcern
  cache_version(5)
end
```
data/docs/PROJECT_SETUP.md
ADDED
@@ -0,0 +1,68 @@
## Gem Installation

You will need to ensure you have the correct deploy credentials.

Add this line to your application's Gemfile:

```ruby
gem 'atomic_cache'
```

And then execute:

    $ bundle

## Project Setup
`AtomicCache::DefaultConfig` is a singleton which allows global configuration.

#### Rails Initializer Example
```ruby
# config/initializers/cache.rb
require 'datadog/statsd'
require 'atomic_cache'

AtomicCache::DefaultConfig.configure do |config|
  config.logger = Rails.logger
  config.metrics = Datadog::Statsd.new('localhost', 8125, namespace: 'cache.atomic')
  config.namespace = 'atom'
end
```

#### Required
* `cache_storage` - Storage adapter for the cache (see below)
* `key_storage` - Storage adapter for the key manager (see below)

#### Optional
* `default_options` - Default options for every fetch call. See [options](TODO: LINK).
* `logger` - Logger instance. Used for debug and warn logs. Defaults to nil.
* `timestamp_formatter` - Proc to format the last modified time for storage. Defaults to a Unix timestamp (`Time.to_i`)
* `metrics` - Metrics instance. Defaults to nil.
* `namespace` - Global namespace that prefixes all cache keys. Defaults to nil.

#### ★ Best Practice ★
Keep the global namespace short. Memcached, for example, limits key length to 250 characters.

## Storage Adapters

### InstanceMemory & SharedMemory
Both of these storage adapters provide a cache storage implementation that is limited to a single Ruby instance. The difference is that `InstanceMemory` maintains a private store that is only visible when interacting with that instance of the adapter, whereas `SharedMemory` creates a class-scoped store, so all instances of the storage adapter read and write from the same store. `InstanceMemory` is great for integration testing because it isolates visibility of the store; `SharedMemory` is great for local development and for integration tests where multiple components reading and writing must be represented.

Neither memory storage implementation should be considered "production ready". Both respect TTL, but only evaluate it on read: data is removed from the store only when a read finds its TTL expired.

##### Example
```ruby
AtomicCache::DefaultConfig.configure do |config|
  config.key_storage = AtomicCache::Storage::InstanceMemory.new
end
```

### Dalli
The `Dalli` storage adapter provides a thin wrapper around the Dalli memcached client.

##### Example
```ruby
dc = Dalli::Client.new('localhost:11211', options)
AtomicCache::DefaultConfig.configure do |config|
  config.key_storage = AtomicCache::Storage::Dalli.new(dc)
end
```
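Putting the pieces together, a fuller initializer might look like the following sketch. The memcached address and namespace are placeholders, and both required storage adapters are shown sharing one Dalli client.

```ruby
# config/initializers/cache.rb
require 'dalli'
require 'atomic_cache'

dc = Dalli::Client.new('localhost:11211')

AtomicCache::DefaultConfig.configure do |config|
  config.namespace = 'atom'
  config.logger    = Rails.logger

  # both storage adapters are required; they may share a client
  config.key_storage   = AtomicCache::Storage::Dalli.new(dc)
  config.cache_storage = AtomicCache::Storage::Dalli.new(dc)
end
```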
data/docs/USAGE.md
ADDED
@@ -0,0 +1,106 @@
## Usage

### Invalidating the Cache on Change
The concern makes the `expire_cache` method available both on the class and on the instance.
```ruby
expire_cache
expire_cache(Time.now - 100) # an optional time can be given
```

### Getting Last Modified Time
The concern makes a `last_modified_time` method available both on the class and on the instance.

### Fetch
The concern makes an `AtomicCache` object available both on the class and on the instance.

```ruby
AtomicCache.fetch(options) do
  # generate block
end
```

In addition to the options below, any other options given (e.g. `expires_in`, `cache_nils`) are passed through to the underlying storage adapter. This allows storage-specific options to be passed through (reference: [Dalli config](https://github.com/petergoldstein/dalli#configuration)).
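For example, a hypothetical fetch that mixes client options with a storage-specific option (`expires_in` is passed through to Dalli here; the model and query are illustrative):

```ruby
active_foos = AtomicCache.fetch(generate_ttl_ms: 500, quick_retry_ms: 50, expires_in: 15.minutes) do
  Foo.where(active: true).to_a
end
```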
#### `generate_ttl_ms`
_Defaults to 30 seconds._

When the cache client finds a cache empty and sees that no other process is actively generating a value, it establishes a lock and attempts to generate the value itself. However, if that process dies, or the instance it runs on goes down, then in addition to the new cache value never being written, the lock it established would remain active, preventing other processes from generating a new value. To prevent this, the lock *always* carries a TTL, forcing the storage mechanism to remove it automatically and ruling out permanent locks. `generate_ttl_ms` is the duration of that TTL.

The ideal `generate_ttl_ms` is just slightly longer than the average generate block duration. If `generate_ttl_ms` is set too low, the lock might expire before a process has written its new value, and another process will then try to generate an identical value.

If metrics are enabled, the `<namespace>.generate.run` timer can be used to determine the min/max/average generate time for a particular cache, and `generate_ttl_ms` tuned from that.

#### `quick_retry_ms`
_`false` to disable. Defaults to false._

When another process is computing the new cache value and `quick_retry_ms` is set, the atomic client will check the new cache once more after the given duration (in milliseconds) before falling back to the last known value.

The danger with `quick_retry_ms` is that, when enabled, it delays all fall-through requests while only benefitting some customers. As the average generate block duration increases, the effectiveness of `quick_retry_ms` decreases, because it becomes less likely that a customer will get a fresh value. Consider the graph below: for example, a cache with an average generate duration of 200ms, configured with a `quick_retry_ms` of 50ms (red), will likely yield a fresh value for only 25% of customers.

`quick_retry_ms` is most effective for caches that are quick to generate but whose values are slow to change. It is least effective for caches that are slow to generate but whose values change quickly.

![quick_retry_ms graph](img/quick_retry_graph.png)

#### `max_retries` & `backoff_duration_ms`
_`max_retries` defaults to 5._
_`backoff_duration_ms` defaults to 50ms._

When neither the cached value nor the last known value is available, the client ends up polling for the new value, on the assumption that another process is generating it. It's possible that the other process went down or is for some reason unable to write the new value to the cache. If the client never stopped polling, it would steal process time from other requests, so `max_retries` limits how many times the client polls before giving up.

The client waits between polls. The duration it waits is roughly `backoff_duration_ms * retry_count`, plus a small random offset (1 to 15ms). The random offset staggers processes in the case where, after a deploy, many machines come online at close to the same time and all need the same cache.

`backoff_duration_ms` and `max_retries` should both be small values.

##### Example retry with durations
`max_retries` = 5
`backoff_duration_ms` = 50ms
Assumes the random offset is always 10ms
Total time spent polling: 800ms

* First retry - wait 60ms
* Second retry - wait 110ms
* Third retry - wait 160ms
* Fourth retry - wait 210ms
* Fifth retry - wait 260ms
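The schedule above can be reproduced in a few lines (assuming, as stated, a fixed 10ms random offset):

```ruby
backoff_duration_ms = 50
random_offset_ms = 10 # normally 1-15ms; fixed here for the example

waits = (1..5).map { |retry_count| backoff_duration_ms * retry_count + random_offset_ms }
waits     # => [60, 110, 160, 210, 260]
waits.sum # => 800, the total time spent polling in ms
```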
## Testing

### Integration Style Tests
`AtomicCache::Storage::InstanceMemory` or `AtomicCache::Storage::SharedMemory` can be used to make testing easier by enabling an integration style of testing: assert against what ended up in the cache instead of which methods were called on the cache client. Both storage adapters expose the following methods.

* `#reset` -- Clears all stored values
* `#store` -- Returns the underlying hash of stored values

All incoming keys are normalized to symbols. All values are stored with a `value`, `ttl`, and `written_at` property, as shown below.
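For example (a minimal sketch; the key and value are hypothetical, and the entry shape follows the list above):

```ruby
storage = AtomicCache::Storage::InstanceMemory.new
storage.set('some:key', 'some value', {})

entry = storage.store[:'some:key'] # keys are normalized to symbols
entry[:value]      # => "some value"
entry[:written_at] # => time of the write
```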
It's likely preferable to use an environments file to configure `key_storage` and `cache_storage` to always be an in-memory adapter when running in the test environment, instead of manually configuring the storage adapter per spec.
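A hypothetical shape for this in a Rails initializer (the conditional and the memcached address are illustrative):

```ruby
# config/initializers/cache.rb
AtomicCache::DefaultConfig.configure do |config|
  if Rails.env.test?
    # in-memory adapters keep tests hermetic
    config.key_storage   = AtomicCache::Storage::SharedMemory.new
    config.cache_storage = AtomicCache::Storage::SharedMemory.new
  else
    dc = Dalli::Client.new('localhost:11211')
    config.key_storage   = AtomicCache::Storage::Dalli.new(dc)
    config.cache_storage = AtomicCache::Storage::Dalli.new(dc)
  end
end
```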
#### ★ Testing Tip ★
If using `SharedMemory` for integration style tests, a global `before(:each)` can be configured in `spec_helper.rb`.

```ruby
# spec/spec_helper.rb
RSpec.configure do |config|

  # your other config

  config.before(:each) do
    AtomicCache::Storage::SharedMemory.reset
  end
end
```

## Metrics

If a metrics client is configured via the DefaultConfig, the following metrics will be published:

* `<namespace>.read.present` - Number of times a key was fetched and was present in the cache
* `<namespace>.read.not-present` - Number of times a key was fetched and was NOT present in the cache
* `<namespace>.generate.current-thread` - Number of times the value was not present in the cache and the current thread started generating a new value
* `<namespace>.generate.other-thread` - Number of times the value was not present in the cache but another thread was already generating the value
* `<namespace>.empty-cache-retry.present` - Number of times the value was not present, but the client checked again after a short duration and it was present
* `<namespace>.empty-cache-retry.not-present` - Number of times the value was not present, and when the client checked again after a short duration it was still NOT present
* `<namespace>.last-known-value.present` - Number of times the value was not present but the last known value was
* `<namespace>.last-known-value.not-present` - Number of times the value was not present and the last known value was not either
* `<namespace>.wait.run` - Timer of how long the client waits for another thread to generate a new value when neither the value nor the last known value is available
* `<namespace>.generate.run` - Timer of how long it takes to generate a new value
data/docs/img/quick_retry_graph.png
ADDED
Binary file
data/lib/atomic_cache.rb
ADDED
@@ -0,0 +1,11 @@
# frozen_string_literal: true
require_relative 'atomic_cache/version'

require_relative 'atomic_cache/default_config'
require_relative 'atomic_cache/atomic_cache_client'
require_relative 'atomic_cache/key/last_mod_time_key_manager'
require_relative 'atomic_cache/key/keyspace'
require_relative 'atomic_cache/concerns/global_lmt_cache_concern'
require_relative 'atomic_cache/storage/instance_memory'
require_relative 'atomic_cache/storage/shared_memory'
require_relative 'atomic_cache/storage/dalli'
data/lib/atomic_cache/atomic_cache_client.rb
ADDED
@@ -0,0 +1,197 @@
# frozen_string_literal: true

require 'active_support/core_ext/object'
require 'active_support/core_ext/hash'

module AtomicCache
  class AtomicCacheClient

    DEFAULT_QUICK_RETRY_MS = false
    DEFAULT_MAX_RETRIES = 5
    DEFAULT_GENERATE_TIME_MS = 30000 # 30 seconds
    BACKOFF_DURATION_MS = 50

    # @param storage [Object] Cache storage adapter
    # @param timestamp_manager [Object] Timestamp manager
    # @param default_options [Hash] Default fetch options
    # @param logger [Object] Logger
    # @param metrics [Object] Metrics client
    def initialize(storage: nil, timestamp_manager: nil, default_options: {}, logger: nil, metrics: nil)
      @default_options = (DefaultConfig.instance.default_options&.clone || {}).merge(default_options || {})
      @timestamp_manager = timestamp_manager
      @logger = logger || DefaultConfig.instance.logger
      @metrics = metrics || DefaultConfig.instance.metrics
      @storage = storage || DefaultConfig.instance.cache_storage

      raise ArgumentError.new("`timestamp_manager` required but none given") unless @timestamp_manager.present?
      raise ArgumentError.new("`storage` required but none given") unless @storage.present?
    end


    # Attempts to fetch the given keyspace, using an optional block to generate
    # a new value when the cache is expired
    #
    # @param keyspace [AtomicCache::Keyspace] the keyspace to fetch
    # @option options [Numeric] :generate_ttl_ms (30000) Max generate duration in ms
    # @option options [Numeric] :quick_retry_ms (false) Short duration to check back before using last known value
    # @option options [Numeric] :max_retries (5) Max times to retry in the waiting case
    # @option options [Numeric] :backoff_duration_ms (50) Duration in ms to wait between retries
    # @yield Generates a new value when cache is expired
    def fetch(keyspace, options=nil)
      options ||= {}
      key = @timestamp_manager.current_key(keyspace)
      tags = ["cache_keyspace:#{keyspace.root}"]

      # happy path: see if the value is there in the key we expect
      value = @storage.read(key, options) if key.present?
      if !value.nil?
        metrics(:increment, 'read.present', tags: tags)
        return value
      end

      metrics(:increment, 'read.not-present', tags: tags)
      log(:debug, "Cache key `#{key}` not present.")

      # try to generate a new value if another process isn't already
      if block_given?
        new_value = generate_and_store(keyspace, options, tags, &Proc.new)
        return new_value unless new_value.nil?
      end

      # quick check back to see if the other process has finished,
      # or fall back to the last known value
      value = quick_retry(keyspace, options, tags) || last_known_value(keyspace, options, tags)
      return value if value.present?

      # wait for the other process if a last known value isn't there
      if key.present?
        return time('wait.run', tags: tags) do
          wait_for_new_value(key, options, tags)
        end
      end

      # At this point, there's no key, value, last known key, or last known value.
      # A block wasn't given or couldn't create a non-nil value, making it
      # impossible to do anything else, so bail
      if !key.present?
        metrics(:increment, 'no-key.give-up')
        log(:warn, "Giving up fetching cache keyspace for root `#{keyspace.root}`. No key could be generated.")
      end
      nil
    end

    protected

    def generate_and_store(keyspace, options, tags)
      generate_ttl_ms = option(:generate_ttl_ms, options, DEFAULT_GENERATE_TIME_MS).to_f / 1000
      if @timestamp_manager.lock(keyspace, generate_ttl_ms, options)
        lmt = Time.now
        new_value = yield

        if new_value.nil?
          # let another thread try right away
          @timestamp_manager.unlock(keyspace)
          metrics(:increment, 'generate.nil', tags: tags)
          log(:warn, "Generator for #{keyspace.key} returned nil. Aborting new cache value.")
          return nil
        end

        new_key = @timestamp_manager.next_key(keyspace, lmt)
        @timestamp_manager.promote(keyspace, last_known_key: new_key, timestamp: lmt)
        @storage.set(new_key, new_value, options)

        metrics(:increment, 'generate.current-thread', tags: tags)
        log(:debug, "Generating new value for `#{new_key}`")

        return new_value
      end

      metrics(:increment, 'generate.other-thread', tags: tags)
      nil
    end

    def quick_retry(keyspace, options, tags)
      key = @timestamp_manager.current_key(keyspace)
      duration = option(:quick_retry_ms, options, DEFAULT_QUICK_RETRY_MS)

      if duration.present? and key.present?
        sleep(duration.to_f / 1000)
        value = @storage.read(key, options)

        if !value.nil?
          metrics(:increment, 'empty-cache-retry.present', tags: tags)
          return value
        end
        metrics(:increment, 'empty-cache-retry.not-present', tags: tags)
      end

      nil
    end

    def last_known_value(keyspace, options, tags)
      lkk = @timestamp_manager.last_known_key(keyspace)

      if lkk.present?
        lkv = @storage.read(lkk, options)
        # even if the last_known_key is present, the value at the
        # last known key may have expired
        if !lkv.nil?
          metrics(:increment, 'last-known-value.present', tags: tags)
          return lkv
        end

        # if the value of the last known key is nil, we can infer that it's
        # most likely expired, thus remove it so other processes don't waste
        # time trying to read it
        @storage.delete(lkk)
      end

      metrics(:increment, 'last-known-value.not-present', tags: tags)
      nil
    end

    def wait_for_new_value(key, options, tags)
      max_retries = option(:max_retries, options, DEFAULT_MAX_RETRIES)
      max_retries.times do |attempt|
        metrics_tags = tags.clone.push("attempt:#{attempt}")
        metrics(:increment, 'wait.attempt', tags: metrics_tags)

        # the duration is given a random element in order to stagger retries across many processes
        backoff_duration_ms = BACKOFF_DURATION_MS + rand(15)
        backoff_duration_ms = option(:backoff_duration_ms, options, backoff_duration_ms)
        sleep((backoff_duration_ms.to_f / 1000) * attempt)

        value = @storage.read(key, options)
        if !value.nil?
          metrics(:increment, 'wait.present', tags: metrics_tags)
          return value
        end
      end

      metrics(:increment, 'wait.give-up')
      log(:warn, "Giving up fetching cache key `#{key}`. Exceeded max retries (#{max_retries}).")
      nil
    end

    def option(key, options, default=nil)
      options[key] || @default_options[key] || default
    end

    def log(method, *args)
      @logger.send(method, *args) if @logger.present?
    end

    def metrics(method, *args)
      @metrics.send(method, *args) if @metrics.present?
    end

    def time(*args)
      if @metrics.present?
        @metrics.time(*args, &Proc.new)
      else
        yield
      end
    end
  end
end