solid_cache 0.5.3 → 0.7.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: 0e16bebed50154ce9a1657114a5fe1316d8137e7f5a0136fdb8858ece68f5351
-  data.tar.gz: 6a66887e0dac5e4b52fae796c482fcc54ebcd1402f145217cf3530adc47d005a
+  metadata.gz: 3bb8fa17755c13f25f2a42794d3be5843de423f484d36d7d4a6c38c0241862c7
+  data.tar.gz: d71f6c7aad61cb20f1b1f2a077550e63bdb47d37c33b4a498855d9d35427a153
 SHA512:
-  metadata.gz: 527da66c8b66d68ae31eafe5d3b6474f5801649119ed96848ec83a6990f3b59c0a35a706b20a33ba2ab99beebe3195575388b8f44e64308c4060e20b6fd9892f
-  data.tar.gz: 0d41d9ad62843d0815db7468b9475088f6f95d3122a0eaada4d16e15020eee38a5a4642dd1570f8bde0ee347f7ea54a38c5b6810429b316ad888a3fb641dadf2
+  metadata.gz: 716d333398fa3efa935668d918e92742df8f7c8056c7ee222a545167085f2fd3b871fb96d3a8d0f6961ea63f1c0e5ac69146c8469985afa3da9913d4f4b0de0f
+  data.tar.gz: f51f7265604cf7c22ff2cb1ba9d865c0210b3cc54f2e57a0503dca99be015d345feb03d6eecfe1ed8211eba7e049f7aa5c70d647985fa5444d7cc3a393e7c04d
data/README.md CHANGED
@@ -4,7 +4,7 @@
 
 Solid Cache is a database-backed Active Support cache store implementation.
 
-Using SQL databases backed by SSDs we can have caches that are much larger and cheaper than traditional memory only Redis or Memcached backed caches.
+Using SQL databases backed by SSDs we can have caches that are much larger and cheaper than traditional memory-only Redis or Memcached backed caches.
 
 ## Usage
 
@@ -14,15 +14,15 @@ To set Solid Cache as your Rails cache, you should add this to your environment
 config.cache_store = :solid_cache_store
 ```
 
-Solid Cache is a FIFO (first in, first out) cache. While this is not as efficient as an LRU cache, this is mitigated by the longer cache lifespan.
+Solid Cache is a FIFO (first in, first out) cache. While this is not as efficient as an LRU cache, it is mitigated by the longer cache lifespan.
 
 A FIFO cache is much easier to manage:
-1. We don't need to track when items are read
+1. We don't need to track when items are read.
 2. We can estimate and control the cache size by comparing the maximum and minimum IDs.
 3. By deleting from one end of the table and adding at the other end we can avoid fragmentation (on MySQL at least).
 
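The ID-based estimate in point 2 can be sketched in plain Ruby (a standalone illustration with made-up IDs, not gem code): with an auto-incrementing primary key and FIFO deletion from the old end, the live row count is roughly the span between the smallest and largest surviving IDs, so no `COUNT(*)` scan is needed.

```ruby
# Surviving IDs after FIFO expiry has deleted the oldest rows (made-up data).
surviving_ids = (51..150).to_a

# Entry count estimate: max(id) - min(id) + 1.
estimate = surviving_ids.max - surviving_ids.min + 1
puts estimate # => 100
```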
 ### Installation
-Add this line to your application's Gemfile:
+Add this line to your application's `Gemfile`:
 
 ```ruby
 gem "solid_cache"
@@ -93,9 +93,9 @@ Setting `databases` to `[cache_db, cache_db2]` is the equivalent of:
 SolidCache::Record.connects_to shards: { cache_db1: { writing: :cache_db1 }, cache_db2: { writing: :cache_db2 } }
 ```
 
-If `connects_to` is set it will be passed directly.
+If `connects_to` is set, it will be passed directly.
 
-If none of these are set, then Solid Cache will use the `ActiveRecord::Base` connection pool. This means that cache reads and writes will be part of any wrapping
+If none of these are set, Solid Cache will use the `ActiveRecord::Base` connection pool. This means that cache reads and writes will be part of any wrapping
 database transaction.
 
 #### Engine configuration
@@ -104,7 +104,7 @@ There are three options that can be set on the engine:
 
 - `executor` - the [Rails executor](https://guides.rubyonrails.org/threading_and_code_execution.html#executor) used to wrap asynchronous operations, defaults to the app executor
 - `connects_to` - a custom connects to value for the abstract `SolidCache::Record` active record model. Required for sharding and/or using a separate cache database to the main app. This will overwrite any value set in `config/solid_cache.yml`
-- `size_estimate_samples` - if `max_size` is set on the cache, the number of the samples used to estimates the size.
+- `size_estimate_samples` - if `max_size` is set on the cache, the number of the samples used to estimates the size
 
 These can be set in your Rails configuration:
 
@@ -116,7 +116,7 @@ end
 
 #### Cache configuration
 
-Solid Cache supports these options in addition to the standard `ActiveSupport::Cache::Store` options.
+Solid Cache supports these options in addition to the standard `ActiveSupport::Cache::Store` options:
 
 - `error_handler` - a Proc to call to handle any `ActiveRecord::ActiveRecordError`s that are raises (default: log errors as warnings)
 - `expiry_batch_size` - the batch size to use when deleting old records (default: `100`)
@@ -125,27 +125,28 @@ Solid Cache supports these options in addition to the standard `ActiveSupport::C
 - `max_age` - the maximum age of entries in the cache (default: `2.weeks.to_i`). Can be set to `nil`, but this is not recommended unless using `max_entries` to limit the size of the cache.
 - `max_entries` - the maximum number of entries allowed in the cache (default: `nil`, meaning no limit)
 - `max_size` - the maximum size of the cache entries (default `nil`, meaning no limit)
-- `cluster` - a Hash of options for the cache database cluster, e.g `{ shards: [:database1, :database2, :database3] }`
-- `clusters` - and Array of Hashes for multiple cache clusters (ignored if `:cluster` is set)
+- `cluster` - (deprecated) a Hash of options for the cache database cluster, e.g `{ shards: [:database1, :database2, :database3] }`
+- `clusters` - (deprecated) an Array of Hashes for multiple cache clusters (ignored if `:cluster` is set)
+- `shards` - an Array of databases
 - `active_record_instrumentation` - whether to instrument the cache's queries (default: `true`)
 - `clear_with` - clear the cache with `:truncate` or `:delete` (default `truncate`, except for when `Rails.env.test?` then `delete`)
 - `max_key_bytesize` - the maximum size of a normalized key in bytes (default `1024`)
 
-For more information on cache clusters see [Sharding the cache](#sharding-the-cache)
+For more information on cache clusters, see [Sharding the cache](#sharding-the-cache)
 
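As a rough sketch of how a store might move off the deprecated key to the new flat `shards` option (the shard names here are hypothetical, and store options can equally be set under `store_options` in `config/solid_cache.yml`):

```ruby
# config/environments/production.rb — hypothetical shard names
# Before (deprecated):
#   config.cache_store = :solid_cache_store, { cluster: { shards: [ :cache_shard1, :cache_shard2 ] } }
# After:
config.cache_store = :solid_cache_store, { shards: [ :cache_shard1, :cache_shard2 ] }
```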
 ### Cache expiry
 
 Solid Cache tracks writes to the cache. For every write it increments a counter by 1. Once the counter reaches 50% of the `expiry_batch_size` it adds a task to run on a background thread. That task will:
 
-1. Check if we have exceeded the `max_entries` or `max_size` values (if set)
+1. Check if we have exceeded the `max_entries` or `max_size` values (if set).
    The current entries are estimated by subtracting the max and min IDs from the `SolidCache::Entry` table.
    The current size is estimated by sampling the entry `byte_size` columns.
-2. If we have it will delete `expiry_batch_size` entries
-3. If not it will delete up to `expiry_batch_size` entries, provided they are all older than `max_age`.
+2. If we have, it will delete `expiry_batch_size` entries.
+3. If not, it will delete up to `expiry_batch_size` entries, provided they are all older than `max_age`.
 
 Expiring when we reach 50% of the batch size allows us to expire records from the cache faster than we write to it when we need to reduce the cache size.
 
-Only triggering expiry when we write means that the if the cache is idle, the background thread is also idle.
+Only triggering expiry when we write means that if the cache is idle, the background thread is also idle.
 
 If you want the cache expiry to be run in a background job instead of a thread, you can set `expiry_method` to `:job`. This will enqueue a `SolidCache::ExpiryJob`.
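The trigger arithmetic can be simulated in plain Ruby (a standalone sketch, not gem code): with the default `expiry_batch_size` of 100, every 50 writes schedule one expiry task that may delete up to 100 entries, so expiry can remove records at up to twice the write rate.

```ruby
expiry_batch_size = 100
counter = 0
tasks = 0

200.times do
  counter += 1
  if counter >= expiry_batch_size / 2 # fire at 50% of the batch size
    tasks += 1
    counter = 0
  end
end

max_deletable = tasks * expiry_batch_size
puts tasks         # => 4
puts max_deletable # => 400, twice the 200 writes
```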
 
@@ -195,9 +196,9 @@ Solid Cache uses the [Maglev](https://static.googleusercontent.com/media/researc
 
 To shard:
 
-1. Add the configuration for the database shards to database.yml
-2. Configure the shards via `config.solid_cache.connects_to`
-3. Pass the shards for the cache to use via the cluster option
+1. Add the configuration for the database shards to database.yml.
+2. Configure the shards via `config.solid_cache.connects_to`.
+3. Pass the shards for the cache to use via the cluster option.
 
 For example:
 ```yml
@@ -220,43 +221,6 @@ production:
   databases: [cache_shard1, cache_shard2, cache_shard3]
 ```
 
-### Secondary cache clusters
-
-You can add secondary cache clusters. Reads will only be sent to the primary cluster (i.e. the first one listed).
-
-Writes will go to all clusters. The writes to the primary cluster are synchronous, but asynchronous to the secondary clusters.
-
-To specific multiple clusters you can do:
-
-```yaml
-# config/solid_cache.yml
-production:
-  databases: [cache_primary_shard1, cache_primary_shard2, cache_secondary_shard1, cache_secondary_shard2]
-  store_options:
-    clusters:
-      - shards: [cache_primary_shard1, cache_primary_shard2]
-      - shards: [cache_secondary_shard1, cache_secondary_shard2]
-```
-
-### Named shard destinations
-
-By default, the node key used for sharding is the name of the database in `database.yml`.
-
-It is possible to add names for the shards in the cluster config. This will allow you to shuffle or remove shards without breaking consistent hashing.
-
-```yaml
-production:
-  databases: [cache_primary_shard1, cache_primary_shard2, cache_secondary_shard1, cache_secondary_shard2]
-  store_options:
-    clusters:
-      - shards:
-          cache_primary_shard1: node1
-          cache_primary_shard2: node2
-      - shards:
-          cache_secondary_shard1: node3
-          cache_secondary_shard2: node4
-```
-
 ### Enabling encryption
 
 Add this to an initializer:
@@ -270,8 +234,8 @@ end
 ### Index size limits
 The Solid Cache migrations try to create an index with 1024 byte entries. If that is too big for your database, you should:
 
-1. Edit the index size in the migration
-2. Set `max_key_bytesize` on your cache to the new value
+1. Edit the index size in the migration.
+2. Set `max_key_bytesize` on your cache to the new value.
 
 ## Development
 
@@ -298,10 +262,10 @@ $ TARGET_DB=mysql bin/rake test
 $ TARGET_DB=postgres bin/rake test
 ```
 
-### Testing with multiple Rails version
+### Testing with multiple Rails versions
 
 Solid Cache relies on [appraisal](https://github.com/thoughtbot/appraisal/tree/main) to test
-multiple Rails version.
+multiple Rails versions.
 
 To run a test for a specific version run:
 
data/Rakefile CHANGED
@@ -23,7 +23,7 @@ def run_without_aborting(*tasks)
 end
 
 def configs
-  [ :default, :cluster, :cluster_inferred, :clusters, :clusters_named, :database, :no_database ]
+  [ :default, :connects_to, :database, :no_database, :shards, :unprepared_statements ]
 end
 
 task :test do
@@ -27,7 +27,7 @@ module SolidCache
   # We then calculate the fraction of the rows we want to sample by dividing the sample size by the estimated number
   # of rows.
   #
-  # The we grab the byte_size sum of the rows in the range of key_hash values excluding any rows that are larger than
+  # Then we grab the byte_size sum of the rows in the range of key_hash values excluding any rows that are larger than
   # our minimum outlier cutoff. We then divide this by the sampling fraction to get an estimate of the size of the
   # non outlier rows
   #
@@ -3,9 +3,9 @@
 module SolidCache
   class Entry
     module Size
-      # Moving averate cache size estimation
+      # Moving average cache size estimation
       #
-      # To reduce variablitity in the cache size estimate, we'll use a moving average of the previous 20 estimates.
+      # To reduce variability in the cache size estimate, we'll use a moving average of the previous 20 estimates.
       # The estimates are stored directly in the cache, under the "__solid_cache_entry_size_moving_average_estimates" key.
       #
       # We'll remove the largest and smallest estimates, and then average remaining ones.
@@ -5,7 +5,7 @@ module SolidCache
     include Expiration, Size
 
     # The estimated cost of an extra row in bytes, including fixed size columns, overhead, indexes and free space
-    # Based on expirimentation on SQLite, MySQL and Postgresql.
+    # Based on experimentation on SQLite, MySQL and Postgresql.
     # A bit high for SQLite (more like 90 bytes), but about right for MySQL/Postgresql.
     ESTIMATED_ROW_OVERHEAD = 140
     KEY_HASH_ID_RANGE = -(2**63)..(2**63 - 1)
@@ -31,7 +31,7 @@ module SolidCache
     end
 
     def delete_by_key(key)
-      delete_no_query_cache(:key_hash, key_hash_for(key))
+      delete_no_query_cache(:key_hash, key_hash_for(key)) > 0
     end
 
     def delete_multi(keys)
@@ -47,21 +47,17 @@ module SolidCache
       in_batches.delete_all
     end
 
-    def increment(key, amount)
+    def lock_and_write(key, &block)
       transaction do
         uncached do
           result = lock.where(key_hash: key_hash_for(key)).pick(:key, :value)
-          amount += result[1].to_i if result&.first == key
-          write(key, amount)
-          amount
+          new_value = block.call(result&.first == key ? result[1] : nil)
+          write(key, new_value) if new_value
+          new_value
         end
       end
     end
 
-    def decrement(key, amount)
-      increment(key, -amount)
-    end
-
     def id_range
       uncached do
         pick(Arel.sql("max(id) - min(id) + 1")) || 0
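The new `lock_and_write` generalises the old `increment`: it yields the current value under a row lock, writes whatever the block returns, and skips the write when the block returns `nil`. A pure-Ruby stand-in with a Hash and a Mutex in place of the database (hypothetical, for illustration only):

```ruby
store = {}
lock  = Mutex.new

# Yields the current value under the lock; writes the block's return value,
# or skips the write when the block returns nil.
lock_and_write = lambda do |key, &block|
  lock.synchronize do
    new_value = block.call(store[key])
    store[key] = new_value unless new_value.nil?
    new_value
  end
end

# increment (and decrement, via a negative amount) falls out as a one-liner:
increment = ->(key, amount) { lock_and_write.call(key) { |value| value.to_i + amount } }

increment.call("counter", 5)
puts increment.call("counter", -2) # => 3
```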
@@ -70,13 +66,13 @@ module SolidCache
 
     private
       def upsert_all_no_query_cache(payloads)
-        insert_all = ActiveRecord::InsertAll.new(
-          self,
-          add_key_hash_and_byte_size(payloads),
-          unique_by: upsert_unique_by,
-          on_duplicate: :update,
-          update_only: upsert_update_only
-        )
+        args = [ self.all,
+                 connection_for_insert_all,
+                 add_key_hash_and_byte_size(payloads) ].compact
+        options = { unique_by: upsert_unique_by,
+                    on_duplicate: :update,
+                    update_only: upsert_update_only }
+        insert_all = ActiveRecord::InsertAll.new(*args, **options)
         sql = connection.build_insert_sql(ActiveRecord::InsertAll::Builder.new(insert_all))
 
         message = +"#{self} "
@@ -86,6 +82,10 @@ module SolidCache
         connection.send exec_query_method, sql, message
       end
 
+      def connection_for_insert_all
+        Rails.version >= "7.2" ? connection : nil
+      end
+
       def add_key_hash_and_byte_size(payloads)
         payloads.map do |payload|
           payload.dup.tap do |payload|
@@ -112,12 +112,8 @@ module SolidCache
       end
 
       def get_all_sql(key_hashes)
-        if connection.prepared_statements?
-          @get_all_sql_binds ||= {}
-          @get_all_sql_binds[key_hashes.count] ||= build_sql(where(key_hash: key_hashes).select(:key, :value))
-        else
-          @get_all_sql_no_binds ||= build_sql(where(key_hash: [ 1, 2 ]).select(:key, :value)).gsub("?, ?", "?")
-        end
+        @get_all_sql ||= {}
+        @get_all_sql[key_hashes.count] ||= build_sql(where(key_hash: key_hashes).select(:key, :value))
       end
 
       def build_sql(relation)
@@ -134,7 +130,7 @@ module SolidCache
         if connection.prepared_statements?
           result = connection.select_all(sanitize_sql(query), "#{name} Load", Array(values), preparable: true)
         else
-          result = connection.select_all(sanitize_sql([ query, values ]), "#{name} Load", Array(values), preparable: false)
+          result = connection.select_all(sanitize_sql([ query, *values ]), "#{name} Load", Array(values), preparable: false)
         end
 
         result.cast_values(SolidCache::Entry.attribute_types)
@@ -148,9 +144,9 @@ module SolidCache
 
       # exec_delete does not clear the query cache
       if connection.prepared_statements?
-        connection.exec_delete(sql, "#{name} Delete All", Array(values)).nonzero?
+        connection.exec_delete(sql, "#{name} Delete All", Array(values))
      else
-        connection.exec_delete(sql, "#{name} Delete All").nonzero?
+        connection.exec_delete(sql, "#{name} Delete All")
      end
    end
  end
1
- class AddKeyHashAndByteSizeToSolidCacheEntries < ActiveRecord::Migration[7.1]
1
+ class AddKeyHashAndByteSizeToSolidCacheEntries < ActiveRecord::Migration[7.0]
2
2
  def change
3
3
  change_table :solid_cache_entries do |t|
4
4
  t.column :key_hash, :integer, null: true, limit: 8
@@ -1,4 +1,4 @@
1
- class AddKeyHashAndByteSizeIndexesAndNullConstraintsToSolidCacheEntries < ActiveRecord::Migration[7.1]
1
+ class AddKeyHashAndByteSizeIndexesAndNullConstraintsToSolidCacheEntries < ActiveRecord::Migration[7.0]
2
2
  def change
3
3
  change_table :solid_cache_entries, bulk: true do |t|
4
4
  t.change_null :key_hash, false
@@ -1,4 +1,4 @@
1
- class RemoveKeyIndexFromSolidCacheEntries < ActiveRecord::Migration[7.1]
1
+ class RemoveKeyIndexFromSolidCacheEntries < ActiveRecord::Migration[7.0]
2
2
  def change
3
3
  change_table :solid_cache_entries do |t|
4
4
  t.remove_index :key, unique: true
@@ -5,10 +5,9 @@ module SolidCache
   class Sharded
     attr_reader :names, :nodes, :consistent_hash
 
-    def initialize(names, nodes)
+    def initialize(names)
       @names = names
-      @nodes = nodes
-      @consistent_hash = MaglevHash.new(@nodes.keys)
+      @consistent_hash = MaglevHash.new(names)
     end
 
     def with_each(&block)
@@ -35,7 +34,7 @@ module SolidCache
 
     private
       def shard_for(key)
-        nodes[consistent_hash.node(key)]
+        consistent_hash.node(key)
       end
   end
 end
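With the separate node mapping removed, `shard_for` maps a key straight to a shard name via the consistent hash. A simplified stand-in (plain modular hashing rather than Maglev, with hypothetical shard names) shows the shape of that lookup:

```ruby
require "zlib"

shards = [ :cache_shard1, :cache_shard2, :cache_shard3 ] # hypothetical names

# Stable key -> shard-name lookup. The real code uses MaglevHash, which gives
# better balance and minimal key movement when shards are added or removed.
shard_for = ->(key) { shards[Zlib.crc32(key.to_s) % shards.size] }

puts shard_for.call("views/products/1")
# The same key always lands on the same shard:
raise "unstable" unless shard_for.call("abc") == shard_for.call("abc")
```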
@@ -7,13 +7,8 @@ module SolidCache
     case options
     when NilClass
       names = SolidCache.configuration.shard_keys
-      nodes = names.to_h { |name| [ name, name ] }
     when Array
       names = options.map(&:to_sym)
-      nodes = names.to_h { |name| [ name, name ] }
-    when Hash
-      names = options.keys.map(&:to_sym)
-      nodes = options.to_h { |names, nodes| [ nodes.to_sym, names.to_sym ] }
     end
 
     if (unknown_shards = names - SolidCache.configuration.shard_keys).any?
@@ -23,7 +18,7 @@ module SolidCache
     if names.size == 1
       Single.new(names.first)
     else
-      Sharded.new(names, nodes)
+      Sharded.new(names)
     end
   else
     Unmanaged.new
@@ -15,17 +15,11 @@ module SolidCache
     end
 
     def increment(name, amount = 1, options = nil)
-      options = merged_options(options)
-      key = normalize_key(name, options)
-
-      entry_increment(key, amount)
+      adjust(name, amount, options)
     end
 
     def decrement(name, amount = 1, options = nil)
-      options = merged_options(options)
-      key = normalize_key(name, options)
-
-      entry_decrement(key, amount)
+      adjust(name, -amount, options)
     end
 
     def cleanup(options = nil)
@@ -41,20 +35,31 @@ module SolidCache
       deserialize_entry(read_serialized_entry(key, **options), **options)
     end
 
-    def read_serialized_entry(key, raw: false, **options)
+    def read_serialized_entry(key, **options)
       entry_read(key)
     end
 
-    def write_entry(key, entry, raw: false, **options)
+    def write_entry(key, entry, raw: false, unless_exist: false, **options)
       payload = serialize_entry(entry, raw: raw, **options)
-      # No-op for us, but this writes it to the local cache
-      write_serialized_entry(key, payload, raw: raw, **options)
 
-      entry_write(key, payload)
+      if unless_exist
+        written = false
+        entry_lock_and_write(key) do |value|
+          if value.nil? || deserialize_entry(value, **options).expired?
+            written = true
+            payload
+          end
+        end
+      else
+        written = entry_write(key, payload)
+      end
+
+      write_serialized_entry(key, payload, raw: raw, returning: written, **options)
+      written
     end
 
-    def write_serialized_entry(key, payload, raw: false, unless_exist: false, expires_in: nil, race_condition_ttl: nil, **options)
-      true
+    def write_serialized_entry(key, payload, raw: false, unless_exist: false, expires_in: nil, race_condition_ttl: nil, returning: true, **options)
+      returning
     end
 
     def read_serialized_entries(keys)
@@ -109,11 +114,7 @@ module SolidCache
     end
 
     def serialize_entry(entry, raw: false, **options)
-      if raw
-        entry.value.to_s
-      else
-        super(entry, raw: raw, **options)
-      end
+      super(entry, raw: raw, **options)
     end
 
     def serialize_entries(entries, **options)
@@ -122,12 +123,8 @@ module SolidCache
       end
     end
 
-    def deserialize_entry(payload, raw: false, **)
-      if payload && raw
-        ActiveSupport::Cache::Entry.new(payload)
-      else
-        super(payload)
-      end
+    def deserialize_entry(payload, **)
+      super(payload)
     end
 
     def normalize_key(key, options)
@@ -143,6 +140,30 @@ module SolidCache
          key
        end
      end
+
+      def adjust(name, amount, options)
+        options = merged_options(options)
+        key = normalize_key(name, options)
+
+        new_value = entry_lock_and_write(key) do |value|
+          serialize_entry(adjusted_entry(value, amount, options))
+        end
+        deserialize_entry(new_value, **options).value if new_value
+      end
+
+      def adjusted_entry(value, amount, options)
+        entry = deserialize_entry(value, **options)
+
+        if entry && !entry.expired?
+          ActiveSupport::Cache::Entry.new \
+            amount + entry.value.to_i, **options.dup.merge(expires_in: nil, expires_at: entry.expires_at)
+        elsif /\A\d+\z/.match?(value)
+          # This is to match old raw values
+          ActiveSupport::Cache::Entry.new(amount + value.to_i, **options)
+        else
+          ActiveSupport::Cache::Entry.new(amount, **options)
+        end
+      end
    end
  end
 end
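The three branches of `adjusted_entry` decide what an increment starts from: a live entry's integer value, a legacy raw numeric string, or nothing at all. A standalone sketch of just that decision, detached from the cache plumbing (the `Integer` check stands in for "live, unexpired entry" purely for illustration):

```ruby
# value is what the locked read yielded: an unexpired entry value (Integer here
# for simplicity), a legacy raw string, or nil when the key is missing/expired.
adjusted_value = lambda do |value, amount|
  if value.is_a?(Integer)
    amount + value                 # live entry: add to its current value
  elsif /\A\d+\z/.match?(value.to_s)
    amount + value.to_i            # old raw-format value stored as a string
  else
    amount                         # missing or expired: start from the delta
  end
end

puts adjusted_value.call(10, 5)    # => 15
puts adjusted_value.call("42", 1)  # => 43
puts adjusted_value.call(nil, 3)   # => 3
```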
@@ -0,0 +1,108 @@
+# frozen_string_literal: true
+
+module SolidCache
+  class Store
+    module Connections
+      attr_reader :shard_options
+
+      def initialize(options = {})
+        super(options)
+        if options[:clusters].present?
+          if options[:clusters].size > 1
+            raise ArgumentError, "Multiple clusters are no longer supported"
+          else
+            ActiveSupport.deprecator.warn(":clusters is deprecated, use :shards instead.")
+          end
+          @shard_options = options.fetch(:clusters).first[:shards]
+        elsif options[:cluster].present?
+          ActiveSupport.deprecator.warn(":cluster is deprecated, use :shards instead.")
+          @shard_options = options.fetch(:cluster, {})[:shards]
+        else
+          @shard_options = options.fetch(:shards, nil)
+        end
+
+        if [ Array, NilClass ].none? { |klass| @shard_options.is_a? klass }
+          raise ArgumentError, "`shards` is a `#{@shard_options.class.name}`, it should be Array or nil"
+        end
+      end
+
+      def with_each_connection(async: false, &block)
+        return enum_for(:with_each_connection) unless block_given?
+
+        connections.with_each do
+          execute(async, &block)
+        end
+      end
+
+      def with_connection_for(key, async: false, &block)
+        connections.with_connection_for(key) do
+          execute(async, &block)
+        end
+      end
+
+      def with_connection(name, async: false, &block)
+        connections.with(name) do
+          execute(async, &block)
+        end
+      end
+
+      def group_by_connection(keys)
+        connections.assign(keys)
+      end
+
+      def connection_names
+        connections.names
+      end
+
+      def connections
+        @connections ||= SolidCache::Connections.from_config(@shard_options)
+      end
+
+      private
+        def setup!
+          connections
+        end
+
+        def reading_key(key, failsafe:, failsafe_returning: nil, &block)
+          failsafe(failsafe, returning: failsafe_returning) do
+            with_connection_for(key, &block)
+          end
+        end
+
+        def reading_keys(keys, failsafe:, failsafe_returning: nil)
+          group_by_connection(keys).map do |connection, keys|
+            failsafe(failsafe, returning: failsafe_returning) do
+              with_connection(connection) do
+                yield keys
+              end
+            end
+          end
+        end
+
+
+        def writing_key(key, failsafe:, failsafe_returning: nil, &block)
+          failsafe(failsafe, returning: failsafe_returning) do
+            with_connection_for(key, &block)
+          end
+        end
+
+        def writing_keys(entries, failsafe:, failsafe_returning: nil)
+          group_by_connection(entries).map do |connection, entries|
+            failsafe(failsafe, returning: failsafe_returning) do
+              with_connection(connection) do
+                yield entries
+              end
+            end
+          end
+        end
+
+        def writing_all(failsafe:, failsafe_returning: nil, &block)
+          connection_names.map do |connection|
+            failsafe(failsafe, returning: failsafe_returning) do
+              with_connection(connection, &block)
+            end
+          end.first
+        end
+    end
+  end
+end
@@ -18,7 +18,7 @@ module SolidCache
 
   private
     def entry_clear
-      writing_all(failsafe: :clear) do
+      writing_all(failsafe: :clear, failsafe_returning: nil) do
         if clear_with == :truncate
           Entry.clear_truncate
         else
@@ -27,15 +27,11 @@ module SolidCache
       end
     end
 
-    def entry_increment(key, amount)
+    def entry_lock_and_write(key, &block)
       writing_key(key, failsafe: :increment) do
-        Entry.increment(key, amount)
-      end
-    end
-
-    def entry_decrement(key, amount)
-      writing_key(key, failsafe: :decrement) do
-        Entry.decrement(key, amount)
+        Entry.lock_and_write(key) do |value|
+          block.call(value).tap { |result| track_writes(1) if result }
+        end
       end
     end
 
@@ -52,17 +48,17 @@ module SolidCache
     end
 
     def entry_write(key, payload)
-      writing_key(key, failsafe: :write_entry, failsafe_returning: false) do |cluster|
+      writing_key(key, failsafe: :write_entry, failsafe_returning: nil) do
         Entry.write(key, payload)
-        cluster.track_writes(1)
+        track_writes(1)
         true
       end
     end
 
     def entry_write_multi(entries)
-      writing_keys(entries, failsafe: :write_multi_entries, failsafe_returning: false) do |cluster, entries|
+      writing_keys(entries, failsafe: :write_multi_entries, failsafe_returning: false) do |entries|
         Entry.write_multi(entries)
-        cluster.track_writes(entries.count)
+        track_writes(entries.count)
         true
       end
     end
@@ -74,7 +70,7 @@ module SolidCache
     end
 
     def entry_delete_multi(entries)
-      writing_keys(entries, failsafe: :delete_multi_entries, failsafe_returning: false) do
+      writing_keys(entries, failsafe: :delete_multi_entries, failsafe_returning: 0) do
        Entry.delete_multi(entries)
      end
    end
@@ -1,7 +1,7 @@
 # frozen_string_literal: true
 
 module SolidCache
-  class Cluster
+  class Store
     module Execution
       def initialize(options = {})
         super(options)
@@ -16,7 +16,7 @@ module SolidCache
         @background << ->() do
           wrap_in_rails_executor do
             connections.with(current_shard) do
-              instrument(&block)
+              setup_instrumentation(&block)
             end
           end
         rescue Exception => exception
@@ -28,7 +28,7 @@ module SolidCache
         if async
           async(&block)
         else
-          instrument(&block)
+          setup_instrumentation(&block)
         end
       end
 
@@ -44,7 +44,7 @@ module SolidCache
         @active_record_instrumentation
       end
 
-      def instrument(&block)
+      def setup_instrumentation(&block)
        if active_record_instrumentation?
          block.call
        else
@@ -3,7 +3,7 @@
 require "concurrent/atomic/atomic_fixnum"
 
 module SolidCache
-  class Cluster
+  class Store
    module Expiry
      # For every write that we do, we attempt to delete EXPIRY_MULTIPLIER times as many records.
      # This ensures there is downward pressure on the cache size while there is valid data to delete
@@ -1,10 +1,10 @@
 # frozen_string_literal: true
 
 module SolidCache
-  class Cluster
+  class Store
    module Stats
      def initialize(options = {})
-        super()
+        super(options)
      end
 
      def stats
@@ -2,7 +2,7 @@
 
 module SolidCache
   class Store < ActiveSupport::Cache::Store
-    include Api, Clusters, Entries, Failsafe
+    include Api, Connections, Entries, Execution, Expiry, Failsafe, Stats
     prepend ActiveSupport::Cache::Strategy::LocalCache
 
     def initialize(options = {})
@@ -16,9 +16,5 @@ module SolidCache
     def setup!
       super
     end
-
-    def stats
-      primary_cluster.stats
-    end
   end
 end
@@ -1,5 +1,5 @@
 # frozen_string_literal: true
 
 module SolidCache
-  VERSION = "0.5.3"
+  VERSION = "0.7.0"
 end
metadata CHANGED
@@ -1,14 +1,14 @@
 --- !ruby/object:Gem::Specification
 name: solid_cache
 version: !ruby/object:Gem::Version
-  version: 0.5.3
+  version: 0.7.0
 platform: ruby
 authors:
 - Donal McBreen
 autorequire:
 bindir: bin
 cert_chain: []
-date: 2024-02-29 00:00:00.000000000 Z
+date: 2024-07-26 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: activerecord
@@ -80,6 +80,20 @@ dependencies:
   - - ">="
     - !ruby/object:Gem::Version
       version: '0'
+- !ruby/object:Gem::Dependency
+  name: msgpack
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - ">="
+      - !ruby/object:Gem::Version
+        version: '0'
+  type: :development
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - ">="
+      - !ruby/object:Gem::Version
+        version: '0'
 description: A database backed ActiveSupport::Cache::Store
 email:
 - donal@37signals.com
@@ -106,11 +120,6 @@ files:
 - lib/generators/solid_cache/install/install_generator.rb
 - lib/generators/solid_cache/install/templates/config/solid_cache.yml.tt
 - lib/solid_cache.rb
-- lib/solid_cache/cluster.rb
-- lib/solid_cache/cluster/connections.rb
-- lib/solid_cache/cluster/execution.rb
-- lib/solid_cache/cluster/expiry.rb
-- lib/solid_cache/cluster/stats.rb
 - lib/solid_cache/configuration.rb
 - lib/solid_cache/connections.rb
 - lib/solid_cache/connections/sharded.rb
@@ -120,9 +129,12 @@ files:
 - lib/solid_cache/maglev_hash.rb
 - lib/solid_cache/store.rb
 - lib/solid_cache/store/api.rb
-- lib/solid_cache/store/clusters.rb
+- lib/solid_cache/store/connections.rb
 - lib/solid_cache/store/entries.rb
+- lib/solid_cache/store/execution.rb
+- lib/solid_cache/store/expiry.rb
 - lib/solid_cache/store/failsafe.rb
+- lib/solid_cache/store/stats.rb
 - lib/solid_cache/version.rb
 - lib/tasks/solid_cache_tasks.rake
 homepage: http://github.com/rails/solid_cache
@@ -148,7 +160,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
   - !ruby/object:Gem::Version
     version: '0'
 requirements: []
-rubygems_version: 3.5.6
+rubygems_version: 3.5.11
 signing_key:
 specification_version: 4
 summary: A database backed ActiveSupport::Cache::Store
@@ -1,55 +0,0 @@
- # frozen_string_literal: true
-
- module SolidCache
-   class Cluster
-     module Connections
-       attr_reader :shard_options
-
-       def initialize(options = {})
-         super(options)
-         @shard_options = options.fetch(:shards, nil)
-
-         if [ Hash, Array, NilClass ].none? { |klass| @shard_options.is_a? klass }
-           raise ArgumentError, "`shards` is a `#{@shard_options.class.name}`, it should be one of Array, Hash or nil"
-         end
-       end
-
-       def with_each_connection(async: false, &block)
-         return enum_for(:with_each_connection) unless block_given?
-
-         connections.with_each do
-           execute(async, &block)
-         end
-       end
-
-       def with_connection_for(key, async: false, &block)
-         connections.with_connection_for(key) do
-           execute(async, &block)
-         end
-       end
-
-       def with_connection(name, async: false, &block)
-         connections.with(name) do
-           execute(async, &block)
-         end
-       end
-
-       def group_by_connection(keys)
-         connections.assign(keys)
-       end
-
-       def connection_names
-         connections.names
-       end
-
-       def connections
-         @connections ||= SolidCache::Connections.from_config(@shard_options)
-       end
-
-       private
-         def setup!
-           connections
-         end
-     end
-   end
- end
@@ -1,18 +0,0 @@
- # frozen_string_literal: true
-
- module SolidCache
-   class Cluster
-     include Connections, Execution, Expiry, Stats
-
-     attr_reader :error_handler
-
-     def initialize(options = {})
-       @error_handler = options[:error_handler]
-       super(options)
-     end
-
-     def setup!
-       super
-     end
-   end
- end
@@ -1,83 +0,0 @@
- # frozen_string_literal: true
-
- module SolidCache
-   class Store
-     module Clusters
-       attr_reader :primary_cluster, :clusters
-
-       def initialize(options = {})
-         super(options)
-
-         clusters_options = options.fetch(:clusters) { [ options.fetch(:cluster, {}) ] }
-
-         @clusters = clusters_options.map.with_index do |cluster_options, index|
-           Cluster.new(options.merge(cluster_options).merge(async_writes: index != 0, error_handler: error_handler))
-         end
-
-         @primary_cluster = clusters.first
-       end
-
-       def setup!
-         clusters.each(&:setup!)
-       end
-
-       private
-         def reading_key(key, failsafe:, failsafe_returning: nil, &block)
-           failsafe(failsafe, returning: failsafe_returning) do
-             primary_cluster.with_connection_for(key, &block)
-           end
-         end
-
-         def reading_keys(keys, failsafe:, failsafe_returning: nil)
-           connection_keys = primary_cluster.group_by_connection(keys)
-
-           connection_keys.map do |connection, keys|
-             failsafe(failsafe, returning: failsafe_returning) do
-               primary_cluster.with_connection(connection) do
-                 yield keys
-               end
-             end
-           end
-         end
-
-
-         def writing_key(key, failsafe:, failsafe_returning: nil)
-           first_cluster_sync_rest_async do |cluster, async|
-             failsafe(failsafe, returning: failsafe_returning) do
-               cluster.with_connection_for(key, async: async) do
-                 yield cluster
-               end
-             end
-           end
-         end
-
-         def writing_keys(entries, failsafe:, failsafe_returning: nil)
-           first_cluster_sync_rest_async do |cluster, async|
-             connection_entries = cluster.group_by_connection(entries)
-
-             connection_entries.map do |connection, entries|
-               failsafe(failsafe, returning: failsafe_returning) do
-                 cluster.with_connection(connection, async: async) do
-                   yield cluster, entries
-                 end
-               end
-             end
-           end
-         end
-
-         def writing_all(failsafe:, failsafe_returning: nil, &block)
-           first_cluster_sync_rest_async do |cluster, async|
-             cluster.connection_names.each do |connection|
-               failsafe(failsafe, returning: failsafe_returning) do
-                 cluster.with_connection(connection, async: async, &block)
-               end
-             end
-           end
-         end
-
-         def first_cluster_sync_rest_async
-           clusters.map.with_index { |cluster, index| yield cluster, index != 0 }.first
-         end
-     end
-   end
- end