solid_cache 0.6.0 → 0.7.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 7ba0486907f0e4656593843d1cca1c7ec35763772ff6d5a2a4e43b9d43b023b3
- data.tar.gz: a93991c212dcb6d0a1dedb20e2d8d85e0664092c2342db895489beafb9f17e23
+ metadata.gz: 3bb8fa17755c13f25f2a42794d3be5843de423f484d36d7d4a6c38c0241862c7
+ data.tar.gz: d71f6c7aad61cb20f1b1f2a077550e63bdb47d37c33b4a498855d9d35427a153
  SHA512:
- metadata.gz: 9897aef78db43aff6bceea922aa43669f76a48852d4dc94e64ce8aa198c1ca7fd04152a15033760a6fecad11245b650369342345c6f07e366c3d3d4e0a71f1da
- data.tar.gz: cd59a068b761fd6060005d9ef78328cd03a3420ee539c07a6afd14eed072a0e9a092d135fbadeb5a58d38aa5a146e8481286b50ef411548a15b701464c0394b0
+ metadata.gz: 716d333398fa3efa935668d918e92742df8f7c8056c7ee222a545167085f2fd3b871fb96d3a8d0f6961ea63f1c0e5ac69146c8469985afa3da9913d4f4b0de0f
+ data.tar.gz: f51f7265604cf7c22ff2cb1ba9d865c0210b3cc54f2e57a0503dca99be015d345feb03d6eecfe1ed8211eba7e049f7aa5c70d647985fa5444d7cc3a393e7c04d
data/README.md CHANGED
@@ -4,7 +4,7 @@
 
  Solid Cache is a database-backed Active Support cache store implementation.
 
- Using SQL databases backed by SSDs we can have caches that are much larger and cheaper than traditional memory only Redis or Memcached backed caches.
+ Using SQL databases backed by SSDs we can have caches that are much larger and cheaper than traditional memory-only Redis or Memcached backed caches.
 
  ## Usage
 
@@ -14,15 +14,15 @@ To set Solid Cache as your Rails cache, you should add this to your environment
  config.cache_store = :solid_cache_store
  ```
 
- Solid Cache is a FIFO (first in, first out) cache. While this is not as efficient as an LRU cache, this is mitigated by the longer cache lifespan.
+ Solid Cache is a FIFO (first in, first out) cache. While this is not as efficient as an LRU cache, it is mitigated by the longer cache lifespan.
 
  A FIFO cache is much easier to manage:
- 1. We don't need to track when items are read
+ 1. We don't need to track when items are read.
  2. We can estimate and control the cache size by comparing the maximum and minimum IDs.
  3. By deleting from one end of the table and adding at the other end we can avoid fragmentation (on MySQL at least).
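
The ID-based estimate in point 2 can be sketched in one line (illustrative, using this gem's `SolidCache::Entry` model):

```ruby
# With FIFO inserts and deletes happening at opposite ends of the ID range,
# the span between newest and oldest IDs approximates the live row count.
estimated_entries = SolidCache::Entry.maximum(:id).to_i - SolidCache::Entry.minimum(:id).to_i
```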
 
  ### Installation
- Add this line to your application's Gemfile:
+ Add this line to your application's `Gemfile`:
 
  ```ruby
  gem "solid_cache"
@@ -93,9 +93,9 @@ Setting `databases` to `[cache_db, cache_db2]` is the equivalent of:
  SolidCache::Record.connects_to shards: { cache_db1: { writing: :cache_db1 }, cache_db2: { writing: :cache_db2 } }
  ```
 
- If `connects_to` is set it will be passed directly.
+ If `connects_to` is set, it will be passed directly.
 
- If none of these are set, then Solid Cache will use the `ActiveRecord::Base` connection pool. This means that cache reads and writes will be part of any wrapping
+ If none of these are set, Solid Cache will use the `ActiveRecord::Base` connection pool. This means that cache reads and writes will be part of any wrapping
  database transaction.
 
  #### Engine configuration
@@ -104,7 +104,7 @@ There are three options that can be set on the engine:
 
  - `executor` - the [Rails executor](https://guides.rubyonrails.org/threading_and_code_execution.html#executor) used to wrap asynchronous operations, defaults to the app executor
  - `connects_to` - a custom connects to value for the abstract `SolidCache::Record` active record model. Required for sharding and/or using a separate cache database to the main app. This will overwrite any value set in `config/solid_cache.yml`
- - `size_estimate_samples` - if `max_size` is set on the cache, the number of the samples used to estimates the size.
+ - `size_estimate_samples` - if `max_size` is set on the cache, the number of samples used to estimate the size
 
  These can be set in your Rails configuration:
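
For example (these are real engine options; the values are illustrative):

```ruby
# config/application.rb
config.solid_cache.executor = Rails.application.executor
config.solid_cache.connects_to = { database: { writing: :cache } }
config.solid_cache.size_estimate_samples = 10_000
```
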
@@ -116,7 +116,7 @@ end
 
  #### Cache configuration
 
- Solid Cache supports these options in addition to the standard `ActiveSupport::Cache::Store` options.
+ Solid Cache supports these options in addition to the standard `ActiveSupport::Cache::Store` options:
 
  - `error_handler` - a Proc to call to handle any `ActiveRecord::ActiveRecordError`s that are raised (default: log errors as warnings)
  - `expiry_batch_size` - the batch size to use when deleting old records (default: `100`)
@@ -125,27 +125,28 @@ Solid Cache supports these options in addition to the standard `ActiveSupport::C
  - `max_age` - the maximum age of entries in the cache (default: `2.weeks.to_i`). Can be set to `nil`, but this is not recommended unless using `max_entries` to limit the size of the cache.
  - `max_entries` - the maximum number of entries allowed in the cache (default: `nil`, meaning no limit)
  - `max_size` - the maximum size of the cache entries (default `nil`, meaning no limit)
- - `cluster` - a Hash of options for the cache database cluster, e.g `{ shards: [:database1, :database2, :database3] }`
- - `clusters` - and Array of Hashes for multiple cache clusters (ignored if `:cluster` is set)
+ - `cluster` - (deprecated) a Hash of options for the cache database cluster, e.g. `{ shards: [:database1, :database2, :database3] }`
+ - `clusters` - (deprecated) an Array of Hashes for multiple cache clusters (ignored if `:cluster` is set)
+ - `shards` - an Array of databases
  - `active_record_instrumentation` - whether to instrument the cache's queries (default: `true`)
  - `clear_with` - clear the cache with `:truncate` or `:delete` (default `truncate`, except for when `Rails.env.test?` then `delete`)
  - `max_key_bytesize` - the maximum size of a normalized key in bytes (default `1024`)
 
- For more information on cache clusters see [Sharding the cache](#sharding-the-cache)
+ For more information on cache clusters, see [Sharding the cache](#sharding-the-cache).
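
The new `shards` option replaces the deprecated cluster options; a minimal sketch (shard names are illustrative):

```ruby
# Equivalent to setting store_options in config/solid_cache.yml;
# :shards expects an Array of database names from database.yml, or nil.
cache = SolidCache::Store.new(shards: [ :cache_shard1, :cache_shard2 ])
```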
 
  ### Cache expiry
 
  Solid Cache tracks writes to the cache. For every write it increments a counter by 1. Once the counter reaches 50% of the `expiry_batch_size` it adds a task to run on a background thread. That task will:
 
- 1. Check if we have exceeded the `max_entries` or `max_size` values (if set)
+ 1. Check if we have exceeded the `max_entries` or `max_size` values (if set).
  The current entries are estimated by subtracting the max and min IDs from the `SolidCache::Entry` table.
  The current size is estimated by sampling the entry `byte_size` columns.
- 2. If we have it will delete `expiry_batch_size` entries
- 3. If not it will delete up to `expiry_batch_size` entries, provided they are all older than `max_age`.
+ 2. If we have, it will delete `expiry_batch_size` entries.
+ 3. If not, it will delete up to `expiry_batch_size` entries, provided they are all older than `max_age`.
 
  Expiring when we reach 50% of the batch size allows us to expire records from the cache faster than we write to it when we need to reduce the cache size.
 
- Only triggering expiry when we write means that the if the cache is idle, the background thread is also idle.
+ Only triggering expiry when we write means that if the cache is idle, the background thread is also idle.
 
  If you want the cache expiry to be run in a background job instead of a thread, you can set `expiry_method` to `:job`. This will enqueue a `SolidCache::ExpiryJob`.
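
The trigger logic sketches out roughly as below (illustrative only; `ExpiryTracker` and `schedule_expiry` are made-up names, though the gem really does use concurrent-ruby's `AtomicFixnum` for its write counter):

```ruby
require "concurrent/atomic/atomic_fixnum"

class ExpiryTracker
  def initialize(expiry_batch_size: 100)
    @expiry_batch_size = expiry_batch_size
    @writes = Concurrent::AtomicFixnum.new(0)
  end

  # Called after every write. Triggering at 50% of the batch size means each
  # expiry pass can delete up to twice as many rows as were just written,
  # which is what lets the cache shrink while still under write load.
  def track_writes(count = 1)
    if @writes.increment(count) >= @expiry_batch_size / 2
      @writes.value = 0
      schedule_expiry
    end
  end

  private
    def schedule_expiry
      # Stand-in: run the batch delete on a background thread, or enqueue
      # SolidCache::ExpiryJob when expiry_method is :job.
      Thread.new { }
    end
end
```
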
@@ -195,9 +196,9 @@ Solid Cache uses the [Maglev](https://static.googleusercontent.com/media/researc
 
  To shard:
 
- 1. Add the configuration for the database shards to database.yml
- 2. Configure the shards via `config.solid_cache.connects_to`
- 3. Pass the shards for the cache to use via the cluster option
+ 1. Add the configuration for the database shards to database.yml.
+ 2. Configure the shards via `config.solid_cache.connects_to`.
+ 3. Pass the shards for the cache to use via the cluster option.
 
  For example:
  ```yml
@@ -220,43 +221,6 @@ production:
    databases: [cache_shard1, cache_shard2, cache_shard3]
  ```
 
- ### Secondary cache clusters
-
- You can add secondary cache clusters. Reads will only be sent to the primary cluster (i.e. the first one listed).
-
- Writes will go to all clusters. The writes to the primary cluster are synchronous, but asynchronous to the secondary clusters.
-
- To specify multiple clusters you can do:
-
- ```yaml
- # config/solid_cache.yml
- production:
-   databases: [cache_primary_shard1, cache_primary_shard2, cache_secondary_shard1, cache_secondary_shard2]
-   store_options:
-     clusters:
-       - shards: [cache_primary_shard1, cache_primary_shard2]
-       - shards: [cache_secondary_shard1, cache_secondary_shard2]
- ```
-
- ### Named shard destinations
-
- By default, the node key used for sharding is the name of the database in `database.yml`.
-
- It is possible to add names for the shards in the cluster config. This will allow you to shuffle or remove shards without breaking consistent hashing.
-
- ```yaml
- production:
-   databases: [cache_primary_shard1, cache_primary_shard2, cache_secondary_shard1, cache_secondary_shard2]
-   store_options:
-     clusters:
-       - shards:
-           cache_primary_shard1: node1
-           cache_primary_shard2: node2
-       - shards:
-           cache_secondary_shard1: node3
-           cache_secondary_shard2: node4
- ```
-
  ### Enabling encryption
 
  Add this to an initializer:
@@ -270,8 +234,8 @@ end
  ### Index size limits
  The Solid Cache migrations try to create an index with 1024 byte entries. If that is too big for your database, you should:
 
- 1. Edit the index size in the migration
- 2. Set `max_key_bytesize` on your cache to the new value
+ 1. Edit the index size in the migration.
+ 2. Set `max_key_bytesize` on your cache to the new value.
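
For step 2, the option can be passed with the cache store config (512 is an illustrative value matching a smaller index):

```ruby
# config/environments/production.rb
config.cache_store = :solid_cache_store, { max_key_bytesize: 512 }
```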
 
  ## Development
 
@@ -298,10 +262,10 @@ $ TARGET_DB=mysql bin/rake test
  $ TARGET_DB=postgres bin/rake test
  ```
 
- ### Testing with multiple Rails version
+ ### Testing with multiple Rails versions
 
  Solid Cache relies on [appraisal](https://github.com/thoughtbot/appraisal/tree/main) to test
- multiple Rails version.
+ multiple Rails versions.
 
  To run a test for a specific version run:
 
data/Rakefile CHANGED
@@ -23,7 +23,7 @@ def run_without_aborting(*tasks)
  end
 
  def configs
-   [ :default, :cluster, :cluster_inferred, :clusters, :clusters_named, :database, :no_database ]
+   [ :default, :connects_to, :database, :no_database, :shards, :unprepared_statements ]
  end
 
  task :test do
@@ -27,7 +27,7 @@ module SolidCache
    # We then calculate the fraction of the rows we want to sample by dividing the sample size by the estimated number
    # of rows.
    #
-   # The we grab the byte_size sum of the rows in the range of key_hash values excluding any rows that are larger than
+   # Then we grab the byte_size sum of the rows in the range of key_hash values excluding any rows that are larger than
    # our minimum outlier cutoff. We then divide this by the sampling fraction to get an estimate of the size of the
    # non outlier rows
    #
@@ -3,9 +3,9 @@
  module SolidCache
    class Entry
      module Size
-       # Moving averate cache size estimation
+       # Moving average cache size estimation
        #
-       # To reduce variablitity in the cache size estimate, we'll use a moving average of the previous 20 estimates.
+       # To reduce variability in the cache size estimate, we'll use a moving average of the previous 20 estimates.
        # The estimates are stored directly in the cache, under the "__solid_cache_entry_size_moving_average_estimates" key.
        #
        # We'll remove the largest and smallest estimates, and then average remaining ones.
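
A rough illustration of that trimmed moving average (a sketch, not the gem's implementation):

```ruby
# Average the most recent estimates after dropping the extremes.
def trimmed_moving_average(estimates, window: 20)
  recent = estimates.last(window)
  trimmed = recent.size > 2 ? recent.sort[1..-2] : recent # drop min and max
  trimmed.sum.fdiv(trimmed.size)
end
```
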
@@ -5,7 +5,7 @@ module SolidCache
    include Expiration, Size
 
    # The estimated cost of an extra row in bytes, including fixed size columns, overhead, indexes and free space
-   # Based on expirimentation on SQLite, MySQL and Postgresql.
+   # Based on experimentation on SQLite, MySQL and Postgresql.
    # A bit high for SQLite (more like 90 bytes), but about right for MySQL/Postgresql.
    ESTIMATED_ROW_OVERHEAD = 140
    KEY_HASH_ID_RANGE = -(2**63)..(2**63 - 1)
@@ -52,7 +52,7 @@ module SolidCache
    uncached do
      result = lock.where(key_hash: key_hash_for(key)).pick(:key, :value)
      new_value = block.call(result&.first == key ? result[1] : nil)
-     write(key, new_value)
+     write(key, new_value) if new_value
      new_value
    end
  end
@@ -66,7 +66,7 @@ module SolidCache
 
    private
      def upsert_all_no_query_cache(payloads)
-       args = [ self,
+       args = [ self.all,
                 connection_for_insert_all,
                 add_key_hash_and_byte_size(payloads) ].compact
        options = { unique_by: upsert_unique_by,
@@ -112,12 +112,8 @@ module SolidCache
    end
 
    def get_all_sql(key_hashes)
-     if connection.prepared_statements?
-       @get_all_sql_binds ||= {}
-       @get_all_sql_binds[key_hashes.count] ||= build_sql(where(key_hash: key_hashes).select(:key, :value))
-     else
-       @get_all_sql_no_binds ||= build_sql(where(key_hash: [ 1, 2 ]).select(:key, :value)).gsub("?, ?", "?")
-     end
+     @get_all_sql ||= {}
+     @get_all_sql[key_hashes.count] ||= build_sql(where(key_hash: key_hashes).select(:key, :value))
    end
 
    def build_sql(relation)
  def build_sql(relation)
@@ -134,7 +130,7 @@ module SolidCache
134
130
  if connection.prepared_statements?
135
131
  result = connection.select_all(sanitize_sql(query), "#{name} Load", Array(values), preparable: true)
136
132
  else
137
- result = connection.select_all(sanitize_sql([ query, values ]), "#{name} Load", Array(values), preparable: false)
133
+ result = connection.select_all(sanitize_sql([ query, *values ]), "#{name} Load", Array(values), preparable: false)
138
134
  end
139
135
 
140
136
  result.cast_values(SolidCache::Entry.attribute_types)
@@ -5,10 +5,9 @@ module SolidCache
    class Sharded
      attr_reader :names, :nodes, :consistent_hash
 
-     def initialize(names, nodes)
+     def initialize(names)
        @names = names
-       @nodes = nodes
-       @consistent_hash = MaglevHash.new(@nodes.keys)
+       @consistent_hash = MaglevHash.new(names)
      end
 
      def with_each(&block)
@@ -35,7 +34,7 @@ module SolidCache
 
      private
        def shard_for(key)
-         nodes[consistent_hash.node(key)]
+         consistent_hash.node(key)
        end
    end
  end
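
With the node indirection removed, the consistent hash now maps keys straight to database names. A sketch of the behaviour (shard names are illustrative):

```ruby
hash = SolidCache::MaglevHash.new([ :cache_shard1, :cache_shard2 ])
hash.node("users/42") # => :cache_shard1 or :cache_shard2, stable for a given key
```
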
@@ -7,13 +7,8 @@ module SolidCache
      case options
      when NilClass
        names = SolidCache.configuration.shard_keys
-       nodes = names.to_h { |name| [ name, name ] }
      when Array
        names = options.map(&:to_sym)
-       nodes = names.to_h { |name| [ name, name ] }
-     when Hash
-       names = options.keys.map(&:to_sym)
-       nodes = options.to_h { |names, nodes| [ nodes.to_sym, names.to_sym ] }
      end
 
      if (unknown_shards = names - SolidCache.configuration.shard_keys).any?
@@ -23,7 +18,7 @@ module SolidCache
      if names.size == 1
        Single.new(names.first)
      else
-       Sharded.new(names, nodes)
+       Sharded.new(names)
      end
    else
      Unmanaged.new
@@ -39,16 +39,27 @@ module SolidCache
      entry_read(key)
    end
 
-   def write_entry(key, entry, raw: false, **options)
+   def write_entry(key, entry, raw: false, unless_exist: false, **options)
      payload = serialize_entry(entry, raw: raw, **options)
-     # No-op for us, but this writes it to the local cache
-     write_serialized_entry(key, payload, raw: raw, **options)
 
-     entry_write(key, payload)
+     if unless_exist
+       written = false
+       entry_lock_and_write(key) do |value|
+         if value.nil? || deserialize_entry(value, **options).expired?
+           written = true
+           payload
+         end
+       end
+     else
+       written = entry_write(key, payload)
+     end
+
+     write_serialized_entry(key, payload, raw: raw, returning: written, **options)
+     written
    end
 
-   def write_serialized_entry(key, payload, raw: false, unless_exist: false, expires_in: nil, race_condition_ttl: nil, **options)
-     true
+   def write_serialized_entry(key, payload, raw: false, unless_exist: false, expires_in: nil, race_condition_ttl: nil, returning: true, **options)
+     returning
    end
 
    def read_serialized_entries(keys)
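
The new `unless_exist` branch above backs the standard `ActiveSupport::Cache` option of the same name, now taking a row lock so only one writer can claim a missing or expired key:

```ruby
# Returns false if a live entry already exists (illustrative usage).
Rails.cache.write("rate_limit:42", 1, unless_exist: true)
```
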
@@ -0,0 +1,108 @@
+ # frozen_string_literal: true
+
+ module SolidCache
+   class Store
+     module Connections
+       attr_reader :shard_options
+
+       def initialize(options = {})
+         super(options)
+         if options[:clusters].present?
+           if options[:clusters].size > 1
+             raise ArgumentError, "Multiple clusters are no longer supported"
+           else
+             ActiveSupport.deprecator.warn(":clusters is deprecated, use :shards instead.")
+           end
+           @shard_options = options.fetch(:clusters).first[:shards]
+         elsif options[:cluster].present?
+           ActiveSupport.deprecator.warn(":cluster is deprecated, use :shards instead.")
+           @shard_options = options.fetch(:cluster, {})[:shards]
+         else
+           @shard_options = options.fetch(:shards, nil)
+         end
+
+         if [ Array, NilClass ].none? { |klass| @shard_options.is_a? klass }
+           raise ArgumentError, "`shards` is a `#{@shard_options.class.name}`, it should be Array or nil"
+         end
+       end
+
+       def with_each_connection(async: false, &block)
+         return enum_for(:with_each_connection) unless block_given?
+
+         connections.with_each do
+           execute(async, &block)
+         end
+       end
+
+       def with_connection_for(key, async: false, &block)
+         connections.with_connection_for(key) do
+           execute(async, &block)
+         end
+       end
+
+       def with_connection(name, async: false, &block)
+         connections.with(name) do
+           execute(async, &block)
+         end
+       end
+
+       def group_by_connection(keys)
+         connections.assign(keys)
+       end
+
+       def connection_names
+         connections.names
+       end
+
+       def connections
+         @connections ||= SolidCache::Connections.from_config(@shard_options)
+       end
+
+       private
+         def setup!
+           connections
+         end
+
+         def reading_key(key, failsafe:, failsafe_returning: nil, &block)
+           failsafe(failsafe, returning: failsafe_returning) do
+             with_connection_for(key, &block)
+           end
+         end
+
+         def reading_keys(keys, failsafe:, failsafe_returning: nil)
+           group_by_connection(keys).map do |connection, keys|
+             failsafe(failsafe, returning: failsafe_returning) do
+               with_connection(connection) do
+                 yield keys
+               end
+             end
+           end
+         end
+
+
+         def writing_key(key, failsafe:, failsafe_returning: nil, &block)
+           failsafe(failsafe, returning: failsafe_returning) do
+             with_connection_for(key, &block)
+           end
+         end
+
+         def writing_keys(entries, failsafe:, failsafe_returning: nil)
+           group_by_connection(entries).map do |connection, entries|
+             failsafe(failsafe, returning: failsafe_returning) do
+               with_connection(connection) do
+                 yield entries
+               end
+             end
+           end
+         end
+
+         def writing_all(failsafe:, failsafe_returning: nil, &block)
+           connection_names.map do |connection|
+             failsafe(failsafe, returning: failsafe_returning) do
+               with_connection(connection, &block)
+             end
+           end.first
+         end
+     end
+   end
+ end
@@ -29,7 +29,9 @@ module SolidCache
 
    def entry_lock_and_write(key, &block)
      writing_key(key, failsafe: :increment) do
-       Entry.lock_and_write(key, &block)
+       Entry.lock_and_write(key) do |value|
+         block.call(value).tap { |result| track_writes(1) if result }
+       end
      end
    end
 
@@ -46,17 +48,17 @@ module SolidCache
    end
 
    def entry_write(key, payload)
-     writing_key(key, failsafe: :write_entry, failsafe_returning: nil) do |cluster|
+     writing_key(key, failsafe: :write_entry, failsafe_returning: nil) do
        Entry.write(key, payload)
-       cluster.track_writes(1)
+       track_writes(1)
        true
      end
    end
 
    def entry_write_multi(entries)
-     writing_keys(entries, failsafe: :write_multi_entries, failsafe_returning: false) do |cluster, entries|
+     writing_keys(entries, failsafe: :write_multi_entries, failsafe_returning: false) do |entries|
        Entry.write_multi(entries)
-       cluster.track_writes(entries.count)
+       track_writes(entries.count)
        true
      end
    end
@@ -1,7 +1,7 @@
  # frozen_string_literal: true
 
  module SolidCache
-   class Cluster
+   class Store
      module Execution
        def initialize(options = {})
          super(options)
@@ -16,7 +16,7 @@ module SolidCache
        @background << ->() do
          wrap_in_rails_executor do
            connections.with(current_shard) do
-             instrument(&block)
+             setup_instrumentation(&block)
            end
          end
        rescue Exception => exception
@@ -28,7 +28,7 @@ module SolidCache
        if async
          async(&block)
        else
-         instrument(&block)
+         setup_instrumentation(&block)
        end
      end
 
@@ -44,7 +44,7 @@ module SolidCache
        @active_record_instrumentation
      end
 
-     def instrument(&block)
+     def setup_instrumentation(&block)
        if active_record_instrumentation?
          block.call
        else
@@ -3,7 +3,7 @@
  require "concurrent/atomic/atomic_fixnum"
 
  module SolidCache
-   class Cluster
+   class Store
      module Expiry
        # For every write that we do, we attempt to delete EXPIRY_MULTIPLIER times as many records.
        # This ensures there is downward pressure on the cache size while there is valid data to delete
@@ -1,10 +1,10 @@
  # frozen_string_literal: true
 
  module SolidCache
-   class Cluster
+   class Store
      module Stats
        def initialize(options = {})
-         super()
+         super(options)
        end
 
        def stats
@@ -2,7 +2,7 @@
 
  module SolidCache
    class Store < ActiveSupport::Cache::Store
-     include Api, Clusters, Entries, Failsafe
+     include Api, Connections, Entries, Execution, Expiry, Failsafe, Stats
      prepend ActiveSupport::Cache::Strategy::LocalCache
 
      def initialize(options = {})
@@ -16,9 +16,5 @@ module SolidCache
      def setup!
        super
      end
-
-     def stats
-       primary_cluster.stats
-     end
    end
  end
@@ -1,5 +1,5 @@
  # frozen_string_literal: true
 
  module SolidCache
-   VERSION = "0.6.0"
+   VERSION = "0.7.0"
  end
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: solid_cache
  version: !ruby/object:Gem::Version
-   version: 0.6.0
+   version: 0.7.0
  platform: ruby
  authors:
  - Donal McBreen
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2024-03-20 00:00:00.000000000 Z
+ date: 2024-07-26 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
    name: activerecord
@@ -120,11 +120,6 @@ files:
  - lib/generators/solid_cache/install/install_generator.rb
  - lib/generators/solid_cache/install/templates/config/solid_cache.yml.tt
  - lib/solid_cache.rb
- - lib/solid_cache/cluster.rb
- - lib/solid_cache/cluster/connections.rb
- - lib/solid_cache/cluster/execution.rb
- - lib/solid_cache/cluster/expiry.rb
- - lib/solid_cache/cluster/stats.rb
  - lib/solid_cache/configuration.rb
  - lib/solid_cache/connections.rb
  - lib/solid_cache/connections/sharded.rb
@@ -134,9 +129,12 @@ files:
  - lib/solid_cache/maglev_hash.rb
  - lib/solid_cache/store.rb
  - lib/solid_cache/store/api.rb
- - lib/solid_cache/store/clusters.rb
+ - lib/solid_cache/store/connections.rb
  - lib/solid_cache/store/entries.rb
+ - lib/solid_cache/store/execution.rb
+ - lib/solid_cache/store/expiry.rb
  - lib/solid_cache/store/failsafe.rb
+ - lib/solid_cache/store/stats.rb
  - lib/solid_cache/version.rb
  - lib/tasks/solid_cache_tasks.rake
  homepage: http://github.com/rails/solid_cache
@@ -162,7 +160,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
  - !ruby/object:Gem::Version
    version: '0'
  requirements: []
- rubygems_version: 3.5.6
+ rubygems_version: 3.5.11
  signing_key:
  specification_version: 4
  summary: A database backed ActiveSupport::Cache::Store
@@ -1,55 +0,0 @@
- # frozen_string_literal: true
-
- module SolidCache
-   class Cluster
-     module Connections
-       attr_reader :shard_options
-
-       def initialize(options = {})
-         super(options)
-         @shard_options = options.fetch(:shards, nil)
-
-         if [ Hash, Array, NilClass ].none? { |klass| @shard_options.is_a? klass }
-           raise ArgumentError, "`shards` is a `#{@shard_options.class.name}`, it should be one of Array, Hash or nil"
-         end
-       end
-
-       def with_each_connection(async: false, &block)
-         return enum_for(:with_each_connection) unless block_given?
-
-         connections.with_each do
-           execute(async, &block)
-         end
-       end
-
-       def with_connection_for(key, async: false, &block)
-         connections.with_connection_for(key) do
-           execute(async, &block)
-         end
-       end
-
-       def with_connection(name, async: false, &block)
-         connections.with(name) do
-           execute(async, &block)
-         end
-       end
-
-       def group_by_connection(keys)
-         connections.assign(keys)
-       end
-
-       def connection_names
-         connections.names
-       end
-
-       def connections
-         @connections ||= SolidCache::Connections.from_config(@shard_options)
-       end
-
-       private
-         def setup!
-           connections
-         end
-     end
-   end
- end
@@ -1,18 +0,0 @@
- # frozen_string_literal: true
-
- module SolidCache
-   class Cluster
-     include Connections, Execution, Expiry, Stats
-
-     attr_reader :error_handler
-
-     def initialize(options = {})
-       @error_handler = options[:error_handler]
-       super(options)
-     end
-
-     def setup!
-       super
-     end
-   end
- end
@@ -1,83 +0,0 @@
- # frozen_string_literal: true
-
- module SolidCache
-   class Store
-     module Clusters
-       attr_reader :primary_cluster, :clusters
-
-       def initialize(options = {})
-         super(options)
-
-         clusters_options = options.fetch(:clusters) { [ options.fetch(:cluster, {}) ] }
-
-         @clusters = clusters_options.map.with_index do |cluster_options, index|
-           Cluster.new(options.merge(cluster_options).merge(async_writes: index != 0, error_handler: error_handler))
-         end
-
-         @primary_cluster = clusters.first
-       end
-
-       def setup!
-         clusters.each(&:setup!)
-       end
-
-       private
-         def reading_key(key, failsafe:, failsafe_returning: nil, &block)
-           failsafe(failsafe, returning: failsafe_returning) do
-             primary_cluster.with_connection_for(key, &block)
-           end
-         end
-
-         def reading_keys(keys, failsafe:, failsafe_returning: nil)
-           connection_keys = primary_cluster.group_by_connection(keys)
-
-           connection_keys.map do |connection, keys|
-             failsafe(failsafe, returning: failsafe_returning) do
-               primary_cluster.with_connection(connection) do
-                 yield keys
-               end
-             end
-           end
-         end
-
-
-         def writing_key(key, failsafe:, failsafe_returning: nil)
-           first_cluster_sync_rest_async do |cluster, async|
-             failsafe(failsafe, returning: failsafe_returning) do
-               cluster.with_connection_for(key, async: async) do
-                 yield cluster
-               end
-             end
-           end
-         end
-
-         def writing_keys(entries, failsafe:, failsafe_returning: nil)
-           first_cluster_sync_rest_async do |cluster, async|
-             connection_entries = cluster.group_by_connection(entries)
-
-             connection_entries.map do |connection, entries|
-               failsafe(failsafe, returning: failsafe_returning) do
-                 cluster.with_connection(connection, async: async) do
-                   yield cluster, entries
-                 end
-               end
-             end
-           end
-         end
-
-         def writing_all(failsafe:, failsafe_returning: nil, &block)
-           first_cluster_sync_rest_async do |cluster, async|
-             cluster.connection_names.map do |connection|
-               failsafe(failsafe, returning: failsafe_returning) do
-                 cluster.with_connection(connection, async: async, &block)
-               end
-             end
-           end.first
-         end
-
-         def first_cluster_sync_rest_async
-           clusters.map.with_index { |cluster, index| yield cluster, index != 0 }.first
-         end
-     end
-   end
- end