pecorino 0.7.2 → 0.7.4

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: df8db59f2303035498ca51c54787f664fbc000148965e3782e188e26b90fea31
- data.tar.gz: 7e67b40c92846045b0e38c8e706622d6114e307b49a8efaaec287db144734fdf
+ metadata.gz: 392d3fe294cb751ad452716ef2b569f48f536a57464bdac39379042c29aa8242
+ data.tar.gz: 36309ce687caa5e2d32bf5e5cc7c862e264a7065cf5f211062e9ae7f51118ced
  SHA512:
- metadata.gz: 6906f549004f30b57bbf9c0ca9d2a36ba1183acf0aec863563786d9118d8f0a1094d0dc2b6eddcdeb7929be7af1f017df0063a5073a798c74cd2337888a4a3e0
- data.tar.gz: d16cc4946a3205488afe0ab00cabc6f8a482e59751468513bec02fa0685ec77676db58b5bcb9d92d7d2fb7631e0a39fe823cfe2ecade824a0e241e45a37510fd
+ metadata.gz: 14b6cadec3609d946d786330e5c912e2ae57054ba6f740d4a0dd4b82bc4660417df2bd71e8ac97fa44990b47f20f60540d82a2299c5a176da4b8c191752f46ba
+ data.tar.gz: 5059d7a546df2a1a6822a70ed73290165f7506ba6f36f41daf3779cad158781768c004aa5bd063f24452c15dcb6d5889efd16362b3e79ef51e2f88bbcaffc3e8
data/.yardopts ADDED
@@ -0,0 +1 @@
+ --markup markdown - README.md CHANGELOG.md LICENSE.txt
data/CHANGELOG.md CHANGED
@@ -1,3 +1,11 @@
+ ## 0.7.4
+
+ - Ensure deprecated ActiveRecord::Base.connection is replaced with ActiveRecord::Base.connection_pool.with_connection. This prevents permanent connection checkout
+
+ ## 0.7.3
+
+ - Fix a number of YARD issues and generate both .rbi and .rbs typedefs
+
  ## 0.7.2

  - Set up a workable test harness for testing on both Rails 8 (Ruby 3.x) and Rails 7 (Ruby 2.x)
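The 0.7.4 entry refers to the ActiveRecord connection-handling pattern that the adapter changes further down in this diff apply throughout. A minimal sketch of the difference, assuming a standard ActiveRecord setup and using the `pecorino_blocks` table purely for illustration:

```ruby
# Old pattern (as referenced in the 0.7.4 entry above): the connection is
# checked out implicitly and can stay leased to the current thread.
ActiveRecord::Base.connection.execute("DELETE FROM pecorino_blocks WHERE blocked_until < NOW()")

# New pattern: the connection is checked out only for the duration of the
# block and is returned to the pool when the block exits.
ActiveRecord::Base.connection_pool.with_connection do |connection|
  connection.execute("DELETE FROM pecorino_blocks WHERE blocked_until < NOW()")
end
```

With `with_connection` the checkout is scoped to the block, so the connection goes back to the pool instead of staying leased to the thread.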
data/Gemfile ADDED
@@ -0,0 +1,17 @@
+ # frozen_string_literal: true
+
+ source "https://rubygems.org"
+
+ ruby ">= 3.0"
+ gemspec
+
+ gem "pg"
+ gem "sqlite3"
+ gem "activesupport", ">= 8"
+ gem "rake", "~> 13.0"
+ gem "minitest", "~> 5.0"
+ gem "redis", "~> 5", "< 6"
+ gem "yard"
+ gem "standard"
+ gem "sord"
+ gem "redcarpet"
data/README.md CHANGED
@@ -31,7 +31,7 @@ Once the installation is done you can use Pecorino to start defining your thrott
  We call this pattern **prefix usage** - apply throttle before allowing the action to proceed. This is more secure than registering an action after it has taken place.

  ```ruby
- throttle = Pecorino::Throttle.new(key: "password-attempts-#{request.ip}", over_time: 1.minute, capacity: 5, block_for: 30.minutes)
+ throttle = Pecorino::Throttle.new(key: "password-attempts-#{the_request.ip}", over_time: 1.minute, capacity: 5, block_for: 30.minutes)
  throttle.request!
  ```
  In a Rails controller you can then rescue from this exception to render the appropriate response:
@@ -119,11 +119,11 @@ class WalletController < ApplicationController
  end

  def withdraw
- Wallet.transaction do
- t = Pecorino::Throttle.new("wallet_#{current_user.id}_max_withdrawal", capacity: 200_00, over_time: 5.minutes)
- t.request!(10_00)
- current_user.wallet.withdraw(Money.new(10, "EUR"))
- end
+ Wallet.transaction do
+ t = Pecorino::Throttle.new("wallet_#{current_user.id}_max_withdrawal", capacity: 200_00, over_time: 5.minutes)
+ t.request!(10_00)
+ current_user.wallet.withdraw(Money.new(10, "EUR"))
+ end
  end
  end
  ```
@@ -189,8 +189,10 @@ The Pecorino buckets and blocks are stateful. If you are not running tests with
  ```ruby
  setup do
  # Delete all transient records
- ActiveRecord::Base.connection.execute("TRUNCATE TABLE pecorino_blocks")
- ActiveRecord::Base.connection.execute("TRUNCATE TABLE pecorino_leaky_buckets")
+ ActiveRecord::Base.connection_pool.with_connection do |connection|
+ connection.execute("TRUNCATE TABLE pecorino_blocks")
+ connection.execute("TRUNCATE TABLE pecorino_leaky_buckets")
+ end
  end
  ```

@@ -201,7 +203,7 @@ If you are using Redis, you may want to ensure it gets truncated/reset for every
  If a throttle is triggered, Pecorino sets a "block" record for that throttle key. Any request to that throttle will fail until the block is lifted. If you are getting hammered by requests which are getting throttled, it might be a good idea to install a caching layer which will respond with a "rate limit exceeded" error even before hitting your database - until the moment when the block would be lifted. You can use any [ActiveSupport::Cache::Store](https://api.rubyonrails.org/classes/ActiveSupport/Cache/Store.html) to store your blocks. If you have a fast Rails cache configured, create a wrapped throttle:

  ```ruby
- throttle = Pecorino::Throttle.new(key: "ip-#{request.ip}", capacity: 10, over_time: 2.seconds, block_for: 2.minutes)
+ throttle = Pecorino::Throttle.new(key: "ip-#{the_request.ip}", capacity: 10, over_time: 2.seconds, block_for: 2.minutes)
  cached_throttle = Pecorino::CachedThrottle.new(Rails.cache, throttle)
  cached_throttle.request!
  ```
@@ -214,7 +216,7 @@ config.pecorino_throttle_cache = ActiveSupport::Cache::MemoryStore.new

  # in your controller

- throttle = Pecorino::Throttle.new(key: "ip-#{request.ip}", capacity: 10, over_time: 2.seconds, block_for: 2.minutes)
+ throttle = Pecorino::Throttle.new(key: "ip-#{the_request.ip}", capacity: 10, over_time: 2.seconds, block_for: 2.minutes)
  cached_throttle = Pecorino::CachedThrottle.new(Rails.application.config.pecorino_throttle_cache, throttle)
  cached_throttle.request!
  ```
@@ -224,21 +226,24 @@ cached_throttle.request!
  Throttles and leaky buckets are transient resources. If you are using Postgres replication, it might be prudent to set the Pecorino tables to `UNLOGGED` which will exclude them from replication - and save you bandwidth and storage on your RR. To do so, add the following statements to your migration:

  ```ruby
- ActiveRecord::Base.connection.execute("ALTER TABLE pecorino_leaky_buckets SET UNLOGGED")
- ActiveRecord::Base.connection.execute("ALTER TABLE pecorino_blocks SET UNLOGGED")
+ ActiveRecord::Base.connection_pool.with_connection do |connection|
+ connection.execute("ALTER TABLE pecorino_leaky_buckets SET UNLOGGED")
+ connection.execute("ALTER TABLE pecorino_blocks SET UNLOGGED")
+ end
  ```

  ## Development

- After checking out the repo, set the Gemfile appropriate to your Ruby version and run Rake for tests, lint etc.
- Note that it is important to use the appropriate Gemfile per Ruby version and Rails version you want to test with. Due to some dependency shenanigans it is currently not very easy to have a single Gemfile.
+ After checking out the repo, run `bundle install` and then do the thing you need to do.
+
+ **Note:** CI runs other Gemfiles, because we can't test all Ruby versions and Rails versions just by swapping Gemfiles. If you need to debug something with a particular Ruby and Rails version, do this:

  ```bash
- $ rbenv local 2.7.7 && export BUNDLE_GEMFILE=gemfiles/Gemfile_ruby27_rails7 && bundle install
+ $ bundle rbenv local 2.7.7 && export BUNDLE_GEMFILE=gemfiles/Gemfile_ruby27_rails7 && bundle install
  $ bundle exec rake
  ```

- Then proceed to develop as normal. CI will run both the oldest supported dependencies and newest supported dependencies.
+ Then proceed as normal. Make sure to unset `BUNDLE_GEMFILE` when you are done. CI will run both the oldest supported dependencies and newest supported dependencies.

  ## Contributing

data/Rakefile CHANGED
@@ -20,6 +20,7 @@ end

  task :generate_typedefs do
  `bundle exec sord rbi/pecorino.rbi`
+ `bundle exec sord rbi/pecorino.rbs`
  end

  task default: [:test, :standard, :generate_typedefs]
data/lib/pecorino/adapters/postgres_adapter.rb CHANGED
@@ -30,7 +30,7 @@ class Pecorino::Adapters::PostgresAdapter

  # If the return value of the query is a NULL it means no such bucket exists,
  # so we assume the bucket is empty
- current_level = @model_class.connection.uncached { @model_class.connection.select_value(sql) } || 0.0
+ current_level = @model_class.connection_pool.with_connection { |connection| connection.uncached { connection.select_value(sql) } } || 0.0
  [current_level, capacity - current_level.abs < 0.01]
  end

@@ -83,7 +83,7 @@ class Pecorino::Adapters::PostgresAdapter
  # query as a repeat (since we use "select_one" for the RETURNING bit) and will not call into Postgres
  # correctly, thus the clock_timestamp() value would be frozen between calls. We don't want that here.
  # See https://stackoverflow.com/questions/73184531/why-would-postgres-clock-timestamp-freeze-inside-a-rails-unit-test
- upserted = @model_class.connection.uncached { @model_class.connection.select_one(sql) }
+ upserted = @model_class.connection_pool.with_connection { |connection| connection.uncached { connection.select_one(sql) } }
  capped_level_after_fillup, at_capacity = upserted.fetch("level"), upserted.fetch("at_capacity")
  [capped_level_after_fillup, at_capacity]
  end
@@ -141,7 +141,7 @@ class Pecorino::Adapters::PostgresAdapter
  level AS level_after
  SQL

- upserted = @model_class.connection.uncached { @model_class.connection.select_one(sql) }
+ upserted = @model_class.connection_pool.with_connection { |connection| connection.uncached { connection.select_one(sql) } }
  level_after = upserted.fetch("level_after")
  level_before = upserted.fetch("level_before")
  [level_after, level_after >= capacity, level_after != level_before]
@@ -159,19 +159,21 @@ class Pecorino::Adapters::PostgresAdapter
  blocked_until = GREATEST(EXCLUDED.blocked_until, t.blocked_until)
  RETURNING blocked_until
  SQL
- @model_class.connection.uncached { @model_class.connection.select_value(block_set_query) }
+ @model_class.connection_pool.with_connection { |connection| connection.uncached { connection.select_value(block_set_query) } }
  end

  def blocked_until(key:)
  block_check_query = @model_class.sanitize_sql_array([<<~SQL, key])
  SELECT blocked_until FROM pecorino_blocks WHERE key = ? AND blocked_until >= clock_timestamp() LIMIT 1
  SQL
- @model_class.connection.uncached { @model_class.connection.select_value(block_check_query) }
+ @model_class.connection_pool.with_connection { |connection| connection.uncached { connection.select_value(block_check_query) } }
  end

  def prune
- @model_class.connection.execute("DELETE FROM pecorino_blocks WHERE blocked_until < NOW()")
- @model_class.connection.execute("DELETE FROM pecorino_leaky_buckets WHERE may_be_deleted_after < NOW()")
+ @model_class.connection_pool.with_connection do |connection|
+ connection.execute("DELETE FROM pecorino_blocks WHERE blocked_until < NOW()")
+ connection.execute("DELETE FROM pecorino_leaky_buckets WHERE may_be_deleted_after < NOW()")
+ end
  end

  def create_tables(active_record_schema)
data/lib/pecorino/adapters/sqlite_adapter.rb CHANGED
@@ -37,7 +37,7 @@ class Pecorino::Adapters::SqliteAdapter

  # If the return value of the query is a NULL it means no such bucket exists,
  # so we assume the bucket is empty
- current_level = @model_class.connection.uncached { @model_class.connection.select_value(sql) } || 0.0
+ current_level = @model_class.connection_pool.with_connection { |connection| connection.uncached { connection.select_value(sql) } } || 0.0
  [current_level, capacity - current_level.abs < 0.01]
  end

@@ -91,7 +91,7 @@ class Pecorino::Adapters::SqliteAdapter
  # query as a repeat (since we use "select_one" for the RETURNING bit) and will not call into Postgres
  # correctly, thus the clock_timestamp() value would be frozen between calls. We don't want that here.
  # See https://stackoverflow.com/questions/73184531/why-would-postgres-clock-timestamp-freeze-inside-a-rails-unit-test
- upserted = @model_class.connection.uncached { @model_class.connection.select_one(sql) }
+ upserted = @model_class.connection_pool.with_connection { |connection| connection.uncached { connection.select_one(sql) } }
  capped_level_after_fillup, one_if_did_overflow = upserted.fetch("level"), upserted.fetch("did_overflow")
  [capped_level_after_fillup, one_if_did_overflow == 1]
  end
@@ -130,7 +130,7 @@ class Pecorino::Adapters::SqliteAdapter
  -- so that it can't be deleted between our INSERT and our UPDATE
  may_be_deleted_after = EXCLUDED.may_be_deleted_after
  SQL
- @model_class.connection.execute(insert_sql)
+ @model_class.connection_pool.with_connection { |connection| connection.execute(insert_sql) }

  sql = @model_class.sanitize_sql_array([<<~SQL, query_params])
  -- With SQLite MATERIALIZED has to be used so that level_post is calculated before the UPDATE takes effect
@@ -156,7 +156,7 @@ class Pecorino::Adapters::SqliteAdapter
  level AS level_after
  SQL

- upserted = @model_class.connection.uncached { @model_class.connection.select_one(sql) }
+ upserted = @model_class.connection_pool.with_connection { |connection| connection.uncached { connection.select_one(sql) } }
  level_after = upserted.fetch("level_after")
  level_before = upserted.fetch("level_before")
  [level_after, level_after >= capacity, level_after != level_before]
@@ -174,7 +174,7 @@ class Pecorino::Adapters::SqliteAdapter
  blocked_until = MAX(EXCLUDED.blocked_until, t.blocked_until)
  RETURNING blocked_until;
  SQL
- blocked_until_s = @model_class.connection.uncached { @model_class.connection.select_value(block_set_query) }
+ blocked_until_s = @model_class.connection_pool.with_connection { |connection| connection.uncached { connection.select_value(block_set_query) } }
  Time.at(blocked_until_s)
  end

@@ -188,14 +188,16 @@ class Pecorino::Adapters::SqliteAdapter
  WHERE
  key = :key AND blocked_until >= :now_s LIMIT 1
  SQL
- blocked_until_s = @model_class.connection.uncached { @model_class.connection.select_value(block_check_query) }
+ blocked_until_s = @model_class.connection_pool.with_connection { |connection| connection.uncached { connection.select_value(block_check_query) } }
  blocked_until_s && Time.at(blocked_until_s)
  end

  def prune
  now_s = Time.now.to_f
- @model_class.connection.execute("DELETE FROM pecorino_blocks WHERE blocked_until < ?", now_s)
- @model_class.connection.execute("DELETE FROM pecorino_leaky_buckets WHERE may_be_deleted_after < ?", now_s)
+ @model_class.connection_pool.with_connection do |connection|
+ connection.execute("DELETE FROM pecorino_blocks WHERE blocked_until < ?", now_s)
+ connection.execute("DELETE FROM pecorino_leaky_buckets WHERE may_be_deleted_after < ?", now_s)
+ end
  end

  def create_tables(active_record_schema)
data/lib/pecorino/cached_throttle.rb CHANGED
@@ -15,6 +15,10 @@ class Pecorino::CachedThrottle
  @throttle = throttle
  end

+ # Increments the cached throttle by the given number of tokens. If there is currently a known cached block on that throttle
+ # an exception will be raised immediately instead of querying the actual throttle data. Otherwise the call gets forwarded
+ # to the underlying throttle.
+ #
  # @see Pecorino::Throttle#request!
  def request!(n = 1)
  blocked_state = read_cached_blocked_state
@@ -28,9 +32,9 @@ class Pecorino::CachedThrottle
  end
  end

- # Returns cached `state` for the throttle if there is a currently active block for that throttle in the cache. Otherwise forwards to underlying throttle.
+ # Returns the cached `state` for the throttle if there is a currently active block for that throttle in the cache. Otherwise forwards to underlying throttle.
  #
- # @see Pecorino::Throttle#request
+ # @see Pecorino::Throttle#request!
  def request(n = 1)
  blocked_state = read_cached_blocked_state
  return blocked_state if blocked_state&.blocked?
data/lib/pecorino/throttle.rb CHANGED
@@ -91,7 +91,7 @@ class Pecorino::Throttle

  # The key for that throttle. Each key defines a unique throttle based on either a given name or
  # discriminators. If there is a component you want to key your throttle by, include it in the
- # `key` keyword argument to the constructor, like `"t-ip-#{request.ip}"`
+ # `key` keyword argument to the constructor, like `"t-ip-#{your_rails_request.ip}"`
  #
  # @return [String]
  attr_reader :key
@@ -100,8 +100,8 @@ class Pecorino::Throttle
  # @param block_for[Numeric] the number of seconds to block any further requests for. Defaults to time it takes
  # the bucket to leak out to the level of 0
  # @param adapter[Pecorino::Adapters::BaseAdapter] a compatible adapter
- # @param leaky_bucket_options Options for `Pecorino::LeakyBucket.new`
- # @see PecorinoLeakyBucket.new
+ # @param leaky_bucket_options Options for {Pecorino::LeakyBucket.new}
+ # @see Pecorino::LeakyBucket.new
  def initialize(key:, block_for: nil, adapter: Pecorino.adapter, **leaky_bucket_options)
  @adapter = adapter
  leaky_bucket_options.delete(:adapter)
@@ -129,16 +129,16 @@ class Pecorino::Throttle
  # The exception can be rescued later to provide a 429 response. This method is better
  # to use before performing the unit of work that the throttle is guarding:
  #
+ # If the method call returns it means that the request is not getting throttled.
+ #
  # @example
  #   begin
- #     t.request!
- #     Note.create!(note_params)
+ #     t.request!
+ #     Note.create!(note_params)
  #   rescue Pecorino::Throttle::Throttled => e
- #     [429, {"Retry-After" => e.retry_after.to_s}, []]
+ #     [429, {"Retry-After" => e.retry_after.to_s}, []]
  #   end
- #
- # If the method call succeeds it means that the request is not getting throttled.
- #
+ # @param n [Numeric] how many tokens to place into the bucket or remove from the bucket. May be fractional or negative.
  # @return [State] the state of the throttle after filling up the leaky bucket / trying to pass the block
  def request!(n = 1)
  request(n).tap do |state_after|
@@ -156,8 +156,8 @@ class Pecorino::Throttle
  #   Entry.create!(entry_params)
  #   t.request
  #   end
- #
- # @return [State] the state of the throttle after filling up the leaky bucket / trying to pass the block
+ # @param n [Numeric] how many tokens to place into the bucket or remove from the bucket. May be fractional or negative.
+ # @return [State] the state of the throttle after the attempt to fill up the leaky bucket
  def request(n = 1)
  existing_blocked_until = Pecorino::Block.blocked_until(key: @key, adapter: @adapter)
  return State.new(existing_blocked_until.utc) if existing_blocked_until
@@ -181,6 +181,7 @@ class Pecorino::Throttle
  # @example
  #   t.throttled { Slack.alert("Things are going wrong") }
  #
+ # @param blk The block to run. Will only run if the throttle accepts the call.
  # @return [Object] the return value of the block if the block gets executed, or `nil` if the call got throttled
  def throttled(&blk)
  return if request(1).blocked?
data/lib/pecorino/version.rb CHANGED
@@ -1,5 +1,5 @@
  # frozen_string_literal: true

  module Pecorino
- VERSION = "0.7.2"
+ VERSION = "0.7.4"
  end
data/lib/pecorino.rb CHANGED
@@ -60,12 +60,12 @@ module Pecorino

  # Returns the database implementation for setting the values atomically. Since the implementation
  # differs per database, this method will return a different adapter depending on which database is
- # being used
+ # being used.
  #
- # @param adapter[Pecorino::Adapters::BaseAdapter]
+ # @return [Pecorino::Adapters::BaseAdapter]
  def self.default_adapter_from_main_database
  model_class = ActiveRecord::Base
- adapter_name = model_class.connection.adapter_name
+ adapter_name = model_class.connection_pool.with_connection(&:adapter_name)
  case adapter_name
  when /postgres/i
  Pecorino::Adapters::PostgresAdapter.new(model_class)
data/rbi/pecorino.rbi CHANGED
@@ -1,6 +1,6 @@
  # typed: strong
  module Pecorino
- VERSION = T.let("0.7.1", T.untyped)
+ VERSION = T.let("0.7.3", T.untyped)

  # Deletes stale leaky buckets and blocks which have expired. Run this method regularly to
  # avoid accumulating too many unused rows in your tables.
@@ -36,18 +36,15 @@ module Pecorino
  sig { returns(Pecorino::Adapters::BaseAdapter) }
  def self.adapter; end

- # sord omit - no YARD return type given, using untyped
  # Returns the database implementation for setting the values atomically. Since the implementation
  # differs per database, this method will return a different adapter depending on which database is
- # being used
- #
- # _@param_ `adapter`
- sig { returns(T.untyped) }
+ # being used.
+ sig { returns(Pecorino::Adapters::BaseAdapter) }
  def self.default_adapter_from_main_database; end

  module Adapters
  # An adapter allows Pecorino throttles, leaky buckets and other
- # resources to interfact to a data storage backend - a database, usually.
+ # resources to interface with a data storage backend - a database, usually.
  class BaseAdapter
  # Returns the state of a leaky bucket. The state should be a tuple of two
  # values: the current level (Float) and whether the bucket is now at capacity (Boolean)
@@ -503,9 +500,9 @@ module Pecorino
  #
  # _@param_ `adapter` — a compatible adapter
  #
- # _@param_ `leaky_bucket_options` — Options for `Pecorino::LeakyBucket.new`
+ # _@param_ `leaky_bucket_options` — Options for {Pecorino::LeakyBucket.new}
  #
- # _@see_ `PecorinoLeakyBucket.new`
+ # _@see_ `Pecorino::LeakyBucket.new`
  sig do
  params(
  key: String,
@@ -526,7 +523,6 @@ module Pecorino
  sig { params(n_tokens: Float).returns(T::Boolean) }
  def able_to_accept?(n_tokens = 1); end

- # sord omit - no YARD type given for "n", using untyped
  # Register that a request is being performed. Will raise Throttled
  # if there is a block in place for that throttle, or if the bucket cannot accept
  # this fillup and the block has just been installed as a result of this particular request.
@@ -534,28 +530,31 @@ module Pecorino
  # The exception can be rescued later to provide a 429 response. This method is better
  # to use before performing the unit of work that the throttle is guarding:
  #
- # If the method call succeeds it means that the request is not getting throttled.
+ # If the method call returns it means that the request is not getting throttled.
+ #
+ # _@param_ `n` — how many tokens to place into the bucket or remove from the bucket. May be fractional or negative.
  #
  # _@return_ — the state of the throttle after filling up the leaky bucket / trying to pass the block
  #
  # ```ruby
  # begin
- #   t.request!
- #   Note.create!(note_params)
+ #   t.request!
+ #   Note.create!(note_params)
  # rescue Pecorino::Throttle::Throttled => e
- #   [429, {"Retry-After" => e.retry_after.to_s}, []]
+ #   [429, {"Retry-After" => e.retry_after.to_s}, []]
  # end
  # ```
- sig { params(n: T.untyped).returns(State) }
+ sig { params(n: Numeric).returns(State) }
  def request!(n = 1); end

- # sord omit - no YARD type given for "n", using untyped
  # Register that a request is being performed. Will not raise any exceptions but return
  # the time at which the block will be lifted if a block resulted from this request or
  # was already in effect. Can be used for registering actions which already took place,
  # but should result in subsequent actions being blocked.
  #
- # _@return_the state of the throttle after filling up the leaky bucket / trying to pass the block
+ # _@param_ `n` how many tokens to place into the bucket or remove from the bucket. May be fractional or negative.
+ #
+ # _@return_ — the state of the throttle after the attempt to fill up the leaky bucket
  #
  # ```ruby
  # if t.able_to_accept?
@@ -563,7 +562,7 @@ module Pecorino
  #   t.request
  # end
  # ```
- sig { params(n: T.untyped).returns(State) }
+ sig { params(n: Numeric).returns(State) }
  def request(n = 1); end

  # Fillup the throttle with 1 request and then perform the passed block. This is useful to perform actions which should
@@ -571,6 +570,8 @@ module Pecorino
  # the passed block will be executed. If the throttle is in the blocked state or if the call puts the throttle in
  # the blocked state the block will not be executed
  #
+ # _@param_ `blk` — The block to run. Will only run if the throttle accepts the call.
+
  # _@return_ — the return value of the block if the block gets executed, or `nil` if the call got throttled
  #
  # ```ruby
@@ -581,7 +582,7 @@ module Pecorino

  # The key for that throttle. Each key defines a unique throttle based on either a given name or
  # discriminators. If there is a component you want to key your throttle by, include it in the
- # `key` keyword argument to the constructor, like `"t-ip-#{request.ip}"`
+ # `key` keyword argument to the constructor, like `"t-ip-#{your_rails_request.ip}"`
  sig { returns(String) }
  attr_reader :key

@@ -835,6 +836,9 @@ module Pecorino

  # sord omit - no YARD type given for "n", using untyped
  # sord omit - no YARD return type given, using untyped
+ # Increments the cached throttle by the given number of tokens. If there is currently a known cached block on that throttle
+ # an exception will be raised immediately instead of querying the actual throttle data. Otherwise the call gets forwarded
+ # to the underlying throttle.
  #
  # _@see_ `Pecorino::Throttle#request!`
  sig { params(n: T.untyped).returns(T.untyped) }
@@ -842,9 +846,9 @@ module Pecorino
  def request!(n = 1); end

  # sord omit - no YARD type given for "n", using untyped
  # sord omit - no YARD return type given, using untyped
- # Returns cached `state` for the throttle if there is a currently active block for that throttle in the cache. Otherwise forwards to underlying throttle.
+ # Returns the cached `state` for the throttle if there is a currently active block for that throttle in the cache. Otherwise forwards to underlying throttle.
  #
- # _@see_ `Pecorino::Throttle#request`
+ # _@see_ `Pecorino::Throttle#request!`
  sig { params(n: T.untyped).returns(T.untyped) }
  def request(n = 1); end