pecorino 0.2.0 → 0.4.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: bfcca80bad8895a9b45ed8a3ed0afe06b52a0c6a851d8506b102a82b5375c2a3
-  data.tar.gz: 7f57dd803797acfdf29a8d7cae31854528012af249887ae49e398b992a48f9d4
+  metadata.gz: 445e9997824e9ef7857a31e626e2a6de981d466fd4f5187299cff86533596b13
+  data.tar.gz: 9b31fad0bf017b2a9ee1b4d4f0660376945db76970c65e61ecadeccf706425b6
 SHA512:
-  metadata.gz: dea8f3c693c1d9b5412cdc50cd333d1de6b92d90ec1358ae3c3f5d03fef685649bccd4f6fcc8757e04aa7aba5b13411b325b69b374be0bf30a86d10c16977613
-  data.tar.gz: ee00695a622947cbdc14a300d73d06a850abd2357b3c1357e7904e7c93f2ea5513a4edbb98349a896b5b39cc57b08e889a221e78319e5344f778000dd7059f06
+  metadata.gz: 9b82a54b4e6f721aa2c752d9f7d0503f5b621174826a76594ef9c91a1b943b881c1e377c8ef7c7d4c57d810a4f9be82ac17bad21c12f65570ebf3daba5c51c7d
+  data.tar.gz: 95436d3b317c43d08a6630ad5e4dba1494cdbcae2f3e98ffccfb9cecf896a8ad2f8a02861d987c0288c1cb04b29e7fb1d490bdf37821b7858b472cdb82e6092b
data/CHANGELOG.md CHANGED
@@ -1,3 +1,19 @@
+## [0.4.0] - 2024-01-22
+
+- Use `Bucket#fillup_conditionally` inside Throttle and throttle only when the capacity _would_ be exceeded, as opposed
+  to throttling when capacity has already been exceeded. This allows for finer-grained throttles such as
+  "at most once in", where filling "exactly to capacity" is a requirement. It also provides for more accurate
+  and easier to understand throttling in general.
+- Make sure `Bucket#able_to_accept?` allows the bucket to be filled to capacity, not only to below capacity
+- Improve YARD documentation
+- Allow "conditional fillup" - only add tokens to the leaky bucket if the bucket has enough space.
+- Fix `over_time` leading to an incorrect `leak_rate`. The dividend and divisor were swapped, so the inverse leak rate was being computed.
+
+## [0.3.0] - 2024-01-18
+
+- Allow `over_time` in addition to `leak_rate`, which is a more intuitive parameter to tweak
+- Set the default `block_for` to the time it takes the bucket to leak out completely, instead of 30 seconds
+
 ## [0.2.0] - 2024-01-09
 
 - [Add support for SQLite](https://github.com/cheddar-me/pecorino/pull/9)
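The headline change in 0.4.0 - throttling only when the capacity _would_ be exceeded - can be illustrated with a minimal in-memory leaky bucket. This is a sketch of the semantics only; the real Pecorino bucket is database-backed and also leaks over time, and the names below are illustrative:

```ruby
# Minimal in-memory sketch of the 0.4.0 "conditional fillup" semantics.
# Leaking over time is deliberately omitted to keep the example small.
class SketchBucket
  attr_reader :level

  def initialize(capacity:)
    @capacity = capacity.to_f
    @level = 0.0
  end

  # Pre-0.4.0 style: always add, cap at capacity, report fullness afterwards
  def fillup(n)
    @level = [@level + n, @capacity].min
    @level >= @capacity # full?
  end

  # 0.4.0 style: reject the fillup if it would overflow the bucket.
  # Filling *exactly* to capacity is still accepted.
  def fillup_conditionally(n)
    return false if @level + n > @capacity
    @level += n
    true
  end
end

bucket = SketchBucket.new(capacity: 5)
bucket.fillup_conditionally(5) #=> true - filled exactly to capacity
bucket.fillup_conditionally(1) #=> false - would overflow, level stays at 5.0
```

Note how the conditional variant never lets the caller's write through once it would spill over - which is what enables "at most once in a period" throttles.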
data/README.md CHANGED
@@ -22,10 +22,10 @@ And then execute:
 
 ## Usage
 
-Once the installation is done you can use Pecorino to start defining your throttles. Imagine you have a resource called `vault` and you want to limit the number of updates to it to 5 per second. To achieve that, instantiate a new `Throttle` in your controller or job code, and then trigger it using `Throttle#request!`. A call to `request!` registers 1 token getting added to the bucket. If the bucket is full, or the throttle is currently in "block" mode (has recently been triggered), a `Pecorino::Throttle::Throttled` exception will be raised.
+Once the installation is done you can use Pecorino to start defining your throttles. Imagine you have a resource called `vault` and you want to limit the number of updates to it to 5 per second. To achieve that, instantiate a new `Throttle` in your controller or job code, and then trigger it using `Throttle#request!`. A call to `request!` registers 1 token getting added to the bucket. If the bucket would overflow (your request would spill it over capacity), or the throttle is currently in "block" mode (has recently been triggered), a `Pecorino::Throttle::Throttled` exception will be raised.
 
 ```ruby
-throttle = Pecorino::Throttle.new(key: "vault", leak_rate: 5, capacity: 5)
+throttle = Pecorino::Throttle.new(key: "vault", over_time: 1.second, capacity: 5)
 throttle.request!
 ```
 In a Rails controller you can then rescue from this exception to render the appropriate response:
@@ -58,7 +58,7 @@ return render :capacity_exceeded unless throttle.able_to_accept?
 If you are dealing with a metered resource (like throughput, money, amount of storage...) you can supply the number of tokens to either `request!` or `able_to_accept?` to indicate the desired top-up of the leaky bucket. For example, if you are maintaining user wallets and want to ensure no more than 100 dollars may be taken from the wallet within a certain amount of time, you can do it like so:
 
 ```ruby
-throttle = Pecorino::Throttle.new(key: "wallet_t_#{current_user.id}", leak_rate: 100 / 60.0 / 60.0, capacity: 100, block_for: 60*60*3)
+throttle = Pecorino::Throttle.new(key: "wallet_t_#{current_user.id}", over_time: 1.hour, capacity: 100, block_for: 3.hours)
 throttle.request!(20) # Attempt to withdraw 20 dollars
 throttle.request!(20) # Attempt to withdraw 20 dollars more
 throttle.request!(20) # Attempt to withdraw 20 dollars more
@@ -67,6 +67,8 @@ throttle.request!(20) # Attempt to withdraw 20 dollars more
 throttle.request!(2) # Attempt to withdraw 2 dollars more, will raise `Throttled` and block withdrawals for 3 hours
 ```
 
+## Using just the leaky bucket
+
 Sometimes you don't want to use a throttle, but you want to track the amount added to the leaky bucket over time. A lower-level abstraction is available for that purpose in the form of the `LeakyBucket` class. It will not raise any exceptions and will not install blocks, but will permit you to track a bucket's state over time:
 
 
@@ -77,9 +79,10 @@ sleep 0.2
 b.state #=> Pecorino::LeakyBucket::State(full?: false, level: 1.8)
 ```
 
-Check out the inline YARD documentation for more options.
+Check out the inline YARD documentation for more options. Do take note of the difference between `fillup()` and `fillup_conditionally()`, as you
+might want to pick one or the other depending on your use case.
 
-## Cleaning out stale locks from the database
+## Cleaning out stale buckets and blocks from the database
 
 We recommend running the following bit of code every couple of hours (via cron or similar) to delete the stale blocks and leaky buckets from the system:
 
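For reference, the `over_time:` parameter introduced above is just a friendlier way of specifying the leak rate: as the `LeakyBucket` constructor later in this diff shows, it converts as `leak_rate = capacity / over_time`. A back-of-the-envelope check with the numbers from the README examples (plain Ruby, no gem required):

```ruby
# leak_rate is derived from capacity and over_time: the bucket drains
# `capacity` tokens over `over_time` seconds.
wallet_capacity = 100    # dollars, from the wallet example
over_time_s     = 3600.0 # 1.hour expressed as seconds

wallet_leak_rate = wallet_capacity / over_time_s # tokens per second

vault_leak_rate = 5 / 1.0 # the "vault" example: 5 tokens over 1 second

[wallet_leak_rate.round(5), vault_leak_rate] #=> [0.02778, 5.0]
```

This also explains the changelog entry about the swapped dividend and divisor: computing `over_time / capacity` instead would yield the inverse rate.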
@@ -25,69 +25,114 @@
 # The storage use is one DB row per leaky bucket you need to manage (likely - one throttled entity such
 # as a combination of an IP address + the URL you need to protect). The `key` is an arbitrary string you provide.
 class Pecorino::LeakyBucket
-  State = Struct.new(:level, :full) do
-    # Returns the level of the bucket after the operation on the LeakyBucket
-    # object has taken place. There is a guarantee that no tokens have leaked
-    # from the bucket between the operation and the freezing of the State
-    # struct.
-    #
-    # @!attribute [r] level
-    #   @return [Float]
+  # Returned from `.state` and `.fillup`
+  class State
+    def initialize(level, is_full)
+      @level = level.to_f
+      @full = !!is_full
+    end
+
+    # Returns the level of the bucket
+    # @return [Float]
+    attr_reader :level
 
     # Tells whether the bucket was detected to be full when the operation on
-    # the LeakyBucket was performed. There is a guarantee that no tokens have leaked
-    # from the bucket between the operation and the freezing of the State
-    # struct.
-    #
-    # @!attribute [r] full
-    #   @return [Boolean]
+    # the LeakyBucket was performed.
+    # @return [Boolean]
+    def full?
+      @full
+    end
 
-    alias_method :full?, :full
+    alias_method :full, :full?
+  end
 
-    # Returns the bucket level of the bucket state as a Float
-    #
-    # @return [Float]
-    def to_f
-      level.to_f
+  # Same as `State` but also communicates whether the write has been permitted or not. A conditional fillup
+  # may refuse a write if it would make the bucket overflow
+  class ConditionalFillupResult < State
+    def initialize(level, is_full, accepted)
+      super(level, is_full)
+      @accepted = !!accepted
     end
 
-    # Returns the bucket level of the bucket state rounded to an Integer
-    #
-    # @return [Integer]
-    def to_i
-      level.to_i
+    # Tells whether the bucket did accept the requested fillup
+    # @return [Boolean]
+    def accepted?
+      @accepted
     end
   end
 
+  # The key (name) of the leaky bucket
+  # @return [String]
+  attr_reader :key
+
+  # The leak rate (tokens per second) of the bucket
+  # @return [Float]
+  attr_reader :leak_rate
+
+  # The capacity of the bucket in tokens
+  # @return [Float]
+  attr_reader :capacity
+
   # Creates a new LeakyBucket. The object controls 1 row in the database which is
   # specific to the bucket key.
   #
   # @param key[String] the key for the bucket. The key also gets used
   #   to derive locking keys, so that operations on a particular bucket
   #   are always serialized.
-  # @param leak_rate[Float] the leak rate of the bucket, in tokens per second
+  # @param leak_rate[Float] the leak rate of the bucket, in tokens per second.
+  #   Either `leak_rate` or `over_time` can be used, but not both.
+  # @param over_time[#to_f] over how many seconds the bucket will leak out to 0 tokens.
+  #   The value is assumed to be the number of seconds
+  #   - or a duration which returns the number of seconds from `to_f`.
+  #   Either `leak_rate` or `over_time` can be used, but not both.
   # @param capacity[Numeric] how many tokens is the bucket capped at.
   #   Filling up the bucket using `fillup()` will add to that number, but
   #   the bucket contents will then be capped at this value. So with
   #   bucket_capacity set to 12 and a `fillup(14)` the bucket will reach the level
   #   of 12, and will then immediately start leaking again.
-  def initialize(key:, leak_rate:, capacity:)
+  def initialize(key:, capacity:, leak_rate: nil, over_time: nil)
+    raise ArgumentError, "Either leak_rate: or over_time: must be specified" if leak_rate.nil? && over_time.nil?
+    raise ArgumentError, "Either leak_rate: or over_time: may be specified, but not both" if leak_rate && over_time
+    @leak_rate = leak_rate || (capacity / over_time.to_f)
     @key = key
-    @leak_rate = leak_rate.to_f
     @capacity = capacity.to_f
   end
 
-  # Places `n` tokens in the bucket. Once tokens are placed, the bucket is set to expire
-  # within 2 times the time it would take it to leak to 0, regardless of how many tokens
-  # get put in - since the amount of tokens put in the bucket will always be capped
-  # to the `capacity:` value you pass to the constructor. Calling `fillup` also deletes
-  # leaky buckets which have expired.
+  # Places `n` tokens in the bucket. If the bucket has less free capacity than `n` tokens, it will be
+  # filled to capacity. If the bucket is already full when the fillup is requested, the bucket stays
+  # at capacity.
   #
-  # @param n_tokens[Float]
+  # Once tokens are placed, the bucket is set to expire within 2 times the time it would take it to leak to 0,
+  # regardless of how many tokens get put in - since the amount of tokens put in the bucket will always be capped
+  # to the `capacity:` value you pass to the constructor.
+  #
+  # @param n_tokens[Float] how many tokens to fill up by
   # @return [State] the state of the bucket after the operation
   def fillup(n_tokens)
-    capped_level_after_fillup, did_overflow = Pecorino.adapter.add_tokens(capacity: @capacity, key: @key, leak_rate: @leak_rate, n_tokens: n_tokens)
-    State.new(capped_level_after_fillup, did_overflow)
+    capped_level_after_fillup, is_full = Pecorino.adapter.add_tokens(capacity: @capacity, key: @key, leak_rate: @leak_rate, n_tokens: n_tokens)
+    State.new(capped_level_after_fillup, is_full)
+  end
+
+  # Places `n` tokens in the bucket. If the bucket does not have enough free capacity for `n` tokens, the fillup
+  # will be rejected. This can be used for "exactly once" semantics or just more precise rate limiting. Note that
+  # if the bucket has _exactly_ `n` tokens of free capacity the fillup will be accepted.
+  #
+  # Once tokens are placed, the bucket is set to expire within 2 times the time it would take it to leak to 0,
+  # regardless of how many tokens get put in - since the amount of tokens put in the bucket will always be capped
+  # to the `capacity:` value you pass to the constructor.
+  #
+  # @example
+  #   withdrawals = LeakyBucket.new(key: "wallet-#{user.id}", capacity: 200, over_time: 1.day)
+  #   if withdrawals.fillup_conditionally(amount_to_withdraw).accepted?
+  #     user.wallet.withdraw(amount_to_withdraw)
+  #   else
+  #     raise "You need to wait a bit before withdrawing more"
+  #   end
+  # @param n_tokens[Float] how many tokens to fill up by
+  # @return [ConditionalFillupResult] the state of the bucket after the operation and whether the operation succeeded
+  def fillup_conditionally(n_tokens)
+    capped_level_after_fillup, is_full, did_accept = Pecorino.adapter.add_tokens_conditionally(capacity: @capacity, key: @key, leak_rate: @leak_rate, n_tokens: n_tokens)
+    ConditionalFillupResult.new(capped_level_after_fillup, is_full, did_accept)
   end
 
   # Returns the current state of the bucket, containing the level and whether the bucket is full.
@@ -108,6 +153,6 @@ class Pecorino::LeakyBucket
   # @param n_tokens[Float]
   # @return [boolean]
   def able_to_accept?(n_tokens)
-    (state.level + n_tokens) < @capacity
+    (state.level + n_tokens) <= @capacity
  end
 end
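The `able_to_accept?` boundary fix above (`<` becoming `<=`) is subtle but important: it is what allows "fill exactly to capacity" to pass. The same comparison, isolated into plain Ruby with illustrative numbers:

```ruby
# The able_to_accept? boundary fix, modeled in isolation. With 10 tokens
# in a 12-token bucket, adding 2 tokens lands exactly at capacity.
capacity = 12.0
level    = 10.0
n_tokens = 2.0

old_check = (level + n_tokens) < capacity  # pre-0.4.0: exact fill rejected
new_check = (level + n_tokens) <= capacity # 0.4.0: exact fill accepted

[old_check, new_check] #=> [false, true]
```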
@@ -72,7 +72,7 @@ Pecorino::Postgres = Struct.new(:model_class) do
       RETURNING
         level,
         -- Compare level to the capacity inside the DB so that we won't have rounding issues
-        level >= :capacity AS did_overflow
+        level >= :capacity AS at_capacity
     SQL
 
     # Note the use of .uncached here. The AR query cache will actually see our
@@ -80,8 +80,67 @@ Pecorino::Postgres = Struct.new(:model_class) do
     # correctly, thus the clock_timestamp() value would be frozen between calls. We don't want that here.
     # See https://stackoverflow.com/questions/73184531/why-would-postgres-clock-timestamp-freeze-inside-a-rails-unit-test
     upserted = model_class.connection.uncached { model_class.connection.select_one(sql) }
-    capped_level_after_fillup, did_overflow = upserted.fetch("level"), upserted.fetch("did_overflow")
-    [capped_level_after_fillup, did_overflow]
+    capped_level_after_fillup, at_capacity = upserted.fetch("level"), upserted.fetch("at_capacity")
+    [capped_level_after_fillup, at_capacity]
+  end
+
+  def add_tokens_conditionally(key:, capacity:, leak_rate:, n_tokens:)
+    # Take double the time it takes the bucket to empty under normal circumstances
+    # until the bucket may be deleted.
+    may_be_deleted_after_seconds = (capacity.to_f / leak_rate.to_f) * 2.0
+
+    # Create the leaky bucket if it does not exist, and update
+    # to the new level, taking the leak rate into account - if the bucket exists.
+    query_params = {
+      key: key.to_s,
+      capacity: capacity.to_f,
+      delete_after_s: may_be_deleted_after_seconds,
+      leak_rate: leak_rate.to_f,
+      fillup: n_tokens.to_f
+    }
+
+    sql = model_class.sanitize_sql_array([<<~SQL, query_params])
+      WITH pre AS MATERIALIZED (
+        SELECT
+          -- Note the double clamping here. First we clamp the "current level - leak" to not go below zero,
+          -- then we also clamp the above + fillup to not go below 0
+          GREATEST(0.0,
+            GREATEST(0.0, level - (EXTRACT(EPOCH FROM (clock_timestamp() - last_touched_at)) * :leak_rate)) + :fillup
+          ) AS level_post_with_uncapped_fillup,
+          GREATEST(0.0,
+            level - (EXTRACT(EPOCH FROM (clock_timestamp() - last_touched_at)) * :leak_rate)
+          ) AS level_post
+        FROM pecorino_leaky_buckets
+        WHERE key = :key
+      )
+      INSERT INTO pecorino_leaky_buckets AS t
+        (key, last_touched_at, may_be_deleted_after, level)
+      VALUES
+        (
+          :key,
+          clock_timestamp(),
+          clock_timestamp() + ':delete_after_s second'::interval,
+          GREATEST(0.0,
+            (CASE WHEN :fillup > :capacity THEN 0.0 ELSE :fillup END)
+          )
+        )
+      ON CONFLICT (key) DO UPDATE SET
+        last_touched_at = EXCLUDED.last_touched_at,
+        may_be_deleted_after = EXCLUDED.may_be_deleted_after,
+        level = CASE WHEN (SELECT level_post_with_uncapped_fillup FROM pre) <= :capacity THEN
+          (SELECT level_post_with_uncapped_fillup FROM pre)
+        ELSE
+          (SELECT level_post FROM pre)
+        END
+      RETURNING
+        COALESCE((SELECT level_post FROM pre), 0.0) AS level_before,
+        level AS level_after
+    SQL
+
+    upserted = model_class.connection.uncached { model_class.connection.select_one(sql) }
+    level_after = upserted.fetch("level_after")
+    level_before = upserted.fetch("level_before")
+    [level_after, level_after >= capacity, level_after != level_before]
   end
 
   def set_block(key:, block_for:)
@@ -90,17 +149,17 @@ Pecorino::Postgres = Struct.new(:model_class) do
       INSERT INTO pecorino_blocks AS t
         (key, blocked_until)
       VALUES
-        (:key, NOW() + ':block_for seconds'::interval)
+        (:key, clock_timestamp() + ':block_for seconds'::interval)
      ON CONFLICT (key) DO UPDATE SET
        blocked_until = GREATEST(EXCLUDED.blocked_until, t.blocked_until)
-      RETURNING blocked_until;
+      RETURNING blocked_until
    SQL
    model_class.connection.uncached { model_class.connection.select_value(block_set_query) }
  end
 
  def blocked_until(key:)
    block_check_query = model_class.sanitize_sql_array([<<~SQL, key])
-      SELECT blocked_until FROM pecorino_blocks WHERE key = ? AND blocked_until >= NOW() LIMIT 1
+      SELECT blocked_until FROM pecorino_blocks WHERE key = ? AND blocked_until >= clock_timestamp() LIMIT 1
    SQL
    model_class.connection.uncached { model_class.connection.select_value(block_check_query) }
  end
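The SQL above performs the level computation in-database with double clamping. The same arithmetic, rendered in plain Ruby for illustration (the function name and return shape are illustrative; the real adapter derives "accepted" by comparing the levels before and after the upsert):

```ruby
# Ruby rendition of the level computation the Postgres adapter does in SQL.
# elapsed is the number of seconds since the bucket row was last touched.
def conditional_level(level:, elapsed:, leak_rate:, fillup:, capacity:)
  # Clamp 1: the leaked-out level cannot go below zero
  level_post = [0.0, level - (elapsed * leak_rate)].max
  # Clamp 2: the level including the fillup cannot go below zero either
  # (a negative fillup may be used to "withdraw" tokens)
  uncapped = [0.0, level_post + fillup].max
  if uncapped <= capacity
    [uncapped, true]    # fillup accepted, possibly landing exactly at capacity
  else
    [level_post, false] # fillup rejected, only the leak is applied
  end
end

conditional_level(level: 4.0, elapsed: 2.0, leak_rate: 1.0, fillup: 3.0, capacity: 5.0)
#=> [5.0, true] - leaked down to 2.0, then filled by 3.0, exactly at capacity
```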
@@ -94,6 +94,74 @@ Pecorino::Sqlite = Struct.new(:model_class) do
     [capped_level_after_fillup, one_if_did_overflow == 1]
   end
 
+  def add_tokens_conditionally(key:, capacity:, leak_rate:, n_tokens:)
+    # Take double the time it takes the bucket to empty under normal circumstances
+    # until the bucket may be deleted.
+    may_be_deleted_after_seconds = (capacity.to_f / leak_rate.to_f) * 2.0
+
+    # Create the leaky bucket if it does not exist, and update
+    # to the new level, taking the leak rate into account - if the bucket exists.
+    query_params = {
+      key: key.to_s,
+      capacity: capacity.to_f,
+      delete_after_s: may_be_deleted_after_seconds,
+      leak_rate: leak_rate.to_f,
+      now_s: Time.now.to_f, # See above as to why we are using a time value passed in
+      fillup: n_tokens.to_f,
+      id: SecureRandom.uuid # SQLite3 does not autogenerate UUIDs
+    }
+
+    # Sadly with SQLite we need to do an INSERT first, because otherwise the inserted row is visible
+    # to the WITH clause, so we cannot combine the initial fillup and the update into one statement.
+    # This should be fine however since we will suppress the INSERT on a key conflict
+    insert_sql = model_class.sanitize_sql_array([<<~SQL, query_params])
+      INSERT INTO pecorino_leaky_buckets AS t
+        (id, key, last_touched_at, may_be_deleted_after, level)
+      VALUES
+        (
+          :id,
+          :key,
+          :now_s, -- Precision loss must be avoided here as it is used for calculations
+          DATETIME('now', '+:delete_after_s seconds'), -- Precision loss is acceptable here
+          0.0
+        )
+      ON CONFLICT (key) DO UPDATE SET
+        -- Make sure we extend the lifetime of the row
+        -- so that it can't be deleted between our INSERT and our UPDATE
+        may_be_deleted_after = EXCLUDED.may_be_deleted_after
+    SQL
+    model_class.connection.execute(insert_sql)
+
+    sql = model_class.sanitize_sql_array([<<~SQL, query_params])
+      -- With SQLite, MATERIALIZED has to be used so that level_post is calculated before the UPDATE takes effect
+      WITH pre(level_post_with_uncapped_fillup, level_post) AS MATERIALIZED (
+        SELECT
+          -- Note the double clamping here. First we clamp the "current level - leak" to not go below zero,
+          -- then we also clamp the above + fillup to not go below 0
+          MAX(0.0, MAX(0.0, level - ((:now_s - last_touched_at) * :leak_rate)) + :fillup) AS level_post_with_uncapped_fillup,
+          MAX(0.0, level - ((:now_s - last_touched_at) * :leak_rate)) AS level_post
+        FROM
+          pecorino_leaky_buckets
+        WHERE key = :key
+      ) UPDATE pecorino_leaky_buckets SET
+        last_touched_at = :now_s,
+        may_be_deleted_after = DATETIME('now', '+:delete_after_s seconds'),
+        level = CASE WHEN (SELECT level_post_with_uncapped_fillup FROM pre) <= :capacity THEN
+          (SELECT level_post_with_uncapped_fillup FROM pre)
+        ELSE
+          (SELECT level_post FROM pre)
+        END
+      RETURNING
+        (SELECT level_post FROM pre) AS level_before,
+        level AS level_after
+    SQL
+
+    upserted = model_class.connection.uncached { model_class.connection.select_one(sql) }
+    level_after = upserted.fetch("level_after")
+    level_before = upserted.fetch("level_before")
+    [level_after, level_after >= capacity, level_after != level_before]
+  end
+
   def set_block(key:, block_for:)
     query_params = {id: SecureRandom.uuid, key: key.to_s, block_for: block_for.to_f, now_s: Time.now.to_f}
     block_set_query = model_class.sanitize_sql_array([<<~SQL, query_params])
@@ -14,6 +14,10 @@ class Pecorino::Throttle
     blocked_until ? true : false
   end
 
+  # Returns the number of seconds until the block will be lifted, rounded up to the closest
+  # whole second. This value can be used in a "Retry-After" HTTP response header.
+  #
+  # @return [Integer]
   def retry_after
     (blocked_until - Time.now.utc).ceil
   end
@@ -23,11 +27,14 @@ class Pecorino::Throttle
   # Returns the throttle which raised the exception. Can be used to disambiguate between
   # multiple Throttled exceptions when multiple throttles are applied in a layered fashion:
   #
+  # @example
+  #   begin
   #     ip_addr_throttle.request!
   #     user_email_throttle.request!
   #     db_insert_throttle.request!(n_items_to_insert)
   #   rescue Pecorino::Throttled => e
   #     deliver_notification(user) if e.throttle == user_email_throttle
+  #   end
   #
   # @return [Throttle]
   attr_reader :throttle
@@ -43,19 +50,21 @@ class Pecorino::Throttle
   end
 
   # @param key[String] the key for both the block record and the leaky bucket
-  # @param block_for[Numeric] the number of seconds to block any further requests for
+  # @param block_for[Numeric] the number of seconds to block any further requests for. Defaults to the time it takes
+  #   the bucket to leak out to the level of 0
   # @param leaky_bucket_options Options for `Pecorino::LeakyBucket.new`
   # @see Pecorino::LeakyBucket.new
-  def initialize(key:, block_for: 30, **)
-    @key = key.to_s
-    @block_for = block_for.to_f
+  def initialize(key:, block_for: nil, **)
     @bucket = Pecorino::LeakyBucket.new(key:, **)
+    @key = key.to_s
+    @block_for = block_for ? block_for.to_f : (@bucket.capacity / @bucket.leak_rate)
   end
 
   # Tells whether the throttle will let this number of requests pass without raising
   # a Throttled. Note that this is not race-safe. Another request could overflow the bucket
   # after you call `able_to_accept?` but before you call `throttle!`. So before performing
-  # the action you still need to call `throttle!`
+  # the action you still need to call `throttle!`. You may still use `able_to_accept?` to
+  # provide better UX to your users before they attempt an action that would get throttled.
   #
   # @param n_tokens[Float]
   # @return [boolean]
@@ -70,10 +79,13 @@ class Pecorino::Throttle
   # The exception can be rescued later to provide a 429 response. This method is better
   # to use before performing the unit of work that the throttle is guarding:
   #
-  # @example t.request!
-  #   Note.create!(note_params)
-  # rescue Pecorino::Throttle::Throttled => e
-  #   [429, {"Retry-After" => e.retry_after.to_s}, []]
+  # @example
+  #   begin
+  #     t.request!
+  #     Note.create!(note_params)
+  #   rescue Pecorino::Throttle::Throttled => e
+  #     [429, {"Retry-After" => e.retry_after.to_s}, []]
+  #   end
   #
   # If the method call succeeds it means that the request is not getting throttled.
   #
@@ -81,30 +93,33 @@ class Pecorino::Throttle
   def request!(n = 1)
     state = request(n)
     raise Throttled.new(self, state) if state.blocked?
+    nil
   end
 
   # Register that a request is being performed. Will not raise any exceptions but return
   # the time at which the block will be lifted if a block resulted from this request or
   # was already in effect. Can be used for registering actions which already took place,
-  # but should result in subsequent actions being blocked in subsequent requests later.
+  # but should result in subsequent actions being blocked.
   #
-  # @example unless t.able_to_accept?
-  #   Note.create!(note_params)
-  #   t.request
-  # else
-  #   raise "Throttled or block in effect"
-  # end
+  # @example
+  #   if t.able_to_accept?
+  #     Entry.create!(entry_params)
+  #     t.request
+  #   end
   #
   # @return [State] the state of the throttle after filling up the leaky bucket / trying to pass the block
   def request(n = 1)
     existing_blocked_until = Pecorino.adapter.blocked_until(key: @key)
     return State.new(existing_blocked_until.utc) if existing_blocked_until
 
-    # Topup the leaky bucket
-    return State.new(nil) unless @bucket.fillup(n.to_f).full?
-
-    # and set the block if we reached it
-    fresh_blocked_until = Pecorino.adapter.set_block(key: @key, block_for: @block_for)
-    State.new(fresh_blocked_until.utc)
+    # Top up the leaky bucket, and if the top-up gets rejected - block the caller
+    fillup = @bucket.fillup_conditionally(n)
+    if fillup.accepted?
+      State.new(nil)
+    else
+      # and set the block if the fillup was rejected
+      fresh_blocked_until = Pecorino.adapter.set_block(key: @key, block_for: @block_for)
+      State.new(fresh_blocked_until.utc)
+    end
   end
 end
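The reworked `request` flow above boils down to: pass through if an existing block is in effect, otherwise attempt a conditional fillup, and install a block only when the fillup is rejected. A minimal in-memory sketch of that control flow (names are illustrative; the real throttle is database-backed and its bucket also leaks over time):

```ruby
# In-memory sketch of the new Throttle#request flow. No leaking over time,
# no persistence - only the accept/reject/block decision is modeled.
class SketchThrottle
  def initialize(capacity:, block_for:)
    @capacity = capacity.to_f
    @block_for = block_for.to_f
    @level = 0.0
    @blocked_until = nil
  end

  # Returns nil when the request may proceed, or the Time until which
  # the caller is blocked.
  def request(n = 1)
    return @blocked_until if @blocked_until && @blocked_until > Time.now

    if @level + n <= @capacity # the "conditional fillup" check
      @level += n
      nil
    else
      @blocked_until = Time.now + @block_for
    end
  end
end

t = SketchThrottle.new(capacity: 2, block_for: 30)
t.request(1) #=> nil (accepted)
t.request(1) #=> nil (accepted, exactly at capacity)
t.request(1) #=> a Time roughly 30 seconds from now (rejected, block installed)
```

Note how the third request does not merely get refused: it installs a block, so subsequent requests are turned away immediately without touching the bucket.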
@@ -1,5 +1,5 @@
 # frozen_string_literal: true
 
 module Pecorino
-  VERSION = "0.2.0"
+  VERSION = "0.4.0"
 end
metadata CHANGED
@@ -1,14 +1,14 @@
 --- !ruby/object:Gem::Specification
 name: pecorino
 version: !ruby/object:Gem::Version
-  version: 0.2.0
+  version: 0.4.0
 platform: ruby
 authors:
 - Julik Tarkhanov
 autorequire:
 bindir: exe
 cert_chain: []
-date: 2024-01-09 00:00:00.000000000 Z
+date: 2024-01-22 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: activerecord