pecorino 0.3.0 → 0.4.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
-   metadata.gz: 2f94aa734cb0bb50657f5484bbdd8bfcaabc7c2d6b7d9329361d41456ba49db6
-   data.tar.gz: 97e01c53e828092ce60be1412a288a70446ec9cc5ab783a6fa6e3ba147de1ee5
+   metadata.gz: 445e9997824e9ef7857a31e626e2a6de981d466fd4f5187299cff86533596b13
+   data.tar.gz: 9b31fad0bf017b2a9ee1b4d4f0660376945db76970c65e61ecadeccf706425b6
  SHA512:
-   metadata.gz: 4d83ebb84009492403ca8950d181f4689b42782ab3f65f7fe5091cab92fc4f739ecd64625449ab784309a122f09e62525c262c71f3c602d3d538f4ac511a78e3
-   data.tar.gz: 93d3a2845713c6dc71ff1f35e5433fcbca6837fdb336db00a7e403e5719f1e65eadac7dc819301c8886d40dc3f99c59b196d64004145de786d0875c42b98e635
+   metadata.gz: 9b82a54b4e6f721aa2c752d9f7d0503f5b621174826a76594ef9c91a1b943b881c1e377c8ef7c7d4c57d810a4f9be82ac17bad21c12f65570ebf3daba5c51c7d
+   data.tar.gz: 95436d3b317c43d08a6630ad5e4dba1494cdbcae2f3e98ffccfb9cecf896a8ad2f8a02861d987c0288c1cb04b29e7fb1d490bdf37821b7858b472cdb82e6092b
data/CHANGELOG.md CHANGED
@@ -1,3 +1,14 @@
+ ## [0.4.0] - 2024-01-22
+
+ - Use `Bucket#fillup_conditionally` inside `Throttle` and throttle only when the capacity _would_ be exceeded, as opposed
+   to throttling when capacity has already been exceeded. This allows for finer-grained throttles such as
+   "at most once in", where filling "exactly to capacity" is a requirement. It also provides for more accurate
+   and easier-to-understand throttling in general.
+ - Make sure `Bucket#able_to_accept?` allows the bucket to be filled to capacity, not only to below capacity
+ - Improve YARD documentation
+ - Allow "conditional fillup" - only add tokens to the leaky bucket if the bucket has enough space.
+ - Fix `over_time` leading to incorrect `leak_rate`. The dividend and divisor were swapped, leading to the inverse leak rate getting computed.
+
  ## [0.3.0] - 2024-01-18

  - Allow `over_time` in addition to `leak_rate`, which is a more intuitive parameter to tweak
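The first 0.4.0 entry above is easiest to see with a throttle whose capacity is exactly one request per period. A minimal sketch of the new behaviour, assuming Pecorino is already installed and configured; the key name and durations are illustrative:

```ruby
# With the conditional fillup introduced in 0.4.0, a request that fills the bucket
# exactly to capacity is still accepted; only a request that would overflow the
# bucket raises Throttled and installs the block.
once_per_minute = Pecorino::Throttle.new(key: "password-reset", capacity: 1, over_time: 1.minute, block_for: 1.minute)

once_per_minute.request! # accepted - fills the bucket exactly to capacity
once_per_minute.request! # raises Pecorino::Throttle::Throttled ("at most once per minute")
```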
data/README.md CHANGED
@@ -22,7 +22,7 @@ And then execute:
  ## Usage

- Once the installation is done you can use Pecorino to start defining your throttles. Imagine you have a resource called `vault` and you want to limit the number of updates to it to 5 per second. To achieve that, instantiate a new `Throttle` in your controller or job code, and then trigger it using `Throttle#request!`. A call to `request!` registers 1 token getting added to the bucket. If the bucket is full, or the throttle is currently in "block" mode (has recently been triggered), a `Pecorino::Throttle::Throttled` exception will be raised.
+ Once the installation is done you can use Pecorino to start defining your throttles. Imagine you have a resource called `vault` and you want to limit the number of updates to it to 5 per second. To achieve that, instantiate a new `Throttle` in your controller or job code, and then trigger it using `Throttle#request!`. A call to `request!` registers 1 token getting added to the bucket. If the bucket would overspill (your request would make it overflow), or the throttle is currently in "block" mode (has recently been triggered), a `Pecorino::Throttle::Throttled` exception will be raised.

  ```ruby
  throttle = Pecorino::Throttle.new(key: "vault", over_time: 1.second, capacity: 5)
@@ -58,7 +58,7 @@ return render :capacity_exceeded unless throttle.able_to_accept?
  If you are dealing with a metered resource (like throughput, money, amount of storage...) you can supply the number of tokens to either `request!` or `able_to_accept?` to indicate the desired top-up of the leaky bucket. For example, if you are maintaining user wallets and want to ensure no more than 100 dollars may be taken from the wallet within a certain amount of time, you can do it like so:

  ```ruby
- throttle = Pecorino::Throttle.new(key: "wallet_t_#{current_user.id}", over_time_: 1.hour, capacity: 100, block_for: 60*60*3)
+ throttle = Pecorino::Throttle.new(key: "wallet_t_#{current_user.id}", over_time: 1.hour, capacity: 100, block_for: 3.hours)
  throttle.request!(20) # Attempt to withdraw 20 dollars
  throttle.request!(20) # Attempt to withdraw 20 dollars more
  throttle.request!(20) # Attempt to withdraw 20 dollars more
@@ -67,6 +67,8 @@ throttle.request!(20) # Attempt to withdraw 20 dollars more
  throttle.request!(2) # Attempt to withdraw 2 dollars more, will raise `Throttled` and block withdrawals for 3 hours
  ```

+ ## Using just the leaky bucket
+
  Sometimes you don't want to use a throttle, but you want to track the amount added to the leaky bucket over time. A lower-level abstraction is available for that purpose in the form of the `LeakyBucket` class. It will not raise any exceptions and will not install blocks, but will permit you to track a bucket's state over time:


@@ -77,9 +79,10 @@ sleep 0.2
  b.state #=> Pecorino::LeakyBucket::State(full?: false, level: 1.8)
  ```

- Check out the inline YARD documentation for more options.
+ Check out the inline YARD documentation for more options. Do take note of the differences between `fillup()` and `fillup_conditionally()`, as you
+ might want to pick one or the other depending on your use case.

- ## Cleaning out stale locks from the database
+ ## Cleaning out stale buckets and blocks from the database

  We recommend running the following bit of code every couple of hours (via cron or similar) to delete the stale blocks and leaky buckets from the system:
 
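The difference between `fillup()` and `fillup_conditionally` that the updated README points to can be summarised in a few lines. A sketch, assuming a freshly created (empty) bucket; the key name is illustrative:

```ruby
bucket = Pecorino::LeakyBucket.new(key: "uploads-demo", capacity: 10, over_time: 1.minute)

# fillup() always registers the tokens and clamps the level at capacity:
bucket.fillup(15).full? #=> true - the level is capped at 10.0

# fillup_conditionally() refuses a fillup that would overflow the bucket,
# and tells you whether the tokens were accepted:
bucket.fillup_conditionally(15).accepted? #=> false - 15 tokens do not fit
bucket.fillup_conditionally(3).accepted?  #=> false here too, since the bucket is already full
```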
@@ -25,37 +25,39 @@
  # The storage use is one DB row per leaky bucket you need to manage (likely - one throttled entity such
  # as a combination of an IP address + the URL you need to protect). The `key` is an arbitrary string you provide.
  class Pecorino::LeakyBucket
-   State = Struct.new(:level, :full) do
-     # Returns the level of the bucket after the operation on the LeakyBucket
-     # object has taken place. There is a guarantee that no tokens have leaked
-     # from the bucket between the operation and the freezing of the State
-     # struct.
-     #
-     # @!attribute [r] level
-     #   @return [Float]
+   # Returned from `.state` and `.fillup`
+   class State
+     def initialize(level, is_full)
+       @level = level.to_f
+       @full = !!is_full
+     end
+
+     # Returns the level of the bucket
+     # @return [Float]
+     attr_reader :level

      # Tells whether the bucket was detected to be full when the operation on
-     # the LeakyBucket was performed. There is a guarantee that no tokens have leaked
-     # from the bucket between the operation and the freezing of the State
-     # struct.
-     #
-     # @!attribute [r] full
-     #   @return [Boolean]
+     # the LeakyBucket was performed.
+     # @return [Boolean]
+     def full?
+       @full
+     end

-     alias_method :full?, :full
+     alias_method :full, :full?
+   end

-     # Returns the bucket level of the bucket state as a Float
-     #
-     # @return [Float]
-     def to_f
-       level.to_f
+   # Same as `State` but also communicates whether the write has been permitted or not. A conditional fillup
+   # may refuse a write if it would make the bucket overflow
+   class ConditionalFillupResult < State
+     def initialize(level, is_full, accepted)
+       super(level, is_full)
+       @accepted = !!accepted
      end

-     # Returns the bucket level of the bucket state rounded to an Integer
-     #
-     # @return [Integer]
-     def to_i
-       level.to_i
+     # Tells whether the bucket did accept the requested fillup
+     # @return [Boolean]
+     def accepted?
+       @accepted
      end
    end

@@ -91,22 +93,46 @@ class Pecorino::LeakyBucket
    def initialize(key:, capacity:, leak_rate: nil, over_time: nil)
      raise ArgumentError, "Either leak_rate: or over_time: must be specified" if leak_rate.nil? && over_time.nil?
      raise ArgumentError, "Either leak_rate: or over_time: may be specified, but not both" if leak_rate && over_time
-     @leak_rate = leak_rate || (over_time.to_f / capacity)
+     @leak_rate = leak_rate || (capacity / over_time.to_f)
      @key = key
      @capacity = capacity.to_f
    end

-   # Places `n` tokens in the bucket. Once tokens are placed, the bucket is set to expire
-   # within 2 times the time it would take it to leak to 0, regardless of how many tokens
-   # get put in - since the amount of tokens put in the bucket will always be capped
-   # to the `capacity:` value you pass to the constructor. Calling `fillup` also deletes
-   # leaky buckets which have expired.
+   # Places `n` tokens in the bucket. If the bucket has less free capacity than `n` tokens,
+   # it will be filled to capacity. If the bucket is already full
+   # when the fillup is requested, the bucket stays at capacity.
    #
-   # @param n_tokens[Float]
+   # Once tokens are placed, the bucket is set to expire within 2 times the time it would take it to leak to 0,
+   # regardless of how many tokens get put in - since the amount of tokens put in the bucket will always be capped
+   # to the `capacity:` value you pass to the constructor.
+   #
+   # @param n_tokens[Float] How many tokens to fillup by
    # @return [State] the state of the bucket after the operation
    def fillup(n_tokens)
-     capped_level_after_fillup, did_overflow = Pecorino.adapter.add_tokens(capacity: @capacity, key: @key, leak_rate: @leak_rate, n_tokens: n_tokens)
-     State.new(capped_level_after_fillup, did_overflow)
+     capped_level_after_fillup, is_full = Pecorino.adapter.add_tokens(capacity: @capacity, key: @key, leak_rate: @leak_rate, n_tokens: n_tokens)
+     State.new(capped_level_after_fillup, is_full)
+   end
+
+   # Places `n` tokens in the bucket. If the bucket has less free capacity than `n` tokens, the fillup will be rejected.
+   # This can be used for "exactly once" semantics or just more precise rate limiting. Note that if the bucket has
+   # _exactly_ `n` tokens of free capacity the fillup will be accepted.
+   #
+   # Once tokens are placed, the bucket is set to expire within 2 times the time it would take it to leak to 0,
+   # regardless of how many tokens get put in - since the amount of tokens put in the bucket will always be capped
+   # to the `capacity:` value you pass to the constructor.
+   #
+   # @example
+   #   withdrawals = LeakyBucket.new(key: "wallet-#{user.id}", capacity: 200, over_time: 1.day)
+   #   if withdrawals.fillup_conditionally(amount_to_withdraw).accepted?
+   #     user.wallet.withdraw(amount_to_withdraw)
+   #   else
+   #     raise "You need to wait a bit before withdrawing more"
+   #   end
+   # @param n_tokens[Float] How many tokens to fillup by
+   # @return [ConditionalFillupResult] the state of the bucket after the operation and whether the operation succeeded
+   def fillup_conditionally(n_tokens)
+     capped_level_after_fillup, is_full, did_accept = Pecorino.adapter.add_tokens_conditionally(capacity: @capacity, key: @key, leak_rate: @leak_rate, n_tokens: n_tokens)
+     ConditionalFillupResult.new(capped_level_after_fillup, is_full, did_accept)
    end

    # Returns the current state of the bucket, containing the level and whether the bucket is full.
@@ -127,6 +153,6 @@ class Pecorino::LeakyBucket
    # @param n_tokens[Float]
    # @return [boolean]
    def able_to_accept?(n_tokens)
-     (state.level + n_tokens) < @capacity
+     (state.level + n_tokens) <= @capacity
    end
  end
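The `<=` change in `able_to_accept?` above means a fillup that lands exactly on the capacity is now reported as acceptable. A small illustration, assuming an empty bucket with a capacity of 5; the key name is illustrative:

```ruby
bucket = Pecorino::LeakyBucket.new(key: "demo", capacity: 5, over_time: 1.second)

bucket.able_to_accept?(5) #=> true as of 0.4.0 - filling exactly to capacity is allowed
bucket.able_to_accept?(6) #=> false - this fillup would overflow the bucket
```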
@@ -72,7 +72,7 @@ Pecorino::Postgres = Struct.new(:model_class) do
        RETURNING
          level,
          -- Compare level to the capacity inside the DB so that we won't have rounding issues
-         level >= :capacity AS did_overflow
+         level >= :capacity AS at_capacity
      SQL

      # Note the use of .uncached here. The AR query cache will actually see our
@@ -80,8 +80,67 @@ Pecorino::Postgres = Struct.new(:model_class) do
      # correctly, thus the clock_timestamp() value would be frozen between calls. We don't want that here.
      # See https://stackoverflow.com/questions/73184531/why-would-postgres-clock-timestamp-freeze-inside-a-rails-unit-test
      upserted = model_class.connection.uncached { model_class.connection.select_one(sql) }
-     capped_level_after_fillup, did_overflow = upserted.fetch("level"), upserted.fetch("did_overflow")
-     [capped_level_after_fillup, did_overflow]
+     capped_level_after_fillup, at_capacity = upserted.fetch("level"), upserted.fetch("at_capacity")
+     [capped_level_after_fillup, at_capacity]
+   end
+
+   def add_tokens_conditionally(key:, capacity:, leak_rate:, n_tokens:)
+     # Take double the time it takes the bucket to empty under normal circumstances
+     # until the bucket may be deleted.
+     may_be_deleted_after_seconds = (capacity.to_f / leak_rate.to_f) * 2.0
+
+     # Create the leaky bucket if it does not exist, and update
+     # to the new level, taking the leak rate into account - if the bucket exists.
+     query_params = {
+       key: key.to_s,
+       capacity: capacity.to_f,
+       delete_after_s: may_be_deleted_after_seconds,
+       leak_rate: leak_rate.to_f,
+       fillup: n_tokens.to_f
+     }
+
+     sql = model_class.sanitize_sql_array([<<~SQL, query_params])
+       WITH pre AS MATERIALIZED (
+         SELECT
+           -- Note the double clamping here. First we clamp the "current level - leak" to not go below zero,
+           -- then we also clamp the above + fillup to not go below 0
+           GREATEST(0.0,
+             GREATEST(0.0, level - (EXTRACT(EPOCH FROM (clock_timestamp() - last_touched_at)) * :leak_rate)) + :fillup
+           ) AS level_post_with_uncapped_fillup,
+           GREATEST(0.0,
+             level - (EXTRACT(EPOCH FROM (clock_timestamp() - last_touched_at)) * :leak_rate)
+           ) AS level_post
+         FROM pecorino_leaky_buckets
+         WHERE key = :key
+       )
+       INSERT INTO pecorino_leaky_buckets AS t
+         (key, last_touched_at, may_be_deleted_after, level)
+       VALUES
+         (
+           :key,
+           clock_timestamp(),
+           clock_timestamp() + ':delete_after_s second'::interval,
+           GREATEST(0.0,
+             (CASE WHEN :fillup > :capacity THEN 0.0 ELSE :fillup END)
+           )
+         )
+       ON CONFLICT (key) DO UPDATE SET
+         last_touched_at = EXCLUDED.last_touched_at,
+         may_be_deleted_after = EXCLUDED.may_be_deleted_after,
+         level = CASE WHEN (SELECT level_post_with_uncapped_fillup FROM pre) <= :capacity THEN
+           (SELECT level_post_with_uncapped_fillup FROM pre)
+         ELSE
+           (SELECT level_post FROM pre)
+         END
+       RETURNING
+         COALESCE((SELECT level_post FROM pre), 0.0) AS level_before,
+         level AS level_after
+     SQL
+
+     upserted = model_class.connection.uncached { model_class.connection.select_one(sql) }
+     level_after = upserted.fetch("level_after")
+     level_before = upserted.fetch("level_before")
+     [level_after, level_after >= capacity, level_after != level_before]
    end

    def set_block(key:, block_for:)
@@ -90,17 +149,17 @@ Pecorino::Postgres = Struct.new(:model_class) do
        INSERT INTO pecorino_blocks AS t
          (key, blocked_until)
        VALUES
-         (:key, NOW() + ':block_for seconds'::interval)
+         (:key, clock_timestamp() + ':block_for seconds'::interval)
        ON CONFLICT (key) DO UPDATE SET
          blocked_until = GREATEST(EXCLUDED.blocked_until, t.blocked_until)
-       RETURNING blocked_until;
+       RETURNING blocked_until
      SQL
      model_class.connection.uncached { model_class.connection.select_value(block_set_query) }
    end

    def blocked_until(key:)
      block_check_query = model_class.sanitize_sql_array([<<~SQL, key])
-       SELECT blocked_until FROM pecorino_blocks WHERE key = ? AND blocked_until >= NOW() LIMIT 1
+       SELECT blocked_until FROM pecorino_blocks WHERE key = ? AND blocked_until >= clock_timestamp() LIMIT 1
      SQL
      model_class.connection.uncached { model_class.connection.select_value(block_check_query) }
    end
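For reference, the adapter method added above returns a plain triple which `LeakyBucket#fillup_conditionally` wraps into a `ConditionalFillupResult` (the SQLite adapter below returns the same shape). A sketch of that contract; the variable names and key are illustrative:

```ruby
level_after, at_capacity, accepted = Pecorino.adapter.add_tokens_conditionally(
  key: "some-bucket",
  capacity: 10.0,
  leak_rate: 1.0, # tokens leaked per second
  n_tokens: 3.0
)

# level_after - the bucket level after the (possibly rejected) fillup
# at_capacity - whether that level is at (or above) the capacity
# accepted    - true only if the level actually changed, i.e. the fillup fit into the bucket
```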
@@ -94,6 +94,74 @@ Pecorino::Sqlite = Struct.new(:model_class) do
      [capped_level_after_fillup, one_if_did_overflow == 1]
    end

+   def add_tokens_conditionally(key:, capacity:, leak_rate:, n_tokens:)
+     # Take double the time it takes the bucket to empty under normal circumstances
+     # until the bucket may be deleted.
+     may_be_deleted_after_seconds = (capacity.to_f / leak_rate.to_f) * 2.0
+
+     # Create the leaky bucket if it does not exist, and update
+     # to the new level, taking the leak rate into account - if the bucket exists.
+     query_params = {
+       key: key.to_s,
+       capacity: capacity.to_f,
+       delete_after_s: may_be_deleted_after_seconds,
+       leak_rate: leak_rate.to_f,
+       now_s: Time.now.to_f, # See above as to why we are using a time value passed in
+       fillup: n_tokens.to_f,
+       id: SecureRandom.uuid # SQLite3 does not autogenerate UUIDs
+     }
+
+     # Sadly with SQLite we need to do an INSERT first, because otherwise the inserted row is visible
+     # to the WITH clause, so we cannot combine the initial fillup and the update into one statement.
+     # This should be fine however since we will suppress the INSERT on a key conflict
+     insert_sql = model_class.sanitize_sql_array([<<~SQL, query_params])
+       INSERT INTO pecorino_leaky_buckets AS t
+         (id, key, last_touched_at, may_be_deleted_after, level)
+       VALUES
+         (
+           :id,
+           :key,
+           :now_s, -- Precision loss must be avoided here as it is used for calculations
+           DATETIME('now', '+:delete_after_s seconds'), -- Precision loss is acceptable here
+           0.0
+         )
+       ON CONFLICT (key) DO UPDATE SET
+         -- Make sure we extend the lifetime of the row
+         -- so that it can't be deleted between our INSERT and our UPDATE
+         may_be_deleted_after = EXCLUDED.may_be_deleted_after
+     SQL
+     model_class.connection.execute(insert_sql)
+
+     sql = model_class.sanitize_sql_array([<<~SQL, query_params])
+       -- With SQLite MATERIALIZED has to be used so that level_post is calculated before the UPDATE takes effect
+       WITH pre(level_post_with_uncapped_fillup, level_post) AS MATERIALIZED (
+         SELECT
+           -- Note the double clamping here. First we clamp the "current level - leak" to not go below zero,
+           -- then we also clamp the above + fillup to not go below 0
+           MAX(0.0, MAX(0.0, level - ((:now_s - last_touched_at) * :leak_rate)) + :fillup) AS level_post_with_uncapped_fillup,
+           MAX(0.0, level - ((:now_s - last_touched_at) * :leak_rate)) AS level_post
+         FROM
+           pecorino_leaky_buckets
+         WHERE key = :key
+       ) UPDATE pecorino_leaky_buckets SET
+         last_touched_at = :now_s,
+         may_be_deleted_after = DATETIME('now', '+:delete_after_s seconds'),
+         level = CASE WHEN (SELECT level_post_with_uncapped_fillup FROM pre) <= :capacity THEN
+           (SELECT level_post_with_uncapped_fillup FROM pre)
+         ELSE
+           (SELECT level_post FROM pre)
+         END
+       RETURNING
+         (SELECT level_post FROM pre) AS level_before,
+         level AS level_after
+     SQL
+
+     upserted = model_class.connection.uncached { model_class.connection.select_one(sql) }
+     level_after = upserted.fetch("level_after")
+     level_before = upserted.fetch("level_before")
+     [level_after, level_after >= capacity, level_after != level_before]
+   end
+
    def set_block(key:, block_for:)
      query_params = {id: SecureRandom.uuid, key: key.to_s, block_for: block_for.to_f, now_s: Time.now.to_f}
      block_set_query = model_class.sanitize_sql_array([<<~SQL, query_params])
@@ -14,6 +14,10 @@ class Pecorino::Throttle
        blocked_until ? true : false
      end

+     # Returns the number of seconds until the block will be lifted, rounded up to the closest
+     # whole second. This value can be used in a "Retry-After" HTTP response header.
+     #
+     # @return [Integer]
      def retry_after
        (blocked_until - Time.now.utc).ceil
      end
@@ -23,11 +27,14 @@ class Pecorino::Throttle
      # Returns the throttle which raised the exception. Can be used to disambiguate between
      # multiple Throttled exceptions when multiple throttles are applied in a layered fashion:
      #
+     # @example
+     #   begin
      #     ip_addr_throttle.request!
      #     user_email_throttle.request!
      #     db_insert_throttle.request!(n_items_to_insert)
      #   rescue Pecorino::Throttle::Throttled => e
      #     deliver_notification(user) if e.throttle == user_email_throttle
+     #   end
      #
      # @return [Throttle]
      attr_reader :throttle
@@ -56,7 +63,8 @@ class Pecorino::Throttle
    # Tells whether the throttle will let this number of requests pass without raising
    # a Throttled. Note that this is not race-safe. Another request could overflow the bucket
    # after you call `able_to_accept?` but before you call `request!`. So before performing
-   # the action you still need to call `throttle!`
+   # the action you still need to call `request!`. You may still use `able_to_accept?` to
+   # provide better UX to your users before they attempt an action that would otherwise be throttled.
    #
    # @param n_tokens[Float]
    # @return [boolean]
@@ -71,10 +79,13 @@ class Pecorino::Throttle
    # The exception can be rescued later to provide a 429 response. This method is better
    # to use before performing the unit of work that the throttle is guarding:
    #
-   # @example t.request!
-   #   Note.create!(note_params)
-   # rescue Pecorino::Throttle::Throttled => e
-   #   [429, {"Retry-After" => e.retry_after.to_s}, []]
+   # @example
+   #   begin
+   #     t.request!
+   #     Note.create!(note_params)
+   #   rescue Pecorino::Throttle::Throttled => e
+   #     [429, {"Retry-After" => e.retry_after.to_s}, []]
+   #   end
    #
    # If the method call succeeds it means that the request is not getting throttled.
    #
@@ -82,30 +93,33 @@ class Pecorino::Throttle
    def request!(n = 1)
      state = request(n)
      raise Throttled.new(self, state) if state.blocked?
+     nil
    end

    # Register that a request is being performed. Will not raise any exceptions but return
    # the time at which the block will be lifted if a block resulted from this request or
    # was already in effect. Can be used for registering actions which already took place,
-   # but should result in subsequent actions being blocked in subsequent requests later.
+   # but should result in subsequent actions being blocked.
    #
-   # @example unless t.able_to_accept?
-   #   Note.create!(note_params)
-   #   t.request
-   # else
-   #   raise "Throttled or block in effect"
-   # end
+   # @example
+   #   if t.able_to_accept?
+   #     Entry.create!(entry_params)
+   #     t.request
+   #   end
    #
    # @return [State] the state of the throttle after filling up the leaky bucket / trying to pass the block
    def request(n = 1)
      existing_blocked_until = Pecorino.adapter.blocked_until(key: @key)
      return State.new(existing_blocked_until.utc) if existing_blocked_until

-     # Top up the leaky bucket
-     return State.new(nil) unless @bucket.fillup(n.to_f).full?
-
-     # and set the block if we reached it
-     fresh_blocked_until = Pecorino.adapter.set_block(key: @key, block_for: @block_for)
-     State.new(fresh_blocked_until.utc)
+     # Top up the leaky bucket, and if the topup gets rejected, block the caller
+     fillup = @bucket.fillup_conditionally(n)
+     if fillup.accepted?
+       State.new(nil)
+     else
+       # and set the block if the fillup was rejected
+       fresh_blocked_until = Pecorino.adapter.set_block(key: @key, block_for: @block_for)
+       State.new(fresh_blocked_until.utc)
+     end
    end
  end
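The updated `request` / `request!` flow above maps naturally onto an HTTP 429 response. A sketch of wiring it into a Rack middleware, not part of the gem; the middleware name, key and limits are illustrative:

```ruby
# Turns Throttled exceptions into 429 responses, using the retry_after
# value from the exception for the Retry-After header.
class IpThrottleMiddleware
  def initialize(app)
    @app = app
  end

  def call(env)
    throttle = Pecorino::Throttle.new(key: "ip-#{env["REMOTE_ADDR"]}", capacity: 20, over_time: 1.second, block_for: 30)
    throttle.request!
    @app.call(env)
  rescue Pecorino::Throttle::Throttled => e
    [429, {"Retry-After" => e.retry_after.to_s}, ["Throttled, please retry later"]]
  end
end
```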
@@ -1,5 +1,5 @@
  # frozen_string_literal: true

  module Pecorino
-   VERSION = "0.3.0"
+   VERSION = "0.4.0"
  end
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: pecorino
  version: !ruby/object:Gem::Version
-   version: 0.3.0
+   version: 0.4.0
  platform: ruby
  authors:
  - Julik Tarkhanov
  autorequire:
  bindir: exe
  cert_chain: []
- date: 2024-01-18 00:00:00.000000000 Z
+ date: 2024-01-22 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
    name: activerecord