pecorino 0.4.1 → 0.6.0
- checksums.yaml +4 -4
- data/CHANGELOG.md +17 -5
- data/README.md +114 -3
- data/lib/pecorino/block.rb +24 -0
- data/lib/pecorino/cached_throttle.rb +91 -0
- data/lib/pecorino/throttle.rb +83 -22
- data/lib/pecorino/version.rb +1 -1
- data/lib/pecorino.rb +4 -2
- metadata +5 -3
checksums.yaml
CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: fbbf2138b0295606f7a2a217aa41f30bcd1d1b994a9e079f810b5c0088045714
+  data.tar.gz: b1b594fbdbeb1d1e4f2019a50869b0ba038eb161d4385d36707e5b74d82eedb6
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 4b3b0ba688ae6b04d85943b03da908c140db2d2ec7a326da8e87a54e4fa6a077a7b36d4a8ba29a9ed051762d771c036d0f3d2599c8d48d40f5073810a5524234
+  data.tar.gz: cc7b1d692935a8af49ec479fd3c5c98ea65749879ecaf22addeaeb285eedf37406df755c6dfa17d379d6cbcb07bc96150573b1262bbb98d4509ba1170fc195d9
data/CHANGELOG.md
CHANGED
@@ -1,8 +1,20 @@
-##
+## 0.6.0
+
+- Add `Pecorino::Block` for setting blocks directly. These are available both to `Throttle` with the same key and on their own. This can be used to set arbitrary blocks without having to configure a `Throttle` first.
+
+## 0.5.0
+
+- Add `CachedThrottle` for caching the throttle blocks. This protects the database when the throttle is in a blocked state.
+- Add `Throttle#throttled` for silencing alerts
+- **BREAKING CHANGE** Remove `Throttle::State#retry_after`, because there is no reasonable value for that member if the throttle is not in the "blocked" state
+- Allow accessing `Throttle::State` from the `Throttled` exception so that the blocked throttle state can be cached downstream (in the Rails cache, for example)
+- Make `Throttle#request!` return the new state if no exception was raised
+
+## 0.4.1
 
 - Make sure Pecorino works on Ruby 2.7 as well by removing 3.x-exclusive syntax
 
-##
+## 0.4.0
 
 - Use `Bucket#conditional_fillup` inside Throttle and throttle only when the capacity _would_ be exceeded, as opposed
   to throttling when capacity has already been exceeded. This allows for finer-grained throttles such as
@@ -13,17 +25,17 @@
 - Allow "conditional fillup" - only add tokens to the leaky bucket if the bucket has enough space.
 - Fix `over_time` leading to incorrect `leak_rate`. The dividend and divisor were swapped, leading to the inverse leak rate getting computed.
 
-##
+## 0.3.0
 
 - Allow `over_time` in addition to `leak_rate`, which is a more intuitive parameter to tweak
 - Set default `block_for` to the time it takes the bucket to leak out completely instead of 30 seconds
 
-##
+## 0.2.0
 
 - [Add support for SQLite](https://github.com/cheddar-me/pecorino/pull/9)
 - [Use comparisons in SQL to determine whether the leaky bucket did overflow](https://github.com/cheddar-me/pecorino/pull/8)
 - [Change the way Structs are defined to appease Tapioca/Sorbet](https://github.com/cheddar-me/pecorino/pull/6)
 
-##
+## 0.1.0
 
 - Initial release
data/README.md
CHANGED
@@ -1,10 +1,10 @@
 # Pecorino
 
-Pecorino is a rate limiter based on the concept of leaky buckets. It uses your DB as the storage backend for the throttles. It is compact, easy to install, and does not require additional infrastructure. The approach used by Pecorino has been previously used by [prorate](https://github.com/WeTransfer/prorate) with Redis, and that approach has proven itself.
+Pecorino is a rate limiter based on the concept of leaky buckets, or more specifically - based on the [generic cell rate](https://brandur.org/rate-limiting) algorithm. It uses your DB as the storage backend for the throttles. It is compact, easy to install, and does not require additional infrastructure. The approach used by Pecorino has been previously used by [prorate](https://github.com/WeTransfer/prorate) with Redis, and that approach has proven itself.
 
 Pecorino is designed to integrate seamlessly into any Rails application using a PostgreSQL or SQLite database (at the moment there is no MySQL support; we would be delighted if you could add it).
 
-If you would like to know more about the leaky bucket algorithm: [this article](http://live.julik.nl/2022/08/the-unreasonable-effectiveness-of-leaky-buckets) or the [Wikipedia article](https://en.wikipedia.org/wiki/Leaky_bucket) are both good starting points.
+If you would like to know more about the leaky bucket algorithm: [this article](http://live.julik.nl/2022/08/the-unreasonable-effectiveness-of-leaky-buckets) or the [Wikipedia article](https://en.wikipedia.org/wiki/Leaky_bucket) are both good starting points. [This Wikipedia article](https://en.wikipedia.org/wiki/Generic_cell_rate_algorithm) describes the generic cell rate algorithm in more detail as well.
 
 ## Installation
@@ -24,8 +24,10 @@ And then execute:
 
 Once the installation is done you can use Pecorino to start defining your throttles. Imagine you have a resource called `vault` and you want to limit the number of updates to it to 5 per second. To achieve that, instantiate a new `Throttle` in your controller or job code, and then trigger it using `Throttle#request!`. A call to `request!` registers 1 token getting added to the bucket. If the bucket would overspill (your request would make it overflow), or the throttle is currently in "block" mode (has recently been triggered), a `Pecorino::Throttle::Throttled` exception will be raised.
 
+We call this pattern **prefix usage** - apply the throttle before allowing the action to proceed. This is more secure than registering an action after it has taken place.
+
 ```ruby
-throttle = Pecorino::Throttle.new(key: "
+throttle = Pecorino::Throttle.new(key: "password-attempts-#{request.ip}", over_time: 1.minute, capacity: 5, block_for: 30.minutes)
 throttle.request!
 ```
 In a Rails controller you can then rescue from this exception to render the appropriate response:
@@ -67,6 +69,92 @@ throttle.request!(20) # Attempt to withdraw 20 dollars more
 throttle.request!(2) # Attempt to withdraw 2 dollars more, will raise `Throttled` and block withdrawals for 3 hours
 ```
 
+## Performing a block only if it would be allowed by the throttle
+
+You can use Pecorino to avoid nuisance alerting - use it to limit the alert rate:
+
+```ruby
+alert_nuisance_t = Pecorino::Throttle.new(key: "disk-full-alert", over_time: 2.hours, capacity: 1, block_for: 2.hours)
+alert_nuisance_t.throttled do
+  Slack.alerts.deliver("Disk is full again! Please investigate!")
+end
+```
+
+This will not raise any exceptions. The `throttled` method performs **prefix throttling** to prevent multiple callers hitting the throttle at the same time, so it is guaranteed to be atomic.
+
+## Postfix topup of the throttle
+
+In addition to the use case where you want to trigger the throttle before performing an action, there are legitimate use cases where you actually want to use the throttle as a _meter_ instead, measuring the effect of an action which has already been permitted - and then only make it trigger on a subsequent action. This **postfix usage** is less secure, but it allows for a different sequencing of calls. Imagine you want to implement the popular [circuit breaker pattern](https://dzone.com/articles/introduction-to-the-circuit-breaker-pattern) where all your nodes are able to share the error rate information between them. Pecorino gives you all the tools to implement a binary-state circuit breaker (open or closed) based on an error rate. Imagine you want to stop sending requests if the service you are calling raises `Timeout::Error` frequently. Then your call to the service could look like this:
+
+```ruby
+begin
+  error_rate_throttle = Pecorino::Throttle.new(key: "some-fancy-ai-api-errors", capacity: 10, over_time: 30.seconds, block_for: 120.seconds)
+
+  if error_rate_throttle.able_to_accept? # See whether adding 1 request would overflow the error rate
+    fancy_ai_api.post_chat_message("Imagine I am a rocket scientist on a moonbase. Invent me...")
+  else
+    raise "The error rate for fancy_ai_api has been exceeded"
+  end
+rescue Timeout::Error
+  error_rate_throttle.request(1) # Use the bang-less method since we do not need the Throttled exception
+  raise
+end
+```
+
+This way, every time there is an error on the "fancy AI service" the throttle will be triggered, and if it overflows - a subsequent request will be blocked.
+
+## A note on database transactions
+
+Pecorino uses your main database. When calling the `Throttle` or `LeakyBucket` objects, SQL queries will be performed by Pecorino and those queries may result in changes to data. If you are currently inside a database transaction, your bucket topups or set blocks may get reverted. For example, imagine you have a controller like this:
+
+```ruby
+class WalletController < ApplicationController
+  rescue_from Pecorino::Throttle::Throttled do |e|
+    response.set_header('Retry-After', e.retry_after.to_s)
+    render nothing: true, status: 429
+  end
+
+  def withdraw
+    Wallet.transaction do
+      t = Pecorino::Throttle.new(key: "wallet_#{current_user.id}_max_withdrawal", capacity: 200_00, over_time: 5.minutes)
+      t.request!(10_00)
+      current_user.wallet.withdraw(Money.new(10, "EUR"))
+    end
+  end
+end
+```
+
+What will happen is that even though the `withdraw()` call is not going to be performed, the increment of the throttle will not be persisted either, because the exception will result in a `ROLLBACK`.
+
+If you need to use Pecorino in combination with transactions, you will need to design with that in mind. Either call the `Throttle` before entering the `transaction do`:
+
+```ruby
+def withdraw
+  t = Pecorino::Throttle.new(key: "wallet_#{current_user.id}_max_withdrawal", capacity: 200_00, over_time: 5.minutes)
+  t.request!(10_00)
+  Wallet.transaction do
+    current_user.wallet.withdraw(Money.new(10, "EUR"))
+  end
+end
+```
+
+or use the `request()` method instead so the throttle increment still commits:
+
+```ruby
+def withdraw
+  Wallet.transaction do
+    t = Pecorino::Throttle.new(key: "wallet_#{current_user.id}_max_withdrawal", capacity: 200_00, over_time: 5.minutes)
+    throttle_state = t.request(10_00)
+    return render(nothing: true, status: 429) if throttle_state.blocked?
+
+    current_user.wallet.withdraw(Money.new(10, "EUR"))
+  end
+end
+```
+
+Note also that this behaviour might be desirable for your use case (that the throttle and the data update together in a transactional manner) - it just helps to be aware of it.
+
 ## Using just the leaky bucket
 
 Sometimes you don't want to use a throttle, but you want to track the amount added to the leaky bucket over time. A lower-level abstraction is available for that purpose in the form of the `LeakyBucket` class. It will not raise any exceptions and will not install blocks, but will permit you to track a bucket's state over time:
@@ -90,6 +178,29 @@ We recommend running the following bit of code every couple of hours (via cron o
 Pecorino.prune!
 ```
 
+## Using cached throttles
+
+If a throttle is triggered, Pecorino sets a "block" record for that throttle key. Any request to that throttle will fail until the block is lifted. If you are getting hammered by requests which are getting throttled, it might be a good idea to install a caching layer which will respond with a "rate limit exceeded" error even before hitting your database - until the moment the block would be lifted. You can use any [ActiveSupport::Cache::Store](https://api.rubyonrails.org/classes/ActiveSupport/Cache/Store.html) to store your blocks. If you have a fast Rails cache configured, create a wrapped throttle:
+
+```ruby
+throttle = Pecorino::Throttle.new(key: "ip-#{request.ip}", capacity: 10, over_time: 2.seconds, block_for: 2.minutes)
+cached_throttle = Pecorino::CachedThrottle.new(Rails.cache, throttle)
+cached_throttle.request!
+```
+
+Note that the idea of using a cache store here is to avoid hitting the database while the block for your throttle is in effect. Therefore, if you are using something like [solid_cache](https://github.com/rails/solid_cache) you will be hitting the database regardless! A better approach is to have a [MemoryStore](https://api.rubyonrails.org/classes/ActiveSupport/Cache/MemoryStore.html) just for throttles - it will be local to your Rails process. This avoids a database roundtrip once the process knows a particular throttle is currently blocked:
+
+```ruby
+# in application.rb
+config.pecorino_throttle_cache = ActiveSupport::Cache::MemoryStore.new
+
+# in your controller
+throttle = Pecorino::Throttle.new(key: "ip-#{request.ip}", capacity: 10, over_time: 2.seconds, block_for: 2.minutes)
+cached_throttle = Pecorino::CachedThrottle.new(Rails.application.config.pecorino_throttle_cache, throttle)
+cached_throttle.request!
+```
+
 ## Using unlogged tables for reduced replication load (PostgreSQL)
 
 Throttles and leaky buckets are transient resources. If you are using Postgres replication, it might be prudent to set the Pecorino tables to `UNLOGGED`, which will exclude them from replication - and save you bandwidth and storage on your read replica. To do so, add the following statements to your migration:
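A minimal sketch of such a migration, assuming the table names `pecorino_leaky_buckets` and `pecorino_blocks` match those produced by the install generator (verify against your own schema before using):

```ruby
# Sketch only: table names are assumed to match the install generator's defaults.
class SetPecorinoTablesUnlogged < ActiveRecord::Migration[7.1]
  def up
    execute "ALTER TABLE pecorino_leaky_buckets SET UNLOGGED"
    execute "ALTER TABLE pecorino_blocks SET UNLOGGED"
  end

  def down
    execute "ALTER TABLE pecorino_leaky_buckets SET LOGGED"
    execute "ALTER TABLE pecorino_blocks SET LOGGED"
  end
end
```

`SET UNLOGGED` is PostgreSQL-only; on SQLite there is no replication to shield, so this step can be skipped.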
data/lib/pecorino/block.rb
ADDED
@@ -0,0 +1,24 @@
+# frozen_string_literal: true
+
+# Provides access to Pecorino blocks - the same blocks which get set when a throttle triggers. The blocks
+# are just keys in the data store which have an expiry value. This can be useful if you want to restrict
+# access to a resource for an arbitrary timespan.
+class Pecorino::Block
+  # Sets a block for the given key. The block will also be seen by the Pecorino::Throttle with the same key
+  #
+  # @param key[String] the key to set the block for
+  # @param block_for[Float] the number of seconds or a time interval to block for
+  # @return [Time] the time when the block will be released
+  def self.set!(key:, block_for:)
+    Pecorino.adapter.set_block(key: key, block_for: block_for)
+    Time.now + block_for
+  end
+
+  # Returns the time until which a certain block is in effect
+  #
+  # @return [Time,nil] the time when the block will be released
+  def self.blocked_until(key:)
+    t = Pecorino.adapter.blocked_until(key: key)
+    (t && t > Time.now) ? t : nil
+  end
+end
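The expiry semantics of `set!` and `blocked_until` can be exercised in isolation; the sketch below uses a plain Hash in place of the database adapter (all names here are illustrative, not part of Pecorino's API):

```ruby
require "time"

# Hash standing in for Pecorino's database adapter (illustrative only)
STORE = {}

# Record a block that expires `block_for` seconds from now
def set_block(key:, block_for:)
  STORE[key] = Time.now + block_for
end

# An expired block reads back as nil, mirroring Pecorino::Block.blocked_until
def blocked_until(key:)
  t = STORE[key]
  (t && t > Time.now) ? t : nil
end

set_block(key: "vault", block_for: 30)
STORE["stale"] = Time.now - 5

blocked_until(key: "vault") # an active block reports its release time
blocked_until(key: "stale") # => nil - the expired block is invisible
```

The same "compare against now on read" trick is what lets Pecorino leave expired rows in place until `Pecorino.prune!` cleans them up.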
data/lib/pecorino/cached_throttle.rb
ADDED
@@ -0,0 +1,91 @@
+# The cached throttle can be used when you want to lift your throttle blocks into
+# a higher-level cache. If you are dealing with clients which are hammering on your
+# throttles a lot, it is useful to have a process-local cache of the timestamp when
+# the blocks that are set are going to expire. If you are running, say, 10 web app
+# containers - and someone is hammering at an endpoint which starts blocking -
+# you don't really need to query your DB for every request. The first request indicated
+# as "blocked" by Pecorino can write a cache entry into a shared in-memory table,
+# and all subsequent calls to the same process can reuse that `blocked_until` value
+# to quickly refuse the request.
+class Pecorino::CachedThrottle
+  # @param cache_store[ActiveSupport::Cache::Store] the store for the cached blocks. We recommend a MemoryStore per process.
+  # @param throttle[Pecorino::Throttle] the throttle to cache
+  def initialize(cache_store, throttle)
+    @cache_store = cache_store
+    @throttle = throttle
+  end
+
+  # @see Pecorino::Throttle#request!
+  def request!(n = 1)
+    blocked_state = read_cached_blocked_state
+    raise Pecorino::Throttle::Throttled.new(@throttle, blocked_state) if blocked_state&.blocked?
+
+    begin
+      @throttle.request!(n)
+    rescue Pecorino::Throttle::Throttled => throttled_ex
+      write_cache_blocked_state(throttled_ex.state) if throttled_ex.throttle == @throttle
+      raise
+    end
+  end
+
+  # Returns the cached `state` for the throttle if there is a currently active block for that throttle in the cache. Otherwise forwards to the underlying throttle.
+  #
+  # @see Pecorino::Throttle#request
+  def request(n = 1)
+    blocked_state = read_cached_blocked_state
+    return blocked_state if blocked_state&.blocked?
+
+    @throttle.request(n).tap do |state|
+      write_cache_blocked_state(state) if state.blocked_until
+    end
+  end
+
+  # Returns `false` if there is a currently active block for that throttle in the cache. Otherwise forwards to the underlying throttle.
+  #
+  # @see Pecorino::Throttle#able_to_accept?
+  def able_to_accept?(n = 1)
+    blocked_state = read_cached_blocked_state
+    return false if blocked_state&.blocked?
+
+    @throttle.able_to_accept?(n)
+  end
+
+  # Does not run the block if there is a currently active block for that throttle in the cache. Otherwise forwards to the underlying throttle.
+  #
+  # @see Pecorino::Throttle#throttled
+  def throttled(&blk)
+    # We can't wrap the implementation of "throttled". Or - we can, but it will be obtuse.
+    return if request(1).blocked?
+    yield
+  end
+
+  # Returns the key of the throttle
+  #
+  # @see Pecorino::Throttle#key
+  def key
+    @throttle.key
+  end
+
+  # Returns the cached state if there is a currently active block for that throttle in the cache. Otherwise forwards to the underlying throttle.
+  #
+  # @see Pecorino::Throttle#state
+  def state
+    blocked_state = read_cached_blocked_state
+    return blocked_state if blocked_state&.blocked?
+
+    @throttle.state.tap do |state|
+      write_cache_blocked_state(state) if state.blocked?
+    end
+  end
+
+  private
+
+  def write_cache_blocked_state(state)
+    @cache_store.write("pecorino-cached-throttle-state-#{@throttle.key}", state, expires_at: state.blocked_until)
+  end
+
+  def read_cached_blocked_state
+    @cache_store.read("pecorino-cached-throttle-state-#{@throttle.key}")
+  end
+end
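The read-before-forward pattern used throughout this class can be sketched stand-alone; the classes below are illustrative stand-ins, not Pecorino's real classes, showing how a cached blocked state short-circuits calls to the underlying, database-backed throttle:

```ruby
require "time"

# Stand-in for Throttle::State: blocked while blocked_until lies in the future
BlockedState = Struct.new(:blocked_until) do
  def blocked?
    !!(blocked_until && blocked_until > Time.now)
  end
end

# Counts how often it is hit, standing in for the DB-backed throttle
class CountingThrottle
  attr_reader :calls

  def initialize(state)
    @state = state
    @calls = 0
  end

  def request(_n = 1)
    @calls += 1
    @state
  end
end

# Read-before-forward: consult the cached blocked state before going downstream
class CachingWrapper
  def initialize(throttle)
    @throttle = throttle
    @cache = nil
  end

  def request(n = 1)
    return @cache if @cache&.blocked? # short-circuit: no downstream call
    @throttle.request(n).tap { |state| @cache = state if state.blocked? }
  end
end

throttle = CountingThrottle.new(BlockedState.new(Time.now + 60))
wrapper = CachingWrapper.new(throttle)
3.times { wrapper.request }
throttle.calls # => 1 - only the first request reaches the underlying throttle
```

In the real class the cache is an `ActiveSupport::Cache::Store` shared per process, so every request handled by that process after the first one is refused without a database roundtrip.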
data/lib/pecorino/throttle.rb
CHANGED
@@ -6,23 +6,28 @@
 # the block is lifted. The block time can be arbitrarily higher or lower than the amount
 # of time it takes for the leaky bucket to leak out
 class Pecorino::Throttle
-
-
-  #
-
-
-
+  # The state represents a snapshot of the throttle state in time
+  class State
+    # @return [Time]
+    attr_reader :blocked_until
+
+    def initialize(blocked_until)
+      @blocked_until = blocked_until
     end
 
-    #
-    #
+    # Tells whether this throttle still is in the blocked state.
+    # If the `blocked_until` value lies in the past, the method will
+    # return `false` - this is done so that the `State` can be cached.
     #
-    # @return [
-    def
-      (blocked_until
+    # @return [Boolean]
+    def blocked?
+      !!(@blocked_until && @blocked_until > Time.now)
     end
   end
 
+  # {Pecorino::Throttle} will raise this exception from `request!`. The exception can be used
+  # to do matching, for setting appropriate response headers, and for distinguishing between
+  # multiple different throttles.
   class Throttled < StandardError
     # Returns the throttle which raised the exception. Can be used to disambiguate between
     # multiple Throttled exceptions when multiple throttles are applied in a layered fashion:
@@ -34,21 +39,63 @@ class Pecorino::Throttle
     #   db_insert_throttle.request!(n_items_to_insert)
     # rescue Pecorino::Throttled => e
     #   deliver_notification(user) if e.throttle == user_email_throttle
+    #   firewall.ban_ip(ip) if e.throttle == ip_addr_throttle
     # end
     #
     # @return [Throttle]
     attr_reader :throttle
 
-    # Returns the
-
+    # Returns the throttle state based on which the exception is getting raised. This can
+    # be used for caching the exception, because the state can tell when the block will be
+    # lifted. This can be used to shift the throttle verification into a faster layer of the
+    # system (like a blocklist in a firewall) or to cache the state in an upstream cache. A block
+    # in Pecorino is set once and is active until expiry. If your service is under an attack
+    # and you know that the call is blocked until a certain future time, the block can be
+    # lifted up into a faster/cheaper storage destination, like the Rails cache:
+    #
+    # @example
+    #   begin
+    #     ip_addr_throttle.request!
+    #   rescue Pecorino::Throttled => e
+    #     firewall.ban_ip(request.ip, ttl_seconds: e.retry_after)
+    #     render :rate_limit_exceeded
+    #   end
+    #
+    # @example
+    #   state = Rails.cache.read(ip_addr_throttle.key)
+    #   return render :rate_limit_exceeded if state && state.blocked? # No need to call Pecorino for this
+    #
+    #   begin
+    #     ip_addr_throttle.request!
+    #   rescue Pecorino::Throttled => e
+    #     Rails.cache.write(ip_addr_throttle.key, e.state, expires_in: (e.state.blocked_until - Time.now))
+    #     render :rate_limit_exceeded
+    #   end
+    #
+    # @return [Throttle::State]
+    attr_reader :state
 
     def initialize(from_throttle, state)
       @throttle = from_throttle
-      @
+      @state = state
       super("Block in effect until #{state.blocked_until.iso8601}")
     end
+
+    # Returns the `retry_after` value in seconds, suitable for use in an HTTP header
+    #
+    # @return [Integer]
+    def retry_after
+      (@state.blocked_until - Time.now).ceil
+    end
   end
 
+  # The key for that throttle. Each key defines a unique throttle based on either a given name or
+  # discriminators. If there is a component you want to key your throttle by, include it in the
+  # `key` keyword argument to the constructor, like `"t-ip-#{request.ip}"`
+  #
+  # @return [String]
+  attr_reader :key
+
   # @param key[String] the key for both the block record and the leaky bucket
   # @param block_for[Numeric] the number of seconds to block any further requests for. Defaults to time it takes
   #   the bucket to leak out to the level of 0
@@ -73,8 +120,8 @@ class Pecorino::Throttle
   end
 
   # Register that a request is being performed. Will raise Throttled
-  # if there is a block in place
-  # and
+  # if there is a block in place for that throttle, or if the bucket cannot accept
+  # this fillup and the block has just been installed as a result of this particular request.
   #
   # The exception can be rescued later to provide a 429 response. This method is better
   # to use before performing the unit of work that the throttle is guarding:
@@ -89,11 +136,11 @@ class Pecorino::Throttle
   #
   # If the method call succeeds it means that the request is not getting throttled.
   #
-  # @return
+  # @return [State] the state of the throttle after filling up the leaky bucket / trying to pass the block
   def request!(n = 1)
-
-
-
+    request(n).tap do |state_after|
+      raise Throttled.new(self, state_after) if state_after.blocked?
+    end
   end
 
   # Register that a request is being performed. Will not raise any exceptions but return
@@ -109,7 +156,7 @@ class Pecorino::Throttle
   #
   # @return [State] the state of the throttle after filling up the leaky bucket / trying to pass the block
   def request(n = 1)
-    existing_blocked_until = Pecorino.
+    existing_blocked_until = Pecorino::Block.blocked_until(key: @key)
     return State.new(existing_blocked_until.utc) if existing_blocked_until
 
     # Topup the leaky bucket, and if the topup gets rejected - block the caller
@@ -118,8 +165,22 @@ class Pecorino::Throttle
       State.new(nil)
     else
       # and set the block if the fillup was rejected
-      fresh_blocked_until = Pecorino.
+      fresh_blocked_until = Pecorino::Block.set!(key: @key, block_for: @block_for)
       State.new(fresh_blocked_until.utc)
     end
   end
+
+  # Fillup the throttle with 1 request and then perform the passed block. This is useful for performing actions which should
+  # be rate-limited - alerts, calls to external services and the like. If the call is allowed to proceed,
+  # the passed block will be executed. If the throttle is in the blocked state, or if the call puts the throttle into
+  # the blocked state, the block will not be executed.
+  #
+  # @example
+  #   t.throttled { Slack.alert("Things are going wrong") }
+  #
+  # @return [Object] the return value of the block if the block gets executed, or `nil` if the call got throttled
+  def throttled(&blk)
+    return if request(1).blocked?
+    yield
+  end
 end
data/lib/pecorino/version.rb
CHANGED
data/lib/pecorino.rb
CHANGED
@@ -4,13 +4,15 @@ require "active_support/concern"
 require "active_record/sanitization"
 
 require_relative "pecorino/version"
-require_relative "pecorino/leaky_bucket"
-require_relative "pecorino/throttle"
 require_relative "pecorino/railtie" if defined?(Rails::Railtie)
 
 module Pecorino
   autoload :Postgres, "pecorino/postgres"
   autoload :Sqlite, "pecorino/sqlite"
+  autoload :LeakyBucket, "pecorino/leaky_bucket"
+  autoload :Block, "pecorino/block"
+  autoload :Throttle, "pecorino/throttle"
+  autoload :CachedThrottle, "pecorino/cached_throttle"
 
   # Deletes stale leaky buckets and blocks which have expired. Run this method regularly to
   # avoid accumulating too many unused rows in your tables.
metadata
CHANGED
@@ -1,14 +1,14 @@
 --- !ruby/object:Gem::Specification
 name: pecorino
 version: !ruby/object:Gem::Version
-  version: 0.
+  version: 0.6.0
 platform: ruby
 authors:
 - Julik Tarkhanov
 autorequire:
 bindir: exe
 cert_chain: []
-date: 2024-
+date: 2024-03-12 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: activerecord
@@ -154,6 +154,8 @@ files:
 - README.md
 - Rakefile
 - lib/pecorino.rb
+- lib/pecorino/block.rb
+- lib/pecorino/cached_throttle.rb
 - lib/pecorino/install_generator.rb
 - lib/pecorino/leaky_bucket.rb
 - lib/pecorino/migrations/create_pecorino_tables.rb.erb
@@ -185,7 +187,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
 - !ruby/object:Gem::Version
   version: '0'
 requirements: []
-rubygems_version: 3.
+rubygems_version: 3.4.10
 signing_key:
 specification_version: 4
 summary: Database-based rate limiter using leaky buckets