prorate 0.3.0 → 0.7.1
- checksums.yaml +5 -5
- data/.rubocop.yml +2 -0
- data/.travis.yml +7 -5
- data/CHANGELOG.md +45 -0
- data/README.md +128 -17
- data/Rakefile +14 -2
- data/lib/prorate.rb +0 -2
- data/lib/prorate/leaky_bucket.lua +77 -0
- data/lib/prorate/leaky_bucket.rb +134 -0
- data/lib/prorate/null_logger.rb +5 -0
- data/lib/prorate/null_pool.rb +3 -1
- data/lib/prorate/rate_limit.lua +39 -27
- data/lib/prorate/throttle.rb +142 -47
- data/lib/prorate/throttled.rb +20 -0
- data/lib/prorate/version.rb +1 -1
- data/prorate.gemspec +7 -5
- data/scripts/bm.rb +1 -1
- data/scripts/bm_latency_lb_vs_mget.rb +5 -7
- data/scripts/reload_lua.rb +1 -1
- metadata +55 -23
checksums.yaml
CHANGED

@@ -1,7 +1,7 @@
 ---
-…
-  metadata.gz: …
-  data.tar.gz: …
+SHA256:
+  metadata.gz: 7e00071a8bb75be7ca3c74ecaf662e049f5c55cae7cb867c9851062efd5b8073
+  data.tar.gz: 277671f9b2dcce7d032e9f9f38b294ced2ca953cd0f85b940d488f3ff306ee38
 SHA512:
-  metadata.gz: …
-  data.tar.gz: …
+  metadata.gz: 40e22d2cdb70cb407b7cc2135624a373e95e83514cfa8bb2f42a6106b40640b458a017212967fb533ba080e3e4707199a1fb1e2414b68f1289e1d24b15b05fbc
+  data.tar.gz: 269d8ef7d384c08c928d3bed3412b2e7ae470e696ff03042744c79bd696962a9a7a82d099de8a153aef344029f40e8640d4e772bee90054b64b4f0183c96d2c9
data/.rubocop.yml
ADDED
data/.travis.yml
CHANGED

@@ -1,7 +1,10 @@
 rvm:
-- 2.2
-- 2.3
-- 2.4
+- 2.2
+- 2.3
+- 2.4
+- 2.5
+- 2.6
+- 2.7
 
 services:
 - redis
@@ -10,6 +13,5 @@ dist: trusty # https://docs.travis-ci.com/user/trusty-ci-environment/
 sudo: false
 cache: bundler
 
-# Travis permits the following phases: before_install, install, after_install, before_script, script, after_script
 script:
-- bundle exec
+- bundle exec rake
data/CHANGELOG.md
ADDED

# 0.7.1

* Fix use of a ConnectionPool as `redis:` argument, which was broken in 0.7.0
* Use the Lua KEYS argument in `rate_limit.lua` for future-proof clustering support
  instead of computing the touched keys inside the Lua script.

# 0.7.0

* Add a naked `LeakyBucket` object which allows one to build sophisticated rate limiting relying
  more on the Ruby side of things. It has fewer features than the `Throttle` but can be used for more
  fine-grained control of the throttling. It also does not use exceptions for flow control.
  The `Throttle` object used them because it should make the code abort *loudly* if a throttle is hit, but
  when the objective is to measure instead, a smaller, less opinionated module can be more useful.
* Refactor the internals of the Throttle class so that it uses a default Logger, and document the arguments.
* Use fractional time measurement from Redis in Lua code. For our throttle to be precise we cannot really
  limit ourselves to "anchored slots" on the start of a second, and we would effectively have been doing that
  with our previous setup.
* Fix the `redis` gem deprecation warnings when using `exists` - we will now use `exists?` if available.
* Remove dependency on the `ks` gem as we can use vanilla Structs or classes instead.

# 0.6.0

* Add `Throttle#status` method for retrieving the status of a throttle without placing any tokens
  or raising any exceptions. This is useful for layered throttles.

# 0.5.0

* Allow setting the number of tokens to add to the bucket in `Throttle#throttle!` - this is useful because
  sometimes a request effectively uses N of some resource in one go, and should thus cause a throttle
  to fire without having to do repeated calls.

# 0.4.0

* When raising a `Throttled` exception, add the name of the throttle to it. This is useful when multiple
  throttles are used together and one needs to find out which throttle has fired.
* Reformat code according to wetransfer_style and make it compulsory on CI.

# 0.3.0

* Replace the Ruby implementation of the throttle with a Lua script which runs within Redis. This allows us
  to do atomic gets+sets very rapidly.

# 0.1.0

* Initial release of Prorate
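The 0.7.0 entry about fractional time can be illustrated with a small, self-contained Ruby sketch (the timestamps and leak rate below are hypothetical, not values from the gem): with whole-second timestamps, two calls less than a second apart observe a time delta of zero, so no tokens leak between them.

```ruby
# Why fractional Redis TIME matters: two calls 0.4 s apart.
leak_rate = 10.0  # tokens per second (hypothetical)
t1 = 100.25       # first call, in fractional seconds
t2 = 100.65       # second call, 0.4 s later

# With whole-second timestamps the delta collapses to zero seconds,
# so nothing leaks between the two calls ("anchored slots" behaviour).
leak_with_whole_seconds = leak_rate * (t2.floor - t1.floor)

# With fractional timestamps roughly 4 tokens leak, as expected.
leak_with_fractional = leak_rate * (t2 - t1)
```

This is why the Lua scripts combine both elements of the Redis `TIME` reply (seconds and microseconds) into a single float.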
data/README.md
CHANGED

@@ -1,8 +1,13 @@
 # Prorate
 
-Provides a low-level time-based throttle. Is mainly meant for situations where
-…
+Provides a low-level time-based throttle. It is mainly meant for situations where
+using something like Rack::Attack is not very useful since you need access to
+more variables. Under the hood, this uses a Lua script that implements the
+[Leaky Bucket](https://en.wikipedia.org/wiki/Leaky_bucket) algorithm in a
+single-threaded and race-condition-safe way.
+
+[![Build Status](https://travis-ci.org/WeTransfer/prorate.svg?branch=master)](https://travis-ci.org/WeTransfer/prorate)
+[![Gem Version](https://badge.fury.io/rb/prorate.svg)](https://badge.fury.io/rb/prorate)
 
 ## Installation
 
@@ -14,29 +19,137 @@ gem 'prorate'
 
 And then execute:
 
-…
+```shell
+bundle install
+```
 
 Or install it yourself as:
 
-…
+```shell
+gem install prorate
+```
 
 ## Usage
 
+The simplest mode of operation is throttling an endpoint, using the throttler
+before the action happens.
+
 Within your Rails controller:
 
-…
+```ruby
+t = Prorate::Throttle.new(
+  redis: Redis.new,
+  logger: Rails.logger,
+  name: "throttle-login-email",
+  limit: 20,
+  period: 5.seconds
+)
+# Add all the parameters that function as a discriminator.
+t << request.ip << params.require(:email)
+# ...and call the throttle! method
+t.throttle! # Will raise a Prorate::Throttled exception if the limit has been reached
+#
+# Your regular action happens after this point
+```
+
+To capture that exception, in the controller:
+
+```ruby
+rescue_from Prorate::Throttled do |e|
+  response.set_header('Retry-After', e.retry_in_seconds.to_s)
+  render nothing: true, status: 429
+end
+```
+
+### Throttling and checking status
+
+More exquisite control can be achieved by combining throttling (see previous
+step) and - in subsequent calls - checking the status of the throttle before
+invoking the throttle. **When you call `throttle!`, you add tokens to the leaky bucket.**
+
+Let's say you have an endpoint that not only needs throttling, but you want to
+ban [credential stuffers](https://en.wikipedia.org/wiki/Credential_stuffing)
+outright. This is a multi-step process:
+
+1. Respond with a 429 if the discriminators of the request would land in an
+   already blocking 'credential-stuffing' throttle
+1. Run your regular throttling
+1. Perform your sign in action
+1. If the sign in was unsuccessful, add the discriminators to the
+   'credential-stuffing' throttle
+
+In your controller that would look like this:
+
+```ruby
+t = Prorate::Throttle.new(
+  redis: Redis.new,
+  logger: Rails.logger,
+  name: "credential-stuffing",
+  limit: 20,
+  period: 20.minutes
+)
+# Add all the parameters that function as a discriminator.
+t << request.ip
+# And before anything else, check whether it is throttled
+if t.status.throttled?
+  response.set_header('Retry-After', t.status.remaining_throttle_seconds.to_s)
+  render(nothing: true, status: 429) and return
+end
+
+# run your regular throttles for the endpoint
+other_throttles.map(&:throttle!)
+# Perform your sign in logic...
+
+user = YourSignInLogic.valid?(
+  email: params[:email],
+  password: params[:password]
+)
+
+# Add the request to the credential stuffing throttle if we didn't succeed
+t.throttle! unless user
+
+# the rest of your action
+```
 
 To capture that exception, in the controller
 
-…
+```ruby
+rescue_from Prorate::Throttled do |e|
+  response.set_header('Retry-After', e.retry_in_seconds.to_s)
+  render nothing: true, status: 429
+end
+```
+
+## Using just the leaky bucket
+
+There is also an object for using the heart of Prorate (the leaky bucket) without blocking or exceptions. This is useful
+if you want to implement a more generic rate limiting solution and customise it in a fancier way. The leaky bucket on
+its own provides the following conveniences only:
+
+* Tracks the number of tokens added and the number of tokens that have leaked
+* Tracks whether a specific token fillup has overflowed the bucket. This is only tracked momentarily if the bucket is limited
+
+Level and leak rate are computed and provided as Floats, instead of the Integers used by the Throttle class.
+To use it, employ the `LeakyBucket` object:
+
+```ruby
+# The leak_rate is in tokens per second
+leaky_bucket = Prorate::LeakyBucket.new(redis: Redis.new, redis_key_prefix: "user123", leak_rate: 0.8, bucket_capacity: 2)
+leaky_bucket.state.level #=> will return 0.0
+leaky_bucket.state.full? #=> will return false
+state_after_add = leaky_bucket.fillup(2) #=> returns a BucketState object
+state_after_add.full? #=> will return true
+state_after_add.level #=> will return 2.0
+```
+
+## Why Lua?
+
+Prorate implements throttling using the "Leaky Bucket" algorithm, which is described extensively [here](https://github.com/WeTransfer/prorate/blob/master/lib/prorate/throttle.rb). The implementation uses a Lua script, because Lua is the only language available which runs _inside_ Redis. Thanks to the speed benefits of Lua the script runs fast enough to apply it on every throttle call.
+
+Using a Lua script in Prorate helps us achieve the following guarantees:
+
+- **The script will run atomically.** The script is evaluated as a single Redis command. This ensures that the commands in the Lua script will never be interleaved with another client: they will always execute together.
+- **Any usages of time will use the Redis time.** Throttling requires a consistent and monotonic _time source_. The only monotonic and consistent time source which is usable in the context of Prorate is the `TIME` result of Redis itself. We are throttling requests from different machines, which will invariably have clock drift between them. Using the Redis server `TIME` helps achieve consistency.
 
 ## Development
 
@@ -48,8 +161,6 @@ To install this gem onto your local machine, run `bundle exec rake install`. To
 
 Bug reports and pull requests are welcome on GitHub at https://github.com/WeTransfer/prorate.
 
-…
 ## License
 
 The gem is available as open source under the terms of the [MIT License](http://opensource.org/licenses/MIT).
-…
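The leaky bucket arithmetic the README describes (leak tokens for the elapsed time, add the fillup, clamp between zero and capacity) can be sketched in plain Ruby without Redis. This is an illustrative in-memory model under assumed names, not the gem's implementation - Prorate keeps this state in Redis and runs the same math in Lua for atomicity:

```ruby
# In-memory sketch of the leaky bucket arithmetic used by Prorate.
# The injectable clock makes the leak deterministic for demonstration.
class MemoryLeakyBucket
  def initialize(leak_rate:, capacity:, clock: -> { Time.now.to_f })
    @leak_rate = leak_rate.to_f # tokens leaked per second
    @capacity = capacity.to_f
    @clock = clock
    @level = 0.0
    @last_updated = @clock.call
  end

  # Leak tokens for the time elapsed since the last update, then add
  # the fillup, clamping the level between 0 and the capacity.
  # Returns the new level as a Float.
  def fillup(n_tokens)
    now = @clock.call
    dt = now - @last_updated
    @level = (@level - @leak_rate * dt + n_tokens).clamp(0.0, @capacity)
    @last_updated = now
    @level
  end

  def full?
    @level >= @capacity
  end
end
```

Calling `fillup(0)` only applies the leak, which is how a pure "level check" works in the real `LeakyBucket#state` as well.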
data/Rakefile
CHANGED

@@ -1,6 +1,18 @@
 require "bundler/gem_tasks"
 require "rspec/core/rake_task"
+require 'rubocop/rake_task'
+require 'yard'
 
-…
+YARD::Rake::YardocTask.new(:doc) do |t|
+  # The dash has to be between the two to "divide" the source files and
+  # miscellaneous documentation files that contain no code
+  t.files = ['lib/**/*.rb', '-', 'LICENSE.txt', 'CHANGELOG.md']
+end
 
-…
+RSpec::Core::RakeTask.new(:spec) do |spec|
+  spec.rspec_opts = ["-c", "--order=rand"]
+  spec.pattern = FileList['spec/**/*_spec.rb']
+end
+
+RuboCop::RakeTask.new(:rubocop)
+task default: [:spec, :rubocop]
data/lib/prorate.rb
CHANGED

data/lib/prorate/leaky_bucket.lua
ADDED

-- Single threaded Leaky Bucket implementation (without blocking).
-- keys: bucket_level_key, last_updated_key
-- args: leak_rate, fillup, bucket_capacity. To just verify the state of the bucket a fillup of 0 may be passed.
-- returns: the level of the bucket in number of tokens, and whether the bucket is at capacity

-- this is required to be able to use TIME and writes; basically it lifts the script into IO
redis.replicate_commands()

-- Redis documentation recommends passing the keys separately so that Redis
-- can - in the future - verify that they live on the same shard of a cluster, and
-- raise an error if they are not. As far as can be understood this functionality is not
-- yet present, but if we can make a little effort to make ourselves more future proof
-- we should.
local bucket_level_key = KEYS[1]
local last_updated_key = KEYS[2]

local leak_rate = tonumber(ARGV[1])
local fillup = tonumber(ARGV[2]) -- How many tokens this call adds to the bucket.
local bucket_capacity = tonumber(ARGV[3]) -- How many tokens the bucket is allowed to contain

-- Compute the key TTL for the bucket. We are interested in how long it takes the bucket
-- to leak all the way to 0, as this is the time when the values stay relevant. We pad with 1 second
-- to have a little cushion.
local key_lifetime = math.ceil((bucket_capacity / leak_rate) + 1)

-- Take a timestamp
local redis_time = redis.call("TIME") -- Array of [seconds, microseconds]
local now = tonumber(redis_time[1]) + (tonumber(redis_time[2]) / 1000000)

-- get current bucket level. The throttle key might not exist yet in which
-- case we default to 0
local bucket_level = tonumber(redis.call("GET", bucket_level_key)) or 0

-- ...and then perform the leaky bucket fillup/leak. We need to do this also when the bucket has
-- just been created because the initial fillup to add might be so high that it will
-- immediately overflow the bucket and trigger the throttle, on the first call.
local last_updated = tonumber(redis.call("GET", last_updated_key)) or now -- use sensible default of 'now' if the key does not exist

-- Subtract the number of tokens leaked since the last call...
local dt = now - last_updated
local new_bucket_level = bucket_level - (leak_rate * dt) + fillup

-- ...add the tokens we fill up with, and cap the value between 0 and the capacity
new_bucket_level = math.max(0, math.min(bucket_capacity, new_bucket_level))

-- Since we return the bucket level as a string-formatted floating point number we
-- have some loss of precision in the formatting, even if the bucket was actually full.
-- This bit of information is useful to preserve.
local at_capacity = 0
if new_bucket_level == bucket_capacity then
  at_capacity = 1
end

-- If both the initial level was 0, and the level after putting tokens in is 0 we
-- can avoid setting keys in Redis at all as this was only a level check.
if new_bucket_level == 0 and bucket_level == 0 then
  return {"0.0", at_capacity}
end

-- Save the new bucket level
redis.call("SETEX", bucket_level_key, key_lifetime, new_bucket_level)

-- Record when we updated the bucket so that the amount of tokens leaked
-- can be correctly determined on the next invocation
redis.call("SETEX", last_updated_key, key_lifetime, now)

-- Most Redis adapters when used with the Lua interface truncate floats
-- to integers (at least in Python that is documented to be the case in
-- the Redis ebook here
-- https://redislabs.com/ebook/part-3-next-steps/chapter-11-scripting-redis-with-lua/11-1-adding-functionality-without-writing-c
-- We need access to the bucket level as a float value since our leak rate might as well be floating point, and to achieve that
-- we can go two ways. We can turn the float into a Lua string, and then parse it on the other side, or we can convert it to
-- a tuple of two integer values - one for the integer component and one for the fraction.
-- Now, the unpleasant aspect is that when we do this we will lose precision - the number is not going to be
-- exactly equal to capacity, thus we lose the bit of information which tells us whether we filled up the bucket or not.
-- Also, the only moment we can register whether the bucket is above capacity is now - in this script - since
-- by the next call some tokens will have leaked.
return {string.format("%.9f", new_bucket_level), at_capacity}

data/lib/prorate/leaky_bucket.rb
ADDED

module Prorate
  # This offers just the leaky bucket implementation with fill control, but without the timed lock.
  # It does not raise any exceptions, it just tracks the state of a leaky bucket in Redis.
  #
  # Important differences from the more full-featured Throttle class are:
  #
  # * No logging (as most meaningful code lives in Lua anyway)
  # * No timed block - if you need to keep track of timed blocking it can be done externally
  # * Leak rate is specified directly in tokens per second, instead of specifying the block period.
  # * The bucket level is stored and returned as a Float which allows for finer-grained measurement,
  #   but more importantly - makes testing from the outside easier.
  #
  # It does have a few downsides compared to the Throttle though:
  #
  # * The bucket is only full momentarily. On subsequent calls some tokens will have leaked already, so you either
  #   need to do delta checks on the value or rely on putting the token into the bucket.
  class LeakyBucket
    LUA_SCRIPT_CODE = File.read(File.join(__dir__, "leaky_bucket.lua"))
    LUA_SCRIPT_HASH = Digest::SHA1.hexdigest(LUA_SCRIPT_CODE)

    class BucketState < Struct.new(:level, :full)
      # Returns the level of the bucket after the operation on the LeakyBucket
      # object has taken place. There is a guarantee that no tokens have leaked
      # from the bucket between the operation and the freezing of the BucketState
      # struct.
      #
      # @!attribute [r] level
      #   @return [Float]

      # Tells whether the bucket was detected to be full when the operation on
      # the LeakyBucket was performed. There is a guarantee that no tokens have leaked
      # from the bucket between the operation and the freezing of the BucketState
      # struct.
      #
      # @!attribute [r] full
      #   @return [Boolean]

      alias_method :full?, :full

      # Returns the bucket level of the bucket state as a Float
      #
      # @return [Float]
      def to_f
        level.to_f
      end

      # Returns the bucket level of the bucket state rounded to an Integer
      #
      # @return [Integer]
      def to_i
        level.to_i
      end
    end

    # Creates a new LeakyBucket. The object controls 2 keys in Redis: one
    # for the last access time, and one for the contents of the key.
    #
    # @param redis_key_prefix[String] the prefix that is going to be used for keys.
    #   If your bucket is specific to a user, a browser or an IP address you need to mix
    #   those values into the key prefix as appropriate.
    # @param leak_rate[Float] the leak rate of the bucket, in tokens per second
    # @param redis[Redis,#with] a Redis connection or a ConnectionPool instance
    #   if you are using the connection_pool gem. With a connection pool Prorate will
    #   check out a connection using `#with` and check it in when it's done.
    # @param bucket_capacity[Numeric] how many tokens the bucket is capped at.
    #   Filling up the bucket using `fillup()` will add to that number, but
    #   the bucket contents will then be capped at this value. So with
    #   bucket_capacity set to 12 and a `fillup(14)` the bucket will reach the level
    #   of 12, and will then immediately start leaking again.
    def initialize(redis_key_prefix:, leak_rate:, redis:, bucket_capacity:)
      @redis_key_prefix = redis_key_prefix
      @redis = redis.respond_to?(:with) ? redis : NullPool.new(redis)
      @leak_rate = leak_rate.to_f
      @capacity = bucket_capacity.to_f
    end

    # Places `n` tokens in the bucket.
    #
    # @return [BucketState] the state of the bucket after the operation
    def fillup(n_tokens)
      run_lua_bucket_script(n_tokens.to_f)
    end

    # Returns the current state of the bucket, containing the level and whether the bucket is full
    #
    # @return [BucketState] the state of the bucket after the operation
    def state
      run_lua_bucket_script(0)
    end

    # Returns the Redis key for the leaky bucket itself.
    # Note that the key is not guaranteed to contain a value if the bucket has not been filled
    # up recently.
    #
    # @return [String]
    def leaky_bucket_key
      "#{@redis_key_prefix}.leaky_bucket.bucket_level"
    end

    # Returns the Redis key under which the last updated time of the bucket gets stored.
    # Note that the key is not guaranteed to contain a value if the bucket has not been filled
    # up recently.
    #
    # @return [String]
    def last_updated_key
      "#{@redis_key_prefix}.leaky_bucket.last_updated"
    end

    private

    def run_lua_bucket_script(n_tokens)
      @redis.with do |r|
        begin
          # The script returns the level as a string-formatted float
          # to be able to smuggle the float across (similar to the Redis TIME command)
          level_str, is_full_int = r.evalsha(
            LUA_SCRIPT_HASH,
            keys: [leaky_bucket_key, last_updated_key], argv: [@leak_rate, n_tokens, @capacity])
          BucketState.new(level_str.to_f, is_full_int == 1)
        rescue Redis::CommandError => e
          if e.message.include? "NOSCRIPT"
            # The Redis server has never seen this script before. Needs to run only once in the entire lifetime
            # of the Redis server, until the script changes - in which case it will be loaded under a different SHA
            r.script(:load, LUA_SCRIPT_CODE)
            retry
          else
            raise e
          end
        end
      end
    end
  end
end
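The float-smuggling trick the script's closing comment describes can be seen in isolation: the Lua side formats the level with `string.format("%.9f", ...)` and the Ruby side parses it back with `to_f`. Ruby's `format` mirrors Lua's `string.format` here; the level value below is hypothetical:

```ruby
# Sketch of how the bucket level crosses the Lua -> Ruby boundary.
level  = 1.2345678912345        # hypothetical bucket level computed in Lua
wire   = format("%.9f", level)  # the string the script would return
parsed = wire.to_f              # what BucketState#level would receive
```

The round trip keeps nine decimal places, which also shows the precision loss the script compensates for with the separate `at_capacity` flag.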
data/lib/prorate/null_logger.rb
CHANGED
data/lib/prorate/null_pool.rb
CHANGED
data/lib/prorate/rate_limit.lua
CHANGED

@@ -1,5 +1,5 @@
 -- Single threaded Leaky Bucket implementation.
--- args: key_base, leak_rate, max_bucket_capacity, block_duration
+-- args: key_base, leak_rate, max_bucket_capacity, block_duration, n_tokens
 -- returns: an array of two integers, the first of which indicates the remaining block time.
 -- if the block time is nonzero, the second integer is always zero. If the block time is zero,
 -- the second integer indicates the level of the bucket
@@ -8,14 +8,25 @@
 redis.replicate_commands()
 -- make some nicer looking variable names:
 local retval = nil
-local bucket_level_key = ARGV[1] .. ".bucket_level"
-local last_updated_key = ARGV[1] .. ".last_updated"
-local block_key = ARGV[1] .. ".block"
-local max_bucket_capacity = tonumber(ARGV[2])
-local leak_rate = tonumber(ARGV[3])
-local block_duration = tonumber(ARGV[4])
-local now = tonumber(redis.call("TIME")[1]) --unix timestamp, will be required in all paths
 
+-- Redis documentation recommends passing the keys separately so that Redis
+-- can - in the future - verify that they live on the same shard of a cluster, and
+-- raise an error if they are not. As far as can be understood this functionality is not
+-- yet present, but if we can make a little effort to make ourselves more future proof
+-- we should.
+local bucket_level_key = KEYS[1]
+local last_updated_key = KEYS[2]
+local block_key = KEYS[3]
+
+-- and the config variables
+local max_bucket_capacity = tonumber(ARGV[1])
+local leak_rate = tonumber(ARGV[2])
+local block_duration = tonumber(ARGV[3])
+local n_tokens = tonumber(ARGV[4]) -- How many tokens this call adds to the bucket. Defaults to 1
+
+-- Take the Redis timestamp
+local redis_time = redis.call("TIME") -- Array of [seconds, microseconds]
+local now = tonumber(redis_time[1]) + (tonumber(redis_time[2]) / 1000000)
 local key_lifetime = math.ceil(max_bucket_capacity / leak_rate)
 
 local blocked_until = redis.call("GET", block_key)
@@ -23,28 +34,29 @@ if blocked_until then
   return {(tonumber(blocked_until) - now), 0}
 end
 
--- get current bucket level
-…
+-- get current bucket level. The throttle key might not exist yet in which
+-- case we default to 0
+local bucket_level = tonumber(redis.call("GET", bucket_level_key)) or 0
+
+-- ...and then perform the leaky bucket fillup/leak. We need to do this also when the bucket has
+-- just been created because the initial n_tokens to add might be so high that it will
+-- immediately overflow the bucket and trigger the throttle, on the first call.
+local last_updated = tonumber(redis.call("GET", last_updated_key)) or now -- use sensible default of 'now' if the key does not exist
+local new_bucket_level = math.max(0, bucket_level - (leak_rate * (now - last_updated)))
+
+if (new_bucket_level + n_tokens) <= max_bucket_capacity then
+  new_bucket_level = math.max(0, new_bucket_level + n_tokens)
+  retval = {0, math.ceil(new_bucket_level)}
 else
-  …
-  local new_bucket_level = math.max(0, bucket_level - (leak_rate * (now - last_updated)))
-  …
-  if (new_bucket_level + 1) <= max_bucket_capacity then
-    new_bucket_level = new_bucket_level + 1
-    retval = {0, math.ceil(new_bucket_level)}
-  else
-    redis.call("SETEX", block_key, block_duration, now + block_duration)
-    retval = {block_duration, 0}
-  end
-  redis.call("SETEX", bucket_level_key, key_lifetime, new_bucket_level) --still needs to be saved
+  redis.call("SETEX", block_key, block_duration, now + block_duration)
+  retval = {block_duration, 0}
 end
 
---
+-- Save the new bucket level
+redis.call("SETEX", bucket_level_key, key_lifetime, new_bucket_level)
+
+-- Record when we updated the bucket so that the amount of tokens leaked
+-- can be correctly determined on the next invocation
 redis.call("SETEX", last_updated_key, key_lifetime, now)
 
 return retval
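The decision flow of the new `rate_limit.lua` (return early while blocked, leak, then either accept `n_tokens` or set a block) can be mirrored in plain Ruby for illustration. The function name and the `state` Hash are hypothetical stand-ins for the three Redis keys; the real logic runs atomically inside Redis:

```ruby
# Illustrative Ruby mirror of rate_limit.lua's decision logic.
# Returns [remaining_block_time, bucket_level], like the script does.
def apply_rate_limit(state, now:, leak_rate:, capacity:, block_duration:, n_tokens: 1)
  # An active block short-circuits everything else
  if state[:blocked_until] && state[:blocked_until] > now
    return [state[:blocked_until] - now, 0]
  end

  # Leak tokens for the time elapsed since the last update
  dt = now - (state[:last_updated] || now)
  level = [0, (state[:level] || 0) - leak_rate * dt].max

  if level + n_tokens <= capacity
    level += n_tokens
    retval = [0, level.ceil]
  else
    # Overflow: set the block instead of adding the tokens
    state[:blocked_until] = now + block_duration
    retval = [block_duration, 0]
  end

  # The bucket level and update time are saved in both branches,
  # matching the SETEX calls the new script performs after the if/else.
  state[:level] = level
  state[:last_updated] = now
  retval
end
```

Note how, as in the Lua version, the leaked level is persisted even when the call overflows and triggers the block.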
data/lib/prorate/throttle.rb
CHANGED

@@ -1,70 +1,165 @@
 require 'digest'
 
 module Prorate
-  class …
-    attr_reader :retry_in_seconds
-    def initialize(try_again_in)
-      @retry_in_seconds = try_again_in
-      super("Throttled, please lower your temper and try again in #{retry_in_seconds} seconds")
-    end
+  class MisconfiguredThrottle < StandardError
   end
 
-  class …
-…
+  class Throttle
+    LUA_SCRIPT_CODE = File.read(File.join(__dir__, "rate_limit.lua"))
+    LUA_SCRIPT_HASH = Digest::SHA1.hexdigest(LUA_SCRIPT_CODE)
 
-…
-  end
+    attr_reader :name, :limit, :period, :block_for, :redis, :logger
 
-…
+    def initialize(name:, limit:, period:, block_for:, redis:, logger: Prorate::NullLogger)
+      @name = name.to_s
+      @discriminators = [name.to_s]
+      @redis = redis.respond_to?(:with) ? redis : NullPool.new(redis)
+      @logger = logger
+      @block_for = block_for
 
-…
-      script_filepath = File.join(__dir__, "rate_limit.lua")
-      script = File.read(script_filepath)
-      Digest::SHA1.hexdigest(script)
-    end
+      raise MisconfiguredThrottle if (period <= 0) || (limit <= 0)
 
-…
+      # Do not do type conversions here since we want to allow the caller to read
+      # those values back later
+      # (API contract which the previous implementation of Throttle already supported)
+      @limit = limit
+      @period = period
 
-    def initialize(*)
-      super
-      @discriminators = [name.to_s]
-      self.redis = NullPool.new(redis) unless redis.respond_to?(:with)
-      raise MisconfiguredThrottle if ((period <= 0) || (limit <= 0))
       @leak_rate = limit.to_f / period # tokens per second;
     end
-…
+
+    # Add a value that will be used to distinguish this throttle from others.
+    # It has to be something user- or connection-specific, and multiple
+    # discriminators can be combined:
+    #
+    #    throttle << ip_address << user_agent_fingerprint
+    #
+    # @param discriminator[Object] a Ruby object that can be marshaled
+    #   in an equivalent way between requests, using `Marshal.dump`
     def <<(discriminator)
       @discriminators << discriminator
     end
-…
+
+    # Applies the throttle and raises a {Throttled} exception if it has been triggered
+    #
+    # Accepts an optional number of tokens to put in the bucket (default is 1).
+    # The effect of `n_tokens:` set to 0 is a "ping".
+    # It makes sure the throttle keys in Redis get created and adjusts the
+    # last invoked time of the leaky bucket. Can be used when a throttle
+    # is applied in a "shadow" fashion. For example, imagine you
+    # have a cascade of throttles with the following block times:
+    #
+    #    Throttle A: [-------]
+    #    Throttle B: [----------]
+    #
+    # You apply Throttle A, and it fires, but when that happens you also
+    # want to enable a throttle that is applied to "repeat offenders" only -
+    # for instance ones that probe for tokens and/or passwords.
+    #
+    #    Throttle C: [-------------------------------]
+    #
+    # If your "Throttle A" fires, you can trigger Throttle C
+    #
+    #    Throttle A: [-----|-]
+    #    Throttle C: [-----|-------------------------]
+    #
+    # because you know that Throttle A has fired and thus Throttle C comes
+    # into effect. What you want to do, however, is to fire Throttle C
+    # even though Throttle A would have unlatched, which would create this
+    # call sequence:
+    #
+    #    Throttle A: [-------] *(A not triggered)
+    #    Throttle C: [------------|------------------]
+    #
+    # To achieve that you can keep Throttle C alive using `throttle!(n_tokens: 0)`,
+    # on every check that touches Throttle A and/or Throttle C. It keeps the leaky bucket
+    # updated but does not add any tokens to it:
+    #
+    #    Throttle A: [------] *(A not triggered since block period has ended)
+    #    Throttle C: [-----------|(ping)------------------] C is still blocking
+    #
+    # So you can effectively "keep a throttle alive" without ever triggering it,
+    # or keep it alive in combination with other throttles.
+    #
+    # @param n_tokens[Integer] the number of tokens to put in the bucket. If you are
+    #   using Prorate for rate limiting, and a single request is adding N objects to your
+    #   database for example, you can "top up" the bucket with a set number of tokens
+    #   with an arbitrary ratio - like 1 token per inserted row. Once the bucket fills up
+    #   the Throttled exception is going to be raised. Defaults to 1.
+    def throttle!(n_tokens: 1)
+      @logger.debug { "Applying throttle counter %s" % @name }
+      remaining_block_time, bucket_level = run_lua_throttler(
+        identifier: identifier,
+        bucket_capacity: @limit,
+        leak_rate: @leak_rate,
+        block_for: @block_for,
+        n_tokens: n_tokens)
+
+      if remaining_block_time > 0
+        @logger.warn do
+          "Throttle %s exceeded limit of %d in %d seconds and is blocked for the next %d seconds" % [@name, @limit, @period, remaining_block_time]
+        end
+        raise ::Prorate::Throttled.new(@name, remaining_block_time)
|
103
|
+
end
|
104
|
+
|
105
|
+
@limit - bucket_level # Return how many calls remain
|
106
|
+
end
|
107
|
+
|
108
|
+
def status
|
109
|
+
redis_block_key = "#{identifier}.block"
|
110
|
+
@redis.with do |r|
|
111
|
+
is_blocked = redis_key_exists?(r, redis_block_key)
|
112
|
+
if is_blocked
|
113
|
+
remaining_seconds = r.get(redis_block_key).to_i - Time.now.to_i
|
114
|
+
Status.new(_is_throttled = true, remaining_seconds)
|
115
|
+
else
|
116
|
+
remaining_seconds = 0
|
117
|
+
Status.new(_is_throttled = false, remaining_seconds)
|
118
|
+
end
|
119
|
+
end
|
120
|
+
end
|
121
|
+
|
122
|
+
private
|
123
|
+
|
124
|
+
def identifier
|
41
125
|
discriminator = Digest::SHA1.hexdigest(Marshal.dump(@discriminators))
|
42
|
-
|
43
|
-
|
44
|
-
|
45
|
-
|
46
|
-
|
47
|
-
|
48
|
-
|
49
|
-
|
50
|
-
|
126
|
+
"#{@name}:#{discriminator}"
|
127
|
+
end
|
128
|
+
|
129
|
+
# redis-rb 4.2 started printing a warning for every single-argument use of `#exists`, because
|
130
|
+
# they intend to break compatibility in a future version (to return an integer instead of a
|
131
|
+
# boolean). The old behavior (returning a boolean) is available using the new `exists?` method.
|
132
|
+
def redis_key_exists?(redis, key)
|
133
|
+
return redis.exists?(key) if redis.respond_to?(:exists?)
|
134
|
+
redis.exists(key)
|
135
|
+
end
|
136
|
+
|
137
|
+
def run_lua_throttler(identifier:, bucket_capacity:, leak_rate:, block_for:, n_tokens:)
|
138
|
+
# Computing the identifier is somewhat involved so we should avoid doing it too often
|
139
|
+
id = identifier
|
140
|
+
bucket_level_key = "#{id}.bucket_level"
|
141
|
+
last_updated_key = "#{id}.last_updated"
|
142
|
+
block_key = "#{id}.block"
|
143
|
+
|
144
|
+
@redis.with do |redis|
|
145
|
+
begin
|
146
|
+
redis.evalsha(LUA_SCRIPT_HASH, keys: [bucket_level_key, last_updated_key, block_key], argv: [bucket_capacity, leak_rate, block_for, n_tokens])
|
147
|
+
rescue Redis::CommandError => e
|
148
|
+
if e.message.include? "NOSCRIPT"
|
149
|
+
# The Redis server has never seen this script before. Needs to run only once in the entire lifetime
|
150
|
+
# of the Redis server, until the script changes - in which case it will be loaded under a different SHA
|
151
|
+
redis.script(:load, LUA_SCRIPT_CODE)
|
152
|
+
retry
|
153
|
+
else
|
154
|
+
raise e
|
155
|
+
end
|
51
156
|
end
|
52
|
-
available_calls = limit - bucket_level
|
53
157
|
end
|
54
158
|
end
|
55
159
|
|
56
|
-
|
57
|
-
|
58
|
-
|
59
|
-
if e.message.include? "NOSCRIPT"
|
60
|
-
# The Redis server has never seen this script before. Needs to run only once in the entire lifetime of the Redis server (unless the script changes)
|
61
|
-
script_filepath = File.join(__dir__,"rate_limit.lua")
|
62
|
-
script = File.read(script_filepath)
|
63
|
-
raise ScriptHashMismatch if Digest::SHA1.hexdigest(script) != CURRENT_SCRIPT_HASH
|
64
|
-
redis.script(:load, script)
|
65
|
-
redis.evalsha(CURRENT_SCRIPT_HASH, [], [identifier, bucket_capacity, leak_rate, block_for])
|
66
|
-
else
|
67
|
-
raise e
|
160
|
+
class Status < Struct.new(:is_throttled, :remaining_throttle_seconds)
|
161
|
+
def throttled?
|
162
|
+
is_throttled
|
68
163
|
end
|
69
164
|
end
|
70
165
|
end
|
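The arithmetic behind `@leak_rate = limit.to_f / period` can be seen in isolation with a tiny in-memory model. This is a sketch for illustration only - Prorate performs these steps atomically inside Redis via the Lua script, and the class and method names below are made up:

```ruby
# A minimal in-memory sketch of leaky-bucket arithmetic: the bucket drains at
# a constant leak rate (capacity / period tokens per second) and each request
# pours tokens in; the throttle fires when the bucket would overflow.
class TinyLeakyBucket
  def initialize(capacity:, period:)
    @capacity = capacity
    @leak_rate = capacity.to_f / period # tokens per second, as in Throttle
    @level = 0.0
    @last_updated = 0.0
  end

  # Returns true if the bucket overflows (i.e. the throttle would trigger)
  def fillup(n_tokens, now:)
    # First leak out whatever has drained since the last update...
    @level = [@level - (now - @last_updated) * @leak_rate, 0.0].max
    @last_updated = now
    # ...then add the new tokens and check for overflow
    @level += n_tokens
    @level > @capacity
  end
end

# 4 tokens of capacity over 2 seconds -> leaks 2 tokens/sec. Five requests
# arriving 100 ms apart overflow the bucket on the fifth request.
bucket = TinyLeakyBucket.new(capacity: 4, period: 2)
overflowed = 5.times.map { |t| bucket.fillup(1, now: t * 0.1) }
```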
data/lib/prorate/throttled.rb
ADDED
@@ -0,0 +1,20 @@
+# The Throttled exception gets raised when a throttle is triggered.
+#
+# The exception carries additional attributes which can be used for
+# error tracking and for creating a correct Retry-After HTTP header for
+# a 429 response.
+class Prorate::Throttled < StandardError
+  # @attr [String] the name of the throttle (like "shpongs-per-ip").
+  #   Can be used to detect which throttle has fired when multiple
+  #   throttles are used within the same block.
+  attr_reader :throttle_name
+
+  # @attr [Integer] for how long the caller will be blocked, in seconds.
+  attr_reader :retry_in_seconds
+
+  def initialize(throttle_name, try_again_in)
+    @throttle_name = throttle_name
+    @retry_in_seconds = try_again_in
+    super("Throttled, please lower your temper and try again in #{retry_in_seconds} seconds")
+  end
+end
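The new exception is designed to be rescued at the HTTP edge and turned into a 429 response. A minimal sketch: the exception class is reproduced from the diff above, while the Rack-style response triple and the `rack_response_for` helper are illustrative, not part of Prorate:

```ruby
# Stand-in for the gem: define the exception exactly as added in this release.
module Prorate; end

class Prorate::Throttled < StandardError
  attr_reader :throttle_name
  attr_reader :retry_in_seconds

  def initialize(throttle_name, try_again_in)
    @throttle_name = throttle_name
    @retry_in_seconds = try_again_in
    super("Throttled, please lower your temper and try again in #{retry_in_seconds} seconds")
  end
end

# Illustrative helper: map the exception to a 429 with a Retry-After header.
def rack_response_for(error)
  [429, { "Retry-After" => error.retry_in_seconds.to_s }, [error.message]]
end

begin
  # In a real app this would be raised by throttle.throttle!
  raise Prorate::Throttled.new("login-by-ip", 30)
rescue Prorate::Throttled => e
  status, headers, _body = rack_response_for(e)
end
```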
data/lib/prorate/version.rb
CHANGED
data/prorate.gemspec
CHANGED
@@ -1,4 +1,4 @@
-
+
 lib = File.expand_path('../lib', __FILE__)
 $LOAD_PATH.unshift(lib) unless $LOAD_PATH.include?(lib)
 require 'prorate/version'
@@ -27,10 +27,12 @@ Gem::Specification.new do |spec|
   spec.executables = spec.files.grep(%r{^exe/}) { |f| File.basename(f) }
   spec.require_paths = ["lib"]
 
-  spec.add_dependency "ks"
   spec.add_dependency "redis", ">= 2"
-  spec.add_development_dependency "connection_pool", "~>
-  spec.add_development_dependency "bundler"
-  spec.add_development_dependency "rake", "~>
+  spec.add_development_dependency "connection_pool", "~> 2"
+  spec.add_development_dependency "bundler"
+  spec.add_development_dependency "rake", "~> 13.0"
   spec.add_development_dependency "rspec", "~> 3.0"
+  spec.add_development_dependency 'wetransfer_style', '0.6.5'
+  spec.add_development_dependency 'yard', '~> 0.9'
+  spec.add_development_dependency 'pry', '~> 0.13.1'
 end
data/scripts/bm.rb
CHANGED
@@ -6,7 +6,7 @@ require 'redis'
 require 'securerandom'
 
 def average_ms(ary)
-  ary.map{|x| x*1000}.inject(0
+  ary.map { |x| x * 1000 }.inject(0, &:+) / ary.length
 end
 
 r = Redis.new
@@ -31,24 +31,23 @@ end
 puts average_ms times
 def key_for_ts(ts)
   "th:%s:%d" % [@id, ts]
-end
+end
 
 times = []
 15.times do
-  id = SecureRandom.hex(10)
   sec, _ = r.time # Use Redis time instead of the system timestamp, so that all the nodes are consistent
   ts = sec.to_i # All Redis results are strings
   k = key_for_ts(ts)
   times << Benchmark.realtime {
     r.multi do |txn|
-    # Increment the counter
+      # Increment the counter
       txn.incr(k)
       txn.expire(k, 120)
 
       span_start = ts - 120
       span_end = ts + 1
-      possible_keys = (span_start..span_end).map{|prev_time| key_for_ts(prev_time) }
-
+      possible_keys = (span_start..span_end).map { |prev_time| key_for_ts(prev_time) }
+
       # Fetch all the counter values within the time window. Despite the fact that this
       # will return thousands of elements for large sliding window sizes, the values are
       # small and an MGET in Redis is pretty cheap, so perf should stay well within limits.
@@ -58,4 +57,3 @@ times = []
 end
 
 puts average_ms times
-
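The sliding-window approach that bm.rb benchmarks can be modelled without Redis: one counter per second, summed over the window. In this sketch a Hash stands in for the Redis keys; the key naming mirrors the script's `key_for_ts`, but the helper lambdas and window bounds are illustrative:

```ruby
# One counter key per second, like bm.rb's key_for_ts ("th:%s:%d" % [id, ts]).
store = Hash.new(0)
window_seconds = 120

# INCR equivalent: bump the counter for this second
register_hit = ->(id, ts) { store["th:%s:%d" % [id, ts]] += 1 }

# MGET equivalent: sum the counters for the last `window_seconds` seconds
hits_in_window = ->(id, ts) {
  ((ts - window_seconds + 1)..ts).sum { |t| store["th:%s:%d" % [id, t]] }
}

register_hit.call("abc", 1000)
register_hit.call("abc", 1000)
register_hit.call("abc", 1060)
# At t=1060 the window reaches back to t=941, so all three hits count
count = hits_in_window.call("abc", 1060)
```

Unlike the leaky bucket, this needs `window_seconds` keys per client, which is exactly the MGET cost the benchmark compares against the single Lua call.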
data/scripts/reload_lua.rb
CHANGED
metadata
CHANGED
@@ -1,99 +1,127 @@
 --- !ruby/object:Gem::Specification
 name: prorate
 version: !ruby/object:Gem::Version
-  version: 0.3.0
+  version: 0.7.1
 platform: ruby
 authors:
 - Julik Tarkhanov
 autorequire:
 bindir: exe
 cert_chain: []
-date:
+date: 2020-07-20 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
-  name:
+  name: redis
   requirement: !ruby/object:Gem::Requirement
     requirements:
     - - ">="
       - !ruby/object:Gem::Version
-        version: '
+        version: '2'
   type: :runtime
   prerelease: false
   version_requirements: !ruby/object:Gem::Requirement
     requirements:
     - - ">="
       - !ruby/object:Gem::Version
-        version: '
+        version: '2'
 - !ruby/object:Gem::Dependency
-  name:
+  name: connection_pool
   requirement: !ruby/object:Gem::Requirement
     requirements:
-    - - "
+    - - "~>"
       - !ruby/object:Gem::Version
         version: '2'
-  type: :
+  type: :development
   prerelease: false
   version_requirements: !ruby/object:Gem::Requirement
     requirements:
-    - - "
+    - - "~>"
       - !ruby/object:Gem::Version
         version: '2'
 - !ruby/object:Gem::Dependency
-  name:
+  name: bundler
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - ">="
+      - !ruby/object:Gem::Version
+        version: '0'
+  type: :development
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - ">="
+      - !ruby/object:Gem::Version
+        version: '0'
+- !ruby/object:Gem::Dependency
+  name: rake
   requirement: !ruby/object:Gem::Requirement
     requirements:
     - - "~>"
       - !ruby/object:Gem::Version
-        version: '
+        version: '13.0'
   type: :development
   prerelease: false
   version_requirements: !ruby/object:Gem::Requirement
     requirements:
     - - "~>"
       - !ruby/object:Gem::Version
-        version: '
+        version: '13.0'
 - !ruby/object:Gem::Dependency
-  name:
+  name: rspec
   requirement: !ruby/object:Gem::Requirement
     requirements:
     - - "~>"
       - !ruby/object:Gem::Version
-        version: '
+        version: '3.0'
   type: :development
   prerelease: false
   version_requirements: !ruby/object:Gem::Requirement
     requirements:
     - - "~>"
       - !ruby/object:Gem::Version
-        version: '
+        version: '3.0'
 - !ruby/object:Gem::Dependency
-  name:
+  name: wetransfer_style
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - '='
+      - !ruby/object:Gem::Version
+        version: 0.6.5
+  type: :development
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - '='
+      - !ruby/object:Gem::Version
+        version: 0.6.5
+- !ruby/object:Gem::Dependency
+  name: yard
   requirement: !ruby/object:Gem::Requirement
     requirements:
     - - "~>"
       - !ruby/object:Gem::Version
-        version: '
+        version: '0.9'
   type: :development
   prerelease: false
   version_requirements: !ruby/object:Gem::Requirement
     requirements:
     - - "~>"
       - !ruby/object:Gem::Version
-        version: '
+        version: '0.9'
 - !ruby/object:Gem::Dependency
-  name:
+  name: pry
   requirement: !ruby/object:Gem::Requirement
     requirements:
     - - "~>"
       - !ruby/object:Gem::Version
-        version:
+        version: 0.13.1
   type: :development
   prerelease: false
   version_requirements: !ruby/object:Gem::Requirement
     requirements:
     - - "~>"
       - !ruby/object:Gem::Version
-        version:
+        version: 0.13.1
 description: Can be used to implement all kinds of throttles
 email:
 - me@julik.nl
@@ -103,7 +131,9 @@ extra_rdoc_files: []
 files:
 - ".gitignore"
 - ".rspec"
+- ".rubocop.yml"
 - ".travis.yml"
+- CHANGELOG.md
 - Gemfile
 - LICENSE.txt
 - README.md
@@ -111,10 +141,13 @@ files:
 - bin/console
 - bin/setup
 - lib/prorate.rb
+- lib/prorate/leaky_bucket.lua
+- lib/prorate/leaky_bucket.rb
 - lib/prorate/null_logger.rb
 - lib/prorate/null_pool.rb
 - lib/prorate/rate_limit.lua
 - lib/prorate/throttle.rb
+- lib/prorate/throttled.rb
 - lib/prorate/version.rb
 - prorate.gemspec
 - scripts/bm.rb
@@ -140,8 +173,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
     - !ruby/object:Gem::Version
       version: '0'
 requirements: []
-
-rubygems_version: 2.4.5.1
+rubygems_version: 3.0.6
 signing_key:
 specification_version: 4
 summary: Time-restricted rate limiter using Redis