prorate 0.3.0 → 0.7.1

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
1
1
  ---
2
- SHA1:
3
- metadata.gz: db453351faca0b61a4517795368fe719fe5c07bb
4
- data.tar.gz: 165c6088be69a4a3b291059aa4e872061e5f999f
2
+ SHA256:
3
+ metadata.gz: 7e00071a8bb75be7ca3c74ecaf662e049f5c55cae7cb867c9851062efd5b8073
4
+ data.tar.gz: 277671f9b2dcce7d032e9f9f38b294ced2ca953cd0f85b940d488f3ff306ee38
5
5
  SHA512:
6
- metadata.gz: b9dd96de6c8915e8ef39f7737e930976d4f83909b8eb861456966ede6a2d62cd82f0b40b634af8da824bbd303d84e83205924d6ceb51369a17e82e6eec01523f
7
- data.tar.gz: 5e349bc7288a6da431d9ef7177fc77f2041638ace47fe319af1e533846472fe234e1760ccd3d26bb208ad3e649eb99ecbed7733dc14871f53f63e19dd4b512f7
6
+ metadata.gz: 40e22d2cdb70cb407b7cc2135624a373e95e83514cfa8bb2f42a6106b40640b458a017212967fb533ba080e3e4707199a1fb1e2414b68f1289e1d24b15b05fbc
7
+ data.tar.gz: 269d8ef7d384c08c928d3bed3412b2e7ae470e696ff03042744c79bd696962a9a7a82d099de8a153aef344029f40e8640d4e772bee90054b64b4f0183c96d2c9
@@ -0,0 +1,2 @@
1
+ inherit_gem:
2
+ wetransfer_style: ruby/default.yml
@@ -1,7 +1,10 @@
1
1
  rvm:
2
- - 2.2.5
3
- - 2.3.3
4
- - 2.4.1
2
+ - 2.2
3
+ - 2.3
4
+ - 2.4
5
+ - 2.5
6
+ - 2.6
7
+ - 2.7
5
8
 
6
9
  services:
7
10
  - redis
@@ -10,6 +13,5 @@ dist: trusty # https://docs.travis-ci.com/user/trusty-ci-environment/
10
13
  sudo: false
11
14
  cache: bundler
12
15
 
13
- # Travis permits the following phases: before_install, install, after_install, before_script, script, after_script
14
16
  script:
15
- - bundle exec rspec
17
+ - bundle exec rake
@@ -0,0 +1,45 @@
1
+ # 0.7.1
2
+
3
+ * Fix use of a ConnectionPool as `redis:` argument which was broken in 0.7.0
4
+ * Use the Lua KEYS argument in `rate_limit.lua` for future-proof clustering support
5
+ instead of computing the touched keys inside the Lua script.
6
+
7
+ # 0.7.0
8
+
9
+ * Add a naked `LeakyBucket` object which allows one to build sophisticated rate limiting relying
10
+ on the Ruby side of things more. It has fewer features than the `Throttle` but can be used for more
11
+ fine-grained control of the throttling. It also does not use exceptions for flow control.
12
+ The `Throttle` object uses them because it should make the code abort *loudly* if a throttle is hit, but
13
+ when the objective is to measure instead, a smaller, less opinionated module can be more useful.
14
+ * Refactor the internals of the Throttle class so that it uses a default Logger, and document the arguments.
15
+ * Use fractional time measurement from Redis in Lua code. For our throttle to be precise we cannot really
16
+ limit ourselves to "anchored slots" on the start of a second, and we would be effectively doing that
17
+ with our previous setup.
18
+ * Fix the `redis` gem deprecation warnings when using `exists` - we will now use `exists?` if available.
19
+ * Remove dependency on the `ks` gem as we can use vanilla Structs or classes instead.
20
+
21
+ # 0.6.0
22
+
23
+ * Add `Throttle#status` method for retrieving the status of a throttle without placing any tokens
24
+ or raising any exceptions. This is useful for layered throttles (see the sketch below).
25
+
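For illustration, a status check might look like the following sketch. The `throttle` variable is assumed to be an already configured `Prorate::Throttle`; the `Status` struct shown in `lib/prorate/throttle.rb` further down exposes `throttled?` and `remaining_throttle_seconds`.

```ruby
# Illustrative: inspect a throttle without placing tokens or raising
st = throttle.status
st.throttled?                  #=> true or false
st.remaining_throttle_seconds  #=> 0 when not throttled
```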
26
+ # 0.5.0
27
+
28
+ * Allow setting the number of tokens to add to the bucket in `Throttle#throttle!` - this is useful because
29
+ sometimes a request effectively uses N of some resource in one go, and should thus cause a throttle
30
+ to fire without having to do repeated calls (see the sketch below).
31
+
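A hedged sketch of that call inside a Rails controller (the throttle name, numbers, `Rails.logger` and `request.ip` are illustrative; the constructor signature is the one shown in `lib/prorate/throttle.rb` below):

```ruby
# Illustrative: a request that inserts 3 rows charges 3 tokens in one call
throttle = Prorate::Throttle.new(
  name: "bulk-inserts", limit: 100, period: 60, block_for: 120,
  redis: Redis.new, logger: Rails.logger
)
throttle << request.ip
throttle.throttle!(n_tokens: 3) # raises Prorate::Throttled once the bucket fills up
```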
32
+ # 0.4.0
33
+
34
+ * When raising a `Throttled` exception, add the name of the throttle to it. This is useful when multiple
35
+ throttles are used together and one needs to find out which throttle has fired.
36
+ * Reformat code according to wetransfer_style and make it compulsory on CI
37
+
38
+ # 0.3.0
39
+
40
+ * Replace the Ruby implementation of the throttle with a Lua script which runs within Redis. This allows us
41
+ to do atomic gets+sets very rapidly.
42
+
43
+ # 0.1.0
44
+
45
+ * Initial release of Prorate
data/README.md CHANGED
@@ -1,8 +1,13 @@
1
1
  # Prorate
2
2
 
3
- Provides a low-level time-based throttle. Is mainly meant for situations where using something like Rack::Attack is not very
4
- useful since you need access to more variables. Under the hood, this uses a Lua script that implements the
5
- [Leaky Bucket](https://en.wikipedia.org/wiki/Leaky_bucket) algorithm in a single threaded and race condition safe way.
3
+ Provides a low-level time-based throttle. It is mainly meant for situations where
4
+ using something like Rack::Attack is not very useful since you need access to
5
+ more variables. Under the hood, this uses a Lua script that implements the
6
+ [Leaky Bucket](https://en.wikipedia.org/wiki/Leaky_bucket) algorithm in a single
7
+ threaded and race condition safe way.
8
+
9
+ [![Build Status](https://travis-ci.org/WeTransfer/prorate.svg?branch=master)](https://travis-ci.org/WeTransfer/prorate)
10
+ [![Gem Version](https://badge.fury.io/rb/prorate.svg)](https://badge.fury.io/rb/prorate)
6
11
 
7
12
  ## Installation
8
13
 
@@ -14,29 +19,137 @@ gem 'prorate'
14
19
 
15
20
  And then execute:
16
21
 
17
- $ bundle
22
+ ```shell
23
+ bundle install
24
+ ```
18
25
 
19
26
  Or install it yourself as:
20
27
 
21
- $ gem install prorate
28
+ ```shell
29
+ gem install prorate
30
+ ```
22
31
 
23
32
  ## Usage
24
33
 
34
+ The simplest mode of operation is throttling an endpoint, using the throttler
35
+ before the action happens.
36
+
25
37
  Within your Rails controller:
26
38
 
27
- t = Prorate::Throttle.new(redis: Redis.new, logger: Rails.logger,
28
- name: "throttle-login-email", limit: 20, period: 5.seconds)
29
- # Add all the parameters that function as a discriminator
30
- t << request.ip
31
- t << params.require(:email)
32
- # ...and call the throttle! method
33
- t.throttle! # Will raise a Prorate::Throttled exception if the limit has been reached
39
+ ```ruby
40
+ t = Prorate::Throttle.new(
41
+ redis: Redis.new,
42
+ logger: Rails.logger,
43
+ name: "throttle-login-email",
44
+ limit: 20,
45
+ period: 5.seconds
46
+ )
47
+ # Add all the parameters that function as a discriminator.
48
+ t << request.ip << params.require(:email)
49
+ # ...and call the throttle! method
50
+ t.throttle! # Will raise a Prorate::Throttled exception if the limit has been reached
51
+ #
52
+ # Your regular action happens after this point
53
+ ```
54
+
55
+ To capture that exception, in the controller
56
+
57
+ ```ruby
58
+ rescue_from Prorate::Throttled do |e|
59
+ response.set_header('Retry-After', e.retry_in_seconds.to_s)
60
+ render nothing: true, status: 429
61
+ end
62
+ ```
63
+
64
+ ### Throttling and checking status
65
+
66
+ Finer-grained control can be achieved by combining throttling (see the previous
67
+ step) and - in subsequent calls - checking the status of the throttle before
68
+ invoking the throttle. **When you call `throttle!`, you add tokens to the leaky bucket.**
69
+
70
+ Let's say you have an endpoint that not only needs throttling, but you want to
71
+ ban [credential stuffers](https://en.wikipedia.org/wiki/Credential_stuffing)
72
+ outright. This is a multi-step process:
73
+
74
+ 1. Respond with a 429 if the discriminators of the request would land in an
75
+ already blocking 'credential-stuffing'-throttle
76
+ 1. Run your regular throttling
77
+ 1. Perform your sign in action
78
+ 1. If the sign in was unsuccessful, add the discriminators to the
79
+ 'credential-stuffing'-throttle
80
+
81
+ In your controller that would look like this:
82
+
83
+ ```ruby
84
+ t = Prorate::Throttle.new(
85
+ redis: Redis.new,
86
+ logger: Rails.logger,
87
+ name: "credential-stuffing",
88
+ limit: 20,
89
+ period: 20.minutes
90
+ )
91
+ # Add all the parameters that function as a discriminator.
92
+ t << request.ip
93
+ # And before anything else, check whether it is throttled
94
+ if t.status.throttled?
95
+ response.set_header('Retry-After', t.status.remaining_throttle_seconds.to_s)
96
+ render(nothing: true, status: 429) and return
97
+ end
98
+
99
+ # run your regular throttles for the endpoint
100
+ other_throttles.each(&:throttle!)
101
+ # Perform your sign in logic..
102
+
103
+ user = YourSignInLogic.valid?(
104
+ email: params[:email],
105
+ password: params[:password]
106
+ )
107
+
108
+ # Add the request to the credential stuffing throttle if we didn't succeed
109
+ t.throttle! unless user
110
+
111
+ # the rest of your action
112
+ ```
34
113
 
35
114
  To capture that exception, in the controller
36
115
 
37
- rescue_from Prorate::Throttled do |e|
38
- render nothing: true, status: 429
39
- end
116
+ ```ruby
117
+ rescue_from Prorate::Throttled do |e|
118
+ response.set_header('Retry-After', e.retry_in_seconds.to_s)
119
+ render nothing: true, status: 429
120
+ end
121
+ ```
122
+
123
+ ## Using just the leaky bucket
124
+
125
+ There is also an object for using the heart of Prorate (the leaky bucket) without blocking or exceptions. This is useful
126
+ if you want to implement a more generic rate limiting solution and customise it in a fancier way. The leaky bucket on
127
+ its own provides the following conveniences only:
128
+
129
+ * Tracks the number of tokens added and the number of tokens that have leaked
130
+ * Tracks whether a specific token fillup has overflown the bucket. This is only tracked momentarily if the bucket is limited
131
+
132
+ Level and leak rate are computed and provided as Floats, instead of the Integers used in the Throttle class.
133
+ To use it, employ the `LeakyBucket` object:
134
+
135
+ ```ruby
136
+ # The leak_rate is in tokens per second
137
+ leaky_bucket = Prorate::LeakyBucket.new(redis: Redis.new, redis_key_prefix: "user123", leak_rate: 0.8, bucket_capacity: 2)
138
+ leaky_bucket.state.level #=> will return 0.0
139
+ leaky_bucket.state.full? #=> will return false
140
+ state_after_add = leaky_bucket.fillup(2) #=> returns a BucketState object
141
+ state_after_add.full? #=> will return true
142
+ state_after_add.level #=> will return 2.0
143
+ ```
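As a usage note, a custom limiter can be built on just these two calls. A minimal sketch, with an illustrative key prefix, leak rate and capacity:

```ruby
# Illustrative: charge one token per request and reject once the bucket is full
bucket = Prorate::LeakyBucket.new(
  redis: Redis.new, redis_key_prefix: "api:user-42",
  leak_rate: 0.8, bucket_capacity: 2
)
state = bucket.fillup(1)
if state.full?
  # over the limit - back off or respond with 429
else
  # proceed; state.level tells how close to the limit we are
end
```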
144
+
145
+ ## Why Lua?
146
+
147
+ Prorate implements throttling using the "Leaky Bucket" algorithm, which is described extensively [here](https://github.com/WeTransfer/prorate/blob/master/lib/prorate/throttle.rb). The implementation uses a Lua script, because Lua is the only language available that runs _inside_ Redis. Thanks to the speed of Lua, the script runs fast enough to be applied on every throttle call (see the condensed sketch at the end of this section).
148
+
149
+ Using a Lua script in Prorate helps us achieve the following guarantees:
150
+
151
+ - **The script will run atomically.** The script is evaluated as a single Redis command. This ensures that the commands in the Lua script will never be interleaved with another client: they will always execute together.
152
+ - **Any usage of time will use the Redis time.** Throttling requires a consistent and monotonic _time source_. The only monotonic and consistent time source usable in the context of Prorate is the `TIME` result of Redis itself. We are throttling requests from different machines, which will invariably have clock drift between them; using the Redis server `TIME` keeps them consistent.
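Condensed from `lib/prorate/throttle.rb` further down in this diff, the atomic invocation boils down to a single `EVALSHA` round trip with a `NOSCRIPT` fallback. In this sketch the `redis` connection, key names and throttle parameters are assumed to already exist:

```ruby
require 'digest'

script = File.read("lib/prorate/rate_limit.lua")
script_sha = Digest::SHA1.hexdigest(script)
begin
  # One atomic round trip: leak, fillup and block-check all happen inside Redis
  redis.evalsha(script_sha,
    keys: [bucket_level_key, last_updated_key, block_key],
    argv: [bucket_capacity, leak_rate, block_for, n_tokens])
rescue Redis::CommandError => e
  raise unless e.message.include?("NOSCRIPT")
  redis.script(:load, script) # needs to run only once per Redis server lifetime
  retry
end
```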
40
153
 
41
154
  ## Development
42
155
 
@@ -48,8 +161,6 @@ To install this gem onto your local machine, run `bundle exec rake install`. To
48
161
 
49
162
  Bug reports and pull requests are welcome on GitHub at https://github.com/WeTransfer/prorate.
50
163
 
51
-
52
164
  ## License
53
165
 
54
166
  The gem is available as open source under the terms of the [MIT License](http://opensource.org/licenses/MIT).
55
-
data/Rakefile CHANGED
@@ -1,6 +1,18 @@
1
1
  require "bundler/gem_tasks"
2
2
  require "rspec/core/rake_task"
3
+ require 'rubocop/rake_task'
4
+ require 'yard'
3
5
 
4
- RSpec::Core::RakeTask.new(:spec)
6
+ YARD::Rake::YardocTask.new(:doc) do |t|
7
+ # The dash has to be between the two to "divide" the source files and
8
+ # miscellaneous documentation files that contain no code
9
+ t.files = ['lib/**/*.rb', '-', 'LICENSE.txt', 'CHANGELOG.md']
10
+ end
5
11
 
6
- task :default => :spec
12
+ RSpec::Core::RakeTask.new(:spec) do |spec|
13
+ spec.rspec_opts = ["-c", "--order=rand"]
14
+ spec.pattern = FileList['spec/**/*_spec.rb']
15
+ end
16
+
17
+ RuboCop::RakeTask.new(:rubocop)
18
+ task default: [:spec, :rubocop]
@@ -1,6 +1,4 @@
1
1
  require "prorate/version"
2
- require "ks"
3
- require "logger"
4
2
  require "redis"
5
3
 
6
4
  module Prorate
@@ -0,0 +1,77 @@
1
+ -- Single threaded Leaky Bucket implementation (without blocking).
2
+ -- keys: bucket_level_key, last_updated_key. args: leak_rate, fillup, bucket_capacity. To just verify the state of the bucket a fillup of 0 may be passed.
3
+ -- returns: the level of the bucket as a string-formatted float, and a flag telling whether the bucket is at capacity
4
+
5
+ -- this is required to be able to use TIME and writes; basically it lifts the script into IO
6
+ redis.replicate_commands()
7
+
8
+ -- Redis documentation recommends passing the keys separately so that Redis
9
+ -- can - in the future - verify that they live on the same shard of a cluster, and
10
+ -- raise an error if they are not. As far as can be understood this functionality is not
11
+ -- yet present, but if we can make a little effort to make ourselves more future proof
12
+ -- we should.
13
+ local bucket_level_key = KEYS[1]
14
+ local last_updated_key = KEYS[2]
15
+
16
+ local leak_rate = tonumber(ARGV[1])
17
+ local fillup = tonumber(ARGV[2]) -- How many tokens this call adds to the bucket.
18
+ local bucket_capacity = tonumber(ARGV[3]) -- How many tokens is the bucket allowed to contain
19
+
20
+ -- Compute the key TTL for the bucket. We are interested in how long it takes the bucket
21
+ -- to leak all the way to 0, as this is the time when the values stay relevant. We pad with 1 second
22
+ -- to have a little cushion.
23
+ local key_lifetime = math.ceil((bucket_capacity / leak_rate) + 1)
24
+
25
+ -- Take a timestamp
26
+ local redis_time = redis.call("TIME") -- Array of [seconds, microseconds]
27
+ local now = tonumber(redis_time[1]) + (tonumber(redis_time[2]) / 1000000)
28
+
29
+ -- get current bucket level. The throttle key might not exist yet in which
30
+ -- case we default to 0
31
+ local bucket_level = tonumber(redis.call("GET", bucket_level_key)) or 0
32
+
33
+ -- ...and then perform the leaky bucket fillup/leak. We need to do this also when the bucket has
34
+ -- just been created because the initial fillup to add might be so high that it will
35
+ -- immediately overflow the bucket and trigger the throttle, on the first call.
36
+ local last_updated = tonumber(redis.call("GET", last_updated_key)) or now -- use sensible default of 'now' if the key does not exist
37
+
38
+ -- Subtract the number of tokens leaked since last call
39
+ local dt = now - last_updated
40
+ local new_bucket_level = bucket_level - (leak_rate * dt) + fillup
41
+
42
+ -- ...and then clamp the resulting level to the range 0..bucket_capacity
43
+ new_bucket_level = math.max(0, math.min(bucket_capacity, new_bucket_level))
44
+
45
+ -- Since we return the level as a string-formatted floating point number, some
46
+ -- precision is lost in the formatting, so the value alone cannot tell whether the bucket was actually full.
47
+ -- This bit of information is useful to preserve, so we return it explicitly.
48
+ local at_capacity = 0
49
+ if new_bucket_level == bucket_capacity then
50
+ at_capacity = 1
51
+ end
52
+
53
+ -- If both the initial level was 0, and the level after putting tokens in is 0 we
54
+ -- can avoid setting keys in Redis at all as this was only a level check.
55
+ if new_bucket_level == 0 and bucket_level == 0 then
56
+ return {"0.0", at_capacity}
57
+ end
58
+
59
+ -- Save the new bucket level
60
+ redis.call("SETEX", bucket_level_key, key_lifetime, new_bucket_level)
61
+
62
+ -- Record when we updated the bucket so that the amount of tokens leaked
63
+ -- can be correctly determined on the next invocation
64
+ redis.call("SETEX", last_updated_key, key_lifetime, now)
65
+
66
+ -- Most Redis adapters when used with the Lua interface truncate floats
67
+ -- to integers (at least in Python that is documented to be the case in
68
+ -- the Redis ebook here
69
+ -- https://redislabs.com/ebook/part-3-next-steps/chapter-11-scripting-redis-with-lua/11-1-adding-functionality-without-writing-c
70
+ -- We need access to the bucket level as a float value since our leak rate might as well be floating point, and to achieve that
71
+ -- we can go two ways. We can turn the float into a Lua string, and then parse it on the other side, or we can convert it to
72
+ -- a tuple of two integer values - one for the integer component and one for fraction.
73
+ -- Now, the unpleasant aspect is that when we do this we will lose precision - the number is not going to be
74
+ -- exactly equal to capacity, thus we lose the bit of information which tells us whether we filled up the bucket or not.
75
+ -- Also, the only moment we can register whether the bucket is above capacity is now - in this script - since
76
+ -- by the next call some tokens will have leaked.
77
+ return {string.format("%.9f", new_bucket_level), at_capacity}
@@ -0,0 +1,134 @@
1
+ module Prorate
2
+
3
+ # This offers just the leaky bucket implementation with fill control, but without the timed lock.
4
+ # It does not raise any exceptions, it just tracks the state of a leaky bucket in Redis.
5
+ #
6
+ # Important differences from the more full-featured Throttle class are:
7
+ #
8
+ # * No logging (as most meaningful code lives in Lua anyway)
9
+ # * No timed block - if you need to keep track of timed blocking it can be done externally
10
+ # * Leak rate is specified directly in tokens per second, instead of specifying the block period.
11
+ # * The bucket level is stored and returned as a Float which allows for finer-grained measurement,
12
+ # but more importantly - makes testing from the outside easier.
13
+ #
14
+ # It does have a few downsides compared to the Throttle though
15
+ #
16
+ # * Bucket is only full momentarily. On subsequent calls some tokens will leak already, so you either
17
+ # need to do delta checks on the value or rely on putting the token into the bucket.
18
+ class LeakyBucket
19
+ LUA_SCRIPT_CODE = File.read(File.join(__dir__, "leaky_bucket.lua"))
20
+ LUA_SCRIPT_HASH = Digest::SHA1.hexdigest(LUA_SCRIPT_CODE)
21
+
22
+ class BucketState < Struct.new(:level, :full)
23
+ # Returns the level of the bucket after the operation on the LeakyBucket
24
+ # object has taken place. There is a guarantee that no tokens have leaked
25
+ # from the bucket between the operation and the freezing of the BucketState
26
+ # struct.
27
+ #
28
+ # @!attribute [r] level
29
+ # @return [Float]
30
+
31
+ # Tells whether the bucket was detected to be full when the operation on
32
+ # the LeakyBucket was performed. There is a guarantee that no tokens have leaked
33
+ # from the bucket between the operation and the freezing of the BucketState
34
+ # struct.
35
+ #
36
+ # @!attribute [r] full
37
+ # @return [Boolean]
38
+
39
+ alias_method :full?, :full
40
+
41
+ # Returns the bucket level of the bucket state as a Float
42
+ #
43
+ # @return [Float]
44
+ def to_f
45
+ level.to_f
46
+ end
47
+
48
+ # Returns the bucket level of the bucket state rounded to an Integer
49
+ #
50
+ # @return [Integer]
51
+ def to_i
52
+ level.to_i
53
+ end
54
+ end
55
+
56
+ # Creates a new LeakyBucket. The object controls 2 keys in Redis: one
57
+ # for the last access time, and one for the contents of the key.
58
+ #
59
+ # @param redis_key_prefix[String] the prefix that is going to be used for keys.
60
+ # If your bucket is specific to a user, a browser or an IP address you need to mix in
61
+ # those values into the key prefix as appropriate.
62
+ # @param leak_rate[Float] the leak rate of the bucket, in tokens per second
63
+ # @param redis[Redis,#with] a Redis connection or a ConnectionPool instance
64
+ # if you are using the connection_pool gem. With a connection pool Prorate will
65
+ # checkout a connection using `#with` and check it in when it's done.
66
+ # @param bucket_capacity[Numeric] how many tokens is the bucket capped at.
67
+ # Filling up the bucket using `fillup()` will add to that number, but
68
+ # the bucket contents will then be capped at this value. So with
69
+ # bucket_capacity set to 12 and a `fillup(14)` the bucket will reach the level
70
+ # of 12, and will then immediately start leaking again.
71
+ def initialize(redis_key_prefix:, leak_rate:, redis:, bucket_capacity:)
72
+ @redis_key_prefix = redis_key_prefix
73
+ @redis = redis.respond_to?(:with) ? redis : NullPool.new(redis)
74
+ @leak_rate = leak_rate.to_f
75
+ @capacity = bucket_capacity.to_f
76
+ end
77
+
78
+ # Places `n` tokens in the bucket.
79
+ #
80
+ # @return [BucketState] the state of the bucket after the operation
81
+ def fillup(n_tokens)
82
+ run_lua_bucket_script(n_tokens.to_f)
83
+ end
84
+
85
+ # Returns the current state of the bucket, containing the level and whether the bucket is full
86
+ #
87
+ # @return [BucketState] the state of the bucket after the operation
88
+ def state
89
+ run_lua_bucket_script(0)
90
+ end
91
+
92
+ # Returns the Redis key for the leaky bucket itself
93
+ # Note that the key is not guaranteed to contain a value if the bucket has not been filled
94
+ # up recently.
95
+ #
96
+ # @return [String]
97
+ def leaky_bucket_key
98
+ "#{@redis_key_prefix}.leaky_bucket.bucket_level"
99
+ end
100
+
101
+ # Returns the Redis key under which the last updated time of the bucket gets stored.
102
+ # Note that the key is not guaranteed to contain a value if the bucket has not been filled
103
+ # up recently.
104
+ #
105
+ # @return [String]
106
+ def last_updated_key
107
+ "#{@redis_key_prefix}.leaky_bucket.last_updated"
108
+ end
109
+
110
+ private
111
+
112
+ def run_lua_bucket_script(n_tokens)
113
+ @redis.with do |r|
114
+ begin
115
+ # The script returns a tuple of "whole tokens, microtokens"
116
+ # to be able to smuggle the float across (similar to Redis TIME command)
117
+ level_str, is_full_int = r.evalsha(
118
+ LUA_SCRIPT_HASH,
119
+ keys: [leaky_bucket_key, last_updated_key], argv: [@leak_rate, n_tokens, @capacity])
120
+ BucketState.new(level_str.to_f, is_full_int == 1)
121
+ rescue Redis::CommandError => e
122
+ if e.message.include? "NOSCRIPT"
123
+ # The Redis server has never seen this script before. Needs to run only once in the entire lifetime
124
+ # of the Redis server, until the script changes - in which case it will be loaded under a different SHA
125
+ r.script(:load, LUA_SCRIPT_CODE)
126
+ retry
127
+ else
128
+ raise e
129
+ end
130
+ end
131
+ end
132
+ end
133
+ end
134
+ end
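Since the constructor above accepts anything that responds to `#with`, a connection pool can be passed in directly. A small sketch using the `connection_pool` gem, with an illustrative pool size and key prefix:

```ruby
require 'connection_pool'
require 'redis'
require 'prorate'

# Prorate checks a connection out of the pool with #with for every Lua call
redis_pool = ConnectionPool.new(size: 5, timeout: 3) { Redis.new }
bucket = Prorate::LeakyBucket.new(
  redis: redis_pool, redis_key_prefix: "uploads:user-42",
  leak_rate: 2.0, bucket_capacity: 10
)
bucket.fillup(1).level #=> a Float, e.g. 1.0
```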
@@ -1,10 +1,15 @@
1
1
  module Prorate
2
2
  module NullLogger
3
3
  def self.debug(*); end
4
+
4
5
  def self.info(*); end
6
+
5
7
  def self.warn(*); end
8
+
6
9
  def self.error(*); end
10
+
7
11
  def self.fatal(*); end
12
+
8
13
  def self.unknown(*); end
9
14
  end
10
15
  end
@@ -1,5 +1,7 @@
1
1
  module Prorate
2
2
  class NullPool < Struct.new(:conn)
3
- def with; yield conn; end
3
+ def with
4
+ yield conn
5
+ end
4
6
  end
5
7
  end
@@ -1,5 +1,5 @@
1
1
  -- Single threaded Leaky Bucket implementation.
2
- -- args: key_base, leak_rate, max_bucket_capacity, block_duration
2
+ -- args: key_base, leak_rate, max_bucket_capacity, block_duration, n_tokens
3
3
  -- returns: an array of two integers, the first of which indicates the remaining block time.
4
4
  -- if the block time is nonzero, the second integer is always zero. If the block time is zero,
5
5
  -- the second integer indicates the level of the bucket
@@ -8,14 +8,25 @@
8
8
  redis.replicate_commands()
9
9
  -- make some nicer looking variable names:
10
10
  local retval = nil
11
- local bucket_level_key = ARGV[1] .. ".bucket_level"
12
- local last_updated_key = ARGV[1] .. ".last_updated"
13
- local block_key = ARGV[1] .. ".block"
14
- local max_bucket_capacity = tonumber(ARGV[2])
15
- local leak_rate = tonumber(ARGV[3])
16
- local block_duration = tonumber(ARGV[4])
17
- local now = tonumber(redis.call("TIME")[1]) --unix timestamp, will be required in all paths
18
11
 
12
+ -- Redis documentation recommends passing the keys separately so that Redis
13
+ -- can - in the future - verify that they live on the same shard of a cluster, and
14
+ -- raise an error if they are not. As far as can be understood this functionality is not
15
+ -- yet present, but if we can make a little effort to make ourselves more future proof
16
+ -- we should.
17
+ local bucket_level_key = KEYS[1]
18
+ local last_updated_key = KEYS[2]
19
+ local block_key = KEYS[3]
20
+
21
+ -- and the config variables
22
+ local max_bucket_capacity = tonumber(ARGV[1])
23
+ local leak_rate = tonumber(ARGV[2])
24
+ local block_duration = tonumber(ARGV[3])
25
+ local n_tokens = tonumber(ARGV[4]) -- How many tokens this call adds to the bucket. Defaults to 1
26
+
27
+ -- Take the Redis timestamp
28
+ local redis_time = redis.call("TIME") -- Array of [seconds, microseconds]
29
+ local now = tonumber(redis_time[1]) + (tonumber(redis_time[2]) / 1000000)
19
30
  local key_lifetime = math.ceil(max_bucket_capacity / leak_rate)
20
31
 
21
32
  local blocked_until = redis.call("GET", block_key)
@@ -23,28 +34,29 @@ if blocked_until then
23
34
  return {(tonumber(blocked_until) - now), 0}
24
35
  end
25
36
 
26
- -- get current bucket level
27
- local bucket_level = tonumber(redis.call("GET", bucket_level_key))
28
- if not bucket_level then
29
- -- this throttle/identifier combo does not exist yet, so much calculation can be skipped
30
- redis.call("SETEX", bucket_level_key, key_lifetime, 1) -- set bucket with initial value
31
- retval = {0, 1}
37
+ -- get current bucket level. The throttle key might not exist yet in which
38
+ -- case we default to 0
39
+ local bucket_level = tonumber(redis.call("GET", bucket_level_key)) or 0
40
+
41
+ -- ...and then perform the leaky bucket fillup/leak. We need to do this also when the bucket has
42
+ -- just been created because the initial n_tokens to add might be so high that it will
43
+ -- immediately overflow the bucket and trigger the throttle, on the first call.
44
+ local last_updated = tonumber(redis.call("GET", last_updated_key)) or now -- use sensible default of 'now' if the key does not exist
45
+ local new_bucket_level = math.max(0, bucket_level - (leak_rate * (now - last_updated)))
46
+
47
+ if (new_bucket_level + n_tokens) <= max_bucket_capacity then
48
+ new_bucket_level = math.max(0, new_bucket_level + n_tokens)
49
+ retval = {0, math.ceil(new_bucket_level)}
32
50
  else
33
- -- if it already exists, do the leaky bucket thing
34
- local last_updated = tonumber(redis.call("GET", last_updated_key)) or now -- use sensible default of 'now' if the key does not exist
35
- local new_bucket_level = math.max(0, bucket_level - (leak_rate * (now - last_updated)))
36
-
37
- if (new_bucket_level + 1) <= max_bucket_capacity then
38
- new_bucket_level = new_bucket_level + 1
39
- retval = {0, math.ceil(new_bucket_level)}
40
- else
41
- redis.call("SETEX", block_key, block_duration, now + block_duration)
42
- retval = {block_duration, 0}
43
- end
44
- redis.call("SETEX", bucket_level_key, key_lifetime, new_bucket_level) --still needs to be saved
51
+ redis.call("SETEX", block_key, block_duration, now + block_duration)
52
+ retval = {block_duration, 0}
45
53
  end
46
54
 
47
- -- update last_updated for this bucket, required in all branches
55
+ -- Save the new bucket level
56
+ redis.call("SETEX", bucket_level_key, key_lifetime, new_bucket_level)
57
+
58
+ -- Record when we updated the bucket so that the amount of tokens leaked
59
+ -- can be correctly determined on the next invocation
48
60
  redis.call("SETEX", last_updated_key, key_lifetime, now)
49
61
 
50
62
  return retval
@@ -1,70 +1,165 @@
1
1
  require 'digest'
2
2
 
3
3
  module Prorate
4
- class Throttled < StandardError
5
- attr_reader :retry_in_seconds
6
- def initialize(try_again_in)
7
- @retry_in_seconds = try_again_in
8
- super("Throttled, please lower your temper and try again in #{retry_in_seconds} seconds")
9
- end
4
+ class MisconfiguredThrottle < StandardError
10
5
  end
11
6
 
12
- class ScriptHashMismatch < StandardError
13
- end
7
+ class Throttle
8
+ LUA_SCRIPT_CODE = File.read(File.join(__dir__, "rate_limit.lua"))
9
+ LUA_SCRIPT_HASH = Digest::SHA1.hexdigest(LUA_SCRIPT_CODE)
14
10
 
15
- class MisconfiguredThrottle < StandardError
16
- end
11
+ attr_reader :name, :limit, :period, :block_for, :redis, :logger
17
12
 
18
- class Throttle < Ks.strict(:name, :limit, :period, :block_for, :redis, :logger)
13
+ def initialize(name:, limit:, period:, block_for:, redis:, logger: Prorate::NullLogger)
14
+ @name = name.to_s
15
+ @discriminators = [name.to_s]
16
+ @redis = redis.respond_to?(:with) ? redis : NullPool.new(redis)
17
+ @logger = logger
18
+ @block_for = block_for
19
19
 
20
- def self.get_script_hash
21
- script_filepath = File.join(__dir__,"rate_limit.lua")
22
- script = File.read(script_filepath)
23
- Digest::SHA1.hexdigest(script)
24
- end
20
+ raise MisconfiguredThrottle if (period <= 0) || (limit <= 0)
25
21
 
26
- CURRENT_SCRIPT_HASH = get_script_hash
22
+ # Do not do type conversions here since we want to allow the caller to read
23
+ # those values back later
24
+ # (API contract which the previous implementation of Throttle already supported)
25
+ @limit = limit
26
+ @period = period
27
27
 
28
- def initialize(*)
29
- super
30
- @discriminators = [name.to_s]
31
- self.redis = NullPool.new(redis) unless redis.respond_to?(:with)
32
- raise MisconfiguredThrottle if ((period <= 0) || (limit <= 0))
33
28
  @leak_rate = limit.to_f / period # tokens per second;
34
29
  end
35
-
30
+
31
+ # Add a value that will be used to distinguish this throttle from others.
32
+ # It has to be something user- or connection-specific, and multiple
33
+ # discriminators can be combined:
34
+ #
35
+ # throttle << ip_address << user_agent_fingerprint
36
+ #
37
+ # @param discriminator[Object] a Ruby object that can be marshaled
38
+ # in an equivalent way between requests, using `Marshal.dump`
36
39
  def <<(discriminator)
37
40
  @discriminators << discriminator
38
41
  end
39
-
40
- def throttle!
42
+
43
+ # Applies the throttle and raises a {Throttled} exception if it has been triggered
44
+ #
45
+ # Accepts an optional number of tokens to put in the bucket (default is 1).
46
+ # The effect of `n_tokens:` set to 0 is a "ping".
47
+ # It makes sure the throttle keys in Redis get created and adjusts the
48
+ # last invoked time of the leaky bucket. Can be used when a throttle
49
+ # is applied in a "shadow" fashion. For example, imagine you
50
+ # have a cascade of throttles with the following block times:
51
+ #
52
+ # Throttle A: [-------]
53
+ # Throttle B: [----------]
54
+ #
55
+ # You apply Throttle A: and it fires, but when that happens you also
56
+ # want to enable a throttle that is applied to "repeat offenders" only -
57
+ # - for instance ones that probe for tokens and/or passwords.
58
+ #
59
+ # Throttle C: [-------------------------------]
60
+ #
61
+ # If your "Throttle A" fires, you can trigger Throttle C
62
+ #
63
+ # Throttle A: [-----|-]
64
+ # Throttle C: [-----|-------------------------]
65
+ #
66
+ # because you know that Throttle A has fired and thus Throttle C comes
67
+ # into effect. What you want to do, however, is to fire Throttle C
68
+ # even though Throttle A: would have unlatched, which would create this
69
+ # call sequence:
70
+ #
71
+ # Throttle A: [-------] *(A not triggered)
72
+ # Throttle C: [------------|------------------]
73
+ #
74
+ # To achieve that you can keep Throttle C alive using `throttle!(n_tokens: 0)`,
75
+ # on every check that touches Throttle A and/or Throttle C. It keeps the leaky bucket
76
+ # updated but does not add any tokens to it:
77
+ #
78
+ # Throttle A: [------] *(A not triggered since block period has ended)
79
+ # Throttle C: [-----------|(ping)------------------] C is still blocking
80
+ #
81
+ # So you can effectively "keep a throttle alive" without ever triggering it,
82
+ # or keep it alive in combination with other throttles.
83
+ #
84
+ # @param n_tokens[Integer] the number of tokens to put in the bucket. If you are
85
+ # using Prorate for rate limiting, and a single request is adding N objects to your
86
+ # database for example, you can "top up" the bucket with a set number of tokens
87
+ # with an arbitrary ratio - like 1 token per inserted row. Once the bucket fills up
88
+ # the Throttled exception is going to be raised. Defaults to 1.
89
+ def throttle!(n_tokens: 1)
90
+ @logger.debug { "Applying throttle counter %s" % @name }
91
+ remaining_block_time, bucket_level = run_lua_throttler(
92
+ identifier: identifier,
93
+ bucket_capacity: @limit,
94
+ leak_rate: @leak_rate,
95
+ block_for: @block_for,
96
+ n_tokens: n_tokens)
97
+
98
+ if remaining_block_time > 0
99
+ @logger.warn do
100
+ "Throttle %s exceeded limit of %d in %d seconds and is blocked for the next %d seconds" % [@name, @limit, @period, remaining_block_time]
101
+ end
102
+ raise ::Prorate::Throttled.new(@name, remaining_block_time)
103
+ end
104
+
105
+ @limit - bucket_level # Return how many calls remain
106
+ end
107
+
108
+ def status
109
+ redis_block_key = "#{identifier}.block"
110
+ @redis.with do |r|
111
+ is_blocked = redis_key_exists?(r, redis_block_key)
112
+ if is_blocked
113
+ remaining_seconds = r.get(redis_block_key).to_i - Time.now.to_i
114
+ Status.new(_is_throttled = true, remaining_seconds)
115
+ else
116
+ remaining_seconds = 0
117
+ Status.new(_is_throttled = false, remaining_seconds)
118
+ end
119
+ end
120
+ end
121
+
122
+ private
123
+
124
+ def identifier
41
125
  discriminator = Digest::SHA1.hexdigest(Marshal.dump(@discriminators))
42
- identifier = [name, discriminator].join(':')
43
-
44
- redis.with do |r|
45
- logger.info { "Applying throttle counter %s" % name }
46
- remaining_block_time, bucket_level = run_lua_throttler(redis: r, identifier: identifier, bucket_capacity: limit, leak_rate: @leak_rate, block_for: block_for)
47
-
48
- if remaining_block_time > 0
49
- logger.warn { "Throttle %s exceeded limit of %d in %d seconds and is blocked for the next %d seconds" % [name, limit, period, remaining_block_time] }
50
- raise Throttled.new(remaining_block_time)
126
+ "#{@name}:#{discriminator}"
127
+ end
128
+
129
+ # redis-rb 4.2 started printing a warning for every single-argument use of `#exists`, because
130
+ # they intend to break compatibility in a future version (to return an integer instead of a
131
+ # boolean). The old behavior (returning a boolean) is available using the new `exists?` method.
132
+ def redis_key_exists?(redis, key)
133
+ return redis.exists?(key) if redis.respond_to?(:exists?)
134
+ redis.exists(key)
135
+ end
136
+
137
+ def run_lua_throttler(identifier:, bucket_capacity:, leak_rate:, block_for:, n_tokens:)
138
+ # Computing the identifier is somewhat involved so we should avoid doing it too often
139
+ id = identifier
140
+ bucket_level_key = "#{id}.bucket_level"
141
+ last_updated_key = "#{id}.last_updated"
142
+ block_key = "#{id}.block"
143
+
144
+ @redis.with do |redis|
145
+ begin
146
+ redis.evalsha(LUA_SCRIPT_HASH, keys: [bucket_level_key, last_updated_key, block_key], argv: [bucket_capacity, leak_rate, block_for, n_tokens])
147
+ rescue Redis::CommandError => e
148
+ if e.message.include? "NOSCRIPT"
149
+ # The Redis server has never seen this script before. Needs to run only once in the entire lifetime
150
+ # of the Redis server, until the script changes - in which case it will be loaded under a different SHA
151
+ redis.script(:load, LUA_SCRIPT_CODE)
152
+ retry
153
+ else
154
+ raise e
155
+ end
51
156
  end
52
- available_calls = limit - bucket_level
53
157
  end
54
158
  end
55
159
 
56
- def run_lua_throttler(redis: , identifier: , bucket_capacity: , leak_rate: , block_for: )
57
- redis.evalsha(CURRENT_SCRIPT_HASH, [], [identifier, bucket_capacity, leak_rate, block_for])
58
- rescue Redis::CommandError => e
59
- if e.message.include? "NOSCRIPT"
60
- # The Redis server has never seen this script before. Needs to run only once in the entire lifetime of the Redis server (unless the script changes)
61
- script_filepath = File.join(__dir__,"rate_limit.lua")
62
- script = File.read(script_filepath)
63
- raise ScriptHashMismatch if Digest::SHA1.hexdigest(script) != CURRENT_SCRIPT_HASH
64
- redis.script(:load, script)
65
- redis.evalsha(CURRENT_SCRIPT_HASH, [], [identifier, bucket_capacity, leak_rate, block_for])
66
- else
67
- raise e
160
+ class Status < Struct.new(:is_throttled, :remaining_throttle_seconds)
161
+ def throttled?
162
+ is_throttled
68
163
  end
69
164
  end
70
165
  end
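The "keep a throttle alive" pattern documented in `#throttle!` above could be sketched as follows. The throttle names, limits and the `redis`, `logger` and `request` variables are illustrative assumptions:

```ruby
# Illustrative layered throttles: A is the regular limit, C targets repeat offenders
throttle_a = Prorate::Throttle.new(name: "logins", limit: 20, period: 60,
  block_for: 60, redis: redis, logger: logger)
throttle_c = Prorate::Throttle.new(name: "repeat-offenders", limit: 5, period: 3600,
  block_for: 3600, redis: redis, logger: logger)
throttle_a << request.ip
throttle_c << request.ip

begin
  throttle_c.throttle!(n_tokens: 0) # "ping": refreshes C, and raises if C is already blocking
  throttle_a.throttle!
rescue Prorate::Throttled => e
  # Escalate into the longer-lived throttle only when the regular one fired
  throttle_c.throttle! if e.throttle_name == "logins"
  raise
end
```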
@@ -0,0 +1,20 @@
1
+ # The Throttled exception gets raised when a throttle is triggered.
2
+ #
3
+ # The exception carries additional attributes which can be used for
4
+ # error tracking and for creating a correct Retry-After HTTP header for
5
+ # a 429 response
6
+ class Prorate::Throttled < StandardError
7
+ # @attr [String] the name of the throttle (like "shpongs-per-ip").
8
+ # Can be used to detect which throttle has fired when multiple
9
+ # throttles are used within the same block.
10
+ attr_reader :throttle_name
11
+
12
+ # @attr [Integer] for how long the caller will be blocked, in seconds.
13
+ attr_reader :retry_in_seconds
14
+
15
+ def initialize(throttle_name, try_again_in)
16
+ @throttle_name = throttle_name
17
+ @retry_in_seconds = try_again_in
18
+ super("Throttled, please lower your temper and try again in #{retry_in_seconds} seconds")
19
+ end
20
+ end
@@ -1,3 +1,3 @@
1
1
  module Prorate
2
- VERSION = "0.3.0"
2
+ VERSION = "0.7.1"
3
3
  end
@@ -1,4 +1,4 @@
1
- # coding: utf-8
1
+
2
2
  lib = File.expand_path('../lib', __FILE__)
3
3
  $LOAD_PATH.unshift(lib) unless $LOAD_PATH.include?(lib)
4
4
  require 'prorate/version'
@@ -27,10 +27,12 @@ Gem::Specification.new do |spec|
27
27
  spec.executables = spec.files.grep(%r{^exe/}) { |f| File.basename(f) }
28
28
  spec.require_paths = ["lib"]
29
29
 
30
- spec.add_dependency "ks"
31
30
  spec.add_dependency "redis", ">= 2"
32
- spec.add_development_dependency "connection_pool", "~> 1"
33
- spec.add_development_dependency "bundler", "~> 1.12"
34
- spec.add_development_dependency "rake", "~> 10.0"
31
+ spec.add_development_dependency "connection_pool", "~> 2"
32
+ spec.add_development_dependency "bundler"
33
+ spec.add_development_dependency "rake", "~> 13.0"
35
34
  spec.add_development_dependency "rspec", "~> 3.0"
35
+ spec.add_development_dependency 'wetransfer_style', '0.6.5'
36
+ spec.add_development_dependency 'yard', '~> 0.9'
37
+ spec.add_development_dependency 'pry', '~> 0.13.1'
36
38
  end
@@ -6,7 +6,7 @@ require 'redis'
6
6
  require 'securerandom'
7
7
 
8
8
  def average_ms(ary)
9
- ary.map{|x| x*1000}.inject(0,&:+) / ary.length
9
+ ary.map { |x| x * 1000 }.inject(0, &:+) / ary.length
10
10
  end
11
11
 
12
12
  r = Redis.new
@@ -6,7 +6,7 @@ require 'redis'
6
6
  require 'securerandom'
7
7
 
8
8
  def average_ms(ary)
9
- ary.map{|x| x*1000}.inject(0,&:+) / ary.length
9
+ ary.map { |x| x * 1000 }.inject(0, &:+) / ary.length
10
10
  end
11
11
 
12
12
  r = Redis.new
@@ -31,24 +31,23 @@ end
31
31
  puts average_ms times
32
32
  def key_for_ts(ts)
33
33
  "th:%s:%d" % [@id, ts]
34
- end
34
+ end
35
35
 
36
36
  times = []
37
37
  15.times do
38
- id = SecureRandom.hex(10)
39
38
  sec, _ = r.time # Use Redis time instead of the system timestamp, so that all the nodes are consistent
40
39
  ts = sec.to_i # All Redis results are strings
41
40
  k = key_for_ts(ts)
42
41
  times << Benchmark.realtime {
43
42
  r.multi do |txn|
44
- # Increment the counter
43
+ # Increment the counter
45
44
  txn.incr(k)
46
45
  txn.expire(k, 120)
47
46
 
48
47
  span_start = ts - 120
49
48
  span_end = ts + 1
50
- possible_keys = (span_start..span_end).map{|prev_time| key_for_ts(prev_time) }
51
-
49
+ possible_keys = (span_start..span_end).map { |prev_time| key_for_ts(prev_time) }
50
+
52
51
  # Fetch all the counter values within the time window. Despite the fact that this
53
52
  # will return thousands of elements for large sliding window sizes, the values are
54
53
  # small and an MGET in Redis is pretty cheap, so perf should stay well within limits.
@@ -58,4 +57,3 @@ times = []
58
57
  end
59
58
 
60
59
  puts average_ms times
61
-
@@ -2,5 +2,5 @@
2
2
  require 'redis'
3
3
  r = Redis.new
4
4
  script = File.read('../lib/prorate/rate_limit.lua')
5
- sha = r.script(:load,script)
5
+ sha = r.script(:load, script)
6
6
  puts sha
metadata CHANGED
@@ -1,99 +1,127 @@
1
1
  --- !ruby/object:Gem::Specification
2
2
  name: prorate
3
3
  version: !ruby/object:Gem::Version
4
- version: 0.3.0
4
+ version: 0.7.1
5
5
  platform: ruby
6
6
  authors:
7
7
  - Julik Tarkhanov
8
8
  autorequire:
9
9
  bindir: exe
10
10
  cert_chain: []
11
- date: 2017-07-18 00:00:00.000000000 Z
11
+ date: 2020-07-20 00:00:00.000000000 Z
12
12
  dependencies:
13
13
  - !ruby/object:Gem::Dependency
14
- name: ks
14
+ name: redis
15
15
  requirement: !ruby/object:Gem::Requirement
16
16
  requirements:
17
17
  - - ">="
18
18
  - !ruby/object:Gem::Version
19
- version: '0'
19
+ version: '2'
20
20
  type: :runtime
21
21
  prerelease: false
22
22
  version_requirements: !ruby/object:Gem::Requirement
23
23
  requirements:
24
24
  - - ">="
25
25
  - !ruby/object:Gem::Version
26
- version: '0'
26
+ version: '2'
27
27
  - !ruby/object:Gem::Dependency
28
- name: redis
28
+ name: connection_pool
29
29
  requirement: !ruby/object:Gem::Requirement
30
30
  requirements:
31
- - - ">="
31
+ - - "~>"
32
32
  - !ruby/object:Gem::Version
33
33
  version: '2'
34
- type: :runtime
34
+ type: :development
35
35
  prerelease: false
36
36
  version_requirements: !ruby/object:Gem::Requirement
37
37
  requirements:
38
- - - ">="
38
+ - - "~>"
39
39
  - !ruby/object:Gem::Version
40
40
  version: '2'
41
41
  - !ruby/object:Gem::Dependency
42
- name: connection_pool
42
+ name: bundler
43
+ requirement: !ruby/object:Gem::Requirement
44
+ requirements:
45
+ - - ">="
46
+ - !ruby/object:Gem::Version
47
+ version: '0'
48
+ type: :development
49
+ prerelease: false
50
+ version_requirements: !ruby/object:Gem::Requirement
51
+ requirements:
52
+ - - ">="
53
+ - !ruby/object:Gem::Version
54
+ version: '0'
55
+ - !ruby/object:Gem::Dependency
56
+ name: rake
43
57
  requirement: !ruby/object:Gem::Requirement
44
58
  requirements:
45
59
  - - "~>"
46
60
  - !ruby/object:Gem::Version
47
- version: '1'
61
+ version: '13.0'
48
62
  type: :development
49
63
  prerelease: false
50
64
  version_requirements: !ruby/object:Gem::Requirement
51
65
  requirements:
52
66
  - - "~>"
53
67
  - !ruby/object:Gem::Version
54
- version: '1'
68
+ version: '13.0'
55
69
  - !ruby/object:Gem::Dependency
56
- name: bundler
70
+ name: rspec
57
71
  requirement: !ruby/object:Gem::Requirement
58
72
  requirements:
59
73
  - - "~>"
60
74
  - !ruby/object:Gem::Version
61
- version: '1.12'
75
+ version: '3.0'
62
76
  type: :development
63
77
  prerelease: false
64
78
  version_requirements: !ruby/object:Gem::Requirement
65
79
  requirements:
66
80
  - - "~>"
67
81
  - !ruby/object:Gem::Version
68
- version: '1.12'
82
+ version: '3.0'
69
83
  - !ruby/object:Gem::Dependency
70
- name: rake
84
+ name: wetransfer_style
85
+ requirement: !ruby/object:Gem::Requirement
86
+ requirements:
87
+ - - '='
88
+ - !ruby/object:Gem::Version
89
+ version: 0.6.5
90
+ type: :development
91
+ prerelease: false
92
+ version_requirements: !ruby/object:Gem::Requirement
93
+ requirements:
94
+ - - '='
95
+ - !ruby/object:Gem::Version
96
+ version: 0.6.5
97
+ - !ruby/object:Gem::Dependency
98
+ name: yard
71
99
  requirement: !ruby/object:Gem::Requirement
72
100
  requirements:
73
101
  - - "~>"
74
102
  - !ruby/object:Gem::Version
75
- version: '10.0'
103
+ version: '0.9'
76
104
  type: :development
77
105
  prerelease: false
78
106
  version_requirements: !ruby/object:Gem::Requirement
79
107
  requirements:
80
108
  - - "~>"
81
109
  - !ruby/object:Gem::Version
82
- version: '10.0'
110
+ version: '0.9'
83
111
  - !ruby/object:Gem::Dependency
84
- name: rspec
112
+ name: pry
85
113
  requirement: !ruby/object:Gem::Requirement
86
114
  requirements:
87
115
  - - "~>"
88
116
  - !ruby/object:Gem::Version
89
- version: '3.0'
117
+ version: 0.13.1
90
118
  type: :development
91
119
  prerelease: false
92
120
  version_requirements: !ruby/object:Gem::Requirement
93
121
  requirements:
94
122
  - - "~>"
95
123
  - !ruby/object:Gem::Version
96
- version: '3.0'
124
+ version: 0.13.1
97
125
  description: Can be used to implement all kinds of throttles
98
126
  email:
99
127
  - me@julik.nl
@@ -103,7 +131,9 @@ extra_rdoc_files: []
103
131
  files:
104
132
  - ".gitignore"
105
133
  - ".rspec"
134
+ - ".rubocop.yml"
106
135
  - ".travis.yml"
136
+ - CHANGELOG.md
107
137
  - Gemfile
108
138
  - LICENSE.txt
109
139
  - README.md
@@ -111,10 +141,13 @@ files:
111
141
  - bin/console
112
142
  - bin/setup
113
143
  - lib/prorate.rb
144
+ - lib/prorate/leaky_bucket.lua
145
+ - lib/prorate/leaky_bucket.rb
114
146
  - lib/prorate/null_logger.rb
115
147
  - lib/prorate/null_pool.rb
116
148
  - lib/prorate/rate_limit.lua
117
149
  - lib/prorate/throttle.rb
150
+ - lib/prorate/throttled.rb
118
151
  - lib/prorate/version.rb
119
152
  - prorate.gemspec
120
153
  - scripts/bm.rb
@@ -140,8 +173,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
140
173
  - !ruby/object:Gem::Version
141
174
  version: '0'
142
175
  requirements: []
143
- rubyforge_project:
144
- rubygems_version: 2.4.5.1
176
+ rubygems_version: 3.0.6
145
177
  signing_key:
146
178
  specification_version: 4
147
179
  summary: Time-restricted rate limiter using Redis