rack-throttle 0.2.0 → 0.3.0
- data/AUTHORS +1 -0
- data/README +58 -15
- data/VERSION +1 -1
- data/lib/rack/throttle.rb +6 -5
- data/lib/rack/throttle/daily.rb +35 -3
- data/lib/rack/throttle/hourly.rb +35 -3
- data/lib/rack/throttle/limiter.rb +3 -3
- data/lib/rack/throttle/time_window.rb +21 -0
- data/lib/rack/throttle/version.rb +1 -1
- metadata +6 -4
data/AUTHORS
CHANGED
data/README
CHANGED
@@ -11,8 +11,10 @@ Sinatra.
 Features
 --------

-* Throttles a Rack application by enforcing a minimum interval
-
+* Throttles a Rack application by enforcing a minimum time interval between
+  subsequent HTTP requests from a particular client, as well as by defining
+  a maximum number of allowed HTTP requests per a given time period (hourly
+  or daily).
 * Compatible with any Rack application and any Rack-based framework.
 * Stores rate-limiting counters in any key/value store implementation that
   responds to `#[]`/`#[]=` (like Ruby's hashes) or to `#get`/`#set` (like
@@ -34,25 +36,65 @@ Examples

     run lambda { |env| [200, {'Content-Type' => 'text/plain'}, "Hello, world!\n"] }

-### Enforcing a 3-second interval between requests
+### Enforcing a minimum 3-second interval between requests

     use Rack::Throttle::Interval, :min => 3.0

-###
+### Allowing a maximum of 100 requests per hour
+
+    use Rack::Throttle::Hourly, :max => 100
+
+### Allowing a maximum of 1,000 requests per day
+
+    use Rack::Throttle::Daily, :max => 1000
+
+### Combining various throttling constraints into one overall policy
+
+    use Rack::Throttle::Daily, :max => 1000 # requests
+    use Rack::Throttle::Hourly, :max => 100 # requests
+    use Rack::Throttle::Interval, :min => 3.0 # seconds
+
+### Storing the rate-limiting counters in a GDBM database

     require 'gdbm'
+
     use Rack::Throttle::Interval, :cache => GDBM.new('tmp/throttle.db')

-###
+### Storing the rate-limiting counters on a Memcached server

     require 'memcached'
+
     use Rack::Throttle::Interval, :cache => Memcached.new, :key_prefix => :throttle

-###
+### Storing the rate-limiting counters on a Redis server

     require 'redis'
+
     use Rack::Throttle::Interval, :cache => Redis.new, :key_prefix => :throttle

+Throttling Strategies
+---------------------
+
+`Rack::Throttle` supports three built-in throttling strategies:
+
+* `Rack::Throttle::Interval`: Throttles the application by enforcing a
+  minimum interval (by default, 1 second) between subsequent HTTP requests.
+* `Rack::Throttle::Hourly`: Throttles the application by defining a
+  maximum number of allowed HTTP requests per hour (by default, 3,600
+  requests per 60 minutes, which works out to an average of 1 request per
+  second).
+* `Rack::Throttle::Daily`: Throttles the application by defining a
+  maximum number of allowed HTTP requests per day (by default, 86,400
+  requests per 24 hours, which works out to an average of 1 request per
+  second).
+
+You can fully customize the implementation details of any of these strategies
+by simply subclassing one of the aforementioned default implementations.
+And, of course, should your application-specific requirements be
+significantly more complex than what we've provided for, you can also define
+entirely new kinds of throttling strategies by subclassing the
+`Rack::Throttle::Limiter` base class directly.
+
 HTTP Client Identification
 --------------------------

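
The new "Throttling Strategies" section above notes that custom strategies can be built by subclassing one of the default implementations, or `Rack::Throttle::Limiter` itself. As a rough sketch only (the `PathLimiter` class and the `/health` exemption are invented for illustration; it assumes `allowed?(request)` is the per-request hook, as in the `TimeWindow` class added later in this diff):

    require 'rack/throttle'

    # Hypothetical strategy: exempt a health-check path from the hourly limit,
    # delegating every other request to the inherited counter logic.
    class PathLimiter < Rack::Throttle::Hourly
      def allowed?(request)
        request.path == '/health' ? true : super
      end
    end

    use PathLimiter, :max => 100
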
@@ -62,8 +104,8 @@ keyed to unique HTTP clients.
 By default, HTTP clients are uniquely identified by their IP address as
 returned by `Rack::Request#ip`. If you wish to instead use a more granular,
 application-specific identifier such as a session key or a user account
-name, you need only subclass
-`#client_identifier` method.
+name, you need only subclass a throttling strategy implementation and
+override the `#client_identifier` method.

 HTTP Response Codes and Headers
 -------------------------------
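
The paragraph above suggests overriding `#client_identifier` to key the counters on something more granular than the IP address. A minimal sketch, assuming the method receives the `Rack::Request` (as the IP-based default implies) and that a session middleware sits in front of the throttle; `SessionThrottle` and the `:user_id` session key are made up for the example:

    class SessionThrottle < Rack::Throttle::Interval
      # Throttle per logged-in user, falling back to the IP address.
      def client_identifier(request)
        request.session[:user_id] || request.ip
      end
    end

    use Rack::Session::Cookie, :secret => 'change-me'
    use SessionThrottle, :min => 3.0
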
@@ -81,11 +123,11 @@ outlined in the acceptable use policy for the site, service, or API.

 ### 503 Service Unavailable (Rate Limit Exceeded)

-However, there
-
-
-
-
+However, there exists a widespread practice of instead returning a "503
+Service Unavailable" response when a client exceeds the set rate limits.
+This is technically dubious because it indicates an error on the server's
+part, which is certainly not the case with rate limiting - it was the client
+that committed the oops, not the server.

 An HTTP 503 response would be correct in situations where the server was
 genuinely overloaded and couldn't handle more requests, but for rate
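
For readers comparing the two status codes discussed above, these are the competing responses expressed as plain Rack triples (not rack-throttle's API; the Retry-After value is arbitrary):

    # "The client broke the rules" vs. "the server is overloaded":
    rate_limited = [403, {'Content-Type' => 'text/plain'},
                    ["403 Forbidden (Rate Limit Exceeded)\n"]]
    overloaded   = [503, {'Content-Type' => 'text/plain', 'Retry-After' => '3600'},
                    ["503 Service Unavailable\n"]]
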
@@ -129,10 +171,11 @@ as follows:

     % wget http://github.com/datagraph/rack-throttle/tarball/master

-
-
+Authors
+-------

 * [Arto Bendiken](mailto:arto.bendiken@gmail.com) - <http://ar.to/>
+* [Brendon Murphy](mailto:disposable.20.xternal@spamourmet.com>) - <http://www.techfreak.net/>

 License
 -------
data/VERSION
CHANGED
@@ -1 +1 @@
-0.2.0
+0.3.0
data/lib/rack/throttle.rb
CHANGED
@@ -2,10 +2,11 @@ require 'rack'

 module Rack
   module Throttle
-    autoload :
-    autoload :
-    autoload :
-    autoload :
-    autoload :
+    autoload :Limiter, 'rack/throttle/limiter'
+    autoload :Interval, 'rack/throttle/interval'
+    autoload :TimeWindow, 'rack/throttle/time_window'
+    autoload :Daily, 'rack/throttle/daily'
+    autoload :Hourly, 'rack/throttle/hourly'
+    autoload :VERSION, 'rack/throttle/version'
   end
 end
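
Each `autoload` above registers a constant against its file, so `require 'rack/throttle'` stays cheap and a strategy is only loaded when first referenced. Roughly:

    require 'rack/throttle'   # defines the constants, loads no strategy files yet
    Rack::Throttle::Hourly    # first reference requires 'rack/throttle/hourly'
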
data/lib/rack/throttle/daily.rb
CHANGED
@@ -5,8 +5,40 @@ module Rack; module Throttle
   # requests per 24 hours, which works out to an average of 1 request per
   # second).
   #
-  #
-
-
+  # Note that this strategy doesn't use a sliding time window, but rather
+  # tracks requests per calendar day. This means that the throttling counter
+  # is reset at midnight (according to the server's local timezone) every
+  # night.
+  #
+  # @example Allowing up to 86,400 requests per day
+  #   use Rack::Throttle::Daily
+  #
+  # @example Allowing up to 1,000 requests per day
+  #   use Rack::Throttle::Daily, :max => 1000
+  #
+  class Daily < TimeWindow
+    ##
+    # @param [#call] app
+    # @param [Hash{Symbol => Object}] options
+    # @option options [Integer] :max (86400)
+    def initialize(app, options = {})
+      super
+    end
+
+    ##
+    def max_per_day
+      @max_per_hour ||= options[:max_per_day] || options[:max] || 86_400
+    end
+
+    alias_method :max_per_window, :max_per_day
+
+    protected
+
+    ##
+    # @param [Rack::Request] request
+    # @return [String]
+    def cache_key(request)
+      [super, Time.now.strftime('%Y-%m-%d')].join(':')
+    end
   end
 end; end
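
The new `cache_key` appends the current calendar date to the per-client key, which is what makes the daily counter roll over at midnight rather than slide. Illustratively (the exact key layout depends on `Limiter#cache_key`, which this diff doesn't show; the IP is an example value):

    Time.now.strftime('%Y-%m-%d')   #=> "2010-03-22"
    # so a client's daily counter lives under a key shaped roughly like
    # "127.0.0.1:2010-03-22", and a fresh key (hence a fresh count)
    # appears after midnight.
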
data/lib/rack/throttle/hourly.rb
CHANGED
@@ -5,8 +5,40 @@ module Rack; module Throttle
   # requests per 60 minutes, which works out to an average of 1 request per
   # second).
   #
-  #
-
-
+  # Note that this strategy doesn't use a sliding time window, but rather
+  # tracks requests per distinct hour. This means that the throttling
+  # counter is reset every hour on the hour (according to the server's local
+  # timezone).
+  #
+  # @example Allowing up to 3,600 requests per hour
+  #   use Rack::Throttle::Hourly
+  #
+  # @example Allowing up to 100 requests per hour
+  #   use Rack::Throttle::Hourly, :max => 100
+  #
+  class Hourly < TimeWindow
+    ##
+    # @param [#call] app
+    # @param [Hash{Symbol => Object}] options
+    # @option options [Integer] :max (3600)
+    def initialize(app, options = {})
+      super
+    end
+
+    ##
+    def max_per_hour
+      @max_per_hour ||= options[:max_per_hour] || options[:max] || 3_600
+    end
+
+    alias_method :max_per_window, :max_per_hour
+
+    protected
+
+    ##
+    # @param [Rack::Request] request
+    # @return [String]
+    def cache_key(request)
+      [super, Time.now.strftime('%Y-%m-%dT%H')].join(':')
+    end
   end
 end; end
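
`Hourly` works the same way but folds the hour into the key, so the boundary is the top of the hour rather than a rolling 60-minute window. For example:

    Time.local(2010, 3, 22, 13, 55).strftime('%Y-%m-%dT%H')  #=> "2010-03-22T13"
    Time.local(2010, 3, 22, 14, 5).strftime('%Y-%m-%dT%H')   #=> "2010-03-22T14"
    # Two requests ten minutes apart can land in different hourly windows.
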
data/lib/rack/throttle/limiter.rb
CHANGED
@@ -14,8 +14,8 @@ module Rack; module Throttle
     attr_reader :options

     ##
-    # @param [#call]
-    # @param [Hash{Symbol => Object}]
+    # @param [#call] app
+    # @param [Hash{Symbol => Object}] options
     # @option options [String] :cache (Hash.new)
     # @option options [String] :key (nil)
     # @option options [String] :key_prefix (nil)
@@ -85,7 +85,7 @@ module Rack; module Throttle
     ##
     # @return [Hash]
     def cache
-      case cache = (
+      case cache = (options[:cache] ||= {})
         when Proc then cache.call
         else cache
       end
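
The completed `#cache` above also shows that the `:cache` option may be a `Proc`, which is called on every access, so the backing store can be created lazily. A sketch under that assumption (the global memoization and the Redis store are just one possible choice, following the README examples):

    use Rack::Throttle::Interval,
        :cache => lambda { $redis ||= Redis.new },
        :key_prefix => :throttle
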
data/lib/rack/throttle/time_window.rb
ADDED
@@ -0,0 +1,21 @@
+module Rack; module Throttle
+  ##
+  class TimeWindow < Limiter
+    ##
+    # Returns `true` if fewer than the maximum number of requests permitted
+    # for the current window of time have been made.
+    #
+    # @param [Rack::Request] request
+    # @return [Boolean]
+    def allowed?(request)
+      count = cache_get(key = cache_key(request)).to_i + 1 rescue 1
+      allowed = count <= max_per_window
+      begin
+        cache_set(key, count)
+        allowed
+      rescue => e
+        allowed = true
+      end
+    end
+  end
+end; end
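
`TimeWindow` is the shared base that `Daily` and `Hourly` plug into: it counts requests under a window-specific cache key, compares the count against `max_per_window`, and deliberately lets the request through if the cache write fails. A subclass only needs to supply the window size and the key suffix; a hypothetical weekly variant (not part of the gem) would follow the same pattern:

    class Weekly < Rack::Throttle::TimeWindow
      def max_per_week
        @max_per_week ||= options[:max_per_week] || options[:max] || 604_800
      end
      alias_method :max_per_window, :max_per_week

      protected

      # Append the year and week number so the counter resets each week.
      def cache_key(request)
        [super, Time.now.strftime('%Y-W%W')].join(':')
      end
    end
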
metadata
CHANGED
@@ -4,17 +4,18 @@ version: !ruby/object:Gem::Version
   prerelease: false
   segments:
   - 0
-  - 2
+  - 3
   - 0
-  version: 0.2.0
+  version: 0.3.0
 platform: ruby
 authors:
 - Arto Bendiken
+- Brendon Murphy
 autorequire:
 bindir: bin
 cert_chain: []

-date: 2010-03-
+date: 2010-03-22 00:00:00 +01:00
 default_executable:
 dependencies:
 - !ruby/object:Gem::Dependency
@@ -90,6 +91,7 @@ files:
 - lib/rack/throttle/hourly.rb
 - lib/rack/throttle/interval.rb
 - lib/rack/throttle/limiter.rb
+- lib/rack/throttle/time_window.rb
 - lib/rack/throttle/version.rb
 - lib/rack/throttle.rb
 has_rdoc: false
@@ -123,6 +125,6 @@ rubyforge_project: datagraph
 rubygems_version: 1.3.6
 signing_key:
 specification_version: 3
-summary:
+summary: HTTP request rate limiter for Rack applications.
 test_files: []
