rack-throttle 0.1.0 → 0.2.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- data/README +80 -6
- data/VERSION +1 -1
- data/lib/rack/throttle/interval.rb +10 -1
- data/lib/rack/throttle/limiter.rb +18 -10
- data/lib/rack/throttle/version.rb +1 -1
- metadata +3 -3
data/README
CHANGED
@@ -1,13 +1,28 @@
-HTTP Request Rate Limiter for Rack
-
+HTTP Request Rate Limiter for Rack Applications
+===============================================
 
 This is [Rack][] middleware that provides logic for rate-limiting incoming
-HTTP requests to
-
-
+HTTP requests to Rack applications. You can use `Rack::Throttle` with any
+Ruby web framework based on Rack, including with Ruby on Rails 3.0 and with
+Sinatra.
 
 * <http://github.com/datagraph/rack-throttle>
 
+Features
+--------
+
+* Throttles a Rack application by enforcing a minimum interval (by default,
+  1 second) between subsequent HTTP requests from a particular client.
+* Compatible with any Rack application and any Rack-based framework.
+* Stores rate-limiting counters in any key/value store implementation that
+  responds to `#[]`/`#[]=` (like Ruby's hashes) or to `#get`/`#set` (like
+  memcached or Redis).
+* Compatible with the [gdbm][] binding included in Ruby's standard library.
+* Compatible with the [memcached][], [memcache-client][], [memcache][] and
+  [redis][] gems.
+* Compatible with [Heroku][]'s [memcached add-on][Heroku memcache]
+  (currently available as a free beta service).
+
 Examples
 --------
 
@@ -23,10 +38,62 @@ Examples
 
     use Rack::Throttle::Interval, :min => 3.0
 
+### Using GDBM to store rate-limiting counters
+
+    require 'gdbm'
+    use Rack::Throttle::Interval, :cache => GDBM.new('tmp/throttle.db')
+
 ### Using Memcached to store rate-limiting counters
 
+    require 'memcached'
     use Rack::Throttle::Interval, :cache => Memcached.new, :key_prefix => :throttle
 
+### Using Redis to store rate-limiting counters
+
+    require 'redis'
+    use Rack::Throttle::Interval, :cache => Redis.new, :key_prefix => :throttle
+
+HTTP Client Identification
+--------------------------
+
+The rate-limiting counters stored and maintained by `Rack::Throttle` are
+keyed to unique HTTP clients.
+
+By default, HTTP clients are uniquely identified by their IP address as
+returned by `Rack::Request#ip`. If you wish to instead use a more granular,
+application-specific identifier such as a session key or a user account
+name, you need only subclass `Rack::Throttle::Interval` and override the
+`#client_identifier` method.
+
+HTTP Response Codes and Headers
+-------------------------------
+
+### 403 Forbidden (Rate Limit Exceeded)
+
+When a client exceeds their rate limit, `Rack::Throttle` by default returns
+a "403 Forbidden" response with an associated "Rate Limit Exceeded" message
+in the response body.
+
+An HTTP 403 response means that the server understood the request, but is
+refusing to respond to it and an accompanying message will explain why.
+This indicates an error on the client's part in exceeding the rate limits
+outlined in the acceptable use policy for the site, service, or API.
+
+### 503 Service Unavailable (Rate Limit Exceeded)
+
+However, there is an unfortunately widespread practice of instead returning
+a "503 Service Unavailable" response when a client exceeds the set rate
+limits. This is actually technically incorrect because it indicates an
+error on the server's part, which is certainly not the case with rate
+limiting - it was the client that committed the oops, not the server.
+
+An HTTP 503 response would be correct in situations where the server was
+genuinely overloaded and couldn't handle more requests, but for rate
+limiting an HTTP 403 response is more appropriate. Nonetheless, if you think
+otherwise, `Rack::Throttle` does allow you to override the returned HTTP
+status code by passing in a `:code => 503` option when constructing a
+`Rack::Throttle::Limiter` instance.
+
 Documentation
 -------------
 
@@ -73,4 +140,11 @@ License
 `Rack::Throttle` is free and unencumbered public domain software. For more
 information, see <http://unlicense.org/> or the accompanying UNLICENSE file.
 
-[Rack]:
+[Rack]: http://rack.rubyforge.org/
+[gdbm]: http://ruby-doc.org/stdlib/libdoc/gdbm/rdoc/classes/GDBM.html
+[memcached]: http://rubygems.org/gems/memcached
+[memcache-client]: http://rubygems.org/gems/memcache-client
+[memcache]: http://rubygems.org/gems/memcache
+[redis]: http://rubygems.org/gems/redis
+[Heroku]: http://heroku.com/
+[Heroku memcache]: http://docs.heroku.com/memcache
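The customization hooks described in the new README sections above can be combined in a single `config.ru`. The following is a hypothetical sketch rather than code from the gem: the subclass name, the `'user_id'` session key, and the use of `Rack::Session::Cookie` are illustrative assumptions, while the `#client_identifier` override (which receives the current `Rack::Request`, as the default IP-based implementation does) and the `:min`, `:code`, and `:message` options are the extension points the README documents.

    # config.ru (sketch)
    require 'rack/throttle'

    class SessionThrottle < Rack::Throttle::Interval
      # Key the rate-limiting counters to a per-user session value instead of
      # the client IP address, falling back to the IP for anonymous clients.
      def client_identifier(request)
        request.env['rack.session']['user_id'] || request.ip
      end
    end

    use Rack::Session::Cookie
    use SessionThrottle, :min => 3.0, :code => 503, :message => 'Rate Limit Exceeded'
    run lambda { |env| [200, {'Content-Type' => 'text/plain'}, ['Hello, world!']] }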
data/VERSION
CHANGED
@@ -1 +1 @@
-0.1.0
+0.2.0
data/lib/rack/throttle/interval.rb
CHANGED
@@ -29,7 +29,7 @@ module Rack; module Throttle
     def allowed?(request)
       t1 = request_start_time(request)
       t0 = cache_get(key = cache_key(request)) rescue nil
-      allowed = !t0 || (t1 - t0.to_f) >= minimum_interval
+      allowed = !t0 || (dt = t1 - t0.to_f) >= minimum_interval
       begin
         cache_set(key, t1)
         allowed
@@ -42,6 +42,15 @@ module Rack; module Throttle
       end
     end
 
+    ##
+    # Returns the number of seconds before the client is allowed to retry an
+    # HTTP request.
+    #
+    # @return [Float]
+    def retry_after
+      minimum_interval
+    end
+
     ##
     # Returns the required minimal interval (in terms of seconds) that must
     # elapse between two subsequent HTTP requests.
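The new `#retry_after` hook feeds the `Retry-After` header added in `limiter.rb` below: rejected responses report the minimum interval, rounded up to whole seconds. As a rough, test-only sketch (the `Rack::MockRequest` scaffolding, the lambda application, and the addresses are illustrative assumptions, not part of the gem), two back-to-back requests from the same client behave roughly like this:

    require 'rack'
    require 'rack/mock'
    require 'rack/throttle'

    app = Rack::Builder.new do
      use Rack::Throttle::Interval, :min => 3.0
      run lambda { |env| [200, {'Content-Type' => 'text/plain'}, ['ok']] }
    end.to_app

    client = Rack::MockRequest.new(app)
    client.get('/', 'REMOTE_ADDR' => '10.0.0.1')            # first request passes: 200 OK
    blocked = client.get('/', 'REMOTE_ADDR' => '10.0.0.1')  # issued within the 3-second interval
    blocked.status                   #=> 403 (Rate Limit Exceeded)
    blocked.headers['Retry-After']   #=> "3" (retry_after, i.e. minimum_interval, rounded up)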
data/lib/rack/throttle/limiter.rb
CHANGED
@@ -122,7 +122,16 @@ module Rack; module Throttle
     def cache_set(key, value)
       case
         when cache.respond_to?(:[]=)
-          cache[key] = value
+          begin
+            cache[key] = value
+          rescue TypeError => e
+            # GDBM throws a "TypeError: can't convert Float into String"
+            # exception when trying to store a Float. On the other hand, we
+            # don't want to unnecessarily coerce the value to a String for
+            # any stores that do support other data types (e.g. in-memory
+            # hash objects). So, this is a compromise.
+            cache[key] = value.to_s
+          end
         when cache.respond_to?(:set)
           cache.set(key, value)
       end
@@ -164,22 +173,21 @@ module Rack; module Throttle
     ##
     # Outputs a `Rate Limit Exceeded` error.
     #
-    # @param [Integer] code
-    # @param [String] message
     # @return [Array(Integer, Hash, #each)]
-    def rate_limit_exceeded
-
-
+    def rate_limit_exceeded
+      headers = respond_to?(:retry_after) ? {'Retry-After' => retry_after.to_f.ceil.to_s} : {}
+      http_error(options[:code] || 403, options[:message] || 'Rate Limit Exceeded', headers)
     end
 
     ##
     # Outputs an HTTP `4xx` or `5xx` response.
     #
-    # @param [Integer]
-    # @param [String, #to_s]
+    # @param [Integer] code
+    # @param [String, #to_s] message
+    # @param [Hash{String => String}] headers
     # @return [Array(Integer, Hash, #each)]
-    def http_error(code, message = nil)
-      [code, {'Content-Type' => 'text/plain; charset=utf-8'},
+    def http_error(code, message = nil, headers = {})
+      [code, {'Content-Type' => 'text/plain; charset=utf-8'}.merge(headers),
       http_status(code) + (message.nil? ? "\n" : " (#{message})\n")]
     end
 
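The `cache_set` change above also illustrates the duck-typed store contract from the README: the `:cache` option only needs to answer `#[]`/`#[]=` or `#get`/`#set`. A hypothetical adapter along those lines (the class name, the 60-second TTL, and the choice of `SETEX` are illustrative assumptions, not part of the gem) could let stale per-client timestamps expire on their own:

    require 'redis'

    # Minimal #get/#set wrapper: any object answering these two methods can be
    # passed as the :cache, per the key/value store contract in the README.
    class ExpiringRedisStore
      def initialize(redis = Redis.new, ttl = 60)
        @redis, @ttl = redis, ttl
      end

      def get(key)
        @redis.get(key)
      end

      def set(key, value)
        @redis.setex(key, @ttl, value)  # write with a TTL so old keys expire
      end
    end

    use Rack::Throttle::Interval, :cache => ExpiringRedisStore.new, :key_prefix => :throttle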
metadata
CHANGED
@@ -4,9 +4,9 @@ version: !ruby/object:Gem::Version
   prerelease: false
   segments:
   - 0
-  - 1
+  - 2
   - 0
-  version: 0.1.0
+  version: 0.2.0
 platform: ruby
 authors:
 - Arto Bendiken
@@ -73,7 +73,7 @@ dependencies:
         version: 1.0.0
   type: :runtime
   version_requirements: *id004
-description: Rack middleware for rate-limiting HTTP requests.
+description: Rack middleware for rate-limiting incoming HTTP requests.
 email: arto.bendiken@gmail.com
 executables: []
 