jduff-api-throttling 0.1.0
- data/LICENSE +20 -0
- data/README.md +175 -0
- data/Rakefile +38 -0
- data/TODO.md +6 -0
- data/VERSION.yml +4 -0
- data/lib/api_throttling.rb +49 -0
- data/lib/handlers/handlers.rb +46 -0
- data/lib/handlers/memcache_handler.rb +15 -0
- data/lib/handlers/redis_handler.rb +14 -0
- data/test/test_api_throttling.rb +39 -0
- data/test/test_api_throttling_memcache.rb +24 -0
- data/test/test_handlers.rb +39 -0
- data/test/test_helper.rb +61 -0
- metadata +70 -0
data/LICENSE
ADDED
@@ -0,0 +1,20 @@
Copyright (c) 2009 Luc Castera

Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:

The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
data/README.md
ADDED
@@ -0,0 +1,175 @@
# Rack Middleware for Api Throttling

<p>I will show you a technique to impose a rate limit (aka API Throttling) on a Ruby web service. I will be using Rack middleware, so you can use this no matter which Ruby web framework you are using, as long as it is Rack-compliant.</p>

<h2>Introduction to Rack</h2>

<p>There are plenty of <a href="http://jasonseifer.com/2009/04/08/32-rack-resources-to-get-you-started">great resources</a> to learn the basics of Rack, so I will not explain how Rack works here, but you will need to understand it in order to follow this post. I highly recommend watching the <a href="http://remi.org/2009/02/19/rack-basics.html">three</a> <a href="http://remi.org/2009/02/24/rack-part-2.html">Rack</a> <a href="http://remi.org/2009/02/28/rack-part-3-middleware.html">screencasts</a> from <a href="http://remi.org/">Remi</a> to get started with Rack.</p>

<h2>Basic Rack Application</h2>

<p>First, make sure you have the <a href="http://code.macournoyer.com/thin/">thin webserver</a> installed.</p>

<pre>sudo gem install thin</pre>

<p>We are going to use the following 'Hello World' Rack application to test our API Throttling middleware.</p>

<pre>
use Rack::ShowExceptions
use Rack::Lint

run lambda {|env| [200, { 'Content-Type' => 'text/plain', 'Content-Length' => '12'}, ["Hello World!"] ] }
</pre>

<p>Save this code in a file called <em>config.ru</em> and then run it with the thin webserver, using the following command:</p>

<pre>thin --rackup config.ru start</pre>

<p>Now open another terminal window (or a browser) to test that this is working as expected:</p>

<pre>curl -i http://localhost:3000</pre>

<p>The -i option tells curl to include the HTTP header in the output, so you should see the following:</p>

<pre>
$ curl -i http://localhost:3000
HTTP/1.1 200 OK
Content-Type: text/plain
Content-Length: 12
Connection: keep-alive
Server: thin 1.0.0 codename That's What She Said

Hello World!
</pre>

<p>At this point, we have a basic Rack application that we can use to test our Rack middleware. Now let's get started.</p>

<h2>Redis</h2>

<p>We need a way to record the number of requests that users are making to our web service if we want to limit the rate at which they can use the API. Every time they make a request, we want to check whether they've gone past their rate limit before we respond. We also want to store the fact that they've just made a request. Since every call to our web service requires this check-and-record step, we would like it to be as fast as possible.</p>

<p>This is where Redis comes in. Redis is a super-fast key-value database that we've highlighted <a href="http://blog.messagepub.com/2009/04/20/project-spotlight-redis-a-fast-data-structure-database/">in a previous blog post</a>. It can do about 110,000 SETs per second and about 81,000 GETs per second. That's the kind of performance we are looking for, since we would not like our rate-limiting middleware to reduce the performance of our web service.</p>

<p>Install the redis ruby client library with:</p>

<pre>sudo gem install ezmobius-redis-rb</pre>

<h2>Our Rack Middleware</h2>

<p>We are assuming that the web service uses HTTP Basic Authentication. You could use another type of authentication and adapt the code to fit your model.</p>

<p>Our rack middleware will do the following:</p>
<ul>
<li>For every request received, increment a key in our database. The key string will consist of the authenticated username followed by a timestamp for the current hour. For example, for a user called joe, the key would be: <em><strong>joe_2009-05-01-12</strong></em></li>
<li>If the value of that key is greater than our 'maximum requests per hour' limit, return an HTTP response with a status code of 503, indicating that the user has gone over his rate limit.</li>
<li>If the value of the key is less than or equal to the maximum requests per hour limit, allow the user's request to go through.</li>
</ul>
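<p>The three steps above amount to a fixed-window counter. Here is a minimal, self-contained sketch of that scheme (a plain Ruby Hash stands in for Redis, and <code>FixedWindowCounter</code> is an invented name for illustration):</p>

```ruby
require 'time'

# Minimal fixed-window counter, as described in the steps above.
# A plain in-memory Hash stands in for Redis; the real middleware
# uses Redis's atomic INCR for the same bookkeeping.
class FixedWindowCounter
  def initialize(requests_per_hour)
    @limit = requests_per_hour
    @counts = Hash.new(0)       # missing keys start at 0, like INCR
  end

  # Returns 200 if the request is allowed, 503 once the user is over the limit.
  def hit(username, now = Time.now)
    key = "#{username}_#{now.strftime('%Y-%m-%d-%H')}"
    @counts[key] += 1
    @counts[key] > @limit ? 503 : 200
  end
end

counter = FixedWindowCounter.new(3)
noon = Time.parse('2009-05-01 12:00:00')
4.times.map { counter.hit('joe', noon) }   # => [200, 200, 200, 503]
counter.hit('joe', noon + 3600)            # => 200 (new hour, fresh counter)
```

<p>Because the hour is baked into the key, counters for a new hour start at zero automatically, and each user gets an independent count.</p>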
<p>Redis has an atomic <a href="http://code.google.com/p/redis/wiki/IncrCommand">INCR command</a> that is a perfect fit for our use case. It increments the key's value by one; if the key does not exist, it sets the key to "0" and then increments it. Awesome! We don't even need to write our own logic to check whether the key exists before incrementing it; Redis takes care of that for us.</p>

<pre>
r = Redis.new
key = "#{auth.username}_#{Time.now.strftime("%Y-%m-%d-%H")}"
r.incr(key)
return over_rate_limit if r[key].to_i > @options[:requests_per_hour]
</pre>
<p>If our redis-server is not running, rather than throwing an error that affects all our users, we let all requests pass through by catching the exception and doing nothing. That means that if your redis-server goes down you are no longer throttling use of your web service, so you need to make sure it's always running (using <a href="http://mmonit.com/monit/">monit</a> or <a href="http://god.rubyforge.org/">god</a>, for example).</p>
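<p>That fail-open behaviour can be sketched on its own (<code>allow_request?</code> is an illustrative helper, not part of the middleware; the block stands in for the round trip to Redis):</p>

```ruby
# Fail-open sketch: the block stands in for the increment-and-read round trip
# to redis-server. If the connection is refused, swallow the error and allow
# the request rather than failing for every user.
REQUESTS_PER_HOUR = 3

def allow_request?
  count = yield                    # e.g. INCR then GET against Redis
  count <= REQUESTS_PER_HOUR
rescue Errno::ECONNREFUSED
  true                             # redis-server is down: do not throttle
end

allow_request? { raise Errno::ECONNREFUSED }  # => true  (fail open)
allow_request? { 5 }                          # => false (over the limit)
```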
<p>Finally, we want anyone who uses this Rack middleware to be able to set their own limit via the <em>requests_per_hour</em> option.</p>

<p>The full code for our middleware is below. You can also find it at <a href="https://github.com/dambalah/api-throttling">github.com/dambalah/api-throttling</a>.</p>

<pre>
require 'rubygems'
require 'rack'
require 'redis'

class ApiThrottling
  def initialize(app, options={})
    @app = app
    @options = {:requests_per_hour => 60}.merge(options)
  end

  def call(env, options={})
    auth = Rack::Auth::Basic::Request.new(env)
    if auth.provided?
      return bad_request unless auth.basic?
      begin
        r = Redis.new
        key = "#{auth.username}_#{Time.now.strftime("%Y-%m-%d-%H")}"
        r.incr(key)
        return over_rate_limit if r[key].to_i > @options[:requests_per_hour]
      rescue Errno::ECONNREFUSED
        # If redis-server is not running, we simply do not throttle the API instead of throwing an error.
        # It's better to have your service up and running but unthrottled than to have it throw errors for all users.
        # Make sure you monitor your redis-server so that it's never down. monit is a great tool for that.
      end
    end
    @app.call(env)
  end

  def bad_request
    body_text = "Bad Request"
    [ 400, { 'Content-Type' => 'text/plain', 'Content-Length' => body_text.size.to_s }, [body_text] ]
  end

  def over_rate_limit
    body_text = "Over Rate Limit"
    [ 503, { 'Content-Type' => 'text/plain', 'Content-Length' => body_text.size.to_s }, [body_text] ]
  end
end
</pre>

<p>To use it on our 'Hello World' rack application, simply add it with the <em>use</em> keyword and the <em>:requests_per_hour</em> option:</p>

<pre>
require 'api_throttling'

use Rack::Lint
use Rack::ShowExceptions
use ApiThrottling, :requests_per_hour => 3

run lambda {|env| [200, {'Content-Type' => 'text/plain', 'Content-Length' => '12'}, ["Hello World!"] ] }
</pre>

<p><strong>That's it!</strong> Make sure your <em>redis-server</em> is running on port 6379 and try making calls to your API with curl. The first 3 calls will be successful, but the next ones will be blocked because you've reached the limit we set:</p>

<pre>
$ curl -i http://joe@localhost:3000
HTTP/1.1 200 OK
Content-Type: text/plain
Content-Length: 12
Connection: keep-alive
Server: thin 1.0.0 codename That's What She Said

Hello World!

$ curl -i http://joe@localhost:3000
HTTP/1.1 200 OK
Content-Type: text/plain
Content-Length: 12
Connection: keep-alive
Server: thin 1.0.0 codename That's What She Said

Hello World!

$ curl -i http://joe@localhost:3000
HTTP/1.1 200 OK
Content-Type: text/plain
Content-Length: 12
Connection: keep-alive
Server: thin 1.0.0 codename That's What She Said

Hello World!

$ curl -i http://joe@localhost:3000
HTTP/1.1 503 Service Unavailable
Content-Type: text/plain
Content-Length: 15
Connection: keep-alive
Server: thin 1.0.0 codename That's What She Said

Over Rate Limit
</pre>
data/Rakefile
ADDED
@@ -0,0 +1,38 @@
require 'rubygems'
require 'rake'

begin
  require 'jeweler'
  Jeweler::Tasks.new do |gemspec|
    gemspec.name = "api-throttling"
    gemspec.summary = "Rack Middleware to impose a rate limit on a web service (aka API Throttling)"
    gemspec.email = "duff.john@gmail.com"
    gemspec.homepage = "http://github.com/jduff/api-throttling/tree"
    gemspec.description = "TODO"
    gemspec.authors = ["Luc Castera", "John Duff"]
  end
rescue LoadError
  puts "Jeweler not available. Install it with: sudo gem install technicalpickles-jeweler -s http://gems.github.com"
end

require 'rake/testtask'
Rake::TestTask.new(:test) do |test|
  test.libs << 'lib' << 'test'
  test.pattern = 'test/**/test_*.rb'
  test.verbose = true
end

begin
  require 'rcov/rcovtask'
  Rcov::RcovTask.new do |test|
    test.libs << 'test'
    test.pattern = 'test/**/*_test.rb'
    test.verbose = true
  end
rescue LoadError
  task :rcov do
    abort "RCov is not available. In order to run rcov, you must: sudo gem install spicycode-rcov"
  end
end

task :default => :test
data/lib/api_throttling.rb
ADDED
@@ -0,0 +1,49 @@
require 'rubygems'
require 'rack'
require File.expand_path(File.dirname(__FILE__) + '/handlers/handlers')

class ApiThrottling
  def initialize(app, options={})
    @app = app
    @options = {:requests_per_hour => 60, :cache=>:redis}.merge(options)
    @handler = Handlers.cache_handler_for(@options[:cache])
    raise "Sorry, we couldn't find a handler for the cache you specified: #{@options[:cache]}" unless @handler
  end

  def call(env, options={})
    auth = Rack::Auth::Basic::Request.new(env)
    if auth.provided?
      return bad_request unless auth.basic?
      begin
        cache = @handler.new(@options[:cache])
        key = "#{auth.username}_#{Time.now.strftime("%Y-%m-%d-%H")}"
        cache.increment(key)
        return over_rate_limit if cache.get(key).to_i > @options[:requests_per_hour]
      rescue Errno::ECONNREFUSED
        # If the cache server is not running, we simply do not throttle the API instead of throwing an error.
        # It's better to have your service up and running but unthrottled than to have it throw errors for all users.
        # Make sure you monitor your cache server so that it's never down. monit is a great tool for that.
      end
    end
    @app.call(env)
  end

  def bad_request
    body_text = "Bad Request"
    [ 400, { 'Content-Type' => 'text/plain', 'Content-Length' => body_text.size.to_s }, [body_text] ]
  end

  def over_rate_limit
    body_text = "Over Rate Limit"
    retry_after_in_seconds = (60 - Time.now.min) * 60
    [ 503,
      { 'Content-Type' => 'text/plain',
        'Content-Length' => body_text.size.to_s,
        'Retry-After' => retry_after_in_seconds.to_s
      },
      [body_text]
    ]
  end
end
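The Retry-After value in over_rate_limit above is computed from the minute hand only, giving the number of seconds until the top of the current hour (rounded up to a whole minute). A standalone sketch of the same arithmetic, with the computation extracted as a function of an explicit time:

```ruby
require 'time'

# Same arithmetic as retry_after_in_seconds in over_rate_limit above:
# seconds until the top of the hour, computed from the minute hand only
# (seconds are ignored, so the value can overshoot by up to a minute).
def retry_after_in_seconds(now)
  (60 - now.min) * 60
end

retry_after_in_seconds(Time.parse('2009-07-03 12:15:30'))  # => 2700
retry_after_in_seconds(Time.parse('2009-07-03 12:59:00'))  # => 60
```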
data/lib/handlers/handlers.rb
ADDED
@@ -0,0 +1,46 @@
module Handlers
  # Creating a new cache handler is as simple as extending the Handler class,
  # setting the class to use as the cache by calling cache_class("Redis"),
  # and then implementing the increment and get methods for that cache type.
  #
  # If you don't want to extend Handler you can just create a class that implements
  # increment(key), get(key) and handles?(info).
  #
  # Once you have a new handler, make sure it is required here and added to the HANDLERS list;
  # you can then initialize the middleware and pass :cache=>CACHE_NAME as an option.
  class Handler
    def initialize(object=nil)
      @cache = object.is_a?(self.class.cache_class) ? object : self.class.cache_class.new
    end

    def increment(key)
      raise "Cache Handlers must implement an increment method"
    end

    def get(key)
      raise "Cache Handlers must implement a get method"
    end

    class << self
      def handles?(info)
        info.to_s.downcase == cache_class.to_s.downcase || info.is_a?(self.cache_class)
      end

      def cache_class(name = nil)
        @cache_class = name if name
        Object.const_get(@cache_class) if @cache_class
      end
    end
  end

  %w(redis_handler memcache_handler).each do |handler|
    require File.expand_path(File.dirname(__FILE__) + "/#{handler}")
  end

  HANDLERS = [RedisHandler, MemCacheHandler]

  def self.cache_handler_for(info)
    HANDLERS.detect{|handler| handler.handles?(info)}
  end
end
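As the comments above describe, a custom cache handler only needs to implement increment(key), get(key) and a class-level handles?(info). A hypothetical, self-contained example (HashCache is an invented in-memory stand-in for a real cache client; to wire it into the middleware you would also require it here and add the handler to the HANDLERS list):

```ruby
# Hypothetical custom handler following the contract above: implement
# increment(key), get(key) and a class-level handles?(info).
# HashCache is an invented in-memory stand-in for a real cache client.
class HashCache
  def initialize
    @data = Hash.new(0)
  end

  def incr(key)
    @data[key] += 1
  end

  def [](key)
    @data[key]
  end
end

class HashCacheHandler
  def initialize(object = nil)
    @cache = object.is_a?(HashCache) ? object : HashCache.new
  end

  def increment(key)
    @cache.incr(key)
  end

  def get(key)
    @cache[key]
  end

  # Mirrors Handler.handles?: match by name (case-insensitive) or by instance.
  def self.handles?(info)
    info.to_s.downcase == 'hashcache' || info.is_a?(HashCache)
  end
end

handler = HashCacheHandler.new
handler.increment('joe_2009-05-01-12')
handler.get('joe_2009-05-01-12')        # => 1
HashCacheHandler.handles?(:hashcache)   # => true
```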
data/test/test_api_throttling.rb
ADDED
@@ -0,0 +1,39 @@
require File.expand_path(File.dirname(__FILE__) + '/test_helper')
require 'redis'

# To run this test, you need to have the redis-server running.
# You also need the rack-test gem installed: sudo gem install rack-test
# For more information on rack-test, visit: http://github.com/brynary/rack-test

class ApiThrottlingTest < Test::Unit::TestCase
  include Rack::Test::Methods
  include BasicTests

  def app
    app = Rack::Builder.new {
      use ApiThrottling, :requests_per_hour => 3
      run lambda {|env| [200, {'Content-Type' => 'text/plain', 'Content-Length' => '12'}, ["Hello World!"] ] }
    }
  end

  def setup
    # Delete all the keys for 'joe' and 'luc' in Redis so that every test starts fresh.
    # Having this here also helps as a reminder to start redis-server.
    begin
      r = Redis.new
      r.keys("joe*").each do |key|
        r.delete key
      end
      r.keys("luc*").each do |key|
        r.delete key
      end
    rescue Errno::ECONNREFUSED
      assert false, "You need to start redis-server"
    end
  end

  def test_cache_handler_should_be_redis
    assert_equal "Handlers::RedisHandler", app.to_app.instance_variable_get(:@handler).to_s
  end

end
data/test/test_api_throttling_memcache.rb
ADDED
@@ -0,0 +1,24 @@
require File.expand_path(File.dirname(__FILE__) + '/test_helper')
require 'memcache'

class TestApiThrottlingMemcache < Test::Unit::TestCase
  include Rack::Test::Methods
  include BasicTests
  CACHE = MemCache.new 'localhost:11211', :namespace=>'api-throttling-tests'

  def app
    app = Rack::Builder.new {
      use ApiThrottling, :requests_per_hour => 3, :cache => CACHE, :read_method=>"get", :write_method=>"add"
      run lambda {|env| [200, {'Content-Type' => 'text/plain', 'Content-Length' => '12'}, ["Hello World!"] ] }
    }
  end

  def setup
    CACHE.flush_all
  end

  def test_cache_handler_should_be_memcache
    assert_equal "Handlers::MemCacheHandler", app.to_app.instance_variable_get(:@handler).to_s
  end

end
data/test/test_handlers.rb
ADDED
@@ -0,0 +1,39 @@
require File.expand_path(File.dirname(__FILE__) + '/test_helper')
require 'redis'
require 'memcache'

class HandlersTest < Test::Unit::TestCase

  def test_redis_should_handle_redis
    assert Handlers::RedisHandler.handles?(:redis)
    assert Handlers::RedisHandler.handles?('redis')
    assert Handlers::RedisHandler.handles?('Redis')
    assert Handlers::RedisHandler.handles?(Redis.new)
  end

  def test_redis_should_not_handle_memcache
    assert !Handlers::RedisHandler.handles?(:memcache)
    assert !Handlers::RedisHandler.handles?('memcache')
    assert !Handlers::RedisHandler.handles?('MemCache')
    assert !Handlers::RedisHandler.handles?(MemCache.new)
  end

  def test_memcache_should_not_handle_redis
    assert !Handlers::MemCacheHandler.handles?(:redis)
    assert !Handlers::MemCacheHandler.handles?('redis')
    assert !Handlers::MemCacheHandler.handles?('Redis')
    assert !Handlers::MemCacheHandler.handles?(Redis.new)
  end

  def test_memcache_should_handle_memcache
    assert Handlers::MemCacheHandler.handles?(:memcache)
    assert Handlers::MemCacheHandler.handles?('memcache')
    assert Handlers::MemCacheHandler.handles?('MemCache')
    assert Handlers::MemCacheHandler.handles?(MemCache.new)
  end

end
data/test/test_helper.rb
ADDED
@@ -0,0 +1,61 @@
require 'rubygems'
require 'rack/test'
require 'test/unit'
require File.expand_path(File.dirname(__FILE__) + '/../lib/api_throttling')

# This way we can include the module in any of the handler tests.
module BasicTests
  def test_first_request_should_return_hello_world
    authorize "joe", "secret"
    get '/'
    assert_equal 200, last_response.status
    assert_equal "Hello World!", last_response.body
  end

  def test_fourth_request_should_be_blocked
    authorize "joe", "secret"
    get '/'
    assert_equal 200, last_response.status
    get '/'
    assert_equal 200, last_response.status
    get '/'
    assert_equal 200, last_response.status
    get '/'
    assert_equal 503, last_response.status
    get '/'
    assert_equal 503, last_response.status
  end

  def test_over_rate_limit_should_only_apply_to_user_that_went_over_the_limit
    authorize "joe", "secret"
    5.times { get '/' }
    assert_equal 503, last_response.status
    authorize "luc", "secret"
    get '/'
    assert_equal 200, last_response.status
  end

  def test_over_rate_limit_should_return_a_retry_after_header
    authorize "joe", "secret"
    4.times { get '/' }
    assert_equal 503, last_response.status
    assert_not_nil last_response.headers['Retry-After']
  end

  def test_retry_after_should_be_less_than_60_minutes
    authorize "joe", "secret"
    4.times { get '/' }
    assert_equal 503, last_response.status
    assert last_response.headers['Retry-After'].to_i <= (60 * 60)
  end
end
metadata
ADDED
@@ -0,0 +1,70 @@
--- !ruby/object:Gem::Specification
name: jduff-api-throttling
version: !ruby/object:Gem::Version
  version: 0.1.0
platform: ruby
authors:
- Luc Castera
- John Duff
autorequire:
bindir: bin
cert_chain: []

date: 2009-07-03 00:00:00 -07:00
default_executable:
dependencies: []

description: TODO
email: duff.john@gmail.com
executables: []

extensions: []

extra_rdoc_files:
- LICENSE
- README.md
files:
- LICENSE
- README.md
- Rakefile
- TODO.md
- VERSION.yml
- lib/api_throttling.rb
- lib/handlers/handlers.rb
- lib/handlers/memcache_handler.rb
- lib/handlers/redis_handler.rb
- test/test_api_throttling.rb
- test/test_api_throttling_memcache.rb
- test/test_handlers.rb
- test/test_helper.rb
has_rdoc: true
homepage: http://github.com/jduff/api-throttling/tree
post_install_message:
rdoc_options:
- --charset=UTF-8
require_paths:
- lib
required_ruby_version: !ruby/object:Gem::Requirement
  requirements:
  - - ">="
    - !ruby/object:Gem::Version
      version: "0"
  version:
required_rubygems_version: !ruby/object:Gem::Requirement
  requirements:
  - - ">="
    - !ruby/object:Gem::Version
      version: "0"
  version:
requirements: []

rubyforge_project:
rubygems_version: 1.2.0
signing_key:
specification_version: 2
summary: Rack Middleware to impose a rate limit on a web service (aka API Throttling)
test_files:
- test/test_api_throttling.rb
- test/test_api_throttling_memcache.rb
- test/test_handlers.rb
- test/test_helper.rb