prop 1.2.0 → 2.0.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA1:
- metadata.gz: 2601b63983949b53ff9f31664203316614466b74
- data.tar.gz: dc5deeebea017087d28a378e83783e3d7a5001ae
+ metadata.gz: c346d4ae613e60059428af7761f47af474a02c63
+ data.tar.gz: 5eadca75e381659c036eaf348ff24181148d127b
  SHA512:
- metadata.gz: fcd418b8e8bd9c96bb84ad902db628ce7c792a3ff3324b52c682ed989ad951595aa35564f26386f742ee07f73c3af42b490cc1731614ae255b4cc2d2988608eb
- data.tar.gz: 43d95cc4198eacf0c3fe20182788085c778a7d20d67ffeb97c1875f3f8488275d2ea2e0aa8e997e82ffffb2aab56a7c37bbab619fb8ca930d7b3c72b94a76150
+ metadata.gz: ce9e3393a3ff39720b1083a34d96b9c3c960f0ba5b47ab1f1cc7cb131132183880a6de7b0928a9736fb93c29ea1a81bd39abc79287783b5b85633bddf36edd48
+ data.tar.gz: 1ee56b82228280ef143b2cece612d2e310da75a99685259da374b05b6fe9a39810fb6fbecd006d260e9220c1b88e172b3793919fa1ca2ce4e1e4c26c889a3abc
data/README.md CHANGED
@@ -1,30 +1,29 @@

  # Prop [![Build Status](https://travis-ci.org/zendesk/prop.png)](https://travis-ci.org/zendesk/prop)

- Prop is a simple gem for rate limiting requests of any kind. It allows you to configure hooks for registering certain actions, such that you can define thresholds, register usage and finally act on exceptions once thresholds get exceeded.
+ A gem to rate limit requests/actions of any kind.<br/>
+ Define thresholds, register usage and finally act on exceptions once thresholds get exceeded.

  Prop supports two limiting strategies:

- * Basic strategy (default): Prop will use an interval to define a window of time using simple div arithmetic. This means that it's a worst-case throttle that will allow up to two times the specified requests within the specified interval.
- * Leaky bucket strategy: Prop also supports the [Leaky Bucket](https://en.wikipedia.org/wiki/Leaky_bucket) algorithm, which is similar to the basic strategy but also supports bursts up to a specified threshold.
+ * Basic strategy (default): Prop will use an interval to define a window of time using simple div arithmetic.
+ This means that it's a worst-case throttle that will allow up to two times the specified requests within the specified interval.
+ * Leaky bucket strategy: Prop also supports the [Leaky Bucket](https://en.wikipedia.org/wiki/Leaky_bucket) algorithm,
+ which is similar to the basic strategy but also supports bursts up to a specified threshold.

- To get going with Prop, you first define the read and write operations. These define how you write a registered request and how to read the number of requests for a given action. For example, do something like the below in a Rails initializer:
+ To store values, prop needs a cache:

  ```ruby
- Prop.read do |key|
- Rails.cache.read(key)
- end
-
- Prop.write do |key, value|
- Rails.cache.write(key, value)
- end
+ # config/initializers/prop.rb
+ Prop.cache = Rails.cache # needs read/write/increment methods
  ```

- You can choose to rely on whatever you'd like to use for transient storage. Prop does not do any sort of clean up of its key space, so you would have to implement that manually should you be using anything but an LRU cache like memcached.
+ Prop does not expire its used keys, so use memcached or similar, not redis.

  ## Setting a Callback

- You can define an optional callback that is invoked when a rate limit is reached. In a Rails application you could use such a handler to add notification support:
+ You can define an optional callback that is invoked when a rate limit is reached. In a Rails application you
+ could use such a handler to add notification support:

  ```ruby
  Prop.before_throttle do |handle, key, threshold, interval|
@@ -34,14 +33,12 @@ end

  ## Defining thresholds

- Once the read and write operations are defined, you can optionally define thresholds. If, for example, you want to have a threshold on accepted emails per hour from a given user, you could define a threshold and interval (in seconds) for this like so:
+ Example: Limit on accepted emails per hour from a given user, by defining a threshold and interval (in seconds):

  ```ruby
  Prop.configure(:mails_per_hour, threshold: 100, interval: 1.hour, description: "Mail rate limit exceeded")
  ```

- The `:mails_per_hour` in the above is called the "handle". You can now put the throttle to work with these values, by passing the handle to the respective methods in Prop:
-
  ```ruby
  # Throws Prop::RateLimitExceededError if the threshold/interval has been reached
  Prop.throttle!(:mails_per_hour)
@@ -52,18 +49,18 @@ Prop.throttle!(:expensive_request) { calculator.something_very_hard }
  # Returns true if the threshold/interval has been reached
  Prop.throttled?(:mails_per_hour)

- # Sets the throttle "count" to 0
+ # Sets the throttle count to 0
  Prop.reset(:mails_per_hour)

  # Returns the value of this throttle, usually a count, but see below for more
  Prop.count(:mails_per_hour)
  ```

- Prop will raise a `RuntimeError` if you attempt to operate on an undefined handle.
+ Prop will raise a `KeyError` if you attempt to operate on an undefined handle.

  ## Scoping a throttle

- In many cases you will want to tie a specific key to a defined throttle. For example, you can scope the throttling to a specific sender rather than running a global "mails per hour" throttle:
+ Example: scope the throttling to a specific sender rather than running a global "mails per hour" throttle:

  ```ruby
  Prop.throttle!(:mails_per_hour, mail.from)
@@ -72,7 +69,7 @@ Prop.reset(:mails_per_hour, mail.from)
  Prop.query(:mails_per_hour, mail.from)
  ```

- The throttle scope can also be an array of values, e.g.:
+ The throttle scope can also be an array of values:

  ```ruby
  Prop.throttle!(:mails_per_hour, [ account.id, mail.from ])
@@ -80,7 +77,10 @@ Prop.throttle!(:mails_per_hour, [ account.id, mail.from ])

  ## Error handling

- If the throttle! method gets called more than "threshold" times within "interval in seconds" for a given handle and key combination, Prop throws a `Prop::RateLimited` error which is a subclass of `StandardError`. This exception contains a "handle" reference and a "description" if specified during the configuration. The handle allows you to rescue `Prop::RateLimited` and differentiate action depending on the handle. For example, in Rails you can use this in e.g. `ApplicationController`:
+ If the threshold for a given handle and key combination is exceeded, Prop throws a `Prop::RateLimited`.
+ This exception contains a "handle" reference and a "description" if specified during the configuration.
+ The handle allows you to rescue `Prop::RateLimited` and differentiate action depending on the handle.
+ For example, in Rails you can use this in e.g. `ApplicationController`:

  ```ruby
  rescue_from Prop::RateLimited do |e|
@@ -94,15 +94,22 @@ end

  ### Using the Middleware

- Prop ships with a built-in Rack middleware that you can use to do all the exception handling. When a `Prop::RateLimited` error is caught, it will build an HTTP [429 Too Many Requests](http://tools.ietf.org/html/draft-nottingham-http-new-status-02#section-4) response and set the following headers:
+ Prop ships with a built-in Rack middleware that you can use to do all the exception handling.
+ When a `Prop::RateLimited` error is caught, it will build an HTTP
+ [429 Too Many Requests](http://tools.ietf.org/html/draft-nottingham-http-new-status-02#section-4)
+ response and set the following headers:

  Retry-After: 32
  Content-Type: text/plain
  Content-Length: 72

- Where `Retry-After` is the number of seconds the client has to wait before retrying this end point. The body of this response is whatever description Prop has configured for the throttle that got violated, or a default string if there's none configured.
+ Where `Retry-After` is the number of seconds the client has to wait before retrying this end point.
+ The body of this response is whatever description Prop has configured for the throttle that got violated,
+ or a default string if there's none configured.

- If you wish to do manual error messaging in these cases, you can define an error handler in your Prop configuration. Here's how the default error handler looks - you use anything that responds to `.call` and takes the environment and a `RateLimited` instance as argument:
+ If you wish to do manual error messaging in these cases, you can define an error handler in your Prop configuration.
+ Here's how the default error handler looks - you use anything that responds to `.call` and
+ takes the environment and a `RateLimited` instance as argument:

  ```ruby
  error_handler = Proc.new do |env, error|
@@ -112,7 +119,7 @@ error_handler = Proc.new do |env, error|
  [ 429, headers, [ body ]]
  end

- ActionController::Dispatcher.middleware.insert_before(ActionController::ParamsParser, :error_handler => error_handler)
+ ActionController::Dispatcher.middleware.insert_before(ActionController::ParamsParser, error_handler: error_handler)
  ```

  An alternative to this, is to extend `Prop::Middleware` and override the `render_response(env, error)` method.
@@ -127,22 +134,23 @@ Prop.disabled do
  end
  ```

- ## Threshold settings
+ ## Overriding threshold

  You can chose to override the threshold for a given key:

  ```ruby
- Prop.throttle!(:mails_per_hour, mail.from, :threshold => current_account.mail_throttle_threshold)
+ Prop.throttle!(:mails_per_hour, mail.from, threshold: current_account.mail_throttle_threshold)
  ```

- When the threshold are invoked without argument, the key is nil and as such a scope of its own, i.e. these are equivalent:
+ When `throttle` is invoked without argument, the key is nil and as such a scope of its own, i.e. these are equivalent:

  ```ruby
  Prop.throttle!(:mails_per_hour)
  Prop.throttle!(:mails_per_hour, nil)
  ```

- The default (and smallest possible) increment is 1, you can set that to any integer value using :increment which is handy for building time based throttles:
+ The default (and smallest possible) increment is 1, you can set that to any integer value using
+ `:increment` which is handy for building time based throttles:

  ```ruby
  Prop.configure(:execute_time, threshold: 10, interval: 1.minute)
@@ -173,18 +181,21 @@ rescue Prop::RateLimited => e
  when :auth
  raise AuthFailure
  ...
- end
+ end
  ```

  ## Using Leaky Bucket Algorithm

- You can add two additional configurations: `:strategy` and `:burst_rate` to use the [leaky bucket algorithm](https://en.wikipedia.org/wiki/Leaky_bucket). Prop will handle the details after configured, and you don't have to specify `:strategy` again when using `throttle`, `throttle!` or any other methods.
+ You can add two additional configurations: `:strategy` and `:burst_rate` to use the
+ [leaky bucket algorithm](https://en.wikipedia.org/wiki/Leaky_bucket).
+ Prop will handle the details after configured, and you don't have to specify `:strategy`
+ again when using `throttle`, `throttle!` or any other methods.

  ```ruby
  Prop.configure(:api_request, strategy: :leaky_bucket, burst_rate: 20, threshold: 5, interval: 1.minute)
  ```

- * `:threshold` value here would be the "leak rate" of leaky bucket algorithm.
+ * `:threshold` value here would be the "leak rate" of leaky bucket algorithm.


  ## License
@@ -196,4 +207,7 @@ You may obtain a copy of the License at

  http://www.apache.org/licenses/LICENSE-2.0

- Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
+ Unless required by applicable law or agreed to in writing,
+ software distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and limitations under the License.
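The README rewrite above drops the 1.x `Prop.read`/`Prop.write` block API in favour of a single cache object. A minimal migration sketch, assuming a Rails app whose `Rails.cache` is backed by memcached as the README recommends:

```ruby
# config/initializers/prop.rb

# Prop 1.x wired storage through two blocks:
#   Prop.read  { |key| Rails.cache.read(key) }
#   Prop.write { |key, value| Rails.cache.write(key, value) }
#
# Prop 2.0 takes one object that responds to read, write and increment.
# The old Prop.read/Prop.write setters now raise and point at .cache=
# (see data/lib/prop/limiter.rb below).
Prop.cache = Rails.cache
```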
data/lib/prop.rb CHANGED
@@ -7,7 +7,7 @@ module Prop
  # Short hand for accessing Prop::Limiter methods
  class << self
  extend Forwardable
- def_delegators :"Prop::Limiter", :read, :write, :configure, :configurations, :disabled, :before_throttle
+ def_delegators :"Prop::Limiter", :read, :write, :cache=, :configure, :configurations, :disabled, :before_throttle
  def_delegators :"Prop::Limiter", :throttle, :throttle!, :throttled?, :count, :query, :reset
  end
  end
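The extra `:cache=` delegator is what makes the README's `Prop.cache = ...` call work on the top-level module. A small sketch of the behaviour, using an in-memory ActiveSupport store purely as an example:

```ruby
require "prop"
require "active_support/cache"

# Forwarded to Prop::Limiter.cache=, which (see the limiter.rb hunk below)
# verifies the object responds to read, write and increment.
Prop.cache = ActiveSupport::Cache::MemoryStore.new

# An object missing any of those methods is rejected:
#   Prop.cache = Object.new  # => ArgumentError: Cache needs to respond to read
```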
data/lib/prop/interval_strategy.rb CHANGED
@@ -6,20 +6,21 @@ module Prop
  class IntervalStrategy
  class << self
  def counter(cache_key, options)
- Prop::Limiter.reader.call(cache_key).to_i
+ Prop::Limiter.cache.read(cache_key).to_i
  end

- def increment(cache_key, options, counter)
+ def increment(cache_key, options)
  increment = options.fetch(:increment, 1)
- Prop::Limiter.writer.call(cache_key, counter + increment)
+ cache = Prop::Limiter.cache
+ cache.increment(cache_key, increment) || (cache.write(cache_key, increment, raw: true) && increment) # WARNING: potential race condition
  end

  def reset(cache_key)
- Prop::Limiter.writer.call(cache_key, 0)
+ Prop::Limiter.cache.write(cache_key, 0)
  end

- def at_threshold?(counter, options)
- counter >= options.fetch(:threshold)
+ def compare_threshold?(counter, operator, options)
+ counter.send operator, options.fetch(:threshold)
  end

  # Builds the expiring cache key
@@ -37,7 +38,7 @@ module Prop
  def threshold_reached(options)
  threshold = options.fetch(:threshold)

- "#{options[:handle]} threshold of #{threshold} tries per #{options[:interval]}s exceeded for key '#{options[:key].inspect}', hash #{options[:cache_key]}"
+ "#{options[:handle]} threshold of #{threshold} tries per #{options[:interval]}s exceeded for key #{options[:key].inspect}, hash #{options[:cache_key]}"
  end

  def validate_options!(options)
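The interval strategy now leans on the cache's own `increment`, falling back to a `raw: true` write to seed the counter the first time a key is seen (store-level increments generally only operate on existing raw integer values, hence the noted race window). The window itself is still the plain div arithmetic the README describes; a rough illustration of why it is a worst-case throttle, where the key format is made up for the example (the real `build` method also hashes in the handle and key):

```ruby
interval  = 60                                # seconds
window    = Time.now.to_i / interval          # integer division picks the bucket
cache_key = "prop/interval/#{window}"         # illustrative only

# Every request in the same 60s bucket increments the same counter, so a
# burst that straddles a bucket boundary is counted against two fresh
# windows and can reach up to 2x the configured threshold.
```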
data/lib/prop/leaky_bucket_strategy.rb CHANGED
@@ -6,34 +6,31 @@ require 'prop/interval_strategy'
  module Prop
  class LeakyBucketStrategy
  class << self
- def update_bucket(cache_key, interval, leak_rate)
- bucket = Prop::Limiter.reader.call(cache_key) || default_bucket
+ def counter(cache_key, options)
+ bucket = Prop::Limiter.cache.read(cache_key) || default_bucket
  now = Time.now.to_i
- leak_amount = (now - bucket[:last_updated]) / interval * leak_rate
+ leak_amount = (now - bucket.fetch(:last_updated)) / options.fetch(:interval) * options.fetch(:threshold)

- bucket[:bucket] = [bucket[:bucket] - leak_amount, 0].max
+ bucket[:bucket] = [bucket.fetch(:bucket) - leak_amount, 0].max
  bucket[:last_updated] = now
-
- Prop::Limiter.writer.call(cache_key, bucket)
  bucket
  end

- def counter(cache_key, options)
- update_bucket(cache_key, options[:interval], options[:threshold]).merge(burst_rate: options[:burst_rate])
- end
-
- def increment(cache_key, options, counter)
- increment = options.fetch(:increment, 1)
- bucket = { :bucket => counter[:bucket].to_i + increment, :last_updated => Time.now.to_i }
- Prop::Limiter.writer.call(cache_key, bucket)
+ # WARNING: race condition
+ # this increment is not atomic, so it might miss counts when used frequently
+ def increment(cache_key, options)
+ counter = counter(cache_key, options)
+ counter[:bucket] += options.fetch(:increment, 1)
+ Prop::Limiter.cache.write(cache_key, counter)
+ counter
  end

  def reset(cache_key)
- Prop::Limiter.writer.call(cache_key, default_bucket)
+ Prop::Limiter.cache.write(cache_key, default_bucket)
  end

- def at_threshold?(counter, options)
- counter[:bucket].to_i >= options.fetch(:burst_rate)
+ def compare_threshold?(counter, operator, options)
+ counter.fetch(:bucket).to_i.send operator, options.fetch(:burst_rate)
  end

  def build(options)
@@ -45,15 +42,11 @@ module Prop
  "prop/leaky_bucket/#{Digest::MD5.hexdigest(cache_key)}"
  end

- def default_bucket
- { :bucket => 0, :last_updated => 0 }
- end
-
  def threshold_reached(options)
  burst_rate = options.fetch(:burst_rate)
  threshold = options.fetch(:threshold)

- "#{options[:handle]} threshold of #{threshold} tries per #{options[:interval]}s and burst rate #{burst_rate} tries exceeded for key '#{options[:key].inspect}', hash #{options[:cache_key]}"
+ "#{options[:handle]} threshold of #{threshold} tries per #{options[:interval]}s and burst rate #{burst_rate} tries exceeded for key #{options[:key].inspect}, hash #{options[:cache_key]}"
  end

  def validate_options!(options)
@@ -63,6 +56,12 @@ module Prop
  raise ArgumentError.new(":burst_rate must be an Integer and larger than :threshold")
  end
  end
+
+ private
+
+ def default_bucket
+ { bucket: 0, last_updated: 0 }
+ end
  end
  end
  end
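A short worked example of the leak computation in the new `counter` above, using the README's leaky-bucket settings (threshold 5, interval 60s, burst rate 20); the numbers are only illustrative:

```ruby
bucket = { bucket: 18, last_updated: Time.now.to_i - 120 }  # stored value

interval  = 60   # seconds
threshold = 5    # the "leak rate": how much drains per interval

# leak_amount = (now - last_updated) / interval * threshold
leak_amount = (Time.now.to_i - bucket.fetch(:last_updated)) / interval * threshold
# two full intervals have passed, so 2 * 5 = 10 leaks out

bucket[:bucket] = [bucket.fetch(:bucket) - leak_amount, 0].max  # => 8

# increment then adds to the bucket and compare_threshold? checks the result
# against the burst_rate of 20; note the documented race condition, since
# this read-modify-write cycle is not atomic.
```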
data/lib/prop/limiter.rb CHANGED
@@ -8,14 +8,22 @@ module Prop
  class Limiter

  class << self
- attr_accessor :handles, :reader, :writer, :before_throttle_callback
+ attr_accessor :handles, :before_throttle_callback, :cache

  def read(&blk)
- self.reader = blk
+ raise "Use .cache = "
  end

  def write(&blk)
- self.writer = blk
+ raise "Use .cache = "
+ end
+
+ def cache=(cache)
+ [:read, :write, :increment].each do |method|
+ next if cache.respond_to?(method)
+ raise ArgumentError, "Cache needs to respond to #{method}"
+ end
+ @cache = cache
  end

  def before_throttle(&blk)
@@ -25,12 +33,12 @@ module Prop
  # Public: Registers a handle for rate limiting
  #
  # handle - the name of the handle you wish to use in your code, e.g. :login_attempt
- # defaults - the settings for this handle, e.g. { :threshold => 5, :interval => 5.minutes }
+ # defaults - the settings for this handle, e.g. { threshold: 5, interval: 5.minutes }
  #
  # Raises Prop::RateLimited if the number if the threshold for this handle has been reached
  def configure(handle, defaults)
- raise RuntimeError.new("Invalid threshold setting") unless defaults[:threshold].to_i > 0
- raise RuntimeError.new("Invalid interval setting") unless defaults[:interval].to_i > 0
+ raise ArgumentError.new("Invalid threshold setting") unless defaults[:threshold].to_i > 0
+ raise ArgumentError.new("Invalid interval setting") unless defaults[:interval].to_i > 0

  self.handles ||= {}
  self.handles[handle] = defaults
@@ -55,23 +63,19 @@ module Prop
  #
  # Returns true if the threshold for this handle has been reached, else returns false
  def throttle(handle, key = nil, options = {})
- options, cache_key = prepare(handle, key, options)
- counter = @strategy.counter(cache_key, options)
+ return false if disabled?

- unless disabled?
- if @strategy.at_threshold?(counter, options)
- unless before_throttle_callback.nil?
- before_throttle_callback.call(handle, key, options[:threshold], options[:interval])
- end
-
- true
- else
- @strategy.increment(cache_key, options, counter)
+ options, cache_key = prepare(handle, key, options)
+ counter = @strategy.increment(cache_key, options)

- yield if block_given?
+ if @strategy.compare_threshold?(counter, :>, options)
+ before_throttle_callback &&
+ before_throttle_callback.call(handle, key, options[:threshold], options[:interval])

- false
- end
+ true
+ else
+ yield if block_given?
+ false
  end
  end

@@ -82,19 +86,19 @@ module Prop
  # options - request specific overrides to the defaults configured for this handle
  # (optional) a block of code that this throttle is guarding
  #
- # Raises Prop::RateLimited if the number if the threshold for this handle has been reached
+ # Raises Prop::RateLimited if the threshold for this handle has been reached
  # Returns the value of the block if given a such, otherwise the current count of the throttle
  def throttle!(handle, key = nil, options = {})
  options, cache_key = prepare(handle, key, options)

  if throttle(handle, key, options)
- raise Prop::RateLimited.new(options.merge(:cache_key => cache_key, :handle => handle))
+ raise Prop::RateLimited.new(options.merge(cache_key: cache_key, handle: handle))
  end

  block_given? ? yield : @strategy.counter(cache_key, options)
  end

- # Public: Allows to query whether the given handle/key combination is currently throttled
+ # Public: Is the given handle/key combination currently throttled ?
  #
  # handle - the throttle identifier
  # key - the associated key
@@ -103,7 +107,7 @@ module Prop
  def throttled?(handle, key = nil, options = {})
  options, cache_key = prepare(handle, key, options)
  counter = @strategy.counter(cache_key, options)
- @strategy.at_threshold?(counter, options)
+ @strategy.compare_threshold?(counter, :>=, options)
  end

  # Public: Resets a specific throttle
@@ -113,7 +117,7 @@ module Prop
  #
  # Returns nothing
  def reset(handle, key = nil, options = {})
- options, cache_key = prepare(handle, key, options)
+ _options, cache_key = prepare(handle, key, options)
  @strategy.reset(cache_key)
  end

@@ -141,14 +145,15 @@ module Prop
  end

  def prepare(handle, key, params)
- raise RuntimeError.new("No such handle configured: #{handle.inspect}") unless (handles || {}).key?(handle)
+ unless defaults = handles[handle]
+ raise KeyError.new("No such handle configured: #{handle.inspect}")
+ end

- defaults = handles[handle]
- options = Prop::Options.build(:key => key, :params => params, :defaults => defaults)
+ options = Prop::Options.build(key: key, params: params, defaults: defaults)

  @strategy = options.fetch(:strategy)

- cache_key = @strategy.build(:key => key, :handle => handle, :interval => options[:interval])
+ cache_key = @strategy.build(key: key, handle: handle, interval: options[:interval])

  [ options, cache_key ]
  end
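The limiter now fails fast with standard error classes (`ArgumentError` for bad configuration, `KeyError` for unknown handles) instead of bare `RuntimeError`s. A small sketch of what callers see, with made-up handle names:

```ruby
require "prop"

Prop.configure(:mails_per_hour, threshold: 100, interval: 3600)

begin
  # threshold and interval must be positive, otherwise ArgumentError (was RuntimeError)
  Prop.configure(:broken, threshold: 0, interval: 3600)
rescue ArgumentError => e
  puts e.message  # "Invalid threshold setting"
end

begin
  # operating on an unconfigured handle now raises KeyError (was RuntimeError)
  Prop.throttle!(:no_such_handle)
rescue KeyError => e
  puts e.message  # "No such handle configured: :no_such_handle"
end
```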
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: prop
  version: !ruby/object:Gem::Version
- version: 1.2.0
+ version: 2.0.0
  platform: ruby
  authors:
  - Morten Primdahl
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2015-07-09 00:00:00.000000000 Z
+ date: 2015-10-27 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
  name: rake
@@ -25,7 +25,7 @@ dependencies:
  - !ruby/object:Gem::Version
  version: '0'
  - !ruby/object:Gem::Dependency
- name: bundler
+ name: maxitest
  requirement: !ruby/object:Gem::Requirement
  requirements:
  - - ">="
@@ -39,7 +39,7 @@ dependencies:
  - !ruby/object:Gem::Version
  version: '0'
  - !ruby/object:Gem::Dependency
- name: minitest
+ name: mocha
  requirement: !ruby/object:Gem::Requirement
  requirements:
  - - ">="
@@ -53,7 +53,21 @@ dependencies:
  - !ruby/object:Gem::Version
  version: '0'
  - !ruby/object:Gem::Dependency
- name: mocha
+ name: activesupport
+ requirement: !ruby/object:Gem::Requirement
+ requirements:
+ - - ">="
+ - !ruby/object:Gem::Version
+ version: '0'
+ type: :development
+ prerelease: false
+ version_requirements: !ruby/object:Gem::Requirement
+ requirements:
+ - - ">="
+ - !ruby/object:Gem::Version
+ version: '0'
+ - !ruby/object:Gem::Dependency
+ name: bump
  requirement: !ruby/object:Gem::Requirement
  requirements:
  - - ">="
@@ -102,7 +116,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
  version: '0'
  requirements: []
  rubyforge_project:
- rubygems_version: 2.4.7
+ rubygems_version: 2.4.5.1
  signing_key:
  specification_version: 4
  summary: Gem for implementing rate limits.