rack-timeout 0.6.3 → 0.7.0
- checksums.yaml +4 -4
- data/CHANGELOG.md +7 -1
- data/Gemfile +2 -0
- data/README.md +17 -2
- data/doc/risks.md +7 -3
- data/doc/settings.md +15 -3
- data/lib/rack/timeout/core.rb +20 -10
- data/lib/rack/timeout/logging-observer.rb +1 -1
- data/lib/rack/timeout/support/scheduler.rb +0 -1
- data/lib/rack-timeout.rb +1 -1
- data/test/basic_test.rb +7 -0
- data/test/env_settings_test.rb +23 -11
- data/test/test_helper.rb +9 -1
- metadata +5 -5
checksums.yaml CHANGED

````diff
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 3476565723b333fd6d528af741737267ba9e90cc9805d7081597acb42e0da4db
+  data.tar.gz: 7c2f2374184d968a317b3f9987df6a1056d734627f185b42833b760476eb56d5
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 76a2e8a80daa99c11160e306166c75b55775fa32806e6f20e133813584af5287812b00a9c4f6111ad942986e5dad86fb595223403b856a583bb83f96f56d0a4e
+  data.tar.gz: 70b21edae835125d5dddfda576791a0db7bb990ec22152948ba1b6f39aae734ef0fdc6d0806dea9767b7788eb2f77bd256c8b36bb484932fa5b36610502f9ebf
````
data/CHANGELOG.md CHANGED

````diff
@@ -1,4 +1,10 @@
-##
+## 0.7.0
+
+- Honor an `X-Request-Start` header with the `t=<microseconds>` format, to allow using `wait_timeout` functionality with Apache (https://github.com/zombocom/rack-timeout/pull/210)
+- Improve message when Terminate on Timeout is used on a platform that does not support it (eg. Windows or JVM) (https://github.com/zombocom/rack-timeout/pull/192)
+- Fix a thread safety issue for forks that are not on the main thread (https://github.com/zombocom/rack-timeout/pull/212)
+- Add compatibility with frozen_string_literal: true (https://github.com/zombocom/rack-timeout/pull/196)
+- Fix if Rails is defined but Rails::VERSION is not defined (https://github.com/zombocom/rack-timeout/pull/191)
 
 ## 0.6.3
 
````
data/Gemfile CHANGED
data/README.md CHANGED

````diff
@@ -84,7 +84,7 @@ service_past_wait: false # RACK_TIMEOUT_SERVICE_PAST_WAIT
 term_on_timeout: false # RACK_TIMEOUT_TERM_ON_TIMEOUT
 ```
 
-These settings can be
+These settings can be overridden during middleware initialization or
 environment variables `RACK_TIMEOUT_*` mentioned above. Middleware
 parameters take precedence:
 
@@ -104,8 +104,23 @@ Please see the [doc](doc) folder for further documentation on:
 * [Exceptions raised by Rack::Timeout](doc/exceptions.md)
 * [Rollbar fingerprinting](doc/rollbar.md)
 * [Observers](doc/observers.md)
+* [Settings](doc/settings.md)
 * [Logging](doc/logging.md)
 
+
+Additionally there is a [demo app](https://github.com/zombocom/rack_timeout_demos)
+that shows the impact of changing settings and how the library behaves
+when a timeout is hit.
+
+Contributing
+------------
+
+Run the test suite:
+
+```console
+bundle
+bundle exec rake test
+```
+
 Compatibility
 -------------
 
@@ -115,4 +130,4 @@ for Rails apps, Rails 3.x and up.
 
 ---
 Copyright © 2010-2020 Caio Chassot, released under the MIT license
-<http://github.com/
+<http://github.com/zombocom/rack-timeout>
````
data/doc/risks.md CHANGED

````diff
@@ -5,7 +5,7 @@ Risks and shortcomings of using Rack::Timeout
 
 Sometimes a request is taking too long to complete because it's blocked waiting on synchronous IO. Such IO does not need to be file operations, it could be, say, network or database operations. If said IO is happening in a C library that's unaware of ruby's interrupt system (i.e. anything written without ruby in mind), calling `Thread#raise` (that's what rack-timeout uses) will not have effect until after the IO block is gone.
 
-
+As a fail-safe against these cases, a blunter solution that kills the entire process is recommended, such as unicorn's timeouts. You can enable this process killing behavior by enabling `term_on_timeout` for more info see [setting][term-on-timeout].
 
 More detailed explanations of the issues surrounding timing out in ruby during IO blocks can be found at:
 
@@ -15,14 +15,16 @@ More detailed explanations of the issues surrounding timing out in ruby during I
 
 Raising mid-flight in stateful applications is inherently unsafe. A request can be aborted at any moment in the code flow, and the application can be left in an inconsistent state. There's little way rack-timeout could be aware of ongoing state changes. Applications that rely on a set of globals (like class variables) or any other state that lives beyond a single request may find those left in an unexpected/inconsistent state after an aborted request. Some cleanup code might not have run, or only half of a set of related changes may have been applied.
 
-A lot more can go wrong. An intricate explanation of the issue by JRuby's Charles Nutter can be found [
+A lot more can go wrong. An intricate explanation of the issue by JRuby's Charles Nutter can be found [
+Ruby's Thread#raise, Thread#kill, timeout.rb, and net/protocol.rb libraries are broken][broken-timeout]. In addition Richard Schneeman talked about this issue in [The Oldest Bug In Ruby - Why Rack::Timeout Might Hose your Server][oldest-bug]. One solution from having `rack-timeout` corrupt process state is to restart the entire process on timeout. You can enable this behavior by setting [term_on_timeout][term-on-timeout].
 
-Ruby 2.1 provides a way to defer the result of raising exceptions through the [Thread.handle_interrupt][handle-interrupt] method. This could be used in critical areas of your application code to prevent Rack::Timeout from accidentally wreaking havoc by raising just in the wrong moment. That said, `handle_interrupt` and threads in general are hard to reason about, and detecting all cases where it would be needed in an application is a tall order, and the added code complexity is probably not worth the trouble.
+Ruby 2.1+ provides a way to defer the result of raising exceptions through the [Thread.handle_interrupt][handle-interrupt] method. This low level interface is meant more for library authors than higher level application developers. This interface could be used in critical areas of your application code to prevent Rack::Timeout from accidentally wreaking havoc by raising just in the wrong moment. That said, `handle_interrupt` and threads in general are hard to reason about, and detecting all cases where it would be needed in an application is a tall order, and the added code complexity is probably not worth the trouble.
 
 Your time is better spent ensuring requests run fast and don't need to timeout.
 
 That said, it's something to be aware of, and may explain some eerie wonkiness seen in logs.
 
+[oldest-bug]: https://www.schneems.com/2017/02/21/the-oldest-bug-in-ruby-why-racktimeout-might-hose-your-server/
 [broken-timeout]: http://headius.blogspot.de/2008/02/rubys-threadraise-threadkill-timeoutrb.html
 [handle-interrupt]: http://www.ruby-doc.org/core-2.1.3/Thread.html#method-c-handle_interrupt
 
@@ -33,3 +35,5 @@ Because of the aforementioned issues, it's recommended you set library-specific
 You'll want to set all relevant timeouts to something lower than Rack::Timeout's `service_timeout`. Generally you want them to be at least 1s lower, so as to account for time spent elsewhere during the request's lifetime while still giving libraries a chance to time out before Rack::Timeout.
 
 [ruby-timeouts]: https://github.com/ankane/the-ultimate-guide-to-ruby-timeouts
+[term-on-timeout]: https://github.com/zombocom/rack-timeout/blob/main/doc/settings.md#term-on-timeout
+
````
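The `Thread.handle_interrupt` technique mentioned in the risks doc above can be hard to picture; here is a minimal sketch (ours, not part of the gem; the method and object names are illustrative) of deferring Rack::Timeout's mid-flight raise around a critical section:

```ruby
# Illustrative only: Rack::Timeout interrupts a request via Thread#raise,
# so masking interrupts keeps these two operations atomic with respect
# to the timeout.
def transfer_funds(from, to, amount)
  Thread.handle_interrupt(Exception => :never) do
    from.withdraw(amount) # a timeout firing here is held back...
    to.deposit(amount)    # ...so both steps complete together
  end
  # Any interrupt that arrived during the block is delivered here.
end
```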
data/doc/settings.md CHANGED

````diff
@@ -3,6 +3,9 @@
 Rack::Timeout has 4 settings, each of which impacts when Rack::Timeout
 will raise an exception, and which type of exception will be raised.
 
+
+Additionally there is a [demo app](https://github.com/zombocom/rack_timeout_demos) that shows the impact of changing settings and how the library behaves when a timeout is hit.
+
 ### Service Timeout
 
 `service_timeout` is the most important setting.
@@ -26,9 +29,18 @@ Wait timeout can be disabled entirely by setting the property to `0` or `false`.
 
 A request's computed wait time may affect the service timeout used for it. Basically, a request's wait time plus service time may not exceed the wait timeout. The reasoning for that is based on Heroku router's behavior, that the request would be dropped anyway after the wait timeout. So, for example, with the default settings of `service_timeout=15`, `wait_timeout=30`, a request that had 20 seconds of wait time will not have a service timeout of 15, but instead of 10, as there are only 10 seconds left before `wait_timeout` is reached. This behavior can be disabled by setting `service_past_wait` to `true`. When set, the `service_timeout` setting will always be honored. Please note that if you're using the `RACK_TIMEOUT_SERVICE_PAST_WAIT` environment variable, any value different than `"false"` will be considered `true`.
 
-The way we're able to infer a request's start time, and from that its wait time, is through the availability of the `X-Request-Start` HTTP header, which is expected to contain the time since epoch in milliseconds
+The way we're able to infer a request's start time, and from that its wait time, is through the availability of the `X-Request-Start` HTTP header, which is expected to contain the time since UNIX epoch in milliseconds or microseconds.
+
+Compatible header string formats are:
+
+- `seconds.milliseconds`, e.g. `1700173924.763` - 10.3 digits (nginx format)
+- `t=seconds.milliseconds`, e.g. `t=1700173924.763` - 10.3 digits, nginx format with [New Relic recommended][new-relic-recommended-format] `t=` prefix
+- `milliseconds`, e.g. `1700173924763` - 13 digits (Heroku format)
+- `t=microseconds`, e.g. `t=1700173924763384` - 16 digits with `t=` prefix (Apache format)
+
+[new-relic-recommended-format]: https://docs.newrelic.com/docs/apm/applications-menu/features/request-queue-server-configuration-examples/
 
-If the `X-Request-Start` header is not present `wait_timeout` handling is skipped entirely.
+If the `X-Request-Start` header is not present, or does not match one of these formats, `wait_timeout` handling is skipped entirely.
 
 ### Wait Overtime
 
@@ -55,7 +67,7 @@ If your application timeouts fire frequently then [they can cause your applicati
 - [Ruby Application Restart Behavior](https://devcenter.heroku.com/articles/what-happens-to-ruby-apps-when-they-are-restarted)
 - [License to SIGKILL](https://www.sitepoint.com/license-to-sigkill/)
 
-**Puma SIGTERM behavior** When a Puma worker receives a `SIGTERM` it will begin to shut down, but not exit right away. It stops accepting new requests and waits for any existing requests to finish before fully shutting down. This means that only the request that experiences a timeout will be
+**Puma SIGTERM behavior** When a Puma worker receives a `SIGTERM` it will begin to shut down, but not exit right away. It stops accepting new requests and waits for any existing requests to finish before fully shutting down. This means that only the request that experiences a timeout will be interrupted, all other in-flight requests will be allowed to run until they return or also are timed out.
 
 After the worker process exists will Puma's parent process know to boot a replacement worker. While one process is restarting, another can still serve requests (if you have more than 1 worker process per server/dyno). Between when a process exits and when a new process boots, there will be a reduction in throughput. If all processes are restarting, then incoming requests will be blocked while new processes boot.
````
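For reference, the four accepted `X-Request-Start` formats documented above can be produced from a Ruby `Time` like this (a quick sketch of ours, not code from the gem):

```ruby
now = Time.now

nginx  = "%d.%03d"  % [now.tv_sec, now.tv_usec / 1000] # seconds.milliseconds, e.g. 1700173924.763
relic  = "t=#{nginx}"                                  # nginx format with the New Relic t= prefix
heroku = "%d%03d"   % [now.tv_sec, now.tv_usec / 1000] # 13-digit milliseconds
apache = "t=%d%06d" % [now.tv_sec, now.tv_usec]        # t= plus 16-digit microseconds
```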
data/lib/rack/timeout/core.rb CHANGED

````diff
@@ -73,9 +73,14 @@ module Rack
       @wait_overtime     = read_timeout_property wait_overtime, ENV.fetch("RACK_TIMEOUT_WAIT_OVERTIME", 60).to_i
       @service_past_wait = service_past_wait == "not_specified" ? ENV.fetch("RACK_TIMEOUT_SERVICE_PAST_WAIT", false).to_s != "false" : service_past_wait
 
-
-
-
+      if @term_on_timeout && !::Process.respond_to?(:fork)
+        raise(NotImplementedError, <<-MSG)
+          The platform running your application does not support forking (i.e. Windows, JVM, etc).
+
+          To avoid this error, either specify RACK_TIMEOUT_TERM_ON_TIMEOUT=0 or
+          leave it as default (which will have the same result).
+
+        MSG
       end
       @app = app
     end
@@ -97,7 +102,7 @@ module Rack
         seconds_waited = time_started_service - time_started_wait # how long it took between the web server first receiving the request and rack being able to handle it
         seconds_waited = 0 if seconds_waited < 0 # make up for potential time drift between the routing server and the application server
         final_wait_timeout = wait_timeout + effective_overtime # how long the request will be allowed to have waited
-        seconds_service_left = final_wait_timeout - seconds_waited # first calculation of service timeout (relevant if request doesn't get expired, may be
+        seconds_service_left = final_wait_timeout - seconds_waited # first calculation of service timeout (relevant if request doesn't get expired, may be overridden later)
         info.wait = seconds_waited # updating the info properties; info.timeout will be the wait timeout at this point
         info.timeout = final_wait_timeout
 
@@ -127,13 +132,14 @@ module Rack
       timeout = RT::Scheduler::Timeout.new do |app_thread| # creates a timeout instance responsible for timing out the request. the given block runs if timed out
         register_state_change.call :timed_out
 
-        message = "Request "
+        message = +"Request "
         message << "waited #{info.ms(:wait)}, then " if info.wait
         message << "ran for longer than #{info.ms(:timeout)} "
         if term_on_timeout
+          Thread.main['RACK_TIMEOUT_COUNT'] ||= 0
           Thread.main['RACK_TIMEOUT_COUNT'] += 1
 
-          if Thread.main['RACK_TIMEOUT_COUNT'] >=
+          if Thread.main['RACK_TIMEOUT_COUNT'] >= term_on_timeout
             message << ", sending SIGTERM to process #{Process.pid}"
             Process.kill("SIGTERM", Process.pid)
           else
@@ -161,9 +167,9 @@ module Rack
     # X-Request-Start contains the time the request was first seen by the server. Format varies wildly amongst servers, yay!
     # - nginx gives the time since epoch as seconds.milliseconds[1]. New Relic documentation recommends preceding it with t=[2], so might as well detect it.
     # - Heroku gives the time since epoch in milliseconds. [3]
-    # - Apache uses t=microseconds[4], so
+    # - Apache uses t=microseconds[4], so 16 digits (until November 2286).
     #
-    # The sane way to handle this would be by knowing the server being used, instead let's just hack around with regular expressions
+    # The sane way to handle this would be by knowing the server being used, instead let's just hack around with regular expressions.
     # [1]: http://nginx.org/en/docs/http/ngx_http_log_module.html#var_msec
     # [2]: https://docs.newrelic.com/docs/apm/other-features/request-queueing/request-queue-server-configuration-examples#nginx
     # [3]: https://devcenter.heroku.com/articles/http-routing#heroku-headers
@@ -172,11 +178,15 @@ module Rack
     # This is a code extraction for readability, this method is only called from a single point.
     RX_NGINX_X_REQUEST_START  = /^(?:t=)?(\d+)\.(\d{3})$/
     RX_HEROKU_X_REQUEST_START = /^(\d+)$/
+    RX_APACHE_X_REQUEST_START = /^t=(\d{16})$/
    HTTP_X_REQUEST_START = "HTTP_X_REQUEST_START".freeze
    def self._read_x_request_start(env)
      return unless s = env[HTTP_X_REQUEST_START]
-
-
+      if m = s.match(RX_HEROKU_X_REQUEST_START) || s.match(RX_NGINX_X_REQUEST_START)
+        Time.at(m[1,2].join.to_f / 1000)
+      elsif m = s.match(RX_APACHE_X_REQUEST_START)
+        Time.at(m[1].to_f / 1_000_000)
+      end
     end
 
     # This method determines if a body is present. requests with a body (generally POST, PUT) can have a lengthy body which may have taken a while to be received by the web server, inflating their computed wait time. This in turn could lead to unwanted expirations. See wait_overtime property as a way to overcome those.
````

data/lib/rack/timeout/logging-observer.rb CHANGED

````diff
@@ -43,7 +43,7 @@ class Rack::Timeout::StateChangeLoggingObserver
     info  = env[::Rack::Timeout::ENV_INFO_KEY]
     level = STATE_LOG_LEVEL[info.state]
     logger(env).send(level) do
-      s = "source=rack-timeout"
+      s = +"source=rack-timeout"
       s << " id="      << info.id           if info.id
       s << " wait="    << info.ms(:wait)    if info.wait
       s << " timeout=" << info.ms(:timeout) if info.timeout
````
data/lib/rack-timeout.rb CHANGED

````diff
@@ -1,2 +1,2 @@
 require_relative "rack/timeout/base"
-require_relative "rack/timeout/rails" if defined?(Rails) && Rails::VERSION::MAJOR >= 3
+require_relative "rack/timeout/rails" if defined?(Rails) && Rails.const_defined?(:VERSION) && Rails::VERSION::MAJOR >= 3
````
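The added `Rails.const_defined?(:VERSION)` check guards against the case from the changelog where a `Rails` constant exists but `Rails::VERSION` does not. A contrived sketch (ours) of why the old guard broke:

```ruby
# Some unrelated library defines a bare Rails constant, without VERSION.
module Rails; end

defined?(Rails)                 # => "constant": the old guard passed here,
                                #    then Rails::VERSION::MAJOR raised NameError
Rails.const_defined?(:VERSION)  # => false: the new guard short-circuits instead
```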
data/test/basic_test.rb CHANGED

````diff
@@ -20,4 +20,11 @@ class BasicTest < RackTimeoutTest
       get "/", "", 'HTTP_X_REQUEST_START' => time_in_msec(Time.now - 100)
     end
   end
+
+  def test_apache_formatted_header_wait_timeout
+    self.settings = { service_timeout: 1, wait_timeout: 15 }
+    assert_raises(Rack::Timeout::RequestExpiryError) do
+      get "/", "", 'HTTP_X_REQUEST_START' => "t=#{time_in_usec(Time.now - 100)}"
+    end
+  end
 end
````
data/test/env_settings_test.rb CHANGED

````diff
@@ -2,13 +2,6 @@ require 'test_helper'
 
 class EnvSettingsTest < RackTimeoutTest
 
-  def test_service_timeout
-    with_env(RACK_TIMEOUT_SERVICE_TIMEOUT: 1) do
-      assert_raises(Rack::Timeout::RequestTimeoutError) do
-        get "/sleep"
-      end
-    end
-  end
 
   def test_zero_wait_timeout
     with_env(RACK_TIMEOUT_WAIT_TIMEOUT: 0) do
@@ -17,10 +10,29 @@ class EnvSettingsTest < RackTimeoutTest
     end
   end
 
-
-
-
-
+
+  if Process.respond_to?(:fork) # This functionality does not work on windows, so we cannot test it there.
+    def test_service_timeout
+      with_env(RACK_TIMEOUT_SERVICE_TIMEOUT: 1) do
+        assert_raises(Rack::Timeout::RequestTimeoutError) do
+          get "/sleep"
+        end
+      end
+    end
+
+    def test_term
+      with_env(RACK_TIMEOUT_TERM_ON_TIMEOUT: 1) do
+        assert_raises(SignalException) do
+          get "/sleep"
+        end
+      end
+    end
+  else
+    def test_service_timeout # Confirm that on Windows we raise an exception when someone attempts to use term on timeout
+      with_env(RACK_TIMEOUT_TERM_ON_TIMEOUT: 1) do
+        assert_raises(NotImplementedError) do
+          get "/"
+        end
       end
     end
   end
````
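The `Process.respond_to?(:fork)` guard used in these tests is the same check core.rb performs before allowing `term_on_timeout`; a quick sketch (ours) of what it reports per platform:

```ruby
# MRI on Linux/macOS defines Process.fork, so respond_to? is true there;
# on Windows and JRuby fork is unimplemented and respond_to? is false.
if Process.respond_to?(:fork)
  puts "fork available: term_on_timeout may SIGTERM the worker"
else
  puts "no fork: enabling term_on_timeout raises NotImplementedError"
end
```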
data/test/test_helper.rb CHANGED

````diff
@@ -1,5 +1,8 @@
 require "test/unit"
+require "rack"
 require "rack/test"
+require "rack/builder"
+require "rack/null_logger"
 require "rack-timeout"
 
 class RackTimeoutTest < Test::Unit::TestCase
@@ -27,7 +30,7 @@ class RackTimeoutTest < Test::Unit::TestCase
     end
   end
 
-  # runs the test with the given environment, but
+  # runs the test with the given environment, but doesn't restore the original
   # environment afterwards. This should be sufficient for rack-timeout testing.
   def with_env(hash)
     hash.each_pair do |k, v|
@@ -42,4 +45,9 @@ class RackTimeoutTest < Test::Unit::TestCase
   def time_in_msec(t = Time.now)
     "#{t.tv_sec}#{t.tv_usec/1000}"
   end
+
+  def time_in_usec(t = Time.now)
+    # time in microseconds, currently 16 digits
+    "%d%06d" % [t.tv_sec, t.tv_usec]
+  end
 end
````
metadata CHANGED

````diff
@@ -1,14 +1,14 @@
 --- !ruby/object:Gem::Specification
 name: rack-timeout
 version: !ruby/object:Gem::Version
-  version: 0.6.3
+  version: 0.7.0
 platform: ruby
 authors:
 - Caio Chassot
 autorequire:
 bindir: bin
 cert_chain: []
-date:
+date: 2024-05-20 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: rake
@@ -91,8 +91,8 @@ licenses:
 - MIT
 metadata:
   bug_tracker_uri: https://github.com/zombocom/rack-timeout/issues
-  changelog_uri: https://github.com/zombocom/rack-timeout/blob/v0.6.3/CHANGELOG.md
-  documentation_uri: https://rubydoc.info/gems/rack-timeout/0.6.3/
+  changelog_uri: https://github.com/zombocom/rack-timeout/blob/v0.7.0/CHANGELOG.md
+  documentation_uri: https://rubydoc.info/gems/rack-timeout/0.7.0/
   source_code_uri: https://github.com/zombocom/rack-timeout
 post_install_message:
 rdoc_options: []
@@ -109,7 +109,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
   - !ruby/object:Gem::Version
     version: '0'
 requirements: []
-rubygems_version: 3.
+rubygems_version: 3.4.18
 signing_key:
 specification_version: 4
 summary: Abort requests that are taking too long
````