rack-timeout 0.5.2 → 0.6.0
- checksums.yaml +5 -5
- data/CHANGELOG.md +4 -0
- data/README.md +5 -4
- data/doc/risks.md +0 -1
- data/doc/settings.md +52 -0
- data/lib/rack/timeout/core.rb +33 -6
- data/lib/rack/timeout/logger.rb +0 -1
- data/lib/rack/timeout/logging-observer.rb +1 -1
- data/lib/rack/timeout/support/monotonic_time.rb +0 -1
- data/lib/rack/timeout/support/scheduler.rb +0 -1
- data/lib/rack/timeout/support/timeout.rb +0 -1
- data/test/env_settings_test.rb +7 -0
- data/test/test_helper.rb +0 -1
- metadata +5 -6
checksums.yaml
CHANGED
@@ -1,7 +1,7 @@
 ---
-SHA256:
-  metadata.gz:
-  data.tar.gz:
+SHA256:
+  metadata.gz: 45a8b583f5c8ec73b0659348e53083fd449d1ae732c020c45ab3decfd4d7c913
+  data.tar.gz: 832b443cc5678f0c55df7a8c741dc2f5304e024da77f021ecbaa352c03279e51
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 2279854e2ca96bc0fa0c9e6fe4a67a4d217f79df5840b59cf36b97518ddab7823a134c15fe1e527ce1a24972ee9113825bb58d52520accdcb7dc2a3d8147cb16
+  data.tar.gz: 00651f0c2e2449d490e88db4cf7d899f1287f22bc531fc4c22270024790635b552119572847de65a1933a5b7e4e6514c530a33b0689c02d6aeb34e73de392245
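For context, these values are the digests of the `metadata.gz` and `data.tar.gz` members inside the `.gem` archive. A minimal sketch for reproducing the SHA256 lines, assuming the gem has been fetched with `gem fetch rack-timeout -v 0.6.0` and unpacked with `tar -xf rack-timeout-0.6.0.gem`:

```ruby
# Recompute the SHA256 digests that checksums.yaml records for the two
# archive members extracted from rack-timeout-0.6.0.gem.
require "digest"

%w[metadata.gz data.tar.gz].each do |member|
  puts "#{member}: #{Digest::SHA256.file(member).hexdigest}"
end
```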
data/CHANGELOG.md
CHANGED
data/README.md
CHANGED
@@ -47,7 +47,7 @@ stack `Rack::Timeout` gets inserted.
 
 ```ruby
 # Gemfile
-gem "rack-timeout", require:"rack/timeout/base"
+gem "rack-timeout", require: "rack/timeout/base"
 ```
 
 ```ruby
@@ -55,7 +55,7 @@ gem "rack-timeout", require:"rack/timeout/base"
 
 # insert middleware wherever you want in the stack, optionally pass
 # initialization arguments, or use environment variables
-Rails.application.config.middleware.insert_before Rack::Runtime, Rack::Timeout, service_timeout:
+Rails.application.config.middleware.insert_before Rack::Runtime, Rack::Timeout, service_timeout: 15
 ```
 
 ### Sinatra and other Rack apps
@@ -67,7 +67,7 @@ require "rack-timeout"
 
 # Call as early as possible so rack-timeout runs before all other middleware.
 # Setting service_timeout or `RACK_TIMEOUT_SERVICE_TIMEOUT` environment
 # variable is recommended. If omitted, defaults to 15 seconds.
-use Rack::Timeout, service_timeout:
+use Rack::Timeout, service_timeout: 15
 ```
 
 Configuring
@@ -81,6 +81,7 @@ service_timeout: 15 # RACK_TIMEOUT_SERVICE_TIMEOUT
 wait_timeout: 30 # RACK_TIMEOUT_WAIT_TIMEOUT
 wait_overtime: 60 # RACK_TIMEOUT_WAIT_OVERTIME
 service_past_wait: false # RACK_TIMEOUT_SERVICE_PAST_WAIT
+term_on_timeout: false # RACK_TIMEOUT_TERM_ON_TIMEOUT
 ```
 
 These settings can be overridden during middleware initialization or
@@ -88,7 +89,7 @@ environment variables `RACK_TIMEOUT_*` mentioned above. Middleware
 parameters take precedence:
 
 ```ruby
-use Rack::Timeout, service_timeout:
+use Rack::Timeout, service_timeout: 15, wait_timeout: 30
 ```
 
 For more on these settings, please see [doc/settings](doc/settings.md).
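Putting the corrected Rails snippets above together, a sketch of a complete setup (the initializer path is illustrative, and the values shown are the documented defaults, not requirements):

```ruby
# Gemfile
gem "rack-timeout", require: "rack/timeout/base"

# config/initializers/rack_timeout.rb
# Middleware parameters take precedence over the RACK_TIMEOUT_* env vars.
Rails.application.config.middleware.insert_before(
  Rack::Runtime, Rack::Timeout,
  service_timeout: 15, # seconds allowed to service the request
  wait_timeout: 30     # seconds the request may wait in the queue first
)
```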
data/doc/risks.md
CHANGED
@@ -26,7 +26,6 @@ That said, it's something to be aware of, and may explain some eerie wonkiness s
 [broken-timeout]: http://headius.blogspot.de/2008/02/rubys-threadraise-threadkill-timeoutrb.html
 [handle-interrupt]: http://www.ruby-doc.org/core-2.1.3/Thread.html#method-c-handle_interrupt
 
-
 ### Time Out Early and Often
 
 Because of the aforementioned issues, it's recommended you set library-specific timeouts and leave Rack::Timeout as a last resort measure. Library timeouts will generally take care of IO issues and abort the operation safely. See [The Ultimate Guide to Ruby Timeouts][ruby-timeouts].
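To make "library timeouts first" concrete, here is a minimal sketch (host and values are illustrative) of setting timeouts at the IO-library level, where the blocked operation is aborted safely:

```ruby
# Library-level timeouts abort the blocking IO cleanly, so Rack::Timeout
# only has to act as a last-resort backstop.
require "net/http"

http = Net::HTTP.new("example.com", 443)
http.use_ssl      = true
http.open_timeout = 3 # seconds to wait when opening the connection
http.read_timeout = 5 # seconds to wait for each read to return
response = http.request(Net::HTTP::Get.new("/"))
```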
data/doc/settings.md
CHANGED
@@ -47,3 +47,55 @@ This extra time is called *wait overtime* and can be set via `wait_overtime`. It
 Keep in mind that Heroku [recommends][uploads] uploading large files directly to S3, so as to prevent the dyno from being blocked for too long and hence unable to handle further incoming requests.
 
 [uploads]: https://devcenter.heroku.com/articles/s3#file-uploads
+
+### Term on Timeout
+
+If your application timeouts fire frequently, then [they can cause your application to enter a corrupt state](https://www.schneems.com/2017/02/21/the-oldest-bug-in-ruby-why-racktimeout-might-hose-your-server/). One option for resetting that bad state is to restart the entire process. If you are running in an environment with multiple processes (such as `puma -w 2`), then when a process is sent a `SIGTERM` it will exit, and the webserver knows how to restart the process. For more information on process restart behavior see:
+
+- [Ruby Application Restart Behavior](https://devcenter.heroku.com/articles/what-happens-to-ruby-apps-when-they-are-restarted)
+- [License to SIGKILL](https://www.sitepoint.com/license-to-sigkill/)
+
+**Puma SIGTERM behavior** When a Puma worker receives a `SIGTERM` it begins to shut down, but does not exit right away. It stops accepting new requests and waits for any existing requests to finish before fully shutting down. This means that only the request that experienced the timeout is interrupted; all other in-flight requests are allowed to run until they return or also time out.
+
+After the worker process exits, Puma's parent process knows to boot a replacement worker. While one process is restarting, another can still serve requests (if you have more than 1 worker process per server/dyno). Between when a process exits and when a new process boots, there will be a reduction in throughput. If all processes are restarting, then incoming requests will be blocked while new processes boot.
+
+**How to enable** To enable this behavior, set `term_on_timeout` to an integer value. If you set it to `1`, then the first time the process encounters a timeout, it will receive a SIGTERM.
+
+To enable on Heroku run:
+
+```
+$ heroku config:set RACK_TIMEOUT_TERM_ON_TIMEOUT=1
+```
+
+**Caution** If you use this setting inside a webserver without enabling multi-process mode, it will exit the entire server when it fires:
+
+- ✅ `puma -w 2 -t 5` This is OKAY
+- ❌ `puma -t 5` This is NOT OKAY
+
+If you're using a `config/puma.rb` file, make sure you are calling the `workers` configuration DSL. You should see multiple workers when the server boots:
+
+```
+[3922] Puma starting in cluster mode...
+[3922] * Version 4.3.0 (ruby 2.6.5-p114), codename: Mysterious Traveller
+[3922] * Min threads: 0, max threads: 16
+[3922] * Environment: development
+[3922] * Process workers: 2
+[3922] * Phased restart available
+[3922] * Listening on tcp://0.0.0.0:9292
+[3922] Use Ctrl-C to stop
+[3922] - Worker 0 (pid: 3924) booted, phase: 0
+[3922] - Worker 1 (pid: 3925) booted, phase: 0
+```
+
+> ✅ Notice how it says it is booting in "cluster mode" and how it gives PIDs for two worker processes at the bottom.
+
+**How to decide the term_on_timeout value** If you set it to a higher value such as `5`, then rack-timeout will wait until the process has experienced five timeouts before restarting it. Setting this value to a higher number means the application restarts processes less frequently, so throughput will be less impacted. If you set it too high, however, the underlying issue of the application being put into a bad state will not be effectively mitigated.
+
+**How do I know when a process is being restarted by rack-timeout?** This error message should be visible in the logs:
+
+```
+Request ran for longer than 1000ms, sending SIGTERM to process 3925
+```
+
+> Note: Since the worker waits for all in-flight requests to finish (with puma), you may see multiple SIGTERMs to the same PID before it exits; this means that multiple requests timed out.
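Because `term_on_timeout` is only safe in cluster mode, a `config/puma.rb` along these lines (worker and thread counts are illustrative) satisfies the caution above:

```ruby
# config/puma.rb -- boot in cluster mode so a worker that SIGTERMs itself
# is replaced by the parent process instead of taking the whole server down.
workers Integer(ENV.fetch("WEB_CONCURRENCY", 2))
threads 5, 5
preload_app!
```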
data/lib/rack/timeout/core.rb
CHANGED
@@ -30,6 +30,7 @@ module Rack
       :service, # time rack spent processing the request (updated ~ every second)
       :timeout, # the actual computed timeout to be used for this request
       :state,   # the request's current state, see VALID_STATES below
+      :term,
     ) {
       def ms(k) # helper method used for formatting values in milliseconds
         "%.fms" % (self[k] * 1000) if self[k]
@@ -52,6 +53,8 @@ module Rack
       when nil   ; read_timeout_property default, default
       when false ; false
       when 0     ; false
+      when String
+        read_timeout_property value.to_i, default
       else
         value.is_a?(Numeric) && value > 0 or raise ArgumentError, "value #{value.inspect} should be false, zero, or a positive number."
         value
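The practical effect of the new `String` branch is that settings arriving through `ENV` now coerce the same way as literal values. A standalone restatement of the rules, inferred from the case statement above (this is a sketch, not the gem's own code):

```ruby
# Simplified restatement of read_timeout_property's coercion rules.
def read_timeout_property(value, default)
  case value
  when nil    then read_timeout_property(default, default)
  when false  then false
  when 0      then false
  when String then read_timeout_property(value.to_i, default) # new in 0.6.0
  else
    value.is_a?(Numeric) && value > 0 or
      raise ArgumentError, "value #{value.inspect} should be false, zero, or a positive number."
    value
  end
end

read_timeout_property("30", 15) # => 30    (env var strings now work)
read_timeout_property("0",  15) # => false (zero disables the feature)
read_timeout_property(nil,  15) # => 15    (unset falls back to the default)
```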
@@ -62,13 +65,21 @@ module Rack
       :service_timeout,   # How long the application can take to complete handling the request once it's passed down to it.
       :wait_timeout,      # How long the request is allowed to have waited before reaching rack. If exceeded, the request is 'expired', i.e. dropped entirely without being passed down to the application.
       :wait_overtime,     # Additional time over @wait_timeout for requests with a body, like POST requests. These may take longer to be received by the server before being passed down to the application, but should not be expired.
-      :service_past_wait
+      :service_past_wait, # when false, reduces the request's computed timeout from the service_timeout value if the complete request lifetime (wait + service) would have been longer than wait_timeout (+ wait_overtime when applicable). When true, always uses the service_timeout value. We default to false under the assumption that the router would drop a request that's not responded within wait_timeout, so there is no point in servicing beyond seconds_service_left (see code further down) up until service_timeout.
+      :term_on_timeout
 
-    def initialize(app, service_timeout:nil, wait_timeout:nil, wait_overtime:nil, service_past_wait:"not_specified")
+    def initialize(app, service_timeout:nil, wait_timeout:nil, wait_overtime:nil, service_past_wait:"not_specified", term_on_timeout: nil)
+      @term_on_timeout   = read_timeout_property term_on_timeout, ENV.fetch("RACK_TIMEOUT_TERM_ON_TIMEOUT", false)
       @service_timeout   = read_timeout_property service_timeout, ENV.fetch("RACK_TIMEOUT_SERVICE_TIMEOUT", 15).to_i
       @wait_timeout      = read_timeout_property wait_timeout,    ENV.fetch("RACK_TIMEOUT_WAIT_TIMEOUT", 30).to_i
       @wait_overtime     = read_timeout_property wait_overtime,   ENV.fetch("RACK_TIMEOUT_WAIT_OVERTIME", 60).to_i
       @service_past_wait = service_past_wait == "not_specified" ? ENV.fetch("RACK_TIMEOUT_SERVICE_PAST_WAIT", false).to_s != "false" : service_past_wait
+
+      Thread.main['RACK_TIMEOUT_COUNT'] ||= 0
+      if @term_on_timeout
+        raise "term_on_timeout must be an integer but is #{@term_on_timeout.class}: #{@term_on_timeout}" unless @term_on_timeout.is_a?(Numeric)
+        raise "Current Runtime does not support processes" unless ::Process.respond_to?(:fork)
+      end
       @app = app
     end
 
@@ -90,7 +101,9 @@ module Rack
       seconds_waited = 0 if seconds_waited < 0 # make up for potential time drift between the routing server and the application server
       final_wait_timeout = wait_timeout + effective_overtime # how long the request will be allowed to have waited
       seconds_service_left = final_wait_timeout - seconds_waited # first calculation of service timeout (relevant if request doesn't get expired, may be overridden later)
-      info.wait
+      info.wait = seconds_waited # updating the info properties; info.timeout will be the wait timeout at this point
+      info.timeout = final_wait_timeout
+
       if seconds_service_left <= 0 # expire requests that have waited for too long in the queue (as they are assumed to have been dropped by the web server / routing layer at this point)
         RT._set_state! env, :expired
         raise RequestExpiryError.new(env), "Request older than #{info.ms(:timeout)}."
@@ -103,7 +116,7 @@ module Rack
       # compute actual timeout to be used for this request; if service_past_wait is true, this is just service_timeout. If false (the default), and wait time was determined, we'll use the shortest value between seconds_service_left and service_timeout. See comment above at service_past_wait for justification.
       info.timeout = service_timeout # nice and simple, when service_past_wait is true, not so much otherwise:
       info.timeout = seconds_service_left if !service_past_wait && seconds_service_left && seconds_service_left > 0 && seconds_service_left < service_timeout
-
+      info.term = term_on_timeout
       RT._set_state! env, :ready # we're good to go, but have done nothing yet
 
       heartbeat_event = nil # init var so it's in scope for following proc
@@ -116,7 +129,22 @@ module Rack
 
       timeout = RT::Scheduler::Timeout.new do |app_thread| # creates a timeout instance responsible for timing out the request. the given block runs if timed out
         register_state_change.call :timed_out
-
+
+        message = "Request "
+        message << "waited #{info.ms(:wait)}, then " if info.wait
+        message << "ran for longer than #{info.ms(:timeout)} "
+        if term_on_timeout
+          Thread.main['RACK_TIMEOUT_COUNT'] += 1
+
+          if Thread.main['RACK_TIMEOUT_COUNT'] >= @term_on_timeout
+            message << ", sending SIGTERM to process #{Process.pid}"
+            Process.kill("SIGTERM", Process.pid)
+          else
+            message << ", #{Thread.main['RACK_TIMEOUT_COUNT']}/#{term_on_timeout} timeouts allowed before SIGTERM for process #{Process.pid}"
+          end
+        end
+
+        app_thread.raise(RequestTimeoutException.new(env), message)
       end
 
       response = timeout.timeout(info.timeout) do # perform request with timeout
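Tracing the counting branch above with `term_on_timeout: 2`: the first timeout in a worker only records the allowance in the exception message, and the second sends the signal. The resulting messages would read roughly as follows (PID and durations invented):

```
Request ran for longer than 15000ms, 1/2 timeouts allowed before SIGTERM for process 3925
Request ran for longer than 15000ms, sending SIGTERM to process 3925
```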
@@ -191,6 +219,5 @@ module Rack
     def self.notify_state_change_observers(env)
       @state_change_observers.values.each { |observer| observer.call(env) }
     end
-
   end
 end
data/lib/rack/timeout/logger.rb
CHANGED

data/lib/rack/timeout/logging-observer.rb
CHANGED

@@ -48,9 +48,9 @@ class Rack::Timeout::StateChangeLoggingObserver
     s << " wait="    << info.ms(:wait)    if info.wait
     s << " timeout=" << info.ms(:timeout) if info.timeout
     s << " service=" << info.ms(:service) if info.service
+    s << " term_on_timeout=" << info.term.to_s if info.term
     s << " state="   << info.state.to_s   if info.state
     s
   end
 end
-
 end
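With the observer change, a request that timed out under `term_on_timeout` would log the new field; an illustrative line (values invented) following the string building above:

```
source=rack-timeout id=d3c4e02e wait=2ms timeout=15000ms service=15003ms term_on_timeout=1 state=timed_out
```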
data/test/env_settings_test.rb
CHANGED
data/test/test_helper.rb
CHANGED
metadata
CHANGED
@@ -1,14 +1,14 @@
 --- !ruby/object:Gem::Specification
 name: rack-timeout
 version: !ruby/object:Gem::Version
-  version: 0.
+  version: 0.6.0
 platform: ruby
 authors:
 - Caio Chassot
 autorequire:
 bindir: bin
 cert_chain: []
-date: 2019-
+date: 2019-12-11 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: rake
@@ -91,8 +91,8 @@ licenses:
 - MIT
 metadata:
   bug_tracker_uri: https://github.com/sharpstone/rack-timeout/issues
-  changelog_uri: https://github.com/sharpstone/rack-timeout/blob/v0.
-  documentation_uri: https://rubydoc.info/gems/rack-timeout/0.
+  changelog_uri: https://github.com/sharpstone/rack-timeout/blob/v0.6.0/CHANGELOG.md
+  documentation_uri: https://rubydoc.info/gems/rack-timeout/0.6.0/
   source_code_uri: https://github.com/sharpstone/rack-timeout
 post_install_message:
 rdoc_options: []
@@ -109,8 +109,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
 - !ruby/object:Gem::Version
   version: '0'
 requirements: []
-
-rubygems_version: 2.5.2.3
+rubygems_version: 3.0.6
 signing_key:
 specification_version: 4
 summary: Abort requests that are taking too long