puma 5.3.2 → 6.0.0
Potentially problematic release: this version of puma might be problematic.
- checksums.yaml +4 -4
- data/History.md +284 -11
- data/LICENSE +0 -0
- data/README.md +61 -16
- data/bin/puma-wild +1 -1
- data/docs/architecture.md +49 -16
- data/docs/compile_options.md +38 -2
- data/docs/deployment.md +53 -67
- data/docs/fork_worker.md +1 -3
- data/docs/images/puma-connection-flow-no-reactor.png +0 -0
- data/docs/images/puma-connection-flow.png +0 -0
- data/docs/images/puma-general-arch.png +0 -0
- data/docs/jungle/README.md +0 -0
- data/docs/jungle/rc.d/README.md +0 -0
- data/docs/jungle/rc.d/puma.conf +0 -0
- data/docs/kubernetes.md +0 -0
- data/docs/nginx.md +0 -0
- data/docs/plugins.md +15 -15
- data/docs/rails_dev_mode.md +2 -3
- data/docs/restart.md +6 -6
- data/docs/signals.md +11 -10
- data/docs/stats.md +8 -8
- data/docs/systemd.md +64 -67
- data/docs/testing_benchmarks_local_files.md +150 -0
- data/docs/testing_test_rackup_ci_files.md +36 -0
- data/ext/puma_http11/PumaHttp11Service.java +0 -0
- data/ext/puma_http11/ext_help.h +0 -0
- data/ext/puma_http11/extconf.rb +44 -13
- data/ext/puma_http11/http11_parser.c +24 -11
- data/ext/puma_http11/http11_parser.h +1 -1
- data/ext/puma_http11/http11_parser.java.rl +2 -2
- data/ext/puma_http11/http11_parser.rl +2 -2
- data/ext/puma_http11/http11_parser_common.rl +3 -3
- data/ext/puma_http11/mini_ssl.c +122 -23
- data/ext/puma_http11/no_ssl/PumaHttp11Service.java +0 -0
- data/ext/puma_http11/org/jruby/puma/Http11.java +3 -3
- data/ext/puma_http11/org/jruby/puma/Http11Parser.java +50 -48
- data/ext/puma_http11/org/jruby/puma/MiniSSL.java +188 -102
- data/ext/puma_http11/puma_http11.c +18 -10
- data/lib/puma/app/status.rb +9 -6
- data/lib/puma/binder.rb +81 -42
- data/lib/puma/cli.rb +23 -19
- data/lib/puma/client.rb +124 -30
- data/lib/puma/cluster/worker.rb +21 -29
- data/lib/puma/cluster/worker_handle.rb +8 -1
- data/lib/puma/cluster.rb +57 -48
- data/lib/puma/commonlogger.rb +0 -0
- data/lib/puma/configuration.rb +74 -55
- data/lib/puma/const.rb +21 -24
- data/lib/puma/control_cli.rb +22 -19
- data/lib/puma/detect.rb +10 -2
- data/lib/puma/dsl.rb +196 -57
- data/lib/puma/error_logger.rb +17 -9
- data/lib/puma/events.rb +6 -126
- data/lib/puma/io_buffer.rb +29 -4
- data/lib/puma/jruby_restart.rb +2 -1
- data/lib/puma/{json.rb → json_serialization.rb} +1 -1
- data/lib/puma/launcher/bundle_pruner.rb +104 -0
- data/lib/puma/launcher.rb +108 -154
- data/lib/puma/log_writer.rb +137 -0
- data/lib/puma/minissl/context_builder.rb +29 -16
- data/lib/puma/minissl.rb +115 -38
- data/lib/puma/null_io.rb +5 -0
- data/lib/puma/plugin/tmp_restart.rb +1 -1
- data/lib/puma/plugin.rb +2 -2
- data/lib/puma/rack/builder.rb +5 -5
- data/lib/puma/rack/urlmap.rb +0 -0
- data/lib/puma/rack_default.rb +1 -1
- data/lib/puma/reactor.rb +3 -3
- data/lib/puma/request.rb +293 -153
- data/lib/puma/runner.rb +63 -28
- data/lib/puma/server.rb +83 -88
- data/lib/puma/single.rb +10 -10
- data/lib/puma/state_file.rb +39 -7
- data/lib/puma/systemd.rb +3 -2
- data/lib/puma/thread_pool.rb +22 -17
- data/lib/puma/util.rb +20 -15
- data/lib/puma.rb +12 -9
- data/lib/rack/handler/puma.rb +9 -9
- data/tools/Dockerfile +1 -1
- data/tools/trickletest.rb +0 -0
- metadata +13 -9
- data/lib/puma/queue_close.rb +0 -26
data/docs/compile_options.md
CHANGED
@@ -1,10 +1,12 @@
 # Compile Options
 
-There are some `cflags` provided to change Puma's default configuration for its
+There are some `cflags` provided to change Puma's default configuration for its
+C extension.
 
 ## Query String, `PUMA_QUERY_STRING_MAX_LENGTH`
 
-By default, the max length of `QUERY_STRING` is `1024 * 10`. But you may want to
+By default, the max length of `QUERY_STRING` is `1024 * 10`. But you may want to
+adjust it to accept longer queries in GET requests.
 
 For manual install, pass the `PUMA_QUERY_STRING_MAX_LENGTH` option like this:
 
@@ -17,3 +19,37 @@ For Bundler, use its configuration system:
 ```
 bundle config build.puma "--with-cflags='-D PUMA_QUERY_STRING_MAX_LENGTH=64000'"
 ```
+
+## Request Path, `PUMA_REQUEST_PATH_MAX_LENGTH`
+
+By default, the max length of `REQUEST_PATH` is `8192`. But you may want to
+adjust it to accept longer paths in requests.
+
+For manual install, pass the `PUMA_REQUEST_PATH_MAX_LENGTH` option like this:
+
+```
+gem install puma -- --with-cflags="-D PUMA_REQUEST_PATH_MAX_LENGTH=64000"
+```
+
+For Bundler, use its configuration system:
+
+```
+bundle config build.puma "--with-cflags='-D PUMA_REQUEST_PATH_MAX_LENGTH=64000'"
+```
+
+## Request URI, `PUMA_REQUEST_URI_MAX_LENGTH`
+
+By default, the max length of `REQUEST_URI` is `1024 * 12`. But you may want to
+adjust it to accept longer URIs in requests.
+
+For manual install, pass the `PUMA_REQUEST_URI_MAX_LENGTH` option like this:
+
+```
+gem install puma -- --with-cflags="-D PUMA_REQUEST_URI_MAX_LENGTH=64000"
+```
+
+For Bundler, use its configuration system:
+
+```
+bundle config build.puma "--with-cflags='-D PUMA_REQUEST_URI_MAX_LENGTH=64000'"
+```
data/docs/deployment.md
CHANGED
@@ -1,35 +1,32 @@
 # Deployment engineering for Puma
 
-Puma
-
-it in their production deployments as well.
+Puma expects to be run in a deployed environment eventually. You can use it as
+your development server, but most people use it in their production deployments.
 
-To that end, this
-
+To that end, this document serves as a foundation of wisdom regarding deploying
+Puma to production while increasing happiness and decreasing downtime.
 
 ## Specifying Puma
 
-Most people
-
+Most people will specify Puma by including `gem "puma"` in a Gemfile, so we'll
+assume this is how you're using Puma.
 
-
+## Single vs. Cluster mode
 
-
+Initially, Puma was conceived as a thread-only web server, but support for
+processes was added in version 2.
 
-
-
+To run `puma` in single mode (i.e., as a development environment), set the
+number of workers to 0; anything higher will run in cluster mode.
 
-
-set the number of workers to 0, anything above will run in cluster mode.
-
-Here are some rules of thumb for cluster mode:
+Here are some tips for cluster mode:
 
 ### MRI
 
-* Use cluster mode and set the number of workers to 1.5x the number of
-  in the machine, minimum 2.
-* Set the number of threads to desired concurrent requests
-  Puma defaults to 5 and that's a decent number.
+* Use cluster mode and set the number of workers to 1.5x the number of CPU cores
+  in the machine, starting from a minimum of 2.
+* Set the number of threads to desired concurrent requests/number of workers.
+  Puma defaults to 5, and that's a decent number.
 
 #### Migrating from Unicorn
 
@@ -37,7 +34,7 @@ Here are some rules of thumb for cluster mode:
 * Set workers to half the number of unicorn workers you're using
 * Set threads to 2
 * Enjoy 50% memory savings
-* As you grow more confident in the thread
+* As you grow more confident in the thread-safety of your app, you can tune the
   workers down and the threads up.
 
 #### Ubuntu / Systemd (Systemctl) Installation
@@ -48,69 +45,58 @@ See [systemd.md](systemd.md)
 
 **How do you know if you've got enough (or too many workers)?**
 
-A good question. Due to MRI's GIL, only one thread can be executing Ruby code at
-But since so many apps are waiting on IO from DBs, etc., they can
-
+A good question. Due to MRI's GIL, only one thread can be executing Ruby code at
+a time. But since so many apps are waiting on IO from DBs, etc., they can
+utilize threads to use the process more efficiently.
 
-
-
-
-
+Generally, you never want processes that are pegged all the time. That can mean
+there is more work to do than the process can get through. On the other hand, if
+you have processes that sit around doing nothing, then they're just eating up
+resources.
 
-Watch your CPU utilization over time and aim for about 70% on average.
-you've got capacity still but aren't starving threads.
+Watch your CPU utilization over time and aim for about 70% on average. 70%
+utilization means you've got capacity still but aren't starving threads.
 
 **Measuring utilization**
 
-Using a timestamp header from an upstream proxy server (
-
-thread to become available.
+Using a timestamp header from an upstream proxy server (e.g., `nginx` or
+`haproxy`) makes it possible to indicate how long requests have been waiting for
+a Puma thread to become available.
 
 * Have your upstream proxy set a header with the time it received the request:
   * nginx: `proxy_set_header X-Request-Start "${msec}";`
-  * haproxy >= 1.9: `http-request set-header X-Request-Start
+  * haproxy >= 1.9: `http-request set-header X-Request-Start
+    t=%[date()]%[date_us()]`
   * haproxy < 1.9: `http-request set-header X-Request-Start t=%[date()]`
-* In your Rack middleware, determine the amount of time elapsed since
-
-
-
-*
-
+* In your Rack middleware, determine the amount of time elapsed since
+  `X-Request-Start`.
+* To improve accuracy, you will want to subtract time spent waiting for slow
+  clients:
+  * `env['puma.request_body_wait']` contains the number of milliseconds Puma
+    spent waiting for the client to send the request body.
+  * haproxy: `%Th` (TLS handshake time) and `%Ti` (idle time before request)
+    can also be added as headers.
 
 ## Should I daemonize?
 
-
+The Puma 5.0 release removed daemonization. For older versions and alternatives,
+continue reading.
 
-I prefer to
-monitor them as child processes. This gives them fast response to crashes and
+I prefer not to daemonize my servers and use something like `runit` or `systemd`
+to monitor them as child processes. This gives them fast response to crashes and
 makes it easy to figure out what is going on. Additionally, unlike `unicorn`,
-
+Puma does not require daemonization to do zero-downtime restarts.
 
-I see people using daemonization because they start puma directly via
-task and thus want it to live on past the `cap deploy`. To these people I say:
-You need to be using a process monitor. Nothing is making sure
-this scenario! You're just waiting for something weird to happen,
-and to get paged at
-your OS comes with, be it `sysvinit` or `systemd`. Or branch out
-
+I see people using daemonization because they start puma directly via Capistrano
+task and thus want it to live on past the `cap deploy`. To these people, I say:
+You need to be using a process monitor. Nothing is making sure Puma stays up in
+this scenario! You're just waiting for something weird to happen, Puma to die,
+and to get paged at 3 AM. Do yourself a favor, at least use the process monitoring
+your OS comes with, be it `sysvinit` or `systemd`. Or branch out and use `runit`
+or hell, even `monit`.
 
 ## Restarting
 
 You probably will want to deploy some new code at some point, and you'd like
-
-
-
-1. Don't use `preload!`. This dirties the master process and means it will have
-to shutdown all the workers and re-exec itself to get your new code. It is not compatible with phased-restart and `prune_bundler` as well.
-
-1. Use `prune_bundler`. This makes it so that the cluster master will detach itself
-from a Bundler context on start. This allows the cluster workers to load your app
-and start a brand new Bundler context within the worker only. This means your
-master remains pristine and can live on between new releases of your code.
-
-1. Use phased-restart (`SIGUSR1` or `pumactl phased-restart`). This tells the master
-to kill off one worker at a time and restart them in your new code. This minimizes
-downtime and staggers the restart nicely. **WARNING** This means that both your
-old code and your new code will be running concurrently. Most deployment solutions
-already cause that, but it's worth warning you about it again. Be careful with your
-migrations, etc!
+Puma to start running that new code. There are a few options for restarting
+Puma, described separately in our [restart documentation](restart.md).
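For illustration, the "Measuring utilization" steps in the new deployment.md text above can be condensed into a small piece of Rack middleware. The sketch below is not part of the Puma docs; the class name, the `request.queue_time_ms` env key, and the logger are illustrative, and it assumes the nginx `proxy_set_header X-Request-Start "${msec}";` format (epoch seconds with milliseconds). The only Puma-provided value it uses is `env['puma.request_body_wait']`.

```ruby
# Hypothetical middleware sketch; not part of Puma or its documentation.
class RequestQueueTime
  def initialize(app, logger: nil)
    @app = app
    @logger = logger
  end

  def call(env)
    raw = env['HTTP_X_REQUEST_START'].to_s.sub(/\At=/, '')
    if raw.match?(/\A\d+(\.\d+)?\z/)
      # Time between the proxy receiving the request and Puma handing it to the app.
      queue_ms = (Time.now.to_f - raw.to_f) * 1000.0
      # Subtract the milliseconds Puma spent waiting on a slow client's request body.
      queue_ms -= env['puma.request_body_wait'].to_f
      env['request.queue_time_ms'] = queue_ms
      @logger&.info("request_queue_ms=#{queue_ms.round(2)}")
    end
    @app.call(env)
  end
end
```

It would be inserted near the top of the middleware stack, e.g. `use RequestQueueTime, logger: some_logger` in `config.ru`.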
data/docs/fork_worker.md
CHANGED
@@ -10,7 +10,7 @@ Puma 5 introduces an experimental new cluster-mode configuration option, `fork_worker`
 10004   \_ puma: cluster worker 3: 10000 [puma]
 ```
 
-
+The `fork_worker` option allows your application to be initialized only once for copy-on-write memory savings, and it has two additional advantages:
 
 1. **Compatible with phased restart.** Because the master process itself doesn't preload the application, this mode works with phased restart (`SIGUSR1` or `pumactl phased-restart`). When worker 0 reloads as part of a phased restart, it initializes a new copy of your application first, then the other workers reload by forking from this new worker already containing the new preloaded application.
 
@@ -24,8 +24,6 @@ Similar to the `preload_app!` option, the `fork_worker` option allows your appli
 
 ### Limitations
 
-- Not compatible with the `preload_app!` option
-
 - This mode is still very experimental so there may be bugs or edge-cases, particularly around expected behavior of existing hooks. Please open a [bug report](https://github.com/puma/puma/issues/new?template=bug_report.md) if you encounter any issues.
 
 - In order to fork new workers cleanly, worker 0 shuts down its server and stops serving requests so there are no open file descriptors or other kinds of shared global state between processes, and to maximize copy-on-write efficiency across the newly-forked workers. This may temporarily reduce total capacity of the cluster during a phased restart / refork.
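As a sketch of how the option described above is enabled (the worker count is illustrative, not a recommendation from the Puma docs):

```ruby
# config/puma.rb -- minimal sketch of enabling the experimental fork_worker mode.
workers 4
fork_worker   # workers are forked from worker 0 instead of from the master process

# A refork (re-copying workers from worker 0) can later be triggered manually
# with `pumactl refork` or by sending SIGURG to the master process.
```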
data/docs/images/puma-connection-flow-no-reactor.png
File without changes
data/docs/images/puma-connection-flow.png
File without changes
data/docs/images/puma-general-arch.png
File without changes
data/docs/jungle/README.md
File without changes
data/docs/jungle/rc.d/README.md
File without changes
data/docs/jungle/rc.d/puma.conf
File without changes
data/docs/kubernetes.md
File without changes
data/docs/nginx.md
File without changes
data/docs/plugins.md
CHANGED
@@ -3,22 +3,22 @@
 Puma 3.0 added support for plugins that can augment configuration and service
 operations.
 
-
+There are two canonical plugins to aid in the development of new plugins:
 
 * [tmp\_restart](https://github.com/puma/puma/blob/master/lib/puma/plugin/tmp_restart.rb):
   Restarts the server if the file `tmp/restart.txt` is touched
 * [heroku](https://github.com/puma/puma-heroku/blob/master/lib/puma/plugin/heroku.rb):
-  Packages up the default configuration used by
+  Packages up the default configuration used by Puma on Heroku (being sunset
+  with the release of Puma 5.0)
 
-Plugins are activated in a
+Plugins are activated in a Puma configuration file (such as `config/puma.rb`)
 by adding `plugin "name"`, such as `plugin "heroku"`.
 
-Plugins are activated based
-
-
-puma plugins).
+Plugins are activated based on path requirements, so activating the `heroku`
+plugin is much like `require "puma/plugin/heroku"`. This allows gems to provide
+multiple plugins (as well as unrelated gems to provide Puma plugins).
 
-The `tmp_restart` plugin
+The `tmp_restart` plugin comes with Puma, so it is always available.
 
 To use the `heroku` plugin, add `puma-heroku` to your Gemfile or install it.
 
@@ -26,13 +26,13 @@ To use the `heroku` plugin, add `puma-heroku` to your Gemfile or install it.
 
 ## Server-wide hooks
 
-Plugins can use a couple of hooks at server level: `start` and `config`.
+Plugins can use a couple of hooks at the server level: `start` and `config`.
 
-`start` runs when the server has started and allows the plugin to
-functionality to augment
+`start` runs when the server has started and allows the plugin to initiate other
+functionality to augment Puma.
 
-`config` runs when the server is being configured and
-object that
+`config` runs when the server is being configured and receives a `Puma::DSL`
+object that is useful for additional configuration.
 
-
-
+Public methods in [`Puma::Plugin`](../lib/puma/plugin.rb) are treated as a
+public API for plugins.
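To make the two hooks concrete, here is a minimal plugin sketch. The plugin name, file path, and behavior are hypothetical; only the `start`/`config` hook names, `Puma::Plugin.create`, and activation via `plugin "name"` come from the documentation above.

```ruby
# lib/puma/plugin/heartbeat.rb -- hypothetical example plugin.
require 'puma/plugin'

Puma::Plugin.create do
  # Runs while the server is being configured; receives a Puma::DSL object.
  def config(dsl)
    dsl.tag 'heartbeat'
  end

  # Runs once the server has started; can kick off extra functionality.
  def start(launcher)
    in_background do
      loop do
        File.write('tmp/heartbeat.txt', Time.now.to_s)
        sleep 5
      end
    end
  end
end
```

It would then be activated by adding `plugin "heartbeat"` to the Puma configuration file.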
data/docs/rails_dev_mode.md
CHANGED
@@ -2,16 +2,15 @@
 
 ## "Loopback requests"
 
-Be cautious of "loopback requests"
+Be cautious of "loopback requests," where a Rails application executes a request to a server that, in turn, results in another request back to the same Rails application before the first request completes. Having a loopback request will trigger [Rails' load interlock](https://guides.rubyonrails.org/threading_and_code_execution.html#load-interlock) mechanism. The load interlock mechanism prevents a thread from using the Rails autoloading mechanism to load constants while the application code is still running inside another thread.
 
 This issue only occurs in the development environment as Rails' load interlock is not used in production environments. Although we're not sure, we believe this issue may not occur with the new `zeitwerk` code loader.
 
 ### Solutions
 
-
 #### 1. Bypass Rails' load interlock with `.permit_concurrent_loads`
 
-Wrap the first request inside a block that will allow concurrent loads
+Wrap the first request inside a block that will allow concurrent loads: [`ActiveSupport::Dependencies.interlock.permit_concurrent_loads`](https://guides.rubyonrails.org/threading_and_code_execution.html#permit-concurrent-loads). Anything wrapped inside the `.permit_concurrent_loads` block will bypass the load interlock mechanism, allowing new threads to access the Rails environment and boot properly.
 
 ###### Example
 
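The Example section referenced in the hunk above is not included in this diff. As a rough sketch of the approach it describes (the `call_to_same_app` helper is hypothetical), the outbound request that loops back to the same application would be wrapped like this:

```ruby
require 'active_support/dependencies'

# Allow other threads to load constants while this thread waits on the loopback call.
response = ActiveSupport::Dependencies.interlock.permit_concurrent_loads do
  call_to_same_app(params)
end
```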
data/docs/restart.md
CHANGED
@@ -1,8 +1,8 @@
-Puma provides three distinct kinds of restart operations, each for different use cases.
+Puma provides three distinct kinds of restart operations, each for different use cases. This document describes "hot restarts" and "phased restarts." The third kind of restart operation is called "refork" and is described in the documentation for [`fork_worker`](fork_worker.md).
 
 ## Hot restart
 
-To perform a "hot" restart, Puma performs an `exec` operation to start the process up again, so no memory is shared between the old process and the new process. As a result, it is safe to issue a restart any place where you would manually stop Puma and start it again. In particular, it is safe to upgrade Puma itself using a hot restart.
+To perform a "hot" restart, Puma performs an `exec` operation to start the process up again, so no memory is shared between the old process and the new process. As a result, it is safe to issue a restart at any place where you would manually stop Puma and start it again. In particular, it is safe to upgrade Puma itself using a hot restart.
 
 If the new process is unable to load, it will simply exit. You should therefore run Puma under a process monitor when using it in production.
 
@@ -16,14 +16,14 @@ Any of the following will cause a Puma server to perform a hot restart:
 
 ### Supported configurations
 
-* Works in cluster mode and
+* Works in cluster mode and single mode
 * Supported on all platforms
 
 ### Client experience
 
-* All platforms:
+* All platforms: clients with an in-flight request are served responses before the connection is closed gracefully. Puma gracefully disconnects any idle HTTP persistent connections before restarting.
 * On MRI or TruffleRuby on Linux and BSD: Clients who connect just before the server restarts may experience increased latency while the server stops and starts again, but their connections will not be closed prematurely.
-* On Windows and
+* On Windows and JRuby: Clients who connect just before a restart may experience "connection reset" errors.
 
 ### Additional notes
 
@@ -32,7 +32,7 @@ Any of the following will cause a Puma server to perform a hot restart:
 
 ## Phased restart
 
-Phased restarts replace all running workers in a Puma cluster. This is a useful way to
+Phased restarts replace all running workers in a Puma cluster. This is a useful way to upgrade the application that Puma is serving gracefully. A phased restart works by first killing an old worker, then starting a new worker, waiting until the new worker has successfully started before proceeding to the next worker. This process continues until all workers are replaced. The master process is not restarted.
 
 ### How-to
 
data/docs/signals.md
CHANGED
@@ -1,8 +1,8 @@
-The [unix signal](https://en.wikipedia.org/wiki/Unix_signal) is a method of sending messages between [processes](https://en.wikipedia.org/wiki/Process_(computing)). When a signal is sent, the operating system interrupts the target process's normal flow of execution. There are standard signals that are used to stop a process but there are also custom signals that can be used for other purposes. This document is an attempt to list all supported signals that Puma will respond to. In general, signals need only be sent to the master process of a cluster.
+The [unix signal](https://en.wikipedia.org/wiki/Unix_signal) is a method of sending messages between [processes](https://en.wikipedia.org/wiki/Process_(computing)). When a signal is sent, the operating system interrupts the target process's normal flow of execution. There are standard signals that are used to stop a process, but there are also custom signals that can be used for other purposes. This document is an attempt to list all supported signals that Puma will respond to. In general, signals need only be sent to the master process of a cluster.
 
 ## Sending Signals
 
-If you are new to signals it can be
+If you are new to signals, it can be helpful to see how they are used. When a process starts in a *nix-like operating system, it will have a [PID - or process identifier](https://en.wikipedia.org/wiki/Process_identifier) that can be used to send signals to the process. For demonstration, we will create an infinitely running process by tailing a file:
 
 ```sh
 $ echo "foo" >> my.log
@@ -10,7 +10,7 @@ $ irb
 > pid = Process.spawn 'tail -f my.log'
 ```
 
-From here we can see that the tail process is running by using the `ps` command:
+From here, we can see that the tail process is running by using the `ps` command:
 
 ```sh
 $ ps aux | grep tail
@@ -27,7 +27,7 @@ Process.detach(pid) # https://ruby-doc.org/core-2.1.1/Process.html#method-c-detach
 Process.kill("TERM", pid)
 ```
 
-Now you will see via `ps` that there is no more `tail` process. Sometimes when referring to signals the `SIG` prefix will be used
+Now you will see via `ps` that there is no more `tail` process. Sometimes when referring to signals, the `SIG` prefix will be used. For example, `SIGTERM` is equivalent to sending `TERM` via `Process.kill`.
 
 ## Puma Signals
 
@@ -35,13 +35,14 @@ Puma cluster responds to these signals:
 
 - `TTIN` increment the worker count by 1
 - `TTOU` decrement the worker count by 1
-- `TERM` send `TERM` to worker.
-- `USR2` restart workers. This also reloads
-- `USR1` restart workers in phases, a rolling restart. This will not reload configuration file.
-- `HUP ` reopen log files defined in stdout_redirect configuration parameter. If there is no stdout_redirect option provided it will behave like `INT`
-- `INT ` equivalent of sending Ctrl-C to cluster.
+- `TERM` send `TERM` to worker. The worker will attempt to finish then exit.
+- `USR2` restart workers. This also reloads the Puma configuration file, if there is one.
+- `USR1` restart workers in phases, a rolling restart. This will not reload the configuration file.
+- `HUP ` reopen log files defined in stdout_redirect configuration parameter. If there is no stdout_redirect option provided, it will behave like `INT`
+- `INT ` equivalent of sending Ctrl-C to cluster. Puma will attempt to finish then exit.
 - `CHLD`
-- `URG ` refork workers in phases from worker 0
+- `URG ` refork workers in phases from worker 0 if `fork_worker` option is enabled.
+- `INFO` print backtraces of all puma threads
 
 ## Callbacks order in case of different signals
 
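As a small illustration of the cluster signals listed above (the pid file path is illustrative):

```ruby
# Sending signals to a running Puma master process from Ruby.
pid = File.read('tmp/pids/puma.pid').to_i

Process.kill('TTIN', pid)  # add one worker
Process.kill('USR1', pid)  # phased (rolling) restart of workers
Process.kill('TERM', pid)  # graceful shutdown
```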
data/docs/stats.md
CHANGED
@@ -1,4 +1,4 @@
-##
+## Accessing stats
 
 Stats can be accessed in two ways:
 
@@ -47,18 +47,18 @@ end
 
 ## Explanation of stats
 
-`Puma.stats` returns different information and a different structure depending on if Puma is in single vs cluster mode. There is one top-level attribute that is common to both modes:
+`Puma.stats` returns different information and a different structure depending on if Puma is in single vs. cluster mode. There is one top-level attribute that is common to both modes:
 
-* started_at: when
+* started_at: when Puma was started
 
 ### single mode and individual workers in cluster mode
 
-When Puma
+When Puma runs in single mode, these stats are available at the top level. When Puma runs in cluster mode, these stats are available within the `worker_status` array in a hash labeled `last_status`, in an array of hashes where one hash represents each worker.
 
 * backlog: requests that are waiting for an available thread to be available. if this is above 0, you need more capacity [always true?]
 * running: how many threads are running
-* pool_capacity: the number of requests that the server is capable of taking right now. For example if the number is 5 then it means there are 5 threads sitting idle ready to take a request. If one request comes in, then the value would be 4 until it finishes processing. If the minimum threads allowed is zero, this number will still have a maximum value of the maximum threads allowed.
-* max_threads: the maximum number of threads
+* pool_capacity: the number of requests that the server is capable of taking right now. For example, if the number is 5, then it means there are 5 threads sitting idle ready to take a request. If one request comes in, then the value would be 4 until it finishes processing. If the minimum threads allowed is zero, this number will still have a maximum value of the maximum threads allowed.
+* max_threads: the maximum number of threads Puma is configured to spool per worker
 * requests_count: the number of requests this worker has served since starting
 
 
@@ -72,9 +72,9 @@ When Puma is run in single mode, these stats are available at the top level. Whe
 
 ### worker status
 
-* started_at: when the worker
+* started_at: when the worker started
 * pid: the process id of the worker process
-* index: each worker gets a number. if
+* index: each worker gets a number. if Puma is configured to have 3 workers, then this will be 0, 1, or 2
 * booted: if it's done booting [?]
 * last_checkin: Last time the worker responded to the master process' heartbeat check.
 * last_status: a hash of info about the worker's state handling requests. See the explanation for this in "single mode and individual workers in cluster mode" section above.