puma 3.8.2 → 4.0.0
Potentially problematic release. This version of puma might be problematic.
- checksums.yaml +5 -5
- data/History.md +157 -0
- data/README.md +155 -225
- data/docs/architecture.md +37 -0
- data/{DEPLOYMENT.md → docs/deployment.md} +24 -4
- data/docs/images/puma-connection-flow-no-reactor.png +0 -0
- data/docs/images/puma-connection-flow.png +0 -0
- data/docs/images/puma-general-arch.png +0 -0
- data/docs/plugins.md +28 -0
- data/docs/restart.md +41 -0
- data/docs/signals.md +56 -3
- data/docs/systemd.md +130 -37
- data/ext/puma_http11/PumaHttp11Service.java +2 -0
- data/ext/puma_http11/http11_parser.c +84 -84
- data/ext/puma_http11/http11_parser.rl +9 -9
- data/ext/puma_http11/mini_ssl.c +51 -9
- data/ext/puma_http11/org/jruby/puma/Http11Parser.java +13 -16
- data/ext/puma_http11/org/jruby/puma/IOBuffer.java +72 -0
- data/ext/puma_http11/org/jruby/puma/MiniSSL.java +26 -6
- data/lib/puma.rb +8 -0
- data/lib/puma/app/status.rb +9 -0
- data/lib/puma/binder.rb +31 -18
- data/lib/puma/cli.rb +22 -7
- data/lib/puma/client.rb +67 -18
- data/lib/puma/cluster.rb +64 -19
- data/lib/puma/commonlogger.rb +2 -0
- data/lib/puma/configuration.rb +22 -14
- data/lib/puma/const.rb +13 -2
- data/lib/puma/control_cli.rb +26 -14
- data/lib/puma/convenient.rb +2 -0
- data/lib/puma/daemon_ext.rb +2 -0
- data/lib/puma/delegation.rb +2 -0
- data/lib/puma/detect.rb +2 -0
- data/lib/puma/dsl.rb +91 -12
- data/lib/puma/events.rb +3 -2
- data/lib/puma/io_buffer.rb +3 -6
- data/lib/puma/jruby_restart.rb +2 -1
- data/lib/puma/launcher.rb +51 -30
- data/lib/puma/minissl.rb +79 -28
- data/lib/puma/null_io.rb +2 -0
- data/lib/puma/plugin.rb +2 -0
- data/lib/puma/plugin/tmp_restart.rb +0 -1
- data/lib/puma/rack/builder.rb +2 -1
- data/lib/puma/reactor.rb +218 -30
- data/lib/puma/runner.rb +17 -4
- data/lib/puma/server.rb +113 -49
- data/lib/puma/single.rb +16 -5
- data/lib/puma/state_file.rb +2 -0
- data/lib/puma/tcp_logger.rb +2 -0
- data/lib/puma/thread_pool.rb +59 -6
- data/lib/puma/util.rb +2 -6
- data/lib/rack/handler/puma.rb +13 -2
- data/tools/jungle/README.md +12 -2
- data/tools/jungle/init.d/README.md +2 -0
- data/tools/jungle/init.d/puma +7 -7
- data/tools/jungle/init.d/run-puma +1 -1
- data/tools/jungle/rc.d/README.md +74 -0
- data/tools/jungle/rc.d/puma +61 -0
- data/tools/jungle/rc.d/puma.conf +10 -0
- data/tools/trickletest.rb +1 -1
- metadata +25 -87
- data/.github/issue_template.md +0 -20
- data/Gemfile +0 -12
- data/Manifest.txt +0 -78
- data/Rakefile +0 -158
- data/Release.md +0 -9
- data/gemfiles/2.1-Gemfile +0 -12
- data/lib/puma/compat.rb +0 -14
- data/lib/puma/java_io_buffer.rb +0 -45
- data/lib/puma/rack/backports/uri/common_193.rb +0 -33
- data/puma.gemspec +0 -52
data/docs/architecture.md
ADDED
@@ -0,0 +1,37 @@
+# Architecture
+
+## Overview
+
+![http://bit.ly/2iJuFky](images/puma-general-arch.png)
+
+Puma is a threaded web server, processing requests across a TCP or UNIX socket.
+
+Workers accept connections from the socket, and a thread in the worker's thread pool processes the client's request.
+
+Clustered mode is shown/discussed here. Single mode is analogous to having a single worker process.
+
+## Connection pipeline
+
+![http://bit.ly/2zwzhEK](images/puma-connection-flow.png)
+
+* Upon startup, Puma listens on a TCP or UNIX socket.
+* The backlog of this socket is configured (with a default of 1024), determining how many established but unaccepted connections can exist concurrently.
+* This socket backlog is distinct from the "backlog" of work as reported by the control server stats. The latter is the number of connections in that worker's "todo" set waiting for a worker thread.
+* By default, a single, separate thread is used to receive HTTP requests across the socket.
+* When at least one worker thread is available for work, a connection is accepted and placed in this request buffer.
+* This thread waits for entire HTTP requests to be received over the connection.
+* The time spent waiting for the HTTP request body to be received is exposed to the Rack app as `env['puma.request_body_wait']` (milliseconds).
+* Once received, the connection is pushed into the "todo" set.
+* Worker threads pop work off the "todo" set for processing.
+* The thread processes the request via the rack application (which generates the HTTP response).
+* The thread writes the response to the connection.
+* Finally, the thread becomes available to process another connection in the "todo" set.
+
+### Disabling `queue_requests`
+
+![http://bit.ly/2zxCJ1Z](images/puma-connection-flow-no-reactor.png)
+
+The `queue_requests` option is `true` by default, enabling the separate thread used to buffer requests as described above.
+
+If set to `false`, this buffer will not be used for connections while waiting for the request to arrive.
+In this mode, when a connection is accepted, it is added to the "todo" queue immediately, and a worker will synchronously do any waiting necessary to read the HTTP request from the socket.
data/{DEPLOYMENT.md → docs/deployment.md}
CHANGED
@@ -38,22 +38,42 @@ Here are some rules of thumb:
 * As you grow more confident in the thread safety of your app, you can tune the
   workers down and the threads up.
 
+#### Ubuntu / Systemd (Systemctl) Installation
+
+See [systemd.md](systemd.md)
+
 #### Worker utilization
 
-**How do you know if you'
+**How do you know if you've got enough (or too many) workers?**
 
 A good question. Due to MRI's GIL, only one thread can be executing Ruby code at a time.
 But since so many apps are waiting on IO from DBs, etc., they can utilize threads
 to make better use of the process.
 
 The rule of thumb is you never want processes that are pegged all the time. This
-means that there is more work to do
+means that there is more work to do than the process can get through. On the other
 hand, if you have processes that sit around doing nothing, then they're just eating
 up resources.
 
-
+Watch your CPU utilization over time and aim for about 70% on average. This means
 you've got capacity still but aren't starving threads.
 
+**Measuring utilization**
+
+Using a timestamp header from an upstream proxy server (e.g. nginx or haproxy), it's
+possible to get an indication of how long requests have been waiting for a Puma
+thread to become available.
+
+* Have your upstream proxy set a header with the time it received the request:
+  * nginx: `proxy_set_header X-Request-Start "${msec}";`
+  * haproxy: `http-request set-header X-Request-Start "%t";`
+* In your Rack middleware, determine the amount of time elapsed since `X-Request-Start`.
+* To improve accuracy, you will want to subtract time spent waiting for slow clients:
+  * `env['puma.request_body_wait']` contains the number of milliseconds Puma spent
+    waiting for the client to send the request body.
+  * haproxy: `%Th` (TLS handshake time) and `%Ti` (idle time before request) can
+    also be added as headers.
+
 ## Daemonizing
 
 I prefer to not daemonize my servers and use something like `runit` or `upstart` to
@@ -62,7 +82,7 @@ makes it easy to figure out what is going on. Additionally, unlike `unicorn`,
 puma does not require daemonization to do zero-downtime restarts.
 
 I see people using daemonization because they start puma directly via capistrano
-task and thus want it to live on past the `cap deploy`. To
+task and thus want it to live on past the `cap deploy`. To these people I say:
 You need to be using a process monitor. Nothing is making sure puma stays up in
 this scenario! You're just waiting for something weird to happen, puma to die,
 and to get paged at 3am. Do yourself a favor, at least the process monitoring
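
To make the "Measuring utilization" recipe above concrete, here is a minimal Rack middleware sketch. It assumes the nginx `${msec}` format for `X-Request-Start` (epoch seconds with millisecond precision) and stores the result under a made-up env key, `request.queue_time_ms`; haproxy's `%t` format would need different parsing.

```ruby
class RequestQueueTiming
  def initialize(app)
    @app = app
  end

  def call(env)
    if (start = env["HTTP_X_REQUEST_START"])
      received_at  = start.to_f                          # epoch seconds, per nginx "${msec}"
      body_wait_ms = env["puma.request_body_wait"].to_f  # ms puma spent reading the request body
      queue_ms     = (Time.now.to_f - received_at) * 1000.0 - body_wait_ms
      env["request.queue_time_ms"] = [queue_ms, 0.0].max.round(1)
    end
    @app.call(env)
  end
end
```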
data/docs/images/puma-connection-flow-no-reactor.png: Binary file
data/docs/images/puma-connection-flow.png: Binary file
data/docs/images/puma-general-arch.png: Binary file
data/docs/plugins.md
ADDED
@@ -0,0 +1,28 @@
+## Plugins
+
+Puma 3.0 added support for plugins that can augment configuration and service operations.
+
+Two canonical plugins serve as examples to aid in the development of further plugins:
+
+* [tmp\_restart](https://github.com/puma/puma/blob/master/lib/puma/plugin/tmp_restart.rb): Restarts the server if the file `tmp/restart.txt` is touched
+* [heroku](https://github.com/puma/puma-heroku/blob/master/lib/puma/plugin/heroku.rb): Packages up the default configuration used by puma on Heroku
+
+Plugins are activated in a puma configuration file (such as `config/puma.rb`) by adding `plugin "name"`, such as `plugin "heroku"`.
+
+Plugins are activated simply based on path requirements, so activating the `heroku` plugin simply does `require "puma/plugin/heroku"`. This allows gems to provide multiple plugins (as well as unrelated gems to provide puma plugins).
+
+The `tmp_restart` plugin is bundled with puma, so it can always be used.
+
+To use the `heroku` plugin, add `puma-heroku` to your Gemfile or install it.
+
+### API
+
+At present, there are 2 hooks that plugins can use: `start` and `config`.
+
+`start` runs when the server has started and allows the plugin to start other functionality to augment puma.
+
+`config` runs when the server is being configured and is passed a `Puma::DSL` object that can be used to add additional configuration.
+
+Any public methods in `Puma::Plugin` are the public API that any plugin may use.
+
+In the future, more hooks and APIs will be added.
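
As a rough illustration of the two hooks described above, here is a sketch of what a small plugin file might look like. The plugin name (`heartbeat`), the tag value, and the file it touches are invented for this example; the hook wiring follows the description in the doc above rather than copying an existing plugin.

```ruby
# lib/puma/plugin/heartbeat.rb  (hypothetical plugin; activate with `plugin "heartbeat"`)
require "puma/plugin"

Puma::Plugin.create do
  # `config` hook: runs while the server is being configured,
  # receiving a Puma::DSL object.
  def config(dsl)
    dsl.tag "heartbeat-enabled"
  end

  # `start` hook: runs once the server has started.
  def start(launcher)
    in_background do
      loop do
        File.write("tmp/heartbeat.txt", Time.now.to_s)
        sleep 5
      end
    end
  end
end
```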
data/docs/restart.md
ADDED
@@ -0,0 +1,41 @@
+# Restarts
+
+To perform a restart, there are 3 built-in mechanisms:
+
+* Send the `puma` process the `SIGUSR2` signal (normal restart)
+* Send the `puma` process the `SIGUSR1` signal (restart in phases (a "rolling restart"), cluster mode only)
+* Use the status server and issue `/restart`
+
+No code is shared between the current and restarted process, so it should be safe to issue a restart any place where you would manually stop Puma and start it again.
+
+If the new process is unable to load, it will simply exit. You should therefore run Puma under a process monitor (see below) when using it in production.
+
+### Normal vs Hot vs Phased Restart
+
+A hot restart means that no requests will be lost while deploying your new code, since the server socket is kept open between restarts.
+
+But beware, hot restart does not mean that the incoming requests won’t hang for multiple seconds while your new code has not fully deployed. If you need a zero downtime and zero hanging requests deploy, you must use phased restart.
+
+When you run `pumactl phased-restart`, Puma kills workers one-by-one, meaning that at least another worker is still available to serve requests, which leads to zero hanging requests (yay!).
+
+But again beware, upgrading an application sometimes involves upgrading the database schema. With phased restart, there may be a moment during the deployment where processes belonging to the previous version and processes belonging to the new version both exist at the same time. Any database schema upgrades you perform must therefore be backwards-compatible with the old application version.
+
+If you perform a lot of database migrations, you probably should not use phased restart and use a normal/hot restart instead (`pumactl restart`). That way, no code is shared while deploying (in that case, `preload_app!` might help for quicker deployment, see ["Clustered Mode" in the README](../README.md#clustered-mode)).
+
+**Note**: Hot and phased restarts are only available on MRI, not on JRuby. They are also unavailable on Windows servers.
+
+### Release Directory
+
+If you symlink releases into a common working directory (i.e., `/current` from Capistrano), Puma won't pick up your new changes when running phased restarts without additional configuration. You should set your working directory within Puma's config to specify the directory it should use. This is a change from earlier versions of Puma (< 2.15) that would infer the directory for you.
+
+```ruby
+# config/puma.rb
+
+directory '/var/www/current'
+```
+
+### Cleanup Code
+
+Puma isn't able to understand all the resources that your app may use, so it provides a hook in the configuration file you pass to `-C` called `on_restart`. The block passed to `on_restart` will be called, unsurprisingly, just before Puma restarts itself.
+
+You should place code to close global log files, redis connections, etc. in this block so that their file descriptors don't leak into the restarted process. Failure to do so will result in slowly running out of descriptors and eventually obscure crashes as the server is restarted many times.
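
Pulling together the `directory` and `on_restart` pieces above, a sketch of the relevant `config/puma.rb` lines; the `$redis` and `LOGGER` globals are hypothetical stand-ins for whatever your app actually holds open:

```ruby
# config/puma.rb
directory "/var/www/current"   # resolve the symlinked release for phased restarts

on_restart do
  # Close anything holding a file descriptor so it doesn't leak into
  # the restarted process. These globals are app-specific examples.
  $redis.close if defined?($redis)
  LOGGER.close if defined?(LOGGER)
end
```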
data/docs/signals.md
CHANGED
@@ -36,8 +36,61 @@ Puma cluster responds to these signals:
 - `TTIN` increment the worker count by 1
 - `TTOU` decrement the worker count by 1
 - `TERM` send `TERM` to worker. Worker will attempt to finish then exit.
-- `USR2` restart workers
-- `USR1` restart workers in phases, a rolling restart.
-- `HUP` reopen log files defined in stdout_redirect configuration parameter
+- `USR2` restart workers. This also reloads the puma configuration file, if there is one.
+- `USR1` restart workers in phases, a rolling restart. This will not reload the configuration file.
+- `HUP` reopen log files defined in the `stdout_redirect` configuration parameter. If no `stdout_redirect` option is provided, it will behave like `INT`.
 - `INT` equivalent of sending Ctrl-C to cluster. Will attempt to finish then exit.
 - `CHLD`
+
+## Callbacks order in case of different signals
+
+### Start application
+
+```
+puma configuration file reloaded, if there is one
+* Pruning Bundler environment
+puma configuration file reloaded, if there is one
+
+before_fork
+on_worker_fork
+after_worker_fork
+
+Gemfile in context
+
+on_worker_boot
+
+Code of the app is loaded and running
+```
+
+### Send USR2
+
+```
+on_worker_shutdown
+on_restart
+
+puma configuration file reloaded, if there is one
+
+before_fork
+on_worker_fork
+after_worker_fork
+
+Gemfile in context
+
+on_worker_boot
+
+Code of the app is loaded and running
+```
+
+### Send USR1
+
+```
+on_worker_shutdown
+on_worker_fork
+after_worker_fork
+
+Gemfile in context
+
+on_worker_boot
+
+Code of the app is loaded and running
+```
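
For quick manual testing of the cluster signals listed above, the same signals can be sent from a Ruby one-off script as well as from `kill`; a sketch, assuming a pidfile at `tmp/pids/puma.pid`:

```ruby
pid = Integer(File.read("tmp/pids/puma.pid").strip)

Process.kill(:TTIN, pid)  # add one worker
Process.kill(:TTOU, pid)  # remove one worker
Process.kill(:USR2, pid)  # restart workers; reloads the configuration file
Process.kill(:USR1, pid)  # phased (rolling) restart; does not reload the configuration file
Process.kill(:TERM, pid)  # finish in-flight requests, then exit
```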
data/docs/systemd.md
CHANGED
@@ -3,10 +3,21 @@
 [systemd](https://www.freedesktop.org/wiki/Software/systemd/) is a
 commonly available init system (PID 1) on many Linux distributions. It
 offers process monitoring (including automatic restarts) and other
-useful features for running Puma in production.
-puma.service configuration file for systemd:
+useful features for running Puma in production.
 
-
+## Service Configuration
+
+Below is a sample puma.service configuration file for systemd, which
+can be copied or symlinked to /etc/systemd/system/puma.service, or, if
+desired, given an application- or instance-specific name.
+
+Note that this uses the systemd-preferred "simple" type, where the
+start command remains running in the foreground (does not fork and
+exit). See also the
+[Alternative Forking Configuration](#alternative-forking-configuration)
+below.
+
+~~~~ ini
 [Unit]
 Description=Puma HTTP Server
 After=network.target
@@ -21,22 +32,26 @@ Type=simple
 # Preferably configure a non-privileged user
 # User=
 
-#
-#
+# The path to your application code root directory.
+# Also replace the "<YOUR_APP_PATH>" placeholders below with this path.
+# Example: /home/username/myapp
+WorkingDirectory=<YOUR_APP_PATH>
 
 # Helpful for debugging socket activation, etc.
 # Environment=PUMA_DEBUG=1
 
-#
-#
-# `bundle binstubs puma --path ./sbin`
-
-
-#
+# SystemD will not run puma even if it is in your path. You must specify
+# an absolute path to puma. For example /usr/local/bin/puma
+# Alternatively, create a binstub with `bundle binstubs puma --path ./sbin` in the WorkingDirectory
+ExecStart=/<FULLPATH>/bin/puma -C <YOUR_APP_PATH>/puma.rb
+
+# Variant: Rails start.
+# ExecStart=/<FULLPATH>/bin/puma -C <YOUR_APP_PATH>/config/puma.rb ../config.ru
+
+# Variant: Use `bundle exec --keep-file-descriptors puma` instead of binstub
+# Variant: Specify directives inline.
+# ExecStart=/<FULLPATH>/puma -b tcp://0.0.0.0:9292 -b ssl://0.0.0.0:9293?key=key.pem&cert=cert.pem
 
-# Alternatively with a config file (in WorkingDirectory) and
-# comparable `bind` directives
-# ExecStart=<WD>/sbin/puma -C config.rb
 
 Restart=always
 
@@ -50,14 +65,29 @@ for additional details.
 ## Socket Activation
 
 systemd and puma also support socket activation, where systemd opens
-the listening socket(s) in advance and provides them to the puma
-process on startup. Among other advantages, this keeps
-sockets open across puma restarts and achieves graceful
-
-
-
-
-
+the listening socket(s) in advance and provides them to the puma
+master process on startup. Among other advantages, this keeps
+listening sockets open across puma restarts and achieves graceful
+restarts, including when upgrading puma, and is compatible with both
+clustered mode and application preload.
+
+**Note:** Any wrapper scripts which `exec`, or other indirections in
+`ExecStart`, may result in activated socket file descriptors being closed
+before they reach the puma master process. For example, if using `bundle exec`,
+pass the `--keep-file-descriptors` flag. `bundle exec` can be avoided by using a
+`puma` executable generated by `bundle binstubs puma`. This is tracked in
+[#1499].
+
+**Note:** Socket activation doesn't currently work on jruby. This is
+tracked in [#1367].
+
+To use socket activation, configure one or more `ListenStream` sockets
+in a companion `*.socket` unit file. Also uncomment the associated
+`Requires` directive for the socket unit in the service file (see
+above). Here is a sample puma.socket, matching the ports used in the
+above puma.service:
+
+~~~~ ini
 [Unit]
 Description=Puma HTTP Server Accept Sockets
 
@@ -84,6 +114,16 @@ for additional configuration details.
 Note that the above configurations will work with Puma in either
 single process or cluster mode.
 
+### Sockets and symlinks
+
+When using releases folders, you should set the socket path using the
+shared folder path (ex. `/srv/project/shared/tmp/puma.sock`), not the
+release folder path (`/srv/project/releases/1234/tmp/puma.sock`).
+
+Puma will detect the release path socket as different than the one provided by
+systemd and attempt to bind it again, resulting in the exception
+`There is already a server bound to:`.
+
 ## Usage
 
 Without socket activation, use `systemctl` as root (e.g. via `sudo`) as
@@ -169,29 +209,82 @@ Apr 07 08:40:19 hx puma[28320]: * Activated ssl://0.0.0.0:9234?key=key.pem&cert=
 Apr 07 08:40:19 hx puma[28320]: Use Ctrl-C to stop
 ~~~~
 
-## Alternative
-
-
+## Alternative Forking Configuration
+
+Other systems/tools might expect or need puma to be run as a
+"traditional" forking server, for example so that the `pumactl`
+command can be used directly and outside of systemd for
+stop/start/restart. This use case is incompatible with systemd socket
+activation, so it should not be configured. Below is an alternative
+puma.service config sample, using `Type=forking` and the `--daemon`
+flag in `ExecStart`. Here systemd is playing a role more equivalent to
+SysV init.d, where it is responsible for starting Puma on boot
+(multi-user.target) and stopping it on shutdown, but is not performing
+continuous restarts. Therefore running Puma in cluster mode, where the
+master can restart workers, is highly recommended. See the systemd
+[Restart] directive for details.
+
+~~~~ ini
+[Unit]
+Description=Puma HTTP Forking Server
+After=network.target
 
-~~~~
 [Service]
 # Background process configuration (use with --daemon in ExecStart)
 Type=forking
 
-#
-#
-
-#
-# path.
+# Preferably configure a non-privileged user
+# User=
+
+# The path to the puma application root
+# Also replace the "<WD>" placeholders below with this path.
+WorkingDirectory=
+
+# The command to start Puma
+# (replace "<WD>" below)
 ExecStart=bundle exec puma -C <WD>/shared/puma.rb --daemon
 
-#
-#
-# may differ from this example, for example if you use a Ruby version
-# manager. `<WD>` is short for "your working directory". Replace it with your
-# path.
+# The command to stop Puma
+# (replace "<WD>" below)
 ExecStop=bundle exec pumactl -S <WD>/shared/tmp/pids/puma.state stop
 
-#
+# Path to PID file so that systemd knows which is the master process
 PIDFile=<WD>/shared/tmp/pids/puma.pid
+
+# Should systemd restart puma?
+# Use "no" (the default) to ensure no interference when using
+# stop/start/restart via `pumactl`. The "on-failure" setting might
+# work better for this purpose, but you must test it.
+# Use "always" if only `systemctl` is used for start/stop/restart, and
+# reconsider if you actually need the forking config.
+Restart=no
+
+# `pumactl restart` wouldn't work without this. It's because `pumactl`
+# changes the PID on restart and systemd stops the service afterwards
+# because of the PID change. This option prevents stopping after the
+# PID change.
+RemainAfterExit=yes
+
+[Install]
+WantedBy=multi-user.target
+~~~~
+
+### capistrano3-puma
+
+By default,
+[capistrano3-puma](https://github.com/seuros/capistrano-puma) uses
+`pumactl` for deployment restarts, outside of systemd. To learn the
+exact commands that this tool would use for `ExecStart` and
+`ExecStop`, use the following `cap` commands in dry-run mode, and
+update the above forking service configuration accordingly. Note
+also that the configured `User` should likely be the same as the
+capistrano3-puma `:puma_user` option.
+
+~~~~ sh
+stage=production # or different stage, as needed
+cap $stage puma:start --dry-run
+cap $stage puma:stop --dry-run
 ~~~~
+
+[Restart]: https://www.freedesktop.org/software/systemd/man/systemd.service.html#Restart=
+[#1367]: https://github.com/puma/puma/issues/1367