jun-puma 1.0.1-java → 1.0.3-java

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (85)
  1. checksums.yaml +4 -4
  2. data/History.md +451 -1870
  3. data/LICENSE +20 -23
  4. data/README.md +65 -226
  5. data/ext/puma_http11/extconf.rb +3 -64
  6. data/lib/puma/puma_http11.jar +0 -0
  7. metadata +9 -105
  8. data/bin/puma-wild +0 -25
  9. data/docs/architecture.md +0 -74
  10. data/docs/compile_options.md +0 -55
  11. data/docs/deployment.md +0 -102
  12. data/docs/fork_worker.md +0 -31
  13. data/docs/images/puma-connection-flow-no-reactor.png +0 -0
  14. data/docs/images/puma-connection-flow.png +0 -0
  15. data/docs/images/puma-general-arch.png +0 -0
  16. data/docs/jungle/README.md +0 -9
  17. data/docs/jungle/rc.d/README.md +0 -74
  18. data/docs/jungle/rc.d/puma +0 -61
  19. data/docs/jungle/rc.d/puma.conf +0 -10
  20. data/docs/kubernetes.md +0 -78
  21. data/docs/nginx.md +0 -80
  22. data/docs/plugins.md +0 -38
  23. data/docs/rails_dev_mode.md +0 -28
  24. data/docs/restart.md +0 -64
  25. data/docs/signals.md +0 -98
  26. data/docs/stats.md +0 -142
  27. data/docs/systemd.md +0 -244
  28. data/docs/testing_benchmarks_local_files.md +0 -150
  29. data/docs/testing_test_rackup_ci_files.md +0 -36
  30. data/ext/puma_http11/PumaHttp11Service.java +0 -17
  31. data/ext/puma_http11/ext_help.h +0 -15
  32. data/ext/puma_http11/http11_parser.c +0 -1057
  33. data/ext/puma_http11/http11_parser.h +0 -65
  34. data/ext/puma_http11/http11_parser.java.rl +0 -145
  35. data/ext/puma_http11/http11_parser.rl +0 -149
  36. data/ext/puma_http11/http11_parser_common.rl +0 -54
  37. data/ext/puma_http11/mini_ssl.c +0 -832
  38. data/ext/puma_http11/no_ssl/PumaHttp11Service.java +0 -15
  39. data/ext/puma_http11/org/jruby/puma/Http11.java +0 -226
  40. data/ext/puma_http11/org/jruby/puma/Http11Parser.java +0 -455
  41. data/ext/puma_http11/org/jruby/puma/MiniSSL.java +0 -508
  42. data/ext/puma_http11/puma_http11.c +0 -492
  43. data/lib/puma/app/status.rb +0 -96
  44. data/lib/puma/binder.rb +0 -501
  45. data/lib/puma/cli.rb +0 -243
  46. data/lib/puma/client.rb +0 -632
  47. data/lib/puma/cluster/worker.rb +0 -182
  48. data/lib/puma/cluster/worker_handle.rb +0 -97
  49. data/lib/puma/cluster.rb +0 -562
  50. data/lib/puma/commonlogger.rb +0 -115
  51. data/lib/puma/configuration.rb +0 -391
  52. data/lib/puma/const.rb +0 -289
  53. data/lib/puma/control_cli.rb +0 -316
  54. data/lib/puma/detect.rb +0 -45
  55. data/lib/puma/dsl.rb +0 -1204
  56. data/lib/puma/error_logger.rb +0 -113
  57. data/lib/puma/events.rb +0 -57
  58. data/lib/puma/io_buffer.rb +0 -46
  59. data/lib/puma/jruby_restart.rb +0 -27
  60. data/lib/puma/json_serialization.rb +0 -96
  61. data/lib/puma/launcher/bundle_pruner.rb +0 -104
  62. data/lib/puma/launcher.rb +0 -484
  63. data/lib/puma/log_writer.rb +0 -147
  64. data/lib/puma/minissl/context_builder.rb +0 -95
  65. data/lib/puma/minissl.rb +0 -458
  66. data/lib/puma/null_io.rb +0 -61
  67. data/lib/puma/plugin/systemd.rb +0 -90
  68. data/lib/puma/plugin/tmp_restart.rb +0 -36
  69. data/lib/puma/plugin.rb +0 -111
  70. data/lib/puma/rack/builder.rb +0 -297
  71. data/lib/puma/rack/urlmap.rb +0 -93
  72. data/lib/puma/rack_default.rb +0 -24
  73. data/lib/puma/reactor.rb +0 -125
  74. data/lib/puma/request.rb +0 -671
  75. data/lib/puma/runner.rb +0 -213
  76. data/lib/puma/sd_notify.rb +0 -149
  77. data/lib/puma/server.rb +0 -664
  78. data/lib/puma/single.rb +0 -69
  79. data/lib/puma/state_file.rb +0 -68
  80. data/lib/puma/thread_pool.rb +0 -434
  81. data/lib/puma/util.rb +0 -141
  82. data/lib/puma.rb +0 -78
  83. data/lib/rack/handler/puma.rb +0 -141
  84. data/tools/Dockerfile +0 -16
  85. data/tools/trickletest.rb +0 -44
data/docs/kubernetes.md DELETED
@@ -1,78 +0,0 @@
- # Kubernetes
-
- ## Running Puma in Kubernetes
-
- In general, running Puma in Kubernetes works as-is; no special configuration is needed beyond what you would write anyway to get a new Kubernetes Deployment going. There is one known interaction between the way Kubernetes handles pod termination and how Puma handles `SIGINT`, where some requests might be sent to Puma after it has already entered graceful shutdown mode and is no longer accepting requests. This can lead to dropped requests during rolling deploys. A workaround for this is listed at the end of this article.
-
- ## Basic setup
-
- Assuming you already have a running cluster and Docker image repository, you can run a simple Puma app with the following example Dockerfile and Deployment specification. These are meant as examples only and are deliberately very minimal, to the point of skipping many options that are recommended for running in production, like healthchecks and environment-variable configuration with ConfigMaps. In general, you should check the [Kubernetes documentation](https://kubernetes.io/docs/home/) and [Docker documentation](https://docs.docker.com/) for a more comprehensive overview of the available options.
-
- A basic Dockerfile example:
- ```
- # can be updated to newer ruby versions
- FROM ruby:2.5.1-alpine
- RUN apk update && apk add build-base # and any other packages you need
-
- # Only rebuild gem bundle if Gemfile changes
- COPY Gemfile Gemfile.lock ./
- RUN bundle install
-
- # Copy over the rest of the files
- COPY . .
-
- # Open up port and start the service
- EXPOSE 9292
- CMD bundle exec rackup -o 0.0.0.0
- ```
-
- A sample `deployment.yaml`:
- ```
- ---
- apiVersion: apps/v1
- kind: Deployment
- metadata:
-   name: my-awesome-puma-app
- spec:
-   selector:
-     matchLabels:
-       app: my-awesome-puma-app
-   template:
-     metadata:
-       labels:
-         app: my-awesome-puma-app
-         service: my-awesome-puma-app
-     spec:
-       containers:
-       - name: my-awesome-puma-app
-         image: <your image here>
-         ports:
-         - containerPort: 9292
- ```
-
- ## Graceful shutdown and pod termination
-
- For some high-throughput systems, it is possible that some HTTP requests will return responses with response codes in the 5XX range during a rolling deploy to a new version. This is caused by [the way that Kubernetes terminates a pod during rolling deploys](https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-terminating-with-grace):
-
- 1. The replication controller determines a pod should be shut down.
- 2. The Pod is set to the "Terminating" state and removed from the endpoints list of all Services, so that it receives no more requests.
- 3. The pod's pre-stop hook gets called. The default for this is to send `SIGTERM` to the process inside the pod.
- 4. The pod has up to `terminationGracePeriodSeconds` (default: 30 seconds) to gracefully shut down. Puma will do this (after it receives `SIGTERM`) by closing down the socket that accepts new requests and finishing any requests already running before exiting the Puma process.
- 5. If the pod is still running after `terminationGracePeriodSeconds` has elapsed, the pod receives `SIGKILL` to make sure the process inside it stops. After that, the container exits and all other Kubernetes objects associated with it are cleaned up.
-
- There is a subtle race condition between steps 2 and 3: the replication controller does not synchronously remove the pod from the Services AND THEN call the pre-stop hook of the pod; rather, it asynchronously sends "remove this pod from your endpoints" requests to the Services and then immediately proceeds to invoke the pod's pre-stop hook. If the Service controller (typically something like nginx or haproxy) handles this request "too" late (due to internal lag or network latency between the replication and Service controllers), then it is possible that the Service controller will send one or more requests to a Puma process which has already shut down its listening socket. These requests will then fail with 5XX error codes.
-
- The reason Kubernetes works this way, rather than handling step 2 synchronously, is the CAP theorem: in a distributed system there is no way to guarantee that any message will arrive promptly. In particular, waiting for all Service controllers to report back might get stuck for an indefinite time if one of them has already been terminated or if there has been a network split. A way to work around this is to add a sleep to the pre-stop hook of the same length as the `terminationGracePeriodSeconds` time. This will allow the Puma process to keep serving new requests during the entire grace period, although it will no longer receive new requests after all Service controllers have propagated the removal of the pod from their endpoint lists. Then, after `terminationGracePeriodSeconds`, the pod receives `SIGKILL` and closes down. If your process can't handle `SIGKILL` properly, for example because it needs to release locks in different services, you can also sleep for a shorter period (and/or increase `terminationGracePeriodSeconds`) as long as the time slept is longer than the time that your Service controllers take to propagate the pod removal. The downside of this workaround is that all pods will take at minimum the amount of time slept to shut down, and this will increase the time required for your rolling deploy.
-
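- A minimal sketch of that workaround as a pod spec fragment (the sleep length is illustrative and should match your own `terminationGracePeriodSeconds`):
- ```
-     spec:
-       terminationGracePeriodSeconds: 30
-       containers:
-       - name: my-awesome-puma-app
-         lifecycle:
-           preStop:
-             exec:
-               command: ["sh", "-c", "sleep 30"]
- ```
-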
- More discussions and links to relevant articles can be found in https://github.com/puma/puma/issues/2343.
-
- ## Workers Per Pod, and Other Config Issues
-
- With containerization, you will have to make a decision about how "big" to make each pod. Should you run 2 pods with 50 workers each? 25 pods, each with 4 workers? 100 pods, with each Puma running in single mode? Each scenario represents the same total amount of capacity (100 Puma processes that can respond to requests), but there are tradeoffs to make; a sample configuration follows this list.
-
- * Worker counts should be somewhere between 4 and 32 in most cases. You want more than 4 in order to minimize time spent in request queueing waiting for a free Puma worker, but probably fewer than ~32, because otherwise autoscaling works in too large an increment, and that many workers probably won't fit very well into your nodes. In any queueing system, queue time is proportional to 1/n, where n is the number of things pulling from the queue. Each pod will have its own request queue (i.e., the socket backlog). If you have 4 pods with 1 worker each (4 request queues), wait times are, proportionally, about 4 times higher than if you had 1 pod with 4 workers (1 request queue).
- * Unless you have a very I/O-heavy application (50%+ time spent waiting on I/O), use the default thread count (5 for MRI). Using higher numbers of threads with low I/O wait (<50%) will lead to additional request queueing time (latency!) and additional memory usage.
- * More processes per pod reduces memory usage per process, because of copy-on-write memory and because the cost of the single master process is "amortized" over more child processes.
- * Don't run fewer than 4 processes per pod if you can. Low numbers of processes per pod will lead to high request queueing, which means you will have to run more pods.
- * If multithreaded, allocate 1 CPU per worker. If single-threaded, allocate 0.75 CPUs per worker. Most web applications spend about 25% of their time in I/O - but when you're running multi-threaded, your Puma process will have higher CPU usage and should be able to fully saturate a CPU core.
- * Most Puma processes will use about 512MB-1GB per worker, and about 1GB for the master process. However, you probably shouldn't bother with setting memory limits lower than around 2GB per process, because most places you are deploying will have 2GB of RAM per CPU. A sensible memory limit for a Puma configuration of 4 child workers might be something like 8GB (1GB for the master, 7GB for the 4 children).
-
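- A configuration following these guidelines might look like this (the numbers are illustrative, not prescriptive):
- ```ruby
- # config/puma.rb
- workers 4      # 4 processes per pod
- threads 5, 5   # the MRI default thread count
- ```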
data/docs/nginx.md DELETED
@@ -1,80 +0,0 @@
- # Nginx configuration example file
-
- This is a very common setup using an upstream. It was adapted from some Capistrano recipe I found on the Internet a while ago.
-
- ```nginx
- upstream myapp {
-   server unix:///myapp/tmp/puma.sock;
- }
-
- server {
-   listen 80;
-   server_name myapp.com;
-
-   # ~2 seconds is often enough for most folks to parse HTML/CSS and
-   # retrieve needed images/icons/frames, connections are cheap in
-   # nginx so increasing this is generally safe...
-   keepalive_timeout 5;
-
-   # path for static files
-   root /myapp/public;
-   access_log /myapp/log/nginx.access.log;
-   error_log /myapp/log/nginx.error.log info;
-
-   # this rewrites all the requests to the maintenance.html
-   # page if it exists in the doc root. This is for capistrano's
-   # disable web task
-   if (-f $document_root/maintenance.html) {
-     rewrite ^(.*)$ /maintenance.html last;
-     break;
-   }
-
-   location / {
-     proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
-     proxy_set_header Host $host;
-
-     # If the file exists as a static file serve it directly without
-     # running all the other rewrite tests on it
-     if (-f $request_filename) {
-       break;
-     }
-
-     # check for index.html for directory index
-     # if it's there on the filesystem then rewrite
-     # the url to add /index.html to the end of it
-     # and then break to send it to the next config rules.
-     if (-f $request_filename/index.html) {
-       rewrite (.*) $1/index.html break;
-     }
-
-     # this is the meat of the rack page caching config
-     # it adds .html to the end of the url and then checks
-     # the filesystem for that file. If it exists, then we
-     # rewrite the url to have explicit .html on the end
-     # and then send it on its way to the next config rule.
-     # if there is no file on the fs then it sets all the
-     # necessary headers and proxies to our upstream pumas
-     if (-f $request_filename.html) {
-       rewrite (.*) $1.html break;
-     }
-
-     if (!-f $request_filename) {
-       proxy_pass http://myapp;
-       break;
-     }
-   }
-
-   # Now this supposedly should work as it gets the filenames with querystrings that Rails provides.
-   # BUT there's a chance it could break the ajax calls.
-   location ~* \.(ico|css|gif|jpe?g|png|js)(\?[0-9]+)?$ {
-     expires max;
-     break;
-   }
-
-   # Error pages
-   # error_page 500 502 503 504 /500.html;
-   location = /500.html {
-     root /myapp/current/public;
-   }
- }
- ```
data/docs/plugins.md DELETED
@@ -1,38 +0,0 @@
- ## Plugins
-
- Puma 3.0 added support for plugins that can augment configuration and service
- operations.
-
- There are two canonical plugins to aid in the development of new plugins:
-
- * [tmp\_restart](https://github.com/puma/puma/blob/master/lib/puma/plugin/tmp_restart.rb):
-   Restarts the server if the file `tmp/restart.txt` is touched
- * [heroku](https://github.com/puma/puma-heroku/blob/master/lib/puma/plugin/heroku.rb):
-   Packages up the default configuration used by Puma on Heroku (being sunset
-   with the release of Puma 5.0)
-
- Plugins are activated in a Puma configuration file (such as `config/puma.rb`)
- by adding `plugin "name"`, such as `plugin "heroku"`.
-
- Plugins are activated based on path requirements, so activating the `heroku`
- plugin is much like `require "puma/plugin/heroku"`. This allows gems to provide
- multiple plugins (as well as unrelated gems to provide Puma plugins).
-
- The `tmp_restart` plugin comes with Puma, so it is always available.
-
- To use the `heroku` plugin, add `puma-heroku` to your Gemfile or install it.
-
- ### API
-
- #### Server-wide hooks
-
- Plugins can use a couple of hooks at the server level: `start` and `config`.
-
- `start` runs when the server has started and allows the plugin to initiate other
- functionality to augment Puma.
-
- `config` runs when the server is being configured and receives a `Puma::DSL`
- object that is useful for additional configuration.
-
- Public methods in [`Puma::Plugin`](../lib/puma/plugin.rb) are treated as a
- public API for plugins.
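-
- A minimal sketch of the `start` hook (the plugin name, file path, and log line are hypothetical, not part of Puma):
-
- ```ruby
- # lib/puma/plugin/heartbeat.rb — a hypothetical plugin
- require 'puma/plugin'
-
- Puma::Plugin.create do
-   # `start` is invoked once the server has started.
-   def start(launcher)
-     # `in_background` is a Puma::Plugin helper that runs the block in a thread.
-     in_background do
-       loop do
-         sleep 60
-         launcher.log_writer.log "heartbeat plugin: still alive"
-       end
-     end
-   end
- end
- ```
-
- It would then be activated with `plugin "heartbeat"` in the configuration file.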
data/docs/rails_dev_mode.md DELETED
@@ -1,28 +0,0 @@
- # Running Puma in Rails Development Mode
-
- ## "Loopback requests"
-
- Be cautious of "loopback requests," where a Rails application executes a request to a server that, in turn, results in another request back to the same Rails application before the first request completes. Having a loopback request will trigger [Rails' load interlock](https://guides.rubyonrails.org/threading_and_code_execution.html#load-interlock) mechanism. The load interlock mechanism prevents a thread from using Rails' autoloading mechanism to load constants while the application code is still running inside another thread.
-
- This issue only occurs in the development environment, as Rails' load interlock is not used in production environments. Although we're not sure, we believe this issue may not occur with the new `zeitwerk` code loader.
-
- ### Solutions
-
- #### 1. Bypass Rails' load interlock with `.permit_concurrent_loads`
-
- Wrap the first request inside a block that will allow concurrent loads: [`ActiveSupport::Dependencies.interlock.permit_concurrent_loads`](https://guides.rubyonrails.org/threading_and_code_execution.html#permit-concurrent-loads). Anything wrapped inside the `.permit_concurrent_loads` block will bypass the load interlock mechanism, allowing new threads to access the Rails environment and boot properly.
-
- ###### Example
-
- ```ruby
- response = ActiveSupport::Dependencies.interlock.permit_concurrent_loads do
-   # Your HTTP request code here. For example:
-   Faraday.post url, data: 'foo'
- end
-
- do_something_with response
- ```
-
- #### 2. Use multiple processes on Puma
-
- Alternatively, you may enable multiple (single-threaded) workers on Puma. By doing so, you are sidestepping the problem by creating multiple processes rather than new threads. However, this workaround is not ideal, because debugging tools such as [byebug](https://github.com/deivid-rodriguez/byebug/issues/487) and [pry](https://github.com/pry/pry/issues/2153) work poorly with any multi-process web server.
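-
- A minimal sketch of that setup in `config/puma.rb` (the worker count is illustrative):
-
- ```ruby
- # config/puma.rb — development only
- workers 3      # multiple processes sidestep the load interlock
- threads 1, 1   # keep each worker single-threaded
- ```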
data/docs/restart.md DELETED
@@ -1,64 +0,0 @@
- Puma provides three distinct kinds of restart operations, each for different use cases. This document describes "hot restarts" and "phased restarts." The third kind of restart operation is called "refork" and is described in the documentation for [`fork_worker`](fork_worker.md).
-
- ## Hot restart
-
- To perform a "hot" restart, Puma performs an `exec` operation to start the process up again, so no memory is shared between the old process and the new process. As a result, it is safe to issue a restart at any place where you would manually stop Puma and start it again. In particular, it is safe to upgrade Puma itself using a hot restart.
-
- If the new process is unable to load, it will simply exit. You should therefore run Puma under a process monitor when using it in production.
-
- ### How-to
-
- Any of the following will cause a Puma server to perform a hot restart:
-
- * Send the `puma` process the `SIGUSR2` signal
- * Issue a `GET` request to the Puma status/control server with the path `/restart`
- * Issue `pumactl restart` (this uses the control server method if available, otherwise sends the `SIGUSR2` signal to the process)
-
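- For example, using the PID file (the path is illustrative):
-
- ```sh
- $ pumactl -p $(cat tmp/puma.pid) restart
- ```
-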
- ### Supported configurations
-
- * Works in cluster mode and single mode
- * Supported on all platforms
-
- ### Client experience
-
- * All platforms: clients with an in-flight request are served responses before the connection is closed gracefully. Puma gracefully disconnects any idle HTTP persistent connections before restarting.
- * On MRI or TruffleRuby on Linux and BSD: clients who connect just before the server restarts may experience increased latency while the server stops and starts again, but their connections will not be closed prematurely.
- * On Windows and JRuby: clients who connect just before a restart may experience "connection reset" errors.
-
- ### Additional notes
-
- * Only one version of the application is running at a time.
- * `on_restart` is invoked just before the server shuts down. This can be used to clean up resources (like long-lived database connections) gracefully. Since Ruby 2.0, it is not typically necessary to explicitly close file descriptors on restart. This is because any file descriptor opened by Ruby will have the `FD_CLOEXEC` flag set, meaning that file descriptors are closed on `exec`. `on_restart` is useful, though, if your application needs to perform any more graceful protocol-specific shutdown procedures before closing connections; a sketch follows below.
-
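- A minimal sketch of an `on_restart` hook (the connection being closed is illustrative):
-
- ```ruby
- # config/puma.rb
- on_restart do
-   # Close long-lived connections gracefully before the exec.
-   $redis.close if defined?($redis) && $redis
- end
- ```
-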
- ## Phased restart
-
- Phased restarts replace all running workers in a Puma cluster. This is a useful way to gracefully upgrade the application that Puma is serving. A phased restart works by first killing an old worker, then starting a new worker, and waiting until the new worker has successfully started before proceeding to the next worker. This process continues until all workers are replaced. The master process is not restarted.
-
- ### How-to
-
- Any of the following will cause a Puma server to perform a phased restart:
-
- * Send the `puma` process the `SIGUSR1` signal
- * Issue a `GET` request to the Puma status/control server with the path `/phased-restart`
- * Issue `pumactl phased-restart` (this uses the control server method if available, otherwise sends the `SIGUSR1` signal to the process)
-
- ### Supported configurations
-
- * Works in cluster mode only
- * To support upgrading the application that Puma is serving, ensure `prune_bundler` is enabled and that `preload_app!` is disabled
- * Supported on all platforms where cluster mode is supported
-
- ### Client experience
-
- * In-flight requests are always served responses before the connection is closed gracefully
- * Idle persistent connections are gracefully disconnected
- * New connections are not lost, and clients will not experience any increase in latency (as long as the number of configured workers is greater than one)
-
- ### Additional notes
-
- * When a phased restart begins, the Puma master process changes its current working directory to the directory specified by the `directory` option. If `directory` is set to a symlink, this is automatically re-evaluated, so this mechanism can be used to upgrade the application.
- * On a single server, it's possible that two versions of the application are running concurrently during a phased restart.
- * `on_restart` is not invoked.
- * Phased restarts can be slow for Puma clusters with many workers. Hot restarts often complete more quickly, but at the cost of increased latency during the restart.
- * Phased restarts cannot be used to upgrade any gems loaded by the Puma master process, including `puma` itself, anything in `extra_runtime_dependencies`, or dependencies thereof. Upgrading other gems is safe.
- * If you remove the gems from old releases as part of your deployment strategy, there are additional considerations. Do not put any gems into `extra_runtime_dependencies` that have native extensions or that have dependencies with native extensions (one common example is `puma_worker_killer` and its dependency on `ffi`); workers will fail on boot during a phased restart. The underlying issue is recorded in [an issue on the rubygems project](https://github.com/rubygems/rubygems/issues/4004). Hot restarts are your only option here if you need these dependencies.
data/docs/signals.md DELETED
@@ -1,98 +0,0 @@
- The [unix signal](https://en.wikipedia.org/wiki/Unix_signal) is a method of sending messages between [processes](https://en.wikipedia.org/wiki/Process_(computing)). When a signal is sent, the operating system interrupts the target process's normal flow of execution. There are standard signals that are used to stop a process, but there are also custom signals that can be used for other purposes. This document is an attempt to list all supported signals that Puma will respond to. In general, signals need only be sent to the master process of a cluster.
-
- ## Sending Signals
-
- If you are new to signals, it can be helpful to see how they are used. When a process starts in a *nix-like operating system, it will have a [PID - or process identifier](https://en.wikipedia.org/wiki/Process_identifier) that can be used to send signals to the process. For demonstration, we will create an infinitely running process by tailing a file:
-
- ```sh
- $ echo "foo" >> my.log
- $ irb
- > pid = Process.spawn 'tail -f my.log'
- ```
-
- From here, we can see that the tail process is running by using the `ps` command:
-
- ```sh
- $ ps aux | grep tail
- schneems 87152 0.0 0.0 2432772 492 s032 S+ 12:46PM 0:00.00 tail -f my.log
- ```
-
- You can send a signal in Ruby using the [Process module](https://www.ruby-doc.org/core-2.1.1/Process.html#kill-method):
-
- ```
- $ irb
- > puts pid
- => 87152
- Process.detach(pid) # https://ruby-doc.org/core-2.1.1/Process.html#method-c-detach
- Process.kill("TERM", pid)
- ```
-
- Now you will see via `ps` that there is no more `tail` process. Sometimes when referring to signals, the `SIG` prefix will be used. For example, `SIGTERM` is equivalent to sending `TERM` via `Process.kill`.
-
- ## Puma Signals
-
- A Puma cluster responds to these signals:
-
- - `TTIN` increment the worker count by 1
- - `TTOU` decrement the worker count by 1
- - `TERM` send `TERM` to each worker. The workers will attempt to finish their requests, then exit.
- - `USR2` restart workers. This also reloads the Puma configuration file, if there is one.
- - `USR1` restart workers in phases, a rolling restart. This will not reload the configuration file.
- - `HUP ` reopen log files defined in the `stdout_redirect` configuration parameter. If there is no `stdout_redirect` option provided, it will behave like `INT`
- - `INT ` equivalent of sending Ctrl-C to the cluster. Puma will attempt to finish, then exit.
- - `CHLD`
- - `URG ` refork workers in phases from worker 0 if the `fork_worker` option is enabled.
- - `INFO` print backtraces of all Puma threads
-
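- For example, to add a worker to a running cluster by signaling the master process (the PID file path is illustrative):
-
- ```sh
- $ kill -TTIN $(cat tmp/puma.pid)
- ```
-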
- ## Callback order for different signals
-
- ### Start application
-
- ```
- puma configuration file reloaded, if there is one
- * Pruning Bundler environment
- puma configuration file reloaded, if there is one
-
- before_fork
- on_worker_fork
- after_worker_fork
-
- Gemfile in context
-
- on_worker_boot
-
- Code of the app is loaded and running
- ```
-
- ### Send USR2
-
- ```
- on_worker_shutdown
- on_restart
-
- puma configuration file reloaded, if there is one
-
- before_fork
- on_worker_fork
- after_worker_fork
-
- Gemfile in context
-
- on_worker_boot
-
- Code of the app is loaded and running
- ```
-
- ### Send USR1
-
- ```
- on_worker_shutdown
- on_worker_fork
- after_worker_fork
-
- Gemfile in context
-
- on_worker_boot
-
- Code of the app is loaded and running
- ```
data/docs/stats.md DELETED
@@ -1,142 +0,0 @@
- ## Accessing stats
-
- Stats can be accessed in two ways:
-
- ### control server
-
- `$ pumactl stats` or `GET /stats`
-
- [Read more about `pumactl` and the control server in the README](https://github.com/puma/puma#controlstatus-server).
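-
- The control server must be enabled for this to work, e.g. (socket path and token illustrative):
-
- ```ruby
- # config/puma.rb
- activate_control_app 'unix://tmp/pumactl.sock', { auth_token: '12345' }
- ```
-
- Then `pumactl -C unix://tmp/pumactl.sock -T 12345 stats` prints the stats JSON.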
-
- ### Puma.stats
-
- `Puma.stats` produces a JSON string. `Puma.stats_hash` produces a Ruby hash.
-
- #### in single mode
-
- Invoke `Puma.stats` anywhere at runtime, e.g. in a Rails initializer:
-
- ```ruby
- # config/initializers/puma_stats.rb
-
- Thread.new do
-   loop do
-     sleep 30
-     puts Puma.stats
-   end
- end
- ```
-
- #### in cluster mode
-
- Invoke `Puma.stats` from the master process:
-
- ```ruby
- # config/puma.rb
-
- before_fork do
-   Thread.new do
-     loop do
-       puts Puma.stats
-       sleep 30
-     end
-   end
- end
- ```
-
- ## Explanation of stats
-
- `Puma.stats` returns different information and a different structure depending on whether Puma is in single or cluster mode. There is one top-level attribute that is common to both modes:
-
- * started_at: when Puma was started
-
- ### single mode and individual workers in cluster mode
-
- When Puma runs in single mode, these stats are available at the top level. When Puma runs in cluster mode, these stats are available within the `worker_status` array in a hash labeled `last_status`, in an array of hashes where one hash represents each worker.
-
- * backlog: requests that are waiting for a thread to become available. If this is above 0, you need more capacity [always true?]
- * running: how many threads are running
- * pool_capacity: the number of requests that the server is capable of taking right now. For example, if the number is 5, then it means there are 5 threads sitting idle ready to take a request. If one request comes in, then the value would be 4 until it finishes processing. If the minimum threads allowed is zero, this number will still have a maximum value of the maximum threads allowed.
- * max_threads: the maximum number of threads Puma is configured to spool per worker
- * requests_count: the number of requests this worker has served since starting
-
- ### cluster mode
-
- * phase: which phase of restart the process is in, during [phased restart](https://github.com/puma/puma/blob/master/docs/restart.md)
- * workers: ??
- * booted_workers: how many workers are currently running?
- * old_workers: ??
- * worker_status: array of hashes of info for each worker (see below)
-
- ### worker status
-
- * started_at: when the worker started
- * pid: the process id of the worker process
- * index: each worker gets a number. If Puma is configured to have 3 workers, then this will be 0, 1, or 2
- * booted: if it's done booting [?]
- * last_checkin: last time the worker responded to the master process' heartbeat check
- * last_status: a hash of info about the worker's state handling requests. See the explanation for this in the "single mode and individual workers in cluster mode" section above.
-
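- A short sketch of reading these fields programmatically (this assumes `Puma.stats_hash` returns symbol keys, as in recent Puma versions):
-
- ```ruby
- stats = Puma.stats_hash
- if stats[:worker_status]                     # cluster mode
-   stats[:worker_status].each do |w|
-     puts "worker #{w[:index]}: backlog=#{w[:last_status][:backlog]}"
-   end
- else                                         # single mode
-   puts "backlog=#{stats[:backlog]}"
- end
- ```
-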
- ## Examples
-
- Here are two example stats hashes produced by `Puma.stats`:
-
- ### single
-
- ```json
- {
-   "started_at": "2021-01-14T07:12:35Z",
-   "backlog": 0,
-   "running": 5,
-   "pool_capacity": 5,
-   "max_threads": 5,
-   "requests_count": 3
- }
- ```
-
- ### cluster
-
- ```json
- {
-   "started_at": "2021-01-14T07:09:17Z",
-   "workers": 2,
-   "phase": 0,
-   "booted_workers": 2,
-   "old_workers": 0,
-   "worker_status": [
-     {
-       "started_at": "2021-01-14T07:09:24Z",
-       "pid": 64136,
-       "index": 0,
-       "phase": 0,
-       "booted": true,
-       "last_checkin": "2021-01-14T07:11:09Z",
-       "last_status": {
-         "backlog": 0,
-         "running": 5,
-         "pool_capacity": 5,
-         "max_threads": 5,
-         "requests_count": 2
-       }
-     },
-     {
-       "started_at": "2021-01-14T07:09:24Z",
-       "pid": 64137,
-       "index": 1,
-       "phase": 0,
-       "booted": true,
-       "last_checkin": "2021-01-14T07:11:09Z",
-       "last_status": {
-         "backlog": 0,
-         "running": 5,
-         "pool_capacity": 5,
-         "max_threads": 5,
-         "requests_count": 1
-       }
-     }
-   ]
- }
- ```