pitchfork 0.1.0
- checksums.yaml +7 -0
- data/.git-blame-ignore-revs +3 -0
- data/.gitattributes +5 -0
- data/.github/workflows/ci.yml +30 -0
- data/.gitignore +23 -0
- data/COPYING +674 -0
- data/Dockerfile +4 -0
- data/Gemfile +9 -0
- data/Gemfile.lock +30 -0
- data/LICENSE +67 -0
- data/README.md +123 -0
- data/Rakefile +72 -0
- data/docs/Application_Timeouts.md +74 -0
- data/docs/CONFIGURATION.md +388 -0
- data/docs/DESIGN.md +86 -0
- data/docs/FORK_SAFETY.md +80 -0
- data/docs/PHILOSOPHY.md +90 -0
- data/docs/REFORKING.md +113 -0
- data/docs/SIGNALS.md +38 -0
- data/docs/TUNING.md +106 -0
- data/examples/constant_caches.ru +43 -0
- data/examples/echo.ru +25 -0
- data/examples/hello.ru +5 -0
- data/examples/nginx.conf +156 -0
- data/examples/pitchfork.conf.minimal.rb +5 -0
- data/examples/pitchfork.conf.rb +77 -0
- data/examples/unicorn.socket +11 -0
- data/exe/pitchfork +116 -0
- data/ext/pitchfork_http/CFLAGS +13 -0
- data/ext/pitchfork_http/c_util.h +116 -0
- data/ext/pitchfork_http/child_subreaper.h +25 -0
- data/ext/pitchfork_http/common_field_optimization.h +130 -0
- data/ext/pitchfork_http/epollexclusive.h +124 -0
- data/ext/pitchfork_http/ext_help.h +38 -0
- data/ext/pitchfork_http/extconf.rb +14 -0
- data/ext/pitchfork_http/global_variables.h +97 -0
- data/ext/pitchfork_http/httpdate.c +79 -0
- data/ext/pitchfork_http/pitchfork_http.c +4318 -0
- data/ext/pitchfork_http/pitchfork_http.rl +1024 -0
- data/ext/pitchfork_http/pitchfork_http_common.rl +76 -0
- data/lib/pitchfork/app/old_rails/static.rb +59 -0
- data/lib/pitchfork/children.rb +124 -0
- data/lib/pitchfork/configurator.rb +314 -0
- data/lib/pitchfork/const.rb +23 -0
- data/lib/pitchfork/http_parser.rb +206 -0
- data/lib/pitchfork/http_response.rb +63 -0
- data/lib/pitchfork/http_server.rb +822 -0
- data/lib/pitchfork/launcher.rb +9 -0
- data/lib/pitchfork/mem_info.rb +36 -0
- data/lib/pitchfork/message.rb +130 -0
- data/lib/pitchfork/mold_selector.rb +29 -0
- data/lib/pitchfork/preread_input.rb +33 -0
- data/lib/pitchfork/refork_condition.rb +21 -0
- data/lib/pitchfork/select_waiter.rb +9 -0
- data/lib/pitchfork/socket_helper.rb +199 -0
- data/lib/pitchfork/stream_input.rb +152 -0
- data/lib/pitchfork/tee_input.rb +133 -0
- data/lib/pitchfork/tmpio.rb +35 -0
- data/lib/pitchfork/version.rb +8 -0
- data/lib/pitchfork/worker.rb +244 -0
- data/lib/pitchfork.rb +158 -0
- data/pitchfork.gemspec +30 -0
- metadata +137 -0
data/docs/REFORKING.md
ADDED
@@ -0,0 +1,113 @@
## Reforking

Reforking is `pitchfork`'s main feature. To understand how it works, you must first understand Copy-on-Write.

### Copy-on-Write

In old UNIX systems of the ’70s or ’80s, forking a process involved copying its entire addressable memory over
to the new process address space, effectively doubling the memory usage. But since the mid ’90s, that’s no longer
true, as most, if not all, fork implementations are now sophisticated enough to trick processes into thinking
they have their own private memory regions, while in reality they’re sharing them with other processes.

When the child process is forked, its page tables are initialized to point to the parent’s memory pages. Later on,
if either the parent or the child tries to write to one of these pages, the operating system is notified and will
actually copy the page before it’s modified.

This means that if neither the child nor the parent writes to these shared pages after the fork happens,
forked processes are essentially free.
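The mechanics above can be sketched with a plain `fork`. This is a minimal, standalone illustration (not `pitchfork` code) showing that a child's write lands on a private copy of the page while the parent's data stays untouched:

```ruby
# Minimal Copy-on-Write illustration (any Unix with fork; not pitchfork code).
big_array = Array.new(1_000_000) { |i| i } # allocated once, before forking

child = fork do
  big_array[0]      # reading shared pages copies nothing
  big_array[0] = -1 # writing makes the OS copy the affected page, for this process only
  exit!(big_array[0] == -1 ? 0 : 1)
end
Process.wait(child)

# The parent still sees its original data: the child's write never left
# the child's private copy of the page.
puts big_array[0] # => 0
```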
### Shared Memory Invalidation in Ruby

So in theory, preforking servers shouldn't use more memory than threaded servers.

However, in a Ruby process, there are generally a lot of memory regions that are lazily initialized.
These include the Ruby Virtual Machine's inline caches, JITed code if you use YJIT, and also
some common patterns in applications, such as memoization:

```ruby
module MyCode
  def self.some_data
    @some_data ||= File.read("path/to/something")
  end
end
```

However, since workers are forked right after boot, most codepaths have never been executed,
so most of these caches are not yet initialized.

As more code gets executed, more and more memory pages get invalidated. If you were to graph the ratio
of shared memory of a Ruby process over time, you'd likely see a logarithmic curve, with a quick degradation
during the first few processed requests as the most common code paths get warmed up, and then a stabilization.

### Reforking

That is where reforking helps. Since most of these invalidations only happen when a codepath is executed for the
first time, if you take a warmed-up worker out of rotation and use it to fork new workers, warmed-up pages will
be shared again, and most of them won't be invalidated anymore.

When you start `pitchfork` it forks the desired number of workers:

```
PID   COMMAND
100   \_ pitchfork master
101       \_ pitchfork worker[0] (gen:0)
102       \_ pitchfork worker[1] (gen:0)
103       \_ pitchfork worker[2] (gen:0)
104       \_ pitchfork worker[3] (gen:0)
```

When reforking is triggered, one of the workers is selected to become a `mold` and is taken out of rotation.
Once promoted, molds no longer process any incoming HTTP requests; they become inactive:

```
PID   COMMAND
100   \_ pitchfork master
101       \_ pitchfork mold (gen:1)
102       \_ pitchfork worker[1] (gen:0)
103       \_ pitchfork worker[2] (gen:0)
104       \_ pitchfork worker[3] (gen:0)
```

When a new mold has been promoted, `pitchfork` starts a slow rollout of the older workers, replacing them with fresh workers
forked from the mold:

```
PID   COMMAND
100   \_ pitchfork master
101       \_ pitchfork mold (gen:1)
105       \_ pitchfork worker[0] (gen:1)
102       \_ pitchfork worker[1] (gen:0)
103       \_ pitchfork worker[2] (gen:0)
104       \_ pitchfork worker[3] (gen:0)
```

```
PID   COMMAND
100   \_ pitchfork master
101       \_ pitchfork mold (gen:1)
105       \_ pitchfork worker[0] (gen:1)
106       \_ pitchfork worker[1] (gen:1)
103       \_ pitchfork worker[2] (gen:0)
104       \_ pitchfork worker[3] (gen:0)
```

etc.

### Forking Sibling Processes

Normally on unix systems, when calling `fork(2)`, the newly created process is a child of the original one, so forking from the mold would create
a process tree such as:

```
PID   COMMAND
100   \_ pitchfork master
101       \_ pitchfork mold (gen:1)
105           \_ pitchfork worker[0] (gen:1)
```

However, the `pitchfork` master process registers itself as a "child subreaper" via [`PR_SET_CHILD_SUBREAPER`](https://man7.org/linux/man-pages/man2/prctl.2.html).
This means any descendant process that is orphaned will be re-parented as a child of the master rather than a child of the init process (pid 1).

With this in mind, the mold forks twice to create an orphaned process that will get re-attached to the master, effectively forking a sibling rather than a child.
The need for `PR_SET_CHILD_SUBREAPER` is the main reason why reforking is only available on Linux.
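The double-fork trick can be sketched in plain Ruby (Linux only). This is a standalone illustration, not `pitchfork`'s actual implementation (which lives in `ext/pitchfork_http/child_subreaper.h`); here the top-level process plays the master's role, registering itself as a subreaper through `Fiddle`:

```ruby
require "fiddle"

PR_SET_CHILD_SUBREAPER = 36 # from <sys/prctl.h>, Linux only

libc = Fiddle.dlopen(nil)
prctl = Fiddle::Function.new(
  libc["prctl"],
  [Fiddle::TYPE_INT, Fiddle::TYPE_LONG, Fiddle::TYPE_LONG,
   Fiddle::TYPE_LONG, Fiddle::TYPE_LONG],
  Fiddle::TYPE_INT
)
prctl.call(PR_SET_CHILD_SUBREAPER, 1, 0, 0, 0) # this process is now a subreaper

r, w = IO.pipe
intermediate = fork do # first fork: a short-lived child (the mold's role)
  fork do              # second fork: the process we actually want to keep
    sleep 0.2          # give the intermediate time to exit and re-parenting to occur
    w.puts Process.ppid # who is my parent now?
    exit!
  end
  exit!                # orphan the grandchild immediately
end
Process.wait(intermediate)
w.close
new_parent = r.gets.to_i
# The orphaned grandchild was re-parented to the subreaper, i.e. it became
# a sibling of the (now dead) intermediate rather than staying its child.
puts new_parent == Process.pid ? "re-parented to subreaper" : "re-parented elsewhere"
```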
data/docs/SIGNALS.md
ADDED
@@ -0,0 +1,38 @@
## Signal handling

In general, signals need only be sent to the master process. However,
the signals Pitchfork uses internally to communicate with the worker
processes are documented here as well.

### Master Process

* `INT/TERM` - quick shutdown, kills all workers immediately

* `QUIT` - graceful shutdown, waits for workers to finish their
  current request before finishing.

* `USR2` - trigger a manual refork. A worker is promoted as
  a new mold, and existing workers are progressively replaced
  by fresh ones.

* `TTIN` - increment the number of worker processes by one

* `TTOU` - decrement the number of worker processes by one

### Worker Processes

Note: the master uses a pipe to signal workers
instead of `kill(2)` for most cases. Signals still work (and
remain supported for external tools/libraries), however.

Sending signals directly to the worker processes should not normally be
needed. If the master process is running, any exited worker will be
automatically respawned.

* `INT/TERM` - Quick shutdown, immediately exit.
  The master process will respawn a worker to replace this one.
  Immediate shutdown is still triggered using `kill(2)` and not the
  internal pipe as of unicorn 4.8.

* `QUIT` - Gracefully exit after finishing the current request.
  The master process will respawn a worker to replace this one.
data/docs/TUNING.md
ADDED
@@ -0,0 +1,106 @@
# Tuning pitchfork

Pitchfork's performance is generally as good as a (mostly) Ruby web server
can provide. Most often the performance bottleneck is in the web
application running on Pitchfork rather than Pitchfork itself.

## pitchfork Configuration

See Pitchfork::Configurator for details on the config file format.
`worker_processes` is the most commonly needed tuning parameter.

### Pitchfork::Configurator#worker_processes

* `worker_processes` should be scaled to the number of processes your
  backend system(s) can support. DO NOT scale it to the number of
  external network clients your application expects to be serving.
  Pitchfork is NOT for serving slow clients; that is the job of nginx.

* `worker_processes` should be *at* *least* the number of CPU cores on
  a dedicated server (unless you do not have enough memory).
  If your application has occasionally slow responses that are /not/
  CPU-intensive, you may increase this to work around those inefficiencies.

* `Etc.nprocessors` may be used to determine the number of CPU cores present.

* Never, ever, increase `worker_processes` to the point where the system
  runs out of physical memory and hits swap. Production servers should
  never see swap activity.

* Bigger is better. The more `worker_processes` you run, the more you'll
  benefit from Copy-on-Write. If your application uses 1GiB of memory after boot
  and you run `10` worker processes, the relative memory usage per worker will only be
  `~100MiB`, whereas if you only run `5` worker processes, their relative usage will be
  `~200MiB`.
  So if you can choose your hardware, it's preferable to use a smaller number
  of bigger servers rather than a large number of smaller servers.
  The same applies to containers: it's preferable to run a smaller number of larger containers.
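Putting the guidance above together, a config file might derive `worker_processes` from the core count. A sketch (treat the baseline as a starting point, and tune it to your own workload and memory budget):

```ruby
# In your pitchfork config file: one worker per core as a baseline.
require "etc"

worker_processes Etc.nprocessors
```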
### Pitchfork::Configurator#refork_after

* Reforking allows memory pages that have been invalidated by writes
  to be shared again.

* In general, the main sources of shared memory page invalidation in Ruby
  are inline caches and JITed code. This means that calling a method for the
  first time tends to degrade Copy-on-Write performance, and that over time,
  as more and more codepaths get executed at least once, less and less memory
  is shared, until it stabilizes once most codepaths have been warmed up.

* This is why automatic reforking is based on the number of processed requests.
  You want to refork relatively frequently when the `pitchfork` server is fresh,
  and then less and less frequently over time.
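A config sketch matching this advice (the threshold values below are illustrative, not recommendations; see docs/CONFIGURATION.md for the exact `refork_after` semantics):

```ruby
# In your pitchfork config file: refork aggressively while the server is
# fresh, then less and less often as caches warm up.
# Illustrative thresholds: promote a new mold after 50 processed requests,
# then after 100, then after 1000.
refork_after [50, 100, 1000]
```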
### Pitchfork::Configurator#listen Options

* Setting a very low value for the :backlog parameter in "listen"
  directives can allow failover to happen more quickly if your
  cluster is configured for it.

* If you're doing extremely simple benchmarks and getting connection
  errors under high request rates, increasing your :backlog parameter
  above the already-generous default of 1024 can help avoid connection
  errors. Keep in mind this is not recommended for real traffic if
  you have another machine to fail over to (see above).

* :rcvbuf and :sndbuf parameters generally do not need to be set for TCP
  listeners under Linux 2.6 because auto-tuning is enabled. UNIX domain
  sockets do not have auto-tuning buffer sizes, so increasing those will
  allow syscalls and task switches to be saved for larger requests
  and responses. If your app only generates small responses or expects
  small requests, you may shrink the buffer sizes to save memory, too.

* Having socket buffers too large can also be detrimental or have
  little effect. Huge buffers can put more pressure on the allocator
  and may also thrash CPU caches, cancelling out performance gains
  one would normally expect.

* UNIX domain sockets are slightly faster than TCP sockets, but only
  work if nginx is on the same machine.
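The listen options above combine in a config file like this (paths, ports, and sizes are illustrative):

```ruby
# In your pitchfork config file (values are illustrative):
listen "/path/to/.pitchfork.sock", :backlog => 64  # short backlog => quicker failover
listen 8080, :tcp_nopush => true, :backlog => 2048 # generous backlog for benchmarks

# UNIX domain sockets have no auto-tuned buffers; size them explicitly:
# listen "/path/to/.pitchfork.sock", :sndbuf => 64 * 1024, :rcvbuf => 8 * 1024
```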
## Kernel Parameters (Linux sysctl and sysfs)

WARNING: Do not change system parameters unless you know what you're doing!

* net.core.rmem_max and net.core.wmem_max can increase the allowed
  size of :rcvbuf and :sndbuf respectively. This is mostly only useful
  for UNIX domain sockets which do not have auto-tuning buffer sizes.

* For load testing/benchmarking with UNIX domain sockets, you should
  consider increasing net.core.somaxconn or else nginx will start
  failing to connect under heavy load. You may also consider setting
  a higher :backlog to listen on as noted earlier.

* If you're running out of local ports, consider lowering
  net.ipv4.tcp_fin_timeout to 20-30 (default: 60 seconds). Also
  consider widening the usable port range by changing
  net.ipv4.ip_local_port_range.

* Setting net.ipv4.tcp_timestamps=1 will also allow setting
  net.ipv4.tcp_tw_reuse=1 and net.ipv4.tcp_tw_recycle=1, which along
  with the above settings can slow down port exhaustion. Not all
  networks are compatible with these settings; check with your friendly
  network administrator before changing these.

* Increasing the MTU size can reduce framing overhead for larger
  transfers. One often-overlooked detail is that the loopback
  device (usually "lo") can have its MTU increased, too.
data/examples/constant_caches.ru
ADDED
@@ -0,0 +1,43 @@
require "pitchfork/mem_info"

module App
  CONST_NUM = Integer(ENV.fetch("NUM", 100_000))

  CONST_NUM.times do |i|
    class_eval(<<~RUBY, __FILE__, __LINE__ + 1)
      Const#{i} = Module.new

      def self.lookup_#{i}
        Const#{i}
      end
    RUBY
  end

  class_eval(<<~RUBY, __FILE__, __LINE__ + 1)
    def self.warmup
      #{CONST_NUM.times.map { |i| "lookup_#{i}"}.join("\n")}
    end
  RUBY
end

run lambda { |env|
  parent_meminfo = Pitchfork::MemInfo.new(Process.ppid)
  siblings_pids = File.read("/proc/#{Process.ppid}/task/#{Process.ppid}/children").split
  siblings = siblings_pids.map do |pid|
    Pitchfork::MemInfo.new(pid)
  rescue Errno::ENOENT, Errno::ESRCH # The process just died
    nil
  end.compact

  total_pss = parent_meminfo.pss + siblings.map(&:pss).sum
  self_info = Pitchfork::MemInfo.new(Process.pid)

  body = <<~EOS
    Single Worker Memory Usage: #{(self_info.pss / 1024.0).round(1)} MiB
    Total Cluster Memory Usage: #{(total_pss / 1024.0).round(1)} MiB
  EOS

  App.warmup

  [ 200, { 'content-type' => 'text/plain' }, [ body ] ]
}
data/examples/echo.ru
ADDED
@@ -0,0 +1,25 @@
# Example application that echoes read data back to the HTTP client.
# This emulates the old echo protocol people used to run.
#
# An example of using this in a client would be to run:
#   curl --no-buffer -T- http://host:port/
#
# Then type random stuff in your terminal to watch it get echoed back!

class EchoBody < Struct.new(:input)

  def each(&block)
    while buf = input.read(4096)
      yield buf
    end
    self
  end

end

use Rack::Chunked
run lambda { |env|
  /\A100-continue\z/i =~ env['HTTP_EXPECT'] and return [100, {}, []]
  [ 200, { 'content-type' => 'application/octet-stream' },
    EchoBody.new(env['rack.input']) ]
}
data/examples/hello.ru
ADDED
data/examples/nginx.conf
ADDED
@@ -0,0 +1,156 @@
# This example contains the bare minimum to get nginx going with
# unicorn servers. Generally these configuration settings
# are applicable to other HTTP application servers (and not just Ruby
# ones), so if you have one working well for proxying another app
# server, feel free to continue using it.
#
# The only setting we feel strongly about is the fail_timeout=0
# directive in the "upstream" block. max_fails=0 also has the same
# effect as fail_timeout=0 for current versions of nginx and may be
# used in its place.
#
# Users are strongly encouraged to refer to nginx documentation for more
# details and search for other example configs.

# you generally only need one nginx worker unless you're serving
# large amounts of static files which require blocking disk reads
worker_processes 1;

# # drop privileges, root is needed on most systems for binding to port 80
# # (or anything < 1024). Capability-based security may be available for
# # your system and worth checking out so you won't need to be root to
# # start nginx to bind on 80
user nobody nogroup; # for systems with a "nogroup"
# user nobody nobody; # for systems with "nobody" as a group instead

# Feel free to change all paths to suit your needs here, of course
pid /path/to/nginx.pid;
error_log /path/to/nginx.error.log;

events {
  worker_connections 1024; # increase if you have lots of clients
  accept_mutex off; # "on" if nginx worker_processes > 1
  # use epoll; # enable for Linux 2.6+
  # use kqueue; # enable for FreeBSD, OSX
}

http {
  # nginx will find this file in the config directory set at nginx build time
  include mime.types;

  # fallback in case we can't determine a type
  default_type application/octet-stream;

  # click tracking!
  access_log /path/to/nginx.access.log combined;

  # you generally want to serve static files with nginx since
  # unicorn is not and will never be optimized for it
  sendfile on;

  tcp_nopush on; # off may be better for *some* Comet/long-poll stuff
  tcp_nodelay off; # on may be better for some Comet/long-poll stuff

  # we haven't checked to see if Rack::Deflate on the app server is
  # faster or not than doing compression via nginx. It's easier
  # to configure it all in one place here for static files and also
  # to disable gzip for clients who don't get gzip/deflate right.
  # There are other gzip settings that may be needed to deal with
  # bad clients out there, see
  # https://nginx.org/en/docs/http/ngx_http_gzip_module.html
  gzip on;
  gzip_http_version 1.0;
  gzip_proxied any;
  gzip_min_length 500;
  gzip_disable "MSIE [1-6]\.";
  gzip_types text/plain text/html text/xml text/css
             text/comma-separated-values
             text/javascript application/x-javascript
             application/atom+xml;

  # this can be any application server, not just unicorn
  upstream app_server {
    # fail_timeout=0 means we always retry an upstream even if it failed
    # to return a good HTTP response (in case the unicorn master nukes a
    # single worker for timing out).

    # for UNIX domain socket setups:
    server unix:/path/to/.unicorn.sock fail_timeout=0;

    # for TCP setups, point these to your backend servers
    # server 192.168.0.7:8080 fail_timeout=0;
    # server 192.168.0.8:8080 fail_timeout=0;
    # server 192.168.0.9:8080 fail_timeout=0;
  }

  server {
    # enable one of the following if you're on Linux or FreeBSD
    # listen 80 default deferred; # for Linux
    # listen 80 default accept_filter=httpready; # for FreeBSD

    # If you have IPv6, you'll likely want to have two separate listeners.
    # One on IPv4 only (the default), and another on IPv6 only instead
    # of a single dual-stack listener. A dual-stack listener will make
    # for ugly IPv4 addresses in $remote_addr (e.g ":ffff:10.0.0.1"
    # instead of just "10.0.0.1") and potentially trigger bugs in
    # some software.
    # listen [::]:80 ipv6only=on; # deferred or accept_filter recommended

    client_max_body_size 4G;
    server_name _;

    # ~2 seconds is often enough for most folks to parse HTML/CSS and
    # retrieve needed images/icons/frames, connections are cheap in
    # nginx so increasing this is generally safe...
    keepalive_timeout 5;

    # path for static files
    root /path/to/app/current/public;

    # Prefer to serve static files directly from nginx to avoid unnecessary
    # data copies from the application server.
    #
    # try_files directive appeared in nginx 0.7.27 and has stabilized
    # over time. Older versions of nginx (e.g. 0.6.x) required
    # "if (!-f $request_filename)" which was less efficient:
    # https://yhbt.net/unicorn.git/tree/examples/nginx.conf?id=v3.3.1#n127
    try_files $uri/index.html $uri.html $uri @app;

    location @app {
      # an HTTP header important enough to have its own Wikipedia entry:
      # https://en.wikipedia.org/wiki/X-Forwarded-For
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

      # enable this if you forward HTTPS traffic to unicorn,
      # this helps Rack set the proper URL scheme for doing redirects:
      # proxy_set_header X-Forwarded-Proto $scheme;

      # pass the Host: header from the client right along so redirects
      # can be set properly within the Rack application
      proxy_set_header Host $http_host;

      # we don't want nginx trying to do something clever with
      # redirects, we set the Host: header above already.
      proxy_redirect off;

      # It's also safe to disable response buffering if you're only serving
      # fast clients with unicorn + nginx, but not slow clients. You
      # normally want nginx to buffer responses to slow clients, even with
      # Rails 3.1 streaming, because otherwise a slow client can become a
      # bottleneck of unicorn.
      #
      # The Rack application may also set "X-Accel-Buffering (yes|no)"
      # in the response headers to disable/enable buffering on a
      # per-response basis.
      # proxy_buffering off;

      proxy_pass http://app_server;
    }

    # Rails error pages
    error_page 500 502 503 504 /500.html;
    location = /500.html {
      root /path/to/app/current/public;
    }
  }
}
data/examples/pitchfork.conf.rb
ADDED
@@ -0,0 +1,77 @@
# Sample verbose configuration file for Pitchfork

# Use at least one worker per core if you're on a dedicated server,
# more will usually help for _short_ waits on databases/caches.
worker_processes 4

# listen on both a Unix domain socket and a TCP port,
# we use a shorter backlog for quicker failover when busy
listen "/path/to/.pitchfork.sock", :backlog => 64
listen 8080, :tcp_nopush => true

# nuke workers after 30 seconds instead of 60 seconds (the default)
timeout 30

# Enable this flag to have pitchfork test client connections by writing the
# beginning of the HTTP headers before calling the application. This
# prevents calling the application for connections that have disconnected
# while queued. This is only guaranteed to detect clients on the same
# host pitchfork runs on, and unlikely to detect disconnects even on a
# fast LAN.
check_client_connection false

# local variable to guard against running a hook multiple times
run_once = true

before_fork do |server, worker|
  # the following is highly recommended for Rails
  # as there's no need for the master process to hold a connection
  defined?(ActiveRecord::Base) and
    ActiveRecord::Base.connection.disconnect!

  # Occasionally, it may be necessary to run non-idempotent code in the
  # master before forking. Keep in mind the above disconnect! example
  # is idempotent and does not need a guard.
  if run_once
    # do_something_once_here ...
    run_once = false # prevent from firing again
  end

  # The following is only recommended for memory/DB-constrained
  # installations. It is not needed if your system can house
  # twice as many worker_processes as you have configured.
  #
  # # This allows a new master process to incrementally
  # # phase out the old master process with SIGTTOU to avoid a
  # # thundering herd
  # # when doing a transparent upgrade. The last worker spawned
  # # will then kill off the old master process with a SIGQUIT.
  # old_pid = "#{server.config[:pid]}.oldbin"
  # if old_pid != server.pid
  #   begin
  #     sig = (worker.nr + 1) >= server.worker_processes ? :QUIT : :TTOU
  #     Process.kill(sig, File.read(old_pid).to_i)
  #   rescue Errno::ENOENT, Errno::ESRCH
  #   end
  # end
  #
  # Throttle the master from forking too quickly by sleeping. Due
  # to the implementation of standard Unix signal handlers, this
  # helps (but does not completely) prevent identical, repeated signals
  # from being lost when the receiving process is busy.
  # sleep 1
end

after_fork do |server, worker|
  # per-process listener ports for debugging/admin/migrations
  # addr = "127.0.0.1:#{9293 + worker.nr}"
  # server.listen(addr, :tries => -1, :delay => 5, :tcp_nopush => true)

  # the following is *required* for Rails
  defined?(ActiveRecord::Base) and
    ActiveRecord::Base.establish_connection

  # You may also want to check and
  # restart any other shared sockets/descriptors such as Memcached,
  # and Redis.
end