unicorn 0.4.2 → 0.5.0

data/CHANGELOG CHANGED
@@ -1,3 +1,4 @@
+ v0.5.0 - {after,before}_fork API change, small tweaks/fixes
  v0.4.2 - fix Rails ARStore, FD leak prevention, descriptive proctitles
  v0.4.1 - Rails support, per-listener backlog and {snd,rcv}buf
  v0.2.3 - Unlink Tempfiles after use (they were closed, just not unlinked)
data/DESIGN CHANGED
@@ -37,10 +37,11 @@

  * The number of worker processes should be scaled to the number of
  CPUs, memory or even spindles you have. If you have an existing
- Mongrel cluster, using the same amount of processes should work.
- Let a full-HTTP-request-buffering reverse proxy like nginx manage
- concurrency to thousands of slow clients for you. Unicorn scaling
- should only be concerned about limits of your backend system(s).
+ Mongrel cluster on a single-threaded app, using the same amount of
+ processes should work. Let a full-HTTP-request-buffering reverse
+ proxy like nginx manage concurrency to thousands of slow clients for
+ you. Unicorn scaling should only be concerned about limits of your
+ backend system(s).

  * Load balancing between worker processes is done by the OS kernel.
  All workers share a common set of listener sockets and does
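
The worker-count guidance in the hunk above maps directly onto a unicorn config file. A minimal sketch, assuming the standard worker_processes directive; the count of four is illustrative, not something this diff prescribes:

    # roughly one worker per CPU core for a single-threaded app
    worker_processes 4
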
@@ -56,8 +57,8 @@

  * Blocking I/O is used for clients. This allows a simpler code path
  to be followed within the Ruby interpreter and fewer syscalls.
- Applications that use threads should continue to work if Unicorn
- is serving LAN or localhost clients.
+ Applications that use threads continue to work if Unicorn
+ is only serving LAN or localhost clients.

  * Timeout implementation is done via fchmod(2) in each worker
  on a shared file descriptor to update st_ctime on the inode.
@@ -67,8 +68,9 @@
  pwrite(2)/pread(2) are supported by base MRI, nor are they as
  portable on UNIX systems as fchmod(2).

- * SIGKILL is used to terminate the timed-out workers as reliably
- as possible on a UNIX system.
+ * SIGKILL is used to terminate the timed-out workers from misbehaving apps
+ as reliably as possible on a UNIX system. The default timeout is a
+ generous 60 seconds (same default as in Mongrel).

  * The poor performance of select() on large FD sets is avoided
  as few file descriptors are used in each worker.
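
A rough Ruby sketch of the fchmod(2) heartbeat and SIGKILL reaping described in the bullets above; this is illustrative only, and the tempfile, worker_pid and timeout names are stand-ins rather than unicorn's actual internals:

    require "tempfile"

    timeout    = 60                   # default described in the DESIGN text above
    tempfile   = Tempfile.new("hb")   # stands in for the worker's shared tempfile
    worker_pid = Process.pid          # stands in for a real worker PID

    # worker side: flip the mode bits so st_ctime advances once per request cycle
    mode = 0
    heartbeat = lambda { tempfile.chmod(mode = (mode == 0 ? 1 : 0)) }
    heartbeat.call

    # master side: a worker whose ctime is older than the timeout is presumed
    # wedged and killed outright
    if (Time.now - tempfile.stat.ctime) > timeout
      Process.kill(:KILL, worker_pid)
    end
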
@@ -81,5 +83,7 @@
  the master to its death.

  * There is never any explicit real-time dependency or communication
- between the worker processes themselves nor to the master process.
- Synchronization is handled entirely by the OS kernel.
+ between the worker processes nor to the master process.
+ Synchronization is handled entirely by the OS kernel and shared
+ resources are never accessed by the worker when it is servicing
+ a client.
data/GNUmakefile CHANGED
@@ -66,8 +66,19 @@ $(slow_tests):
  @$(MAKE) $(shell $(awk_slow) $@)

  TEST_OPTS = -v
- run_test = @echo '*** $(arg)$(extra) ***'; \
- setsid $(ruby) $(arg) $(TEST_OPTS) >$(t) 2>&1 || \
+ TEST_OPTS = -v
+ ifndef V
+ quiet_pre = @echo '* $(arg)$(extra)';
+ quiet_post = >$(t) 2>&1
+ else
+ # we can't rely on -o pipefail outside of bash 3+,
+ # so we use a stamp file to indicate success and
+ # have rm fail if the stamp didn't get created
+ stamp = $@$(log_suffix).ok
+ quiet_pre = @echo $(ruby) $(arg) $(TEST_OPTS); ! test -f $(stamp) && (
+ quiet_post = && > $(stamp) )>&2 | tee $(t); rm $(stamp) 2>/dev/null
+ endif
+ run_test = $(quiet_pre) setsid $(ruby) -w $(arg) $(TEST_OPTS) $(quiet_post) || \
  (sed "s,^,$(extra): ," >&2 < $(t); exit 1)

  %.n: arg = $(subst .n,,$(subst --, -n ,$@))
@@ -134,6 +145,5 @@ $(T_r).%.r: export UNICORN_RAILS_TEST_VERSION = $(rv)
  $(T_r).%.r: export RAILS_GIT_REPO = $(CURDIR)/$(rails_git)
  $(T_r).%.r: $(test_prefix)/.stamp $(rails_git)/info/cloned-stamp
  $(run_test)
- @sed 's,^,$(rv): ,' < $(t)

  .PHONY: doc $(T) $(slow_tests) Manifest
data/README CHANGED
@@ -1,33 +1,35 @@
- = Unicorn: Unix + LAN/localhost-optimized fork of Mongrel
-
- Unicorn is designed to only serve fast clients. See the PHILOSOPHY
- and DESIGN documents for more details regarding this.
+ = Unicorn: Unix + LAN/localhost-only fork of Mongrel

  == Features

- * Built on the solid Mongrel code base and takes full advantage
- of functionality exclusive to Unix-like operating systems.
+ * Designed for Rack, Unix, fast clients, and ease-of-debugging. We
+ cut out all things that are better-supported by nginx or Rack.

  * Mostly written in Ruby, only the HTTP parser (stolen and trimmed
  down from Mongrel) is written in C. Unicorn is compatible with
- both Ruby 1.8 and 1.9.
+ both Ruby 1.8 and 1.9. A pure-Ruby (but still Unix-only) version
+ is planned.

  * Process management: Unicorn will reap and restart workers that
  die from broken apps. There is no need to manage multiple processes
- yourself.
+ or ports yourself. Unicorn can spawn and manage any fixed number of
+ worker processes you choose to scale to your backend.

  * Load balancing is done entirely by the operating system kernel.
- Requests never pile up behind a busy worker.
+ Requests never pile up behind a busy worker process.

  * Does not care if your application is thread-safe or not, workers
  all run within their own isolated address space and only serve one
- client at a time...
+ client at a time.

  * Supports all Rack applications, along with pre-Rack versions of
  Ruby on Rails via a Rack wrapper.

- * Builtin log rotation of all log files in your application via USR1
- signal.
+ * Builtin reopening of all log files in your application via
+ USR1 signal. This allows logrotate to rotate files atomically and
+ quickly via rename instead of the racy and slow copytruncate method.
+ Unicorn also takes steps to ensure multi-line log entries from one
+ request all stay within the same file.

  * nginx-style binary re-execution without losing connections.
  You can upgrade Unicorn, your entire application, libraries
@@ -35,10 +37,12 @@ and DESIGN documents for more details regarding this.
  installed in the same path.

  * before_fork and after_fork hooks in case your application
- has special needs when dealing with forked processes.
+ has special needs when dealing with forked processes. These
+ should not be needed when the "preload_app" directive is
+ false (the default).

  * Can be used with copy-on-write-friendly memory management
- to save memory.
+ to save memory (by setting "preload_app" to true).

  * Able to listen on multiple interfaces including UNIX sockets,
  each worker process can also bind to a private port via the
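
The before_fork/after_fork bullet above ties into the v0.5.0 API change noted in the CHANGELOG: per the lib/unicorn.rb hunks later in this diff, the hooks now receive the Worker struct (with worker.nr and worker.tempfile) as their second argument instead of a bare worker number. A hedged config sketch; the preload_app/ActiveRecord combination here is illustrative, not part of this diff:

    preload_app true

    before_fork do |server, worker|
      # disconnect shared DB handles in the master before forking
      ActiveRecord::Base.connection.disconnect! if defined?(ActiveRecord::Base)
    end

    after_fork do |server, worker|
      # re-establish per-worker connections; worker.nr identifies this worker
      ActiveRecord::Base.establish_connection if defined?(ActiveRecord::Base)
    end
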
@@ -50,7 +54,7 @@ Unicorn is copyright 2009 Eric Wong and contributors.
  It is based on Mongrel and carries the same license:

  Mongrel is copyright 2007 Zed A. Shaw and contributors. It is licensed
- under the Ruby license and the GPL2. See the include LICENSE file for
+ under the Ruby license and the GPL2. See the included LICENSE file for
  details.

  Unicorn is 100% Free Software.
@@ -81,8 +85,8 @@ If you have web browser software for the World Wide Web
  (on the Information Superhighway), you may browse the code from
  your web browser and download the latest snapshot tarballs here:

- * http://git.bogomips.org/cgit/unicorn.git (this server runs Unicorn!)
- * http://repo.or.cz/w/unicorn.git (gitweb mirror)
+ * http://git.bogomips.org/cgit/unicorn.git (cgit)
+ * http://repo.or.cz/w/unicorn.git (gitweb)

  == Usage

@@ -118,9 +122,12 @@ options.

  == Disclaimer

- Like the creatures themselves, production deployments of Unicorn are rare.
- There is NO WARRANTY whatsoever if anything goes wrong, but let us know and
- we'll try our best to fix it.
+ Like the creatures themselves, production deployments of Unicorn are
+ rare or even non-existent. There is NO WARRANTY whatsoever if anything
+ goes wrong, but let us know and we'll try our best to fix it.
+
+ Unicorn is designed to only serve fast clients. See the PHILOSOPHY
+ and DESIGN documents for more details regarding this.

  Rainbows are NOT included.

@@ -137,5 +144,5 @@ Rainbows are NOT included.

  == Contact

- Newsgroup and mailing list maybe coming...
  Email Eric Wong at normalperson@yhbt.net for now.
+ Newsgroup and mailing list maybe coming...
data/SIGNALS CHANGED
@@ -7,7 +7,7 @@ processes are documented here as well.
  === Master Process

  * HUP - reload config file and gracefully restart all workers
- If preload_app is false (the default), the application code
+ If "preload_app" is false (the default), the application code
  will be reloaded when workers are restarted as well.

  * INT/TERM - quick shutdown, kills all workers immediately
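
Signals are normally delivered to the master PID from the shell or an init script; the same thing can be expressed in Ruby. A small sketch; the pid-file path is purely illustrative, and USR2 is the binary-upgrade signal covered by the replacement procedure later in this file:

    # hypothetical pid-file location; adjust to your deployment
    master_pid = File.read("/path/to/unicorn.pid").to_i

    Process.kill(:HUP,  master_pid)  # reload config, gracefully restart workers
    Process.kill(:USR1, master_pid)  # reopen log files (e.g. from logrotate)
    Process.kill(:USR2, master_pid)  # re-execute the running unicorn binary
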
@@ -31,12 +31,19 @@ Sending signals directly to the worker processes should not normally be
  needed. If the master process is running, any exited worker will be
  automatically respawned.

- * INT/TERM - quick shutdown, immediately exit
+ * INT/TERM - Quick shutdown, immediately exit.
+ Unless WINCH has been sent to the master (or the master is killed),
+ the master process will respawn a worker to replace this one.

- * QUIT - gracefully exit after finishing the current request
+ * QUIT - Gracefully exit after finishing the current request.
+ Unless WINCH has been sent to the master (or the master is killed),
+ the master process will respawn a worker to replace this one.

- * USR1 - reopen all logs owned by the worker process
+ * USR1 - Reopen all logs owned by the worker process.
  See Unicorn::Util.reopen_logs for what is considered a log.
+ Log files are not reopened until it is done processing
+ the current request, so multiple log lines for one request
+ (as done by Rails) will not be split across multiple logs.

  === Procedure to replace a running unicorn executable

data/TODO CHANGED
@@ -1,7 +1,5 @@
  == 1.0.0

- * tests for preload_app boolean
-
  * reexec_worker_processes config option:
  This is the number of worker processes to startup initially
  when being reexecuted.
@@ -24,6 +24,7 @@ static VALUE sym_http_body;
  #define HTTP_PREFIX "HTTP_"
  #define HTTP_PREFIX_LEN (sizeof(HTTP_PREFIX) - 1)

+ static VALUE global_rack_url_scheme;
  static VALUE global_request_method;
  static VALUE global_request_uri;
  static VALUE global_fragment;
@@ -37,8 +38,10 @@ static VALUE global_server_port;
  static VALUE global_server_protocol;
  static VALUE global_server_protocol_value;
  static VALUE global_http_host;
+ static VALUE global_http_x_forwarded_proto;
  static VALUE global_port_80;
  static VALUE global_localhost;
+ static VALUE global_http;

  /** Defines common length and error messages for input length validation. */
  #define DEF_MAX_LENGTH(N,length) const size_t MAX_##N##_LENGTH = length; const char *MAX_##N##_LENGTH_ERR = "HTTP element " # N " is longer than the " # length " allowed length."
@@ -106,26 +109,12 @@ static struct common_field common_http_fields[] = {
  f("USER_AGENT"),
  f("VIA"),
  f("X_FORWARDED_FOR"), /* common for proxies */
+ f("X_FORWARDED_PROTO"), /* common for proxies */
  f("X_REAL_IP"), /* common for proxies */
  f("WARNING")
  # undef f
  };

- /*
- * qsort(3) and bsearch(3) improve average performance slightly, but may
- * not be worth it for lack of portability to certain platforms...
- */
- #if defined(HAVE_QSORT_BSEARCH)
- /* sort by length, then by name if there's a tie */
- static int common_field_cmp(const void *a, const void *b)
- {
- struct common_field *cfa = (struct common_field *)a;
- struct common_field *cfb = (struct common_field *)b;
- signed long diff = cfa->len - cfb->len;
- return diff ? diff : memcmp(cfa->name, cfb->name, cfa->len);
- }
- #endif /* HAVE_QSORT_BSEARCH */
-
  /* this function is not performance-critical */
  static void init_common_fields(void)
  {
@@ -146,28 +135,10 @@ static void init_common_fields(void)
  cf->value = rb_obj_freeze(cf->value);
  rb_global_variable(&cf->value);
  }
-
- #if defined(HAVE_QSORT_BSEARCH)
- qsort(common_http_fields,
- ARRAY_SIZE(common_http_fields),
- sizeof(struct common_field),
- common_field_cmp);
- #endif /* HAVE_QSORT_BSEARCH */
  }

  static VALUE find_common_field_value(const char *field, size_t flen)
  {
- #if defined(HAVE_QSORT_BSEARCH)
- struct common_field key;
- struct common_field *found;
- key.name = field;
- key.len = (signed long)flen;
- found = (struct common_field *)bsearch(&key, common_http_fields,
- ARRAY_SIZE(common_http_fields),
- sizeof(struct common_field),
- common_field_cmp);
- return found ? found->value : Qnil;
- #else /* !HAVE_QSORT_BSEARCH */
  int i;
  struct common_field *cf = common_http_fields;
  for(i = 0; i < ARRAY_SIZE(common_http_fields); i++, cf++) {
@@ -175,7 +146,6 @@ static VALUE find_common_field_value(const char *field, size_t flen)
  return cf->value;
  }
  return Qnil;
- #endif /* !HAVE_QSORT_BSEARCH */
  }

  static void http_field(void *data, const char *field,
@@ -282,6 +252,16 @@ static void header_done(void *data, const char *at, size_t length)
  VALUE temp = Qnil;
  char *colon = NULL;

+ /* set rack.url_scheme to "https" or "http", no others are allowed by Rack */
+ temp = rb_hash_aref(req, global_http_x_forwarded_proto);
+ switch (temp == Qnil ? 0 : RSTRING_LEN(temp)) {
+ case 5: if (!memcmp("https", RSTRING_PTR(temp), 5)) break;
+ case 4: if (!memcmp("http", RSTRING_PTR(temp), 4)) break;
+ default: temp = global_http;
+ }
+ rb_hash_aset(req, global_rack_url_scheme, temp);
+
+ /* set the SERVER_NAME and SERVER_PORT variables */
  if((temp = rb_hash_aref(req, global_http_host)) != Qnil) {
  colon = memchr(RSTRING_PTR(temp), ':', RSTRING_LEN(temp));
  if(colon != NULL) {
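
The added block above is roughly equivalent to the following Ruby; a simplified sketch that ignores the length-based switch the C parser uses for speed:

    # simplified rendering of the new rack.url_scheme logic in header_done()
    env = { "HTTP_X_FORWARDED_PROTO" => "https" }  # stand-in for the Rack env being built
    proto = env["HTTP_X_FORWARDED_PROTO"]
    env["rack.url_scheme"] = %w(https http).include?(proto) ? proto : "http"
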
@@ -406,6 +386,7 @@ void Init_http11()

  mUnicorn = rb_define_module("Unicorn");

+ DEF_GLOBAL(rack_url_scheme, "rack.url_scheme");
  DEF_GLOBAL(request_method, "REQUEST_METHOD");
  DEF_GLOBAL(request_uri, "REQUEST_URI");
  DEF_GLOBAL(fragment, "FRAGMENT");
@@ -419,8 +400,10 @@ void Init_http11()
  DEF_GLOBAL(server_protocol, "SERVER_PROTOCOL");
  DEF_GLOBAL(server_protocol_value, "HTTP/1.1");
  DEF_GLOBAL(http_host, "HTTP_HOST");
+ DEF_GLOBAL(http_x_forwarded_proto, "HTTP_X_FORWARDED_PROTO");
  DEF_GLOBAL(port_80, "80");
  DEF_GLOBAL(localhost, "localhost");
+ DEF_GLOBAL(http, "http");

  eHttpParserError = rb_define_class_under(mUnicorn, "HttpParserError", rb_eIOError);

data/lib/unicorn.rb CHANGED
@@ -26,14 +26,13 @@ module Unicorn
  attr_reader :logger
  include ::Unicorn::SocketHelper

+ SIG_QUEUE = []
  DEFAULT_START_CTX = {
  :argv => ARGV.map { |arg| arg.dup },
  # don't rely on Dir.pwd here since it's not symlink-aware, and
  # symlink dirs are the default with Capistrano...
  :cwd => `/bin/sh -c pwd`.chomp("\n"),
  :zero => $0.dup,
- :environ => {}.merge!(ENV),
- :umask => File.umask,
  }.freeze

  Worker = Struct.new(:nr, :tempfile) unless defined?(Worker)
@@ -53,12 +52,12 @@ module Unicorn
  @start_ctx = DEFAULT_START_CTX.dup
  @start_ctx.merge!(start_ctx) if start_ctx
  @app = app
- @sig_queue = []
  @master_pid = $$
  @workers = Hash.new
  @io_purgatory = [] # prevents IO objects in here from being GC-ed
  @request = @rd_sig = @wr_sig = nil
  @reexec_pid = 0
+ @init_listeners = options[:listeners] ? options[:listeners].dup : []
  @config = Configurator.new(options.merge(:use_defaults => true))
  @listener_opts = {}
  @config.commit!(self, :skip => [:listeners, :pid])
@@ -73,9 +72,9 @@ module Unicorn
  # before they become UNIXServer or TCPServer
  inherited = ENV['UNICORN_FD'].to_s.split(/,/).map do |fd|
  io = Socket.for_fd(fd.to_i)
- set_server_sockopt(io)
+ set_server_sockopt(io, @listener_opts[sock_name(io)])
  @io_purgatory << io
- logger.info "inherited: #{io} fd=#{fd} addr=#{sock_name(io)}"
+ logger.info "inherited addr=#{sock_name(io)} fd=#{fd}"
  server_cast(io)
  end

@@ -104,15 +103,27 @@ module Unicorn
  # replaces current listener set with +listeners+. This will
  # close the socket if it will not exist in the new listener set
  def listeners=(listeners)
- cur_names = listener_names
+ cur_names, dead_names = [], []
+ listener_names.each do |name|
+ if "/" == name[0..0]
+ # mark unlinked sockets as dead so we can rebind them
+ (File.socket?(name) ? cur_names : dead_names) << name
+ else
+ cur_names << name
+ end
+ end
  set_names = listener_names(listeners)
- dead_names = cur_names - set_names
+ dead_names += cur_names - set_names
+ dead_names.uniq!

  @listeners.delete_if do |io|
  if dead_names.include?(sock_name(io))
- @io_purgatory.delete_if { |pio| pio.fileno == io.fileno }
- true
+ @io_purgatory.delete_if do |pio|
+ pio.fileno == io.fileno && (pio.close rescue nil).nil? # true
+ end
+ (io.close rescue nil).nil? # true
  else
+ set_server_sockopt(io, @listener_opts[sock_name(io)])
  false
  end
  end
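
The @listener_opts plumbing in the hunks above corresponds to per-listener options in the configuration file (the 0.4.1 CHANGELOG entry mentions per-listener backlog and {snd,rcv}buf). A hedged sketch, assuming a per-listener listen directive; the socket path and numeric values are made up:

    # per-listener tuning; option names follow the CHANGELOG entry
    listen "/tmp/.unicorn.sock", :backlog => 64
    listen 8080, :rcvbuf => 8192, :sndbuf => 8192
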
@@ -141,12 +152,11 @@ module Unicorn
  return if String === address && listener_names.include?(address)

  if io = bind_listen(address, opt)
- if Socket == io.class
+ unless TCPServer === io || UNIXServer === io
  @io_purgatory << io
  io = server_cast(io)
  end
- logger.info "#{io} listening on PID:#{$$} " \
- "fd=#{io.fileno} addr=#{sock_name(io)}"
+ logger.info "listening on addr=#{sock_name(io)} fd=#{io.fileno}"
  @listeners << io
  else
  logger.error "adding listener failed addr=#{address} (in use)"
@@ -172,7 +182,7 @@ module Unicorn
  begin
  loop do
  reap_all_workers
- case (mode = @sig_queue.shift)
+ case (mode = SIG_QUEUE.shift)
  when nil
  murder_lazy_workers
  spawn_missing_workers if respawn
@@ -219,7 +229,7 @@ module Unicorn
  retry
  end
  stop # gracefully shutdown all workers on our way out
- logger.info "master PID:#{$$} join complete"
+ logger.info "master complete"
  unlink_pid_safe(@pid) if @pid
  end

@@ -247,11 +257,11 @@ module Unicorn
  # defer a signal for later processing in #join (master process)
  def trap_deferred(signal)
  trap(signal) do |sig_nr|
- if @sig_queue.size < 5
- @sig_queue << signal
+ if SIG_QUEUE.size < 5
+ SIG_QUEUE << signal
  awaken_master
  else
- logger.error "ignoring SIG#{signal}, queue=#{@sig_queue.inspect}"
+ logger.error "ignoring SIG#{signal}, queue=#{SIG_QUEUE.inspect}"
  end
  end
  end
@@ -326,10 +336,8 @@ module Unicorn
  end

  @reexec_pid = fork do
- ENV.replace(@start_ctx[:environ])
  listener_fds = @listeners.map { |sock| sock.fileno }
  ENV['UNICORN_FD'] = listener_fds.join(',')
- File.umask(@start_ctx[:umask])
  Dir.chdir(@start_ctx[:cwd])
  cmd = [ @start_ctx[:zero] ] + @start_ctx[:argv]

@@ -375,14 +383,13 @@ module Unicorn
  Dir.chdir(@start_ctx[:cwd])
  rescue Errno::ENOENT => err
  logger.fatal "#{err.inspect} (#{@start_ctx[:cwd]})"
- @sig_queue << :QUIT # forcibly emulate SIGQUIT
+ SIG_QUEUE << :QUIT # forcibly emulate SIGQUIT
  return
  end
  tempfile = Tempfile.new('') # as short as possible to save dir space
  tempfile.unlink # don't allow other processes to find or see it
- tempfile.sync = true
  worker = Worker.new(worker_nr, tempfile)
- @before_fork.call(self, worker.nr)
+ @before_fork.call(self, worker)
  pid = fork { worker_loop(worker) }
  @workers[pid] = worker
  end
@@ -391,11 +398,9 @@ module Unicorn
  # once a client is accepted, it is processed in its entirety here
  # in 3 easy steps: read request, call app, write app response
  def process_client(client)
- client.nonblock = false
- set_client_sockopt(client) if TCPSocket === client
- env = @request.read(client)
- app_response = @app.call(env)
- HttpResponse.write(client, app_response)
+ # one syscall less than "client.nonblock = false":
+ client.fcntl(Fcntl::F_SETFL, File::RDWR)
+ HttpResponse.write(client, @app.call(@request.read(client)))
  # if we get any error, try to write something back to the client
  # assuming we haven't closed the socket, but don't get hung up
  # if the socket is already closed or broken. We'll always ensure
@@ -423,23 +428,19 @@ module Unicorn
  # traps for USR1, USR2, and HUP may be set in the @after_fork Proc
  # by the user.
  def init_worker_process(worker)
- build_app! unless @preload_app
- @sig_queue.clear
- QUEUE_SIGS.each { |sig| trap(sig, 'IGNORE') }
+ QUEUE_SIGS.each { |sig| trap(sig, 'DEFAULT') }
  trap(:CHLD, 'DEFAULT')
-
+ SIG_QUEUE.clear
  proc_name "worker[#{worker.nr}]"
  @rd_sig.close if @rd_sig
  @wr_sig.close if @wr_sig
  @workers.values.each { |other| other.tempfile.close rescue nil }
- @workers.clear
- @start_ctx.clear
  @start_ctx = @workers = @rd_sig = @wr_sig = nil
  @listeners.each { |sock| sock.fcntl(Fcntl::F_SETFD, Fcntl::FD_CLOEXEC) }
- ENV.delete('UNICORN_FD')
- @after_fork.call(self, worker.nr) if @after_fork
  worker.tempfile.fcntl(Fcntl::F_SETFD, Fcntl::FD_CLOEXEC)
+ @after_fork.call(self, worker) if @after_fork # can drop perms
  @request = HttpRequest.new(logger)
+ build_app! unless @preload_app
  end
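
The "# can drop perms" note in the hunk above reflects that @after_fork now receives the Worker struct and runs late enough in worker setup, before the app is built (when "preload_app" is false) and before any request is served, for the hook to change privileges. A hedged config sketch; the unprivileged user name is purely illustrative and only makes sense if the master was started as root:

    require "etc"

    after_fork do |server, worker|
      if Process.euid == 0
        user = Etc.getpwnam("nobody")   # hypothetical unprivileged account
        Process.initgroups(user.name, user.gid)
        Process::GID.change_privilege(user.gid)
        Process::UID.change_privilege(user.uid)
      end
    end
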
 
  # runs inside each forked worker, this sits around and waits
@@ -452,9 +453,8 @@ module Unicorn
  alive = true
  ready = @listeners
  client = nil
- [:TERM, :INT].each { |sig| trap(sig) { exit(0) } } # instant shutdown
  trap(:QUIT) do
- alive = false
+ alive = false # graceful shutdown
  @listeners.each { |sock| sock.close rescue nil } # break IO.select
  end
  reopen_logs, (rd, wr) = false, IO.pipe
@@ -569,6 +569,7 @@ module Unicorn
  def load_config!
  begin
  logger.info "reloading config_file=#{@config.config_file}"
+ @config[:listeners].replace(@init_listeners)
  @config.reload
  @config.commit!(self)
  kill_each_worker(:QUIT)