rainbows 0.1.1 → 0.2.0

Files changed (48)
  1. data/.document +6 -5
  2. data/DEPLOY +13 -0
  3. data/GIT-VERSION-GEN +1 -1
  4. data/GNUmakefile +1 -1
  5. data/README +32 -6
  6. data/SIGNALS +11 -7
  7. data/TODO +2 -3
  8. data/lib/rainbows.rb +10 -3
  9. data/lib/rainbows/app_pool.rb +90 -0
  10. data/lib/rainbows/base.rb +41 -4
  11. data/lib/rainbows/const.rb +1 -6
  12. data/lib/rainbows/http_server.rb +1 -1
  13. data/lib/rainbows/rev.rb +174 -0
  14. data/lib/rainbows/revactor.rb +40 -37
  15. data/lib/rainbows/thread_pool.rb +31 -57
  16. data/lib/rainbows/thread_spawn.rb +32 -45
  17. data/local.mk.sample +4 -3
  18. data/t/.gitignore +1 -2
  19. data/t/GNUmakefile +21 -7
  20. data/t/README +42 -0
  21. data/t/bin/content-md5-put +36 -0
  22. data/t/bin/unused_listen +1 -1
  23. data/t/content-md5.ru +23 -0
  24. data/t/sleep.ru +11 -0
  25. data/t/t0000-basic.sh +29 -3
  26. data/t/t1000-thread-pool-basic.sh +5 -6
  27. data/t/t1000.ru +5 -1
  28. data/t/t1002-thread-pool-graceful.sh +37 -0
  29. data/t/t2000-thread-spawn-basic.sh +4 -6
  30. data/t/t2000.ru +5 -1
  31. data/t/t2002-thread-spawn-graceful.sh +37 -0
  32. data/t/t3000-revactor-basic.sh +4 -6
  33. data/t/t3000.ru +5 -1
  34. data/t/t3001-revactor-pipeline.sh +46 -0
  35. data/t/t3002-revactor-graceful.sh +38 -0
  36. data/t/t3003-revactor-reopen-logs.sh +54 -0
  37. data/t/t3100-revactor-tee-input.sh +8 -13
  38. data/t/t4000-rev-basic.sh +51 -0
  39. data/t/t4000.ru +9 -0
  40. data/t/t4002-rev-graceful.sh +52 -0
  41. data/t/t4003-rev-parser-error.sh +34 -0
  42. data/t/t4100-rev-rack-input.sh +44 -0
  43. data/t/t4101-rev-rack-input-trailer.sh +51 -0
  44. data/t/t9000-rack-app-pool.sh +37 -0
  45. data/t/t9000.ru +14 -0
  46. data/t/test-lib.sh +29 -2
  47. data/vs_Unicorn +50 -1
  48. metadata +28 -6
data/.document CHANGED
@@ -1,11 +1,12 @@
- README
- LICENSE
+ ChangeLog
  DEPLOY
+ FAQ
  lib
- rainbows.1
- ChangeLog
+ LICENSE
  NEWS
+ rainbows.1
+ README
+ SIGNALS
  TODO
- FAQ
  TUNING
  vs_Unicorn
data/DEPLOY CHANGED
@@ -27,3 +27,16 @@ processing of the request body as it is being uploaded.
 
  In this case, haproxy or any similar (non-request-body-buffering) load
  balancer should be used to balance requests between different machines.
+
+ == Denial-of-Service Concerns
+
+ Since \Rainbows! is designed to talk to slow clients with long-held
+ connections, it may be subject to brute force denial-of-service attacks.
+ In Unicorn and Mongrel, we've already enabled the "httpready" accept
+ filter for FreeBSD and the TCP_DEFER_ACCEPT option in Linux; but it is
+ still possible to build clients that work around and fool these
+ mechanisms.
+
+ \Rainbows! itself does not feature any explicit protection against brute
+ force denial-of-service attacks. We believe this is best handled by
+ dedicated firewalls provided by the operating system.
data/GIT-VERSION-GEN CHANGED
@@ -1,7 +1,7 @@
  #!/bin/sh
 
  GVF=GIT-VERSION-FILE
- DEF_VER=v0.1.1.GIT
+ DEF_VER=v0.2.0.GIT
 
  LF='
  '
data/GNUmakefile CHANGED
@@ -58,7 +58,7 @@ NEWS: GIT-VERSION-FILE
  	$(rake) -s news_rdoc > $@+
  	mv $@+ $@
 
- SINCE =
+ SINCE = 0.1.1
  ChangeLog: log_range = $(shell test -n "$(SINCE)" && echo v$(SINCE)..)
  ChangeLog: GIT-VERSION-FILE
  	@echo "ChangeLog from $(GIT_URL) ($(SINCE)..$(GIT_VERSION))" > $@+
data/README CHANGED
@@ -16,11 +16,21 @@ For network concurrency, models we currently support are:
  * {:ThreadSpawn}[link:Rainbows/ThreadSpawn.html]
  * {:ThreadPool}[link:Rainbows/ThreadPool.html]
  * {:Revactor}[link:Rainbows/Revactor.html]
+ * {:Rev}[link:Rainbows/Rev.html]*
 
  We have {more on the way}[link:TODO.html] for handling network concurrency.
  Additionally, we also use multiple processes (managed by Unicorn) for
  CPU/memory/disk concurrency.
 
+ \* our \Rev concurrency model is only recommended for slow clients, not
+ sleepy apps. Additionally it does not support streaming "rack.input" to
+ the application.
+
+ For application concurrency, we have the Rainbows::AppPool Rack
+ middleware that allows us to limit application concurrency independently
+ of network concurrency. Rack::Lock as distributed by Rack may also be
+ used to limit application concurrency to one (per-worker).
+
  == Features
 
  * Designed for {Rack}[http://rack.rubyforge.org/], the standard for
@@ -54,6 +64,9 @@ CPU/memory/disk concurrency.
  * Long polling
  * Reverse Ajax
 
+ \Rainbows! may also be used to service slow clients even with fast
+ applications using the \Rev concurrency model.
+
  == License
 
  \Rainbows! is copyright 2009 by all contributors (see logs in git).
@@ -99,13 +112,25 @@ config file:
 
    Rainbows! do
      use :Revactor
-     worker_connections 128
+     worker_connections 400
    end
 
+ See the {Rainbows! configuration documentation}[link:Rainbows.html#M000001]
+ for more details.
+
  == Development
 
- * git: git://git.bogomips.org/rainbows.git
- * cgit: http://git.bogomips.org/cgit/rainbows.git
+ You can get the latest source via git from the following locations
+ (these versions may not be stable):
+
+   git://git.bogomips.org/rainbows.git
+   git://rubyforge.org/rainbows.git (mirror)
+
+ You may browse the code from the web and download the latest snapshot
+ tarballs here:
+
+ * http://git.bogomips.org/cgit/rainbows.git (cgit)
+ * http://rainbows.rubyforge.org/git?p=rainbows.git (gitweb)
 
  Inline patches (from "git format-patch") to the mailing list are
  preferred because they allow code review and comments in the reply to
@@ -126,9 +151,10 @@ and we'll try our best to fix it.
  All feedback (bug reports, user/development discussion, patches, pull
  requests) go to the mailing list/newsgroup. Patches must be sent inline
  (git format-patch -M + git send-email). No subscription is necessary
- to post on the mailing list. No top posting. Address replies +To:+ (or
- +Cc:+) the original sender and +Cc:+ the mailing list.
+ to post on the mailing list. No top posting. Address replies +To:+
+ the mailing list.
 
  * email: mailto:rainbows-talk@rubyforge.org
- * archives: http://rubyforge.org/pipermail/rainbows-talk
+ * nntp: nntp://news.gmane.org/gmane.comp.lang.ruby.rainbows.general
  * subscribe: http://rubyforge.org/mailman/listinfo/rainbows-talk
+ * archives: http://rubyforge.org/pipermail/rainbows-talk
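
To illustrate the configuration documented above, here is a minimal sketch of a combined Unicorn/\Rainbows! config file using the new \Rev model; the file name, worker_processes count, and worker_connections value are arbitrary assumptions for illustration, not part of this release:

    # rainbows.conf.rb (hypothetical name) -- a combined Unicorn/Rainbows! config
    # plain Unicorn options still apply; 8 workers is an arbitrary choice here
    worker_processes 8

    Rainbows! do
      use :Rev                # event model added in 0.2.0; suited to slow clients
      worker_connections 400  # per-worker client limit
    end

With these numbers, the arithmetic from the lib/rainbows.rb RDoc below applies: 8 * 400 = 3200 concurrent clients per instance.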
data/SIGNALS CHANGED
@@ -2,7 +2,10 @@
 
  In general, signals need only be sent to the master process. However,
  the signals Rainbows! uses internally to communicate with the worker
- processes are documented here as well.
+ processes are documented here as well. With the exception of TTIN/TTOU,
+ signal handling matches the behavior of {nginx}[http://nginx.net/]
+ so it should be possible to easily share process management scripts
+ between \Rainbows!, Unicorn and nginx.
 
  === Master Process
 
@@ -43,16 +46,17 @@ automatically respawned.
 
  * USR1 - Reopen all logs owned by the worker process.
    See Unicorn::Util.reopen_logs for what is considered a log.
-   Log files are not reopened until it is done processing
-   the current request, so multiple log lines for one request
-   (as done by Rails) will not be split across multiple logs.
+   Unlike Unicorn, log files are reopened immediately in \Rainbows!;
+   since worker processes are likely to be serving multiple clients
+   simultaneously, we can't wait for all of them to finish.
 
  === Procedure to replace a running rainbows executable
 
- You may replace a running instance of unicorn with a new one without
+ You may replace a running instance of rainbows with a new one without
  losing any incoming connections. Doing so will reload all of your
- application code, Unicorn config, Ruby executable, and all libraries.
- The only things that will not change (due to OS limitations) are:
+ application code, Unicorn/Rainbows! config, Ruby executable, and all
+ libraries. The only things that will not change (due to OS limitations)
+ are:
 
  1. The path to the rainbows executable script. If you want to change to
     a different installation of Ruby, you can modify the shebang
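
As a sketch only (the pid file path is an assumption, not something this diff specifies), the USR1 log-reopening behavior described above could be driven from a Ruby log-rotation hook by signalling the master:

    # hypothetical log-rotation hook: signal the Rainbows! master so logs are
    # reopened; per the text above, workers now reopen their logs immediately
    master_pid = Integer(File.read("/path/to/rainbows.pid"))
    Process.kill(:USR1, master_pid)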
data/TODO CHANGED
@@ -3,9 +3,8 @@
  We're lazy and pick the easy items to do first, then the ones people
  care about.
 
- * Rev (without Revactor) - since we already use Revactor we might as
-   well support this one so 1.8 users won't be left out. Doing TeeInput
-   is probably going to get ugly, though...
+ * Rev + Thread - current Rev model with threading, which will give
+   us a streaming (but rewindable) "rack.input".
 
  * EventMachine - much like Rev, but we haven't looked at this one much
    (our benevolent dictator doesn't like C++). If we can figure out how
data/lib/rainbows.rb CHANGED
@@ -7,6 +7,7 @@ module Rainbows
  require 'rainbows/http_server'
  require 'rainbows/http_response'
  require 'rainbows/base'
+ autoload :AppPool, 'rainbows/app_pool'
 
  class << self
 
@@ -17,18 +18,23 @@ module Rainbows
      end
    end
 
-   # configures Rainbows! with a given concurrency model to +use+ and
+   # configures \Rainbows! with a given concurrency model to +use+ and
    # a +worker_connections+ upper-bound. This method may be called
    # inside a Unicorn/Rainbows configuration file:
    #
    #   Rainbows! do
    #     use :Revactor # this may also be :ThreadSpawn or :ThreadPool
-   #     worker_connections 128
+   #     worker_connections 400
    #   end
    #
+   #   # the rest of the Unicorn configuration
+   #   worker_processes 8
+   #
    # See the documentation for the respective Revactor, ThreadSpawn,
    # and ThreadPool classes for descriptions and recommendations for
-   # each of them.
+   # each of them. The total number of clients we're able to serve is
+   # +worker_processes+ * +worker_connections+, so in the above example
+   # we can serve 8 * 400 = 3200 clients concurrently.
    def Rainbows!(&block)
      block_given? or raise ArgumentError, "Rainbows! requires a block"
      HttpServer.setup(block)
@@ -42,6 +48,7 @@ module Rainbows
    :Revactor => 50,
    :ThreadSpawn => 30,
    :ThreadPool => 10,
+   :Rev => 50,
  }.each do |model, _|
    u = model.to_s.gsub(/([a-z0-9])([A-Z0-9])/) { "#{$1}_#{$2.downcase!}" }
    autoload model, "rainbows/#{u.downcase!}"
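
The autoload loop above maps each model name to a file under rainbows/; a standalone snippet (illustration only, not shipped in the gem) showing the same conversion:

    # illustration of the CamelCase -> path mapping used by the autoload loop
    [ :Rev, :Revactor, :ThreadSpawn, :ThreadPool ].each do |model|
      u = model.to_s.gsub(/([a-z0-9])([A-Z0-9])/) { "#{$1}_#{$2.downcase!}" }
      puts "#{model} => rainbows/#{u.downcase}"
    end
    # prints: Rev => rainbows/rev, Revactor => rainbows/revactor,
    #         ThreadSpawn => rainbows/thread_spawn, ThreadPool => rainbows/thread_pool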
data/lib/rainbows/app_pool.rb ADDED
@@ -0,0 +1,90 @@
+ # -*- encoding: binary -*-
+
+ require 'thread'
+
+ module Rainbows
+
+   # Rack middleware to limit application-level concurrency independently
+   # of network concurrency in \Rainbows! Since the +worker_connections+
+   # option in \Rainbows! is only intended to limit the number of
+   # simultaneous clients, this middleware may be used to limit the
+   # number of concurrent application dispatches independently of
+   # concurrent clients.
+   #
+   # Instead of using M:N concurrency in \Rainbows!, this middleware
+   # allows M:N:P concurrency where +P+ is the AppPool +:size+ while
+   # +M+ remains the number of +worker_processes+ and +N+ remains the
+   # number of +worker_connections+.
+   #
+   #   rainbows master
+   #    \_ rainbows worker[0]
+   #    |  \_ client[0,0]------\       ___app[0]
+   #    |  \_ client[0,1]-------\     /___app[1]
+   #    |  \_ client[0,2]-------->--<  ...
+   #    |   ...                 __/    `---app[P]
+   #    |  \_ client[0,N]----/
+   #    \_ rainbows worker[1]
+   #    |  \_ client[1,0]------\       ___app[0]
+   #    |  \_ client[1,1]-------\     /___app[1]
+   #    |  \_ client[1,2]-------->--<  ...
+   #    |   ...                 __/    `---app[P]
+   #    |  \_ client[1,N]----/
+   #    \_ rainbows worker[M]
+   #       \_ client[M,0]------\       ___app[0]
+   #       \_ client[M,1]-------\     /___app[1]
+   #       \_ client[M,2]-------->--<  ...
+   #        ...                 __/    `---app[P]
+   #       \_ client[M,N]----/
+   #
+   # AppPool should be used if you want to enforce a lower value of +P+
+   # than +N+.
+   #
+   # AppPool has no effect on the Rainbows::Rev concurrency model as that is
+   # single-threaded/single-instance as far as application concurrency goes.
+   # In other words, +P+ is always +one+ when using \Rev (but not
+   # \Revactor) regardless of (or even if) this middleware is loaded.
+   #
+   # Since this is Rack middleware, you may load this in your Rack
+   # config.ru file and even use it in servers other than \Rainbows!
+   #
+   #   use Rainbows::AppPool, :size => 30
+   #   map "/lobster" do
+   #     run Rack::Lobster.new
+   #   end
+   #
+   # You may want to load this earlier or later in your middleware chain
+   # depending on the concurrency/copy-friendliness of your middleware(s).
+
+   class AppPool < Struct.new(:pool)
+
+     # +opt+ is a hash, +:size+ is the size of the pool (default: 6)
+     # meaning you can have up to 6 concurrent instances of +app+
+     # within one \Rainbows! worker process. We support various
+     # methods of the +:copy+ option: +dup+, +clone+, +deep+ and +none+.
+     # Depending on your +app+, one of these options should be set.
+     # The default +:copy+ is +:dup+ as is commonly seen in existing
+     # Rack middleware.
+     def initialize(app, opt = {})
+       self.pool = Queue.new
+       (1...(opt[:size] || 6)).each do
+         pool << case (opt[:copy] || :dup)
+         when :none then app
+         when :dup then app.dup
+         when :clone then app.clone
+         when :deep then Marshal.load(Marshal.dump(app)) # unlikely...
+         else
+           raise ArgumentError, "unsupported copy method: #{opt[:copy].inspect}"
+         end
+       end
+       pool << app # the original
+     end
+
+     # Rack application endpoint, +env+ is the Rack environment
+     def call(env)
+       app = pool.shift
+       app.call(env)
+     ensure
+       pool << app
+     end
+   end
+ end
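
A short config.ru sketch tying the options above together; the :size and :copy values are arbitrary picks for illustration (a thread-safe app can share a single object via :copy => :none):

    # config.ru (hypothetical) -- cap app concurrency at 4 per worker while
    # worker_connections stays higher; :copy => :none shares one app object
    require 'rainbows/app_pool'

    use Rainbows::AppPool, :size => 4, :copy => :none
    run lambda { |env|
      [ 200, { 'Content-Type' => 'text/plain' }, [ "pooled app\n" ] ]
    }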
data/lib/rainbows/base.rb CHANGED
@@ -2,7 +2,8 @@
 
  module Rainbows
 
-   # base class for Rainbows concurrency models
+   # base class for Rainbows concurrency models, this is currently
+   # used by ThreadSpawn and ThreadPool models
    module Base
 
      include Unicorn
@@ -16,16 +17,35 @@ module Rainbows
        client.close rescue nil
      end
 
+     # TODO: migrate into Unicorn::HttpServer
+     def listen_loop_error(e)
+       logger.error "Unhandled listen loop exception #{e.inspect}."
+       logger.error e.backtrace.join("\n")
+     end
+
+     def init_worker_process(worker)
+       super(worker)
+
+       # we don't use the self-pipe mechanism in the Rainbows! worker
+       # since we don't defer reopening logs
+       HttpServer::SELF_PIPE.each { |x| x.close }.clear
+       trap(:USR1) { reopen_worker_logs(worker.nr) rescue nil }
+       # closing anything we IO.select on will raise EBADF
+       trap(:QUIT) { HttpServer::LISTENERS.map! { |s| s.close rescue nil } }
+       [:TERM, :INT].each { |sig| trap(sig) { exit!(0) } } # instant shutdown
+       logger.info "Rainbows! #@use worker_connections=#@worker_connections"
+     end
+
      # once a client is accepted, it is processed in its entirety here
      # in 3 easy steps: read request, call app, write app response
      def process_client(client)
        buf = client.readpartial(CHUNK_SIZE)
        hp = HttpParser.new
        env = {}
+       alive = true
        remote_addr = TCPSocket === client ? client.peeraddr.last : LOCALHOST
 
        begin # loop
-         Thread.current[:t] = Time.now
          while ! hp.headers(env, buf)
            buf << client.readpartial(CHUNK_SIZE)
          end
@@ -42,9 +62,10 @@ module Rainbows
          response = app.call(env)
        end
 
-       out = [ hp.keepalive? ? CONN_ALIVE : CONN_CLOSE ] if hp.headers?
+       alive = hp.keepalive? && ! Thread.current[:quit]
+       out = [ alive ? CONN_ALIVE : CONN_CLOSE ] if hp.headers?
        HttpResponse.write(client, response, out)
-     end while hp.keepalive? and hp.reset.nil? and env.clear
+     end while alive and hp.reset.nil? and env.clear
      client.close
    # if we get any error, try to write something back to the client
    # assuming we haven't closed the socket, but don't get hung up
@@ -60,6 +81,22 @@ module Rainbows
      logger.error e.backtrace.join("\n")
    end
 
+   def join_threads(threads, worker)
+     logger.info "Joining threads..."
+     threads.each { |thr| thr[:quit] = true }
+     t0 = Time.now
+     timeleft = timeout * 2.0
+     m = 0
+     while (nr = threads.count { |thr| thr.alive? }) > 0 && timeleft > 0
+       threads.each { |thr|
+         worker.tmp.chmod(m = 0 == m ? 1 : 0)
+         thr.join(1)
+         break if (timeleft -= (Time.now - t0)) < 0
+       }
+     end
+     logger.info "Done joining threads. #{nr} left running"
+   end
+
    def self.included(klass)
      klass.const_set :LISTENERS, HttpServer::LISTENERS
    end
data/lib/rainbows/const.rb CHANGED
@@ -3,16 +3,11 @@
  module Rainbows
 
    module Const
-     RAINBOWS_VERSION = '0.1.1'
+     RAINBOWS_VERSION = '0.2.0'
 
      include Unicorn::Const
 
      RACK_DEFAULTS = ::Unicorn::HttpRequest::DEFAULTS.merge({
-
-       # we need to observe many of the rules for thread-safety even
-       # with Revactor or Rev, so we're considered multithread-ed even
-       # when we're not technically...
-       "rack.multithread" => true,
        "SERVER_SOFTWARE" => "Rainbows! #{RAINBOWS_VERSION}",
      })
 
data/lib/rainbows/http_server.rb CHANGED
@@ -31,7 +31,7 @@ module Rainbows
      Module === mod or
        raise ArgumentError, "concurrency model #{model.inspect} not supported"
      extend(mod)
-     @use = model
+     Const::RACK_DEFAULTS['rainbows.model'] = @use = model
    end
 
    def worker_connections(*args)
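
Since this change publishes the chosen model through the "rainbows.model" key in RACK_DEFAULTS (and thus the Rack environment), an application can inspect it at request time; a hypothetical config.ru sketch, for illustration only:

    # config.ru (hypothetical) -- read the new "rainbows.model" env key
    run lambda { |env|
      model = env['rainbows.model'] # e.g. :Rev, :Revactor, :ThreadPool
      [ 200, { 'Content-Type' => 'text/plain' },
        [ "concurrency model: #{model.inspect}\n" ] ]
    }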
data/lib/rainbows/rev.rb ADDED
@@ -0,0 +1,174 @@
+ # -*- encoding: binary -*-
+ require 'rev'
+
+ # workaround revactor 0.1.4 still using the old Rev::Buffer
+ # ref: http://rubyforge.org/pipermail/revactor-talk/2009-October/000034.html
+ defined?(Rev::Buffer) or Rev::Buffer = IO::Buffer
+
+ module Rainbows
+
+   # Implements a basic single-threaded event model with
+   # {Rev}[http://rev.rubyforge.org/]. It is capable of handling
+   # thousands of simultaneous client connections, but with only a
+   # single-threaded app dispatch. It is suited for slow clients and
+   # fast applications (applications that do not have slow network
+   # dependencies). It does not require your Rack application to
+   # be reentrant or thread-safe.
+   #
+   # Compatibility: Whatever \Rev itself supports, currently Ruby
+   # 1.8/1.9.
+   #
+   # This model does not implement a streaming "rack.input" which
+   # allows the Rack application to process data as it arrives. This
+   # means "rack.input" will be fully buffered in memory or to a
+   # temporary file before the application is entered.
+   #
+   # Caveats: this model can buffer all output for slow clients in
+   # memory. This can be a problem if your application generates large
+   # responses (including static files served with Rack) as it will cause
+   # the memory footprint of your process to explode. If your workers
+   # seem to be eating a lot of memory from this, consider the
+   # {mall}[http://bogomips.org/mall/] library which allows access to the
+   # mallopt(3) function from Ruby.
+
+   module Rev
+
+     # global vars because class/instance variables are confusing me :<
+     # this struct is only accessed inside workers and thus private to each
+     G = Struct.new(:nr, :max, :logger, :alive, :app).new
+
+     include Base
+
+     class Client < ::Rev::IO
+       include Unicorn
+       include Rainbows::Const
+       G = Rainbows::Rev::G
+
+       def initialize(io)
+         G.nr += 1
+         super(io)
+         @remote_addr = ::TCPSocket === io ? io.peeraddr.last : LOCALHOST
+         @env = {}
+         @hp = HttpParser.new
+         @state = :headers # [ :body [ :trailers ] ] :app_call :close
+         @buf = ""
+       end
+
+       def handle_error(e)
+         msg = case e
+         when EOFError,Errno::ECONNRESET,Errno::EPIPE,Errno::EINVAL,Errno::EBADF
+           ERROR_500_RESPONSE
+         when HttpParserError # try to tell the client they're bad
+           ERROR_400_RESPONSE
+         else
+           G.logger.error "Read error: #{e.inspect}"
+           G.logger.error e.backtrace.join("\n")
+           ERROR_500_RESPONSE
+         end
+         write(msg)
+       ensure
+         @state = :close
+       end
+
+       def app_call
+         @input.rewind
+         @env[RACK_INPUT] = @input
+         @env[REMOTE_ADDR] = @remote_addr
+         response = G.app.call(@env.update(RACK_DEFAULTS))
+         alive = @hp.keepalive? && G.alive
+         out = [ alive ? CONN_ALIVE : CONN_CLOSE ] if @hp.headers?
+         HttpResponse.write(self, response, out)
+         if alive
+           @env.clear
+           @hp.reset
+           @state = :headers
+         else
+           @state = :close
+         end
+       end
+
+       def on_write_complete
+         :close == @state and close
+       end
+
+       def on_close
+         G.nr -= 1
+       end
+
+       def tmpio
+         io = Util.tmpio
+         def io.size
+           # already sync=true at creation, so no need to flush before stat
+           stat.size
+         end
+         io
+       end
+
+       # TeeInput doesn't map too well to this right now...
+       def on_read(data)
+         case @state
+         when :headers
+           @hp.headers(@env, @buf << data) or return
+           @state = :body
+           len = @hp.content_length
+           if len == 0
+             @input = HttpRequest::NULL_IO
+             app_call # common case
+           else # nil or len > 0
+             # since we don't do streaming input, we have no choice but
+             # to take over 100-continue handling from the Rack application
+             if @env[HTTP_EXPECT] =~ /\A100-continue\z/i
+               write(EXPECT_100_RESPONSE)
+               @env.delete(HTTP_EXPECT)
+             end
+             @input = len && len <= MAX_BODY ? StringIO.new("") : tmpio
+             @hp.filter_body(@buf2 = @buf.dup, @buf)
+             @input << @buf2
+             on_read("")
+           end
+         when :body
+           if @hp.body_eof?
+             @state = :trailers
+             on_read(data)
+           elsif data.size > 0
+             @hp.filter_body(@buf2, @buf << data)
+             @input << @buf2
+             on_read("")
+           end
+         when :trailers
+           @hp.trailers(@env, @buf << data) and app_call
+         end
+       rescue Object => e
+         handle_error(e)
+       end
+     end
+
+     class Server < ::Rev::IO
+       G = Rainbows::Rev::G
+
+       def on_readable
+         return if G.nr >= G.max
+         begin
+           Client.new(@_io.accept_nonblock).attach(::Rev::Loop.default)
+         rescue Errno::EAGAIN, Errno::ECONNABORTED
+         end
+       end
+
+     end
+
+     # runs inside each forked worker, this sits around and waits
+     # for connections and doesn't die until the parent dies (or is
+     # given an INT, QUIT, or TERM signal)
+     def worker_loop(worker)
+       init_worker_process(worker)
+       G.nr = 0
+       G.max = worker_connections
+       G.alive = true
+       G.logger = logger
+       G.app = app
+       LISTENERS.map! { |s| Server.new(s).attach(::Rev::Loop.default) }
+       ::Rev::Loop.default.run
+     end
+
+   end
+ end