puma_worker_killer 0.0.7 → 0.1.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA1:
- metadata.gz: 20389b0e2ba67ab795071554a0e52b6599cd3612
- data.tar.gz: 0789e6a76e4c12394df935bd9b16b503d6990a62
+ metadata.gz: 83921642be90494f9eb9988caff6f2397d2ff761
+ data.tar.gz: 0427e91feb9357ab8637739fac169edab633c8a1
  SHA512:
- metadata.gz: 62aa47f0f5032df0d0887e8001143059b4107fe1871932b545d4e56820c0012260ea73ffb77aed9be3fe83a1a3ec08b2f23633a05f96d8375885b3cdf1e05454
- data.tar.gz: 30fedd4919738c70432d7de4861089c03212c651805d4f564cd83e16d46f3b1d9bfc60ce44ae03a71cd1dff15b7547fbfb757b6b26d5e894f577ab0765669cad
+ metadata.gz: f87c5688cb637837c82182cad0eb912a5bcdfde7aab9fc8bd892154d51cb5c78019ea85a9edcaceba2cbb103e8062e68d8c9e500b29073d32bf7b22fafad56cc
+ data.tar.gz: e05e97981e2adf0e319b0ba92ff19f77e84a4f6c47906b7114fda2cdc18dce44e3476dd033f1995e076a073f25986fa74e342ea658f54b8dd7ba4a822af97998
data/CHANGELOG.md CHANGED
@@ -1,3 +1,7 @@
+ ## 0.1.0
+
+ - Emit extra data via `pre_term` callback before puma worker killer terminates a worker #49.
+
  ## 0.0.7
 
  - Logging is configurable #41
data/README.md CHANGED
@@ -34,6 +34,8 @@ Then run `$ bundle install`
  A rolling restart will kill each of your workers on a rolling basis. You set the frequency at which it conducts the restart. This is a simple way to keep memory down, as Ruby web programs generally increase memory usage over time. If you're using Heroku, [it is difficult to measure RAM from inside of a container accurately](https://github.com/schneems/get_process_mem/issues/7), so it is recommended to use this feature or a [log-drain-based worker killer](https://github.com/arches/whacamole). You can enable rolling restarts by running:
 
  ```ruby
+ # config/puma.rb
+
  before_fork do
    require 'puma_worker_killer'
 
@@ -103,10 +105,30 @@ PumaWorkerKiller.config do |config|
  config.rolling_restart_frequency = 12 * 3600 # 12 hours in seconds
  config.reaper_status_logs = true # setting this to false will not log lines like:
  # PumaWorkerKiller: Consuming 54.34765625 mb with master and 2 workers.
+
+ config.pre_term = -> (worker) { puts "Worker #{worker.inspect} being killed" }
  end
  PumaWorkerKiller.start
  ```
 
+ ### pre_term
+
+ `config.pre_term` will be called just prior to worker termination, with the worker that is about to be terminated. This may be useful for keeping track of metrics, the time of day workers are restarted, etc.
+
+ By default, Puma Worker Killer will emit a log line when a worker is being killed:
+
+ ```
+ PumaWorkerKiller: Out of memory. 5 workers consuming total: 500 mb out of max: 450 mb. Sending TERM to pid 23 consuming 53 mb.
+ ```
+
+ or
+
+ ```
+ PumaWorkerKiller: Rolling Restart. 5 workers consuming total: 650 mb. Sending TERM to pid 34.
+ ```
+
+ However, you may want to collect more data, such as sending an event to an error-collection service like Rollbar or Airbrake. The `pre_term` lambda gets called before any worker is killed by PWK for any reason.
+
  ## Attention
 
  If you start Puma as a daemon, add the Puma Worker Killer config to the Puma config file rather than to an initializer:
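
Picking up the `### pre_term` section above: a minimal sketch of the kind of reporting it suggests. `ErrorTracker.notify` is a hypothetical placeholder for whatever Rollbar/Airbrake-style client the application already has; the only PWK calls used are `PumaWorkerKiller.config`, `config.pre_term`, and `PumaWorkerKiller.start` from the diff above, and `worker.pid` mirrors the pid PWK itself logs.

```ruby
# config/initializers/puma_worker_killer.rb -- illustrative sketch only
require 'puma_worker_killer'

PumaWorkerKiller.config do |config|
  # Called with the worker object that PWK is about to TERM.
  config.pre_term = lambda do |worker|
    # ErrorTracker is a stand-in for your own reporting client.
    ErrorTracker.notify(
      "PumaWorkerKiller terminating worker",
      pid: worker.pid,
      at:  Time.now.utc
    )
  end
end
PumaWorkerKiller.start
```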
data/lib/puma_worker_killer.rb CHANGED
@@ -3,19 +3,20 @@ require 'get_process_mem'
  module PumaWorkerKiller
  extend self
 
- attr_accessor :ram, :frequency, :percent_usage, :rolling_restart_frequency, :reaper_status_logs
+ attr_accessor :ram, :frequency, :percent_usage, :rolling_restart_frequency, :reaper_status_logs, :pre_term
  self.ram = 512 # mb
  self.frequency = 10 # seconds
  self.percent_usage = 0.99 # percent of RAM to use
  self.rolling_restart_frequency = 6 * 3600 # 6 hours in seconds
  self.reaper_status_logs = true
+ self.pre_term = lambda { |_| } # nop
 
  def config
    yield self
  end
 
- def reaper(ram = self.ram, percent = self.percent_usage, reaper_status_logs = self.reaper_status_logs)
-   Reaper.new(ram * percent_usage, nil, reaper_status_logs)
+ def reaper(ram = self.ram, percent = self.percent_usage, reaper_status_logs = self.reaper_status_logs, pre_term = self.pre_term)
+   Reaper.new(ram * percent_usage, nil, reaper_status_logs, pre_term)
  end
 
  def start(frequency = self.frequency, reaper = self.reaper)
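
As a reading aid (not part of the gem's docs), a minimal sketch of how the new `pre_term` accessor flows through `PumaWorkerKiller.reaper` into `Reaper.new`; the concrete values below are made up for illustration.

```ruby
require 'puma_worker_killer'

PumaWorkerKiller.config do |config|
  config.ram           = 1024                                    # mb (example value)
  config.percent_usage = 0.98                                    # example value
  config.pre_term      = ->(worker) { warn "TERM #{worker.pid}" }
end

# With the configuration above and reaper_status_logs left at its default,
# this builds Reaper.new(1024 * 0.98, nil, true, <the lambda above>),
# i.e. the configured pre_term rides along as the reaper's fourth argument.
reaper = PumaWorkerKiller.reaper
```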
data/lib/puma_worker_killer/puma_memory.rb CHANGED
@@ -12,6 +12,10 @@ module PumaWorkerKiller
  workers.size
  end
 
+ def term_worker(worker)
+   worker.term
+ end
+
  def term_largest_worker
  largest_worker.term
  # Process.wait(largest_worker.pid)
@@ -78,4 +82,4 @@ module PumaWorkerKiller
  end
  end
  end
- end
+ end
data/lib/puma_worker_killer/reaper.rb CHANGED
@@ -1,9 +1,10 @@
  module PumaWorkerKiller
  class Reaper
- def initialize(max_ram, master = nil, reaper_status_logs = true)
+ def initialize(max_ram, master = nil, reaper_status_logs = true, pre_term)
  @cluster = PumaWorkerKiller::PumaMemory.new(master)
  @max_ram = max_ram
  @reaper_status_logs = reaper_status_logs
+ @pre_term = pre_term
  end
 
  # used for tes
@@ -15,7 +16,18 @@ module PumaWorkerKiller
  return false if @cluster.workers_stopped?
  if (total = get_total_memory) > @max_ram
  @cluster.master.log "PumaWorkerKiller: Out of memory. #{@cluster.workers.count} workers consuming total: #{total} mb out of max: #{@max_ram} mb. Sending TERM to pid #{@cluster.largest_worker.pid} consuming #{@cluster.largest_worker_memory} mb."
- @cluster.term_largest_worker
+
+ # Fetch the largest_worker so that both `@pre_term` and `term_worker` are called with the same worker.
+ # This avoids a race condition where:
+ #   Worker A consumes 100 mb of memory
+ #   Worker B consumes 99 mb of memory
+ #   pre_term gets called with Worker A
+ #   A new request comes in, Worker B takes it and consumes 101 mb of memory
+ #   term_largest_worker (previously called here) terms Worker B, so the worker that is actually terminated is never passed to `@pre_term`
+ largest_worker = @cluster.largest_worker
+ @pre_term.call(largest_worker)
+ @cluster.term_worker(largest_worker)
+
  elsif @reaper_status_logs
  @cluster.master.log "PumaWorkerKiller: Consuming #{total} mb with master and #{@cluster.workers.count} workers."
  end
data/lib/puma_worker_killer/rolling_restart.rb CHANGED
@@ -12,7 +12,7 @@ module PumaWorkerKiller
  def reap(wait_between_worker_kill = 60) # seconds
  return false unless @cluster.running?
  @cluster.workers.each do |worker, ram|
- @cluster.master.log "PumaWorkerKiller: Rolling Restart. #{@cluster.workers.count} workers consuming total: #{ get_total_memory } mb out of max: #{@max_ram} mb. Sending TERM to pid #{worker.pid}."
+ @cluster.master.log "PumaWorkerKiller: Rolling Restart. #{@cluster.workers.count} workers consuming total: #{ get_total_memory } mb. Sending TERM to pid #{worker.pid}."
  worker.term
  sleep wait_between_worker_kill
  end
data/lib/puma_worker_killer/version.rb CHANGED
@@ -1,3 +1,3 @@
  module PumaWorkerKiller
- VERSION = "0.0.7"
+ VERSION = "0.1.0"
  end
data/test/fixtures/pre_term.ru ADDED
@@ -0,0 +1,8 @@
+ load File.expand_path("../fixture_helper.rb", __FILE__)
+
+ PumaWorkerKiller.config do |config|
+   config.pre_term = lambda { |worker| puts("About to terminate worker: #{worker.inspect}") }
+ end
+ PumaWorkerKiller.start
+
+ run HelloWorldApp
data/test/puma_worker_killer_test.rb CHANGED
@@ -33,6 +33,18 @@ class PumaWorkerKillerTest < Test::Unit::TestCase
  end
  end
 
+ def test_pre_term
+   file = fixture_path.join("pre_term.ru")
+   port = 0
+   command = "bundle exec puma #{ file } -t 1:1 -w 2 --preload --debug -p #{ port }"
+   options = { wait_for: "booted", timeout: 5, env: { "PUMA_FREQUENCY" => 1, 'PUMA_RAM' => 1 } }
+
+   WaitForIt.new(command, options) do |spawn|
+     assert_contains(spawn, "Out of memory")
+     assert_contains(spawn, "About to terminate worker:") # defined in pre_term.ru
+   end
+ end
+
  def assert_contains(spawn, string)
  assert spawn.wait(string), "Expected logs to contain '#{string}' but it did not, contents: #{ spawn.log.read }"
  end
@@ -50,4 +62,3 @@ class PumaWorkerKillerTest < Test::Unit::TestCase
  end
  end
  end
-
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: puma_worker_killer
  version: !ruby/object:Gem::Version
- version: 0.0.7
+ version: 0.1.0
  platform: ruby
  authors:
  - Richard Schneeman
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2016-10-13 00:00:00.000000000 Z
+ date: 2017-05-12 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
  name: puma
@@ -124,6 +124,7 @@ files:
  - test/fixtures/config/puma_worker_killer_start.rb
  - test/fixtures/default.ru
  - test/fixtures/fixture_helper.rb
+ - test/fixtures/pre_term.ru
  - test/fixtures/rolling_restart.ru
  - test/puma_worker_killer_test.rb
  - test/test_helper.rb
@@ -147,7 +148,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
  version: '0'
  requirements: []
  rubyforge_project:
- rubygems_version: 2.6.4
+ rubygems_version: 2.6.11
  signing_key:
  specification_version: 4
  summary: If you have a memory leak in your web code puma_worker_killer can keep it
@@ -157,7 +158,7 @@ test_files:
  - test/fixtures/config/puma_worker_killer_start.rb
  - test/fixtures/default.ru
  - test/fixtures/fixture_helper.rb
+ - test/fixtures/pre_term.ru
  - test/fixtures/rolling_restart.ru
  - test/puma_worker_killer_test.rb
  - test/test_helper.rb
- has_rdoc: