puma_worker_killer 0.0.5 → 0.0.6

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA1:
- metadata.gz: b5655d10238db0d4a493057719a99621386373a7
- data.tar.gz: 853a84a27a47ea134bc5812fa7a6cd4f0e2de231
+ metadata.gz: 948c271c454d3a1dc1b2bccaf270bcba98de73f8
+ data.tar.gz: 14b5ed3e5b5c28b03df31e23dc7a1047d1965a30
  SHA512:
- metadata.gz: 8e43e8ea674c6afafa5e693ffbcf12cb07785ee1176e3b33e30f00faa536fe277903782138172a285b4e44dc264e033358b6693308d84b3dca6bc6159130b687
- data.tar.gz: 7f2575bbde18a5250049b7af098902bed20baf08ac5e7bcbb86443b3b2d3adb6c17a3a5acaa14cbd0c8ec7380da8ef3492fa347132e7b33fb197893f9a030168
+ metadata.gz: d47c3c52bca9383936074705a88ef19e62751c183b0f086a1f004970655e5bf3b9bc934a2706013a4af3a69b160ad986371dfa36105b52acee0d6a981481f6a1
+ data.tar.gz: 683b00d578d35bc61b3450acbfb2276f438208b99f50e7951650c2348d2bdc0fc2570a5bf4f52b446131e80dc61ff21d39821c87c1cd6fe3e2768ff20cd2e58a
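To confirm the new checksums above, here is a minimal Ruby sketch (an illustration, not part of the diff; it assumes `puma_worker_killer-0.0.6.gem` has already been downloaded and its `metadata.gz` and `data.tar.gz` entries extracted into the current directory):

```ruby
require "digest"

# Print digests for comparison against the values recorded in checksums.yaml above.
%w[metadata.gz data.tar.gz].each do |entry|
  puts "#{entry} SHA1:   #{Digest::SHA1.file(entry).hexdigest}"
  puts "#{entry} SHA512: #{Digest::SHA512.file(entry).hexdigest}"
end
```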
@@ -3,12 +3,12 @@ rvm:
  - 1.9.3
  - 2.0.0
  - 2.1.0
+ - 2.2.4
+ - 2.3.0
  - ruby-head
- - jruby-19mode
- - rbx-19mode
+ - rbx

  matrix:
  allow_failures:
  - rvm: ruby-head
- - rvm: rbx-19mode
- - rvm: jruby-19mode
+ - rvm: rbx
@@ -1,3 +1,7 @@
+ ## 0.0.6
+
+ - Log PID of worker insead of inspecting the worker #33
+
  ## 0.0.5

  - Support for Puma 3.x
data/README.md CHANGED
@@ -9,7 +9,7 @@ If you have a memory leak in your code, finding and plugging it can be a hercule

  Puma worker killer can only function if you have enabled cluster mode or hybrid mode (threads + worker cluster). If you are only using threads (and not workers) then puma worker killer cannot help keep your memory in control.

- BTW restarting your processes to controll memory is like putting a bandaid on a gunshot wound, try figuring out the reason you're seeing so much memory bloat [derailed benchmarks](https://github.com/schneems/derailed_benchmarks) can help.
+ BTW restarting your processes to control memory is like putting a bandaid on a gunshot wound, try figuring out the reason you're seeing so much memory bloat [derailed benchmarks](https://github.com/schneems/derailed_benchmarks) can help.


  ## Install
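For context on the install step referenced above, a minimal Gemfile sketch (illustrative only, not part of the diff; the version constraint simply pins the release this page describes):

```ruby
# Gemfile
gem "puma"
gem "puma_worker_killer", "~> 0.0.6"
```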
@@ -52,7 +52,7 @@ end
  PumaWorkerKiller.start
  ```

- ## Attention
+ ## Attention
  If you start puma as a daemon, to add puma worker killer config into puma config file, rather than into initializers:
  Sample like this: (in puma.rb file)
  ```ruby
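# Illustrative sketch, not taken from this diff: the README's puma.rb sample is
# truncated above, so this shows a typical before_fork block; all values here
# are assumptions chosen for the example.
before_fork do
  require 'puma_worker_killer'

  PumaWorkerKiller.config do |config|
    config.ram                       = 1024      # mb
    config.frequency                 = 5         # seconds
    config.percent_usage             = 0.98
    config.rolling_restart_frequency = 12 * 3600 # 12 hours in seconds
  end
  PumaWorkerKiller.start
end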
@@ -98,7 +98,7 @@ By default PumaWorkerKiller will perform a rolling restart of all your worker pr
  If you're running on a platform like [Heroku where it is difficult to measure RAM from inside of a container accurately](https://github.com/schneems/get_process_mem/issues/7), you may want to disable the "worker killer" functionality and only use the rolling restart. You can do that by running:

  ```ruby
- PumaWorkerKiller.enable_rolling_restart
+ PumaWorkerKiller.enable_rolling_restart # Default is every 6 hours
  ```

  or you can pass in the restart frequency
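To illustrate the frequency argument mentioned in the line above (a sketch, not part of the diff; the 12-hour value is an arbitrary example):

```ruby
PumaWorkerKiller.enable_rolling_restart(12 * 3600) # restart frequency in seconds (12 hours)
```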
@@ -7,7 +7,7 @@ module PumaWorkerKiller
  self.ram = 512 # mb
  self.frequency = 10 # seconds
  self.percent_usage = 0.99 # percent of RAM to use
- self.rolling_restart_frequency = 6 * 3600
+ self.rolling_restart_frequency = 6 * 3600 # 6 hours in seconds

  def config
  yield self
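Since `config` simply yields the module itself, the defaults above can also be overridden directly on `PumaWorkerKiller` before it starts; a brief sketch (the specific values are assumptions for illustration):

```ruby
require 'puma_worker_killer'

# Equivalent to setting these inside a PumaWorkerKiller.config block.
PumaWorkerKiller.frequency                 = 20        # check memory every 20 seconds
PumaWorkerKiller.rolling_restart_frequency = 12 * 3600 # roll workers every 12 hours
PumaWorkerKiller.start
```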
@@ -13,10 +13,10 @@ module PumaWorkerKiller
  def reap
  return false unless @cluster.running?
  if (total = get_total_memory) > @max_ram
- @cluster.master.log "PumaWorkerKiller: Out of memory. #{@cluster.workers.count} workers consuming total: #{total} mb out of max: #{@max_ram} mb. Sending TERM to #{@cluster.largest_worker.inspect} consuming #{@cluster.largest_worker_memory} mb."
+ @cluster.master.log "PumaWorkerKiller: Out of memory. #{@cluster.workers.count} workers consuming total: #{total} mb out of max: #{@max_ram} mb. Sending TERM to pid #{@cluster.largest_worker.pid} consuming #{@cluster.largest_worker_memory} mb."
  @cluster.term_largest_worker
  else
- @cluster.master.log "PumaWorkerKiller: Consuming #{total} mb with master and #{@cluster.workers.count} workers"
+ @cluster.master.log "PumaWorkerKiller: Consuming #{total} mb with master and #{@cluster.workers.count} workers."
  end
  end
  end
@@ -12,7 +12,7 @@ module PumaWorkerKiller
  def reap(wait_between_worker_kill = 60) # seconds
  return false unless @cluster.running?
  @cluster.workers.each do |worker, ram|
- @cluster.master.log "PumaWorkerKiller: Rolling Restart. #{@cluster.workers.count} workers consuming total: #{ get_total_memory } mb out of max: #{@max_ram} mb. Sending TERM to #{worker.inspect}"
+ @cluster.master.log "PumaWorkerKiller: Rolling Restart. #{@cluster.workers.count} workers consuming total: #{ get_total_memory } mb out of max: #{@max_ram} mb. Sending TERM to pid #{worker.pid}."
  worker.term
  sleep wait_between_worker_kill
  end
@@ -1,3 +1,3 @@
  module PumaWorkerKiller
- VERSION = "0.0.5"
+ VERSION = "0.0.6"
  end
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: puma_worker_killer
  version: !ruby/object:Gem::Version
- version: 0.0.5
+ version: 0.0.6
  platform: ruby
  authors:
  - Richard Schneeman
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2016-03-02 00:00:00.000000000 Z
+ date: 2016-04-01 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
  name: puma