autoscaler 0.9.0 → 0.10.0

Files changed (35)
  1. checksums.yaml +4 -4
  2. data/CHANGELOG.md +10 -0
  3. data/Guardfile +4 -6
  4. data/README.md +20 -14
  5. data/examples/complex.rb +7 -3
  6. data/examples/simple.rb +4 -2
  7. data/lib/autoscaler/binary_scaling_strategy.rb +1 -1
  8. data/lib/autoscaler/heroku_platform_scaler.rb +84 -0
  9. data/lib/autoscaler/ignore_scheduled_and_retrying.rb +5 -0
  10. data/lib/autoscaler/linear_scaling_strategy.rb +6 -2
  11. data/lib/autoscaler/sidekiq/celluloid_monitor.rb +3 -2
  12. data/lib/autoscaler/sidekiq/client.rb +1 -1
  13. data/lib/autoscaler/sidekiq/entire_queue_system.rb +13 -2
  14. data/lib/autoscaler/sidekiq/monitor_middleware_adapter.rb +7 -4
  15. data/lib/autoscaler/sidekiq/sleep_wait_server.rb +2 -2
  16. data/lib/autoscaler/sidekiq/specified_queue_system.rb +13 -3
  17. data/lib/autoscaler/version.rb +1 -1
  18. data/spec/autoscaler/binary_scaling_strategy_spec.rb +2 -2
  19. data/spec/autoscaler/counter_cache_memory_spec.rb +3 -3
  20. data/spec/autoscaler/counter_cache_redis_spec.rb +6 -6
  21. data/spec/autoscaler/delayed_shutdown_spec.rb +4 -4
  22. data/spec/autoscaler/heroku_platform_scaler_spec.rb +47 -0
  23. data/spec/autoscaler/heroku_scaler_spec.rb +8 -8
  24. data/spec/autoscaler/ignore_scheduled_and_retrying_spec.rb +4 -4
  25. data/spec/autoscaler/linear_scaling_strategy_spec.rb +20 -14
  26. data/spec/autoscaler/sidekiq/activity_spec.rb +4 -4
  27. data/spec/autoscaler/sidekiq/celluloid_monitor_spec.rb +3 -3
  28. data/spec/autoscaler/sidekiq/client_spec.rb +5 -5
  29. data/spec/autoscaler/sidekiq/entire_queue_system_spec.rb +11 -11
  30. data/spec/autoscaler/sidekiq/monitor_middleware_adapter_spec.rb +2 -2
  31. data/spec/autoscaler/sidekiq/sleep_wait_server_spec.rb +21 -21
  32. data/spec/autoscaler/sidekiq/specified_queue_system_spec.rb +10 -10
  33. data/spec/spec_helper.rb +4 -2
  34. data/spec/test_system.rb +6 -0
  35. metadata +51 -12
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA1:
-   metadata.gz: c1870d13f3e44b1d9b2267f41fb8f5d94c94c6ec
-   data.tar.gz: 0a7662b031e66c6d9967567cb7612791fe6914ff
+   metadata.gz: 3d1fdf02eef4bd9e9ed8ca36ec7a88b0e298ead8
+   data.tar.gz: fd639d4797b01dc4aded9d0d11dc0d386d24f39e
  SHA512:
-   metadata.gz: 10566a1b1762379530e22dbfe1174212c7c23dc4078787fffdb9956d80d3dc33a4d1aba2c9298b1408579fbda1e336e05a9b1eb63124d6bfb48fd8b90de0cc94
-   data.tar.gz: 37fd250217015d0aabf1e99cc7773aacdbf51e6ed57bfbda0a18fe998369459b69fcbcf1126fa467ba1906539e4d20b12134ec944c17c2071849e61c5243f0ff
+   metadata.gz: 43c6d465b6bbbb1978d9ace6e5adc60db0a4c4aac72b206706f10397121dafc9ad2c59edd1a02395a10df8b01927abf445977be997d60bf0d19e172ed24aa85b
+   data.tar.gz: 27fdfa89b504de1b90a735929e96d9c96e7fd0f84e2b8d80ea05a2ec58d786af71f726109649dfa220b7d9a5cd038f34dc591d28a9218587c073a2a6b8fb9961
data/CHANGELOG.md CHANGED
@@ -1,5 +1,15 @@
  # Changelog

+ ## 0.10.0
+
+ - Require Sidekiq 3.5
+ - You may use `HerokuPlatformScaler` and `HEROKU_ACCESS_TOKEN` in place of `HerokuScaler` and `HEROKU_API_KEY`
+ - QueueSystem#workers returns the number of engaged Sidekiq processes.
+ - Linear Scaling Strategy will not scale down past the number of active workers. Assumes a 1-1 Sidekiq process/dyno mapping.
+ - Calls the Sidekiq quiet API when shutting down
+ - Count workers currently running (Joel Van Horn)
+ - Update gems and use RSpec expect syntax (givigier)
+
  ## 0.9.0

  - CounterCacheRedis.new now takes a third parameter `worker_type`, a string used in the
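The second bullet is a drop-in substitution. A minimal migration sketch for a Sidekiq initializer, assuming the README's single `worker` process type and `default` queue:

```ruby
# 0.9.x used the heroku-api gem and HEROKU_API_KEY:
#   require 'autoscaler/heroku_scaler'
#   scaler = Autoscaler::HerokuScaler.new('worker', ENV['HEROKU_API_KEY'], ENV['HEROKU_APP'])

# 0.10.0 uses the platform-api gem and an OAuth token:
require 'autoscaler/sidekiq'
require 'autoscaler/heroku_platform_scaler'

scaler = Autoscaler::HerokuPlatformScaler.new(
  'worker',                    # Heroku process type to scale
  ENV['HEROKU_ACCESS_TOKEN'],  # OAuth token instead of the old API key
  ENV['HEROKU_APP'])

Sidekiq.configure_client do |config|
  config.client_middleware do |chain|
    chain.add Autoscaler::Sidekiq::Client, 'default' => scaler
  end
end
```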
data/Guardfile CHANGED
@@ -4,11 +4,9 @@ end

  tag = "--tag #{ENV['TAG']}" if ENV['TAG']
  example = "--example '#{ENV['EXAMPLE']}'" if ENV['EXAMPLE']
- %w(sidekiq-2 sidekiq-3).each do |appraisal|
- guard :rspec, :cmd => "appraisal #{appraisal} rspec --color --format d #{tag} #{example}" do
- watch(%r{^spec/.+_spec\.rb$})
- watch(%r{^lib/(.+).rb$}) { |m| "spec/#{m[1]}_spec.rb" }
- watch('spec/spec_helper.rb') { "spec" }
- end
+ guard :rspec, :cmd => "rspec --color --format d #{tag} #{example}" do
+ watch(%r{^spec/.+_spec\.rb$})
+ watch(%r{^lib/(.+).rb$}) { |m| "spec/#{m[1]}_spec.rb" }
+ watch('spec/spec_helper.rb') { "spec" }
  end

data/README.md CHANGED
@@ -1,10 +1,10 @@
  # Sidekiq Heroku Autoscaler

- [Sidekiq](https://github.com/mperham/sidekiq) performs background jobs. While it's threading model allows it to scale easier than worker-pre-process background systems, people running test or lightly loaded systems on [Heroku](http://www.heroku.com/) still want to scale down to zero to avoid racking up charges.
+ [Sidekiq](https://github.com/mperham/sidekiq) performs background jobs. While its threading model allows it to scale more easily than worker-per-process background systems, people running test or lightly loaded systems on [Heroku](http://www.heroku.com/) still want to scale down to zero to avoid racking up charges.

  ## Requirements

- Tested on Ruby 1.9.2 and Heroku Cedar stack.
+ Tested on Ruby 2.1.7 and Heroku Cedar stack.

  ## Installation

@@ -12,36 +12,38 @@ Tested on Ruby 1.9.2 and Heroku Cedar stack.

  ## Getting Started

- This gem uses the [Herkou-Api](https://github.com/heroku/heroku.rb) gem, which requires an API key from Heroku. It will also need the heroku app name. By default, these are specified through environment variables. You can also pass them to HerokuScaler explicitly.
+ This gem uses the [Heroku Platform-Api](https://github.com/heroku/platform-api.rb) gem, which requires an OAuth token from Heroku. It will also need the Heroku app name. By default, these are specified through environment variables. You can also pass them to `HerokuPlatformScaler` explicitly.

- HEROKU_API_KEY=.....
+ HEROKU_ACCESS_TOKEN=.....
  HEROKU_APP=....

+ Support is still present for [Heroku-Api](https://github.com/heroku/heroku.rb) via `HerokuScaler` and `HEROKU_API_KEY`, but may be removed in a future major version.
+
  Install the middleware in your `Sidekiq.configure_` blocks

  require 'autoscaler/sidekiq'
- require 'autoscaler/heroku_scaler'
+ require 'autoscaler/heroku_platform_scaler'

  Sidekiq.configure_client do |config|
  config.client_middleware do |chain|
- chain.add Autoscaler::Sidekiq::Client, 'default' => Autoscaler::HerokuScaler.new
+ chain.add Autoscaler::Sidekiq::Client, 'default' => Autoscaler::HerokuPlatformScaler.new
  end
  end

  Sidekiq.configure_server do |config|
  config.server_middleware do |chain|
- chain.add(Autoscaler::Sidekiq::Server, Autoscaler::HerokuScaler.new, 60) # 60 second timeout
+ chain.add(Autoscaler::Sidekiq::Server, Autoscaler::HerokuPlatformScaler.new, 60) # 60 second timeout
  end
  end

  ## Limits and Challenges

- - HerokuScaler includes an attempt at current-worker cache that may be overcomplication, and doesn't work very well on the server
+ - HerokuPlatformScaler includes an attempt at a current-worker cache that may be an overcomplication, and doesn't work very well on the server
  - Multiple scale-down loops may be started, particularly if there are multiple jobs queued when the server comes up. Heroku seems to handle multiple scale-down commands well.
  - The scale-down monitor is triggered on job completion (and server middleware is only run around jobs), so if the server never processes any jobs, it won't turn off.
  - The retry and schedule lists are considered - if you schedule a long-running task, the process will not scale-down.
  - If background jobs trigger jobs in other scaled processes, please note you'll need `config.client_middleware` in your `Sidekiq.configure_server` block in order to scale-up.
- - Exceptions while calling the Heroku API are caught and printed by default. See `HerokuScaler#exception_handler` to override
+ - Exceptions while calling the Heroku API are caught and printed by default. See `HerokuPlatformScaler#exception_handler` to override

  ## Experimental

@@ -57,24 +59,28 @@ You can pass a scaling strategy object instead of the timeout to the server midd

  ### Working caching

- scaler.counter_cache = Autoscaler::CounterCacheRedis(Sidekiq.method(:redis))
+ scaler.counter_cache = Autoscaler::CounterCacheRedis.new(Sidekiq.method(:redis))

  ## Tests

  The project is set up to run RSpec with Guard. It expects a redis instance on a custom port, which is started by the Guardfile.

- The HerokuScaler is not tested by default because it makes live API requests. Specify `HEROKU_APP` and `HEROKU_API_KEY` on the command line, and then watch your app's logs.
+ The HerokuPlatformScaler is not tested by default because it makes live API requests. Specify `HEROKU_APP` and `HEROKU_ACCESS_TOKEN` on the command line, and then watch your app's logs.

- HEROKU_APP=... HEROKU_API_KEY=... guard
+ HEROKU_APP=... HEROKU_ACCESS_TOKEN=... guard
  heroku logs --app ...

  ## Authors

  Justin Love, [@wondible](http://twitter.com/wondible), [https://github.com/JustinLove](https://github.com/JustinLove)

- Ported to Heroku-Api by Fix Peña, [https://github.com/fixr](https://github.com/fixr)
+ ### Contributors

- Retry/schedule sets by Matt Anderson [https://github.com/tonkapark](https://github.com/tonkapark) and Thibaud Guillaume-Gentil [https://github.com/jilion](https://github.com/jilion)
+ - Benjamin Kudria [https://github.com/bkudria](https://github.com/bkudria)
+ - Fix Peña [https://github.com/fixr](https://github.com/fixr)
+ - Gabriel Givigier Guimarães [https://github.com/givigier](https://github.com/givigier)
+ - Matt Anderson [https://github.com/tonkapark](https://github.com/tonkapark)
+ - Thibaud Guillaume-Gentil [https://github.com/jilion](https://github.com/jilion)

  ## Licence

data/examples/complex.rb CHANGED
@@ -1,15 +1,19 @@
  require 'sidekiq'
  require 'autoscaler/sidekiq'
- require 'autoscaler/heroku_scaler'
+ require 'autoscaler/heroku_platform_scaler'
+
+ # This setup is for multiple queues, where each queue has a dedicated process type

  heroku = nil
  if ENV['HEROKU_APP']
  heroku = {}
  scaleable = %w[default import] - (ENV['ALWAYS'] || '').split(' ')
  scaleable.each do |queue|
- heroku[queue] = Autoscaler::HerokuScaler.new(
+ # We are using the convention that worker process type is the
+ # same as the queue name
+ heroku[queue] = Autoscaler::HerokuPlatformScaler.new(
  queue,
- ENV['HEROKU_API_KEY'],
+ ENV['HEROKU_ACCESS_TOKEN'],
  ENV['HEROKU_APP'])
  end
  end
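The hunk above only builds the per-queue scaler hash; the rest of complex.rb's middleware wiring is not part of this diff. The following is therefore just a sketch, following the README's middleware conventions, of how such a hash plugs into the client and server middleware (the `default` queue is illustrative):

```ruby
require 'autoscaler/sidekiq'

# `heroku` is the Hash built above: queue name => HerokuPlatformScaler (nil when run locally)
Sidekiq.configure_client do |config|
  config.client_middleware do |chain|
    chain.add Autoscaler::Sidekiq::Client, heroku if heroku
  end
end

Sidekiq.configure_server do |config|
  config.server_middleware do |chain|
    # Each process type scales itself down after 60 idle seconds, watching only its own queue
    chain.add Autoscaler::Sidekiq::Server, heroku['default'], 60, ['default'] if heroku
  end
end
```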
data/examples/simple.rb CHANGED
@@ -1,10 +1,12 @@
  require 'sidekiq'
  require 'autoscaler/sidekiq'
- require 'autoscaler/heroku_scaler'
+ require 'autoscaler/heroku_platform_scaler'
+
+ # This is set up for a single queue (default) and worker process (worker)

  heroku = nil
  if ENV['HEROKU_APP']
- heroku = Autoscaler::HerokuScaler.new
+ heroku = Autoscaler::HerokuPlatformScaler.new
  #heroku.exception_handler = lambda {|exception| MyApp.logger.error(exception)}
  end

data/lib/autoscaler/binary_scaling_strategy.rb CHANGED
@@ -20,7 +20,7 @@ module Autoscaler

  private
  def active?(system)
- system.queued > 0 || system.scheduled > 0 || system.retrying > 0 || system.workers > 0
+ system.any_work?
  end
  end
  end
data/lib/autoscaler/heroku_platform_scaler.rb ADDED
@@ -0,0 +1,84 @@
+ require 'platform-api'
+ require 'autoscaler/counter_cache_memory'
+
+ module Autoscaler
+ # Wraps the Heroku Platform API to provide just the interface that we need for scaling.
+ class HerokuPlatformScaler
+ # @param [String] type process type this scaler controls
+ # @param [String] token Heroku OAuth access token
+ # @param [String] app Heroku app name
+ def initialize(
+ type = 'worker',
+ token = ENV['HEROKU_ACCESS_TOKEN'],
+ app = ENV['HEROKU_APP'])
+ @client = PlatformAPI.connect_oauth(token)
+ @type = type
+ @app = app
+ @workers = CounterCacheMemory.new
+ end
+
+ attr_reader :app
+ attr_reader :type
+
+ # Read the current worker count (value may be cached)
+ # @return [Numeric] number of workers
+ def workers
+ @workers.counter {@workers.counter = heroku_get_workers}
+ end
+
+ # Set the number of workers (noop if workers the same)
+ # @param [Numeric] n number of workers
+ def workers=(n)
+ unknown = false
+ current = @workers.counter{unknown = true; 1}
+ if n != current || unknown
+ p "Scaling #{type} to #{n}"
+ heroku_set_workers(n)
+ @workers.counter = n
+ end
+ end
+
+ # Callable object which responds to exceptions during api calls #
+ # @example
+ # heroku.exception_handler = lambda {|exception| MyApp.logger.error(exception)}
+ # heroku.exception_handler = lambda {|exception| raise}
+ # # default
+ # lambda {|exception|
+ # p exception
+ # puts exception.backtrace
+ # }
+ attr_writer :exception_handler
+
+ # Object which supports #counter and #counter=
+ # Defaults to CounterCacheMemory
+ def counter_cache=(cache)
+ @workers = cache
+ end
+
+ private
+ attr_reader :client
+
+ def heroku_get_workers
+ client.formation.list(app)
+ .select {|item| item['type'] == type}
+ .map {|item| item['quantity']}
+ .reduce(0, &:+)
+ rescue Excon::Errors::Error => e
+ exception_handler.call(e)
+ 0
+ end
+
+ def heroku_set_workers(n)
+ client.formation.update(app, type, {:quantity => n})
+ rescue Excon::Errors::Error, Heroku::API::Errors::Error => e
+ exception_handler.call(e)
+ end
+
+ def exception_handler
+ @exception_handler ||= lambda {|exception|
+ p exception
+ puts exception.backtrace
+ }
+ end
+ end
+ end
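For orientation, a hedged usage sketch of the new scaler outside the middleware. The token and app name are assumed to come from the environment as in the examples, and the stderr handler and Redis cache are only illustrative options built from the class above and the README's "Working caching" line:

```ruby
require 'sidekiq'
require 'autoscaler/heroku_platform_scaler'
require 'autoscaler/counter_cache_redis'

scaler = Autoscaler::HerokuPlatformScaler.new(
  'worker',                   # Heroku process type in the formation
  ENV['HEROKU_ACCESS_TOKEN'],
  ENV['HEROKU_APP'])

# Optional: report API failures somewhere other than stdout
scaler.exception_handler = lambda {|exception| $stderr.puts exception.message}

# Optional: share the cached worker count across processes via Redis
scaler.counter_cache = Autoscaler::CounterCacheRedis.new(Sidekiq.method(:redis))

scaler.workers      # reads the current formation quantity for 'worker' (may be cached)
scaler.workers = 0  # scales the process type down to zero dynos
```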
data/lib/autoscaler/ignore_scheduled_and_retrying.rb CHANGED
@@ -1,9 +1,14 @@
  module Autoscaler
+ # - Strategy wrapper to ignore scheduled and retrying queues. Usage:
+ # ``new_strategy = IgnoreScheduledAndRetrying.new(my_old_strategy)``
  class IgnoreScheduledAndRetrying
  def initialize(strategy)
  @strategy = strategy
  end

+ # @param [QueueSystem] system interface to the queuing system
+ # @param [Numeric] event_idle_time number of seconds since a job related event
+ # @return [Integer] target number of workers
  def call(system, event_idle_time)
  system.define_singleton_method(:scheduled) { 0 }
  system.define_singleton_method(:retrying) { 0 }
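Per the new usage comment, the wrapper decorates any strategy object. A sketch of combining it with `LinearScalingStrategy` and passing the result to the server middleware in place of a timeout, as the README's experimental section describes (the 5-dyno cap is illustrative):

```ruby
require 'autoscaler/sidekiq'
require 'autoscaler/heroku_platform_scaler'
require 'autoscaler/linear_scaling_strategy'
require 'autoscaler/ignore_scheduled_and_retrying'

# Scale on queued work only; scheduled and retry sets no longer keep dynos alive
strategy = Autoscaler::IgnoreScheduledAndRetrying.new(
  Autoscaler::LinearScalingStrategy.new(5))

Sidekiq.configure_server do |config|
  config.server_middleware do |chain|
    chain.add Autoscaler::Sidekiq::Server, Autoscaler::HerokuPlatformScaler.new, strategy
  end
end
```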
data/lib/autoscaler/linear_scaling_strategy.rb CHANGED
@@ -20,16 +20,20 @@ module Autoscaler

  # Scale requested capacity taking into account the minimum required
  scale_factor = (requested_capacity_percentage - @min_capacity_percentage) / (@total_capacity - @min_capacity_percentage)
+ scale_factor = 0 if scale_factor.nan? # Handle DIVZERO
+
  scaled_capacity_percentage = scale_factor * @total_capacity

  ideal_workers = ([0, scaled_capacity_percentage].max * @max_workers).ceil
+ min_workers = [system.workers, ideal_workers].max # Don't scale down past number of currently engaged workers
+ max_workers = [min_workers, @max_workers].min # Don't scale up past number of max workers

- return [ideal_workers, @max_workers].min
+ return [min_workers, max_workers].min
  end
  end
  private
  def total_work(system)
- system.queued + system.scheduled + system.retrying
+ system.total_work
  end
  end
  end
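A small sketch of the new clamping behaviour, using a bare Struct standing in for the queue system. The inputs mirror the two re-enabled specs further down in this changeset, and only `total_work` and `workers` are consulted by the strategy after this change:

```ruby
require 'autoscaler/linear_scaling_strategy'

# Hypothetical stand-in for the queue system: just the two readings the strategy needs
System = Struct.new(:total_work, :workers)

strategy = Autoscaler::LinearScalingStrategy.new(5, 4) # max 5 dynos, 4 jobs of capacity each

# No queued work, but 2 Sidekiq processes still busy: never drops below the engaged workers
strategy.call(System.new(2, 2), 1)   # => 2

# Lots of work and 6 busy processes: still capped at max_workers
strategy.call(System.new(46, 6), 1)  # => 5
```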
data/lib/autoscaler/sidekiq/celluloid_monitor.rb CHANGED
@@ -1,4 +1,4 @@
- require 'celluloid'
+ require 'celluloid/current'

  module Autoscaler
  module Sidekiq
@@ -6,7 +6,7 @@ module Autoscaler
  class CelluloidMonitor
  include Celluloid

- # @param [scaler] scaler object that actually performs scaling operations (e.g. {HerokuScaler})
+ # @param [scaler] scaler object that actually performs scaling operations (e.g. {HerokuPlatformScaler})
  # @param [Strategy] strategy object that decides the target number of workers (e.g. {BinaryScalingStrategy})
  # @param [System] system interface to the queuing system for use by the strategy
  def initialize(scaler, strategy, system)
@@ -29,6 +29,7 @@ module Autoscaler
  target_workers = @strategy.call(@system, idle_time)
  workers = @scaler.workers = target_workers unless workers == target_workers
  end while workers > 0
+ ::Sidekiq::ProcessSet.new.each(&:quiet!)
  end
  end

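The added line quiets every registered Sidekiq process once the monitor has finished scaling down, so no new jobs are picked up while the dynos shut off. The same call can be issued from a console using Sidekiq's public API; a minimal sketch:

```ruby
require 'sidekiq/api'

# Ask all registered Sidekiq processes to stop fetching new jobs,
# without interrupting jobs that are already running.
Sidekiq::ProcessSet.new.each(&:quiet!)
```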
data/lib/autoscaler/sidekiq/client.rb CHANGED
@@ -6,7 +6,7 @@ module Autoscaler
  # Sidekiq client middleware
  # Performs scale-up when items are queued and there are no workers running
  class Client
- # @param [Hash] scalers map of queue(String) => scaler (e.g. {HerokuScaler}).
+ # @param [Hash] scalers map of queue(String) => scaler (e.g. {HerokuPlatformScaler}).
  # Which scaler to use for each sidekiq queue
  def initialize(scalers)
  @scalers = scalers
data/lib/autoscaler/sidekiq/entire_queue_system.rb CHANGED
@@ -5,9 +5,10 @@ module Autoscaler
  # Interface to interrogate the queuing system
  # Includes every queue
  class EntireQueueSystem
- # @return [Integer] number of worker actively engaged
+ # @return [Integer] number of workers actively engaged
  def workers
- ::Sidekiq::Workers.new.size
+ ::Sidekiq::Workers.new.map {|pid, _, _| pid}.uniq.size
+ # #size may be out-of-date.
  end

  # @return [Integer] amount work ready to go
@@ -25,6 +26,16 @@
  ::Sidekiq::RetrySet.new.size
  end

+ # @return [Boolean] if any kind of work still needs to be done
+ def any_work?
+ queued > 0 || scheduled > 0 || retrying > 0 || workers > 0
+ end
+
+ # @return [Integer] total amount of work
+ def total_work
+ queued + scheduled + retrying + workers
+ end
+
  # @return [Array[String]]
  def queue_names
  sidekiq_queues.keys
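The two new queries give scaling strategies one place to ask about outstanding work. A sketch of a hypothetical custom strategy built on them, following the `call(system, event_idle_time)` protocol used throughout the gem (the ten-jobs-per-dyno ratio is made up for illustration):

```ruby
require 'autoscaler/sidekiq/entire_queue_system'

# Hypothetical strategy: one dyno per 10 outstanding items, at least 1 while anything is pending
class PerTenJobsStrategy
  def call(system, event_idle_time)
    return 0 unless system.any_work?
    [(system.total_work / 10.0).ceil, 1].max
  end
end

system = Autoscaler::Sidekiq::EntireQueueSystem.new
PerTenJobsStrategy.new.call(system, 0) # => 0 when idle, grows with total_work otherwise
```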
data/lib/autoscaler/sidekiq/monitor_middleware_adapter.rb CHANGED
@@ -8,15 +8,17 @@ module Autoscaler
  # Shim to the existing autoscaler interface
  # Starts the monitor and notifies it of job events that may occur while it's sleeping
  class MonitorMiddlewareAdapter
- # @param [scaler] scaler object that actually performs scaling operations (e.g. {HerokuScaler})
+ # @param [scaler] scaler object that actually performs scaling operations (e.g. {HerokuPlatformScaler})
  # @param [Strategy,Numeric] timeout strategy object that determines target workers, or a timeout in seconds to be passed to {DelayedShutdown}+{BinaryScalingStrategy}
  # @param [Array[String]] specified_queues list of queues to monitor to determine if there is work left. Defaults to all sidekiq queues.
  def initialize(scaler, timeout, specified_queues = nil)
  unless monitor
- CelluloidMonitor.supervise_as(:autoscaler_monitor,
+ CelluloidMonitor.supervise :as => :autoscaler_monitor,
+ :args => [
  scaler,
  strategy(timeout),
- QueueSystem.new(specified_queues))
+ QueueSystem.new(specified_queues),
+ ]
  end
  end

@@ -25,7 +27,8 @@ module Autoscaler
  monitor.async.starting_job
  yield
  ensure
- monitor.async.finished_job
+ # monitor might have gone, e.g. if Sidekiq has received SIGTERM
+ monitor.async.finished_job if monitor
  end

  private
data/lib/autoscaler/sidekiq/sleep_wait_server.rb CHANGED
@@ -6,7 +6,7 @@ module Autoscaler
  # Sidekiq server middleware
  # Performs scale-down when the queue is empty
  class SleepWaitServer
- # @param [scaler] scaler object that actually performs scaling operations (e.g. {HerokuScaler})
+ # @param [scaler] scaler object that actually performs scaling operations (e.g. {HerokuPlatformScaler})
  # @param [Numeric] timeout number of seconds to wait before shutdown
  # @param [Array[String]] specified_queues list of queues to monitor to determine if there is work left. Defaults to all sidekiq queues.
  def initialize(scaler, timeout, specified_queues = nil)
@@ -36,7 +36,7 @@
  attr_reader :system

  def pending_work?
- system.queued > 0 || system.scheduled > 0 || system.retrying > 0
+ system.any_work?
  end

  def working!(queue, redis)
data/lib/autoscaler/sidekiq/specified_queue_system.rb CHANGED
@@ -10,11 +10,11 @@ module Autoscaler
  @queue_names = specified_queues
  end

- # @return [Integer] number of worker actively engaged
+ # @return [Integer] number of workers actively engaged
  def workers
- ::Sidekiq::Workers.new.count {|name, work|
+ ::Sidekiq::Workers.new.select {|_, _, work|
  queue_names.include?(work['queue'])
- }
+ }.map {|pid, _, _| pid}.uniq.size
  end

  # @return [Integer] amount work ready to go
@@ -32,6 +32,16 @@
  count_set(::Sidekiq::RetrySet.new)
  end

+ # @return [Boolean] if any kind of work still needs to be done
+ def any_work?
+ queued > 0 || scheduled > 0 || retrying > 0 || workers > 0
+ end
+
+ # @return [Integer] total amount of work
+ def total_work
+ queued + scheduled + retrying + workers
+ end
+
  # @return [Array[String]]
  attr_reader :queue_names

data/lib/autoscaler/version.rb CHANGED
@@ -1,4 +1,4 @@
  module Autoscaler
  # version number
- VERSION = "0.9.0"
+ VERSION = "0.10.0"
  end
data/spec/autoscaler/binary_scaling_strategy_spec.rb CHANGED
@@ -8,12 +8,12 @@ describe Autoscaler::BinaryScalingStrategy do
  it "scales with no work" do
  system = TestSystem.new(0)
  strategy = cut.new
- strategy.call(system, 1).should == 0
+ expect(strategy.call(system, 1)).to eq 0
  end

  it "does not scale with pending work" do
  system = TestSystem.new(1)
  strategy = cut.new(2)
- strategy.call(system, 1).should == 2
+ expect(strategy.call(system, 1)).to eq 2
  end
  end
data/spec/autoscaler/counter_cache_memory_spec.rb CHANGED
@@ -5,17 +5,17 @@ describe Autoscaler::CounterCacheMemory do
  let(:cut) {Autoscaler::CounterCacheMemory}

  it {expect{cut.new.counter}.to raise_error(cut::Expired)}
- it {cut.new.counter{1}.should == 1}
+ it {expect(cut.new.counter{1}).to eq 1}

  it 'set and store' do
  cache = cut.new
  cache.counter = 1
- cache.counter.should == 1
+ expect(cache.counter).to eq 1
  end

  it 'times out' do
  cache = cut.new(0)
  cache.counter = 1
- expect{cache.counter.should}.to raise_error(cut::Expired)
+ expect{cache.counter}.to raise_error(cut::Expired)
  end
  end
data/spec/autoscaler/counter_cache_redis_spec.rb CHANGED
@@ -11,11 +11,11 @@ describe Autoscaler::CounterCacheRedis do
  subject {cut.new(Sidekiq.method(:redis))}

  it {expect{subject.counter}.to raise_error(cut::Expired)}
- it {subject.counter{1}.should == 1}
+ it {expect(subject.counter{1}).to eq 1}

  it 'set and store' do
  subject.counter = 2
- subject.counter.should == 2
+ expect(subject.counter).to eq 2
  end

  it 'does not conflict with multiple worker types' do
@@ -23,7 +23,7 @@ describe Autoscaler::CounterCacheRedis do
  subject.counter = 1
  other_worker_cache.counter = 2

- subject.counter.should == 1
+ expect(subject.counter).to eq 1
  other_worker_cache.counter = 2
  end

@@ -37,13 +37,13 @@ describe Autoscaler::CounterCacheRedis do
  it 'passed a connection pool' do
  cache = cut.new(@redis)
  cache.counter = 4
- cache.counter.should == 4
+ expect(cache.counter).to eq 4
  end

  it 'passed a plain connection' do
- connection = Redis.connect(:url => 'http://localhost:9736', :namespace => 'autoscaler')
+ connection = Redis.connect(:url => 'redis://localhost:9736', :namespace => 'autoscaler')
  cache = cut.new connection
  cache.counter = 5
- cache.counter.should == 5
+ expect(cache.counter).to eq 5
  end
  end
data/spec/autoscaler/delayed_shutdown_spec.rb CHANGED
@@ -7,17 +7,17 @@ describe Autoscaler::DelayedShutdown do

  it "returns normal values" do
  strategy = cut.new(lambda{|s,t| 2}, 0)
- strategy.call(nil, 1).should == 2
+ expect(strategy.call(nil, 1)).to eq 2
  end

  it "delays zeros" do
  strategy = cut.new(lambda{|s,t| 0}, 60)
- strategy.call(nil, 1).should == 1
+ expect(strategy.call(nil, 1)).to eq 1
  end

  it "eventually returns zero" do
  strategy = cut.new(lambda{|s,t| 0}, 60)
- strategy.stub(:level_idle_time).and_return(61)
- strategy.call(nil, 61).should == 0
+ allow(strategy).to receive(:level_idle_time).and_return(61)
+ expect(strategy.call(nil, 61)).to eq 0
  end
  end
data/spec/autoscaler/heroku_platform_scaler_spec.rb ADDED
@@ -0,0 +1,47 @@
+ require 'spec_helper'
+ require 'autoscaler/heroku_platform_scaler'
+
+ describe Autoscaler::HerokuPlatformScaler, :platform_api => true do
+ let(:cut) {Autoscaler::HerokuPlatformScaler}
+ let(:client) {cut.new}
+ subject {client}
+
+ its(:workers) {should eq(0)}
+
+ describe 'scaled' do
+ around do |example|
+ client.workers = 1
+ example.call
+ client.workers = 0
+ end
+
+ its(:workers) {should eq(1)}
+ end
+
+ shared_examples 'exception handler' do |exception_class|
+ before do
+ expect(client).to receive(:client){
+ raise exception_class.new(Exception.new('oops'))
+ }
+ end
+
+ describe "default handler" do
+ it {expect{client.workers}.to_not raise_error}
+ it {expect(client.workers).to eq(0)}
+ it {expect{client.workers = 2}.to_not raise_error}
+ end
+
+ describe "custom handler" do
+ before do
+ @caught = false
+ client.exception_handler = lambda {|exception| @caught = true}
+ end
+
+ it {client.workers; expect(@caught).to be(true)}
+ end
+ end
+
+ describe 'exception handling', :focus => true do
+ it_behaves_like 'exception handler', Excon::Errors::Error
+ end
+ end
data/spec/autoscaler/heroku_scaler_spec.rb CHANGED
@@ -2,33 +2,33 @@ require 'spec_helper'
  require 'autoscaler/heroku_scaler'
  require 'heroku/api/errors'

- describe Autoscaler::HerokuScaler, :online => true do
+ describe Autoscaler::HerokuScaler, :api1 => true do
  let(:cut) {Autoscaler::HerokuScaler}
  let(:client) {cut.new}
  subject {client}

- its(:workers) {should == 0}
+ its(:workers) {should eq(0)}

  describe 'scaled' do
  around do |example|
  client.workers = 1
- example.yield
+ example.call
  client.workers = 0
  end

- its(:workers) {should == 1}
+ its(:workers) {should eq(1)}
  end

  shared_examples 'exception handler' do |exception_class|
  before do
- client.should_receive(:client){
+ expect(client).to receive(:client){
  raise exception_class.new(Exception.new('oops'))
  }
  end

  describe "default handler" do
  it {expect{client.workers}.to_not raise_error}
- it {client.workers.should == 0}
+ it {expect(client.workers).to eq(0)}
  it {expect{client.workers = 2}.to_not raise_error}
  end

@@ -38,7 +38,7 @@ describe Autoscaler::HerokuScaler, :online => true do
  client.exception_handler = lambda {|exception| @caught = true}
  end

- it {client.workers; @caught.should be_true}
+ it {client.workers; expect(@caught).to be(true)}
  end
  end

@@ -46,4 +46,4 @@ describe Autoscaler::HerokuScaler, :online => true do
  it_behaves_like 'exception handler', Excon::Errors::SocketError
  it_behaves_like 'exception handler', Heroku::API::Errors::Error
  end
- end
+ end
data/spec/autoscaler/ignore_scheduled_and_retrying_spec.rb CHANGED
@@ -8,25 +8,25 @@ describe Autoscaler::IgnoreScheduledAndRetrying do
  it "passes through enqueued" do
  system = Struct.new(:enqueued).new(3)
  strategy = proc {|system, time| system.enqueued}
- cut.new(strategy).call(system, 0).should == 3
+ expect(cut.new(strategy).call(system, 0)).to eq 3
  end

  it "passes through workers" do
  system = Struct.new(:workers).new(3)
  strategy = proc {|system, time| system.workers}
- cut.new(strategy).call(system, 0).should == 3
+ expect(cut.new(strategy).call(system, 0)).to eq 3
  end

  it "ignores scheduled" do
  system = Struct.new(:scheduled).new(3)
  strategy = proc {|system, time| system.scheduled}
- cut.new(strategy).call(system, 0).should == 0
+ expect(cut.new(strategy).call(system, 0)).to eq 0
  end

  it "ignores retrying" do
  system = Struct.new(:retrying).new(3)
  strategy = proc {|system, time| system.retrying}
- cut.new(strategy).call(system, 0).should == 0
+ expect(cut.new(strategy).call(system, 0)).to eq 0
  end
  end

data/spec/autoscaler/linear_scaling_strategy_spec.rb CHANGED
@@ -8,72 +8,78 @@ describe Autoscaler::LinearScalingStrategy do
  it "deactivates with no work" do
  system = TestSystem.new(0)
  strategy = cut.new(1)
- strategy.call(system, 1).should == 0
+ expect(strategy.call(system, 1)).to eq 0
  end

  it "activates with some work" do
  system = TestSystem.new(1)
  strategy = cut.new(1)
- strategy.call(system, 1).should be > 0
+ expect(strategy.call(system, 1)).to be > 0
  end

  it "minimally scales with minimal work" do
  system = TestSystem.new(1)
  strategy = cut.new(2, 2)
- strategy.call(system, 1).should == 1
+ expect(strategy.call(system, 1)).to eq 1
  end

  it "maximally scales with too much work" do
  system = TestSystem.new(5)
  strategy = cut.new(2, 2)
- strategy.call(system, 1).should == 2
+ expect(strategy.call(system, 1)).to eq 2
  end

  it "proportionally scales with some work" do
  system = TestSystem.new(5)
  strategy = cut.new(5, 2)
- strategy.call(system, 1).should == 3
+ expect(strategy.call(system, 1)).to eq 3
  end

  it "doesn't scale unless minimum is met" do
  system = TestSystem.new(2)
  strategy = cut.new(10, 4, 0.5)
- strategy.call(system, 1).should == 0
+ expect(strategy.call(system, 1)).to eq 0
  end

  it "scales proportionally with a minimum" do
  system = TestSystem.new(3)
  strategy = cut.new(10, 4, 0.5)
- strategy.call(system, 1).should == 1
+ expect(strategy.call(system, 1)).to eq 1
  end

  it "scales maximally with a minimum" do
  system = TestSystem.new(25)
  strategy = cut.new(5, 4, 0.5)
- strategy.call(system, 1).should == 5
+ expect(strategy.call(system, 1)).to eq 5
  end

  it "scales proportionally with a minimum > 1" do
  system = TestSystem.new(12)
  strategy = cut.new(5, 4, 2)
- strategy.call(system, 1).should == 2
+ expect(strategy.call(system, 1)).to eq 2
  end

  it "scales maximally with a minimum factor > 1" do
  system = TestSystem.new(30)
  strategy = cut.new(5, 4, 2)
- strategy.call(system, 1).should == 5
+ expect(strategy.call(system, 1)).to eq 5
  end

- xit "doesn't scale down engaged workers" do
+ it "doesn't scale down engaged workers" do
  system = TestSystem.new(0, 2)
  strategy = cut.new(5, 4)
- strategy.call(system, 1).should == 2
+ expect(strategy.call(system, 1)).to eq 2
  end

- xit "doesn't scale above max workers even if engaged workers is greater" do
+ it "doesn't scale above max workers even if engaged workers is greater" do
  system = TestSystem.new(40, 6)
  strategy = cut.new(5, 4)
- strategy.call(system, 1).should == 5
+ expect(strategy.call(system, 1)).to eq 5
+ end
+
+ it "returns zero if requested capacity is zero" do
+ system = TestSystem.new(0, 0)
+ strategy = cut.new(0, 0)
+ expect(strategy.call(system, 5)).to eq 0
  end
  end
data/spec/autoscaler/sidekiq/activity_spec.rb CHANGED
@@ -16,19 +16,19 @@ describe Autoscaler::Sidekiq::Activity do
  activity.idle!('queue')
  other_process.working!('other_queue')
  end
- it {activity.should be_idle(['queue'])}
+ it {expect(activity).to be_idle(['queue'])}
  end

  it 'passed a connection pool' do
  activity = cut.new(5, @redis)
  activity.working!('queue')
- activity.should_not be_idle(['queue'])
+ expect(activity).to_not be_idle(['queue'])
  end

  it 'passed a plain connection' do
- connection = Redis.connect(:url => 'http://localhost:9736', :namespace => 'autoscaler')
+ connection = Redis.connect(:url => 'redis://localhost:9736', :namespace => 'autoscaler')
  activity = cut.new(5, connection)
  activity.working!('queue')
- activity.should_not be_idle(['queue'])
+ expect(activity).to_not be_idle(['queue'])
  end
  end
data/spec/autoscaler/sidekiq/celluloid_monitor_spec.rb CHANGED
@@ -16,7 +16,7 @@ describe Autoscaler::Sidekiq::CelluloidMonitor do
  system = TestSystem.new(0)
  manager = cut.new(scaler, lambda{|s,t| 0}, system)
  Timeout.timeout(1) { manager.wait_for_downscale(0.5) }
- scaler.workers.should == 0
+ expect(scaler.workers).to eq 0
  manager.terminate
  end

@@ -24,7 +24,7 @@ describe Autoscaler::Sidekiq::CelluloidMonitor do
  system = TestSystem.new(1)
  manager = cut.new(scaler, lambda{|s,t| 1}, system)
  expect {Timeout.timeout(1) { manager.wait_for_downscale(0.5) }}.to raise_error Timeout::Error
- scaler.workers.should == 1
+ expect(scaler.workers).to eq 1
  manager.terminate
  end

@@ -33,7 +33,7 @@ describe Autoscaler::Sidekiq::CelluloidMonitor do
  scaler = TestScaler.new(0)
  manager = cut.new(scaler, lambda{|s,t| 0}, system)
  Timeout.timeout(1) { manager.wait_for_downscale(0.5) }
- scaler.workers.should == 0
+ expect(scaler.workers).to eq 0
  manager.terminate
  end
  end
data/spec/autoscaler/sidekiq/client_spec.rb CHANGED
@@ -10,26 +10,26 @@ describe Autoscaler::Sidekiq::Client do
  describe 'call' do
  it 'scales' do
  client.call(Class, {}, 'queue') {}
- scaler.workers.should == 1
+ expect(scaler.workers).to eq 1
  end

  it 'scales with a redis pool' do
  client.call(Class, {}, 'queue', ::Sidekiq.method(:redis)) {}
- scaler.workers.should == 1
+ expect(scaler.workers).to eq 1
  end

- it('yields') {client.call(Class, {}, 'queue') {:foo}.should == :foo}
+ it('yields') {expect(client.call(Class, {}, 'queue') {:foo}).to eq :foo}
  end

  describe 'initial workers' do
  it 'works with default arguments' do
  client.set_initial_workers
- scaler.workers.should == 0
+ expect(scaler.workers).to eq 0
  end

  it 'scales when necessary' do
  client.set_initial_workers {|q| TestSystem.new(1)}
- scaler.workers.should == 1
+ expect(scaler.workers).to eq 1
  end
  end
  end
data/spec/autoscaler/sidekiq/entire_queue_system_spec.rb CHANGED
@@ -24,42 +24,42 @@ describe Autoscaler::Sidekiq::EntireQueueSystem do

  subject {cut.new}

- it {subject.queue_names.should == []}
- it {subject.workers.should == 0}
+ it {expect(subject.queue_names).to eq []}
+ it {expect(subject.workers).to eq 0}

  describe 'no queued work' do
  it "with no work" do
- subject.stub(:sidekiq_queues).and_return({'queue' => 0, 'another_queue' => 0})
- subject.queued.should == 0
+ allow(subject).to receive(:sidekiq_queues).and_return({'queue' => 0, 'another_queue' => 0})
+ expect(subject.queued).to eq 0
  end

  it "with no work and no queues" do
- subject.queued.should == 0
+ expect(subject.queued).to eq 0
  end

  it "with no scheduled work" do
- subject.scheduled.should == 0
+ expect(subject.scheduled).to eq 0
  end

  it "with no retry work" do
- subject.retrying.should == 0
+ expect(subject.retrying).to eq 0
  end
  end

  describe 'with queued work' do
  it "with enqueued work" do
- subject.stub(:sidekiq_queues).and_return({'queue' => 1})
- subject.queued.should == 1
+ allow(subject).to receive(:sidekiq_queues).and_return({'queue' => 1})
+ expect(subject.queued).to eq 1
  end

  it "with schedule work" do
  with_scheduled_work_in('queue')
- subject.scheduled.should == 1
+ expect(subject.scheduled).to eq 1
  end

  it "with retry work" do
  with_retry_work_in('queue')
- subject.retrying.should == 1
+ expect(subject.retrying).to eq 1
  end
  end
  end
data/spec/autoscaler/sidekiq/monitor_middleware_adapter_spec.rb CHANGED
@@ -11,6 +11,6 @@ describe Autoscaler::Sidekiq::MonitorMiddlewareAdapter do
  let(:scaler) {TestScaler.new(1)}
  let(:server) {cut.new(scaler, 0, ['queue'])}

- it('yields') {server.call(Object.new, {}, 'queue') {:foo}.should == :foo}
- it('yields with a redis pool') {server.call(Object.new, {}, 'queue', Sidekiq.method(:redis)) {:foo}.should == :foo}
+ it('yields') {expect(server.call(Object.new, {}, 'queue') {:foo}).to eq :foo}
+ it('yields with a redis pool') {expect(server.call(Object.new, {}, 'queue', Sidekiq.method(:redis)) {:foo}).to eq :foo}
  end
data/spec/autoscaler/sidekiq/sleep_wait_server_spec.rb CHANGED
@@ -12,34 +12,34 @@ describe Autoscaler::Sidekiq::SleepWaitServer do
  let(:server) {cut.new(scaler, 0, ['queue'])}

  shared_examples "a sleepwait server" do
- it "scales with no work" do
- server.stub(:pending_work?).and_return(false)
- when_run
- scaler.workers.should == 0
- end
+ it "scales with no work" do
+ allow(server).to receive(:pending_work?).and_return(false)
+ when_run
+ expect(scaler.workers).to eq 0
+ end

- it "does not scale with pending work" do
- server.stub(:pending_work?).and_return(true)
- when_run
- scaler.workers.should == 1
- end
+ it "does not scale with pending work" do
+ allow(server).to receive(:pending_work?).and_return(true)
+ when_run
+ expect(scaler.workers).to eq 1
+ end
  end

  describe "a middleware with no redis specified" do
- it_behaves_like "a sleepwait server" do
- def when_run
- server.call(Object.new, {}, 'queue') {}
- end
- end
+ it_behaves_like "a sleepwait server" do
+ def when_run
+ server.call(Object.new, {}, 'queue') {}
+ end
+ end
  end

  describe "a middleware with redis specified" do
- it_behaves_like "a sleepwait server" do
- def when_run
- server.call(Object.new, {}, 'queue', Sidekiq.method(:redis)) {}
- end
- end
+ it_behaves_like "a sleepwait server" do
+ def when_run
+ server.call(Object.new, {}, 'queue', Sidekiq.method(:redis)) {}
+ end
+ end
  end

- it('yields') {server.call(Object.new, {}, 'queue') {:foo}.should == :foo}
+ it('yields') {expect(server.call(Object.new, {}, 'queue') {:foo}).to eq :foo}
  end
data/spec/autoscaler/sidekiq/specified_queue_system_spec.rb CHANGED
@@ -24,40 +24,40 @@ describe Autoscaler::Sidekiq::SpecifiedQueueSystem do

  subject {cut.new(['queue'])}

- it {subject.queue_names.should == ['queue']}
- it {subject.workers.should == 0}
+ it {expect(subject.queue_names).to eq ['queue']}
+ it {expect(subject.workers).to eq 0}

  describe 'no queued work' do
  it "with no work" do
- subject.stub(:sidekiq_queues).and_return({'queue' => 0, 'another_queue' => 1})
- subject.queued.should == 0
+ allow(subject).to receive(:sidekiq_queues).and_return({'queue' => 0, 'another_queue' => 1})
+ expect(subject.queued).to eq 0
  end

  it "with scheduled work in another queue" do
  with_scheduled_work_in('another_queue')
- subject.scheduled.should == 0
+ expect(subject.scheduled).to eq 0
  end

  it "with retry work in another queue" do
  with_retry_work_in('another_queue')
- subject.retrying.should == 0
+ expect(subject.retrying).to eq 0
  end
  end

  describe 'with queued work' do
  it "with enqueued work" do
- subject.stub(:sidekiq_queues).and_return({'queue' => 1})
- subject.queued.should == 1
+ allow(subject).to receive(:sidekiq_queues).and_return({'queue' => 1})
+ expect(subject.queued).to eq 1
  end

  it "with schedule work" do
  with_scheduled_work_in('queue')
- subject.scheduled.should == 1
+ expect(subject.scheduled).to eq 1
  end

  it "with retry work" do
  with_retry_work_in('queue')
- subject.retrying.should == 1
+ expect(subject.retrying).to eq 1
  end
  end
  end
data/spec/spec_helper.rb CHANGED
@@ -1,10 +1,12 @@
+ require 'rspec/its'
  require 'sidekiq'
- REDIS = Sidekiq::RedisConnection.create(:url => 'http://localhost:9736', :namespace => 'autoscaler')
+ REDIS = Sidekiq::RedisConnection.create(:url => 'redis://localhost:9736', :namespace => 'autoscaler')

  RSpec.configure do |config|
  config.mock_with :rspec

- config.filter_run_excluding :online => true unless ENV['HEROKU_APP']
+ config.filter_run_excluding :api1 => true unless ENV['HEROKU_API_KEY']
+ config.filter_run_excluding :platform_api => true unless ENV['HEROKU_ACCESS_TOKEN']
  end

  class TestScaler
data/spec/test_system.rb CHANGED
@@ -8,4 +8,10 @@ class TestSystem
  def queued; @pending; end
  def scheduled; 0; end
  def retrying; 0; end
+ def total_work
+ queued + scheduled + retrying + workers
+ end
+ def any_work?
+ queued > 0 || scheduled > 0 || retrying > 0 || workers > 0
+ end
  end
metadata CHANGED
@@ -1,7 +1,7 @@
  --- !ruby/object:Gem::Specification
  name: autoscaler
  version: !ruby/object:Gem::Version
- version: 0.9.0
+ version: 0.10.0
  platform: ruby
  authors:
  - Justin Love
@@ -9,30 +9,52 @@ authors:
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2014-05-19 00:00:00.000000000 Z
+ date: 2015-10-27 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
  name: sidekiq
  requirement: !ruby/object:Gem::Requirement
  requirements:
- - - ">="
+ - - "~>"
  - !ruby/object:Gem::Version
- version: '2.7'
- - - "<"
+ version: 3.5.1
+ type: :runtime
+ prerelease: false
+ version_requirements: !ruby/object:Gem::Requirement
+ requirements:
+ - - "~>"
+ - !ruby/object:Gem::Version
+ version: 3.5.1
+ - !ruby/object:Gem::Dependency
+ name: celluloid
+ requirement: !ruby/object:Gem::Requirement
+ requirements:
+ - - "~>"
  - !ruby/object:Gem::Version
- version: '3.1'
+ version: 0.17.2
  type: :runtime
  prerelease: false
  version_requirements: !ruby/object:Gem::Requirement
+ requirements:
+ - - "~>"
+ - !ruby/object:Gem::Version
+ version: 0.17.2
+ - !ruby/object:Gem::Dependency
+ name: heroku-api
+ requirement: !ruby/object:Gem::Requirement
  requirements:
  - - ">="
  - !ruby/object:Gem::Version
- version: '2.7'
- - - "<"
+ version: '0'
+ type: :runtime
+ prerelease: false
+ version_requirements: !ruby/object:Gem::Requirement
+ requirements:
+ - - ">="
  - !ruby/object:Gem::Version
- version: '3.1'
+ version: '0'
  - !ruby/object:Gem::Dependency
- name: heroku-api
+ name: platform-api
  requirement: !ruby/object:Gem::Requirement
  requirements:
  - - ">="
@@ -73,6 +95,20 @@ dependencies:
  - - ">="
  - !ruby/object:Gem::Version
  version: '0'
+ - !ruby/object:Gem::Dependency
+ name: rspec-its
+ requirement: !ruby/object:Gem::Requirement
+ requirements:
+ - - ">="
+ - !ruby/object:Gem::Version
+ version: '0'
+ type: :development
+ prerelease: false
+ version_requirements: !ruby/object:Gem::Requirement
+ requirements:
+ - - ">="
+ - !ruby/object:Gem::Version
+ version: '0'
  - !ruby/object:Gem::Dependency
  name: guard-rspec
  requirement: !ruby/object:Gem::Requirement
@@ -119,6 +155,7 @@ files:
  - lib/autoscaler/counter_cache_memory.rb
  - lib/autoscaler/counter_cache_redis.rb
  - lib/autoscaler/delayed_shutdown.rb
+ - lib/autoscaler/heroku_platform_scaler.rb
  - lib/autoscaler/heroku_scaler.rb
  - lib/autoscaler/ignore_scheduled_and_retrying.rb
  - lib/autoscaler/linear_scaling_strategy.rb
@@ -137,6 +174,7 @@ files:
  - spec/autoscaler/counter_cache_memory_spec.rb
  - spec/autoscaler/counter_cache_redis_spec.rb
  - spec/autoscaler/delayed_shutdown_spec.rb
+ - spec/autoscaler/heroku_platform_scaler_spec.rb
  - spec/autoscaler/heroku_scaler_spec.rb
  - spec/autoscaler/ignore_scheduled_and_retrying_spec.rb
  - spec/autoscaler/linear_scaling_strategy_spec.rb
@@ -149,7 +187,7 @@ files:
  - spec/autoscaler/sidekiq/specified_queue_system_spec.rb
  - spec/spec_helper.rb
  - spec/test_system.rb
- homepage: ''
+ homepage: https://github.com/JustinLove/autoscaler
  licenses: []
  metadata: {}
  post_install_message:
@@ -168,7 +206,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
  version: '0'
  requirements: []
  rubyforge_project: autoscaler
- rubygems_version: 2.2.0
+ rubygems_version: 2.4.8
  signing_key:
  specification_version: 4
  summary: Start/stop Sidekiq workers on Heroku
@@ -178,6 +216,7 @@ test_files:
  - spec/autoscaler/counter_cache_memory_spec.rb
  - spec/autoscaler/counter_cache_redis_spec.rb
  - spec/autoscaler/delayed_shutdown_spec.rb
+ - spec/autoscaler/heroku_platform_scaler_spec.rb
  - spec/autoscaler/heroku_scaler_spec.rb
  - spec/autoscaler/ignore_scheduled_and_retrying_spec.rb
  - spec/autoscaler/linear_scaling_strategy_spec.rb