autoscaler 0.9.1

Files changed (42)
  1. checksums.yaml +7 -0
  2. data/CHANGELOG.md +81 -0
  3. data/Guardfile +12 -0
  4. data/README.md +81 -0
  5. data/examples/complex.rb +39 -0
  6. data/examples/simple.rb +28 -0
  7. data/lib/autoscaler.rb +5 -0
  8. data/lib/autoscaler/binary_scaling_strategy.rb +26 -0
  9. data/lib/autoscaler/counter_cache_memory.rb +35 -0
  10. data/lib/autoscaler/counter_cache_redis.rb +50 -0
  11. data/lib/autoscaler/delayed_shutdown.rb +44 -0
  12. data/lib/autoscaler/heroku_scaler.rb +81 -0
  13. data/lib/autoscaler/ignore_scheduled_and_retrying.rb +13 -0
  14. data/lib/autoscaler/linear_scaling_strategy.rb +39 -0
  15. data/lib/autoscaler/sidekiq.rb +11 -0
  16. data/lib/autoscaler/sidekiq/activity.rb +62 -0
  17. data/lib/autoscaler/sidekiq/celluloid_monitor.rb +67 -0
  18. data/lib/autoscaler/sidekiq/client.rb +50 -0
  19. data/lib/autoscaler/sidekiq/entire_queue_system.rb +41 -0
  20. data/lib/autoscaler/sidekiq/monitor_middleware_adapter.rb +46 -0
  21. data/lib/autoscaler/sidekiq/queue_system.rb +20 -0
  22. data/lib/autoscaler/sidekiq/sleep_wait_server.rb +51 -0
  23. data/lib/autoscaler/sidekiq/specified_queue_system.rb +48 -0
  24. data/lib/autoscaler/stub_scaler.rb +25 -0
  25. data/lib/autoscaler/version.rb +4 -0
  26. data/spec/autoscaler/binary_scaling_strategy_spec.rb +19 -0
  27. data/spec/autoscaler/counter_cache_memory_spec.rb +21 -0
  28. data/spec/autoscaler/counter_cache_redis_spec.rb +49 -0
  29. data/spec/autoscaler/delayed_shutdown_spec.rb +23 -0
  30. data/spec/autoscaler/heroku_scaler_spec.rb +49 -0
  31. data/spec/autoscaler/ignore_scheduled_and_retrying_spec.rb +33 -0
  32. data/spec/autoscaler/linear_scaling_strategy_spec.rb +85 -0
  33. data/spec/autoscaler/sidekiq/activity_spec.rb +34 -0
  34. data/spec/autoscaler/sidekiq/celluloid_monitor_spec.rb +39 -0
  35. data/spec/autoscaler/sidekiq/client_spec.rb +35 -0
  36. data/spec/autoscaler/sidekiq/entire_queue_system_spec.rb +65 -0
  37. data/spec/autoscaler/sidekiq/monitor_middleware_adapter_spec.rb +16 -0
  38. data/spec/autoscaler/sidekiq/sleep_wait_server_spec.rb +45 -0
  39. data/spec/autoscaler/sidekiq/specified_queue_system_spec.rb +63 -0
  40. data/spec/spec_helper.rb +16 -0
  41. data/spec/test_system.rb +11 -0
  42. metadata +187 -0
checksums.yaml ADDED
@@ -0,0 +1,7 @@
+ ---
+ SHA1:
+   metadata.gz: 116541fd00550df7c896d491b0861293d2bd6766
+   data.tar.gz: d7498ed8d5eeb0e2c483c7b70a92b575a6c7cea3
+ SHA512:
+   metadata.gz: 7d9085be6c0eb40885b296cbe8d0a307f0a244b6278eb54f5bf366be865dfd8bb22da8a335d20a6802d90bf0b78115d2c6c627994973f5a737372fc8d877c9ee
+   data.tar.gz: d15705c79a8966dbf7439166fb3752dd5470298d90e236692a6c569f7ce9031b0214a477c9cfcb2d9fa7433d3315bf6c2268c4f07a064f52b964fd7277391c1f
data/CHANGELOG.md ADDED
@@ -0,0 +1,81 @@
+ # Changelog
+
+ ## 0.10.0
+ - Require Sidekiq 3.
+ - Linear Scaling Strategy will not scale down past the number of active workers. Assumes a 1-1 Sidekiq process/dyno mapping.
+ - QueueSystem#workers returns the number of engaged Sidekiq processes.
+
+ ## 0.9.0
+
+ - CounterCacheRedis.new now takes a third parameter `worker_type`, a string used in the Redis cache key. Allows caching counts for various types of workers, not just `worker`.
+ - Support for Sidekiq 3.0
+ - Strategy wrapper to ignore the scheduled and retrying queues. Usage:
+   ``new_strategy = IgnoreScheduledAndRetrying.new(my_old_strategy)``
+ - LinearScalingStrategy now accepts a minimum amount of work (as a percentage of worker capacity) required to begin scaling up. E.g. `LinearScalingStrategy.new(10, 4, 0.5)` will scale to one worker after 4 * 0.5 = 2 jobs are enqueued, and to a maximum of 10 workers at 10 * 4 jobs. Old behavior is preserved with a default value of 0.
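The threshold arithmetic described above can be sketched as a standalone function. This is an approximation for illustration, not the gem's actual `LinearScalingStrategy` implementation:

```ruby
# Hypothetical sketch of the linear scaling arithmetic: ignore work below
# capacity * min_factor, then one worker per `capacity` jobs, capped at
# `max_workers`. Not the gem's code.
def linear_target(queued, max_workers, capacity, min_factor = 0)
  return 0 if queued < (capacity * min_factor).ceil
  [(queued.to_f / capacity).ceil, max_workers].min
end

linear_target(1, 10, 4, 0.5)   # => 0  (below the 4 * 0.5 = 2 job threshold)
linear_target(2, 10, 4, 0.5)   # => 1  (threshold reached, one worker)
linear_target(40, 10, 4, 0.5)  # => 10 (capped at max_workers)
```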
+
+ ## 0.8.0
+
+ - Extracted caching of Heroku worker counts and added an experimental Redis cache:
+   ``scaler.counter_cache = Autoscaler::CounterCacheRedis.new(Sidekiq.method(:redis))``
+ - Now rescues Heroku::API::Errors in addition to Excon::Errors
+
+ ## 0.7.0
+
+ - Added Autoscaler::LinearScalingStrategy
+ - EntireQueueSystem#queued always returns an integer
+
+ ## 0.6.0
+
+ - Excon errors from the Heroku API are caught by default. See `HerokuScaler#exception_handler` to override the behavior.
+ - Client-side scaling now occurs after enqueuing the job; previously it was before.
+
+ ## 0.5.0
+
+ - Experimental: `Client#set_initial_workers` to start workers on main process startup; typically:
+   ``Autoscaler::Sidekiq::Client.add_to_chain(chain, 'default' => heroku).set_initial_workers``
+ - Ensure that the timeout is documented as being in seconds
+ - Convert gemspec to wildcard file selection
+
+ ## 0.4.1
+
+ - Missing file from gemspec
+
+ ## 0.4.0
+
+ - Experimental: the default scaling logic is contained in BinaryScalingStrategy. A strategy object can be passed instead of a timeout to the server middleware.
+
+ ## 0.3.0
+
+ - Downscale method changed from busy-waiting workers to a separate monitor process
+ - Minimum Sidekiq version raised to 2.7 to take advantage of the Worker API
+ - Internal refactoring
+ - Autoscaler::StubScaler may be used for local testing
+
+ ## 0.2.1
+
+ - Separate background activity flags to avoid crosstalk between processes
+
+ ## 0.2.0
+
+ - Raise minimum Sidekiq version to 2.6.1 to take advantage of the Stats API
+ - Inspect the scheduled and retry sets to see if they match `specified_queues`
+ - Testing: refactor server middleware tests
+
+ ## 0.1.0
+
+ - The `retry` and `scheduled` queues are now considered for shutdown
+ - Testing: Guard starts up an isolated Redis instance
+
+ ## 0.0.3
+
+ - Typo correction
+
+ ## 0.0.2
+
+ - Loosen Sidekiq version dependency
+ - Add changelog
+ - Add changelog, readme, and examples to gem files list
data/Guardfile ADDED
@@ -0,0 +1,12 @@
+ guard 'process', :name => 'redis', :command => 'redis-server spec/redis_test.conf' do
+   watch('spec/redis_test.conf')
+ end
+
+ tag = "--tag #{ENV['TAG']}" if ENV['TAG']
+ example = "--example '#{ENV['EXAMPLE']}'" if ENV['EXAMPLE']
+ guard :rspec, :cmd => "rspec --color --format d #{tag} #{example}" do
+   watch(%r{^spec/.+_spec\.rb$})
+   watch(%r{^lib/(.+).rb$}) { |m| "spec/#{m[1]}_spec.rb" }
+   watch('spec/spec_helper.rb') { "spec" }
+ end
+
data/README.md ADDED
@@ -0,0 +1,81 @@
+ # Sidekiq Heroku Autoscaler
+
+ [Sidekiq](https://github.com/mperham/sidekiq) performs background jobs. While its threading model allows it to scale more easily than worker-per-process background systems, people running test or lightly loaded systems on [Heroku](http://www.heroku.com/) still want to scale down to zero workers to avoid racking up charges.
+
+ ## Requirements
+
+ Tested on Ruby 1.9.2 and the Heroku Cedar stack.
+
+ ## Installation
+
+     gem install autoscaler
+
+ ## Getting Started
+
+ This gem uses the [Heroku-Api](https://github.com/heroku/heroku.rb) gem, which requires an API key from Heroku. It also needs the Heroku app name. By default, these are specified through environment variables. You can also pass them to HerokuScaler explicitly.
+
+     HEROKU_API_KEY=.....
+     HEROKU_APP=....
+
+ Install the middleware in your `Sidekiq.configure_` blocks:
+
+     require 'autoscaler/sidekiq'
+     require 'autoscaler/heroku_scaler'
+
+     Sidekiq.configure_client do |config|
+       config.client_middleware do |chain|
+         chain.add Autoscaler::Sidekiq::Client, 'default' => Autoscaler::HerokuScaler.new
+       end
+     end
+
+     Sidekiq.configure_server do |config|
+       config.server_middleware do |chain|
+         chain.add(Autoscaler::Sidekiq::Server, Autoscaler::HerokuScaler.new, 60) # 60 second timeout
+       end
+     end
+
+ ## Limits and Challenges
+
+ - HerokuScaler includes an attempt at a current-worker cache that may be an overcomplication, and doesn't work very well on the server.
+ - Multiple scale-down loops may be started, particularly if there are multiple jobs queued when the server comes up. Heroku seems to handle multiple scale-down commands well.
+ - The scale-down monitor is triggered on job completion (and server middleware is only run around jobs), so if the server never processes any jobs, it won't turn off.
+ - The retry and schedule lists are considered: if you schedule a long-running task, the process will not scale down.
+ - If background jobs trigger jobs in other scaled processes, note that you'll need `config.client_middleware` in your `Sidekiq.configure_server` block in order to scale up.
+ - Exceptions while calling the Heroku API are caught and printed by default. See `HerokuScaler#exception_handler` to override.
+
+ ## Experimental
+
+ ### Strategies
+
+ You can pass a scaling strategy object instead of the timeout to the server middleware. The object (or lambda) should respond to `#call(system, event_idle_time)` and return the desired number of workers. See `lib/autoscaler/binary_scaling_strategy.rb` for an example.
+
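For illustration, a strategy can be as simple as a lambda. The sketch below is hypothetical (not part of the gem): it scales one worker per ten queued jobs, capped at five, and assumes `system` is the QueueSystem duck type with a `#queued` reader:

```ruby
# Hypothetical strategy: one worker per 10 queued jobs, at most 5.
# `system` is assumed to respond to #queued (see QueueSystem).
capped_strategy = lambda do |system, event_idle_time|
  [(system.queued / 10.0).ceil, 5].min
end

# Quick check with a stand-in system object:
FakeSystem = Struct.new(:queued)
capped_strategy.call(FakeSystem.new(25), 0)  # => 3
capped_strategy.call(FakeSystem.new(0), 0)   # => 0
```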
+ ### Initial Workers
+
+ Use `Client#set_initial_workers` to start workers on main process startup; typically:
+
+     Autoscaler::Sidekiq::Client.add_to_chain(chain, 'default' => heroku).set_initial_workers
+
+ ### Worker caching
+
+     scaler.counter_cache = Autoscaler::CounterCacheRedis.new(Sidekiq.method(:redis))
+
+ ## Tests
+
+ The project is set up to run RSpec with Guard. It expects a Redis instance on a custom port, which is started by the Guardfile.
+
+ The HerokuScaler is not tested by default because it makes live API requests. Specify `HEROKU_APP` and `HEROKU_API_KEY` on the command line, and then watch your app's logs.
+
+     HEROKU_APP=... HEROKU_API_KEY=... guard
+     heroku logs --app ...
+
+ ## Authors
+
+ Justin Love, [@wondible](http://twitter.com/wondible), [https://github.com/JustinLove](https://github.com/JustinLove)
+
+ Ported to Heroku-Api by Fix Peña, [https://github.com/fixr](https://github.com/fixr)
+
+ Retry/schedule sets by Matt Anderson [https://github.com/tonkapark](https://github.com/tonkapark) and Thibaud Guillaume-Gentil [https://github.com/jilion](https://github.com/jilion)
+
+ ## Licence
+
+ Released under the [MIT license](http://www.opensource.org/licenses/mit-license.php).
data/examples/complex.rb ADDED
@@ -0,0 +1,39 @@
+ require 'sidekiq'
+ require 'autoscaler/sidekiq'
+ require 'autoscaler/heroku_scaler'
+
+ heroku = nil
+ if ENV['HEROKU_APP']
+   heroku = {}
+   scaleable = %w[default import] - (ENV['ALWAYS'] || '').split(' ')
+   scaleable.each do |queue|
+     heroku[queue] = Autoscaler::HerokuScaler.new(
+       queue,
+       ENV['HEROKU_API_KEY'],
+       ENV['HEROKU_APP'])
+   end
+ end
+
+ Sidekiq.configure_client do |config|
+   if heroku
+     config.client_middleware do |chain|
+       chain.add Autoscaler::Sidekiq::Client, heroku
+     end
+   end
+ end
+
+ # define HEROKU_PROCESS in the Procfile:
+ #
+ # default: env HEROKU_PROCESS=default bundle exec sidekiq -r ./background/boot.rb
+ # import: env HEROKU_PROCESS=import bundle exec sidekiq -q import -c 1 -r ./background/boot.rb
+
+ Sidekiq.configure_server do |config|
+   config.server_middleware do |chain|
+     if heroku && ENV['HEROKU_PROCESS'] && heroku[ENV['HEROKU_PROCESS']]
+       p "Setting up auto-scaledown"
+       chain.add(Autoscaler::Sidekiq::Server, heroku[ENV['HEROKU_PROCESS']], 60, [ENV['HEROKU_PROCESS']]) # 60 second timeout
+     else
+       p "Not scaleable"
+     end
+   end
+ end
data/examples/simple.rb ADDED
@@ -0,0 +1,28 @@
+ require 'sidekiq'
+ require 'autoscaler/sidekiq'
+ require 'autoscaler/heroku_scaler'
+
+ heroku = nil
+ if ENV['HEROKU_APP']
+   heroku = Autoscaler::HerokuScaler.new
+   #heroku.exception_handler = lambda {|exception| MyApp.logger.error(exception)}
+ end
+
+ Sidekiq.configure_client do |config|
+   if heroku
+     config.client_middleware do |chain|
+       chain.add Autoscaler::Sidekiq::Client, 'default' => heroku
+     end
+   end
+ end
+
+ Sidekiq.configure_server do |config|
+   config.server_middleware do |chain|
+     if heroku
+       p "Setting up auto-scaledown"
+       chain.add(Autoscaler::Sidekiq::Server, heroku, 60) # 60 second timeout
+     else
+       p "Not scaleable"
+     end
+   end
+ end
data/lib/autoscaler.rb ADDED
@@ -0,0 +1,5 @@
+ require "autoscaler/version"
+
+ # Namespace module; no code
+ module Autoscaler
+ end
data/lib/autoscaler/binary_scaling_strategy.rb ADDED
@@ -0,0 +1,26 @@
+ module Autoscaler
+   # Strategies determine the target number of workers.
+   # The default strategy runs a single worker whenever there is any work, and shuts it down otherwise.
+   class BinaryScalingStrategy
+     # @param [integer] active_workers number of workers when in the active state.
+     def initialize(active_workers = 1)
+       @active_workers = active_workers
+     end
+
+     # @param [QueueSystem] system interface to the queuing system
+     # @param [Numeric] event_idle_time number of seconds since a job related event
+     # @return [Integer] target number of workers
+     def call(system, event_idle_time)
+       if active?(system)
+         @active_workers
+       else
+         0
+       end
+     end
+
+     private
+
+     def active?(system)
+       system.queued > 0 || system.scheduled > 0 || system.retrying > 0 || system.workers > 0
+     end
+   end
+ end
data/lib/autoscaler/counter_cache_memory.rb ADDED
@@ -0,0 +1,35 @@
+ module Autoscaler
+   # Implements a cache for the number of Heroku workers currently up.
+   # Values are stored for short periods in the object.
+   class CounterCacheMemory
+     # @param [Numeric] timeout number of seconds to allow before expiration
+     def initialize(timeout = 5)
+       @timeout = timeout
+       @counter = 0
+       @valid_until = Time.now - 1
+     end
+
+     # @param [Numeric] value new counter value
+     def counter=(value)
+       @valid_until = Time.now + @timeout
+       @counter = value
+     end
+
+     # Raised when no block is provided to #counter
+     class Expired < ArgumentError; end
+
+     # Current value. Uses the Hash#fetch API: pass a block to use in place of expired values, or an exception is raised.
+     def counter
+       return @counter if valid?
+       return yield if block_given?
+       raise Expired
+     end
+
+     private
+
+     attr_reader :timeout
+
+     def valid?
+       Time.now < @valid_until
+     end
+   end
+ end
data/lib/autoscaler/counter_cache_redis.rb ADDED
@@ -0,0 +1,50 @@
+ module Autoscaler
+   # Implements a cache for the number of Heroku workers currently up.
+   # This permits some web/worker communication, which makes longer timeouts practical.
+   class CounterCacheRedis
+     # @param [Proc, ConnectionPool, Redis client] redis redis interface
+     #   Proc: e.g. Sidekiq.method(:redis)
+     #   ConnectionPool: e.g. what you pass to Sidekiq.redis=
+     #   Redis client: e.g. Redis.connect
+     # @param [Numeric] timeout number of seconds to allow before expiration
+     # @param [String] worker_type the name of the worker type, for cache keys
+     def initialize(redis, timeout = 5 * 60, worker_type = 'worker')
+       @redis = redis
+       @timeout = timeout
+       @worker_type = worker_type
+     end
+
+     # @param [Numeric] value new counter value
+     def counter=(value)
+       redis {|c| c.setex(key, @timeout, value)}
+     end
+
+     # Raised when no block is provided to #counter
+     class Expired < ArgumentError; end
+
+     # Current value. Uses the Hash#fetch API: pass a block to use in place of expired values, or an exception is raised.
+     def counter
+       value = redis {|c| c.get(key)}
+       return value.to_i if value
+       return yield if block_given?
+       raise Expired
+     end
+
+     private
+
+     attr_reader :timeout
+
+     def key
+       ['autoscaler', 'workers', @worker_type] * ':'
+     end
+
+     def redis(&block)
+       if @redis.respond_to?(:call)
+         @redis.call(&block)
+       elsif @redis.respond_to?(:with)
+         @redis.with(&block)
+       else
+         block.call(@redis)
+       end
+     end
+   end
+ end
data/lib/autoscaler/delayed_shutdown.rb ADDED
@@ -0,0 +1,44 @@
+ module Autoscaler
+   # A strategy wrapper that keeps the last worker up for a minimum amount of time.
+   class DelayedShutdown
+     # @param [ScalingStrategy] strategy object that makes most decisions
+     # @param [Numeric] timeout number of seconds to stay up after the base strategy says zero
+     def initialize(strategy, timeout)
+       @strategy = strategy
+       @timeout = timeout
+       active_now!
+     end
+
+     # @param [QueueSystem] system interface to the queuing system
+     # @param [Numeric] event_idle_time number of seconds since a job related event
+     # @return [Integer] target number of workers
+     def call(system, event_idle_time)
+       target_workers = strategy.call(system, event_idle_time)
+       if target_workers > 0
+         active_now!
+         target_workers
+       elsif time_left?(event_idle_time)
+         1
+       else
+         0
+       end
+     end
+
+     private
+
+     attr_reader :strategy
+     attr_reader :timeout
+
+     def active_now!
+       @activity = Time.now
+     end
+
+     def level_idle_time
+       Time.now - @activity
+     end
+
+     def time_left?(event_idle_time)
+       [event_idle_time, level_idle_time].min < timeout
+     end
+   end
+ end
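The decision rule in `#call` can be sketched as a pure function (a simplification for illustration; the real class tracks `level_idle_time` internally from the last time the base strategy returned a nonzero target):

```ruby
# Sketch of DelayedShutdown's rule: defer to the base strategy while it
# wants workers; otherwise keep one worker alive until both the event-idle
# clock and the level-idle clock exceed the timeout. Stand-in code.
def delayed_target(base_target, event_idle_time, level_idle_time, timeout)
  return base_target if base_target > 0
  [event_idle_time, level_idle_time].min < timeout ? 1 : 0
end

delayed_target(3, 0, 0, 60)     # => 3  (base strategy wins while active)
delayed_target(0, 30, 10, 60)   # => 1  (still inside the grace period)
delayed_target(0, 120, 90, 60)  # => 0  (both clocks past the timeout)
```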
data/lib/autoscaler/heroku_scaler.rb ADDED
@@ -0,0 +1,81 @@
+ require 'heroku-api'
+ require 'autoscaler/counter_cache_memory'
+
+ module Autoscaler
+   # Wraps the Heroku API to provide just the interface that we need for scaling.
+   class HerokuScaler
+     # @param [String] type process type this scaler controls
+     # @param [String] key Heroku API key
+     # @param [String] app Heroku app name
+     def initialize(
+         type = 'worker',
+         key = ENV['HEROKU_API_KEY'],
+         app = ENV['HEROKU_APP'])
+       @client = Heroku::API.new(:api_key => key)
+       @type = type
+       @app = app
+       @workers = CounterCacheMemory.new
+     end
+
+     attr_reader :app
+     attr_reader :type
+
+     # Read the current worker count (value may be cached)
+     # @return [Numeric] number of workers
+     def workers
+       @workers.counter {@workers.counter = heroku_get_workers}
+     end
+
+     # Set the number of workers (no-op if the count is unchanged)
+     # @param [Numeric] n number of workers
+     def workers=(n)
+       unknown = false
+       current = @workers.counter {unknown = true; 1}
+       if n != current || unknown
+         p "Scaling #{type} to #{n}"
+         heroku_set_workers(n)
+         @workers.counter = n
+       end
+     end
+
+     # Callable object which responds to exceptions during API calls
+     # @example
+     #   heroku.exception_handler = lambda {|exception| MyApp.logger.error(exception)}
+     #   heroku.exception_handler = lambda {|exception| raise}
+     #   # default
+     #   lambda {|exception|
+     #     p exception
+     #     puts exception.backtrace
+     #   }
+     attr_writer :exception_handler
+
+     # Object which supports #counter and #counter=
+     # Defaults to CounterCacheMemory
+     def counter_cache=(cache)
+       @workers = cache
+     end
+
+     private
+
+     attr_reader :client
+
+     def heroku_get_workers
+       client.get_ps(app).body.count {|ps| ps['process'].match(/#{type}\.\d?/) }
+     rescue Excon::Errors::Error, Heroku::API::Errors::Error => e
+       exception_handler.call(e)
+       0
+     end
+
+     def heroku_set_workers(n)
+       client.post_ps_scale(app, type, n)
+     rescue Excon::Errors::Error, Heroku::API::Errors::Error => e
+       exception_handler.call(e)
+     end
+
+     def exception_handler
+       @exception_handler ||= lambda {|exception|
+         p exception
+         puts exception.backtrace
+       }
+     end
+   end
+ end