opzworks 0.3.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml ADDED
@@ -0,0 +1,7 @@
+ ---
+ SHA1:
+   metadata.gz: 9a7a099fc80bd069a428e204ed8b8ae80e970405
+   data.tar.gz: 902044b0b2bde27c5a8bbd45fcf242446c3be66a
+ SHA512:
+   metadata.gz: d9fae78a2acc179277714c933268b76adf7f4e671e37a0971a3af61745972887b0806911021aac80e77efd8856205cb99afe6e0dfd229560f21bc8ae652f38d3
+   data.tar.gz: f7014b5ca89eeb1e2b881bc7aee17f6020ba37c8679681bc9865a94f894f0dea1d1a8f82a9f2753cff7261d6b01e1e2f22671b01f3da928f4eae8c10d6ba479b
data/.gitignore ADDED
@@ -0,0 +1,17 @@
+ *.gem
+ *.rbc
+ .bundle
+ .config
+ .yardoc
+ Gemfile.lock
+ InstalledFiles
+ _yardoc
+ coverage
+ doc/
+ lib/bundler/man
+ pkg
+ rdoc
+ spec/reports
+ test/tmp
+ test/version_tmp
+ tmp
data/CHANGELOG.md ADDED
@@ -0,0 +1,30 @@
+ changelog
+ =========
+
+ 0.3.0
+ -----
+ * attempt to clone opsworks-${project} repo if not found
+
+ 0.2.4
+ -----
+ * provide ability to change the amount of diff context via --context {int} switch to json command
+
+ 0.2.3
+ -----
+ * provide --private option for ssh to allow use of private ips (defaults to public)
+
+ 0.2.2
+ -----
+ * big speed improvement for ssh by removing unnecessary aws calls
+
+ 0.2.1
+ -----
+ * documentation enhancements
+
+ 0.2.0
+ -----
+ * elastic command support
+
+ 0.1.0
+ -----
+ * initial release with berks/json/ssh support
data/Gemfile ADDED
@@ -0,0 +1,4 @@
+ source 'https://rubygems.org'
+
+ # Specify your gem's dependencies in opzworks.gemspec
+ gemspec
data/LICENSE.txt ADDED
@@ -0,0 +1,22 @@
+ Copyright (c) 2013 Adam Lindberg
+
+ MIT License
+
+ Permission is hereby granted, free of charge, to any person obtaining
+ a copy of this software and associated documentation files (the
+ "Software"), to deal in the Software without restriction, including
+ without limitation the rights to use, copy, modify, merge, publish,
+ distribute, sublicense, and/or sell copies of the Software, and to
+ permit persons to whom the Software is furnished to do so, subject to
+ the following conditions:
+
+ The above copyright notice and this permission notice shall be
+ included in all copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
+ LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
+ OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
+ WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
data/README.md ADDED
@@ -0,0 +1,212 @@
+ # OpzWorks CLI
+
+ Command line interface for managing AWS OpsWorks chef cookbooks and stack json, as well
+ as other OpsWorks centric tasks such as generating ssh configs for OpsWorks instances.
+
+ ## Build Status
+
+ [![Circle CI](https://circleci.com/gh/mapzen/opzworks.svg?style=svg)](https://circleci.com/gh/mapzen/opzworks)
+
+ ## Third party requirements:
+
+ Aside from a recent version of ruby:
+
+ * git
+ * [ChefDK](https://downloads.chef.io/chef-dk/)
+
+ ## Installation
+
+ Install for use on the command line (requires ruby and rubygems): `gem install opzworks`
+
+ Then run `opzworks --help`
+
+ To use the gem in a project, add `gem 'opzworks'` to your Gemfile, and then execute: `bundle`
+
+ To build locally from this repository: `rake install`
+
+ ## Commands
+
+ Run `opzworks` with one of the following commands:
+
+ #### ssh
+
+ Generate and update SSH configuration files.
+
+ Host names are based on the stack naming convention, `project_name::env::region`. The default
+ is to use public instance IPs (or the elastic IP, if one is assigned). Passing the `--private` option
+ will instead use instance private IPs.
+
+ For example, if we have a host 'api1' in the stack apiaxle::prod::us-east, the
+ resultant hostname will be `api1-apiaxle-prod-us-east`.
+
+ By default, `opzworks ssh` will iterate over all stacks. If you wish to restrict the stacks
+ it searches, simply pass the stack name (or a partial match) as an argument:
+
+ `opzworks ssh myproject::prod`
+
+ If you want to automatically scrape all your stacks to populate your ssh config, and
+ you don't want to use the `--update` flag (which will overwrite the entire file contents),
+ you could do something like:
+
+ * add a crontab entry similar to: `0 * * * * /bin/bash -l -c /path/to/opzworks-ssh.sh`
+ * create `/path/to/opzworks-ssh.sh`:
+
+ ```bash
+ # this script reads .ssh/config, drops anything after the matched line,
+ # then generates a list of opsworks hosts and appends them to the file.
+ gsed -i '/OPSWORKS_CRON_LINE_MATCH/q' ~/.ssh/config
+ opzworks ssh >>~/.ssh/config
+ ```
+
+ Note this example assumes the use of a GNU sed-like utility, which on OS X means
+ installing GNU sed (`brew install gnu-sed` if you're using homebrew). On Linux, simply
+ change `gsed` to `sed`.
+
+ Add the following line to the bottom of your existing ~/.ssh/config:
+
+ `# OPSWORKS_CRON_LINE_MATCH`
+
+ #### elastic
+
+ Perform [start|stop|bounce|rolling] operations on an Elastic cluster.
+
+ The host from which this command is run will need access to the target
+ systems via their private IPs, and assumes port 9200 is open and available.
+
+ This is a very rough implementation!
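+
+ For example, to perform a rolling restart of a stack named elastic::prod::us-east:
+
+ `opzworks elastic elastic::prod::us-east --rolling`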
+
+ #### json
+
+ Update stack custom JSON. The command shows a diff of the proposed changes; the amount of diff context can be adjusted with the `--context {int}` switch.
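+
+ For example (the stack argument is assumed to be passed the same way as for `berks`):
+
+ `opzworks json elastic::prod::us-east --context 10`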
+
+ #### berks
+
+ Build the berkshelf for a stack, upload the tarball to S3, trigger `update_custom_cookbooks` on the stack.
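+
+ For example, to build and upload the cookbooks for a stack without triggering `update_custom_cookbooks`:
+
+ `opzworks berks elastic::prod::us-east --no-update`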
+
+ ## Configuration
+
+ The gem reads information from `~/.aws/config`, or from the file referenced by
+ the `AWS_CONFIG_FILE` environment variable. It should already look something like this:
+
+     [default]
+     aws_access_key_id = ilmiochiaveID
+     aws_secret_access_key = ilmiochiavesegreto
+     region = us-east-1
+     output = json
+
+ If you want the gem to read from a profile other than 'default', you can do so
+ by exporting the `AWS_PROFILE` environment variable, set to the name of the profile
+ you want to use from the config file.
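+
+ For example, if you have a `[production]` profile defined:
+
+ `export AWS_PROFILE=production`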
+
+ Add the following section to `~/.aws/config`:
+
+     [opzworks]
+     ssh-user-name = <MY SSH USER NAME>
+     berks-repository-path = <PATH TO OPSWORKS BERKSHELF REPOSITORIES>
+     berks-github-org = <GITHUB ORG THAT YOUR OPSWORKS REPOSITORIES EXIST UNDER>
+     berks-s3-bucket = <AN EXISTING S3 BUCKET>
+
+ The `ssh-user-name` value should be set to the username you want to use when
+ logging in remotely, most probably the user name from your _My Settings_ page
+ in OpsWorks.
+
+ The `berks-repository-path` should point to a base directory in which your opsworks
+ git repositories for each stack will live.
+
+ The `berks-s3-bucket` will default to 'opzworks' if not set. You need to create the
+ bucket manually (e.g. `aws s3 mb s3://opsworks-cookbook-bucket`).
+
+ The `berks-github-org` setting is used if you try to run `berks` or `json` on a stack and
+ the local opsworks-${project} repo isn't found. In this event, the code will attempt to clone
+ the repo into `berks-repository-path` and continue.
+
+ Additional options are:
+
+ `berks-base-path`, which is the temporary base directory where the berkshelf will be
+ built. Defaults to /tmp.
+
+ `berks-tarball-name`, which is the name of the tarball that will be uploaded to S3. Defaults to cookbooks.tgz.
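+
+ A filled-in example (the values here are purely illustrative):
+
+     [opzworks]
+     ssh-user-name = jdoe
+     berks-repository-path = /home/jdoe/src/opsworks
+     berks-github-org = mapzen
+     berks-s3-bucket = opzworks
+     berks-base-path = /tmp
+     berks-tarball-name = cookbooks.tgz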
+
+ ## Setup Conventions/Workflow for Berks/JSON Commands
+
+ ![workflow](img/flow.png)
+
+ This gem makes a number of assumptions in order to enforce a specific workflow. First among them is
+ the OpsWorks stack naming convention, which will need to adhere to the following format:
+
+     PROJECT::ENV::REGION
+
+ If PROJECT consists of multiple words, they should be joined with underscores, e.g.
+
+     my_awesome_rails_app::prod::us-east
+
+ So for example, if you have an Elastic cluster in dev and prod in us-east, and dev in us-west:
+
+     elastic::dev::us-east
+     elastic::dev::us-west
+     elastic::prod::us-east
+
+ The other convention you must follow is berkshelf management. In this context, that means a git
+ repository that follows this naming scheme:
+
+     opsworks-project
+
+ Inside that repository, you will have branches that match each of your environments and regions.
+
+ So in our Elastic example, you would have the following setup:
+
+ * a git repository called opsworks-elastic
+ * branches in that repository called dev-us-east, dev-us-west and prod-us-east
+
+ In each of those branches, you should have the following:
+
+ * Berksfile
+ * stack.json (if you want to maintain the stack json using the `opzworks json` utility)
+
+ The Berksfile will look similar to the following. If you're familiar with Berkshelf, there's nothing
+ new here:
+
+ ```ruby
+ source 'https://api.berkshelf.com'
+
+ # opsworks
+ cookbook 'apache2' , github: 'aws/opsworks-cookbooks' , branch: 'release-chef-11.10' , rel: 'apache2'
+
+ # external
+ #
+ cookbook 'lvm', '= 1.0.8'
+ cookbook 'sensu', '= 2.10.0'
+ cookbook 'runit', '= 1.5.10'
+ cookbook 'java', '= 1.29.0'
+ cookbook 'nodejs', '= 2.1.0'
+ cookbook 'elasticsearch', '= 0.3.13'
+ cookbook 'chef_handler', '= 1.1.6'
+
+ # mapzen wrappers
+ #
+ cookbook 'mapzen_sensu_clients', git: 'git@github.com:mapzen/chef-mapzen_sensu_clients', tag: '0.12.0'
+ cookbook 'mapzen_elasticsearch', git: 'git@github.com:mapzen/chef-mapzen_elasticsearch', tag: '0.16.3'
+ cookbook 'mapzen_logstash', git: 'git@github.com:mapzen/chef-mapzen_logstash', tag: '0.13.1'
+ cookbook 'mapzen_graphite', git: 'git@github.com:mapzen/chef-mapzen_graphite', tag: '0.6.0'
+ cookbook 'mapzen_pelias', git: 'git@github.com:mapzen/chef-mapzen_pelias', tag: '0.34.2'
+ ```
+
+ If we placed that Berksfile in opsworks-elastic, in the prod-us-east branch, we would run `opzworks berks elastic::prod::us-east`, which would do the following:
+
+ * build the berkshelf locally
+ * push the resultant cookbook tarball to: s3://opzworks/elastic-prod-us-east/cookbooks.tgz
+ * run `update_custom_cookbooks` on the stack (unless you pass the `--no-update` flag)
+
+ Your stack should be configured to use a berkshelf from an S3 archive. The URL will look like the following:
+
+     https://s3.amazonaws.com/opzworks/elastic-prod-us-east/cookbooks.tgz
+
+ You'll need to set up an IAM user or users with permission to access the location.
+
+ ## Contributing
+
+ 1. Fork it
+ 2. Create your feature branch (`git checkout -b my-new-feature`)
+ 3. Commit your changes (`git commit -am 'Add some feature'`)
+ 4. Push to the branch (`git push origin my-new-feature`)
+ 5. Create a new Pull Request
data/Rakefile ADDED
@@ -0,0 +1,11 @@
+ require 'bundler/gem_tasks'
+
+ namespace :test do
+   desc 'Run tests'
+   task :syntax do
+     puts 'Running rubocop'
+     sh 'rubocop .'
+   end
+ end
+
+ task default: 'test:syntax'
data/bin/opzworks ADDED
@@ -0,0 +1,9 @@
+ #!/usr/bin/env ruby
+
+ require 'pathname'
+ bin_file = Pathname.new(__FILE__).realpath
+ $LOAD_PATH.unshift File.expand_path('../../lib', bin_file)
+
+ require 'opzworks/cli'
+
+ OpzWorks::CLI.start
data/circle.yml ADDED
@@ -0,0 +1,6 @@
+ machine:
+   ruby:
+     version: 2.3.0
+ test:
+   override:
+     - bundle exec rake
data/img/flow.png ADDED
Binary file
@@ -0,0 +1,47 @@
+ require 'trollop'
+ require 'opzworks'
+
+ module OpzWorks
+   class CLI
+     def self.start
+       commands = %w(ssh json berks elastic)
+
+       Trollop.options do
+         version "opzworks #{OpzWorks::VERSION} (c) #{OpzWorks::AUTHORS.join(', ')}"
+         banner <<-EOS.unindent
+           usage: opzworks [COMMAND] [OPTIONS...]
+
+           #{OpzWorks::SUMMARY}
+
+           Commands
+             ssh #{OpzWorks::Commands::SSH.banner}
+             json #{OpzWorks::Commands::JSON.banner}
+             berks #{OpzWorks::Commands::BERKS.banner}
+             elastic #{OpzWorks::Commands::ELASTIC.banner}
+
+           For help with specific commands, run:
+             opzworks COMMAND -h/--help
+
+           Options:
+         EOS
+         stop_on commands
+       end
+
+       command = ARGV.shift
+       case command
+       when 'ssh'
+         OpzWorks::Commands::SSH.run
+       when 'json'
+         OpzWorks::Commands::JSON.run
+       when 'berks'
+         OpzWorks::Commands::BERKS.run
+       when 'elastic'
+         OpzWorks::Commands::ELASTIC.run
+       when nil
+         Trollop.die 'no command specified'
+       else
+         Trollop.die "unknown command: #{command}"
+       end
+     end
+   end
+ end
@@ -0,0 +1,147 @@
+ require 'aws-sdk'
+ require 'trollop'
+ require 'opzworks'
+ require 'rainbow/ext/string'
+
+ require_relative 'include/run_local'
+ require_relative 'include/populate_stack'
+ require_relative 'include/manage_berks_repos'
+
+ module OpzWorks
+   class Commands
+     class BERKS
+       def self.banner
+         'Build the stack berkshelf'
+       end
+
+       def self.run
+         options = Trollop.options do
+           banner <<-EOS.unindent
+             #{BERKS.banner}
+
+             opzworks berks stack1 stack2 ...
+
+             The stack name can be passed as any unique regex. If there is
+             more than one match, it will simply be skipped.
+
+             Options:
+           EOS
+           opt :update, 'Trigger update_custom_cookbooks on stack after uploading a new cookbook tarball.', default: true
+         end
+         ARGV.empty? ? Trollop.die('no stacks specified') : false
+
+         config = OpzWorks.config
+
+         aws_credentials_provider = Aws::SharedCredentials.new(profile_name: config.aws_profile)
+         s3 = Aws::S3::Resource.new(region: config.aws_region, credentials: aws_credentials_provider)
+
+         opsworks = Aws::OpsWorks::Client.new(region: config.aws_region, profile: config.aws_profile)
+         response = opsworks.describe_stacks
+
+         # loops over inputs
+         ARGV.each do |opt|
+           populate_stack(opt, response)
+           next if @populate_stack_failure == true
+
+           manage_berks_repos
+           next if @berks_repo_failure == true
+
+           berks_cook_path = config.berks_base_path || '/tmp'
+           cook_path = "#{berks_cook_path}/#{@project}-#{@branch}"
+           install_path = "#{cook_path}" + '/' + "cookbooks-#{@project}-#{@branch}"
+           cookbook_tarball = config.berks_tarball_name || 'cookbooks.tgz'
+           cookbook_upload = "#{cook_path}" + '/' "#{cookbook_tarball}"
+           s3_bucket = config.berks_s3_bucket || 'opzworks'
+           opsworks_berks = 'Berksfile.opsworks'
+           overrides = 'overrides'
+
+           # berks
+           #
+           puts 'Running berks install'.foreground(:blue)
+           run_local <<-BASH
+             cd #{@target_path}
+             berks update
+           BASH
+           run_local <<-BASH
+             cd #{@target_path}
+             berks vendor #{install_path}
+           BASH
+
+           # if there's a Berksfile.opsworks, push it up to let nodes build their cookbook
+           # repository from its contents
+           #
+           if File.file?("#{@target_path}/#{opsworks_berks}")
+             puts 'Copying opsworks Berksfile into place'.foreground(:blue)
+             FileUtils.copy("#{@target_path}/#{opsworks_berks}", "#{install_path}/Berksfile")
+           end
+
+           # if there's an overrides file, just pull it and stuff the contents into the
+           # upload repo; the line is assumed to be a git repo. This is done to override
+           # opsworks templates without destroying the upstream cookbook.
+           #
+           # For example, to override the default nginx cookbook's nginx.conf, create a git
+           # repo with the directory structure nginx/templates/default and place your
+           # custom nginx.conf.erb in it.
+           #
+           if File.file?("#{@target_path}/#{overrides}")
+             unless File.directory?("#{install_path}")
+               FileUtils.mkdir_p("#{install_path}")
+             end
+             File.open("#{@target_path}/#{overrides}") do |f|
+               f.each_line do |line|
+                 puts "Copying override #{line}".foreground(:blue)
+                 `cd #{install_path} && git clone #{line}`
+               end
+             end
+           end
+
+           puts 'Committing changes and pushing'.foreground(:blue)
+           system "cd #{@target_path} && git commit -am 'berks update'; git push origin #{@branch}"
+
+           puts 'Creating tarball of cookbooks'.foreground(:blue)
+           FileUtils.mkdir_p("#{cook_path}")
+           run_local "tar czf #{cookbook_upload} -C #{install_path} ."
+
+           # upload
+           #
+           puts 'Uploading to S3'.foreground(:blue)
+
+           begin
+             obj = s3.bucket(s3_bucket).object("#{@s3_path}/#{cookbook_tarball}")
+             obj.upload_file("#{cookbook_upload}")
+           rescue StandardError => e
+             puts "Caught exception while uploading to S3 bucket #{s3_bucket}: #{e}".foreground(:red)
+             puts 'Cleaning up before exiting'.foreground(:blue)
+             FileUtils.rm("#{cookbook_upload}")
+             FileUtils.rm_rf("#{install_path}")
+             abort
+           else
+             puts "Completed successful upload of #{@s3_path}/#{cookbook_tarball} to #{s3_bucket}!".foreground(:green)
+           end
+
+           # cleanup
+           #
+           puts 'Cleaning up'.foreground(:blue)
+           FileUtils.rm("#{cookbook_upload}")
+           FileUtils.rm_rf("#{install_path}")
+           puts 'Done!'.foreground(:green)
+
+           # update remote cookbooks
+           #
+           if options[:update] == true
+             puts "Triggering update_custom_cookbooks for remote stack (#{@stack_id})".foreground(:blue)
+
+             hash = {}
+             hash[:comment] = 'shake and bake'
+             hash[:stack_id] = @stack_id
+             hash[:command] = { name: 'update_custom_cookbooks' }
+
+             opsworks.create_deployment(hash)
+           else
+             puts 'Update custom cookbooks skipped via --no-update switch.'.foreground(:blue)
+           end
+         end
+       end
+     end
+   end
+ end
@@ -0,0 +1,103 @@
+ require 'aws-sdk'
+ require 'trollop'
+ require 'faraday'
+ require 'opzworks'
+ require 'net/ssh'
+ require 'net/ssh/multi'
+ require 'rainbow/ext/string'
+
+ require_relative 'include/elastic'
+
+ module OpzWorks
+   class Commands
+     class ELASTIC
+       def self.banner
+         'Perform operations on an Elastic cluster'
+       end
+
+       def self.run
+         options = Trollop.options do
+           banner <<-EOS.unindent
+             #{ELASTIC.banner}
+
+             opzworks elastic stack1 stack2 ... [--start|--stop|--bounce|--rolling]
+
+             The stack name can be passed as any unique regex. If there is
+             more than one match, it will simply be skipped.
+
+             Options:
+           EOS
+           opt :start, 'Start Elastic', default: false
+           opt :stop, 'Stop Elastic', default: false
+           opt :bounce, 'Bounce (stop/start) Elastic', default: false
+           opt :rolling, 'Perform a rolling restart of Elastic', default: false
+         end
+         ARGV.empty? ? Trollop.die('no stacks specified') : false
+
+         optarr = []
+         options.each do |opt, val|
+           val == true ? optarr << opt : false
+         end
+         optarr.empty? ? Trollop.die('no options specified') : false
+
+         config = OpzWorks.config
+         @client = Aws::OpsWorks::Client.new(region: config.aws_region, profile: config.aws_profile)
+         response = @client.describe_stacks
+
+         # loops over inputs
+         ARGV.each do |opt|
+           es_get_input(opt, response)
+           next if @get_data_failure == true
+
+           case options[:rolling]
+           when true
+             # cycle through all the hosts, waiting for status
+             @ip_addrs.each do |ip|
+               puts "\n________________________________________________"
+               puts "Now operating on host #{ip}".foreground(:yellow)
+
+               es_enable_allocation(ip, 'none') if @disable_shard_allocation == true
+               sleep 2 if @disable_shard_allocation == true
+
+               es_service('restart', [ip])
+               es_wait_for_status(ip, 'yellow')
+               es_enable_allocation(ip, 'all') if @disable_shard_allocation == true
+               es_wait_for_status(ip, 'green')
+             end
+           end
+
+           case options[:start]
+           when true
+             es_service('start', @ip_addrs)
+
+             @ip_addrs.each do |ip|
+               es_wait_for_status(ip, 'green')
+             end
+           end
+
+           case options[:stop]
+           when true
+             # use the first host to disable shard allocation
+             es_enable_allocation(@ip_addrs.first, 'none') if @disable_shard_allocation == true
+             sleep 2 if @disable_shard_allocation == true
+
+             es_service('stop', @ip_addrs)
+           end
+
+           case options[:bounce]
+           when true
+             # use the first host to disable shard allocation
+             es_enable_allocation(@ip_addrs.first, 'none') if @disable_shard_allocation == true
+             sleep 2 if @disable_shard_allocation == true
+
+             es_service('restart', @ip_addrs)
+
+             es_wait_for_status(@ip_addrs.first, 'yellow')
+             es_enable_allocation(@ip_addrs.first, 'all') if @disable_shard_allocation == true
+             es_wait_for_status(@ip_addrs.first, 'green')
+           end
+         end
+       end
+     end
+   end
+ end