knife-stackbuilder 0.5.6 → 0.5.7

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA1:
- metadata.gz: 209a8ea8c3e0ab99bbe754f70003ed242e435052
- data.tar.gz: 7e17e996e03499a4e9c7432ccd7354780d392b7f
+ metadata.gz: 278edc9ab005e30e5b5da72cd13e7e28a49df655
+ data.tar.gz: 0ee5c8dc89f98d4cfdcd6d0db3cd040789fa2a52
  SHA512:
- metadata.gz: 65bc78b0ed9f4ae0a8a27f95734f5c3b84632c6d39188d8e3025e692051f43b04e2dd973e485308e89be641b7aa4d28dc616bcc3d40cc8852fb231a571d6cfcc
- data.tar.gz: d5b245863b3a171884b481668482de92bdddcb49d10a36bdbf1db48d00093237c520fe44c9318edffbc33742460adbcfde0ba73186f8ba2e877d49825113a15e
+ metadata.gz: 463be1cca68ecdc686bb7f4e985b2e7f79a8e443605b142078a1fc2be0891327fa68b35be85a8283fdda7750ff789b5eaa38b299fa631b96eeba4de5cd2dc11a
+ data.tar.gz: 9336cb38c175d9cc6daaf14b70883adb13038c7ec1e7d323754e44b651d70309731ea908d85bdcd5f917b6f37a5f5e2e6592c182a33bbe2cd92922121830e340
data/README.md CHANGED
@@ -1,80 +1,119 @@
  # Knife StackBuilder plugin
 
- ## Usage
+ Knife StackBuilder is a Chef Knife tool that can be used to orchestrate configuration across multiple nodes. It
+ evolved from the need to simplify using Chef to build a clustered application services environment such as OpenStack.
+ The plugin was built to:
 
- ```
- knife stack initialize repo
-
- Initializes or validates an existing stack repo. The stack repo should contain the
- following folders along with a Berksfile.
-
- * cookbooks
- * environments
- * secrets
- * databags
- * roles
- * stacks
-
- --path
-
- Path to create and initialize the stack chef repository. If the repository already
- exists then any it will be validated. If the provided path begins with git:, http:
- or https:, it will be assumed to be a git repo. A branch/label may be specified
- by preceding it with a : after the path.
-
- i.e. http://github.com/mevansam/mystack:tag1
-
- --cert_path | --certs
-
- If "--cert_path" is specified then it should point to a directory with a folder for
- each server domain name containing that server's certificate files.
-
- If instead "--certs" are specified then it should provide a comma separated list of
- server domain names for which self-signed certificates will be generated.
-
- --envs
-
- Comma separated list of environments to generate.
-
- --cookbooks
-
- A comma separated list of cookbooks and their versions to be added to the Berksfile.
-
- i.e. "mysql:=5.6.1, wordpress:~> 2.3.0"
- ```
+ 1. Describe a complex system topology using a YAML file
+ 2. Leverage knife cloud plugins to bootstrap cloud, virtual and baremetal nodes within the topology
+ 3. Leverage knife container to build and deploy Docker containers using Chef cookbooks
+ 4. Re-use cookbooks from the [Chef Supermarket](http://supermarket.getchef.com)
+ 5. Leverage the Berkshelf workflow rather than re-invent the wheel for developing Chef cookbooks
+ 6. Normalize the Chef environment and provide a means to externalize and parameterize configuration values
 
- ```
- knife stack upload cookbook[s]
- ```
+ The plugin is very similar to Ansible and Saltstack, but is meant to be Chef centric. If you do not plan to use Chef
+ cookbooks for configuration management, then this is not the tool for you. It differs from Chef Metal in that the
+ orchestration is driven by a set of directives captured in a YAML file. The advantage of describing the build in a
+ YAML file is that it is easier to transform the topology description to another format, such as Heat or BOSH, if a
+ decision is made down the road to move to a different infrastructure orchestration approach.
 
- ```
- knife stack upload environment[s]
- ```
+ Check out the brief [tutorial](docs/how-to.md) on setting up a repository for a single node wordpress stack and building
+ it. The [OpenStack HA Cookbook](https://github.com/mevansam/openstack-ha-cookbook) contains examples where the plugin
+ is used to set up multi-node OpenStack environments in Vagrant, VMware etc. using the OpenStack StackForge cookbooks.
+
+ ## Overview
+
+ The plugin extends the standard cookbook repository upload capabilities with an extensive variable substitution
+ capability. This enables templatizing the Chef artifacts to model a system which can be manipulated by
+ variables in environment specific YAML files in the '```etc/```' folder, which in turn can be overridden by shell
+ variables.
+
+ ### Chef and Berkshelf Cookbook Repository Management
+
+ This is nothing more than a wrapper around existing Chef and Berkshelf repository functionality. However, it adds a couple
+ of key features that are helpful when externalizing and securing the environment for Chef.
+
+ * Cookbooks
+
+ '```knife stack upload cookbooks```' simply invokes Berkshelf to upload the cookbooks specified in the Berksfile.
+
+ * Data Bags and Encryption
+
+ '```knife stack upload data bags```' will upload the data bags found within the '```data_bags/```' folder. Folders
+ at the top level of that folder are considered to be the data bags, with the JSON files within them being the data bag
+ items and their content. A data bag instance will be created for each environment and encrypted with an environment
+ specific key found in the '```secrets/```' folder, so a data bag name will have the format '```[data bag
+ name]-[environment]```'.
+
+ Data bag content can be parameterized with the environment specific YAML file in the '```etc/```' folder. This
+ simplifies the handling of environment specific settings/secrets by externalizing them. Within a data bag folder,
+ creating a content file within an environment specific folder will override any item content at the parent level.
+
+ * Roles
+
+ '```knife stack upload roles```' will upload the roles within the '```roles/```' folder. This is similar to uploading
+ roles via the standard knife role method. However, if required, role content can be parameterized by referencing
+ shell environment variables.
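As an illustration of the naming and override rules above, the per-environment data bag name and the item-file resolution could be sketched as follows. The helper names are hypothetical, not part of the plugin's API:

```ruby
# Hypothetical helpers illustrating the rules above; not part of the plugin.

# Each data bag is uploaded once per environment as '<data bag>-<environment>',
# encrypted with that environment's key from the secrets/ folder.
def environment_data_bag_name(data_bag, environment)
  "#{data_bag}-#{environment}"
end

# An item file placed in an environment-specific subfolder overrides the
# item file at the top level of the data bag folder.
def resolve_item_file(data_bag_dir, item, environment)
  override = File.join(data_bag_dir, environment, "#{item}.json")
  File.exist?(override) ? override : File.join(data_bag_dir, "#{item}.json")
end

puts environment_data_bag_name('credentials', 'DEV')   # prints "credentials-DEV"
```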
+ 
+ ### Externalizing configuration values and order of evaluation
+ 
+ As mentioned previously, this plugin parameterizes the Chef environment in the '```environments/```' folder using a YAML
+ file having the same name as the environment in the '```etc/```' folder. This same file is used to parameterize the
+ stack file that describes the system topology. The YAML environment file can in turn be parameterized by pulling in
+ values from the shell environment.
+ 
+ For example, the following will propagate a value from the shell to the rest of the stack and Chef environment. Since Ruby string variable expansion is used, it is possible to reference '```ENV```' to pull shell environment variables directly into any YAML or JSON configuration file. You can also reference a key-value pair in the YAML that has already been parsed via '```#{my['some key']}```'.
+ 
+ In shell:
 
  ```
- knife stack upload role[s]
+ export DOMAIN=knife-stackbuilder-dev.org
  ```
 
+ in ```./etc/DEV.yml```:
+ 
  ```
- knife stack upload data bag[s]
+ ---
+ domain: "#{ENV['DOMAIN']}"
+ .
+ .
  ```
 
+ in ```./stack.yml```:
+ 
  ```
- knife stack upload repo
+ ---
+ # Stack
+ name: Stack1
+ environment: DEV
+ domain: "#{env['domain']}"
+ .
+ .
  ```
 
+ in ```environments/DEV.rb```:
+ 
  ```
- knife stack build
+ name "DEV"
+ description "Chef 'DEV' environment."
+ env = YAML.load_file(File.expand_path('../../etc/DEV.yml', __FILE__))
+ override_attributes(
+   'domain' => "#{env['domain']}",
+ .
+ .
  ```
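The substitution chain above can be sketched in plain Ruby. This is an illustration only, not the plugin's actual loader (`StackBuilder::Common.load_yaml`, whose implementation may differ); it handles just the `#{ENV['...']}` form of reference, expanding it before the YAML is parsed:

```ruby
require 'yaml'

# Illustrative sketch: expand Ruby-style #{ENV['...']} references in a YAML
# document before parsing it. Only this one substitution form is handled.
def load_yaml_with_env(yaml_text)
  expanded = yaml_text.gsub(/\#\{ENV\['([^']+)'\]\}/) { ENV[Regexp.last_match(1)] }
  YAML.load(expanded)
end

ENV['DOMAIN'] = 'knife-stackbuilder-dev.org'
config = load_yaml_with_env(<<~'EOF')
  ---
  domain: "#{ENV['DOMAIN']}"
EOF
puts config['domain']   # prints "knife-stackbuilder-dev.org"
```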
 
- ## Design
+ The following diagram illustrates the relationships between the files in the repository and how they are
+ parameterized.
 
- ### Berkshelf Cookbook Repository Management
- 
- ### Externalizing configuration values and order of evaluation
+ ![Image of OpenStack HA Configuration File Structure](docs/images/config_files.png)
 
  #### Requesting user input for non-persisted values
 
+ 
+ 
  #### Including common yaml configurations
 
  #### Processing node attributes
@@ -87,8 +126,10 @@ knife stack build
 
  ## To Do:
 
- * Use Chef Pushy instead of Knife SSH
- * Add option to execute Chef Push jobs on events
+ * Use Chef Pushy instead of Knife SSH, and add an option to execute Chef Push jobs on events
+ * Make encrypted data bag handling more robust using Chef Vault
+ * Detect changes to cookbooks, roles, data bags etc. and upload only the changes
+ * Load custom provider gems by inspecting the installed gems
 
  ## Contributing
 
@@ -112,4 +153,7 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
 
- Author: Mevan Samaratunga (mevansam@gmail.com)
+ Author | Email | Company
+ -------|-------|--------
+ Mevan Samaratunga | msamaratunga@pivotal.io | [Pivotal](http://www.pivotal.io)
+ 
@@ -73,6 +73,10 @@ class Chef
 
  env_vars = provider.get_env_vars
  stack = StackBuilder::Common.load_yaml(stack_file, env_vars)
+ merge_maps( stack, stack_overrides.end_with?('.json') ?
+     JSON.load(File.new(stack_overrides, 'r')) : JSON.load(stack_overrides) ) \
+     unless stack_overrides.nil?
+ 
  puts("Stack file:\n#{stack.to_yaml}")
 
  else
@@ -89,6 +89,9 @@ module StackBuilder::Chef
  # Create or copy certs
  create_certs(certificates) unless certificates.nil?
  end
+ 
+ @build_path = @repo_path + '/.build'
+ FileUtils.mkdir_p(@build_path)
  end
 
  def upload_environments(environment = nil)
@@ -101,7 +104,7 @@ module StackBuilder::Chef
  # TODO: Handle JSON environment files. JSON files should be processed similar to roles.
 
  env_file = "#{@repo_path}/environments/#{env_name}.rb"
- FileUtils.touch(env_file)
+ # FileUtils.touch(env_file)
 
  knife_cmd.name_args = [ env_file ]
  run_knife(knife_cmd)
@@ -411,7 +414,7 @@ module StackBuilder::Chef
  @logger.debug("Uploading data bag '#{data_bag_item_name}' with contents:\n#{data_bag_item.to_yaml}")
 
  tmpfile = "#{Dir.tmpdir}/#{data_bag_item_name}.json"
- File.open("#{tmpfile}", 'w+') { |f| f.write(data_bag_item.to_json) }
+ File.open(tmpfile, 'w+') { |f| f.write(data_bag_item.to_json) }
 
  knife_cmd = Chef::Knife::DataBagFromFile.new
  knife_cmd.name_args = [ data_bag_name, tmpfile ]
@@ -439,7 +442,7 @@ module StackBuilder::Chef
  @logger.debug("Uploading role '#{role_name}' with contents:\n#{role_content.to_yaml}")
 
  tmpfile = "#{Dir.tmpdir}/#{role_name}.json"
- File.open("#{tmpfile}", 'w+') { |f| f.write(role_content.to_json) }
+ File.open(tmpfile, 'w+') { |f| f.write(role_content.to_json) }
 
  knife_cmd = Chef::Knife::RoleFromFile.new
  knife_cmd.name_args = [ tmpfile ]
@@ -12,20 +12,177 @@ module StackBuilder::Chef
12
12
 
13
13
  @env_file_path = repo_path + '/environments/' + environment + '.rb'
14
14
 
15
- docker_image_dir = repo_path + '/.docker_images'
15
+ @dockerfiles_build_dir = repo_path + '/.build/docker/build'
16
+ FileUtils.mkdir_p(@dockerfiles_build_dir)
17
+
18
+ docker_image_dir = repo_path + '/.build/docker/' + environment
16
19
  FileUtils.mkdir_p(docker_image_dir)
17
- @docker_image_path = docker_image_dir + '/' + @name + '.gz'
20
+
21
+ docker_image_filename = docker_image_dir + '/' + @name
22
+ @docker_image_path = docker_image_filename + '.gz'
23
+ @docker_image_target = docker_image_filename + '.target'
18
24
  end
19
25
 
20
26
  def process(index, events, attributes, target = nil)
21
27
 
28
+ return unless events.include?('create') || events.include?('install') ||
29
+ events.include?('configure') || events.include?('update')
30
+
31
+ target_node_instance = "#{target.node_id}-#{index}"
32
+ node = Chef::Node.load(target_node_instance)
33
+ ipaddress = node.attributes['ipaddress']
34
+
35
+ ssh = ssh_create(ipaddress, target.ssh_user,
36
+ target.ssh_password.nil? ? target.ssh_identity_file : target.ssh_password)
37
+
38
+ image_exists = @name==ssh_exec!(ssh, "docker images | awk '$1==\"#{@name}\" { print $1 }'")[:out].strip
39
+
40
+ # Copy image file to target if it has changed or does not exist on target
41
+ if build_container(attributes) && !target.nil? &&
42
+ ( !File.exist?(@docker_image_target) ||
43
+ (File.mtime(@docker_image_path) > File.mtime(@docker_image_target)) ||
44
+ !image_exists )
45
+
46
+ puts "Uploading docker image to target '#{target_node_instance}'."
47
+ res = ''
+ ssh.open_channel do |channel|
48
+
49
+ channel.exec('gunzip | sudo docker load') do |ch, success|
50
+ channel.on_data do |ch, data|
51
+ res << data
52
+ end
53
+
54
+ channel.send_data IO.binread(@docker_image_path)
55
+ channel.eof!
56
+ end
57
+ end
58
+ ssh.loop
59
+
60
+ FileUtils.touch(@docker_image_target)
61
+ end
62
+
63
+ # Start container instances
64
+ if @knife_config.has_key?('container_start')
65
+
66
+ result = ssh_exec!(ssh, "sudo docker ps -a | awk '/#{@node_id}/ { print $0 }'")
67
+ raise StackBuilder::Common::StackBuilderError, "Error determining running containers for #{@name}: #{result[:err]}" \
68
+ if result[:exit_code]>0
69
+
70
+ running_instances = result[:out].lines
71
+ if running_instances.size > @scale
72
+
73
+ (running_instances.size - 1).downto(@scale) do |i|
74
+
75
+ container_node_id = "#{@node_id}-#{i}"
76
+ running_instance = running_instances.select{ |ri| ri[/#{@node_id}-\d+/,0]==container_node_id }
77
+ container_id = running_instance.first[/^[0-9a-z]+/,0]
78
+
79
+ result = ssh_exec!(ssh, "sudo docker rm -f #{container_id}")
80
+
81
+ if result[:exit_code]==0
82
+
83
+ remove_container_node_from_chef(container_node_id)
84
+
85
+ container_port_map = Hash.new
86
+ container_port_map.merge!(node.normal['container_port_map']) \
87
+ unless node.normal['container_port_map'].nil?
88
+
89
+ container_port_map.each do |k,v|
90
+ container_port_map[k].delete(container_node_id)
91
+ end
92
+
93
+ node.normal['container_port_map'] = container_port_map
94
+ node.save
95
+ else
96
+ @logger.error("Unable to stop container instance #{running_instances[i]}: #{result[:err]}")
97
+ end
98
+ end
99
+
100
+ elsif running_instances.size < @scale
101
+
102
+ container_start = @knife_config['container_start']
103
+ container_ports = container_start['ports']
104
+ container_options = container_start['options']
105
+
106
+ start_cmd = "sudo docker run -d "
107
+
108
+ start_cmd += container_options + ' ' \
109
+ unless container_options.nil?
110
+
111
+ start_cmd += container_ports.collect \
112
+ { |k,p| "-p :#{p=~/\d+\:\d+/ ? p.to_s : ':' + p.to_s}" }.join(' ') \
113
+ unless container_ports.nil?
114
+
115
+ running_instances.size.upto(@scale - 1) do |i|
116
+
117
+ container_node_id = "#{@node_id}-#{i}"
118
+ remove_container_node_from_chef(container_node_id)
119
+
120
+ result = ssh_exec!( ssh,
121
+ "#{start_cmd} --name #{container_node_id} " +
122
+ "-h #{container_node_id} -e \"CHEF_NODE_NAME=#{container_node_id}\" #{@name}")
123
+
124
+ if result[:exit_code]==0
125
+
126
+ container_id = container_node_id
127
+
128
+ container_port_map = Hash.new
129
+ container_port_map.merge!(node.normal['container_port_map']) \
130
+ unless node.normal['container_port_map'].nil?
131
+
132
+ container_ports.each do |k,p|
133
+
134
+ port_map = container_port_map[k]
135
+ if port_map.nil?
136
+
137
+ port_map = { }
138
+ container_port_map[k] = port_map
139
+ end
140
+
141
+ if p=~/\d+\:\d+/
142
+ port_map[container_id] = p[/(\d+)\:\d+/, 1]
143
+ else
144
+ result = ssh_exec!(ssh, "sudo docker port #{container_node_id} #{p}")
145
+
146
+ @logger.error( "Unable to get host port for " +
147
+ "'#{@node_id}-#{i}:#{p}': #{result[:err]}") \
148
+ if result[:exit_code]>0
149
+
150
+ port_map[container_id] = result[:out][/:(\d+)$/, 1]
151
+ end
152
+ end
153
+
154
+ node.normal['container_port_map'] = container_port_map
155
+ node.save
156
+ else
157
+ @logger.error("Unable to start container instance '#{@node_id}-#{i}': #{result[:err]}")
158
+ end
159
+ end
160
+ end
161
+
162
+ end
163
+
164
+ super(index, events, attributes, target)
165
+ end
166
+
167
+ def delete(index)
168
+
169
+ super(index)
170
+ end
171
+
172
+ private
173
+
174
+ def build_container(attributes)
175
+
22
176
  @@sync ||= Mutex.new
23
177
  @@sync.synchronize {
24
178
 
25
179
  unless @build_complete ||
26
- (File.exist?(@docker_image_path) && File.exist?(@env_file_path) && \
180
+ ( File.exist?(@docker_image_path) && File.exist?(@env_file_path) && \
27
181
  File.mtime(@docker_image_path) > File.mtime(@env_file_path) )
28
182
 
183
+ %x(docker images)
184
+ raise ArgumentError, "Docker does not appear to be available." unless $?.success?
185
+
29
186
  if is_os_x? || !is_nix_os?
30
187
 
31
188
  raise ArgumentError, "DOCKER_HOST environment variable not set." \
@@ -36,121 +193,147 @@ module StackBuilder::Chef
36
193
  unless ENV['DOCKER_TLS_VERIFY']
37
194
  end
38
195
 
39
- begin
40
- build_role = Chef::Role.new
41
- build_role.name(@name + '_build')
42
- build_role.override_attributes(attributes)
43
- build_role.save
196
+ echo_output = @logger.info? || @logger.debug?
197
+ build_exists = @name==`docker images | awk '$1=="#{@name}" { print $1 }'`.strip
44
198
 
45
- dockerfiles_path = File.join(Dir.home, '/.knife/container')
199
+ knife_cmd = Chef::Knife::ContainerDockerInit.new
46
200
 
47
- build_exists = @name==`docker images | awk '/#{@name}/ { print $1 }'`.strip
201
+ # Run as a forked job (This captures all output and removes noise from output)
202
+ run_jobs(knife_cmd, true, echo_output) do |k|
48
203
 
49
- knife_cmd = Chef::Knife::ContainerDockerInit.new
50
- knife_cmd.name_args = [ @name ]
204
+ k.name_args = [ @name ]
51
205
 
52
- knife_cmd.config[:local_mode] = false
53
- knife_cmd.config[:base_image] = build_exists ? @name : @knife_config['image']
54
- knife_cmd.config[:force] = true
55
- knife_cmd.config[:generate_berksfile] = false
56
- knife_cmd.config[:include_credentials] = false
206
+ k.config[:local_mode] = false
207
+ k.config[:base_image] = build_exists ? @name : @knife_config['image']
208
+ k.config[:force] = true
209
+ k.config[:generate_berksfile] = false
210
+ k.config[:include_credentials] = true
57
211
 
58
- knife_cmd.config[:dockerfiles_path] = dockerfiles_path
59
- knife_cmd.config[:run_list] = @knife_config['run_list'] + [ "role[#{build_role.name}]" ]
212
+ k.config[:dockerfiles_path] = @dockerfiles_build_dir
213
+ k.config[:run_list] = @knife_config['run_list']
60
214
 
61
- knife_cmd.config[:encrypted_data_bag_secret] = IO.read(@env_key_file) \
215
+ k.config[:encrypted_data_bag_secret] = IO.read(@env_key_file) \
62
216
  unless File.exist? (@env_key_file)
63
217
 
64
- run_knife(knife_cmd)
218
+ run_knife(k)
219
+ end
220
+
221
+ dockerfiles_named_path = @dockerfiles_build_dir + '/' + @name
222
+
223
+ # Create env key to add to the docker image
224
+ FileUtils.cp(@env_key_file, dockerfiles_named_path + '/chef/encrypted_data_bag_secret')
225
+
226
+ if @knife_config.has_key?('inline_dockerfile')
65
227
 
66
- if @knife_config.has_key?('inline_dockerfile')
228
+ dockerfile_file = dockerfiles_named_path + '/Dockerfile'
229
+ dockerfile = IO.read(dockerfile_file).lines
67
230
 
68
- dockerfile_path = dockerfiles_path + "/#{@name}/Dockerfile"
69
- docker_file = IO.read(dockerfile_path).lines
231
+ dockerfile_new = [ ]
232
+
233
+ log_level = (
234
+ @logger.debug? ? 'debug' :
235
+ @logger.info? ? 'info' :
236
+ @logger.warn? ? 'warn' :
237
+ @logger.error? ? 'error' :
238
+ @logger.fatal? ? 'fatal' : 'error' )
239
+
240
+ while dockerfile.size>0
241
+ l = dockerfile.delete_at(0)
242
+
243
+ if l.start_with?('RUN chef-init ')
244
+ # Ensure node builds with the correct Chef environment and attributes
245
+ dockerfile_new << l.chomp + " -E #{@environment} -l #{log_level}\n"
246
+
247
+ elsif l.start_with?('CMD ')
248
+ # Ensure node starts within the correct Chef environment and attributes
249
+ dockerfile_new << l.gsub(/\"\]/,"\",\"-E #{@environment}\"]")
250
+
251
+ else
252
+ dockerfile_new << l
70
253
 
71
- docker_file_new = [ ]
72
- while docker_file.size>0
73
- l = docker_file.delete_at(0)
74
- docker_file_new << l
75
254
  if l.start_with?('FROM ')
76
- docker_file_new += @knife_config['inline_dockerfile'].lines.map { |ll| ll.strip + "\n" }
77
- break
255
+ # Insert additional custom Dockerfile build steps
256
+ dockerfile_new += @knife_config['inline_dockerfile'].lines.map { |ll| ll.strip + "\n" }
78
257
  end
79
258
  end
80
- docker_file_new += docker_file
81
-
82
- File.open(dockerfile_path, 'w+') { |f| f.write(docker_file_new.join) }
83
259
  end
260
+ dockerfile_new += dockerfile
84
261
 
85
- knife_cmd = Chef::Knife::ContainerDockerBuild.new
86
- knife_cmd.name_args = [ @name ]
262
+ File.open(dockerfile_file, 'w+') { |f| f.write(dockerfile_new.join) }
263
+ end
87
264
 
88
- knife_cmd.config[:run_berks] = false
89
- knife_cmd.config[:force_build] = true
90
- knife_cmd.config[:dockerfiles_path] = dockerfiles_path
91
- knife_cmd.config[:cleanup] = true
265
+ # Update first boot json file with attributes and services
266
+ first_boot_file = dockerfiles_named_path + '/chef/first-boot.json'
267
+ first_boot = JSON.load(File.new(first_boot_file, 'r')).to_hash
268
+ first_boot.merge!(attributes)
92
269
 
93
- result = run_knife(knife_cmd)
270
+ first_boot['container_service'] = @knife_config['container_services'] \
271
+ if @knife_config.has_key?('container_services')
94
272
 
95
- ensure
96
- build_role.destroy unless build_role.nil?
97
- end
273
+ File.open(first_boot_file, 'w+') { |f| f.write(first_boot.to_json) }
98
274
 
99
- # TODO: Errors are currently not detected as knife-container sends all chef-client output to stdout
100
- if result.rindex('Chef run process exited unsuccessfully (exit code 1)')
275
+ # Run the build as a forked job (This captures all output and removes noise from output)
276
+ knife_cmd = Chef::Knife::ContainerDockerBuild.new
101
277
 
102
- if @logger.level>=::Logger::WARN
103
- puts "Knife execution failed with an error."
104
- puts "#{result.string}"
105
- end
278
+ job_results = run_jobs(knife_cmd, true, echo_output) do |k|
106
279
 
107
- `for i in $(docker ps -a | awk '/chef-in/ { print $1 }'); do docker rm -f $i; done`
108
- `for i in $(docker images | awk '/<none>/ { print $3 }'); do docker rmi $i; done`
280
+ k.name_args = [ @name ]
109
281
 
110
- raise StackBuilderError, 'Container build has errors.'
282
+ k.config[:run_berks] = false
283
+ k.config[:force_build] = true
284
+ k.config[:dockerfiles_path] = @dockerfiles_build_dir
285
+ k.config[:cleanup] = true
286
+
287
+ run_knife(k)
111
288
  end
112
289
 
113
- `docker save #{@name} | gzip -9 > #{@docker_image_path}`
114
- end
115
- @build_complete = true
116
- }
290
+ result = job_results[knife_cmd.object_id][0]
291
+ if result.rindex('Chef run process exited unsuccessfully (exit code 1)') ||
292
+ result.rindex(/The command \[.*\] returned a non-zero code:/)
117
293
 
118
- if @build_complete && !target.nil?
119
-
120
- node = Chef::Node.load("#{target.node_id}-#{index}")
121
- ipaddress = node.attributes['ipaddress']
122
-
123
- if target.ssh_password.nil?
124
- ssh = Net::SSH.start(ipaddress, target.ssh_user,
125
- {
126
- :key_data => IO.read(target.ssh_identity_file),
127
- :user_known_hosts_file => "/dev/null"
128
- } )
129
- else
130
- ssh = Net::SSH.start(ipaddress, target.ssh_user,
131
- {
132
- :password => target.ssh_password,
133
- :user_known_hosts_file => "/dev/null"
134
- } )
135
- end
294
+ if @logger.level>=::Logger::WARN
136
295
 
137
- ssh.open_channel do |channel|
296
+ %x(for i in $(docker ps -a | awk '/Exited \(\d+\)/ { print $1 }'); do docker rm -f $i; done)
297
+ %x(for i in $(docker ps -a | awk '/chef-in/ { print $1 }'); do docker rm -f $i; done)
298
+ %x(docker rmi -f $(docker images -q --filter "dangling=true"))
299
+ %x(docker rmi -f #{@name})
138
300
 
139
- channel.exec('gunzip | sudo docker load') do |ch, success|
140
- channel.on_data do |ch, data|
141
- res << data
301
+ puts "Knife container build Chef convergence failed with an error."
302
+ puts "#{job_results.first[0]}"
142
303
  end
143
304
 
144
- channel.send_data IO.binread(@docker_image_path)
145
- channel.eof!
305
+ raise StackBuilder::Common::StackBuilderError, "Docker build of container #{@name} has errors."
146
306
  end
307
+
308
+ puts 'Saving docker image for upload. This may take a few minutes.'
309
+ out = %x(docker save #{@name} | gzip -9 > #{@docker_image_path})
310
+
311
+ raise StackBuilder::Common::StackBuilderError, \
312
+ "Unable to save docker container #{@name}: #{out}" unless $?.success?
147
313
  end
148
- ssh.loop
149
- end
314
+ }
150
315
 
151
- super(index, events, attributes, target)
316
+ @build_complete = true
152
317
  end
153
318
 
319
+ def remove_container_node_from_chef(container_node_id)
320
+
321
+ begin
322
+ node = Chef::Node.load(container_node_id)
323
+ @logger.info("Deleting container node reference in Chef: #{container_node_id}")
324
+ node.destroy
325
+ rescue
326
+ # Do Nothing
327
+ end
328
+
329
+ begin
330
+ client = Chef::ApiClient.load(container_node_id)
331
+ @logger.info("Deleting container api client reference in Chef: #{container_node_id}")
332
+ client.destroy
333
+ rescue
334
+ # Do Nothing
335
+ end
336
+ end
154
337
 
155
338
  end
156
339
  end
@@ -26,6 +26,8 @@ module StackBuilder::Chef
26
26
  @name = node_config['node']
27
27
  @node_id = @name + '-' + @id
28
28
 
29
+ @environment = environment
30
+
29
31
  @run_list = node_config.has_key?('run_list') ? node_config['run_list'].join(',') : nil
30
32
  @run_on_event = node_config['run_on_event']
31
33
 
@@ -67,15 +69,10 @@ module StackBuilder::Chef
67
69
  name = "#{@node_id}-#{index}"
68
70
  self.create_vm(name, @knife_config)
69
71
 
70
- knife_cmd = KnifeAttribute::Node::NodeAttributeSet.new
71
- knife_cmd.name_args = [ name, 'stack_id', @id ]
72
- knife_cmd.config[:type] = 'override'
73
- run_knife(knife_cmd)
74
-
75
- knife_cmd = KnifeAttribute::Node::NodeAttributeSet.new
76
- knife_cmd.name_args = [ name, 'stack_node', @name ]
77
- knife_cmd.config[:type] = 'override'
78
- run_knife(knife_cmd)
72
+ node = Chef::Node.load(name)
73
+ node.normal['stack_id'] = @id
74
+ node.normal['stack_node'] = @name
75
+ node.save
79
76
 
80
77
  unless @env_key_file.nil?
81
78
  env_key = IO.read(@env_key_file)
@@ -116,8 +113,6 @@ module StackBuilder::Chef
116
113
  run_on_event = node_manager.run_on_event
117
114
  end
118
115
 
119
- set_attributes(name, attributes)
120
-
121
116
  if (events.include?('configure') || events.include?('update')) && !run_list.nil?
122
117
 
123
118
  log_level = (
@@ -138,6 +133,10 @@ module StackBuilder::Chef
138
133
  "result=$?\n" +
139
134
  "rm -f $TMPFILE\n" +
140
135
  "exit $result" )
136
+ else
137
+ node = Chef::Node.load(name)
138
+ attributes.each { |k,v| node.override[k] = v }
139
+ node.save
141
140
  end
142
141
 
143
142
  run_on_event.each_pair { |event, cmd|
@@ -212,21 +211,6 @@ module StackBuilder::Chef
212
211
  results[2]
213
212
  end
214
213
 
215
- def set_attributes(name, attributes, key = nil)
216
-
217
- attributes.each do |k, v|
218
-
219
- if v.is_a?(Hash)
220
- set_attributes(name, v, key.nil? ? k : key + '.' + k)
221
- else
222
- knife_cmd = KnifeAttribute::Node::NodeAttributeSet.new
223
- knife_cmd.name_args = [ name, (key.nil? ? k : key + '.' + k), v.to_s ]
224
- knife_cmd.config[:type] = 'override'
225
- run_knife(knife_cmd)
226
- end
227
- end
228
- end
229
-
230
214
  def knife_ssh(name, cmd)
231
215
 
232
216
  sudo = @knife_config['options']['sudo'] ? 'sudo -i su -c ' : ''
@@ -8,6 +8,8 @@ module StackBuilder::Chef
8
8
 
9
9
  def create_vm(name, knife_config)
10
10
 
11
+ handle_vagrant_box_additions(name, knife_config)
12
+
11
13
  knife_cmd = Chef::Knife::VagrantServerCreate.new
12
14
 
13
15
  knife_cmd.config[:chef_node_name] = name
@@ -53,6 +55,8 @@ module StackBuilder::Chef
53
55
  run_knife(knife_cmd, 3)
54
56
  }
55
57
 
58
+ handle_vagrant_box_cleanup(knife_config)
59
+
56
60
  rescue Exception => msg
57
61
 
58
62
  if Dir.exist?(knife_cmd.config[:vagrant_dir] + '/' + name)
@@ -68,5 +72,79 @@ module StackBuilder::Chef
68
72
  end
69
73
  end
70
74
  end
75
+
76
+ def handle_vagrant_box_additions(name, knife_config)
77
+
78
+ # Create add-on provider specific infrastructure
79
+ knife_options = knife_config['options']
80
+ provider = knife_options['provider']
81
+
82
+ if !provider.nil? && provider.start_with?('vmware')
83
+
84
+ vmx_customize = knife_options['vmx_customize']
85
+ unless vmx_customize.nil?
86
+
87
+ # Build additional disks that will be added to
88
+ # the VMware fusion/desktop VM when booted.
89
+
90
+ disks = {}
91
+ vagrant_disk_path = File.join(Dir.home, '/.vagrant/disks') + '/' + name
92
+ FileUtils.mkdir_p(vagrant_disk_path)
93
+
94
+ vmx_customize.split(/::/).each do |p|
95
+
96
+ kv = p.split('=')
97
+ k = kv[0].gsub(/\"/, '').strip
98
+ v = kv[1].gsub(/\"/, '').strip
99
+
100
+ if k.start_with?('scsi')
101
+ ss = k.split('.')
102
+ disks[ss[0]] ||= {}
103
+ case ss[1]
104
+ when 'fileName'
105
+ if v.start_with?('/')
106
+ disks[ss[0]]['fileName'] = v
107
+ else
108
+ vv = vagrant_disk_path + '/' + v
109
+ vmx_customize.gsub!(/#{v}/, vv)
110
+ disks[ss[0]]['fileName'] = vv
111
+ end
112
+ when 'fileSize'
113
+ disks[ss[0]]['fileSize'] = v
114
+ end
115
+ end
116
+ end
117
+
118
+ # Create extra disks, since unlike VirtualBox, VMware Fusion/Workstation
+ # will not create disks automatically based on configuration params
120
+
121
+ vdiskmgr = %x(which vmware-vdiskmanager)
122
+ vdiskmgr = "/Applications/VMware Fusion.app/Contents/Library/vmware-vdiskmanager" \
123
+ if is_os_x? && vdiskmgr.empty?
124
+
125
+ if File.exist?(vdiskmgr)
126
+
127
+ run_jobs(disks.values) do |f|
128
+
129
+ disk = f['fileName']
130
+ @logger.info("Creating disk #{disk}.")
131
+
132
+ %x("#{vdiskmgr}" -c -t 0 -s #{f['fileSize']} -a ide #{disk}) \
133
+ unless File.exist?(f['fileName'])
134
+
135
+ raise StackBuilder::Common::StackBuilderError, "Disk #{disk} could not be created." \
136
+ unless File.exist?(disk)
137
+ end
138
+ else
139
+ raise StackBuilder::Common::StackBuilderError,
140
+ "Unable to determine path to vmware-vdiskmanager " +
141
+ "to create the requested additional disk."
142
+ end
143
+ end
144
+ end
145
+ end
146
+
147
+ def handle_vagrant_box_cleanup(knife_config)
148
+ end
71
149
  end
72
150
  end
@@ -19,23 +19,68 @@ module StackBuilder::Common
  #
  # Runs the given execution list asynchronously if fork is supported
  #
- def exec_forked(exec_list)
-
- if is_nix_os
- p = []
- exec_list.each do |data|
- p << fork {
- yield(data)
+ def run_jobs(jobs, wait = true, echo = false)
+
+ jobs = [ jobs ] unless jobs.is_a?(Array)
+ job_handles = { }
+
+ if is_nix_os?
+
+ jobs.each do |job|
+
+ read, write = IO.pipe
+
+ pid = fork {
+
+ read.close
+
+ if echo
+ stdout = StackBuilder::Common::TeeIO.new($stdout)
+ stderr = StackBuilder::Common::TeeIO.new($stderr)
+ else
+ stdout = StringIO.new
+ stderr = StringIO.new
+ end
+
+ begin
+ previous_stdout, $stdout = $stdout, stdout
+ previous_stderr, $stderr = $stderr, stderr
+ yield(job)
+ Marshal.dump([stdout.string, stderr.string], write)
+ ensure
+ $stdout = previous_stdout
+ $stderr = previous_stderr
+ end
  }
+ write.close
+
+ job_handles[job.object_id] = [ pid, read ]
  end
- p.each { |pid| Process.waitpid(pid) }
+ end
+
+ if wait
+ wait_jobs(job_handles)
  else
- exec_list.each do |data|
- yield(data)
- printf("\n")
- end
+ job_handles
  end
-
+ end
+
+ #
+ # This should be called after run_jobs() with the returned handles
+ # if you want to wait for the forked jobs to complete and retrieve
+ # the results.
+ #
+ def wait_jobs(job_handles)
+
+ job_results = { }
+ job_handles.each do |job_id, handle|
+
+ result = Marshal.load(handle[1])
+ Process.waitpid(handle[0])
+ job_results[job_id] = result
+ end
+
+ job_results
  end

  #
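The new `run_jobs`/`wait_jobs` pair forks each job, captures the child's output, and marshals it back to the parent over a pipe. A self-contained sketch of that fork/pipe/Marshal shape, outside the StackBuilder classes (names here are illustrative, not the gem's API):

```ruby
require 'stringio'

# Fork each job, capture the child's stdout in a StringIO, and send
# the captured text back to the parent over a pipe with Marshal.
def run_forked(jobs)
  handles = jobs.map do |job|
    reader, writer = IO.pipe
    pid = fork do
      reader.close
      captured = StringIO.new
      prev, $stdout = $stdout, captured
      begin
        job.call
      ensure
        $stdout = prev
      end
      Marshal.dump(captured.string, writer)
      writer.close
    end
    writer.close          # parent keeps only the read end
    [pid, reader]
  end

  # Like wait_jobs: read each child's result, then reap it.
  handles.map do |pid, reader|
    result = Marshal.load(reader)
    Process.waitpid(pid)
    reader.close
    result
  end
end

results = run_forked([-> { puts "one" }, -> { puts "two" }])
# results holds each job's captured stdout: ["one\n", "two\n"]
```

Closing the parent's write end before reading matters: `Marshal.load` on the read end would otherwise block forever waiting for EOF. `Kernel#fork` with a block exits the child when the block returns, so no explicit `exit!` is needed.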
@@ -324,7 +369,7 @@ module StackBuilder::Common
  end

  #
- # Helper command to rin Chef knife
+ # Helper command to run Chef knife
  #
  def run_knife(knife_cmd, retries = 0, output = StringIO.new, error = StringIO.new)

@@ -385,5 +430,71 @@ module StackBuilder::Common
  $stderr = previous_stderr
  end

+ # Creates an ssh session to the given host using the given credentials
+ def ssh_create(host, user, key)
+
+ if key.start_with?('-----BEGIN RSA PRIVATE KEY-----')
+ ssh = Net::SSH.start(host, user,
+ {
+ :key_data => key,
+ :user_known_hosts_file => "/dev/null"
+ } )
+ elsif File.exist?(key)
+ ssh = Net::SSH.start(host, user,
+ {
+ :key_data => IO.read(key),
+ :user_known_hosts_file => "/dev/null"
+ } )
+ else
+ ssh = Net::SSH.start(host, user,
+ {
+ :password => key,
+ :user_known_hosts_file => "/dev/null"
+ } )
+ end
+
+ ssh
+ end
+
+ # Executes a remote shell command and returns exit status
+ def ssh_exec!(ssh, command)
+
+ stdout_data = ""
+ stderr_data = ""
+ exit_code = nil
+ exit_signal = nil
+
+ ssh.open_channel do |channel|
+ channel.exec(command) do |ch, success|
+ unless success
+ abort "FAILED: couldn't execute command (ssh.channel.exec)"
+ end
+ channel.on_data do |ch,data|
+ stdout_data+=data
+ end
+
+ channel.on_extended_data do |ch,type,data|
+ stderr_data+=data
+ end
+
+ channel.on_request("exit-status") do |ch,data|
+ exit_code = data.read_long
+ end
+
+ channel.on_request("exit-signal") do |ch, data|
+ exit_signal = data.read_long
+ end
+ end
+ end
+ ssh.loop
+
+ {
+ :out => stdout_data,
+ :err => stderr_data,
+ :exit_code => exit_code,
+ :exit_signal => exit_signal
+ }
+ end
+
  end
  end
@@ -5,25 +5,16 @@ module StackBuilder::Common
  #
  # Sends data written to an IO object to multiple outputs.
  #
- class TeeIO < IO
+ class TeeIO < StringIO

  def initialize(output = nil)
- @string_io = StringIO.new
+ super()
  @output = output
  end

- def tty?
- return false
- end
-
  def write(string)
- @string_io.write(string)
+ super(string)
  @output.write(string) unless @output.nil?
  end
-
- def string
- @string_io.string
- end
-
  end
  end
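The TeeIO rework above replaces the hand-rolled IO wrapper with a `StringIO` subclass whose `write` both buffers and forwards, which makes the explicit `tty?` and `string` methods unnecessary. A self-contained sketch of the same idea (the class name here is illustrative):

```ruby
require 'stringio'

# A StringIO that buffers everything written to it while also
# forwarding each write to an optional secondary IO.
class Tee < StringIO
  def initialize(output = nil)
    super()            # start with an empty internal buffer
    @output = output
  end

  def write(string)
    @output.write(string) unless @output.nil?
    super(string)      # buffer it; returns bytes written
  end
end

sink = StringIO.new
tee  = Tee.new(sink)
tee.write("hello\n")   # lands in both tee.string and sink.string
```

This is how `run_jobs` echoes a child's output while still capturing it for the marshalled result: it swaps `$stdout` for a `TeeIO` wrapping the real stdout when `echo` is true.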
@@ -31,7 +31,6 @@ module StackBuilder::Stack
  end

  def delete(index)
- raise StackBuilder::Common::NotImplemented, 'NodeManager.delete'
  end

  end
@@ -49,22 +49,14 @@ module StackBuilder::Stack
  @sync = SYNC_NONE
  end

- if node_config.has_key?('targets')
-
- @logger.warn("Ignoring 'scale' attribute for '#{@name}' as that node has targets.") \
- if node_config.has_key?("scale")
-
- @scale = 0
+ current_scale = manager.get_scale
+ if current_scale==0
+ @scale = (node_config.has_key?("scale") ? node_config["scale"] : 1)
  else
- current_scale = manager.get_scale
- if current_scale==0
- @scale = (node_config.has_key?("scale") ? node_config["scale"] : 1)
- else
- @scale = current_scale
- end
-
- raise ArgumentError, "The scale for node \"#{@name}\" must be greater than 0." if @scale < 1
+ @scale = current_scale
  end
+
+ raise ArgumentError, "The scale for node \"#{@name}\" must be greater than 0." if @scale < 1
  @prev_scale = @scale

  @targets = [ ]
@@ -138,7 +130,7 @@ module StackBuilder::Stack
  # Scale Down

  delete_events = Set.new([ "stop", "uninstall" ])
- @scale.step(current_scale - 1).to_a.each do |i|
+ @scale.step(current_scale - 1) do |i|
  resource_sync = @resource_sync[i]
  resource_sync.wait
  resource_sync.wait
@@ -208,9 +200,10 @@ module StackBuilder::Stack
  end

  @prev_scale = current_scale
- @manager.set_scale(@scale)
  end

+ @manager.set_scale(@scale)
+
  threads
  end

@@ -219,25 +212,26 @@ module StackBuilder::Stack
  threads = [ ]

  scale = (@deleted ? @manager.get_scale : @scale)
- if scale > 0
+ if @targets.empty?

- if @sync == "first"
- @manager.process(scale, events, self.parse_attributes(@attributes, 0))
- scale -= 1
- end
+ if scale > 0

- if @sync == "all"
- scale.times do |i|
- @manager.process(i, events, self.parse_attributes(@attributes, i))
+ if @sync == "first"
+ @manager.process(scale, events, self.parse_attributes(@attributes, 0))
+ scale -= 1
  end
- else
- scale.times do |i|
- spawn_processing(i, events, threads)
+
+ if @sync == "all"
+ scale.times do |i|
+ @manager.process(i, events, self.parse_attributes(@attributes, i))
+ end
+ else
+ scale.times do |i|
+ spawn_processing(i, events, threads)
+ end
  end
  end
-
- elsif !@targets.empty?
-
+ else
  @targets.each do |t|
  t.scale.times do |i|
  spawn_processing(i, events, threads, t)
@@ -19,13 +19,14 @@ module StackBuilder::Stack
  StackBuilder::Stack::NodeProvider." unless provider.is_a?(NodeProvider)

  @provider = provider
- env_vars = provider.get_env_vars

+ env_vars = provider.get_env_vars
  stack = StackBuilder::Common.load_yaml(stack_file, env_vars)
- @logger.debug("Initializing stack definition:\n #{stack.to_yaml}")
+ merge_maps( stack, overrides.end_with?('.json') ?
+ JSON.load(File.new(overrides, 'r')) : JSON.load(overrides) ) \
+ unless overrides.nil?

- overrides = JSON.load(File.new(overrides, 'r')) unless overrides.nil? || !overrides.end_with?('.json')
- merge_maps(stack, overrides) unless overrides.nil?
+ @logger.debug("Initializing stack definition:\n #{stack.to_yaml}")

  if id.nil?
  @id = SecureRandom.uuid.gsub(/-/, '')
@@ -2,7 +2,7 @@

  module Knife
  module StackBuilder
- VERSION = "0.5.6"
+ VERSION = "0.5.7"
  MAJOR, MINOR, TINY = VERSION.split('.')
  end
  end
data/lib/stackbuilder.rb CHANGED
@@ -13,6 +13,7 @@ require 'tmpdir'
  require 'tempfile'
  require "stringio"
  require 'json'
+ require 'timeout'
  require 'openssl'
  require "net/ssh"
  require "net/ssh/multi"
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: knife-stackbuilder
  version: !ruby/object:Gem::Version
- version: 0.5.6
+ version: 0.5.7
  platform: ruby
  authors:
  - Mevan Samaratunga
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2014-12-12 00:00:00.000000000 Z
+ date: 2014-12-31 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
  name: chef
@@ -24,6 +24,20 @@ dependencies:
  - - "~>"
  - !ruby/object:Gem::Version
  version: '12'
+ - !ruby/object:Gem::Dependency
+ name: berkshelf
+ requirement: !ruby/object:Gem::Requirement
+ requirements:
+ - - "~>"
+ - !ruby/object:Gem::Version
+ version: 3.2.1
+ type: :runtime
+ prerelease: false
+ version_requirements: !ruby/object:Gem::Requirement
+ requirements:
+ - - "~>"
+ - !ruby/object:Gem::Version
+ version: 3.2.1
  - !ruby/object:Gem::Dependency
  name: knife-attribute
  requirement: !ruby/object:Gem::Requirement
@@ -53,7 +67,7 @@ dependencies:
  - !ruby/object:Gem::Version
  version: '0'
  - !ruby/object:Gem::Dependency
- name: knife-vagrant3
+ name: knife-vagrant2
  requirement: !ruby/object:Gem::Requirement
  requirements:
  - - ">="