ironfan 3.1.7 → 3.2.2

Files changed (63)
  1. data/CHANGELOG.md +11 -0
  2. data/Gemfile +15 -12
  3. data/Rakefile +1 -1
  4. data/VERSION +1 -1
  5. data/config/ubuntu10.04-ironfan.erb +10 -0
  6. data/config/ubuntu11.10-ironfan.erb +10 -0
  7. data/ironfan.gemspec +29 -54
  8. data/lib/chef/knife/bootstrap/centos6.2-ironfan.erb +10 -0
  9. data/lib/chef/knife/bootstrap/ubuntu10.04-ironfan.erb +10 -0
  10. data/lib/chef/knife/bootstrap/ubuntu11.10-ironfan.erb +10 -0
  11. data/lib/chef/knife/cluster_kick.rb +7 -2
  12. data/lib/chef/knife/cluster_launch.rb +3 -0
  13. data/lib/chef/knife/cluster_ssh.rb +3 -3
  14. data/lib/chef/knife/ironfan_knife_common.rb +21 -0
  15. data/lib/chef/knife/ironfan_script.rb +2 -0
  16. data/lib/ironfan/chef_layer.rb +9 -9
  17. data/lib/ironfan/cloud.rb +232 -360
  18. data/lib/ironfan/cluster.rb +3 -3
  19. data/lib/ironfan/compute.rb +26 -40
  20. data/lib/ironfan/deprecated.rb +45 -10
  21. data/lib/ironfan/discovery.rb +1 -1
  22. data/lib/ironfan/dsl_builder.rb +99 -0
  23. data/lib/ironfan/facet.rb +2 -3
  24. data/lib/ironfan/fog_layer.rb +14 -10
  25. data/lib/ironfan/private_key.rb +1 -1
  26. data/lib/ironfan/security_group.rb +46 -44
  27. data/lib/ironfan/server.rb +26 -52
  28. data/lib/ironfan/server_slice.rb +13 -19
  29. data/lib/ironfan/volume.rb +47 -59
  30. data/lib/ironfan.rb +5 -4
  31. metadata +116 -122
  32. data/lib/ironfan/dsl_object.rb +0 -124
  33. data/notes/Backup of ec2-pricing_and_capacity.numbers +0 -0
  34. data/notes/Home.md +0 -45
  35. data/notes/INSTALL-cloud_setup.md +0 -103
  36. data/notes/INSTALL.md +0 -134
  37. data/notes/Ironfan-Roadmap.md +0 -70
  38. data/notes/advanced-superpowers.md +0 -16
  39. data/notes/aws_servers.jpg +0 -0
  40. data/notes/aws_user_key.png +0 -0
  41. data/notes/cookbook-versioning.md +0 -11
  42. data/notes/core_concepts.md +0 -200
  43. data/notes/declaring_volumes.md +0 -3
  44. data/notes/design_notes-aspect_oriented_devops.md +0 -36
  45. data/notes/design_notes-ci_testing.md +0 -169
  46. data/notes/design_notes-cookbook_event_ordering.md +0 -249
  47. data/notes/design_notes-meta_discovery.md +0 -59
  48. data/notes/ec2-pricing_and_capacity.md +0 -69
  49. data/notes/ec2-pricing_and_capacity.numbers +0 -0
  50. data/notes/homebase-layout.txt +0 -102
  51. data/notes/knife-cluster-commands.md +0 -18
  52. data/notes/named-cloud-objects.md +0 -11
  53. data/notes/opscode_org_key.png +0 -0
  54. data/notes/opscode_user_key.png +0 -0
  55. data/notes/philosophy.md +0 -13
  56. data/notes/rake_tasks.md +0 -24
  57. data/notes/renamed-recipes.txt +0 -142
  58. data/notes/silverware.md +0 -85
  59. data/notes/style_guide.md +0 -300
  60. data/notes/tips_and_troubleshooting.md +0 -92
  61. data/notes/version-3_2.md +0 -273
  62. data/notes/walkthrough-hadoop.md +0 -168
  63. data/notes/walkthrough-web.md +0 -166
data/notes/tips_and_troubleshooting.md DELETED
@@ -1,92 +0,0 @@
## Tips and Notes

### Gems

    knife cluster ssh bonobo-worker-2 'sudo gem update --system'
    knife cluster ssh bonobo-worker-2 'sudo true ; for foo in /usr/lib/ruby/gems/1.9.2-p290/specifications/* ; do sudo sed -i.bak "s!000000000Z!!" $foo ; done'
    knife cluster ssh bonobo-worker-2 'sudo true ; for foo in /usr/lib/ruby/site_ruby/*/rubygems/deprecate.rb ; do sudo sed -i.bak "s!@skip ||= false!true!" $foo ; done'

### EC2 Notes: Instance attributes `disable_api_termination` and `delete_on_termination`

To set `delete_on_termination` to 'true' after the fact, run the following (modify the instance and volume to suit):

```
ec2-modify-instance-attribute -v i-0704be6c --block-device-mapping /dev/sda1=vol-XX8d2c80::true
```

If you set `disable_api_termination` to true, then in order to terminate the node run

```
ec2-modify-instance-attribute -v i-0704be6c --disable-api-termination false
```

To view whether an attached volume will be deleted when the machine is terminated:

```
# show volumes that will be deleted
ec2-describe-volumes --filter "attachment.delete-on-termination=true"
```

You can't (as far as I know) alter the delete-on-termination flag of a running volume. Crazy, huh?

### EC2: See your userdata

    curl http://169.254.169.254/latest/user-data

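If you'd rather poke at it from Ruby (plain standard library, nothing ironfan-specific), the same check looks like this:

```ruby
# Fetch this instance's user-data from the EC2 metadata service.
require 'net/http'

puts Net::HTTP.get(URI('http://169.254.169.254/latest/user-data'))
```
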
### EBS Volumes for a persistent HDFS

* Make one volume and format it for XFS:
  `$ sudo mkfs.xfs -f /dev/sdh1`
* Mount options "defaults,nouuid,noatime" give good results. The 'nouuid' part
  prevents errors when mounting multiple volumes made from the same snapshot.
* Poke a marker file onto the drive:

        datename=`date +%Y%m%d`
        sudo bash -c "(echo $datename ; df /data/ebs1 ) > /data/ebs1/xfs-created-at-$datename.txt"

If you want to grow the drive:

* take a snapshot
* make a new volume from it
* mount that volume, then run `sudo xfs_growfs` on its mount point. The volume *should* be mounted while you grow it, but stop anything that would be working the volume hard.

### Hadoop: On-the-fly backup of your namenode metadata

    bkupdir=/ebs2/hadoop-nn-backup/`date +"%Y%m%d"`

    for srcdir in /ebs*/hadoop/hdfs/ /home/hadoop/gibbon/hdfs/ ; do
      destdir=$bkupdir/$srcdir ; echo $destdir ;
      sudo mkdir -p $destdir ;
      sudo cp -a ${srcdir}. $destdir/ ;    # copy the metadata into the dated backup dir
    done

### NFS: Halp I am using an NFS-mounted /home and now I can't log in as ubuntu

Say you set up an NFS server 'core-homebase-0' (in the 'core' cluster) to host and serve out the `/home` directory, and a machine 'awesome-webserver-0' (in the 'awesome' cluster) that is an NFS client.

In each case, when the machine was born EC2 created a `/home/ubuntu/.ssh/authorized_keys` file listing only the single approved machine keypair -- 'core' for the core cluster, 'awesome' for the awesome cluster.

When chef-client runs, however, it mounts the NFS share at `/home`. This masks the actual `/home` directory -- nothing on the base directory tree shows up. Which means that after chef runs, the `/home/ubuntu/.ssh/authorized_keys` file on awesome-webserver-0 is the one for the *'core'* cluster, not the *'awesome'* cluster.

The solution is to use the cookbook ironfan provides -- it moves the 'ubuntu' user's home directory to an alternative path not masked by the NFS mount.

### NFS: Problems starting NFS server on Ubuntu Maverick

For problems starting the NFS server on Ubuntu Maverick systems, read, understand and then run `/tmp/fix_nfs_on_maverick_amis.sh` -- see [this thread for more](http://fossplanet.com/f10/[ec2ubuntu]-not-starting-nfs-kernel-daemon-no-support-current-kernel-90948/).

### Git deploys: My git deploy recipe has gone limp

Suppose you are using the `git` resource in a recipe to deploy an app (`george` for the sake of example). If `/var/chef/cache/revision_deploys/var/www/george` exists then *nothing* will get deployed, even if `/var/www/george/{release_sha}` is empty or screwy. If git deploy is acting up in any way, nuke that cache from orbit -- it's the only way to be sure.

    $ sudo rm -rf /var/www/george/{release_sha} /var/chef/cache/revision_deploys/var/www/george

### Runit services: 'fail: XXX: unable to change to service directory: file does not exist'

Your service is probably installed but removed from runit's purview; check the `/etc/service` symlink. All of the following should be true:

* directory `/etc/sv/foo` exists, containing the file `run` and the dirs `log` and `supervise`
* `/etc/init.d/foo` is symlinked to `/usr/bin/sv`
* `/etc/service/foo` is symlinked to `/etc/sv/foo`

data/notes/version-3_2.md DELETED
@@ -1,273 +0,0 @@

# v3.2.0 (future): Revamped undercarriage, spec coverage, standalone usage

This is a Snow Leopard-style version change: no new features to speak of, but a much more solid and predictable foundation.

* **significantly cleaner DSL mixin**: uses the new, awesome `Gorillib::Builder`, giving it much cleaner handling of fields and collections

* **attributes are late-resolved**: in previous versions, the way you 'resolved' a server was to collapse the entire attribute set of cluster/facet/server hard onto the server model -- a consistent source of bugs. Resolution is now done with the `Gorillib::Record::Overlay` mechanism, which means you can set an attribute on the cluster and read it from the facet; change it later and all lower layers see the update (see the sketch after this list).

* **standalone usable**: you can use ironfan-knife as a standalone library.

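As a rough illustration of what late resolution buys you -- this is a toy stand-in, not `Gorillib::Record::Overlay`'s actual API:

```ruby
# Each layer stores only what was set on it; reads fall through to the parent
# layer at lookup time, so later changes to the cluster are seen by its facets.
class Layer
  def initialize(parent = nil)
    @parent = parent
    @own    = {}
  end

  def set(key, value)
    @own[key] = value
    self
  end

  def get(key)
    @own.fetch(key) { @parent && @parent.get(key) }
  end
end

cluster = Layer.new.set(:flavor, 't1.micro')
facet   = Layer.new(cluster)

facet.get(:flavor)               # => "t1.micro" (read through to the cluster)
cluster.set(:flavor, 'm1.small') # change it later...
facet.get(:flavor)               # => "m1.small" (lower layers see the update)
```

Contrast with the old collapse-everything-onto-the-server approach, where the later change to the cluster would never have been seen by the facet.
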
# v3.3.x (future): Coherent universe of Servers, Components, Aspects

* **spec coverage**:

* **coherent data model**:

        ComputeLayer -- common attributes of Provider, Cluster, Facet, Server
          - overlay_stack of Cloud attributes

        Universe     -- across organizations
        Organization -- one or many providers
        Provider --
          - has_many :clusters
        Cluster --
          - has_many :providers
          - overlays :main_provider
        Facet --
          - has_one :cluster
          - overlays :cluster
        Server
          - has_one :facet
          - overlays :cluster
          - has_one chef_node
          - has_one machine

        System      Role        Cookbook
        Component   Cookbook+Recipes

* **improved discovery**:

* **config isolation**:

### Nitpicks

* make bootstrap_distro and image_name follow from os_version

* minidash just publishes announcements
* silverware is always included; it subsumes volumes

* if you add a `data_dir_for :hadoop` to

* volumes should name their `mount_point` after themselves by default (sketch below)

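To make that last nitpick concrete, a toy illustration of the proposed default (not current behavior):

```ruby
# Proposed default: derive the mount point from the volume's own name.
def default_mount_point(volume_name)
  "/#{volume_name}"
end

default_mount_point(:hadoop_data)   # => "/hadoop_data"
```
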
### Components

* components replace roles (they are auto-generated by the component, and tied strictly to it)

### Clusters

If clusters are more repeatable, they won't be so bothersomely multi-provider:

    Ironfan.cluster :gibbon do
      cloud(:ec2) do
        backing   'ebs'
        permanent false
      end
      stack :systemwide
      stack :devstack
      stack :monitoring
      stack :log_handling

      component :hadoop_devstack
      component :hadoop_dedicated

      discovers :zookeeper, :realm => :zk
      discovers :hbase,     :realm => :hbase

      facet :master do
        component :hadoop_namenode
        component :hadoop_secondarynn
        component :hadoop_jobtracker
      end
      facet :worker do
        component :hadoop_datanode
        component :hadoop_tasktracker
      end

      volume :hadoop_data do
        data_dir_for :hadoop_datanode, :hadoop_namenode, :hadoop_secondarynn
        device       '/dev/sdj1'
        size         100
        keep         true
      end
    end

Here are some ideas about how to get there:

    # silverware is always included; it subsumes volumes

    organization :infochimps do
      cloud(:ec2) do
        availability_zones ['us-east-1d']
        backing            :ebs
        image_name         'ironfan-natty'
        bootstrap_distro   'ironfan-natty'
        chef_client_script 'client.rb'
        permanent          true
      end

      volume(:default) do
        keep             true
        snapshot_name    :blank_xfs
        resizable        true
        create_at_launch true
      end

      stack :systemwide do
        system(:chef_client) do
          run_state :on_restart
        end
        component :set_hostname
        component :minidash
        component :org_base
        component :org_users
        component :org_final
      end

      stack :devstack do
        component :ssh
        component :nfs_client
        component :package_set
      end

      stack :monitoring do
        component :zabbix_agent
      end

      stack :log_handling do
        component :log_handling
      end
    end

    stack :hadoop do
    end

    stack :hadoop_devstack do
      component :pig
      component :jruby
      component :rstats
    end

    stack :hadoop_dedicated do
      component :tuning
    end

    system :hadoop do
      stack :hadoop_devstack
      stack :zookeeper_client
      stack :hbase_client
    end

    Ironfan.cluster :gibbon do
      cloud(:ec2) do
        backing   'ebs'
        permanent false
      end

      system :systemwide do
        exclude_stack :monitoring
      end

      # how are its components configured? distributed among machines?
      system :hadoop do

        # all servers will
        # * have the `hadoop` role
        # * have run_state => false for components with a daemon aspect by default

        facet :master do
          # component :hadoop_namenode means
          # * this facet has the `hadoop_namenode` role
          # * it has the component's security_groups
          # * it sets node[:hadoop][:namenode][:run_state] = true
          # * it will mount the volumes that adhere to this component
          component :hadoop_namenode
        end

        # something gains e.g. a zookeeper client if it discovers a zookeeper in another realm
        # zookeeper must explicitly admit it discovers zookeeper, but can do that in the component

        # what volumes should it use on those machines?
        # create the volumes, pair them to components
        # if a component is on a server, it adds its volumes.
        # you can also add them explicitly.

        # volume tags are applied automagically from their adherence to components

        volume :hadoop_data do   # will be assigned to servers with the components it lists
          data_dir_for :hadoop_datanode, :hadoop_namenode, :hadoop_secondarynn
        end
      end
    end

### Providers

I want to be able to:

* on a compute layer, modify its behavior depending on provider:
  - example:

        facet(:bob) do
          cloud do
            security_group :bob
            authorize :from => :bobs_friends, :to => :bob
          end
          cloud(:ec2,       :flavor => 'm1.small')
          cloud(:rackspace, :flavor => '2GB')
          cloud(:vagrant,   :ram_mb => 256)
        end

  - Any world that understands security groups will endeavor to make a `bob` security group, and authorize the `bobs_friends` group to use it.
  - On EC2 and Rackspace, the `flavor` attribute is set explicitly.
  - On Vagrant (which got no `flavor`), we instead specify how much RAM to supply.
  - On any other provider, the flavor and machine RAM will follow defaults.

* see all machines and clusters within an organization

### Organizations

* see the entire universe; this might get hairy, but not ridiculous
  - each org describes its providers; only those are used
  - you don't have to do much to add a provider, just say `provider(:ec2)`
  - you can configure the provider like this:

        organization(:infochimps_test, :doc => 'Infochimps test cloud') do
          provider(:vagrant)
          provider(:ec2) do
            access_key        '...'
            secret_access_key '...'
          end
          provider(:hp_cloud) do
            access_key        '...'
            secret_access_key '...'
          end
        end

        organization(:demo, :doc => 'Live client demo cloud') do
          provider(:vagrant)
          provider(:ec2)       do #... end
          provider(:hp_cloud)  do #... end
          provider(:rackspace) do #... end
        end

  - clusters can be declared directly or imported from other organizations:

        organization :infochimps_test do
          # developers' sandboxes
          cluster :dev_sandboxes
          # all the example clusters, for development
          organization(:examples).clusters.each do |cl|
            add_cluster cl
          end
        end

  - if just starting, should see clusters;
  - per-org cluster dirs
data/notes/walkthrough-hadoop.md DELETED
@@ -1,168 +0,0 @@
FIXME: Repurpose general structure to demonstrate a Hadoop cluster.

## Walkthrough: Hadoop Cluster

Here's a very simple cluster:

```ruby
Ironfan.cluster 'hadoop_demo' do
  cloud(:ec2) do
    flavor 't1.micro'
  end

  role :base_role
  role :chef_client
  role :ssh

  # The database server
  facet :dbnode do
    instances 1
    role :mysql_server

    cloud do
      flavor  'm1.large'
      backing 'ebs'
    end
  end

  # A throwaway facet for development.
  facet :webnode do
    instances 2
    role :nginx_server
    role :awesome_webapp
  end
end
```

This code defines a cluster named hadoop_demo. A cluster is a group of servers united around a common purpose, in this case to serve a scalable web application.

The hadoop_demo cluster has two 'facets' -- dbnode and webnode. A facet is a subgroup of interchangeable servers that provide a logical set of systems: in this case, the systems that store the website's data and those that render it.

The dbnode facet has one server, which will be named `hadoop_demo-dbnode-0`; the webnode facet has two servers, `hadoop_demo-webnode-0` and `hadoop_demo-webnode-1`.

Each server inherits the appropriate behaviors from its facet and cluster. All the servers in this cluster have the `base_role`, `chef_client` and `ssh` roles. The dbnode machines additionally house a MySQL server, while the webnodes have an nginx reverse proxy for the custom `awesome_webapp`.

As you can see, the dbnode facet asks for a different flavor of machine (`m1.large`) than the cluster default (`t1.micro`). Settings in the facet override those in the cluster, and settings in the server override those of its facet. You economically describe only what's significant about each machine.

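To make that resolution concrete, here is how the flavors above work out per server, written as plain Ruby data purely for illustration (this is not an ironfan API):

```ruby
resolved_flavors = {
  'hadoop_demo-dbnode-0'  => 'm1.large',  # the facet's cloud block overrides the cluster default
  'hadoop_demo-webnode-0' => 't1.micro',  # no facet override, so the cluster's flavor applies
  'hadoop_demo-webnode-1' => 't1.micro',
}
```
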
### Cluster-level tools

```
$ knife cluster show hadoop_demo

+-----------------------+-------+------------+-------------+--------------+---------------+-----------------+----------+--------------+------------+-------------+
| Name                  | Chef? | InstanceID | State       | Public IP    | Private IP    | Created At      | Flavor   | Image        | AZ         | SSH Key     |
+-----------------------+-------+------------+-------------+--------------+---------------+-----------------+----------+--------------+------------+-------------+
| hadoop_demo-dbnode-0  | yes   | i-43c60e20 | running     | 107.22.6.104 | 10.88.112.201 | 20111029-204156 | t1.micro | ami-cef405a7 | us-east-1a | hadoop_demo |
| hadoop_demo-webnode-0 | yes   | i-1233aef1 | running     | 102.99.3.123 | 10.88.112.123 | 20111029-204156 | t1.micro | ami-cef405a7 | us-east-1a | hadoop_demo |
| hadoop_demo-webnode-1 | yes   | i-0986423b | not running |              |               |                 |          |              |            |             |
+-----------------------+-------+------------+-------------+--------------+---------------+-----------------+----------+--------------+------------+-------------+
```

The commands available are:

* list -- lists known clusters
* show -- shows the named servers
* launch -- launches servers
* bootstrap
* sync
* ssh
* start/stop
* kill
* kick -- triggers a chef-client run on each named machine, tailing the logs until the run completes

### Advanced clusters remain simple

Let's say that app is truly awesome, and the features and demand increase. This cluster adds an [ElasticSearch server](http://elasticsearch.org) for searching, an HAProxy load balancer, and spreads the webnodes across two availability zones.

```ruby
Ironfan.cluster 'hadoop_demo' do
  cloud(:ec2) do
    image_name         "maverick"
    flavor             "t1.micro"
    availability_zones ['us-east-1a']
  end

  # The database server
  facet :dbnode do
    instances 1
    role :mysql_server
    cloud do
      flavor  'm1.large'
      backing 'ebs'
    end

    volume(:data) do
      size        20
      keep        true
      device      '/dev/sdi'
      mount_point '/data'
      snapshot_id 'snap-a10234f'
      attachable  :ebs
    end
  end

  facet :webnode do
    instances 6
    cloud.availability_zones ['us-east-1a', 'us-east-1b']

    role :nginx_server
    role :awesome_webapp
    role :elasticsearch_client

    volume(:server_logs) do
      size        5
      keep        true
      device      '/dev/sdi'
      mount_point '/server_logs'
      snapshot_id 'snap-d9c1edb1'
    end
  end

  facet :esnode do
    instances 1
    role "elasticsearch_data_esnode"
    role "elasticsearch_http_esnode"
    cloud.flavor "m1.large"
  end

  facet :loadbalancer do
    instances 1
    role "haproxy"
    cloud.flavor "m1.xlarge"
    elastic_ip "128.69.69.23"
  end

  cluster_role.override_attributes({
    :elasticsearch => {
      :version => '0.17.8',
    },
  })
end
```

The facets are described and scale independently. If you'd like to add more webnodes, just increase the instance count. If a machine misbehaves, just terminate it. Running `knife cluster launch hadoop_demo webnode` will note which machines are missing, and launch and configure them appropriately.

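For example, scaling out is a one-line edit to the webnode facet (an excerpt; everything else in the listing above stays the same):

```ruby
facet :webnode do
  instances 8   # was 6 -- `knife cluster launch hadoop_demo webnode` tops up the missing machines
  # ...rest of the facet unchanged...
end
```
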
Ironfan speaks naturally to both Chef and your cloud provider. The esnode's `cluster_role.override_attributes` statement will be synchronized to the chef server, pinning the elasticsearch version across the server and clients. Your chef roles should focus on specific subsystems; the cluster file lets you see the architecture as a whole.

With these simple settings, if you have already [set up chef's knife to launch cloud servers](http://wiki.opscode.com/display/chef/Launch+Cloud+Instances+with+Knife), typing `knife cluster launch hadoop_demo --bootstrap` will (using Amazon EC2 as an example):

* Synchronizes to the chef server:
  - creates chef roles on the server for the cluster and each facet
  - applies role directives (e.g. the homebase's `default_attributes` declaration)
  - creates a node for each machine
  - applies the runlist to each node
* Sets up security isolation:
  - uses a keypair (login ssh key) isolated to that cluster
  - recognizes the `ssh` role, and adds a security group `ssh` that by default opens port 22
  - recognizes the `nfs_server` role, and adds security groups `nfs_server` and `nfs_client`
  - authorizes the `nfs_server` to accept connections from all `nfs_client`s. Machines in other clusters that you mark as `nfs_client`s can connect to the NFS server, but are not automatically granted any other access to the machines in this cluster. Ironfan's opinionated behavior is about more than saving you effort -- tying this behavior to the chef role means you can't screw it up.
* Launches the machines in parallel:
  - using the image name and the availability zone, it determines the appropriate region, image ID, and other implied behavior
  - passes a JSON-encoded user_data hash specifying the machine's chef `node_name` and client key (see the sketch after this list). An appropriately-configured machine image will need no further bootstrapping -- it will connect to the chef server with the appropriate identity and proceed completely unattended.
* Synchronizes to the cloud provider:
  - applies EC2 tags to the machine, making your console intelligible: ![AWS Console screenshot](https://github.com/infochimps-labs/ironfan/raw/version_3/notes/aws_console_screenshot.jpg)
  - connects external (EBS) volumes, if any, to the correct mount point -- it uses (and applies) tags to the volumes, so they know which machine to adhere to. If you've manually added volumes, just make sure they're defined correctly in your cluster file and run `knife cluster sync {cluster_name}`; it will paint them with the correct tags.
  - associates an elastic IP, if any, to the machine
* Bootstraps the machine using knife bootstrap
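
A rough sketch of that user_data handoff (the hash keys and paths here are illustrative, not taken from ironfan's source):

```ruby
require 'json'

# A pre-baked image reads this blob from the metadata service at boot and uses it
# to register with the chef server as the right node, with no interactive bootstrap.
user_data = {
  'chef_server' => 'https://chef.example.com',                      # assumed server URL
  'node_name'   => 'hadoop_demo-webnode-0',
  'client_key'  => File.read('/path/to/hadoop_demo-webnode-0.pem'), # assumed key location
}

puts JSON.generate(user_data)   # handed to EC2 as the instance's user_data at launch
```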