ironfan 5.0.11 → 6.0.0

Files changed (121)
  1. data/.gitignore +4 -0
  2. data/.gitmodules +3 -0
  3. data/Gemfile +8 -26
  4. data/Gemfile.lock +38 -41
  5. data/NOTES-REALM.md +172 -0
  6. data/Rakefile +19 -77
  7. data/config/ubuntu12.04-ironfan.erb +7 -0
  8. data/ironfan.gemspec +28 -225
  9. data/lib/chef/cluster_knife.rb +26 -0
  10. data/lib/chef/knife/bootstrap/ubuntu12.04-ironfan.erb +7 -0
  11. data/lib/chef/knife/cluster_bootstrap.rb +1 -3
  12. data/lib/chef/knife/cluster_diff.rb +2 -8
  13. data/lib/chef/knife/cluster_kick.rb +1 -3
  14. data/lib/chef/knife/cluster_kill.rb +1 -2
  15. data/lib/chef/knife/cluster_launch.rb +17 -34
  16. data/lib/chef/knife/cluster_list.rb +6 -5
  17. data/lib/chef/knife/cluster_proxy.rb +1 -3
  18. data/lib/chef/knife/cluster_pry.rb +1 -2
  19. data/lib/chef/knife/cluster_show.rb +6 -7
  20. data/lib/chef/knife/cluster_ssh.rb +10 -8
  21. data/lib/chef/knife/cluster_start.rb +1 -2
  22. data/lib/chef/knife/cluster_stop.rb +1 -2
  23. data/lib/chef/knife/cluster_sync.rb +2 -3
  24. data/lib/chef/knife/ironfan_knife_common.rb +58 -18
  25. data/lib/chef/knife/ironfan_script.rb +0 -3
  26. data/lib/ironfan/broker/computer.rb +14 -11
  27. data/lib/ironfan/broker.rb +17 -12
  28. data/lib/ironfan/cookbook_requirements.rb +155 -0
  29. data/lib/ironfan/dsl/cloud.rb +2 -0
  30. data/lib/ironfan/dsl/cluster.rb +25 -15
  31. data/lib/ironfan/dsl/component.rb +12 -15
  32. data/lib/ironfan/dsl/compute.rb +10 -8
  33. data/lib/ironfan/dsl/ec2.rb +2 -26
  34. data/lib/ironfan/dsl/facet.rb +16 -14
  35. data/lib/ironfan/dsl/openstack.rb +147 -0
  36. data/lib/ironfan/dsl/realm.rb +23 -16
  37. data/lib/ironfan/dsl/security_group.rb +29 -0
  38. data/lib/ironfan/dsl/server.rb +14 -5
  39. data/lib/ironfan/dsl/static.rb +63 -0
  40. data/lib/ironfan/dsl/vsphere.rb +1 -0
  41. data/lib/ironfan/dsl.rb +1 -134
  42. data/lib/ironfan/headers.rb +19 -0
  43. data/lib/ironfan/provider/chef/node.rb +3 -2
  44. data/lib/ironfan/provider/ec2/machine.rb +10 -14
  45. data/lib/ironfan/provider/ec2/security_group.rb +58 -43
  46. data/lib/ironfan/provider/openstack/elastic_ip.rb +96 -0
  47. data/lib/ironfan/provider/openstack/keypair.rb +78 -0
  48. data/lib/ironfan/provider/openstack/machine.rb +371 -0
  49. data/lib/ironfan/provider/openstack/security_group.rb +224 -0
  50. data/lib/ironfan/provider/openstack.rb +69 -0
  51. data/lib/ironfan/provider/static/machine.rb +192 -0
  52. data/lib/ironfan/provider/static.rb +23 -0
  53. data/lib/ironfan/provider.rb +58 -1
  54. data/lib/ironfan/requirements.rb +17 -1
  55. data/lib/ironfan/version.rb +3 -0
  56. data/lib/ironfan.rb +107 -172
  57. data/spec/chef/cluster_bootstrap_spec.rb +2 -7
  58. data/spec/chef/cluster_launch_spec.rb +1 -2
  59. data/spec/fixtures/realms/samurai.rb +26 -0
  60. data/spec/integration/minimal-chef-repo/clusters/.gitkeep +0 -0
  61. data/spec/integration/minimal-chef-repo/config/.gitkeep +0 -0
  62. data/spec/integration/minimal-chef-repo/knife/credentials/.gitignore +1 -0
  63. data/spec/integration/minimal-chef-repo/knife/credentials/certificates/.gitkeep +0 -0
  64. data/spec/integration/minimal-chef-repo/knife/credentials/client_keys/.gitkeep +0 -0
  65. data/spec/integration/minimal-chef-repo/knife/credentials/data_bag_keys/.gitkeep +0 -0
  66. data/spec/integration/minimal-chef-repo/knife/credentials/ec2_certs/.gitkeep +0 -0
  67. data/spec/integration/minimal-chef-repo/knife/credentials/ec2_keys/.gitkeep +0 -0
  68. data/spec/integration/minimal-chef-repo/knife/credentials/ironfantest-validator.pem +27 -0
  69. data/spec/integration/minimal-chef-repo/knife/credentials/ironfantester.pem +27 -0
  70. data/spec/integration/minimal-chef-repo/tasks/.gitkeep +0 -0
  71. data/spec/ironfan/cluster_spec.rb +1 -2
  72. data/spec/ironfan/diff_spec.rb +0 -2
  73. data/spec/ironfan/dsl_spec.rb +6 -3
  74. data/spec/ironfan/ec2/cloud_provider_spec.rb +17 -18
  75. data/spec/ironfan/ec2/elb_spec.rb +44 -41
  76. data/spec/ironfan/ec2/security_group_spec.rb +45 -47
  77. data/spec/ironfan/manifest_spec.rb +0 -1
  78. data/spec/ironfan/plugin_spec.rb +55 -40
  79. data/spec/ironfan/realm_spec.rb +42 -30
  80. data/spec/spec_helper.rb +17 -31
  81. data/spec/{spec_helper → support}/dummy_chef.rb +0 -0
  82. data/spec/{spec_helper → support}/dummy_diff_drawer.rb +0 -0
  83. metadata +78 -155
  84. data/.rspec +0 -2
  85. data/.yardopts +0 -19
  86. data/VERSION +0 -2
  87. data/chefignore +0 -41
  88. data/notes/Future-development-proposals.md +0 -266
  89. data/notes/Home.md +0 -55
  90. data/notes/INSTALL-cloud_setup.md +0 -103
  91. data/notes/INSTALL.md +0 -134
  92. data/notes/Ironfan-Roadmap.md +0 -70
  93. data/notes/Upgrading-to-v4.md +0 -66
  94. data/notes/advanced-superpowers.md +0 -16
  95. data/notes/aws_servers.jpg +0 -0
  96. data/notes/aws_user_key.png +0 -0
  97. data/notes/cookbook-versioning.md +0 -11
  98. data/notes/core_concepts.md +0 -200
  99. data/notes/declaring_volumes.md +0 -3
  100. data/notes/design_notes-aspect_oriented_devops.md +0 -36
  101. data/notes/design_notes-ci_testing.md +0 -169
  102. data/notes/design_notes-cookbook_event_ordering.md +0 -249
  103. data/notes/design_notes-meta_discovery.md +0 -59
  104. data/notes/ec2-pricing_and_capacity.md +0 -75
  105. data/notes/ec2-pricing_and_capacity.numbers +0 -0
  106. data/notes/homebase-layout.txt +0 -102
  107. data/notes/knife-cluster-commands.md +0 -21
  108. data/notes/named-cloud-objects.md +0 -11
  109. data/notes/opscode_org_key.png +0 -0
  110. data/notes/opscode_user_key.png +0 -0
  111. data/notes/philosophy.md +0 -13
  112. data/notes/rake_tasks.md +0 -24
  113. data/notes/renamed-recipes.txt +0 -142
  114. data/notes/silverware.md +0 -85
  115. data/notes/style_guide.md +0 -300
  116. data/notes/tips_and_troubleshooting.md +0 -92
  117. data/notes/walkthrough-hadoop.md +0 -168
  118. data/notes/walkthrough-web.md +0 -166
  119. data/spec/fixtures/gunbai.rb +0 -24
  120. data/spec/test_config.rb +0 -20
  121. data/tasks/chef_config.rake +0 -38
data/notes/Future-development-proposals.md DELETED
@@ -1,266 +0,0 @@
- ## Nathan
- * **Clusters from JSON** - this is theoretically quite easy, given the DSL's gorillib underpinnings.
-
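A minimal sketch of how that might look, assuming the cluster DSL objects hydrate from plain hashes the way gorillib models usually do (`Ironfan::Dsl::Cluster.receive` is an assumption here, not a documented entry point):

```ruby
# Hedged sketch only: load a cluster definition from JSON by handing the
# parsed hash to the gorillib model layer underneath the DSL.
require 'json'
require 'ironfan'

attrs   = JSON.parse(File.read('clusters/gibbon.json'), symbolize_names: true)
cluster = Ironfan::Dsl::Cluster.receive(attrs)   # gorillib models accept attribute hashes
```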
- ## Flip
- * **standalone usable**: can use ironfan-knife as a standalone library.
-
- * **spec coverage**:
-
- * **coherent data model**:
-
- ComputeLayer -- common attributes of Provider, Cluster, Facet, Server
- - overlay_stack of Cloud attributes
-
- Universe -- across organizations
- Organization -- one or many providers
- Provider --
- - has_many :clusters
- Cluster --
- - has_many :providers
- - overlays :main_provider
- Facet --
- - has_one :cluster
- - overlays :cluster
- Server
- - has_one :facet
- - overlays :cluster
- - has_one chef_node
- - has_one machine
-
-
- System Role Cookbook
- Component Cookbook+Recipes
-
-
-
- * **improved discovery**:
-
- * **config isolation**:
-
-
- ### Nitpicks
-
-
- * make bootstrap_distro and image_name follow from os_version
-
- * minidash just publishes announcements
- * silverware is always included; it subsumes volumes
-
- * if you add a `data_dir_for :hadoop` to
-
- * volumes should name their `mount_point` after themselves by default
-
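A small illustration of that last proposal (the suggested default, not current behavior):

```ruby
# Proposal sketch: with no explicit mount_point, the volume would default to
# mounting at '/hadoop_data', i.e. "/#{volume_name}".
volume :hadoop_data do
  device '/dev/sdj1'
  size   100
  keep   true
  # mount_point '/hadoop_data'   # <- the implied default under this proposal
end
```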
- ### Components
-
- * components replace roles (they are auto-generated by the component, and tie strictly to it)
- *
-
- ### Clusters
-
- If clusters are more repeatable, being multi-provider won't be so bothersome:
-
- Ironfan.cluster :gibbon do
-   cloud(:ec2) do
-     backing   'ebs'
-     permanent false
-   end
-   stack :systemwide
-   stack :devstack
-   stack :monitoring
-   stack :log_handling
-
-   component :hadoop_devstack
-   component :hadoop_dedicated
-
-   discovers :zookeeper, :realm => :zk
-   discovers :hbase,     :realm => :hbase
-
-   facet :master do
-     component :hadoop_namenode
-     component :hadoop_secondarynn
-     component :hadoop_jobtracker
-   end
-   facet :worker do
-     component :hadoop_datanode
-     component :hadoop_tasktracker
-   end
-
-   volume :hadoop_data do
-     data_dir_for :hadoop_datanode, :hadoop_namenode, :hadoop_secondarynn
-     device '/dev/sdj1'
-     size   100
-     keep   true
-   end
- end
-
-
- Here are some ideas about how to get there:
-
- # silverware is always included; it subsumes volumes
-
- organization :infochimps do
-   cloud(:ec2) do
-     availability_zones ['us-east-1d']
-     backing            :ebs
-     image_name         'ironfan-natty'
-     bootstrap_distro   'ironfan-natty'
-     chef_client_script 'client.rb'
-     permanent          true
-   end
-
-   volume(:default) do
-     keep             true
-     snapshot_name    :blank_xfs
-     resizable        true
-     create_at_launch true
-   end
-
-   stack :systemwide do
-     system(:chef_client) do
-       run_state :on_restart
-     end
-     component :set_hostname
-     component :minidash
-     component :org_base
-     component :org_users
-     component :org_final
-   end
-
-   stack :devstack do
-     component :ssh
-     component :nfs_client
-     component :package_set
-   end
-
-   stack :monitoring do
-     component :zabbix_agent
-   end
-
-   stack :log_handling do
-     component :log_handling
-   end
- end
-
- stack :hadoop do
- end
-
- stack :hadoop_devstack do
-   component :pig
-   component :jruby
-   component :rstats
- end
-
- stack :hadoop_dedicated do
-   component :tuning
- end
-
- system :hadoop do
-   stack :hadoop_devstack
-   stack :zookeeper_client
-   stack :hbase_client
- end
-
- Ironfan.cluster :gibbon do
-   cloud(:ec2) do
-     backing   'ebs'
-     permanent false
-   end
-
-   system :systemwide do
-     exclude_stack :monitoring
-   end
-
-   # how are its components configured? distributed among machines?
-   system :hadoop do
-
-     # all servers will
-     # * have the `hadoop` role
-     # * have run_state => false for components with a daemon aspect by default
-
-     facet :master do
-       # component :hadoop_namenode means
-       # * this facet has the `hadoop_namenode` role
-       # * it has the component's security_groups
-       # * it sets node[:hadoop][:namenode][:run_state] = true
-       # * it will mount the volumes that adhere to this component
-       component :hadoop_namenode
-     end
-
-     # something gains e.g. zookeeper client if it discovers a zookeeper in another realm
-     # zookeeper must explicitly admit it discovers zookeeper, but can do that in the component
-
-     # what volumes should it use on those machines?
-     # create the volumes, pair it to components
-     # if a component is on a server, it adds its volumes.
-     # you can also add them explicitly.
-
-     # volume tags are applied automagically from their adherence to components
-
-     volume :hadoop_data do # will be assigned to servers with components it lists
-       data_dir_for :hadoop_datanode, :hadoop_namenode, :hadoop_secondarynn
-     end
-
- ### Providers
-
- I want to be able to:
-
- * on a compute layer, modify its behavior depending on provider:
-   - example:
-
-     facet(:bob) do
-       cloud do
-         security_group :bob
-         authorize :from => :bobs_friends, :to => :bob
-       end
-       cloud(:ec2,       :flavor => 'm1.small')
-       cloud(:rackspace, :flavor => '2GB')
-       cloud(:vagrant,   :ram_mb => 256)
-     end
-
-   - Any world that understands security groups will endeavor to make a `bob` security group, and authorize the `bobs_friends` group to use it.
-   - On EC2 and Rackspace, the `flavor` attribute is set explicitly.
-   - On Vagrant (which was given no `flavor`), we instead specify how much RAM to supply.
-   - On any other provider, the flavor and machine RAM will follow defaults.
-
- * see all machines and clusters within an organization
-
-
- ### Organizations
-
- * see the entire universe; this might get hairy, but not ridiculous
-   - each org describes its providers; only those are used
-   - you don't have to do much to add a provider, just say `provider(:ec2)`
-   - you can configure the provider like this:
-
-     organization(:infochimps_test, :doc => 'Infochimps test cloud') do
-       provider(:vagrant)
-       provider(:ec2) do
-         access_key        '...'
-         secret_access_key '...'
-       end
-       provider(:hp_cloud) do
-         access_key        '...'
-         secret_access_key '...'
-       end
-     end
-
-     organization(:demo, :doc => 'Live client demo cloud') do
-       provider(:vagrant)
-       provider(:ec2) do #... end
-       provider(:hp_cloud) do #... end
-       provider(:rackspace) do #... end
-     end
-
-   - clusters can be declared directly or imported from other organizations:
-
-     organization :infochimps_test do
-       # developers' sandboxes
-       cluster :dev_sandboxes
-       # all the example clusters, for development
-       organization(:examples).clusters.each do |cl|
-         add_cluster cl
-       end
-     end
-
-   - if just starting, should see clusters;
-   - per-org cluster dirs
data/notes/Home.md DELETED
@@ -1,55 +0,0 @@
- > ## **Ironfan: A Community Discussion Webinar**
- **<p>Thursday, January 31 @ 10a P, 12p C, 1p E</p>**
- Join Nathaniel Eliot, @temujin9, DevOps Engineer and lead on Ironfan, in this community discussion. Ironfan is a lightweight cluster orchestration toolset, built on top of Chef, which empowers spinning up Hadoop clusters in under 20 minutes. Nathan has been responsible for Ironfan’s core plugin code, cookbooks, and other components that stabilize both Infochimps’ open source offerings and internal architectures.
- [Register Now](https://www4.gotomeeting.com/register/188375087)
-
- ## Overview
-
- Ironfan, the foundation of The Infochimps Platform, is an expressive toolset for constructing scalable, resilient architectures. It works in the cloud, in the data center, and on your laptop, and it makes your system diagram visible and inevitable. Inevitable systems coordinate automatically to interconnect, removing the hassle of manual configuration of connection points (and the associated danger of human error). For more information about Ironfan and the Infochimps Platform, visit [infochimps.com](https://www.infochimps.com).
-
- <a name="getting-started"></a>
- ## Getting Started
-
- * [Installation Instructions](https://github.com/infochimps-labs/ironfan/wiki/INSTALL)
- * [Web Walkthrough](https://github.com/infochimps-labs/ironfan/wiki/walkthrough-web)
- * [Ironfan Screencast](http://bit.ly/ironfan-hadoop-in-20-minutes) -- build a Hadoop cluster from scratch in 20 minutes.
-
- <a name="toolset"></a>
- ### Tools
-
- Ironfan consists of the following toolset:
-
- * [ironfan-homebase](https://github.com/infochimps-labs/ironfan-homebase): centralizes the cookbooks, roles and clusters. A solid foundation for any Chef user.
- * [ironfan gem](https://github.com/infochimps-labs/ironfan):
-   - core models to describe your system diagram with a clean, expressive domain-specific language
-   - knife plugins to orchestrate clusters of machines using simple commands like `knife cluster launch`
-   - logic to coordinate truth among the Chef server and cloud providers.
- * [ironfan-pantry](https://github.com/infochimps-labs/ironfan-pantry): our collection of industrial-strength, cloud-ready recipes for Hadoop, HBase, Cassandra, Elasticsearch, Zabbix and more.
- * [silverware cookbook](https://github.com/infochimps-labs/ironfan-homebase/tree/master/cookbooks/silverware): coordinate discovery of services ("list all the machines for `awesome_webapp`, that I might load balance them") and aspects ("list all components that write logs, that I might logrotate them, or that I might monitor the free space on their volumes").
- * [Infochimps Platform](http://www.infochimps.com) -- our scalable enterprise big data platform. Ironfan Enterprise adds dynamic orchestration and zero-configuration logging and monitoring.
-
- <a name="ironfan-way"></a>
- ### Ironfan Concepts
-
- * [Core Concepts](https://github.com/infochimps-labs/ironfan/wiki/core_concepts) -- Components, Announcements, Amenities and more.
- * [Philosophy](https://github.com/infochimps-labs/ironfan/wiki/philosophy) -- best practices and lessons learned behind the Ironfan Way
- * [Style Guide](https://github.com/infochimps-labs/ironfan/wiki/style_guide) -- common attribute names, how and when to include other cookbooks, and more
- * [Homebase Layout](https://github.com/infochimps-labs/ironfan/wiki/homebase-layout) -- how this homebase is organized, and why
-
- <a name="documentation"></a>
- ### Documentation
-
- * [Index of wiki pages](https://github.com/infochimps-labs/ironfan/wiki/_pages)
- * [ironfan wiki](https://github.com/infochimps-labs/ironfan/wiki): high-level documentation and install instructions
- * [ironfan issues](https://github.com/infochimps-labs/ironfan/issues): bugs, questions and feature requests for *any* part of the Ironfan toolset.
- * [ironfan gem docs](http://rdoc.info/gems/ironfan): rdoc docs for Ironfan
-
-
- <a name="documentation"></a>
- ### Documentation
- * [EC2 Instance Pricing and Capacity Reference](https://github.com/infochimps-labs/ironfan/wiki/ec2-pricing_and_capacity)
- * [EC2 Pricing and Capacity Spreadsheet](https://github.com/infochimps-labs/ironfan/wiki/ec2-pricing_and_capacity.numbers) -- source data, and calculations for Hadoop tunables
-
- __________________________________________________________________________
- __________________________________________________________________________
- __________________________________________________________________________
- <br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/>
data/notes/INSTALL-cloud_setup.md DELETED
@@ -1,103 +0,0 @@
- ## Credentials
-
- * make a credentials repo
-   - copy the knife/example-credentials directory
-   - best not to live on GitHub: use a private server and run
-
- ```
- repo=ORGANIZATION-credentials ; repodir=/gitrepos/$repo.git ; mkdir -p $repodir ; ( GIT_DIR=$repodir git init --shared=group --bare && cd $repodir && git --bare update-server-info && chmod a+x hooks/post-update )
- ```
-
-   - git submodule it into knife as `knife/yourorg-credentials`
-   - or, if somebody has added it,
-
- ```
- git pull
- git submodule update --init
- find . -iname '*.pem' -exec chmod og-rw {} \;
- cp knife/${OLD_CHEF_ORGANIZATION}-credentials/knife-user-${CHEF_USER}.rb knife/${CHEF_ORGANIZATION}-credentials
- cp knife/${OLD_CHEF_ORGANIZATION}-credentials/${CHEF_USER}.pem knife/${CHEF_ORGANIZATION}-credentials/
- ```
-
- * create AWS account
-   - [sign up for AWS + credit card + password]
-   - make IAM users for admins
-   - add your IAM keys into your `{credentials}/knife-user`
-
- * create Opscode account
-   - download org keys, put them in the credentials repo
-
- ## Populate Chef Server
-
- * create the `prod`, `dev` and `stag` environments using
-
- ```
- knife environment create dev
- knife environment create prod
- knife environment create stag
- knife environment from file environments/stag.json
- knife environment from file environments/dev.json
- knife environment from file environments/prod.json
- ```
-
- ```
- knife cookbook upload --all
- rake roles
- # if you have data bags, do that too
- ```
-
- ## Create Your Initial Machine Boot-Image (AMI)
-
- * Start by launching the burninator cluster: `knife cluster launch --bootstrap --yes burninator-trogdor-0`
-   - You may have to specify the template by adding this as an argument: `--template-file ${CHEF_HOMEBASE}/vendor/ironfan/lib/chef/knife/bootstrap/ubuntu10.04-ironfan.erb`
-   - This template makes the machine auto-connect to the server upon launch and teleports the client-key into the machine.
-   - If this fails, bootstrap separately: `knife cluster bootstrap --yes burninator-trogdor-0`
-
- * Log into the burninator-trogdor and run the script /tmp/burn_ami_prep.sh: `sudo bash /tmp/burn_ami_prep.sh`
-   - You will have to ssh as the ubuntu user and pass in the burninator.pem identity file.
-   - Review the output of this script and ensure the world we have created is sane.
-
- * Once the script has been run:
-   - Exit the machine.
-   - Go to the AWS console.
-   - DO NOT stop the machine.
-   - Do "Create Image (EBS AMI)" from the burninator-trogdor instance (may take a while).
-
- * Add the AMI id to your `{credentials}/knife-org.rb` in the `ec2_image_info.merge!` section and create a reference name for the image (e.g. ironfan-natty).
-   - Add that reference name to the burninator-village facet in the burninator.rb cluster definition: `cloud.image_name 'ironfan-natty'`
-
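For reference, an `ec2_image_info.merge!` entry looks roughly like the sketch below; the key layout and AMI id are illustrative guesses, so copy an existing entry from your `knife-org.rb` rather than this one.

```ruby
# Illustrative only: register the freshly burned AMI under a reference name.
# 'ami-xxxxxxxx' is a placeholder; the key layout is an assumption.
ec2_image_info.merge!(
  %w[us-east-1 64-bit ebs ironfan-natty] => { :image_id => 'ami-xxxxxxxx' }
)
```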
- * Launch the burninator-village in order to test your newly created AMI.
-   - The village should launch with no problems, have the correct permissions, and be able to complete a chef run: `sudo chef-client`.
-
- * If all has gone well so far, you may now stop the original burninator: `knife cluster kill burninator-trogdor`
-   - Leave the burninator-village up and stay ssh'ed in to assist with the next step.
-
- ## Create an NFS
-
- * Make a command/control cluster definition file with an nfs facet (see clusters/demo_cnc.rb).
-   - Make sure to specify the `image_name` to be the AMI you've created.
-
- * In the AWS console, make yourself a 20GB drive.
-   - Make sure the availability zone matches the one specified in your cnc_cluster definition file.
-   - Don't choose a snapshot.
-   - Set the device name to `/dev/sdh`.
-   - Attach it to the burninator-village instance.
-
- * ssh into burninator-village to format the nfs drive:
- ```
- dev=/dev/xvdh ; name='home_drive' ; sudo umount $dev ; ls -l $dev ; sudo mkfs.xfs $dev ; sudo mkdir /mnt/$name ; sudo mount -t xfs $dev /mnt/$name ; sudo bash -c "echo 'snapshot for $name burned on `date`' > /mnt/$name/vol_info.txt "
- sudo cp -rp /home/ubuntu /mnt/$name/ubuntu
- sudo umount /dev/xvdh
- exit
- ```
- * Back in the AWS console, snapshot the volume and name it `{org}-home_drive`. Delete the original volume as it is not needed anymore.
-   - While you're in there, make `{org}-resizable_1gb` a 'Minimum-sized snapshot, resizable -- use `xfs_growfs` to resize after launch' snapshot.
-
- * Paste the snapshot id into your cnc_cluster definition file.
-   - ssh into the newly launched cnc_cluster-nfs.
-   - You should restart the machine via the AWS console (may or may not be necessary, but do it anyway).
-
- * Manipulate security groups
-   - The `nfs_server` group should open all UDP ports and all TCP ports to the `nfs_client` group.
-
- * Change /etc/ssh/sshd_config to be passwordful and restart the ssh service.
data/notes/INSTALL.md DELETED
@@ -1,134 +0,0 @@
- # Ironfan Installation Instructions
-
- First of all, every Chef installation needs a Chef Homebase. Chef Homebase is the place where cookbooks, roles, config files and other artifacts for managing systems with Chef will live. Store this homebase in a version control system such as Git and treat it like source code.
-
- ## Conventions
-
- In all of the below,
-
- * `{homebase}`: the directory that holds your Chef cookbooks, roles and so forth. For example, this file is in `{homebase}/README.md`.
- * `{username}`: identifies your personal Chef client name: the thing you use to log into the Chef WebUI.
- * `{organization}`: identifies the credentials set and cloud settings to use. If your Chef server is on the Opscode platform (Try it! It's super-easy), use your organization name (the last segment of your chef_server url). If not, use an identifier you deem sensible.
-
- <a name="initial_install"></a>
- ## Install Ironfan's Gem and Homebase
-
- _Before you begin, you may wish to fork the homebase repo, as you'll be making changes to personalize it for your platform that you may want to share with teammates. If you do so, replace all references to infochimps-labs/ironfan-homebase with your fork's path._
-
- 1. Install system prerequisites (libXML and libXSLT). The following works under Debian/Ubuntu:
-
-        sudo apt-get install libxml2-dev libxslt1-dev
-
- 1. Install the Ironfan gem (you may need to use `sudo`):
-
-        gem install ironfan
-
- 1. Clone the repo. It will produce the directory we will call `homebase` from now on:
-
-        git clone https://github.com/infochimps-labs/ironfan-homebase homebase
-        cd homebase
-        bundle install
-        git submodule update --init
-        git submodule foreach git checkout master
-
- <a name="knife-configuration"></a>
- ## Configure Knife and Add Credentials
-
- Ironfan expands out the traditional singular [knife.rb](http://wiki.opscode.com/display/chef/Knife#Knife-ConfiguringYourSystemForKnife) into several components. This modularity allows for better management of sensitive shared credentials, personal credentials, and organization-wide configuration.
-
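As a rough picture of the split, the organization-wide piece carries the kind of settings a plain `knife.rb` would otherwise hold. The values below are placeholders and the layout is only a sketch; the templates in `example-credentials` are authoritative.

```ruby
# Sketch of the sort of settings knife/credentials/knife-org.rb centralizes.
# These are standard knife.rb configuration calls with placeholder values.
organization = 'yourorg'
chef_server_url         "https://api.opscode.com/organizations/#{organization}"
validation_client_name  "#{organization}-validator"
validation_key          File.expand_path("../#{organization}-validator.pem", __FILE__)
```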
- ### Set up
-
- _Note_: If your local username differs from your Opscode Chef username, then you should `export CHEF_USER={username}` (e.g. from your `.bashrc`) before you run any knife commands.
-
- So that Knife finds its configuration files, symlink the `{homebase}/knife` directory (the one holding this file) to be your `~/.chef` folder.
-
-     cd {homebase}
-     ln -sni $CHEF_HOMEBASE/knife ~/.chef
-
- <a name="credentials"></a>
- ### Credentials Directory
-
- All the keys and settings specific to your organization are held in a directory named `credentials/`, versioned independently of the homebase.
-
- To set up your credentials directory, visit `{homebase}/knife` and duplicate the example, naming it `credentials`:
-
-     cd $CHEF_HOMEBASE/knife
-     rm credentials
-     cp -a example-credentials credentials
-     cd credentials
-     git init ; git add .
-     git commit -m "New credentials universe for $CHEF_ORGANIZATION" .
-
- You will likely want to store the credentials in another remote repository. We recommend erring on the side of caution in its hosting. Setting that up is outside the scope of this guide, but there are [good external resources](http://book.git-scm.com/3_distributed_workflows.html) available to get you started.
-
- <a name="download"></a>
- ### Download Cloud Credentials
-
- You will need to obtain user keys from your cloud providers. Your AWS access keys can be obtained from [Amazon IAM](https://console.aws.amazon.com/iam/home):
-
- ![Reset AWS User Key](https://github.com/infochimps-labs/ironfan/wiki/aws_user_key.png)
-
- __________________________________________________________________________
-
- Your Opscode user key can be obtained from the [Opscode Password settings](https://www.opscode.com/account/password) console:
-
- ![Reset Opscode User Key](https://github.com/infochimps-labs/ironfan/wiki/opscode_user_key.png)
-
- __________________________________________________________________________
-
- Your Opscode organization validator key can be obtained from the [Opscode Organization management](https://manage.opscode.com/organizations) console, by choosing the `Regenerate validation key` link:
-
- ![Reset Opscode Organization Key](https://github.com/infochimps-labs/ironfan/wiki/opscode_org_key.png)
-
- __________________________________________________________________________
-
-
- <a name="org"></a>
- ### User / Organization-specific config
-
- Edit the following in your new `credentials`:
-
- * Organization-specific settings are in `knife/credentials/knife-org.rb`:
-   - _organization_: your organization name
-   - _chef server url_: edit the lines for your `chef_server_url` and `validator`. _Note_: If you are an Opscode platform user, you can skip this step -- your `chef_server_url` defaults to `https://api.opscode.com/organizations/#{organization}` and your validator to `{organization}-validator.pem`.
-   - Cloud-specific settings: if you are targeting a cloud provider, add account information and configuration here.
-
- * User-specific settings are in `knife/credentials/knife-user-{username}.rb`. (You can duplicate and rename the one in `knife/example-credentials/knife-user-example.rb`.) For example, if you're using Amazon EC2 you should set your access keys:
-
-       Chef::Config.knife[:aws_access_key_id] = "XXXX"
-       Chef::Config.knife[:aws_secret_access_key] = "XXXX"
-       Chef::Config.knife[:aws_account_id] = "XXXX"
-
- * Chef user key is in `{credentials_path}/{username}.pem`
-
- * Organization validator key is in `{credentials_path}/{organization}-validator.pem`
-
- * If you have existing Amazon machines, place their keypairs in `{credentials_path}/ec2_keys`. Ironfan will also automatically populate this with new keys as new clusters are created. Commit the resulting keys back to the credentials repo to share them with your teammates, or they will be unable to make certain calls against the resulting architecture.
-
- <a name="go_speed_racer"></a>
- ## Try it out
-
- You should now be able to use Knife to control your clusters:
-
-     $ knife cluster list
-     +--------------------+---------------------------------------+
-     | cluster            | path                                  |
-     +--------------------+---------------------------------------+
-     | burninator         | /cloud/clusters/burninator.rb         |
-     | el_ridiculoso      | /cloud/clusters/el_ridiculoso.rb      |
-     | elasticsearch_demo | /cloud/clusters/elasticsearch_demo.rb |
-     | hadoop_demo        | /cloud/clusters/hadoop_demo.rb        |
-     | sandbox            | /cloud/clusters/sandbox.rb            |
-     +--------------------+---------------------------------------+
-
- Launching a cluster in the cloud should now be this easy!
-
-     knife cluster launch sandbox-simple --bootstrap
-
- ## Next
-
- See the README file in each of the subdirectories for more information about what goes in those directories. If you are bored of reading, go customize one of the files in the `clusters/` directory. Or, if you're a fan of ridiculous things and have ever pondered how many things you can fit in one box, launch el_ridiculoso: it contains every single recipe we have ever made, stacked on top of one another.
-
-     knife cluster launch el_ridiculoso-gordo --bootstrap
-
- For more information about configuring Knife, see the [Knife documentation](http://wiki.opscode.com/display/chef/knife).
data/notes/Ironfan-Roadmap.md DELETED
@@ -1,70 +0,0 @@
- # Ironfan Roadmap
-
- ## Summary
-
- - I. Ironfan-ci
- - II. DSL Undercarriage / OpenStack
- - III. Cookbook Updates
- - IV. Keys Handling
- - V. Silverware Update
- - VI. Ironfan Knife
- - VII. Orchestration
-
- ## Detailed Roadmap
-
- ### Ironfan-CI (I)
- Jenkins on laptop (Done)
- Jenkins runs VM, sees output of test
- Translate announcement to cucumber lines
- Implement as necessary new Cuken tests
-
- ### OpenStack / Multi-cloud (II)
- * Learn OpenStack
- * (get accts @ a couple providers + Eucalyptus)
- * Fog (library we use, EC2 only?) compatibility with some tear-out
- * Depends on DSL Object above
- * Move stuff in Fog_layer to be methods on Cloud Object
- * cloud(:ec2, 'us_east') do
- *   cores 1
- * end
- * Cloud Statement is just a layer, not its own object
- * (Cloud loses to everything else, we think)
-
- ### Ironfanize Rest of Cookbooks (III)
- * Debugging and updating exercise.
- * Ironfan-ci accelerates
- * Zabbix
- * MySQL
- * Map to order of operations
- * Clean Separation of tight-bound services
- * Resque's Redis
- * Flume's Zookeeper
-
- ### DSL Object / Librarification (Mix)
- * New DSL Object (II)
- * Unify Models in Silverware/lib & Ironfan/lib (Birth of the Ironfan API Interface) (II)
- * Birth of the Ironfan API Interface (V)
- * Clean up Announcement Interface (framework) (V)
- * Merge Volume (VIII)
- * Actual Model for a dummy node (VIII)
- * Refactor deploy code across cookbooks (III)
- * "Discovers" is an aspect endowed upon a component when it discovers another component, to find out what depends on a service (V)
- * Key Databag Rollout (IV)
-
- ### Ironfan-knife (VI)
- * Separate SSH user as "Machine" or "Me"
- * Better Error Messages
- * Verbose vs. Sustained
- * Clearout Issues
- * Refactor Cluster into definitions -- "Stacks" (Roles that are smarter)
- * Role Replacement
- * (Design doc forthcoming)
-
- ### Orchestration (VI/VII)
- * System diagram / reporting (VII)
- * Ticketed Worker Queue to run steps (bring up a Hadoop cluster, for instance) (VII)
- * Rundeck? Juju? (VII)
- * Activity stream (VII)
- * Helpers (VII)
- * API Frontend (VII)
- * Richer Slice Queries (VI)