ironfan 3.1.0.rc1 → 3.1.1

Files changed (49)
  1. data/.yardopts +5 -0
  2. data/CHANGELOG.md +18 -5
  3. data/README.md +34 -115
  4. data/TODO.md +36 -8
  5. data/VERSION +1 -1
  6. data/cluster_chef-knife.gemspec +2 -3
  7. data/ironfan.gemspec +29 -4
  8. data/lib/chef/knife/cluster_bootstrap.rb +1 -1
  9. data/lib/chef/knife/cluster_kick.rb +3 -3
  10. data/lib/chef/knife/cluster_kill.rb +1 -1
  11. data/lib/chef/knife/cluster_launch.rb +3 -3
  12. data/lib/chef/knife/cluster_list.rb +1 -1
  13. data/lib/chef/knife/cluster_proxy.rb +5 -5
  14. data/lib/chef/knife/cluster_show.rb +3 -2
  15. data/lib/chef/knife/cluster_ssh.rb +2 -2
  16. data/lib/chef/knife/cluster_start.rb +1 -2
  17. data/lib/chef/knife/cluster_stop.rb +2 -1
  18. data/lib/chef/knife/cluster_sync.rb +2 -2
  19. data/lib/chef/knife/cluster_vagrant.rb +144 -0
  20. data/lib/chef/knife/{knife_common.rb → ironfan_knife_common.rb} +8 -4
  21. data/lib/chef/knife/{generic_command.rb → ironfan_script.rb} +1 -1
  22. data/lib/chef/knife/vagrant/ironfan_environment.rb +18 -0
  23. data/lib/chef/knife/vagrant/ironfan_provisioners.rb +27 -0
  24. data/lib/chef/knife/vagrant/skeleton_vagrantfile.rb +116 -0
  25. data/lib/ironfan/chef_layer.rb +2 -2
  26. data/lib/ironfan/fog_layer.rb +16 -13
  27. data/lib/ironfan/private_key.rb +3 -3
  28. data/lib/ironfan/server.rb +9 -6
  29. data/lib/ironfan/server_slice.rb +11 -0
  30. data/notes/Home.md +30 -0
  31. data/notes/INSTALL-cloud_setup.md +100 -0
  32. data/notes/INSTALL.md +135 -0
  33. data/notes/Knife-Cluster-Commands.md +8 -0
  34. data/notes/Silverware.md +5 -0
  35. data/notes/aws_console_screenshot.jpg +0 -0
  36. data/notes/cookbook-versioning.md +11 -0
  37. data/notes/declaring_volumes.md +3 -0
  38. data/notes/design_notes-ci_testing.md +169 -0
  39. data/notes/design_notes-cookbook_event_ordering.md +212 -0
  40. data/notes/design_notes-dsl_object.md +55 -0
  41. data/notes/design_notes-meta_discovery.md +59 -0
  42. data/notes/ec2-pricing_and_capacity.md +63 -0
  43. data/notes/ironfan_homebase_layout.md +94 -0
  44. data/notes/named-cloud-objects.md +11 -0
  45. data/notes/rake_tasks.md +25 -0
  46. data/notes/renamed-recipes.txt +142 -0
  47. data/notes/style_guide.md +251 -0
  48. data/notes/tips_and_troubleshooting.md +83 -0
  49. metadata +50 -26
data/lib/chef/knife/cluster_vagrant.rb ADDED
@@ -0,0 +1,144 @@
+ #
+ # Author:: Philip (flip) Kromer (<flip@infochimps.com>)
+ # Copyright:: Copyright (c) 2011 Infochimps, Inc
+ # License:: Apache License, Version 2.0
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ #
+
+ class Chef
+   class Knife
+
+     #
+     # Vagrant support is VERY rough. It will
+     #
+     # * change the ui/commandline args, at the worst possible moment for you to adapt
+     # * show you info that is inaccurate, even beyond the obvious fact that it reports AWS properties.
+     # * translate your documents into swahili, make your TV record Gigli -- [tracker issue here](http://bit.ly/vrsalert)
+     #
+     # Vagrant has a really strong narcissistic streak, even more so than chef
+     # and ironfan already do. I don't want to fight (it's probably that way for
+     # good reasons), so the following oddball procedure may persist until it a)
+     # stops working well or b) someone recommends a better approach.
+     #
+     # when you run `knife cluster vagrant command cluster-[facet-[indexes]]` we:
+     #
+     # * identify all servers defined on that cluster
+     # * find (or make) the directory config.vagrant_path
+     #   - if unset, will use `{homebase}/vagrants/{cluster_name}`
+     # * copy a special-purpose vagrantfile into that directory
+     #   - it is called vagrantfile, but won't work in a standalone way.
+     #
+     #
+     class ClusterVagrant < Knife
+       IRONFAN_DIR = File.dirname(File.realdirpath(__FILE__))
+       require File.expand_path('ironfan_knife_common', IRONFAN_DIR)
+       include Ironfan::KnifeCommon
+
+       deps do
+         Ironfan::KnifeCommon.load_deps
+         require 'vagrant'
+         require File.expand_path('vagrant/ironfan_environment', IRONFAN_DIR)
+         require File.expand_path('vagrant/ironfan_provisioners', IRONFAN_DIR)
+       end
+
+       banner "knife cluster vagrant CMD CLUSTER-[FACET-[INDEXES]] (options) - runs the given command against a vagrant environment created from your cluster definition. EARLY, use at your own risk"
+
+       option :cloud,
+         :long        => "--cloud PROVIDER",
+         :short       => "-p",
+         :description => "cloud provider to target, or 'false' to skip cloud-targeted steps. (default false)",
+         :default     => false,
+         :boolean     => false
+
+       def run
+         # The subnet for this world
+         Chef::Config.host_network_blk ||= '33.33.33'
+         # Location that cookbooks, roles, etc will be mounted on vm
+         # set to false to skip
+         Chef::Config.homebase_on_vm_dir "/homebase" if Chef::Config.homebase_on_vm_dir.nil?
+         # yuck. necessary until cloud agnosticism shows up
+         Chef::Config[:cloud] = config[:cloud] = false
+         # TODO: make this customizable
+         Chef::Config[:vagrant_path] = File.expand_path("vagrants", Chef::Config.homebase)
+
+         # standard ironfan knife preamble
+         load_ironfan
+         die("Missing command or slice:\n#{banner}") if @name_args.length < 2
+         die("Too many args:\n#{banner}") if @name_args.length > 2
+         configure_dry_run
+         ui.warn "Vagrant support is VERY rough: the ui will change, displays are inaccurate, may translate your documents into swahili"
+
+         #
+         # Load the servers. Note carefully: this is subtly different from all
+         # the other `knife cluster` commands. Vagrant provides idempotency, but
+         # we want the vagrant file to be invariant to the particular servers
+         # you're asking it to diddle.
+         #
+         # So we configure VMs for all servers in the cluster, but only issue the
+         # cli command against the ones given by the server slice.
+         #
+         vagrant_command, slice_string = @name_args
+         target      = get_slice(slice_string)
+         all_servers = target.cluster.servers
+         display(target)
+
+         # Pre-populate information in chef
+         section("Sync'ing to chef and cloud")
+         target.sync_to_chef
+
+         # FIXME: I read somewhere that global variables are a smell for something
+         $ironfan_target = all_servers
+
+         #
+         # Prepare vagrant
+         #
+         section("Configuring vagrant", :green)
+
+         cluster_vagrant_dir  = File.expand_path(target.cluster.name.to_s, Chef::Config.vagrant_path)
+         skeleton_vagrantfile = File.expand_path('vagrant/skeleton_vagrantfile.rb', IRONFAN_DIR)
+
+         # using ':vagrantfile_name => skeleton_vagrantfile' doesn't seem to work
+         # in vagrant - the VM comes out incompletely configured in a way I don't
+         # totally understand. Plus it wants its own directory anyhow. So, make a
+         # directory to hold vagrantfiles and push the skeleton in there
+         FileUtils.mkdir_p cluster_vagrant_dir
+         FileUtils.cp skeleton_vagrantfile, File.join(cluster_vagrant_dir, 'vagrantfile')
+
+         log_level = [0, (3 - config.verbosity)].max
+         env = Vagrant::IronfanEnvironment.new(
+           :ui_class  => Vagrant::UI::Colored,
+           :cwd       => cluster_vagrant_dir,
+           :log_level => log_level,
+         )
+
+         #
+         # Run command
+         #
+         section("issuing command 'vagrant #{vagrant_command}'", :green)
+
+         target.servers.each do |server|
+           env.cli(vagrant_command, server.fullname)
+         end
+       end
+
+       def display(target)
+         super(target.cluster.servers,
+           ["Name", "InstanceID", "State", "Flavor", "Image", "AZ", "Public IP", "Private IP", "Created At", 'Volumes', 'Elastic IP']) do |svr|
+           { 'targeted?' => (target.include?(svr) ? "[blue]true[reset]" : '-' ), }
+         end
+       end
+
+     end
+   end
+ end
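
To make the flow of `run` concrete: every server in the cluster gets a VM definition in the copied vagrantfile, but the vagrant cli command is only issued for the slice you named. A minimal sketch of that targeting behavior, with hypothetical cluster data (these names are illustrative, not part of the gem):

```ruby
# Sketch of ClusterVagrant#run's targeting -- hypothetical server names.
all_servers = %w[el_ridiculoso-gordo-0 el_ridiculoso-gordo-1 el_ridiculoso-pequeno-0]
targeted    = %w[el_ridiculoso-gordo-1]   # the slice named on the command line

# the copied vagrantfile defines a VM for *every* server in the cluster,
# keeping the file invariant to whichever slice you operate on ...
all_servers.each{|name| puts "config.vm.define #{name}" }

# ... while the cli command runs only against the targeted slice
targeted.each{|name| puts "vagrant up #{name}" }

# knife verbosity maps inversely onto vagrant's log level, floored at 0:
# no -V => 3, -V => 2, -VV => 1
log_level = [0, (3 - 1)].max  #=> 2 when config.verbosity == 1
```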

data/lib/chef/knife/{knife_common.rb → ironfan_knife_common.rb} RENAMED
@@ -27,10 +27,13 @@ module Ironfan
     #   You must specify a facet if you use slice_indexes.
     #
     # @return [Ironfan::ServerSlice] the requested slice
-    def get_slice(cluster_name, facet_name=nil, slice_indexes=nil)
-      if facet_name.nil? && slice_indexes.nil?
-        cluster_name, facet_name, slice_indexes = cluster_name.split(/[\s\-]/, 3)
+    def get_slice(slice_string, *args)
+      if not args.empty?
+        slice_string = [slice_string, args].flatten.join("-")
+        ui.info("")
+        ui.warn("Please specify server slices joined by dashes and not separate args:\n\n  knife cluster #{sub_command} #{slice_string}\n\n")
       end
+      cluster_name, facet_name, slice_indexes = slice_string.split(/[\s\-]/, 3)
       ui.info("Inventorying servers in #{predicate_str(cluster_name, facet_name, slice_indexes)}")
       cluster = Ironfan.load_cluster(cluster_name)
       cluster.resolve!
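
The split behaves as follows (example slice strings only): the limit of 3 keeps any further dashes attached to the index expression, and missing pieces come back as nil on multiple assignment:

```ruby
# How a slice string decomposes under split(/[\s\-]/, 3) -- example values.
"el_ridiculoso-gordo-2".split(/[\s\-]/, 3)    #=> ["el_ridiculoso", "gordo", "2"]
"el_ridiculoso-gordo".split(/[\s\-]/, 3)      #=> ["el_ridiculoso", "gordo"]
"el_ridiculoso-gordo-2-5".split(/[\s\-]/, 3)  #=> ["el_ridiculoso", "gordo", "2-5"]

# missing pieces surface as nil on multiple assignment:
cluster_name, facet_name, slice_indexes = "el_ridiculoso".split(/[\s\-]/, 3)
[cluster_name, facet_name, slice_indexes]     #=> ["el_ridiculoso", nil, nil]
```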

@@ -183,7 +186,8 @@ module Ironfan
         next if options.include?(name) || options[:except].include?(name)
         option name, info
       end
-      banner "knife cluster #{sub_command} CLUSTER_NAME [FACET_NAME [INDEXES]] (options)"
+      options[:description] ||= "#{sub_command} all servers described by given cluster slice"
+      banner "knife cluster #{"%-11s" % sub_command} CLUSTER-[FACET-[INDEXES]] (options) - #{options[:description]}"
     end
   end

   def self.included(base)

data/lib/chef/knife/{generic_command.rb → ironfan_script.rb} RENAMED
@@ -16,7 +16,7 @@
   # limitations under the License.
   #

- require File.expand_path(File.dirname(__FILE__)+"/knife_common.rb")
+ require File.expand_path('ironfan_knife_common', File.dirname(File.realdirpath(__FILE__)))

   module Ironfan
     class Script < Chef::Knife

data/lib/chef/knife/vagrant/ironfan_environment.rb ADDED
@@ -0,0 +1,18 @@
+ module Vagrant
+   class IronfanEnvironment < Vagrant::Environment
+
+     def initialize(opts={})
+       super(opts)
+       munge_logger(opts)
+     end
+
+
+   protected
+     def munge_logger(opts)
+       logger = Log4r::Logger.new("vagrant")
+       logger.outputters = Log4r::Outputter.stderr
+       logger.level = opts[:log_level] || 3
+       logger.info("ironfan vagrant (#{self}) - cwd: #{cwd}")
+     end
+   end
+ end

data/lib/chef/knife/vagrant/ironfan_provisioners.rb ADDED
@@ -0,0 +1,27 @@
+ module Vagrant
+   module Provisioners
+     class IronfanChefClient < Vagrant::Provisioners::ChefClient
+       class Config < Vagrant::Provisioners::ChefClient::Config
+         attr_accessor :upload_client_key
+       end
+       def self.config_class() Config ; end
+
+       def upload_validation_key
+         super()
+         if config.upload_client_key
+           # env[:ui].info I18n.t("vagrant.provisioners.chef.upload_client_key")
+           host_client_key = File.expand_path(config.upload_client_key)
+           env[:vm].channel.upload(host_client_key, tmp_client_key_path)
+           env[:vm].channel.sudo("mv #{tmp_client_key_path} #{config.client_key_path}")
+           env[:vm].channel.sudo("chown root #{config.client_key_path}")
+           env[:vm].channel.sudo("chmod 600 #{config.client_key_path}")
+         end
+       end
+
+       def tmp_client_key_path
+         File.join(config.provisioning_path, "client.pem")
+       end
+
+     end
+   end
+ end
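
For orientation, here is how the subclass is meant to be wired into a VM definition inside `Vagrant::Config.run`; the `upload_client_key` accessor is the only surface added over the stock ChefClient provisioner. The server name and key path below are hypothetical (the skeleton vagrantfile that follows does this for real):

```ruby
# Sketch: using the extended provisioner in a Vagrantfile VM definition.
config.vm.provision Vagrant::Provisioners::IronfanChefClient do |chef|
  chef.chef_server_url   = 'https://api.opscode.com/organizations/yourorg'
  chef.node_name         = 'el_ridiculoso-gordo-2'
  # the new knob: push an existing client key onto the vm so the node
  # authenticates as itself instead of re-registering via the validator
  chef.upload_client_key = '/path/to/client-el_ridiculoso-gordo-2.pem'
end
```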

data/lib/chef/knife/vagrant/skeleton_vagrantfile.rb ADDED
@@ -0,0 +1,116 @@
+ #
+ # ~~ DO NOT EDIT ~~
+ #
+
+ unless defined?(Chef::Config)
+   puts <<-EOL
+     Warning!
+
+     This might *look* like a vagrantfile. However it's only useful when you use
+     `knife cluster vm` to manipulate it. Any changes you put here will be
+     clobbered, and you can't edit the original if you install ironfan as a gem.
+     Annoying, but concentrate on the magic of the jetpack-powered ridiculous
+     cybernetic future magic that's going on here.
+
+     Instead, issue commands like
+
+       knife cluster vagrant COMMAND CLUSTER[-FACET-[INDEXES]]
+
+     Where command is something like
+
+       status up provision reload resume suspend ssh destroy or halt
+
+   EOL
+   exit(2)
+ end
+
+
+ Vagrant::Config.run do |config|
+
+   #
+   # FIXME: the things in this first section need to go somewhere else; need to
+   # learn more abt vagrant to figure out where.
+   #
+
+   def ip_for(svr)
+     "#{Chef::Config.host_network_blk}.#{30 + svr.facet_index}"
+   end
+
+   # FIXME: things like this should be imputed by the `cloud` statement
+   ram_mb       = 640
+   video_ram_mb = 10
+   cores        = 2
+
+   # ===========================================================================
+   #
+   # Configure VM
+   #
+
+   # Boot with a GUI so you can see the screen. (Default is headless)
+   config.vm.boot_mode = :gui
+
+   # Mount this to see all our chefs and stuff: [type, vm_path, host_path]
+   config.vm.share_folder "homebase", Chef::Config.homebase_on_vm_dir, Chef::Config.homebase
+
+   #
+   # Define a VM for all the targeted servers in the cluster.
+   #
+   # * vm name - server's fullname ('el_ridiculoso-gordo-2')
+   # * vm box  - cloud.image_name
+   # * creates host network on the subnet defined in Chef::Config[:host_network_blk]
+   # * populates chef provisioner from the server's run_list
+   #
+   $ironfan_target.servers.each do |svr|
+     config.vm.define svr.fullname.to_sym do |cfg|
+
+       cfg.vm.box = svr.cloud.image_name
+       cfg.vm.network :hostonly, ip_for(svr)
+
+       #
+       # See http://www.virtualbox.org/manual/ch08.html#idp12418752
+       # for the craziness
+       #
+       vm_customizations = {}
+       vm_customizations[:name]   = svr.fullname.to_s
+       vm_customizations[:memory] = ram_mb.to_s       if ram_mb
+       vm_customizations[:vram]   = video_ram_mb.to_s if video_ram_mb
+       vm_customizations[:cpus]   = cores.to_s        if cores
+       # Use the host resolver for DNS so that VPN continues to work within the VM
+       vm_customizations[:natdnshostresolver1] = "on"
+       #
+       cfg.vm.customize ["modifyvm", :id, vm_customizations.map{|k,v| ["--#{k}", v]} ].flatten
+
+       # Assign this VM to a bridged network, allowing you to connect directly to a
+       # network using the host's network device. This makes the VM appear as another
+       # physical device on your network.
+       # cfg.vm.network :bridged
+
+       # Forward a port from the guest to the host, which allows for outside
+       # computers to access the VM, whereas host only networking does not.
+       # cfg.vm.forward_port 80, 8080
+
+       cfg.vm.provision Vagrant::Provisioners::IronfanChefClient do |chef|
+         #
+         chef.node_name = svr.fullname
+
+         chef.chef_server_url        = svr.chef_server_url
+         chef.validation_client_name = svr.validation_client_name
+         chef.validation_key_path    = svr.validation_key
+         chef.upload_client_key      = svr.client_key.filename if svr.client_key
+
+         chef.environment = svr.environment
+         chef.json        = svr.cloud.user_data
+
+         svr.combined_run_list.each do |run_list_entry|
+           case run_list_entry
+           when /^role\[(\w+)\]$/   then chef.add_role($1)               # role[foo]
+           when /^recipe\[(\w+)\]$/ then chef.add_recipe($1)             # recipe[foo]
+           else                          chef.add_recipe(run_list_entry) # foo
+           end
+         end
+
+       end
+     end
+   end
+
+ end
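
One subtlety in the run-list fan-out at the bottom: the `\w+` captures only match simple `role[...]`/`recipe[...]` entries, so anything else -- including a bare namespaced recipe like `hadoop::datanode` -- falls through and is added as a recipe verbatim. (And `ip_for` pins each VM's host-only address to `33.33.33.(30 + facet_index)` under the default network block.) A standalone sketch of the dispatch, fed sample entries:

```ruby
# Sketch of the run-list dispatch above, with sample entries.
[ 'role[systemwide]', 'recipe[java]', 'hadoop::datanode' ].each do |entry|
  case entry
  when /^role\[(\w+)\]$/   then puts "add_role   #{$1}"     # role[systemwide] -> systemwide
  when /^recipe\[(\w+)\]$/ then puts "add_recipe #{$1}"     # recipe[java]     -> java
  else                          puts "add_recipe #{entry}"  # bare entries pass through
  end
end
# prints:
#   add_role   systemwide
#   add_recipe java
#   add_recipe hadoop::datanode
```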

data/lib/ironfan/chef_layer.rb CHANGED
@@ -145,7 +145,7 @@ module Ironfan
     def chef_client_script_content
       return @chef_client_script_content if @chef_client_script_content
       return unless cloud.chef_client_script
-      script_filename = File.expand_path("../../config/#{cloud.chef_client_script}", File.dirname(__FILE__))
+      script_filename = File.expand_path("../../config/#{cloud.chef_client_script}", File.dirname(File.realdirpath(__FILE__)))
       @chef_client_script_content = safely{ File.read(script_filename) }
     end

@@ -202,7 +202,7 @@ module Ironfan
     def set_chef_node_attributes
       step("  setting node runlist and essential attributes")
       @chef_node.run_list = Chef::RunList.new(*@settings[:run_list])
-      @chef_node.normal[ :organization]  = Chef::Config[:organization] if Chef::Config[:organization]
+      @chef_node.normal[:organization]   = organization if organization
       @chef_node.override[:cluster_name] = cluster_name
       @chef_node.override[:facet_name]   = facet_name
       @chef_node.override[:facet_index]  = facet_index

data/lib/ironfan/fog_layer.rb CHANGED
@@ -20,23 +20,26 @@ module Ironfan
     end

     def fog_launch_description
+       user_data_hsh =
+         if client_key.body then cloud.user_data.merge({ :client_key => client_key.body })
+         else cloud.user_data.merge({ :validation_key => cloud.validation_key }) ; end
+       #
       {
-         :image_id          => cloud.image_id,
-         :flavor_id         => cloud.flavor,
-         #
-         :groups            => cloud.security_groups.keys,
-         :key_name          => cloud.keypair.to_s,
+         :image_id             => cloud.image_id,
+         :flavor_id            => cloud.flavor,
+         :groups               => cloud.security_groups.keys,
+         :key_name             => cloud.keypair.to_s,
         # Fog does not actually create tags when it creates a server.
-         :tags              => {
-           :cluster => cluster_name,
-           :facet   => facet_name,
-           :index   => facet_index, },
-         :user_data         => JSON.pretty_generate(cloud.user_data),
-         :block_device_mapping => block_device_mapping,
+         :tags                 => {
+           :cluster            => cluster_name,
+           :facet              => facet_name,
+           :index              => facet_index, },
+         :user_data            => JSON.pretty_generate(user_data_hsh),
+         :block_device_mapping => block_device_mapping,
+         :availability_zone    => self.default_availability_zone,
+         :monitoring           => cloud.monitoring,
         # :disable_api_termination => cloud.permanent,
         # :instance_initiated_shutdown_behavior => instance_initiated_shutdown_behavior,
-         :availability_zone => self.default_availability_zone,
-         :monitoring        => cloud.monitoring,
       }
     end

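The practical effect: the key material now rides inside the instance's user_data, with exactly one of the two keys present. A sketch of the payload shape, using placeholder values (compare the user_data fields assembled in server.rb below):

```ruby
# Shape of the :user_data payload handed to fog -- placeholder values only.
require 'json'

user_data_hsh = {
  :chef_server            => 'https://api.opscode.com/organizations/yourorg',
  :validation_client_name => 'yourorg-validator',
  :node_name              => 'el_ridiculoso-gordo-2',
  :cluster_name           => 'el_ridiculoso',
  :facet_name             => 'gordo',
  :facet_index            => 2,
  :run_list               => ['role[systemwide]', 'role[el_ridiculoso_gordo]'],
  :client_key             => "-----BEGIN RSA PRIVATE KEY-----\n...",
  # or, when no client key exists yet:
  # :validation_key       => "-----BEGIN RSA PRIVATE KEY-----\n...",
}
puts JSON.pretty_generate(user_data_hsh)   # the string passed as :user_data
```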

data/lib/ironfan/private_key.rb CHANGED
@@ -70,7 +70,7 @@ module Ironfan
     end

     def key_dir
-      Chef::Config.client_key_dir || '/tmp/client_keys'
+      Chef::Config.client_key_dir || "/tmp/#{ENV['USER']}-client_keys"
     end
   end

@@ -89,7 +89,7 @@

     def key_dir
       return Chef::Config.data_bag_key_dir if Chef::Config.data_bag_key_dir
-      dir = "#{ENV['HOME']}/.chef/data_bag_keys"
+      dir = "#{ENV['HOME']}/.chef/credentials/data_bag_keys"
       warn "Please set 'data_bag_key_dir' in your knife.rb. Will use #{dir} as a default"
       dir
     end
@@ -120,7 +120,7 @@
       if Chef::Config.ec2_key_dir
         return Chef::Config.ec2_key_dir
       else
-        dir = "#{ENV['HOME']}/.chef/ec2_keys"
+        dir = "#{ENV['HOME']}/.chef/credentials/ec2_keys"
         warn "Please set 'ec2_key_dir' in your knife.rb. Will use #{dir} as a default"
         dir
       end

data/lib/ironfan/server.rb CHANGED
@@ -37,7 +37,7 @@ module Ironfan
     end

     def servers
-      Ironfan::ServerGroup.new(cluster, [self])
+      Ironfan::ServerSlice.new(cluster, [self])
     end

     def bogosity val=nil

@@ -101,6 +101,10 @@
       @tags[key]
     end

+     def chef_server_url()        Chef::Config.chef_server_url        ; end
+     def validation_client_name() Chef::Config.validation_client_name ; end
+     def validation_key()         Chef::Config.validation_key         ; end
+     def organization()           Chef::Config.organization           ; end
     #
     # Resolve:
     #
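
These one-liners let callers -- the skeleton vagrantfile above, the user_data assembly just below -- read chef connection settings off the server object instead of reaching into `Chef::Config` directly:

```ruby
# Equivalent pairs (svr is any Ironfan::Server):
svr.chef_server_url         # == Chef::Config.chef_server_url
svr.validation_client_name  # == Chef::Config.validation_client_name
svr.validation_key          # == Chef::Config.validation_key
svr.organization            # == Chef::Config.organization
```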

@@ -113,19 +117,18 @@ module Ironfan
       cloud.reverse_merge!(cluster.cloud)
       #
       cloud.user_data({
-        :chef_server            => Chef::Config.chef_server_url,
-        :validation_client_name => Chef::Config.validation_client_name,
+        :chef_server            => chef_server_url,
+        :validation_client_name => validation_client_name,
         #
         :node_name              => fullname,
+        :organization           => organization,
         :cluster_name           => cluster_name,
         :facet_name             => facet_name,
         :facet_index            => facet_index,
         #
-        :run_list               => run_list,
+        :run_list               => combined_run_list,
       })
       #
-      if client_key.body then cloud.user_data({ :client_key => client_key.body, })
-      else cloud.user_data({ :validation_key => cloud.validation_key }) ; end
       cloud.keypair(cluster_name) if cloud.keypair.nil?
       #
       self

data/lib/ironfan/server_slice.rb CHANGED
@@ -33,6 +33,17 @@ module Ironfan
         ServerSlice.new cluster, @servers.send(method, *args, &block)
       end
     end
+    # true if slice contains a server with the given fullname (if arg is a
+    # string) or same fullname as the given server (if a Server)
+    #
+    # @overload include?(server_fullname)
+    #   @param [String] server_fullname checks for a server with that fullname
+    # @overload include?(server)
+    #   @param [Ironfan::Server] server checks for server with same fullname
+    def include?(server)
+      fullname = server.is_a?(String) ? server : server.fullname
+      @servers.any?{|svr| svr.fullname == fullname }
+    end

     # Return the collection of servers that are not yet 'created'
     def uncreated_servers
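
Usage is symmetric across the two argument types (names illustrative); this is what `ClusterVagrant#display` uses above to flag targeted rows:

```ruby
# Both forms answer "is this server in the slice?", compared on fullname.
slice.include?('el_ridiculoso-gordo-2')   # by fullname string
slice.include?(svr)                       # by Ironfan::Server object
```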
data/notes/Home.md ADDED
@@ -0,0 +1,30 @@
+ ## Overview
+
+ The ironfan project is an expressive toolset for constructing scalable, resilient architectures. It works in the cloud, in the data center, and on your laptop, and makes your system diagram visible and inevitable.
+
+ ### Code, documentation and support
+
+ Ironfan consists of the following:
+
+ * [ironfan-homebase](https://github.com/infochimps-labs/ironfan-homebase): centralizes the cookbooks, roles and clusters. A solid foundation for any chef user.
+ * [ironfan gem](https://github.com/infochimps-labs/ironfan):
+   - core models to describe your system diagram with a clean, expressive domain-specific language
+   - knife plugins to orchestrate clusters of machines using simple commands like `knife cluster launch`
+   - logic to coordinate truth among chef server and cloud providers.
+ * [ironfan-pantry](https://github.com/infochimps-labs/ironfan-pantry): our collection of industrial-strength, cloud-ready recipes for Hadoop, HBase, Cassandra, Elasticsearch, Zabbix and more.
+ * [silverware cookbook](https://github.com/infochimps-labs/ironfan-homebase/tree/master/cookbooks/silverware): coordinates discovery of services ("list all the machines for `awesome_webapp`, that I might load balance them") and aspects ("list all components that write logs, that I might logrotate them, or that I might monitor the free space on their volumes").
+ * [ironfan-ci](https://github.com/infochimps-labs/ironfan-ci): continuous integration testing of not just your cookbooks but your *architecture*.
+
+ * [ironfan wiki](https://github.com/infochimps-labs/ironfan/wiki): high-level documentation and install instructions
+ * [ironfan issues](https://github.com/infochimps-labs/ironfan/issues): bugs, questions and feature requests for *any* part of the ironfan toolset.
+ * [ironfan gem docs](http://rdoc.info/gems/ironfan): rdoc docs for ironfan
+
+ Please file all issues on [ironfan issues](https://github.com/infochimps-labs/ironfan/issues).
+
+ ## Table of Contents
+
+ @sya please complete
+
+
+ * [INSTALL](https://github.com/infochimps-labs/ironfan/wiki/INSTALL) -- How to get started
+ * ...
data/notes/INSTALL-cloud_setup.md ADDED
@@ -0,0 +1,100 @@
+ ## Credentials
+
+ * make a credentials repo
+   - copy the knife/example-credentials directory
+   - best to not live on github: use a private server and run
+
+     ```
+     repo=ORGANIZATION-credentials ; repodir=/gitrepos/$repo.git ; mkdir -p $repodir ; ( GIT_DIR=$repodir git init --shared=group --bare && cd $repodir && git --bare update-server-info && chmod a+x hooks/post-update )
+     ```
+
+   - git submodule it into knife as `knife/yourorg-credentials`
+   - or, if somebody has added it,
+
+     ```
+     git pull
+     git submodule update --init
+     find . -iname '*.pem' -exec chmod og-rw {} \;
+     cp knife/${OLD_CHEF_ORGANIZATION}-credentials/knife-user-${CHEF_USER}.rb knife/${CHEF_ORGANIZATION}-credentials
+     cp knife/${OLD_CHEF_ORGANIZATION}-credentials/${CHEF_USER}.pem knife/${CHEF_ORGANIZATION}-credentials/
+     ```
+
+ * create AWS account
+   - [sign up for AWS + credit card + password]
+   - make IAM users for admins
+   - add your IAM keys into your {credentials}/knife-user
+
+ * create opscode account
+   - download org keys, put in the credentials repo
+
+ ## Populate Chef Server
+
+ * create `prod` and `dev` environments by using
+
+   ```
+   knife environment create dev
+   knife environment create prod
+   knife environment from file environments/dev.json
+   knife environment from file environments/prod.json
+   ```
+
+   ```
+   knife cookbook upload --all
+   rake roles
+   # if you have data bags, do that too
+   ```
+
+ ## Create Your Initial Machine Boot-Image (AMI)
+
+ * Start by launching the burninator cluster: `knife cluster launch --bootstrap --yes burninator-trogdor-0`
+   - You may have to specify the template by adding this as an argument: `--template-file ${CHEF_HOMEBASE}/vendor/ironfan/lib/chef/knife/bootstrap/ubuntu10.04-ironfan.erb`
+   - This template makes the machine auto-connect to the server upon launch and teleports the client-key into the machine.
+   - If this fails, bootstrap separately: `knife cluster bootstrap --yes burninator-trogdor-0`
+
+ * Log into the burninator-trogdor and run the script /tmp/burn_ami_prep.sh: `sudo bash /tmp/burn_ami_prep.sh`
+   - You will have to ssh in as the ubuntu user and pass in the burninator.pem identity file.
+   - Review the output of this script and ensure the world we have created is sane.
+
+ * Once the script has been run:
+   - Exit the machine.
+   - Go to the AWS console.
+   - DO NOT stop the machine.
+   - Do "Create Image (EBS AMI)" from the burninator-trogdor instance (may take a while).
+
+ * Add the AMI id to your `{credentials}/knife-org.rb` in the `ec2_image_info.merge!` section and create a reference name for the image (e.g. ironfan-natty).
+   - Add that reference name to the burninator-village facet in the burninator.rb cluster definition: `cloud.image_name 'ironfan_natty'`
+
+ * Launch the burninator-village in order to test your newly created AMI.
+   - The village should launch with no problems, have the correct permissions, and be able to complete a chef run: `sudo chef-client`.
+
+ * If all has gone well so far, you may now stop the original burninator: `knife cluster kill burninator-trogdor`
+   - Leave the burninator-village up and stay ssh'ed in to assist with the next step.
+
+ ## Create an NFS
+
+ * Make a command/control cluster definition file with an nfs facet (see clusters/demo_cnc.rb).
+   - Make sure to specify the `image_name` to be the AMI you've created.
+
+ * In the AWS console make yourself a 20GB drive.
+   - Make sure the availability zone matches the one specified in your cnc_cluster definition file.
+   - Don't choose a snapshot.
+   - Set the device name to `/dev/sdh`.
+   - Attach it to the burninator-village instance.
+
+ * ssh in to burninator-village to format the nfs drive:
+
+   ```
+   dev=/dev/xvdh ; name='home_drive' ; sudo umount $dev ; ls -l $dev ; sudo mkfs.xfs $dev ; sudo mkdir /mnt/$name ; sudo mount -t xfs $dev /mnt/$name ; sudo bash -c "echo 'snapshot for $name burned on `date`' > /mnt/$name/vol_info.txt "
+   sudo cp -rp /home/ubuntu /mnt/$name/ubuntu
+   sudo umount /dev/xvdh
+   exit
+   ```
+
+ * Back in the AWS console, snapshot the volume and name it {org}-home_drive. Delete the original volume, as it is not needed anymore.
+   - While you're in there, make {org}-resizable_1gb a 'Minimum-sized snapshot, resizable -- use xfs_growfs to resize after launch' snapshot.
+
+ * Paste the snapshot id into your cnc_cluster definition file.
+   - ssh into the newly launched cnc_cluster-nfs.
+   - You should restart the machine via the AWS console (may or may not be necessary, but do it anyway).
+
+ * Manipulate security groups
+   - the nfs_server group should open all UDP ports and all TCP ports to the nfs_client group
+
+ * Change /etc/ssh/sshd_config to be passwordful and restart the ssh service