cluster_chef 3.0.10 → 3.0.11

data/CHANGELOG.md CHANGED
@@ -1,3 +1,17 @@
+ ## v3.0.11: We Raid at Dawn
+
+ * You can now assemble raid groups in the cluster definition:
+ - node metadata instructs the volumes recipe to build the raid volume
+ - component volumes are marked non-mountable, assigned to the appropriate raid group, etc.
+ * Changed the order of `cluster_role` and `facet_role` in the run list. It now goes:
+ - `:first` roles (cluster then facet)
+ - `:normal` roles (cluster then facet)
+ - special `:own` roles: `cluster_role` then `facet_role`
+ - `:last` roles (cluster then facet)
+ * `knife cluster launch` uses ClusterBootstrap, not knife's vanilla bootstrap.
+ * You can now say `group('group_that_wants').authorized_by_group('group_that_grants')`, so that from cluster A you can request access to cluster B without gaining its group yourself.
+ * Push the organization (if set) into the node metadata.
+
  ## v3.0.10: Cloud fixes

  * security groups are now created/updated in knife cluster sync. This can't help you apply them to a node after launch though -- nothing can, the API doesn't allow it.
data/TODO.md CHANGED
@@ -6,3 +6,11 @@
  * knife cluster kick fails if service isn't running
  * make clear directions for installing `cluster_chef` and its initial use.
  * knife cluster launch should fail differently if you give it a facet that doesn't exist
+
+
+
+ ### ssh_user, ssh_identity_file, keypair, template should be set by cluster except when they shouldn't
+
+ ### Organization-specific homebase files
+
+ The current layout of the homebase needs to better scope organization-specific customizations.
data/VERSION CHANGED
@@ -1 +1 @@
- 3.0.10
+ 3.0.11
data/cluster_chef.gemspec CHANGED
@@ -5,11 +5,11 @@
 
  Gem::Specification.new do |s|
  s.name = "cluster_chef"
- s.version = "3.0.10"
+ s.version = "3.0.11"

  s.required_rubygems_version = Gem::Requirement.new(">= 0") if s.respond_to? :required_rubygems_version=
  s.authors = ["Infochimps"]
- s.date = "2012-01-17"
+ s.date = "2012-01-24"
  s.description = "cluster_chef allows you to orchestrate not just systems but clusters of machines. It includes a powerful layer on top of knife and a collection of cloud cookbooks."
  s.email = "coders@infochimps.com"
  s.extra_rdoc_files = [
@@ -84,7 +84,7 @@ Gem::Specification.new do |s|
  s.add_development_dependency(%q<rspec>, ["~> 2.5"])
  s.add_development_dependency(%q<yard>, ["~> 0.6"])
  s.add_development_dependency(%q<configliere>, ["~> 0.4.8"])
- s.add_runtime_dependency(%q<cluster_chef-knife>, ["= 3.0.10"])
+ s.add_runtime_dependency(%q<cluster_chef-knife>, ["= 3.0.11"])
  else
  s.add_dependency(%q<chef>, ["~> 0.10.4"])
  s.add_dependency(%q<fog>, ["~> 1.1.1"])
@@ -95,7 +95,7 @@ Gem::Specification.new do |s|
  s.add_dependency(%q<rspec>, ["~> 2.5"])
  s.add_dependency(%q<yard>, ["~> 0.6"])
  s.add_dependency(%q<configliere>, ["~> 0.4.8"])
- s.add_dependency(%q<cluster_chef-knife>, ["= 3.0.10"])
+ s.add_dependency(%q<cluster_chef-knife>, ["= 3.0.11"])
  end
  else
  s.add_dependency(%q<chef>, ["~> 0.10.4"])
@@ -107,7 +107,7 @@ Gem::Specification.new do |s|
  s.add_dependency(%q<rspec>, ["~> 2.5"])
  s.add_dependency(%q<yard>, ["~> 0.6"])
  s.add_dependency(%q<configliere>, ["~> 0.4.8"])
- s.add_dependency(%q<cluster_chef-knife>, ["= 3.0.10"])
+ s.add_dependency(%q<cluster_chef-knife>, ["= 3.0.11"])
  end
  end

@@ -198,13 +198,14 @@ module ClusterChef
  def set_chef_node_attributes
  step(" setting node runlist and essential attributes")
  @chef_node.run_list = Chef::RunList.new(*@settings[:run_list])
+ @chef_node.normal[:organization] = Chef::Config[:organization] if Chef::Config[:organization]
  @chef_node.override[:cluster_name] = cluster_name
  @chef_node.override[:facet_name] = facet_name
  @chef_node.override[:facet_index] = facet_index
  end

  def set_chef_node_environment
- @chef_node.chef_environment(environment.to_s)
+ @chef_node.chef_environment(environment.to_s) if environment.present?
  end

  #
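Since the new attribute is read straight from `Chef::Config[:organization]`, a single line in knife.rb is enough to push it into every node. A minimal sketch, with a made-up organization name:

```ruby
# knife.rb -- 'yourco' is a placeholder; when set, cluster_chef copies
# Chef::Config[:organization] into node[:organization] as a normal attribute
organization 'yourco'
```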
@@ -111,7 +111,7 @@ module ClusterChef
  def create_cluster_role
  @cluster_role_name = "#{name}_cluster"
  @cluster_role = new_chef_role(@cluster_role_name, cluster)
- role(@cluster_role_name, :last)
+ role(@cluster_role_name, :own)
  end

  end
@@ -70,6 +70,16 @@ module ClusterChef
  volumes[volume_name]
  end

+ def raid_group(rg_name, attrs={}, &block)
+ volumes[rg_name] ||= ClusterChef::RaidGroup.new(:parent => self, :name => rg_name)
+ volumes[rg_name].configure(attrs, &block)
+ volumes[rg_name].sub_volumes.each do |sv_name|
+ volume(sv_name){ in_raid(rg_name) ; mountable(false) ; tags({}) }
+ end
+ volumes[rg_name]
+ end
+
  def root_volume(attrs={}, &block)
  volume(:root, attrs, &block)
  end
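A sketch of the resulting cluster-definition DSL -- the facet name, device paths, and sizes are illustrative, not taken from this diff:

```ruby
# hypothetical facet: build a RAID-0 array from two EBS volumes.
# raid_group marks :ebs1 and :ebs2 as in_raid(:md0) and mountable(false);
# the volumes recipe then assembles the array from that node metadata.
facet :dbnode do
  raid_group(:md0) do
    device      '/dev/md0'
    mount_point '/data'
    level       0                 # RAID level
    sub_volumes [:ebs1, :ebs2]
  end
  volume(:ebs1){ device '/dev/sdh1' ; size 100 }
  volume(:ebs2){ device '/dev/sdh2' ; size 100 }
end
```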
@@ -134,7 +144,7 @@ module ClusterChef
  protected

  def add_to_run_list(item, placement)
- raise "run_list placement must be one of :first, :normal, :last or nil (also means :normal)" unless [:first, :last, nil].include?(placement)
+ raise "run_list placement must be one of :first, :normal, :own, :last or nil (nil means :normal)" unless [:first, :normal, :own, :last, nil].include?(placement)
  @@run_list_rank += 1
  placement ||= :normal
  @run_list_info[item] ||= { :rank => @@run_list_rank, :placement => placement }
@@ -119,7 +119,7 @@ module ClusterChef
  def create_facet_role
  @facet_role_name = "#{cluster_name}_#{facet_name}"
  @facet_role = new_chef_role(@facet_role_name, cluster, self)
- role(@facet_role_name, :last)
+ role(@facet_role_name, :own)
  end

  #
@@ -74,6 +74,27 @@ module ClusterChef
  end
  end

+ class DataBagKey < PrivateKey
+ def body
+ @body
+ end
+
+ def random_token
+ require "digest/sha2"
+ digest = Digest::SHA512.hexdigest( Time.now.to_s + (1..10).collect{ rand.to_s }.join )
+ 5.times{ digest = Digest::SHA512.hexdigest(digest) }
+ digest
+ end
+
+ def key_dir
+ return Chef::Config.data_bag_key_dir if Chef::Config.data_bag_key_dir
+ dir = "#{ENV['HOME']}/.chef/data_bag_keys"
+ warn "Please set 'data_bag_key_dir' in your knife.rb. Will use #{dir} as a default"
+ dir
+ end
+ end
+
  class Ec2Keypair < PrivateKey
  def body
  return @body if @body
@@ -100,7 +121,7 @@ module ClusterChef
  return Chef::Config.ec2_key_dir
  else
  dir = "#{ENV['HOME']}/.chef/ec2_keys"
- warn "Please set 'ec2_key_dir' in your knife.rb -- using #{dir} as a default"
+ warn "Please set 'ec2_key_dir' in your knife.rb. Will use #{dir} as a default"
  dir
  end
  end
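Both key classes warn you to pin the directory down in knife.rb; the settings are read off `Chef::Config`, so the config lines are simply:

```ruby
# knife.rb -- the paths shown are just the defaults the code above falls back to
ec2_key_dir      "#{ENV['HOME']}/.chef/ec2_keys"
data_bag_key_dir "#{ENV['HOME']}/.chef/data_bag_keys"
```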
@@ -12,8 +12,9 @@ module ClusterChef
  description group_description || "cluster_chef generated group #{group_name}"
  @cloud = cloud
  @group_authorizations = []
+ @group_authorized_by = []
  @range_authorizations = []
- owner_id group_owner_id || Chef::Config[:knife][:aws_account_id]
+ owner_id(group_owner_id || Chef::Config[:knife][:aws_account_id])
  end

  @@all = nil
@@ -26,8 +27,8 @@ module ClusterChef
  end
  def self.get_all
  groups_list = ClusterChef.fog_connection.security_groups.all
- @@all = groups_list.inject(Mash.new) do |hsh, group|
- hsh[group.name] = group ; hsh
+ @@all = groups_list.inject(Mash.new) do |hsh, fog_group|
+ hsh[fog_group.name] = fog_group ; hsh
  end
  end

@@ -35,58 +36,73 @@ module ClusterChef
  all[name] || ClusterChef.fog_connection.security_groups.get(name)
  end

- def self.get_or_create group_name, description
- group = all[group_name] || ClusterChef.fog_connection.security_groups.get(group_name)
- if ! group
- self.step(group_name, "creating (#{description})", :blue)
- group = all[group_name] = ClusterChef.fog_connection.security_groups.new(:name => group_name, :description => description, :connection => ClusterChef.fog_connection)
- group.save
+ def self.get_or_create(group_name, description)
+ # FIXME: the '|| ClusterChef.fog' part is probably unnecessary
+ fog_group = all[group_name] || ClusterChef.fog_connection.security_groups.get(group_name)
+ unless fog_group
+ self.step(group_name, "creating (#{description})", :green)
+ fog_group = all[group_name] = ClusterChef.fog_connection.security_groups.new(:name => group_name, :description => description, :connection => ClusterChef.fog_connection)
+ fog_group.save
  end
- group
+ fog_group
  end

- def authorize_group_and_owner group, owner_id=nil
- @group_authorizations << [group.to_s, owner_id]
+ def authorize_group(group_name, owner_id=nil)
+ @group_authorizations << [group_name.to_s, owner_id]
  end

- # Alias for authorize_group_and_owner
- def authorize_group *args
- authorize_group_and_owner *args
+ def authorized_by_group(other_name)
+ @group_authorized_by << [other_name.to_s, nil]
  end

- def authorize_port_range range, cidr_ip = '0.0.0.0/0', ip_protocol = 'tcp'
+ def authorize_port_range(range, cidr_ip = '0.0.0.0/0', ip_protocol = 'tcp')
  range = (range .. range) if range.is_a?(Integer)
  @range_authorizations << [range, cidr_ip, ip_protocol]
  end

- def group_permission_already_set? group, authed_group, authed_owner
- return false if group.ip_permissions.nil?
- group.ip_permissions.any? do |existing_permission|
- existing_permission["groups"].include?({"userId"=>authed_owner, "groupName"=>authed_group}) &&
+ def group_permission_already_set?(fog_group, other_name, authed_owner)
+ return false if fog_group.ip_permissions.nil?
+ fog_group.ip_permissions.any? do |existing_permission|
+ existing_permission["groups"].include?({"userId" => authed_owner, "groupName" => other_name}) &&
  existing_permission["fromPort"] == 1 &&
- existing_permission["toPort"] == 65535
+ existing_permission["toPort"] == 65535
  end
  end

- def range_permission_already_set? group, range, cidr_ip, ip_protocol
- return false if group.ip_permissions.nil?
- group.ip_permissions.include?({"groups"=>[], "ipRanges"=>[{"cidrIp"=>cidr_ip}], "ipProtocol"=>ip_protocol, "fromPort"=>range.first, "toPort"=>range.last})
+ def range_permission_already_set?(fog_group, range, cidr_ip, ip_protocol)
+ return false if fog_group.ip_permissions.nil?
+ fog_group.ip_permissions.include?(
+ { "groups"=>[], "ipRanges"=>[{"cidrIp"=>cidr_ip}],
+ "ipProtocol"=>ip_protocol, "fromPort"=>range.first, "toPort"=>range.last})
  end

+ # FIXME: so if you're saying to yourself, "self, this is some soupy gooey
+ # code right here" then you and your self are correct. Much of this is to
+ # work around old limitations in the EC2 api. You can now treat range and
+ # group permissions the same, and we should.
+
  def run
- group = self.class.get_or_create name, description
- @group_authorizations.uniq.each do |authed_group, authed_owner|
+ fog_group = self.class.get_or_create(name, description)
+ @group_authorizations.uniq.each do |other_name, authed_owner|
  authed_owner ||= self.owner_id
- next if group_permission_already_set?(group, authed_group, authed_owner)
- step("authorizing access from all machines in #{authed_group}", :blue)
- self.class.get_or_create(authed_group, "Authorized to access nfs server")
- begin group.authorize_group_and_owner(authed_group, authed_owner)
+ next if group_permission_already_set?(fog_group, other_name, authed_owner)
+ step("authorizing access from all machines in #{other_name} to #{name}", :blue)
+ self.class.get_or_create(other_name, "Authorized to access #{name}")
+ begin fog_group.authorize_group_and_owner(other_name, authed_owner)
+ rescue StandardError => e ; ui.warn e ; end
+ end
+ @group_authorized_by.uniq.each do |other_name, _|
+ authed_owner = self.owner_id
+ other_group = self.class.get_or_create(other_name, "Authorized for access by #{self.name}")
+ next if group_permission_already_set?(other_group, self.name, authed_owner)
+ step("authorizing access to all machines in #{other_name} from #{name}", :blue)
+ begin other_group.authorize_group_and_owner(self.name, authed_owner)
  rescue StandardError => e ; ui.warn e ; end
  end
  @range_authorizations.uniq.each do |range, cidr_ip, ip_protocol|
- next if range_permission_already_set?(group, range, cidr_ip, ip_protocol)
+ next if range_permission_already_set?(fog_group, range, cidr_ip, ip_protocol)
  step("opening #{ip_protocol} ports #{range} to #{cidr_ip}", :blue)
- begin group.authorize_port_range(range, { :cidr_ip => cidr_ip, :ip_protocol => ip_protocol })
+ begin fog_group.authorize_port_range(range, { :cidr_ip => cidr_ip, :ip_protocol => ip_protocol })
  rescue StandardError => e ; ui.warn e ; end
  end
  end
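Putting the new pieces together, a sketch of how a cluster definition might use them -- assuming the surrounding cloud DSL exposes this class as `security_group` (group names here are illustrative):

```ruby
# hypothetical definition for cluster_a
security_group 'cluster_a' do
  authorize_port_range 22                        # tcp from 0.0.0.0/0, per the defaults
  authorize_port_range 6000..6999, '10.0.0.0/8'
  authorize_group 'cluster_a'                    # members may reach each other
  # ask cluster_b's group to grant cluster_a access, without cluster_a's
  # machines ever joining cluster_b's group:
  authorized_by_group 'cluster_b'
end
```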
@@ -136,6 +136,7 @@ module ClusterChef
  #
  # * run_list :first items -- cluster then facet then server
  # * run_list :normal items -- cluster then facet then server
+ # * own roles: cluster_role then facet_role
  # * run_list :last items -- cluster then facet then server
  #
  # ClusterChef.cluster(:my_cluster) do
@@ -152,11 +153,11 @@ module ClusterChef
  # end
  #
  # produces
- # cluster list [a] [c] [fg]
- # facet list [b] [de] [h]
+ # cluster list [a] [c] [cluster_role] [fg]
+ # facet list [b] [de] [facet_role] [h]
  #
  # yielding run_list
- # ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h']
+ # ['a', 'b', 'c', 'd', 'e', 'cr', 'fr', 'f', 'g', 'h']
  #
  # Avoid duplicate conflicting declarations. If you define things more
  # than once, the *earliest encountered* one wins, even if it is elsewhere
@@ -168,6 +169,7 @@ module ClusterChef
  sg = self.run_list_groups
  [ cg[:first], fg[:first], sg[:first],
  cg[:normal], fg[:normal], sg[:normal],
+ cg[:own], fg[:own],
  cg[:last], fg[:last], sg[:last], ].flatten.compact.uniq
  end

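A sketch of how the placements land, using made-up role names (recall the cluster role is named `<cluster>_cluster` and the facet role `<cluster>_<facet>`):

```ruby
# hypothetical cluster: each role call tags its placement slot
ClusterChef.cluster 'demo' do
  role 'base',      :first     # cg[:first]
  role 'monitoring'            # cg[:normal] (nil placement)
  facet 'web' do
    role 'nginx'               # fg[:normal]
    role 'cleanup', :last      # fg[:last]
  end
end
# composite run list:
#   base, monitoring, nginx, demo_cluster, demo_web, cleanup
```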
@@ -9,6 +9,7 @@ module ClusterChef
  :name,
  # mountable volume attributes
  :device, :mount_point, :mount_options, :fstype, :mount_dump, :mount_pass,
+ :mountable, :formattable, :resizable, :in_raid,
  # cloud volume attributes
  :attachable, :create_at_launch, :volume_id, :snapshot_id, :size, :keep, :availability_zone,
  # arbitrary tags
@@ -18,11 +19,29 @@ module ClusterChef
  VOLUME_DEFAULTS = {
  :fstype => 'xfs',
  :mount_options => 'defaults,nouuid,noatime',
+ :keep => true,
  :attachable => :ebs,
  :create_at_launch => false,
- :keep => true,
+ #
+ :mountable => true,
+ :resizable => false,
+ :formattable => false,
+ :in_raid => false,
  }

+ # Snapshot IDs for the snapshot_name method.
+ # Set your own by adding
+ #
+ # VOLUME_IDS = Mash.new unless defined?(VOLUME_IDS)
+ # VOLUME_IDS.merge!({ :your_id => 'snap-whatever' })
+ #
+ # to your organization's knife.rb
+ #
+ VOLUME_IDS = Mash.new unless defined?(VOLUME_IDS)
+ VOLUME_IDS.merge!({
+ :blank_xfs => 'snap-d9c1edb1',
+ })
+
  # Describes a volume
  #
  # @example
@@ -50,6 +69,13 @@ module ClusterChef
  volume_id =~ /^ephemeral/
  end

+ # Named snapshots, as defined in ClusterChef::Volume::VOLUME_IDS
+ def snapshot_name(name)
+ snap_id = VOLUME_IDS[name.to_sym]
+ raise "Unknown snapshot name #{name} - is it defined in ClusterChef::Volume::VOLUME_IDS?" unless snap_id
+ self.snapshot_id(snap_id)
+ end
+
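In a volume declaration that looks like this (volume name, device, and size are illustrative):

```ruby
# hypothetical volume seeded from the built-in :blank_xfs snapshot
volume(:data) do
  device           '/dev/sdi'
  mount_point      '/data'
  size             200
  snapshot_name    :blank_xfs   # resolves to 'snap-d9c1edb1' via VOLUME_IDS
  create_at_launch true         # see the block-device mapping logic below
end
```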
  # With snapshot specified but volume missing, have it auto-created at launch
  #
  # Be careful with this -- you can end up with multiple volumes claiming to
@@ -79,10 +105,10 @@ module ClusterChef
  if ephemeral_device?
  hsh['VirtualName'] = volume_id
  elsif create_at_launch?
- hsh.merge!({
- 'Ebs.SnapshotId' => snapshot_id,
- 'Ebs.VolumeSize' => size,
- 'Ebs.DeleteOnTermination' => (! keep).to_s })
+ raise "Must specify a size or a snapshot ID for #{self}" if snapshot_id.blank? && size.blank?
+ hsh['Ebs.SnapshotId'] = snapshot_id if snapshot_id.present?
+ hsh['Ebs.VolumeSize'] = size.to_s if size.present?
+ hsh['Ebs.DeleteOnTermination'] = (! keep).to_s
  else
  return
  end
@@ -90,4 +116,42 @@ module ClusterChef
  end

  end
+
+
+ #
+ # Consider raising the chunk size to 256 and setting read_ahead 65536 if you are raid'ing EBS volumes
+ #
+ # * http://victortrac.com/EC2_Ephemeral_Disks_vs_EBS_Volumes
+ # * http://orion.heroku.com/past/2009/7/29/io_performance_on_ebs/
+ # * http://tech.blog.greplin.com/aws-best-practices-and-benchmarks
+ # * http://stu.mp/2009/12/disk-io-and-throughput-benchmarks-on-amazons-ec2.html
+ #
+ class RaidGroup < Volume
+ has_keys(
+ :sub_volumes, # volumes that comprise this raid group
+ :level, # RAID level (http://en.wikipedia.org/wiki/RAID#Standard_levels)
+ :chunk, # Raid chunk size (https://raid.wiki.kernel.org/articles/r/a/i/RAID_setup_cbb2.html)
+ :read_ahead, # read-ahead buffer
+ )
+
+ def desc
+ "#{name} on #{parent.fullname} (#{volume_id} @ #{device} from #{sub_volumes.join(',')})"
+ end
+
+ def defaults
+ super
+ fstype 'xfs'
+ mount_options "defaults,nobootwait,noatime,nouuid,comment=cluster_chef"
+ attachable false
+ create_at_launch false
+ #
+ mountable true
+ resizable false
+ formattable true
+ #
+ in_raid false
+ #
+ sub_volumes []
+ end
+ end
  end
metadata CHANGED
@@ -2,7 +2,7 @@
  name: cluster_chef
  version: !ruby/object:Gem::Version
  prerelease:
- version: 3.0.10
+ version: 3.0.11
  platform: ruby
  authors:
  - Infochimps
@@ -10,7 +10,7 @@ autorequire:
  bindir: bin
  cert_chain: []

- date: 2012-01-17 00:00:00 Z
+ date: 2012-01-24 00:00:00 Z
  dependencies:
  - !ruby/object:Gem::Dependency
  name: chef
@@ -118,7 +118,7 @@ dependencies:
  requirements:
  - - "="
  - !ruby/object:Gem::Version
- version: 3.0.10
+ version: 3.0.11
  type: :runtime
  prerelease: false
  version_requirements: *id010
@@ -191,7 +191,7 @@ required_ruby_version: !ruby/object:Gem::Requirement
  requirements:
  - - ">="
  - !ruby/object:Gem::Version
- hash: 905624580831723867
+ hash: 2883280125648369733
  segments:
  - 0
  version: "0"