ironfan 4.7.6 → 4.7.7

data/CHANGELOG.md CHANGED
@@ -1,3 +1,6 @@
+ # v4.7.7
+ * Allow per-ephemeral-disk options using :disks attribute (thanks @nickmarden)
+
  # v4.7.6
  * adding chef-client-nonce invocation to knife cluster kick
 
data/VERSION CHANGED
@@ -1 +1 @@
- 4.7.6
+ 4.7.7
data/ironfan.gemspec CHANGED
@@ -5,11 +5,11 @@
 
  Gem::Specification.new do |s|
  s.name = "ironfan"
- s.version = "4.7.6"
+ s.version = "4.7.7"
 
  s.required_rubygems_version = Gem::Requirement.new(">= 0") if s.respond_to? :required_rubygems_version=
  s.authors = ["Infochimps"]
- s.date = "2013-01-25"
+ s.date = "2013-01-31"
  s.description = "Ironfan allows you to orchestrate not just systems but clusters of machines. It includes a powerful layer on top of knife and a collection of cloud cookbooks."
  s.email = "coders@infochimps.com"
  s.extra_rdoc_files = [
@@ -90,10 +90,12 @@ Gem::Specification.new do |s|
  "lib/ironfan/provider/virtualbox.rb",
  "lib/ironfan/provider/virtualbox/machine.rb",
  "lib/ironfan/requirements.rb",
+ "notes/Future-development-proposals.md",
  "notes/Home.md",
  "notes/INSTALL-cloud_setup.md",
  "notes/INSTALL.md",
  "notes/Ironfan-Roadmap.md",
+ "notes/Upgrading-to-v4.md",
  "notes/advanced-superpowers.md",
  "notes/aws_servers.jpg",
  "notes/aws_user_key.png",
@@ -117,7 +119,6 @@ Gem::Specification.new do |s|
  "notes/silverware.md",
  "notes/style_guide.md",
  "notes/tips_and_troubleshooting.md",
- "notes/version-3_2.md",
  "notes/walkthrough-hadoop.md",
  "notes/walkthrough-web.md",
  "spec/chef/cluster_bootstrap_spec.rb",
@@ -104,7 +104,13 @@ module Ironfan
  mount_options 'defaults,noatime'
  tags({:bulk => true, :local => true, :fallback => true})
  end
- ephemeral.receive! mount_ephemerals
+ ephemeral_attrs = mount_ephemerals.clone
+ if ephemeral_attrs.has_key?(:disks)
+ disk_attrs = mount_ephemerals[:disks][idx] || { }
+ ephemeral_attrs.delete(:disks)
+ ephemeral_attrs.merge!(disk_attrs)
+ end
+ ephemeral.receive! ephemeral_attrs
  result << ephemeral
  end
  result
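The hunk above is the substance of this release: per-disk overrides from a `:disks` hash are merged over the shared ephemeral settings before each volume receives them. The following is a minimal standalone sketch of that merge behavior; `ephemeral_attrs_for` is a hypothetical helper name for illustration (in Ironfan the logic runs inline while building each ephemeral volume, with `idx` as the disk index).

```ruby
# Sketch of the per-disk merge introduced in this release.
# `ephemeral_attrs_for` is a hypothetical name; the real logic is inline.
def ephemeral_attrs_for(mount_ephemerals, idx)
  attrs = mount_ephemerals.clone
  if attrs.has_key?(:disks)
    # Overrides for this disk index win over the shared defaults;
    # disks without an entry keep the defaults unchanged.
    disk_attrs = mount_ephemerals[:disks][idx] || {}
    attrs.delete(:disks)
    attrs.merge!(disk_attrs)
  end
  attrs
end

shared = { :mount_options => 'defaults,noatime',
           :disks => { 0 => { :mount_point => '/data' } } }

p ephemeral_attrs_for(shared, 0)  # disk 0 picks up the :mount_point override
p ephemeral_attrs_for(shared, 1)  # disk 1 keeps only the shared defaults
```

Cloning before deleting `:disks` keeps the shared `mount_ephemerals` hash untouched from one disk to the next.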
@@ -1,16 +1,9 @@
+ ## Nathan
+ * **Clusters from JSON** - this is theoretically quite easy, given the DSL's gorillib underpinnings.
 
- # v3.2.0 (future): Revamped undercarriage, spec coverage, standalone usage
-
- This is a Snow Leopard-style version change. No new features to speak of, but a much more solid and predictable foundation.
-
- * **significantly cleaner DSL mixin**: uses the new, awesome `Gorillib::Builder`, giving it a much cleaner handling of fields and collections
-
- * **attributes are late-resolved**: in previous versions, the way you 'resolved' a server was to collapse the entire attribute set of cluster/facet/server hard onto the server model, a consistent source of bugs. Resolution is now done with the `Gorillib::Record::Overlay` mechanism, which means that you can set an attribute on the cluster and read it from the facet; change it later and all lower layers see the update.
-
+ ## Flip
  * **standalone usable**: can use ironfan-knife as a standalone library.
 
- # v3.3.x (future): Coherent universe of Servers, Components, Aspects
-
  * **spec coverage**:
 
  * **coherent data model**:
data/notes/Home.md CHANGED
@@ -1,3 +1,8 @@
+ >## **Ironfan: A Community Discussion Webinar**
+ **<p>Thursday, January 31 @ 10a P, 12p C, 1p E</p>**
+ Join Nathaniel Eliot, @temujin9, DevOps Engineer and lead on Ironfan, in this community discussion. Ironfan is a lightweight cluster orchestration toolset, built on top of Chef, which empowers spinning up of Hadoop clusters in under 20 minutes. Nathan has been responsible for Ironfan’s core plugin code, cookbooks, and other components to stabilize both Infochimps’ open source offerings, and internal architectures.
+ [Register Now](https://www4.gotomeeting.com/register/188375087)
+
  ## Overview
 
  Ironfan, the foundation of The Infochimps Platform, is an expressive toolset for constructing scalable, resilient architectures. It works in the cloud, in the data center, and on your laptop, and it makes your system diagram visible and inevitable. Inevitable systems coordinate automatically to interconnect, removing the hassle of manual configuration of connection points (and the associated danger of human error). For more information about Ironfan and the Infochimps Platform, visit [infochimps.com](https://www.infochimps.com).
@@ -0,0 +1,66 @@
+ While the refactoring that led to version 4 was intended to be as backwards compatible as possible, there have been some small but important changes to the way homebases and the DSL work.
+
+ ## Bundler
+ Ironfan v4 uses Bundler to manage its dependencies. In order to take advantage of it, the homebase's Gemfile should be updated to use ```gem 'ironfan', "~> 4.0"```
+
+ We highly recommend that you run all your knife commands via `bundle exec`. This can be accomplished with an alias:
+ ```
+ knife() {
+ bundle exec knife "$@"
+ }
+ ```
+
+ If you are comfortable with having bundle run every knife command (e.g. you only have one homebase, or are using Ironfan > 3.1.6 for all homebases you do use), you can add the above snippet to your .bashrc.
+
+ ## Vagrant
+ Vagrant support has been discontinued for the time being. One of the first targets for the multicloud capabilities of Ironfan v4 will be a VirtualBox or Vagrant extension.
+
+ ## DSL Changes
+ ### Role implications removed
+ In v3, certain roles could trigger further steps via role_implications.rb, which was used to add servers to corresponding EC2 Security Groups. This was deemed too risky and indirect, and has been removed for now. (A better mechanism for binding roles and provider-specific resources into repeatable components is being worked on.)
+
+ If you used any of the roles below, you will probably want to add the following stanzas next to them in the clusters file, to replace the removed implications. **Be aware that EC2 instances can only be added to a security group at startup; if you fail to add the security groups before launch, you will have to kill and relaunch the machines to change them.**
+
+ * `role :systemwide`
+ ```
+ cloud(:ec2).security_group :systemwide
+ ```
+ * `role :nfs_server`
+ ```
+ cloud(:ec2).security_group(:nfs_server).authorize_group :nfs_client
+ ```
+ * `role :nfs_client`
+ ```
+ cloud(:ec2).security_group :nfs_client
+ ```
+ * `role :ssh`
+ ```
+ cloud(:ec2).security_group(:ssh).authorize_port_range 22..22
+ ```
+ * `role :chef_server`
+ ```
+ cloud(:ec2).security_group :chef_server do
+ authorize_port_range 4000..4000 # chef-server-api
+ authorize_port_range 4040..4040 # chef-server-webui
+ end
+ ```
+ * `role :web_server`
+ ```
+ cloud(:ec2).security_group("#{self.cluster_name}-web_server") do
+ authorize_port_range 80..80
+ authorize_port_range 443..443
+ end
+ ```
+ * `role :redis_server`
+ ```
+ cloud(:ec2).security_group("#{self.cluster_name}-redis_server") do
+ authorize_group("#{self.cluster_name}-redis_client")
+ end
+ ```
+ * `role :redis_client`
+ ```
+ cloud(:ec2).security_group("#{self.cluster_name}-redis_client")
+ ```
+
+ ### Default statements removed
+ Defaults should not need to be selected, and have been removed as a statement from the cluster DSL (in both cluster and volume). Although this is a non-breaking change, it has been flagged to raise a halting error, to alert people to the role_implications change above (which lacks well-defined indicators of its usage).
@@ -1,35 +1,37 @@
  ## Compute Costs
 
 
- code $/mo $/day $/hr CPU/$ Mem/$ mem cpu cores cpcore storage bits IO type name
- t1.micro 15 0.48 .02 13 13 0.61 0.25 0.25 1 0 32 Low Micro Micro
- m1.small 58 1.92 .08 13 21 1.7 1 1 1 160 32 Moderate Standard Small
- m1.medium 116 3.84 .165 13 13 3.75 2 2 1 410 32 Moderate Standard Medium
- c1.medium 120 3.96 .17 30 10 1.7 5 2 2.5 350 32 Moderate High-CPU Medium
- m1.large 232 7.68 .32 13 23 7.5 4 2 2 850 64 High Standard Large
- m2.xlarge 327 10.80 .45 14 38 17.1 6.5 2 3.25 420 64 Moderate High-Memory Extra Large
- m1.xlarge 465 15.36 .64 13 23 15 8 4 2 1690 64 High Standard Extra Large
- c1.xlarge 479 15.84 .66 30 11 7 20 8 2.5 1690 64 High High-CPU Extra Large
- m2.2xlarge 653 21.60 .90 14 38 34.2 13 4 3.25 850 64 High High-Memory Double Extra Large
- cc1.4xlarge 944 31.20 1.30 26 18 23 33.5 2 16.75 1690 64 10GB Compute Quadruple Extra Large
- m2.4xlarge 1307 43.20 1.80 14 38 68.4 26 8 3.25 1690 64 High High-Memory Quadruple Extra Large
- cg1.4xlarge 1525 50.40 2.10 16 10 22 33.5 2 16.75 1690 64 10GB Cluster GPU Quadruple Extra Large
- cc2.8xlarge 1742 57.60 2.40 37 25 60.5 88 2 44 3370 64 10GB Compute Eight Extra Large
-
- dummy header ln 15 0.48 0.02 12345 12345 0.61 0.25 0.25 1.00 6712345 32123 Low Micro Micro
-
+ code $/mo $/day $/hr Mem/$ CPU/$ mem cpu cores cpcore storage disks bits ebs-opt IO
+ t1.micro 15 0.48 .02 13 13 0.61 0.25 0.25 1 0 0 32 - Low
+ m1.small 47 1.56 .065 26 15 1.7 1 1 1 160 1 32 - Moderate
+ m1.medium 95 3.12 .13 15 15 3.75 2 2 1 410 1 32 - Moderate
+ c1.medium 124 4.08 .165 10 30 1.7 5 2 2.5 350 1 32 - Moderate
+ m1.large 190 6.24 .26 29 15 7.5 4 2 2 850 2 64 500 High
+ m2.xlarge 329 10.80 .45 38 14 17.1 6.5 2 3.25 420 1 64 - Moderate
+ m1.xlarge 380 12.48 .52 29 15 15 8 4 2 1690 4 64 1000 High
+ m3.xlarge 424 13.92 .58 26 22 15 13 4 3.25 0 0 64 - Moderate
+ c1.xlarge 482 15.84 .66 11 30 7 20 8 2.5 1690 4 64 - High
+ m2.2xlarge 658 21.60 .90 38 14 34.2 13 4 3.25 850 2 64 - High
+ m3.2xlarge 847 27.84 1.16 26 22 30 26 8 3.25 0 0 64 - High
+ cc1.4xlarge 950 31.20 1.30 18 26 23 33.5 8 4.2 1690 4 64 - 10GB
+ m2.4xlarge 1315 43.20 1.80 38 14 68.4 26 8 3.25 1690 2 64 1000 High
+ cg1.4xlarge 1534 50.40 2.10 10 16 22 33.5 8 4.2 1690 4 64 - 10GB
+ cc2.8xlarge 1753 57.60 2.40 25 37 60.5 88 16 5.5 3370 2 64 - 10GB
+ hi1.4xlarge 2265 74.40 3.10 20 11 60.5 35 16 2.2 2048 ssd 2 64 - 10GB
+ cr1.8xlarge 2557 84.00 3.50 70 25 244 88 16 5.5 240 ssd 2 64 - 10GB
+ hs1.8xlarge 3361 110.40 4.60 25 8 117 35 16 2.2 49152 24 64 - 10GB
 
  ## Storage Costs
 
- $/GB..mo $/GB.mo $/Mio
- EBS Volume $0.10
+ $/GB..mo $/GB.mo $/Mio
+ EBS Volume $0.10
  EBS I/O $0.10
- EBS Snapshot S3 $0.083
+ EBS Snapshot S3 $0.083
 
  Std $/GB.mo Red.Red. $/GB.mo
  S3 1st tb $0.125 $0.093
- S3 next 49tb $0.110 $0.083
- S3 next 450tb $0.095 $0.073
+ S3 next 49tb $0.110 $0.083
+ S3 next 450tb $0.095 $0.073
 
  ### Storing 1TB data
 
@@ -51,12 +53,12 @@ NOTE: For current pricing information, be sure to check Amazon EC2 Pricing.
  The cost of an EBS Volume is $0.10/GB per month. You are responsible for paying for the amount of disk space that you reserve, not for the amount of the disk space that you actually use. If you reserve a 1TB volume, but only use 1GB, you will be paying for 1TB.
  * $0.10/GB per month of provisioned storage
  * $0.10/GB per 1 million I/O requests
-
+
  #### Transaction Costs
 
  In addition to the storage cost for EBS Volumes, you will also be charged for I/O transactions. The cost is $0.10 per million I/O transactions, where one transaction is equivalent to one read or write. This number may be smaller than the actual number of transactions performed by your application because of the Linux cache for all file systems.
  $0.10 per 1 million I/O requests
-
+
  #### S3 Snapshot Costs
 
  Snapshot costs are compressed and based on altered blocks from the previous snapshot backup. Files that have altered blocks on the disk and then been deleted will add cost to the Snapshots for example. Remember, snapshots are at the data block level.
@@ -65,5 +67,3 @@ $0.01 per 1,000 PUT requests (when saving a snapshot)
  $0.01 per 10,000 GET requests (when loading a snapshot)
 
  NOTE: Payment charges stop the moment you delete a volume. If you delete a volume and the status appears as "deleting" for an extended period of time, you will not be charged for the time needed to complete the deletion.
-
-
Binary file
@@ -1,5 +1,7 @@
  # Ironfan Knife Commands
 
+ ## Available Commands
+
  Available cluster subcommands: (for details, `knife SUB-COMMAND --help`)
 
  knife cluster list (options) - show available clusters
@@ -15,4 +17,5 @@ Available cluster subcommands: (for details, `knife SUB-COMMAND --help`)
  knife cluster sync CLUSTER-[FACET-[INDEXES]] (options) - Update chef server and cloud machines with current cluster definition
  knife cluster vagrant CMD CLUSTER-[FACET-[INDEXES]] (options) - runs the given command against a vagrant environment created from your cluster definition. EARLY, use at your own risk
 
+ ## Examples
 
@@ -40,7 +40,7 @@ The dbnode facet has one server, which will be named `web_demo-dbnode-0`; the we
 
  Each server inherits the appropriate behaviors from its facet and cluster. All the servers in this cluster have the `base_role`, `chef_client` and `ssh` roles. The dbnode machines additionally house a MySQL server, while the webnodes have an nginx reverse proxy for the custom `web_demo_webapp`.
 
- As you can see, the dbnode facet asks for a different flavor of machine (`m1.large`) than the cluster default (`t1.micro`). Settings in the facet override those in the server, and settings in the server override those of its facet. You economically describe only what's significant about each machine.
+ As you can see, the dbnode facet asks for a different flavor of machine (`m1.large`) than the cluster default (`t1.micro`). Settings in the facet override those in the cluster, and settings in the server override those of its facet. You economically describe only what's significant about each machine.
 
  ### Cluster-level tools
 
@@ -13,6 +13,10 @@ describe Ironfan::Dsl::Cluster do
 
  facet :web do
  instances 3
+ cloud(:ec2) do
+ flavor 'm1.small'
+ mount_ephemerals({ :disks => { 0 => { :mount_point => '/data' } } })
+ end
  end
 
  end
@@ -30,6 +34,10 @@ describe Ironfan::Dsl::Cluster do
  it 'should have one cloud provider, EC2' do
  @facet.servers[0].clouds.keys.should == [ :ec2 ]
  end
+
+ it 'should have its first ephemeral disk mounted at /data' do
+ @facet.servers[0].implied_volumes[1].mount_point.should == '/data'
+ end
  end
 
  end
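The spec above exercises the new `:disks` attribute from inside a facet. Pulled out of the spec, a cluster definition using per-disk options might look like the sketch below (the cluster and facet names are illustrative, not from this diff; this is a config fragment that requires the ironfan gem to run).

```ruby
# Sketch of the new :disks attribute in a cluster definition.
# Cluster/facet names are illustrative; :disks keys are ephemeral disk
# indexes, and each value overrides the shared mount_ephemerals settings
# for that one disk only.
Ironfan.cluster 'web_demo' do
  facet :web do
    instances 3
    cloud(:ec2) do
      flavor 'm1.small'
      mount_ephemerals(:disks => { 0 => { :mount_point => '/data' } })
    end
  end
end
```

Disks without an entry in `:disks` keep the shared ephemeral defaults, so only the exceptions need to be spelled out.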
metadata CHANGED
@@ -2,7 +2,7 @@
  name: ironfan
  version: !ruby/object:Gem::Version
  prerelease:
- version: 4.7.6
+ version: 4.7.7
  platform: ruby
  authors:
  - Infochimps
@@ -10,7 +10,7 @@ autorequire:
  bindir: bin
  cert_chain: []
 
- date: 2013-01-25 00:00:00 Z
+ date: 2013-01-31 00:00:00 Z
  dependencies:
  - !ruby/object:Gem::Dependency
  name: chef
@@ -216,10 +216,12 @@ files:
  - lib/ironfan/provider/virtualbox.rb
  - lib/ironfan/provider/virtualbox/machine.rb
  - lib/ironfan/requirements.rb
+ - notes/Future-development-proposals.md
  - notes/Home.md
  - notes/INSTALL-cloud_setup.md
  - notes/INSTALL.md
  - notes/Ironfan-Roadmap.md
+ - notes/Upgrading-to-v4.md
  - notes/advanced-superpowers.md
  - notes/aws_servers.jpg
  - notes/aws_user_key.png
@@ -243,7 +245,6 @@ files:
  - notes/silverware.md
  - notes/style_guide.md
  - notes/tips_and_troubleshooting.md
- - notes/version-3_2.md
  - notes/walkthrough-hadoop.md
  - notes/walkthrough-web.md
  - spec/chef/cluster_bootstrap_spec.rb
@@ -282,7 +283,7 @@ required_ruby_version: !ruby/object:Gem::Requirement
  requirements:
  - - ">="
  - !ruby/object:Gem::Version
- hash: -1978687581575052392
+ hash: 1940435726309718533
  segments:
  - 0
  version: "0"