ironfan 4.3.4 → 4.4.0

Files changed (66)
  1. data/CHANGELOG.md +7 -0
  2. data/ELB.md +121 -0
  3. data/Gemfile +1 -0
  4. data/Rakefile +4 -0
  5. data/VERSION +1 -1
  6. data/ironfan.gemspec +48 -3
  7. data/lib/chef/knife/cluster_launch.rb +5 -0
  8. data/lib/chef/knife/cluster_proxy.rb +3 -3
  9. data/lib/chef/knife/cluster_sync.rb +4 -0
  10. data/lib/chef/knife/ironfan_knife_common.rb +17 -6
  11. data/lib/chef/knife/ironfan_script.rb +29 -11
  12. data/lib/ironfan.rb +2 -2
  13. data/lib/ironfan/broker/computer.rb +8 -3
  14. data/lib/ironfan/dsl/ec2.rb +133 -2
  15. data/lib/ironfan/headers.rb +4 -0
  16. data/lib/ironfan/provider.rb +48 -3
  17. data/lib/ironfan/provider/ec2.rb +23 -8
  18. data/lib/ironfan/provider/ec2/elastic_load_balancer.rb +239 -0
  19. data/lib/ironfan/provider/ec2/iam_server_certificate.rb +101 -0
  20. data/lib/ironfan/provider/ec2/machine.rb +8 -0
  21. data/lib/ironfan/provider/ec2/security_group.rb +3 -5
  22. data/lib/ironfan/requirements.rb +2 -0
  23. data/notes/Home.md +45 -0
  24. data/notes/INSTALL-cloud_setup.md +103 -0
  25. data/notes/INSTALL.md +134 -0
  26. data/notes/Ironfan-Roadmap.md +70 -0
  27. data/notes/advanced-superpowers.md +16 -0
  28. data/notes/aws_servers.jpg +0 -0
  29. data/notes/aws_user_key.png +0 -0
  30. data/notes/cookbook-versioning.md +11 -0
  31. data/notes/core_concepts.md +200 -0
  32. data/notes/declaring_volumes.md +3 -0
  33. data/notes/design_notes-aspect_oriented_devops.md +36 -0
  34. data/notes/design_notes-ci_testing.md +169 -0
  35. data/notes/design_notes-cookbook_event_ordering.md +249 -0
  36. data/notes/design_notes-meta_discovery.md +59 -0
  37. data/notes/ec2-pricing_and_capacity.md +69 -0
  38. data/notes/ec2-pricing_and_capacity.numbers +0 -0
  39. data/notes/homebase-layout.txt +102 -0
  40. data/notes/knife-cluster-commands.md +18 -0
  41. data/notes/named-cloud-objects.md +11 -0
  42. data/notes/opscode_org_key.png +0 -0
  43. data/notes/opscode_user_key.png +0 -0
  44. data/notes/philosophy.md +13 -0
  45. data/notes/rake_tasks.md +24 -0
  46. data/notes/renamed-recipes.txt +142 -0
  47. data/notes/silverware.md +85 -0
  48. data/notes/style_guide.md +300 -0
  49. data/notes/tips_and_troubleshooting.md +92 -0
  50. data/notes/version-3_2.md +273 -0
  51. data/notes/walkthrough-hadoop.md +168 -0
  52. data/notes/walkthrough-web.md +166 -0
  53. data/spec/fixtures/ec2/elb/snakeoil.crt +35 -0
  54. data/spec/fixtures/ec2/elb/snakeoil.key +51 -0
  55. data/spec/integration/minimal-chef-repo/chefignore +41 -0
  56. data/spec/integration/minimal-chef-repo/environments/_default.json +12 -0
  57. data/spec/integration/minimal-chef-repo/knife/credentials/knife-org.rb +19 -0
  58. data/spec/integration/minimal-chef-repo/knife/credentials/knife-user-ironfantester.rb +9 -0
  59. data/spec/integration/minimal-chef-repo/knife/knife.rb +66 -0
  60. data/spec/integration/minimal-chef-repo/roles/systemwide.rb +10 -0
  61. data/spec/integration/spec/elb_build_spec.rb +95 -0
  62. data/spec/integration/spec_helper.rb +16 -0
  63. data/spec/integration/spec_helper/launch_cluster.rb +55 -0
  64. data/spec/ironfan/ec2/elb_spec.rb +95 -0
  65. data/spec/ironfan/ec2/security_group_spec.rb +0 -6
  66. metadata +60 -3
data/notes/design_notes-meta_discovery.md
@@ -0,0 +1,59 @@
+ @temujin9 has proposed, and it's a good proposal, that there should exist such a thing as an 'integration cookbook'.
+
+ The hadoop_cluster cookbook should describe the hadoop_cluster, the ganglia cookbook ganglia, and the zookeeper cookbook zookeeper. Each should provide hooks that are neighborly but not exhibitionist, and should otherwise mind its own business.
+
+ The job of tying those components together should belong to a separate concern. It should know how and when to copy hbase jars into the pig home dir, or which cluster's service_provide'r a redis client should reference.
+
+ ## Practical implications
+
+ * I'm going to revert out the `node[:zookeeper][:cluster_name]` attributes -- services should always announce under their own cluster.
+
+ * Until we figure out how and when to separate integrations, I'm going to isolate entanglements into their own recipes within cookbooks: so, the ganglia part of hadoop will become `ganglia_integration` or somesuch.
+
+ ## Example integrations
+
+ ### Copying jars
+
+ Pig needs jars from hbase and zookeeper. They should announce that they have jars; pig should announce its home directory; the integration should decide how and where to copy the jars.
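
A minimal sketch of what such an integration recipe might look like, assuming (purely for illustration) that hbase and pig each announce their home directory as a node attribute; the attribute paths and recipe name here are hypothetical, not the actual silverware API:

    # hypothetical pig_integration/recipes/hbase_jars.rb
    # Each component announces what it has; the integration recipe, not the
    # component cookbooks, decides how to wire them together.
    hbase_home = node[:hbase][:home_dir]   # assumed attribute, for illustration
    pig_home   = node[:pig][:home_dir]     # assumed attribute, for illustration

    Dir.glob(::File.join(hbase_home, '*.jar')).each do |jar|
      link ::File.join(pig_home, 'lib', ::File.basename(jar)) do
        to jar
      end
    end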
+
+ ### Reference a service
+
+ Right now in several places we have attributes like `node[:zookeeper][:cluster_name]`, used to specify the cluster that provides_service zookeeper.
+
+ * Server recipes should never use `node[:zookeeper][:cluster_name]` -- they should always announce under their native cluster. (I'd kinda like to give `provides_service` some sugar to assume the cluster name, but need to find something backwards-compatible to use.)
+
+ * Need to take a better survey of usage among clients to determine how to do this.
+
+ * cases:
+   - hbase cookbook refs: hadoop, zookeeper, ganglia
+   - flume cookbook refs: zookeeper, ganglia
+   - flume agents may reference several different flume provides_service'rs
+   - an API using two different elasticsearch clusters
+
+ ### Logging, monitoring
+
+ * tell flume you have logs to pick up
+ * tell ganglia to monitor you
+
+ ### Service Dashboard
+
+ Let everything with a dashboard say so, and then let one resource create a page that links to each.
+
+
+ ________________________
+
+ These are still forming; ideas welcome.
+
+
+
+
+ Sometimes we want:
+
+ * if there are volumes marked for 'mysql-table_data', use that; otherwise the 'persistent' datastore, if any; else the 'bulk' datastore, if any; else the 'fallback' datastore (which is guaranteed to exist). (See the sketch after this list.)
+
+ * IP addresses (or hostnames):
+   - `[:private_ip, :public_ip]`
+   - `[:private_ip]`
+   - `:primary_ip`
+
+ * .
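
A minimal Ruby sketch of that fallback chain; the `volumes` hash shape and the tag names are assumptions taken from the bullet above, not ironfan's actual volume model:

    # Pick the first datastore whose tags match, in priority order.
    def datastore_for(volumes, *preferred_tags)
      preferred_tags.each do |tag|
        name, _vol = volumes.find { |_name, vol| Array(vol[:tags]).include?(tag) }
        return name if name
      end
      nil
    end

    volumes = {
      'ebs1'  => { :tags => ['persistent'] },        # illustrative data
      'local' => { :tags => ['bulk', 'fallback'] },
    }
    datastore_for(volumes, 'mysql-table_data', 'persistent', 'bulk', 'fallback')
    # => 'ebs1'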
data/notes/ec2-pricing_and_capacity.md
@@ -0,0 +1,69 @@
+ ## Compute Costs
+
+
+     code          $/mo  $/day   $/hr  CPU/$  Mem/$    mem   cpu  cores  cpcore  storage  bits  IO        type         name
+     t1.micro        15   0.48    .02     13     13   0.61  0.25   0.25       1        0    32  Low       Micro        Micro
+     m1.small        58   1.92    .08     13     21   1.7      1      1       1      160    32  Moderate  Standard     Small
+     m1.medium      116   3.84   .165     13     13   3.75     2      2       1      410    32  Moderate  Standard     Medium
+     c1.medium      120   3.96    .17     30     10   1.7      5      2     2.5      350    32  Moderate  High-CPU     Medium
+     m1.large       232   7.68    .32     13     23   7.5      4      2       2      850    64  High      Standard     Large
+     m2.xlarge      327  10.80    .45     14     38   17.1   6.5      2    3.25      420    64  Moderate  High-Memory  Extra Large
+     m1.xlarge      465  15.36    .64     13     23   15       8      4       2     1690    64  High      Standard     Extra Large
+     c1.xlarge      479  15.84    .66     30     11   7       20      8     2.5     1690    64  High      High-CPU     Extra Large
+     m2.2xlarge     653  21.60    .90     14     38   34.2    13      4    3.25      850    64  High      High-Memory  Double Extra Large
+     cc1.4xlarge    944  31.20   1.30     26     18   23    33.5      2   16.75     1690    64  10GB      Compute      Quadruple Extra Large
+     m2.4xlarge    1307  43.20   1.80     14     38   68.4    26      8    3.25     1690    64  High      High-Memory  Quadruple Extra Large
+     cg1.4xlarge   1525  50.40   2.10     16     10   22    33.5      2   16.75     1690    64  10GB      Cluster GPU  Quadruple Extra Large
+     cc2.8xlarge   1742  57.60   2.40     37     25   60.5    88      2      44     3370    64  10GB      Compute      Eight Extra Large
+
+     dummy header ln 15 0.48 0.02 12345 12345 0.61 0.25 0.25 1.00 6712345 32123 Low Micro Micro
20
+
21
+
22
+ ## Storage Costs
23
+
24
+ $/GB..mo $/GB.mo $/Mio
25
+ EBS Volume $0.10
26
+ EBS I/O $0.10
27
+ EBS Snapshot S3 $0.083
28
+
29
+ Std $/GB.mo Red.Red. $/GB.mo
30
+ S3 1st tb $0.125 $0.093
31
+ S3 next 49tb $0.110 $0.083
32
+ S3 next 450tb $0.095 $0.073
33
+
34
+ ### Storing 1TB data
35
+
36
+ (Cost of storage, neglecting I/O costs, and assuming the ratio of EBS volume size to snapshot size is as given)
37
+
38
+ * http://aws.amazon.com/ec2/instance-types/
39
+ * http://aws.amazon.com/ec2/#pricing
40
+
+ ### How much does EBS cost?
+
+ The cost structure of EBS is similar to that of data storage on S3. There are three types of costs associated with EBS:
+
+     Storage Cost + Transaction Cost + S3 Snapshot Cost = Total Cost of EBS
+
+ NOTE: For current pricing information, be sure to check Amazon EC2 Pricing.
+
+ #### Storage Costs
+
+ The cost of an EBS volume is $0.10/GB per month. You pay for the amount of disk space you reserve, not for the amount you actually use: if you reserve a 1TB volume but only use 1GB, you are still paying for 1TB.
+ * $0.10 per GB-month of provisioned storage
+ * $0.10 per 1 million I/O requests
+
+ #### Transaction Costs
+
+ In addition to the storage cost for EBS volumes, you are also charged for I/O transactions. The cost is $0.10 per million I/O transactions, where one transaction is one read or one write. This number may be smaller than the number of transactions your application performs, because the Linux filesystem cache absorbs some of them.
+     $0.10 per 1 million I/O requests
+
+ #### S3 Snapshot Costs
+
+ Snapshots are compressed and incremental: each one stores only the blocks that changed since the previous snapshot. For example, files whose blocks changed on disk and were then deleted still add to the snapshot's cost -- remember, snapshots work at the data-block level. A worked example follows at the end of this section.
+     $0.15 per GB-month of data stored
+     $0.01 per 1,000 PUT requests (when saving a snapshot)
+     $0.01 per 10,000 GET requests (when loading a snapshot)
+
+ NOTE: Charges stop the moment you delete a volume. If you delete a volume and its status appears as "deleting" for an extended period of time, you will not be charged for the time needed to complete the deletion.
+
+
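
A quick worked example using the figures above, for a hypothetical volume (the sizes and request counts are made up for illustration):

    # Monthly EBS cost, in dollars, using the prices quoted above.
    volume_gb     = 100         # provisioned size -- you pay for all of it
    io_requests   = 10_000_000  # I/O requests per month
    snapshot_gb   = 20          # compressed snapshot data stored in S3

    storage_cost  = volume_gb * 0.10                    # $0.10 per GB-month
    io_cost       = (io_requests / 1_000_000.0) * 0.10  # $0.10 per million I/Os
    snapshot_cost = snapshot_gb * 0.15                  # $0.15 per GB-month

    total = storage_cost + io_cost + snapshot_cost      # => 14.0 dollars/month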
data/notes/homebase-layout.txt
@@ -0,0 +1,102 @@
+ Ironfan Homebase Layout
+ =======================
+
+ Your Chef Homebase contains several directories, and each contains a README.md file describing its purpose and use in greater detail.
+
+ This directory structure came out of a *lot* of trial and error, and is working very well where many others didn't. This homebase makes it easy to pull in third-party pantries (cookbook collections) and track upstream changes without being locked in.
+
+ ## Main Directories
+
+ The main assets you'll use are:
+
+ * `clusters/` - Clusters fully describe your machines, from their construction ('an 8-core machine on the Amazon EC2 cloud') to their roles ('install Cassandra, Ganglia for monitoring, and silverware to manage its logs and firewall').
+ * `cookbooks/` - Cookbooks you download or create. Cookbooks install components, for example `cassandra` or `java`.
+ * `roles/` - Roles organize cookbooks and attribute overrides to describe the specific composition of your system. For example, you install Cassandra by attaching the `cassandra_server` role to your machine. (.rb or .json files)
+
+ These folders hold supporting files; you're less likely to visit them regularly:
+
+ * `knife/` - Chef and cloud configuration, and their myriad attendant credentials.
+ * `environments/` - Organization-wide attribute values. (.json or .rb files)
+ * `data_bags/` - Data bags are an occasionally-useful alternative to node metadata for distributing information to your machines. (.json files)
+ * `certificates/` - SSL certificates generated by `rake ssl_cert` live here.
+ * `tasks/` - Rake tasks for common administrative tasks.
+ * `vendor/` - Cookbooks are checked out to `vendor`; symlinks in the `cookbooks/` directory select which ones will be deployed to the chef server. The vendor directory comes with the Ironfan, Opscode and (should you be a lucky customer) Ironfan Enterprise chef pantries.
+ * `notes/` - A submoduled copy of the [ironfan wiki](http://github.com/infochimps-labs/ironfan/wiki).
+
+ ## Directory setup
+
+ The core structure of the homebase is as follows ("├─*" means "git submodule'd"):
+
+     homebase
+     ├── clusters              - cluster definition files
+     │   └── ( clusters )
+     ├── cookbooks             - symlinks to cookbooks
+     │   ├── @vendor/ironfan-pantry/cookbooks/...
+     │   ├── @vendor/opscode/cookbooks/...
+     │   └── @vendor/org-pantry/cookbooks/...
+     ├── environments          - environment definition files
+     │   └── ( environments )
+     ├── data_bags             - symlinks to data_bags
+     │   ├── @vendor/ironfan-pantry/data_bags/...
+     │   └── @vendor/org-pantry/data_bags/...
+     ├── roles                 - symlinks to roles
+     │   ├── @vendor/ironfan-pantry/roles/...
+     │   └── @vendor/org-pantry/roles/...
+     ├── vendor
+     │   ├─* ironfan-pantry    - git submodule of https://github.com/infochimps-labs/ironfan-pantry
+     │   │   ├── cookbooks
+     │   │   ├── data_bags
+     │   │   ├── roles
+     │   │   └── tasks
+     │   ├─* ironfan-enterprise - git submodule of ironfan-enterprise, if you're a lucky customer
+     │   │   ├── cookbooks
+     │   │   ├── data_bags
+     │   │   ├── roles
+     │   │   └── tasks
+     │   ├── opscode
+     │   │   └─* cookbooks     - git submodule of https://github.com/infochimps-labs/opscode_cookbooks; itself a closely-tracked fork of https://github.com/opscode/cookbooks
+     │   └── org-pantry        - organization-specific roles, cookbooks, etc.
+     │       ├── cookbooks
+     │       ├── data_bags
+     │       ├── roles
+     │       └── tasks
+     ├── knife                 - credentials (see below)
+     ├── certificates
+     ├── config
+     ├─* notes
+     ├── tasks
+     └── vagrants              - vagrant files (when using ironfan-ci)
+
+ The `vendor/opscode/cookbooks` and `vendor/ironfan-pantry` directories are actually [git submodules](http://help.github.com/submodules/). This makes it easy to track upstream changes in each collection while still maintaining your own modifications.
+
+ We recommend you place your cookbooks in `vendor/org-pantry`. If you have cookbooks &c from other pantries that carry significant changes, you can duplicate them into `vendor/org-pantry` and simply change the symlink in homebase/cookbooks/.
+
+ ## Knife dir setup
+
+ We recommend you version your credentials directory separately from your homebase. You will want to place it under version control, but you should not place it into a central git repository -- this holds the keys to the kingdom.
+
+ We exclude the chef user key (`(user).pem`) and user-specific knife settings (`knife-user-(user).rb`) from the repo as well. Each user has their own revocable client key and their own cloud credentials set, and those live nowhere but their own computers.
+
+     knife/
+     ├── knife.rb
+     ├── credentials -> (organization)-credentials
+     ├── (organization)-credentials
+     │   ├── knife-org.rb                 - org-specific stuff, and cloud assets (elastic IPs, AMI image ids, etc)
+     │   ├── (user).pem                   - your chef client key *GITIGNORED*
+     │   ├── knife-user-(user).rb         - your user-specific knife customizations *GITIGNORED*
+     │   ├── (organization)-validator.pem - chef validator key, used to create client keys
+     │   ├── client_keys
+     │   │   └── (transient client keys)  - you can delete these at will; only useful if you're debugging a bad bootstrap
+     │   ├── ec2_keys
+     │   │   └── (transient client keys)  - ssh keys for cloud machines (in EC2 parlance, the private half of your keypair)
+     │   ├── certificates
+     │   ├── data_bag_keys
+     │   └── ec2_certs
+     ├── bootstrap                        - knife cluster bootstrap scripts
+     ├── plugins
+     │   └── knife
+     │       └── (knife plugins)          - knife plugins
+     └── .gitignore                       - make sure not to version the secret/user-specific stuff (*-keypairs, *-credentials.rb, knife-user-*.rb)
+
+
+ (You can safely ignore the directories above that aren't annotated; they're useful in certain advanced contexts but not immediately.)
data/notes/knife-cluster-commands.md
@@ -0,0 +1,18 @@
+ # Ironfan Knife Commands
+
+ Available cluster subcommands (for details, run `knife SUB-COMMAND --help`):
+
+     knife cluster list (options)                                  - show available clusters
+     knife cluster bootstrap CLUSTER-[FACET-[INDEXES]] (options)   - bootstrap all servers described by given cluster slice
+     knife cluster kick CLUSTER-[FACET-[INDEXES]] (options)        - start a run of chef-client on each server, tailing the logs and exiting when the run completes
+     knife cluster kill CLUSTER-[FACET-[INDEXES]] (options)        - kill all servers described by given cluster slice
+     knife cluster launch CLUSTER-[FACET-[INDEXES]] (options)      - creates the chef node and chef api client, pre-populates the chef node, and instantiates their cloud machines in parallel. With the --bootstrap flag, will ssh in to machines as they become ready and launch the bootstrap process
+     knife cluster proxy CLUSTER-[FACET-[INDEXES]] (options)       - runs the ssh command to open a SOCKS proxy to the given host, and writes a PAC (automatic proxy config) file to /tmp/ironfan_proxy-YOURNAME.pac. Only the first host is used, even if multiple match
+     knife cluster show CLUSTER-[FACET-[INDEXES]] (options)        - a helpful display of the cluster's cloud and chef state
+     knife cluster ssh CLUSTER-[FACET-[INDEXES]] COMMAND (options) - run an interactive ssh session, or execute the given command, across a cluster slice
+     knife cluster start CLUSTER-[FACET-[INDEXES]] (options)       - start all servers described by given cluster slice
+     knife cluster stop CLUSTER-[FACET-[INDEXES]] (options)        - stop all servers described by given cluster slice
+     knife cluster sync CLUSTER-[FACET-[INDEXES]] (options)        - update the chef server and cloud machines with the current cluster definition
+     knife cluster vagrant CMD CLUSTER-[FACET-[INDEXES]] (options) - runs the given command against a vagrant environment created from your cluster definition. EARLY, use at your own risk
+
+
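
For example, to launch and bootstrap the zeroth instance of the `web` facet of a cluster named `gibbon`, then check on it and run a command across the facet (the cluster and facet names here are made up for illustration):

    knife cluster launch gibbon-web-0 --bootstrap
    knife cluster show   gibbon-web
    knife cluster ssh    gibbon-web 'uptime'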
data/notes/named-cloud-objects.md
@@ -0,0 +1,11 @@
+ # Named Cloud Objects
+
+ To add a new machine image, place this snippet:
+
+     Chef::Config[:ec2_image_info] ||= {}
+     Chef::Config[:ec2_image_info].merge!({
+       # ... lines like this:
+       # %w[ us-west-1 64-bit ebs natty ] => { :image_id => 'ami-4d580408' },
+     })
+
+ in your knife.rb or wherever. Ironfan will notice that it exists and add to it, rather than clobbering it.
Binary file
Binary file
data/notes/philosophy.md
@@ -0,0 +1,13 @@
+ ## Philosophy
+
+ Some general principles of how we use Chef.
+
+ * *Chef server is never the repository of truth* -- it only mirrors the truth. A file is tangible and immediate to access.
+ * Specifically, we want truth to live in the git repo, and be enforced by the Chef server. This means that everything is versioned, documented and exchangeable. *There is no truth but git, and Chef is its messenger.*
+ * *Systems, services and significant modifications to a cluster should be obvious from the `clusters` file*. I don't want to have to bounce around nine different files to find out which thing installed a redis:server. The existence of anything that opens a port should be obvious when I look at the cluster file.
+ * *Roles define systems, clusters assemble systems into a machine*.
+   - For example, a resque worker queue has a redis, a webserver and some config files -- your cluster should invoke a `whatever_queue` role, and the `whatever_queue` role should include recipes for the component services.
+   - The existence of anything that opens a port _or_ runs as a service should be obvious when I look at the roles file.
+ * *include_recipe considered harmful*: do NOT use include_recipe for anything that a) provides a service, b) launches a daemon or c) is interesting in any way (so: `include_recipe java` yes; `include_recipe iptables` no). You should note the dependency in the metadata.rb instead (see the sketch after this list). This seems weird, but the breaking behavior is purposeful: it makes you explicitly state all dependencies.
+ * It's nice when *machines are in full control of their destiny*. Their initial setup (elastic IP, attaching a drive) is often best enforced externally. However, machines should be able to independently assert things like load balancer registration, which may change at any point in their lifetime.
+ * It's even nicer, though, to have *full idempotency from the command line*: I can at any time push truth from the git repo to the Chef server and know that it will take hold.
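
A minimal sketch of that convention, with made-up cookbook and role names: the cookbook states its dependencies in `metadata.rb`, and the role -- not include_recipe -- assembles the component services.

    # cookbooks/whatever_queue/metadata.rb -- declare dependencies explicitly
    depends "iptables"
    depends "java"

    # roles/whatever_queue.rb -- the role assembles the component services
    name "whatever_queue"
    description "resque worker queue: redis, webserver, config files"
    run_list "role[redis_server]", "recipe[iptables]", "recipe[whatever_queue]"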
data/notes/rake_tasks.md
@@ -0,0 +1,24 @@
+
+ Rake Tasks
+ ==========
+
+ The homebase contains a `Rakefile` that includes tasks installed with the Chef libraries. To view the tasks available within the homebase, with a brief description of each, run `rake -T`.
+
+ Besides your `~/.chef/knife.rb` file, the Rakefile loads `config/rake.rb`, which sets:
+
+ * Constants used in the `ssl_cert` task for creating the certificates.
+ * Constants that set the directory locations used in various tasks.
+
+ If you use the `ssl_cert` task, change the values in the `config/rake.rb` file appropriately. These values were also used in the `new_cookbook` task, but that task has been replaced by the `knife cookbook create` command, which is configured separately.
+
+ The default task (`default`) is run when executing `rake` with no arguments. It will call the `test_cookbooks` task.
+
+ The following standard Chef tasks are typically accomplished using the Rakefile:
+
+ * `bundle_cookbook[cookbook]` - creates cookbook tarballs in the `pkgs/` dir.
+ * `install` - calls the `update`, `roles` and `upload_cookbooks` rake tasks.
+ * `ssl_cert` - creates self-signed SSL certificates in the `certificates/` dir.
+ * `update` - updates the homebase from the source control server; understands git and svn.
+ * `roles` - iterates over the roles and uploads each with `knife role from file`.
+
+ Most other tasks use knife: run a bare `knife cluster`, `knife cookbook` (etc.) to find out more.
data/notes/renamed-recipes.txt
@@ -0,0 +1,142 @@
+ cassandra :: default |
+ cassandra :: add_apt_repo | new
+ cassandra :: install_from_git |
+ cassandra :: install_from_package |
+ cassandra :: install_from_release |
+ cassandra :: config_from_data_bag | autoconf
+ cassandra :: client |
+ cassandra :: server |
+ cassandra :: authentication | not include_recipe'd -- added to role
+ cassandra :: bintools |
+ cassandra :: ec2snitch |
+ cassandra :: jna_support |
+ cassandra :: mx4j |
+ cassandra :: iptables |
+ cassandra :: ruby_client |
+ cassandra :: config_files | new
+
+ elasticsearch :: default |
+ elasticsearch :: install_from_git |
+ elasticsearch :: install_from_release |
+ elasticsearch :: plugins | install_plugins
+ elasticsearch :: server |
+ elasticsearch :: client |
+ elasticsearch :: load_balancer |
+ elasticsearch :: config_files | config
+
+ flume :: default |
+ flume :: master |
+ flume :: agent | node
+ flume :: plugin-hbase_sink | hbase_sink_plugin
+ flume :: plugin-jruby | jruby_plugin
+ flume :: test_flow |
+ flume :: test_s3_source |
+ flume :: config_files | config
+
+ ganglia :: agent |
+ ganglia :: default |
+ ganglia :: server |
+ ganglia :: config_files | new
+
+ graphite :: default |
+ graphite :: carbon |
+ graphite :: ganglia |
+ graphite :: dashboard | web
+ graphite :: whisper |
+
+ hadoop_cluster :: default |
+ hadoop_cluster :: add_cloudera_repo |
+ hadoop_cluster :: datanode |
+ hadoop_cluster :: doc |
+ hadoop_cluster :: hdfs_fuse |
+ hadoop_cluster :: jobtracker |
+ hadoop_cluster :: namenode |
+ hadoop_cluster :: secondarynn |
+ hadoop_cluster :: tasktracker |
+ hadoop_cluster :: wait_on_hdfs_safemode |
+ hadoop_cluster :: fake_topology |
+ hadoop_cluster :: minidash |
+ hadoop_cluster :: config_files | cluster_conf
+
+ hbase :: default |
+ hbase :: master |
+ hbase :: minidash |
+ hbase :: regionserver |
+ hbase :: stargate |
+ hbase :: thrift |
+ hbase :: backup_tables |
+ hbase :: config_files | config
+
+ jenkins :: default |
+ jenkins :: server |
+ jenkins :: user_key |
+ jenkins :: node_ssh |
+ jenkins :: osx_worker |
+ jenkins :: build_from_github |
+ jenkins :: build_ruby_rspec |
+ jenkins :: auth_github_oauth |
+ jenkins :: plugins |
+ #
+ jenkins :: add_apt_repo |
+ jenkins :: iptables |
+ jenkins :: node_jnlp |
+ jenkins :: node_windows |
+ jenkins :: proxy_apache2 |
+ jenkins :: proxy_nginx |
+
+ minidash :: default |
+ minidash :: server |
+
+ mongodb :: default |
+ mongodb :: apt | add_apt_repo
+ mongodb :: install_from_release | source
+ mongodb :: backup |
+ mongodb :: config_server | fixme
+ mongodb :: mongos | fixme
+ mongodb :: server |
+
+ nfs :: client |
+ nfs :: default |
+ nfs :: server |
+
+ redis :: default |
+ redis :: install_from_package |
+ redis :: install_from_release |
+ redis :: client |
+ redis :: server |
+
+ resque :: default |
+ resque :: dedicated_redis |
+ resque :: dashboard |
+
+ route53 :: default |
+ route53 :: set_hostname | ec2
+
+ statsd :: default |
+ statsd :: server |
+
+ volumes :: default |
+ volumes :: build_raid |
+ volumes :: format |
+ volumes :: mount |
+ volumes :: resize |
+ volumes_ebs :: default |
+ volumes_ebs :: attach_ebs |
+
+ zabbix :: agent |
+ zabbix :: agent_prebuild |
+ zabbix :: agent_source |
+ zabbix :: database |
+ zabbix :: database_mysql |
+ zabbix :: default |
+ zabbix :: firewall |
+ zabbix :: server |
+ zabbix :: server_source |
+ zabbix :: web |
+ zabbix :: web_apache |
+ zabbix :: web_nginx |
+
+ zookeeper :: default |
+ zookeeper :: client |
+ zookeeper :: server |
+ zookeeper :: config_files |