ironfan 3.1.0.rc1 → 3.1.1

Files changed (49)
  1. data/.yardopts +5 -0
  2. data/CHANGELOG.md +18 -5
  3. data/README.md +34 -115
  4. data/TODO.md +36 -8
  5. data/VERSION +1 -1
  6. data/cluster_chef-knife.gemspec +2 -3
  7. data/ironfan.gemspec +29 -4
  8. data/lib/chef/knife/cluster_bootstrap.rb +1 -1
  9. data/lib/chef/knife/cluster_kick.rb +3 -3
  10. data/lib/chef/knife/cluster_kill.rb +1 -1
  11. data/lib/chef/knife/cluster_launch.rb +3 -3
  12. data/lib/chef/knife/cluster_list.rb +1 -1
  13. data/lib/chef/knife/cluster_proxy.rb +5 -5
  14. data/lib/chef/knife/cluster_show.rb +3 -2
  15. data/lib/chef/knife/cluster_ssh.rb +2 -2
  16. data/lib/chef/knife/cluster_start.rb +1 -2
  17. data/lib/chef/knife/cluster_stop.rb +2 -1
  18. data/lib/chef/knife/cluster_sync.rb +2 -2
  19. data/lib/chef/knife/cluster_vagrant.rb +144 -0
  20. data/lib/chef/knife/{knife_common.rb → ironfan_knife_common.rb} +8 -4
  21. data/lib/chef/knife/{generic_command.rb → ironfan_script.rb} +1 -1
  22. data/lib/chef/knife/vagrant/ironfan_environment.rb +18 -0
  23. data/lib/chef/knife/vagrant/ironfan_provisioners.rb +27 -0
  24. data/lib/chef/knife/vagrant/skeleton_vagrantfile.rb +116 -0
  25. data/lib/ironfan/chef_layer.rb +2 -2
  26. data/lib/ironfan/fog_layer.rb +16 -13
  27. data/lib/ironfan/private_key.rb +3 -3
  28. data/lib/ironfan/server.rb +9 -6
  29. data/lib/ironfan/server_slice.rb +11 -0
  30. data/notes/Home.md +30 -0
  31. data/notes/INSTALL-cloud_setup.md +100 -0
  32. data/notes/INSTALL.md +135 -0
  33. data/notes/Knife-Cluster-Commands.md +8 -0
  34. data/notes/Silverware.md +5 -0
  35. data/notes/aws_console_screenshot.jpg +0 -0
  36. data/notes/cookbook-versioning.md +11 -0
  37. data/notes/declaring_volumes.md +3 -0
  38. data/notes/design_notes-ci_testing.md +169 -0
  39. data/notes/design_notes-cookbook_event_ordering.md +212 -0
  40. data/notes/design_notes-dsl_object.md +55 -0
  41. data/notes/design_notes-meta_discovery.md +59 -0
  42. data/notes/ec2-pricing_and_capacity.md +63 -0
  43. data/notes/ironfan_homebase_layout.md +94 -0
  44. data/notes/named-cloud-objects.md +11 -0
  45. data/notes/rake_tasks.md +25 -0
  46. data/notes/renamed-recipes.txt +142 -0
  47. data/notes/style_guide.md +251 -0
  48. data/notes/tips_and_troubleshooting.md +83 -0
  49. metadata +50 -26
data/notes/INSTALL.md ADDED
@@ -0,0 +1,135 @@
+ First of all, every Chef installation needs a Chef Homebase. The Chef Homebase is the place where cookbooks, roles, config files and other artifacts for managing systems with Chef will live. Store this homebase in a version control system such as Git and treat it like source code.
+
+ ### Conventions
+
+ In all of the below,
+
+ * `{homebase}`: the directory that holds your Chef cookbooks, roles and so forth. For example, this file is in `{homebase}/README.md`.
+
+ * `{username}`: identifies your personal Chef client name: the thing you use to log into the Chef WebUI.
+
+ * `{organization}`: identifies the credentials set and cloud settings to use. If your Chef server is on the Opscode platform (try it! It's super-easy), use your organization name (the last segment of your chef_server url). If not, use an identifier you deem sensible.
+
+ Ironfan Installation Instructions
+ ============
+
+ _Before you begin, fork the repo, as you'll be making changes to personalize it for your platform._
+
+ 1. Clone the repo. It will produce the directory we will call `homebase` from now on:
+
+         git clone https://github.com/infochimps-labs/ironfan-homebase homebase
+         cd homebase
+         git submodule foreach git checkout master
+         git submodule update --init
+
+ 2. Install the Ironfan gem (you may need to use `sudo`):
+
+         gem install ironfan
+
+ 3. Set up your [knife.rb](http://wiki.opscode.com/display/chef/Knife#Knife-ConfiguringYourSystemForKnife) file.
+
+
+
+ Go to the [Knife Plugins](http://wiki.opscode.com/display/chef/Knife+Plugins) page in the Chef Wiki for additional information.
+
+
+ __________________________________________________________________________
+
+
+ Setting up your [knife.rb](http://wiki.opscode.com/display/chef/Knife#Knife-ConfiguringYourSystemForKnife) the Ironfan way has proven exceedingly useful -- it leaves you in good shape to avoid problems with credential management, importing 3rd-party cookbooks, and other things down the road.
+
+ ## <a name="knife-configuration" />Standard Knife Configuration
+
+ ### Set up
+
+ _Note_: If your local username differs from your Opscode Chef username, then you should `export CHEF_USER={username}` using your Opscode username (e.g. from your .bashrc) before you run any knife commands.
+
+ 1. So that Knife finds its configuration files, symlink the `{homebase}/knife` directory to be your `~/.chef` folder:
+
+         cd {homebase}
+         ln -sni $CHEF_HOMEBASE/knife ~/.chef
+
+ **HEY @MRFLIP MAKE SURE THAT THIS WORKS WITHOUT THE CHEF HOMEBASE VARIABLE SET**
+
+ ### <a name="credentials" />Credentials
+
+ All the keys and settings specific to your organization are held in a directory named `credentials/`, versioned and distributed independently of the homebase.
+
+ To set up your credentials directory, visit `{homebase}/knife` and duplicate the example, naming it `credentials`:
+
+         cd $CHEF_HOMEBASE/knife
+         rm credentials
+         cp -a example-credentials credentials
+         cd credentials
+         git init ; git add .
+         git commit -m "New credentials universe for $CHEF_ORGANIZATION" .
+
+ ### User / Cloud config
+
+ Edit the following places in your new `credentials` directory:
+
+ * Organization-specific settings are in `knife/credentials/knife-org.rb`:
+   - _organization_: your organization name.
+   - _chef server url_: edit the lines for your `chef_server_url` and `validator`.
+
+     If you are an Opscode platform user, you can skip this step -- your `chef_server_url` defaults to `https://api.opscode.com/organizations/#{organization}` and your validator to `{organization}-validator.pem`.
+
+   - _cloud-specific settings_: if you are targeting a cloud provider, add account information and configuration here.
+
+ * User-specific settings are in `knife/credentials/knife-user-{username}.rb`. (You can duplicate and rename the one in `knife/example-credentials/knife-user-example.rb`.)
+   - For example, if you're using Amazon EC2 you should set your access keys:
+
+         Chef::Config.knife[:aws_access_key_id]     = "XXXX"
+         Chef::Config.knife[:aws_secret_access_key] = "XXXX"
+         Chef::Config.knife[:aws_account_id]        = "XXXX"
+
+ * Your Chef user key is in `{credentials_path}/{username}.pem`.
+
+ * The organization validator key is in `{credentials_path}/{organization}-validator.pem`.
+
+ * If you have existing Amazon machines, place their keypairs in `{credentials_path}/ec2_keys`.
+
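For reference, a filled-in `knife-org.rb` might look like the sketch below. Every value is an illustrative placeholder (the setting names `chef_server_url`, `validation_client_name` and `validation_key` are standard Chef config; the file layout is the one described above):

```ruby
# knife/credentials/knife-org.rb -- illustrative sketch; all values are placeholders.
organization = 'yourorg'

chef_server_url        "https://api.opscode.com/organizations/#{organization}"
validation_client_name "#{organization}-validator"
validation_key         File.expand_path("../#{organization}-validator.pem", __FILE__)
```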
+ __________________________________________________________________________
+
+ ## Try it out
+
+ You should now be able to use Knife.
+
+     $ knife client list
+     chef-webui
+     cocina-chef_server-0
+     cocina-sandbox-0
+     cocina-validator
+
+     $ knife cluster list
+     +--------------------+---------------------------------------------------+
+     | cluster            | path                                              |
+     +--------------------+---------------------------------------------------+
+     | burninator         | /cloud/clusters/burninator.rb                     |
+     | el_ridiculoso      | /cloud/clusters/el_ridiculoso.rb                  |
+     | elasticsearch_demo | /cloud/clusters/elasticsearch_demo.rb             |
+     | hadoop_demo        | /cloud/clusters/hadoop_demo.rb                    |
+     | sandbox            | /cloud/clusters/sandbox.rb                        |
+     +--------------------+---------------------------------------------------+
+
+ Launching a cluster in the cloud should now be this easy:
+
+     knife cluster launch sandbox:simple --bootstrap
+
+ __________________________________________________________________________
+
+ **SYAFLAG: Not sure where the stuff below, before getting to Next, is supposed to go. Here?**
+
+ To get started with Knife and Chef, follow the [Chef Fast Start Guide](http://wiki.opscode.com/display/chef/Fast+Start+Guide). We use the hosted Chef service and are happy with it; we're sure you'll be happy with it too. (There are instructions in the wiki to set up a Chef server, if that's what you prefer.) Stop when you get to "Bootstrap the Ubuntu system" -- Ironfan makes that much easier. **SYAFLAG: "Bootstrap the Ubuntu System" is the 1st step in the EC2 Bootstrap Fast Start Guide, so should we include the EC2 Bootstrap Fast Start Guide?**
+
+ * [Launch Cloud Instances with Knife](http://wiki.opscode.com/display/chef/Launch+Cloud+Instances+with+Knife)
+ * [EC2 Bootstrap Fast Start Guide](http://wiki.opscode.com/display/chef/EC2+Bootstrap+Fast+Start+Guide) (**Wow, we do everything in that page in _one_ command.**)
+
+
+ ### Next
+
+ See the README file in each of the subdirectories for more information about what goes in those directories. If you are bored of reading, go customize one of the files in the `clusters/` directory. Or, if you're a fan of ridiculous things and have ever pondered how many things you can fit in one box, launch [el_ridiculoso](http://www.spanishdict.com/translate/ridiculoso). It contains every single recipe we have ever made, stacked on top of one another.
+
+     knife cluster launch el_ridiculoso-gordo --bootstrap
+
+
+ For more information about configuring Knife, see the [Knife documentation](http://wiki.opscode.com/display/chef/knife).
data/notes/Knife-Cluster-Commands.md ADDED
@@ -0,0 +1,8 @@
+ @sya please complete and fill out the descriptions + usage:
+
+
+ * knife cluster kill
+ * knife cluster launch
+
+ (... show, start, stop, proxy, ssh, bootstrap, and maybe some more -- type `knife cluster` and it'll list them)
+
data/notes/Silverware.md ADDED
@@ -0,0 +1,5 @@
+ Major portions of the cookbooks/silverware/README file should move here.
+
+ @sya please make sure the true stuff is polished, the wrong stuff is deleted or fixed, the practical stuff goes in the README.md, and the philosophical stuff goes here on the wiki.
+
+
data/notes/aws_console_screenshot.jpg ADDED
Binary file
data/notes/cookbook-versioning.md ADDED
@@ -0,0 +1,11 @@
+ Cookbook Versioning and Tracking
+ ================================
+
+ @temujin9 please complete and correct
+
+ * A git tag labels the *release* of a cookbook version: the tag 'cookbooks-elasticsearch-3.1.7' denotes the *last* commit at that version.
+ * The next commit will be the one that bumps the version number: the `metadata.rb` will then read '3.1.8'.
+
+ Periodically, we will release a 'gold' version set and push those to the Opscode cookbook community site.
+
+ *
data/notes/declaring_volumes.md ADDED
@@ -0,0 +1,3 @@
+
+ Please see `{homebase}/cookbooks/volumes/README.md`.
+
data/notes/design_notes-ci_testing.md ADDED
@@ -0,0 +1,169 @@
+
+
+ https://github.com/acrmp/chefspec
+
+
+ pre-testing -- converge machine
+ https://github.com/acrmp/chefspec
+
+ http://wiki.opscode.com/display/chef/Knife#Knife-test
+
+ benchmarks
+
+ bonnie++
+ hdparm -t
+ iozone
+
+
+ in-machine
+
+ * x ports on x interfaces open
+ * daemon is running
+ * file exists and has string
+
+ * log file is accumulating lines at rate X
+ * script x runs successfully
+
+ in-chef
+
+ * runlist is X
+ * chef attribute X should be Y
+
+ meta
+
+ * chef run was idempotent
+
+
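The "in-machine" checks above can be sketched in plain Ruby; a real suite would likely live in rspec or cucumber, and the hosts and paths here are illustrative:

```ruby
# Plain-Ruby sketch of two "in-machine" checks: port open, file contains string.
require 'socket'

# True if a TCP connection to host:port succeeds within the timeout.
def port_open?(host, port, timeout = 2)
  Socket.tcp(host, port, connect_timeout: timeout) { true }
rescue SystemCallError, IOError
  false
end

# True if the file exists and contains the given string.
def file_has_string?(path, string)
  File.file?(path) && File.read(path).include?(string)
end
```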
+
+
+
+ __________________________________________________________________________
+
+ ## Notes from around the web
+
+
+ * ...
+
+ > I'm thinking that the useful thing to test is NOT did chef install
+ > some package or setup a user, but rather after chef has run can I
+ > interact with the system as I would expect from an external
+ > perspective. For example:
+ >
+ > * Is the website accessible?
+ > * Are unused ports blocked?
+ > * When I send an email through the website does it end up in my inbox?
+ >
+ > Capybara (http://github.com/jnicklas/capybara) enforces this external
+ > perspective for webapp testing:
+ >
+ > "Access to session, request and response from the test is not
+ > possible. Maybe we'll do response headers at some point in the future,
+ > but the others really shouldn't be touched in an integration test
+ > anyway."
+ >
+ > They only let you interact with screen elements that a user could
+ > interact with. It makes sense because the things that users interact
+ > with are what provides the business value.
+
+ * Andrew Shafer <andrew@cloudscaling.com>
+
+ > Here's my thinking at this point... which could be wrong on every level.
+ > There is really no good way to TDD/BDD configuration management for several
+ > reasons:
+ > The recipes are already relatively declarative
+ > Mocking is useless because it may not reflect 'ground truth'
+ > The cycle times to really test convergence are relatively long
+ > Trying to test if a package is installed or not is testing the framework,
+ > not the recipe IMHO.
+ > I agree with the general sentiment that the functional service is the true
+ > test.
+ > I'm leaning towards 'testing' at that level, ideally with (a superset of?)
+ > what should be used for the production monitoring system.
+ > So the CI builds services, runs all the checks in test, green can go live
+ > and that's that.
+
+
+ * Jeremy Deininger <jeremy@rightscale.com>
+
+ > Thought I'd chime in with my experience testing system configuration code @ RightScale so far. What we've been building are integration style cucumber tests to run a cookbook through its paces on all platforms and OSs that we support.
+ > First we use our API to spin up 'fresh' server clusters in EC2, one for every platform/OS (variation) that the cookbook will be supporting. The same could be done using other cloud APIs (anyone else doing this with VMware or etc?) Starting from scratch is important because of chef's idempotent nature.
+ > Then a cucumber test is run against every variation in parallel. The cucumber test runs a series of recipes on the cluster then uses what we call 'spot checks' to ensure the cluster is configured and functional. The spot checks are updated when we find a bug, to cover the bug. An example spot check would be, sshing to every server and checking the mysql.err file for bad strings.
+ > These high level integration tests are long running but have been very useful flushing out bugs.
+ > ...
+ > If you stop by the #rightscale channel on Freenode I'd be happy to embarrass myself by giving you a sneak peek at the features etc.. Would love to bounce ideas around and collaborate if you're interested. jeremydei on Freenode IRC
+
+ * Ranjib Dey <ranjibd@th...s.com>
+
+ > So far, what we've done for testing is to use rspec for implementing tests. Here's an example:
+ >
+ >     it "should respond on port 80" do
+ >       lambda {
+ >         TCPSocket.open(@server, 'http')
+ >       }.should_not raise_error
+ >     end
+ >
+ > Before running the tests, I have to manually bootstrap a node using knife. If my instance is the only one in its environment, the spec can find it using knife's search feature. The bootstrap takes a few minutes, and the 20 or so tests take about half a minute to run.
+ >
+ > While I'm iteratively developing a recipe, my work cycle is to edit source, upload a cookbook, and rerun chef-client (usually by rerunning knife bootstrap, because the execution environment is different from invoking chef-client directly). This feels a bit slower than the cycle I'm used to when coding in Ruby because of the upload and bootstrap steps.
+ >
+ > I like rspec over other testing tools because of how it generates handy reports, such as this one, which displays an English list of covered test cases:
+ >
+ >     $ rspec spec/ -f doc
+ >
+ >     Foo role
+ >       should respond on port 80
+ >       should run memcached
+ >       should accept memcached connections
+ >       should have mysql account
+ >       should allow passwordless sudo to user foo as user bar
+ >       should allow passwordless sudo to root as a member of sysadmin
+ >       should allow key login as user bar
+ >       should mount homedirs on ext4, not NFS
+ >       should rotate production.log
+ >       should have baz as default vhost
+ >       ...
+ >
+ > That sample report also gives a feel for the sort of things we check. So far, nearly all of our checks are non-intrusive enough to run on a production system. (The exception is testing of local email delivery configurations.)
+ >
+ > Areas I'd love to see improvement:
+ >
+ > * Shortening the edit-upload-bootstrap-test cycle
+ > * Automating the bootstrap in the context of testing
+ > * Adding rspec primitives for Chef-related testing, which might
+ >   include support for multiple platforms
+ >
+ > As an example of rspec primitives, instead of:
+ >
+ >     it "should respond on port 80" do
+ >       lambda {
+ >         TCPSocket.open(@server, 'http')
+ >       }.should_not raise_error
+ >     end
+ >
+ > I'd like to write:
+ >
+ >     it { should respond_on_port(80) }
+ >
+ > Rspec supports the syntactic sugar; it's just a matter of adding some "matcher" plugins.
+ >
+ > How do other chef users verify that recipes work as expected?
+ >
+ > I'm not sure how applicable my approach is to opscode/cookbooks because it relies on having a specific server configuration and can only test a cookbook in the context of that single server. If we automated the bootstrap step so it could be embedded into the rspec setup blocks, it would be possible to test a cookbook in several sample contexts, but the time required to setup each server instance might be prohibitive.
+ >
+
+
+ * Andrew Crump <acrump@gmail.com>
+
+ > Integration tests that exercise the service you are building definitely give you the most bang for buck.
+ >
+ > We found the feedback cycle slow as well so I wrote chefspec which builds on RSpec to support unit testing cookbooks:
+ >
+ > https://github.com/acrmp/chefspec
+ >
+ > This basically fakes a convergence and allows you to make assertions about the created resources. At first glance Chef's declarative nature makes this less useful, but once you start introducing conditional execution I've found this to be a time saver.
+ >
+ > If you're looking to do CI (which you should be) converging twice goes some way to verifying that your recipes are idempotent.
+ >
+ > knife cookbook test is a useful first gate for CI:
+ >
+ > http://wiki.opscode.com/display/chef/Knife#Knife-test
data/notes/design_notes-cookbook_event_ordering.md ADDED
@@ -0,0 +1,212 @@
+ # Cookbook event ordering
+
+
+ Most cookbooks have some set of the following phases:
+
+ * base configuration
+ * announce component
+   - before discovery, so it can be found
+   - currently done in the converge stage -- some aspects might be incompletely defined?
+
+ * register apt repository, if any
+ * create daemon user
+   - before directories, so we can set permissions
+   - before package install, so the uid is stable
+ * install, as package, git deploy or install from release
+   - often have to halt legacy services -- config files don't exist yet
+ * create any remaining directories
+   - after package install, so it has final say
+ * install plugins
+   - after directories, before config files
+ * define service
+   - before config file creation, so we can notify it
+   - can't start it yet because there are no config files
+ * discover component (own or other, same or other machines)
+ * write config files (notify service of changes)
+   - must follow everything else, so the info is current
+ * register a minidash dashboard
+ * trigger start (or restart) of service
+
+
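The "define the service before the config files, start it last" constraint above can be sketched as a toy Ruby model of a converge (illustrative only -- this is not Chef's implementation; resource names are made up):

```ruby
# Toy converge: resources run in declared order; config-file changes queue a
# delayed notification; delayed notifications fire after everything else.
class TinyConverge
  def initialize
    @steps   = []
    @delayed = []
  end

  def step(name, &blk)
    @steps << [name, blk]
  end

  # Queue a notification that fires once, after all resources have run.
  def notify_delayed(name)
    @delayed << name unless @delayed.include?(name)
  end

  def run
    log = []
    @steps.each do |name, blk|
      log << name
      blk&.call(self)
    end
    @delayed.each { |name| log << name }   # delayed notifications fire last
    log
  end
end

c = TinyConverge.new
c.step('user[hadoop]')                                # uid stable before install
c.step('package[hadoop]')
c.step('directory[/var/log/hadoop]')                  # after package, so it has final say
c.step('service[hadoop] (defined, action :nothing)')  # exists, so templates can notify it
c.step('template[core-site.xml]') { |cv| cv.notify_delayed('service[hadoop] restart') }
ORDER = c.run
```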
+ The Run List is:
+
+     [role[systemwide], role[chef_client], role[ssh], role[nfs_client],
+      role[volumes], role[package_set], role[org_base], role[org_users],
+
+      role[hadoop],
+      role[hadoop_s3_keys],
+      role[cassandra_server], role[zookeeper_server],
+      role[flume_master], role[flume_agent],
+      role[ganglia_master],
+      role[ganglia_agent], role[hadoop_namenode], role[hadoop_datanode],
+      role[hadoop_jobtracker], role[hadoop_secondarynn], role[hadoop_tasktracker],
+      role[hbase_master], role[hbase_regionserver], role[hbase_stargate],
+      role[redis_server], role[mysql_client], role[redis_client],
+      role[cassandra_client], role[elasticsearch_client], role[jruby], role[pig],
+      recipe[ant], recipe[bluepill], recipe[boost], recipe[build-essential],
+      recipe[cron], recipe[git], recipe[hive], recipe[java::sun], recipe[jpackage],
+      recipe[jruby], recipe[nodejs], recipe[ntp], recipe[openssh], recipe[openssl],
+      recipe[rstats], recipe[runit], recipe[thrift], recipe[xfs], recipe[xml],
+      recipe[zabbix], recipe[zlib], recipe[apache2], recipe[nginx],
+      role[el_ridiculoso_cluster], role[el_ridiculoso_gordo], role[minidash],
+      role[org_final], recipe[hadoop_cluster::config_files], role[tuning]]
+
+ The Run List expands to:
+
+     build-essential, motd, zsh, emacs, ntp, nfs, nfs::client, xfs,
+     volumes::mount, volumes::resize, package_set,
+
+     hadoop_cluster,
+     hadoop_cluster::minidash,
+
+     cassandra, cassandra::install_from_release,
+     cassandra::autoconf, cassandra::server, cassandra::jna_support,
+     cassandra::config_files, zookeeper::default, zookeeper::server,
+     zookeeper::config_files, flume, flume::master, flume::agent,
+     flume::jruby_plugin, flume::hbase_sink_plugin, ganglia, ganglia::server,
+     ganglia::monitor,
+
+     hadoop_cluster::namenode,
+     hadoop_cluster::datanode,
+     hadoop_cluster::jobtracker,
+     hadoop_cluster::secondarynn,
+     hadoop_cluster::tasktracker,
+
+     zookeeper::client,
+     hbase::master,
+     hbase::minidash,
+
+     minidash::server,
+     hbase::regionserver,
+     hbase::stargate,
+     redis, redis::install_from_release, redis::server,
+     mysql, mysql::client,
+     cassandra::client, elasticsearch::default,
+
+     elasticsearch::install_from_release,
+     elasticsearch::plugins,
+     elasticsearch::client,
+
+     jruby, jruby::gems,
+
+     pig,
+     pig::install_from_package,
+     pig::piggybank,
+     pig::integration,
+
+     zookeeper,
+     ant, bluepill, boost, cron,
+     git, hive,
+     java::sun, jpackage, nodejs, openssh, openssl, rstats, runit,
+     thrift, xml, zabbix, zlib, apache2, nginx, hadoop_cluster::config_files,
+     tuning::default
+
+
+ From an actual run of el_ridiculoso-gordo:
+
+      2 nfs::client
+      1 java::sun
+      1 aws::default
+      5 build-essential::default
+      3 motd::default
+      2 zsh::default
+      1 emacs::default
+      8 ntp::default
+      1 nfs::default
+      2 nfs::client
+      3 xfs::default
+     46 package_set::default
+      4 java::sun
+      8 tuning::ubuntu
+      6 apt::default
+      1 hadoop_cluster::add_cloudera_repo
+     44 hadoop_cluster::default
+      4 minidash::default
+      2 /srv/chef/file_store/cookbooks/minidash/providers/dashboard.rb
+      1 hadoop_cluster::minidash
+      2 /srv/chef/file_store/cookbooks/minidash/providers/dashboard.rb
+      1 boost::default
+      2 python::package
+      2 python::pip
+      1 python::virtualenv
+      2 install_from::default
+      7 thrift::default
+      9 cassandra::default
+      1 cassandra::install_from_release
+      6 /srv/chef/file_store/cookbooks/install_from/providers/release.rb
+      3 cassandra::install_from_release
+      1 cassandra::bintools
+      3 runit::default
+     11 cassandra::server
+      2 cassandra::jna_support
+      2 cassandra::config_files
+      6 zookeeper::default
+     15 zookeeper::server
+      3 zookeeper::config_files
+     18 flume::default
+      2 flume::master
+      3 flume::agent
+      2 flume::jruby_plugin
+      1 flume::hbase_sink_plugin
+     21 ganglia::server
+     20 ganglia::monitor
+     13 hadoop_cluster::namenode
+     11 hadoop_cluster::datanode
+     11 hadoop_cluster::jobtracker
+     11 hadoop_cluster::secondarynn
+     11 hadoop_cluster::tasktracker
+     14 hbase::default
+     11 hbase::master
+      1 hbase::minidash
+      2 /srv/chef/file_store/cookbooks/minidash/providers/dashboard.rb
+     13 minidash::server
+     11 hbase::regionserver
+     10 hbase::stargate
+      2 redis::default
+      2 redis::install_from_release
+      2 redis::default
+     16 redis::server
+      3 mysql::client
+      1 aws::default
+      7 elasticsearch::default
+      1 elasticsearch::install_from_release
+      6 /srv/chef/file_store/cookbooks/install_from/providers/release.rb
+      2 elasticsearch::plugins
+      3 elasticsearch::client
+      3 elasticsearch::config
+      1 jruby::default
+      9 /srv/chef/file_store/cookbooks/install_from/providers/release.rb
+      7 jruby::default
+     18 jruby::gems
+      2 pig::install_from_package
+      5 pig::piggybank
+      8 pig::integration
+      6 zookeeper::default
+      3 ant::default
+      5 bluepill::default
+      2 cron::default
+      1 git::default
+      1 hive::default
+      2 nodejs::default
+      4 openssh::default
+      7 rstats::default
+      1 xml::default
+     11 zabbix::default
+      1 zlib::default
+     10 apache2::default
+      2 apache2::mod_status
+      2 apache2::mod_alias
+      1 apache2::mod_auth_basic
+      1 apache2::mod_authn_file
+      1 apache2::mod_authz_default
+      1 apache2::mod_authz_groupfile
+      1 apache2::mod_authz_host
+      1 apache2::mod_authz_user
+      2 apache2::mod_autoindex
+      2 apache2::mod_dir
+      1 apache2::mod_env
+      2 apache2::mod_mime
+      2 apache2::mod_negotiation
+      2 apache2::mod_setenvif
+      1 apache2::default
+      8 nginx::default
+      9 hadoop_cluster::config_files
data/notes/design_notes-dsl_object.md ADDED
@@ -0,0 +1,55 @@
+
+
+ * is a subclass of extlib's `Mash` (Hash with indifferent access).
+   - in the
+
+ * does not have to be performant. Obviously it can't suck, but we're not going to worry about, say, the overhead of converting values to `dsl_mash`es on assignment.
+
+ ## Layering
+
+ This is the key thing. We don't prescribe *any* resolution semantics except the following:
+
+ * layers are evaluated in order to create a composite, `dup`ing as you go.
+ * while building a composite, when a later layer and the current layer collide:
+   - if the layer has merge logic, hand it off.
+   - simple + any: clobber
+   - array + array: layer value appended to composite value
+   - hash + hash: recurse
+   - otherwise: error
+
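Those collision rules can be written down as a runnable sketch (the names here are mine, not Ironfan's; `Hash#merge` and `Array#+` return fresh objects, which stands in for the `dup`-as-you-go rule):

```ruby
# hash + hash: recurse; array + array: append; simple + any: the later layer
# clobbers; a hash/array mismatch is an error.
def merge_layer(composite, layer)
  return layer if composite.nil?
  if composite.is_a?(Hash) && layer.is_a?(Hash)
    composite.merge(layer) { |_key, old_val, new_val| merge_layer(old_val, new_val) }
  elsif composite.is_a?(Array) && layer.is_a?(Array)
    composite + layer
  elsif !layer.is_a?(Hash) && !layer.is_a?(Array) ||
        !composite.is_a?(Hash) && !composite.is_a?(Array)
    layer                               # simple + any: clobber
  else
    raise TypeError, "can't layer #{layer.class} onto #{composite.class}"
  end
end

def build_composite(layers)
  layers.inject(nil) { |acc, layer| merge_layer(acc, layer) }
end
```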
+ ### no complicated resolution rules allowed
+
+ This is the key to the whole thing.
+
+ * You can very easily
+ * You can adorn an object with merge logic if it's more complicated than that
+
+ ### duping (?)
+
+ I'm not certain of the always-`dup` rule. We'll find out.
+
+ __________________________________________________________________________
+
+ ## Interface
+
+ ### setter/getter/block
+
+
+ * `self.foo` -- returns the value of foo
+ * `self.get(:foo)` -- returns the value of foo
+ * `self.foo(val)` -- sets foo to val
+ * `self.set(:foo, val)` -- sets foo to val
+ * `self.foo ||= val` -- sets foo to val if foo is unset, `false` or `nil`
+ * `self.default(:foo, val)` -- sets foo to val if foo is unset
+ * `self.unset(:foo)` -- unsets foo
+
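A minimal sketch of that contract (class and attribute names are illustrative; this is not the actual Ironfan implementation):

```ruby
class DslObject
  def initialize() @attrs = {} end
  def set(key, val)  @attrs[key.to_sym] = val  end
  def get(key)       @attrs[key.to_sym]        end
  def unset(key)     @attrs.delete(key.to_sym) end
  def default(key, val)
    set(key, val) unless @attrs.key?(key.to_sym)
    get(key)
  end

  # Declares an attribute: `obj.foo` reads, `obj.foo(val)` writes, and the
  # generated writer makes `obj.foo ||= val` work. (Passing an explicit nil
  # reads rather than writes -- a known limitation of this style.)
  def self.dsl_attr(name)
    define_method(name)       { |val = nil| val.nil? ? get(name) : set(name, val) }
    define_method("#{name}=") { |val| set(name, val) }
  end
end

class Cloud < DslObject
  dsl_attr :image
end
```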
+ ### collection attributes
+
+
+ | array | array | add or clobber
+ |
+
+ ### nested objects
+
+
+ ### type co
data/notes/design_notes-meta_discovery.md ADDED
@@ -0,0 +1,59 @@
+ @temujin9 has proposed, and it's a good proposal, that there should exist such a thing as an 'integration cookbook'.
+
+ The hadoop_cluster cookbook should describe the hadoop_cluster, the ganglia cookbook ganglia, and the zookeeper cookbook zookeeper. Each should provide hooks that are neighborly but not exhibitionist, and each should otherwise mind its own business.
+
+ The job of tying those components together should belong to a separate concern. It should know how and when to copy hbase jars into the pig home dir, or what cluster service_provide'r a redis client should reference.
+
+ ## Practical implications
+
+ * I'm going to revert out the `node[:zookeeper][:cluster_name]` attributes -- services should always announce under their own cluster.
+
+ * Until we figure out how and when to separate integrations, I'm going to isolate entanglements into their own recipes within cookbooks: so, the ganglia part of hadoop will become `ganglia_integration` or somesuch.
+
+ ## Example integrations
+
+ ### Copying jars
+
+ Pig needs jars from hbase and zookeeper. They should announce that they have jars; pig should announce its home directory; the integration should decide how and where to copy the jars.
+
+ ### Reference a service
+
+ Right now in several places we have attributes like `node[:zookeeper][:cluster_name]`, used to specify the cluster that provides_service zookeeper.
+
+ * Server recipes should never use `node[:zookeeper][:cluster_name]` -- they should always announce under their native cluster. (I'd kinda like to give `provides_service` some sugar to assume the cluster name, but need to find something backwards-compatible to use.)
+
+ * Need to take a better survey of usage among clients to determine how to do this.
+
+ * cases:
+   - the hbase cookbook refs: hadoop, zookeeper, ganglia
+   - the flume cookbook refs: zookeeper, ganglia
+   - flume agents may reference several different flume provides_service'rs
+   - an API using two different elasticsearch clusters
+
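The announce/discover pattern running through these cases can be sketched in a few lines of Ruby (method names are illustrative, not the actual silverware API): servers announce under their own cluster, and integrations look services up by service name and, optionally, cluster.

```ruby
# Toy registry: service name => list of announcements.
ANNOUNCEMENTS = Hash.new { |h, k| h[k] = [] }

def announce(cluster, service, info = {})
  ANNOUNCEMENTS[service] << info.merge(cluster: cluster)
end

def discover_all(service, cluster = nil)
  found = ANNOUNCEMENTS[service]
  cluster ? found.select { |ann| ann[:cluster] == cluster } : found
end
```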
+ ### Logging, monitoring
+
+ * tell flume you have logs to pick up
+ * tell ganglia to monitor you
+
+ ### Service Dashboard
+
+ Let everything with a dashboard say so, and then let one resource create a page that links to each.
+
+
+ ________________________
+
+ These ideas are still forming; input is welcome.
+
+
+
+
+ Sometimes we want:
+
+ * if there are volumes marked for 'mysql-table_data', use that; otherwise, the 'persistent' datastore, if any; else the 'bulk' datastore, if any; else the 'fallback' datastore (which is guaranteed to exist).
+
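That fallback chain is easy to sketch (the data shape is assumed: each volume is a hash with a `:tags` array, and a volume tagged 'fallback' is guaranteed to exist):

```ruby
# Return the first volume matching the preferred tags, in order,
# falling back to the guaranteed 'fallback' volume.
def volume_for(volumes, *preferred_tags)
  (preferred_tags + ['fallback']).each do |tag|
    hit = volumes.find { |vol| vol[:tags].include?(tag) }
    return hit if hit
  end
end

# e.g. volume_for(volumes, 'mysql-table_data', 'persistent', 'bulk')
```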
+ * IP addresses (or hostnames):
+   - `[:private_ip, :public_ip]`
+   - `[:private_ip]`
+   - `:primary_ip`
+
+ * .