judo 0.4.0 → 0.4.1

@@ -6,16 +6,16 @@ to get going and powerful.
  ## CONCEPTS

  Servers and Groups. Servers are identified by a naked string. Groups always
- have a colon prefix.
+ have a colon prefix. The special group :all refers to all groups. A name prefixed
+ with a caret (^) will exclude that selection.

- $ judo restart my_server_1 ## this restarts my_server_1
- $ judo restart my_server_1 other_server ## this restarts my_server_1 and other_server
+ $ judo restart myserver1 ## this restarts myserver1
+ $ judo restart myserver1 myserver2 ## this restarts myserver1 and myserver2
  $ judo restart :default ## this restarts all servers in the :default group
- $ judo restart sammy :default :db ## this restarts all servers in the :default group, the :db group, and a server named sammy
-
- Config Repo: Judo keeps it configuration for each server group in a git repo.
-
- State Database: Judo keeps cloud state and information on specific servers in SimpleDB.
+ $ judo restart myserver1 :default :db ## this restarts all servers in the :default group, the :db group, and a server named myserver1
+ $ judo restart :all ## this restarts all servers in all groups
+ $ judo restart :default ^myserver3 ## this restarts all servers in the default group except myserver3
+ $ judo restart :all ^:default ## this restarts all servers except those in the default group

  Server: Judo does not track EC2 instances, but Servers, which are a collection
  of state, such as EBS volumes, elastic IP addresses and a current EC2 instance
@@ -28,47 +28,58 @@ You will need an AWS account with EC2, S3 and SDB all enabled.

  Setting up a new judo repo named "my_cloud" would look like this:

+ $ export AWS_SECRET_ACCESS_KEY="..."
+ $ export AWS_ACCESS_KEY_ID="..."
  $ mkdir my_cloud
  $ cd my_cloud
- $ judo setup --accessid AWS_ACCESS_ID --secret AWS_KEY --bucket BUCKET
-
- Note: you can use the AWS_SECRET_ACCESS_KEY, AWS_ACCESS_KEY_ID and JUDO_BUCKET
- enviornment variables instead of the command line switches.
+ $ judo setup --bucket BUCKET

  The 'judo setup' command will make a .judo folder to store your EC2 keys and S3
  bucket. It will also make a folder named "default" to hold the default server
- config.
-
- To view all of the groups and servers running in those group you can type:
-
- $ judo groups
+ config. Feel free to examine the default folder. It consists of some example
+ files for a simple server.

  To launch a default server, you create it:

- $ judo create my_server_1:default
- ---> Creating server my_server_1... done (0.6s)
+ $ judo create :default ## make one :default server - have judo pick the name
+ ---> Creating server default1... done (0.6s)
+ $ judo create 2:default ## make two :default servers - have judo pick the name
+ ---> Creating server default2... done (0.5s)
+ ---> Creating server default3... done (0.7s)
+ $ judo create myserver1:default ## make one :default server named myserver1
+ ---> Creating server myserver1... done (0.6s)
  $ judo list
  SERVERS
  --------------------------------------------------------------------------------
- my_server_1 default v12 m1.small ebs:0
- $ judo start my_server_1
- ---> Starting server my_server_1... done (2.3s)
+ default1 default v1 m1.small ebs:0
+ default2 default v1 m1.small ebs:0
+ default3 default v1 m1.small ebs:0
+ myserver1 default v1 m1.small ebs:0
+ $ judo start myserver1
+ ---> Starting server myserver1... done (2.3s)
+ $ judo launch myserver2:default ## launch does a create and a start in 1 step
+ ---> Creating server myserver2... done (0.6s)
+ ---> Starting server myserver2... done (2.9s)
  $ judo list
  SERVERS
  --------------------------------------------------------------------------------
- my_server_1 default v1 i-6fdf8d09 m1.small running ebs:0
+ default1 default v1 m1.small ebs:0
+ default2 default v1 m1.small ebs:0
+ default3 default v1 m1.small ebs:0
+ myserver1 default v1 i-6fdf8d09 m1.small running ebs:0
+ myserver2 default v1 i-49cef122 m1.small running ebs:0

- You can examine a groups config by looking in the group folder in the repo. The
+ You can examine a group's config by looking in the group folder in the repo. The
  default group will look something like this.

  $ cat default/config.json
  {
- "key_name":"judo14",
- "instance_type":"m1.small",
- "ami32":"ami-bb709dd2", // public ubuntu 9.10 ami - 32 bit
- "ami64":"ami-55739e3c", // public ubuntu 9.10 ami - 64 bit
- "user":"ubuntu",
+ "ami32":"ami-2d4aa444", // public ubuntu 10.04 ami - 32 bit
+ "ami64":"ami-fd4aa494", // public ubuntu 10.04 ami - 64 bit
+ "user":"ubuntu", // this is for ssh access - defaults to ubuntu not root
+ "kuzushi_version": "0.0.54", // this will pin the version of kuzushi the server will boot with
  "security_group":"judo",
+ "example_config": "example_mode",
  "availability_zone":"us-east-1d"
  }

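Taken together, the walkthrough above presumably leaves a repo laid out roughly like the sketch below (illustrative only; the exact example files inside the default group are whatever 'judo setup' generates, and setup.sh is inferred from the group examples later in this README):

    my_cloud/
      .judo/           # created by 'judo setup' - stores the EC2 keys and S3 bucket
      default/
        config.json    # the group config shown above
        setup.sh       # example setup script run on the server at boot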
@@ -78,97 +89,54 @@ To commit a group do the following.
  $ judo commit :default
  Compiling version 2... done (1.2s)

- We can now see that 'my_server_1' is running and running with version 1 of the
- config. We can create and start a server in one step with the launch command.
-
- $ judo launch my_server_2:default
- ---> Creating server my_server_2... done (0.6s)
- ---> Starting server my_server_2... done (1.6s)
+ The commands above created and started two servers named 'myserver1' and
+ 'myserver2'. To ssh into 'myserver1' you can type:

- This will create and start two servers. One named 'my_server_1' and one named
- 'my_server_2'. You can ssh into 'my_server_1' you can type:
-
- $ judo ssh my_server_1
+ $ judo ssh myserver1

  You can stop all the servers in the :default group with:

  $ judo stop :default

- You could also have typed:
-
- $ judo stop my_server_1 my_server_2
-
  ## COMMANDS

- NOTE: many servers take an argument of "[SERVERS...]". This indicates that the
- command must be run in a group folder (specifying which group of servers to work
- on). Zero or more server names can be used. If no server is named, the
- operation is run on all servers. For instance:
-
- $ judo restart primary_db backup_db
-
- This will restart only the servers named 'primary_db' and 'backup_db'. Where as
-
- $ judo restart :db
-
- will restart all servers in the group.
-
- -------------------------------------------------------------------------------
-
- $ judo create NAME
-
- Creates a new named server in the current group. Will allocate EBS and Elastic
- IP's as needed.
-
- $ judo create +N
-
- Creates N new servers where N is an integer. These servers have generic names
- (group.N). Note: servers with generic names AND no external resources (like
- EBS Volumes or Elastic IPs) will be destroyed when stopped.
-
- $ judo destroy NAME
-
- Destroy the named server. De-allocates any elastic IP's and destroys the EBS
- volumes.
-
- $ judo start [SERVERS...]
- $ judo stop [SERVERS...]
- $ judo restart [SERVERS...]
-
- Starts stops or restarts then starts the given servers.
+ $ judo --help

- $ judo launch NAME
- $ judo launch +N
+ judo launch [options] SERVER ...
+ judo create [options] SERVER ...
+ judo destroy [options] SERVER ...

- Performs a 'judo create' and a 'judo start' in one step.
+ # SERVER can be formatted as NAME or NAME:GROUP or N:GROUP
+ # where N is the number of servers to create or launch
+ # 'launch' only differs from 'create' in that it immediately starts the server

- $ judo ssh [SERVERS...]
+ judo start [options] [SERVER ...]
+ judo stop [options] [SERVER ...]
+ judo restart [options] [SERVER ...]

- SSH's into the servers given.
+ judo commit [options] GROUP

- $ judo list
-
- At the top level it will list all of the groups and how many servers are in
- each group. Within a group it will list each server and its current state.
-
- $ judo commit
+ judo snapshot [options] SERVER SNAPSHOT ## take an ebs snapshot of a server
+ judo snapshots [options] [SERVER ...] ## show current snapshots on servers
+ judo animate [options] SNAPSHOT SERVER ## create a new server from a snapshot
+ judo erase [options] SNAPSHOT ## erase an old snapshot

- Commits the current group config and files to S3. New servers launched will
- use this new config.
+ judo swap [options] SERVER SERVER ## swap elastic IP's and names on the two servers

- $ judo console [SERVERS...]
+ judo watch [options] SERVER ## watch the server's boot process
+ judo info [options] [SERVER ...]
+ judo console [options] [SERVER ...] ## shows AWS console output
+ judo ssh [options] [SERVER ...] ## ssh's into the server

- See the AWS console output for the given servers.
+ # SERVER can be formatted as NAME or NAME:GROUP
+ # or :GROUP to indicate the whole group.
+ # If no servers are listed all servers are assumed.

- $ judo ips
+ judo list [options] ## lists all servers
+ judo groups [options] ## lists all groups

- This command gives you a top down view of all elastic IP addresses allocated
- for the AWS account and what servers or instances they are attached to.
-
- $ judo volumes
-
- This command gives you a top down view of all EBS volumes allocated for the AWS
- account and what servers or instances they are attached to.
+ judo volumes [options] ## shows all EBS volumes and what they are attached to
+ judo ips [options] ## shows all elastic ips and what they are attached to

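As a concrete illustration of the snapshot commands listed above, a backup and restore round trip might look like the following; the server, group and snapshot names are hypothetical:

    $ judo snapshot mydb1 mydb1-backup     ## take an ebs snapshot of mydb1
    $ judo snapshots mydb1                 ## confirm the snapshot exists
    $ judo animate mydb1-backup mydb2:db   ## create a new server mydb2 in :db from the snapshot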
  ## EXAMPLES

@@ -179,11 +147,12 @@ A couchdb server:
  ### ./couchdb/config.json

  {
- // dont repeat yourself - import the basic config
- "import" : "default",
- // its a db so we're going to want to have a static ip
+ "ami32":"ami-2d4aa444", // public ubuntu 10.04 ami - 32 bit
+ "ami64":"ami-fd4aa494", // public ubuntu 10.04 ami - 64 bit
+ "user":"ubuntu", // this is for ssh access - defaults to ubuntu not root
+ "security_group":"judo",
+ "availability_zone":"us-east-1d",
  "elastic_ip" : true,
- // only need 1 package
  "packages" : "couchdb",
  "volumes" : { "device" : "/dev/sde1",
  "media" : "ebs",
@@ -191,38 +160,35 @@ A couchdb server:
  "format" : "ext3",
  // this is where couchdb looks for its data by default
  "mount" : "/var/lib/couchdb/0.10.0",
- // make sure the data is owned by the couchdb user
  "user" : "couchdb",
- "group" : "couchdb",
+ "group" : "couchdb" }
+ // make sure the data is owned by the couchdb user
  // bounce couch since the data dir changed
- "after" : "#!/bin/bash\n service couchdb restart\n" }
  }

- A memcached server:
+ ### ./couchdb/setup.sh
+
+ service couchdb restart

  ### ./memcache/config.json
  {
- // dont repeat yourself - import the basic config
- "import" : "default",
- // its a data store so we're going to want to have a static ip
+ "ami32":"ami-2d4aa444", // public ubuntu 10.04 ami - 32 bit
+ "ami64":"ami-fd4aa494", // public ubuntu 10.04 ami - 64 bit
+ "user":"ubuntu", // this is for ssh access - defaults to ubuntu not root
+ "security_group":"judo",
+ "availability_zone":"us-east-1d",
  "elastic_ip" : true,
- // only need 1 package
- "packages" : "memcached",
- "instance_type" : "m1.xlarge",
- "files" : [
- { "file" : "/etc/memcached.conf",
- "template" : "memcached.conf.erb" },
- { "file" : "/etc/default/memcached",
- "source" : "memcached-default" },
- "after" : "#!/bin/bash\n service memcached start\n"
+ "instance_type" : "m1.xlarge"
  }

- ### ./memcache/files/memcached-default
+ ### ./memcache/setup.sh

- # Set this to yes to enable memcached.
- ENABLE_MEMCACHED=yes
+ apt-get install -y memcached
+ echo 'ENABLE_MEMCACHED=yes' > /etc/default/memcached
+ kuzushi-erb memcached.conf.erb > /etc/memcached.conf
+ service memcached start

- ### ./memcache/templates/memcached.conf.erb
+ ### ./memcache/memcached.conf.erb

  -d
  logfile /var/log/memcached.log
@@ -236,11 +202,13 @@ A redis server with a 2 disk xfs raid 0:
  ### ./redis/config.json

  {
- // dont repeat yourself - import the basic config
- "import" : "default",
+ "ami32":"ami-2d4aa444", // public ubuntu 10.04 ami - 32 bit
+ "ami64":"ami-fd4aa494", // public ubuntu 10.04 ami - 64 bit
+ "user":"ubuntu", // this is for ssh access - defaults to ubuntu not root
+ "security_group":"judo",
+ "availability_zone":"us-east-1d",
  "elastic_ip" : true,
  "instance_type" : "m2.xlarge",
- "local_packages" : { "package" : "redis-server_1.2.5-1", "source" : "http://http.us.debian.org/debian/pool/main/r/redis/" },
  "volumes" : [{ "device" : "/dev/sde1",
  "media" : "ebs",
  "scheduler" : "deadline",
@@ -253,38 +221,34 @@ A redis server with a 2 disk xfs raid 0:
  "media" : "raid",
  "mount" : "/var/lib/redis",
  "drives" : [ "/dev/sde1", "/dev/sde2" ],
- "user" : "redis",
- "group" : "redis",
  "level" : 0,
  "format" : "xfs" }]
  }

- ## CONFIG - LAUNCHING THE SERVER
+ ### ./redis/redis-server_1.2.6-1_i686.deb
+
+ ## the deb package can be included in the folder and pushed to the server

- The easiest way to make a judo config is to start with a working example and
- build from there. Complete documentation of all the options are below. Note:
- you can add keys and values NOT listed here and they will not harm anything, in
- fact they will be accessible (and useful) in the erb templates you may include.
+ ### ./redis/setup.sh

- "key_name":"judo123",
+ dpkg -i redis-server_1.2.6-1_i686.deb
+ chown redis:redis -R /var/lib/redis
+ service redis restart

- This specifies the name of the EC2 keypair passed to the EC2 instance on
- launch. Normally you never need to set this up as it is setup for you in the
- default config. The system is expecting a registered keypair in this case
- named "keypair123" with a "keypair123.pem" file located in a subfolder named
- "keypairs".
+ ## CONFIG - LAUNCHING THE SERVER

  "instance_type":"m1.small",

- Specify the instance size for the server type here. See:
- http://aws.amazon.com/ec2/instance-types/
+ Specify the instance type for the server here. See:
+ http://aws.amazon.com/ec2/instance-types/. If nothing is specified, m1.small is
+ used.

- "ami32":"ami-bb709dd2",
- "ami64":"ami-55739e3c",
- "user":"ubuntu",
+ "ami32":"ami-2d4aa444",
+ "ami64":"ami-fd4aa494",
+ "user":"ubuntu",

  This is where you specify the AMI's to use. The defaults (above) are the
- ubuntu 9.10 public AMI's. The "user" value is which user has the keypair
+ ubuntu 10.04 public AMI's. The "user" value is which user has the keypair
  bound to it for ssh'ing into the server.

  "security_group":"judo",
@@ -303,18 +267,6 @@ If this is true, an elastic IP will be allocated for the server when it is
  created. This means that if the server is rebooted it will keep the same IP
  address.

- "import" : "default",
-
- This command is very import and allows you inherit the configurations and files
- from other groups. If you wanted to make a group called 'mysql' that was
- exactly like the default group except it installed the mysql package and ran on
- a m2.4xlarge instance type you could specify it like this:
-
- { "import : "default", "packages" : [ "mysql" ], "instance_type" : "m2.4xlarge" }
-
- and save yourself a lot of typing. You could further subclass by making a new
- group and importing this config.
-
  "volumes" : [ { "device" : "/dev/sde1", "media" : "ebs", "size" : 64 } ],

  You can specify one or more volumes for the group. If the media is of type
@@ -324,177 +276,6 @@ media is anything other than "ebs" judo will ignore the entry. The EBS drives
  are tied to the server and attached as the specified device when started. Only
  when the server is destroyed are the EBS drives deleted.

- ## CONFIG - CONTROLLING THE SERVER
-
- Judo uses kuzushi (a ruby gem) to control the server once it boots and will
- feed the config and files you have committed with 'judo commit' to it. At its
- core, kuzushi is a tool to run whatever custom scripts you need in order to put
- the server into the state you want. If you want to use puppet or chef to
- bootstrap your server. Put the needed commands into a script and run it. If
- you just want to write your own shell script, do it. Beyond that kuzushi has
- an array of tools to cover many of the common setup steps to prevent you from
- having to write scripts to reinvent the wheel. The hooks to run your scripts
- come in three forms.
-
- "before" : "script1.sh", // a single script
- "init" : [ "script2.rb", "script3.pl" ], // an array of scripts
- "after" : "#!/bin/bash\n service restart mysql\n", // an inline script
-
- Each of the hooks can refer to a single script (located in the "scripts" subfolder),
- or a list of scripts, or an inline script which can be embedded in the config data.
- Inline scripts are any string beginning with the magic unix "#!".
-
- The "before" hook runs before all other actions. The "after" hook runs after
- all other actions. The "init" hook runs right before the "after" hook but only
- on the server's very first boot. It will be skipped on all subsequent boots.
-
- These three hooks can be added to to any hash '{}' in the system.
-
- "files" : [ { "file" : "/etc/apache2/ports.conf" ,
- "before" : "stop_apache2.sh",
- "after" : "start_apache2.sh" } ],
-
- This example runs the "stop_apache2.sh" script before installing the ports.conf
- file and runs 'start_apach2.sh" after installing it. If there was some one time
- formatting to be done we could add an "init" hook as well.
-
- After running "before" and before running "init" and "after" the following
- hooks will run in the following order:
-
- "packages" : [ "postgresql-8.4", "postgresql-server-dev-8.4", "libpq-dev" ],
-
- Packages listed here will be installed via 'apt-get install'.
-
- "local_packages" : [ "fathomdb_0.1-1" ],
- "local_packages" : [{ "package" : "redis-server_1.2.5-1", "source" : "http://http.us.debian.org/debian/pool/main/r/redis/" }],
-
- The "local_packages" hook is for including debs. Either hand compiled ones you
- have included in the git repo, or ones found in other repos. Judo will include
- both the i386 and amd64 versions of the package in the commit.
-
- In the first case judo will look in the local packages subfolder for
- "fathomdb_0.1-1_i386.deb" as well as "fathomdb_0.1-1_amd64.deb". In the second
- case it will attempt to use curl to fetch the following URLs.
-
- http://http.us.debian.org/debian/pool/main/r/redis/redis-server-1.2.5-1_i386.deb
- http://http.us.debian.org/debian/pool/main/r/redis/redis-server-1.2.5-1_amd64.deb
-
- Both types of local packages can be intermixed in config.
-
- "gems" : [ "thin", "rake", "rails", "pg" ],
-
- The "gems" hook lists gems to be installed on the system on boot via "gem install ..."
-
- "volumes" : [ { "device" : "/dev/sde1",
- "media" : "ebs",
- "size" : 64,
- "format" : "ext3",
- "scheduler" : "deadline",
- "label" : "/wal",
- "mount" : "/wal",
- "mount_options" : "nodev,nosuid,noatime" },
- { "device" : "/dev/sdf1",
- "media" : "ebs",
- "size" : 128,
- "scheduler" : "cfq" },
- { "device" : "/dev/sdf2",
- "media" : "ebs",
- "size" : 128,
- "scheduler" : "cfq" },
- { "media" : "tmpfs",
- "options" : "size=500M,mode=0744",
- "mount" : "/var/lib/stats",
- "user" : "stats",
- "group" : "stats" },
- { "device" : "/dev/md0",
- "media" : "raid",
- "mount" : "/database",
- "drives" : [ "/dev/sdf1", "/dev/sdf2" ],
- "level" : 0,
- "chunksize" : 256,
- "readahead" : 65536,
- "format" : "xfs",
- "init" : "init_database.sh" } ],
-
- The most complex and powerful hook is "volumes". While volumes of media type
- "ebs" are created when the server is created the media types of "raid" and
- "tmpfs" are also supported. If a format is specified, kuzushi will format the
- volume on the server's very first boot. Currently "xfs" and "ext3" are the
- only formats supported. Using "xfs" will install the "xfsprogs" package
- implicitly. If a label is specified it will be set at format time. If "mount"
- is specified it will be mounted there on boot, with "mount_options" if
- specified. A "readahead" can be set to specify the readahead size, as well as
- a scheduler which can be "noop", "cfq", "deadline" or "anticipatory". Volumes
- of type "raid" will implicitly install the "mdadm" package and will expect a
- list of "drives", a "level" and a "chunksize".
-
- Kuzushi will wait for all volumes of media "ebs" to attach before proceeding
- with mounting and formatting.
-
- "files" : [ { "file" : "/etc/postgresql/8.4/main/pg_hba.conf" },
- { "file" : "/etc/hosts",
- "source" : "hosts-database" },
- { "file" : "/etc/postgresql/8.4/main/postgresql.conf",
- "template" : "postgresql.conf-8.4.erb" } ],
-
- The "files" hook allows you to install files in the system. In the first example
- it will install a ph_hba.conf file. Since no source or template is given it will
- look for this file in the "files" subdirectory by the same name.
-
- The second example will install "/etc/hosts" but will pull the file
- "hosts-database" from the "files" subfolder.
-
- The third example will dynamically generate the postgresql.conf file from an erb
- template. The erb template will have access to two special variables to help
- it fill out its proper options. The variable "@config" will have the hash of data
- contained in json.conf including the data imported in via the "import" hook. There
- will also be a "@system" variable which will have all the system info collected by
- the ruby gem "ohai".
-
- "crontab" : [ { "user" : "root", "file" : "crontab-root" } ],
-
- The "crontab" hook will install a crontab file from the "crontabs" subfolder with
- a simple "crontab -u #{user} #{file}".
-
- -------------------------------------------------------------------------------
- CONFIG - DEBUGGING KUZUSHI
- -------------------------------------------------------------------------------
-
- If something goes wrong its good to understand how judo uses juzushi to manage
- the server. First, judo sends a short shell script in the EC2 user_data in to
- launch kuzushi. You can see the exact shell script sent by setting the
- environment variable "JUDO_DEBUG" to 1.
-
- $ export JUDO_DEBUG=1
- $ judo launch +1
-
- This boot script will fail if you choose an AMI that does not execute the ec2
- user data on boot, or use apt-get, or have rubygems 1.3.5 or higher in its
- package repo.
-
- You can log into the server and watch kuzushi's output in /var/log/kuzushi.log
-
- $ judo start my_server
- ...
- $ judo ssh my_server
- ubuntu:~$ sudo tail -f /var/log/kuzushi.log
-
- If you need to re-run kuzushi manually, the command to do so is either (as root)
-
- # kuzushi init S3_URL
-
- or
-
- # kuzushi start S3_URL
-
- Kuzushi "init" if for a server's first boot and will run "init" hooks, while
- "start" is for all other boots. The S3_URL is the url of the config.json and
- other files commited for this type of server. To see exactly what command
- was run on this specific server boot check the first line of kuzushi.log.
-
- ubuntu:~$ sudo head -n 1 /var/log/kuzushi.log
-
-
  ## Meta

  Created by Orion Henry and Adam Wiggins. Forked from the gem 'sumo'.
data/VERSION CHANGED
@@ -1 +1 @@
- 0.4.0
+ 0.4.1
data/bin/judo CHANGED
@@ -18,7 +18,7 @@ Usage: judo launch [options] SERVER ...
  judo create [options] SERVER ...
  judo destroy [options] SERVER ...

- # SERVER can be formatted as NAME or NAME:GROUP or +N or +N:GROUP
+ # SERVER can be formatted as NAME or NAME:GROUP or N:GROUP
  # where N is the number of servers to create or launch
  # 'launch' only differs from 'create' in that it immediately starts the server

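For illustration, the N:GROUP form described in the usage text above could be used like this (the :db group is hypothetical):

    $ judo launch 3:db    ## create and start three servers in the :db group with generated names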
@@ -1,5 +1,7 @@
  #!/bin/bash

+ kuzushi-setup ## this will wait for all the volumes to attach and take care of mounting and formatting them
+
  if [ "$JUDO_FIRST_BOOT" ] ; then
  ## do some setup on the first boot only
  fi
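The JUDO_FIRST_BOOT guard above is the natural place for one-time provisioning. A minimal sketch, in the style of the group setup.sh examples in the README (the package name is only an example, not part of the gem):

    if [ "$JUDO_FIRST_BOOT" ] ; then
        apt-get install -y couchdb    ## one-time work runs only on the very first boot
    fi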
@@ -29,6 +29,7 @@
  ## url - the URL to the group version file in the s3 bucket
  %>

+ set -x

  export JUDO_ID='<%= id %>'
  export JUDO_NAME='<%= name %>'
@@ -49,6 +50,5 @@ gem install kuzushi --no-rdoc --no-ri --version '<%= config['kuzushi_version'] |
  echo 'export PATH=`ruby -r rubygems -e "puts Gem.bindir"`:$PATH' >> /etc/profile ; . /etc/profile
  export LOG=/var/log/kuzushi.log

- kuzushi-setup >> $LOG ## this will properly mount / configure and formal all ebs drives
  bash setup.sh >> $LOG

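Since setup output still lands in /var/log/kuzushi.log (the LOG variable above), the debugging tip from the 0.4.0 README presumably still applies (the server name here is illustrative):

    $ judo ssh myserver1
    ubuntu:~$ sudo tail -f /var/log/kuzushi.log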
metadata CHANGED
@@ -5,8 +5,8 @@ version: !ruby/object:Gem::Version
  segments:
  - 0
  - 4
- - 0
- version: 0.4.0
+ - 1
+ version: 0.4.1
  platform: ruby
  authors:
  - Orion Henry