judo 0.0.7 → 0.0.8

Files changed (3)
  1. data/{README.rdoc → README.markdown} +171 -186
  2. data/VERSION +1 -1
  3. metadata +4 -4
@@ -1,11 +1,9 @@
- = Judo
+ # Judo
 
  Judo is a tool for managing a cloud of ec2 servers. It aims to be both simple
  to get going and powerful.
 
- -------------------------------------------------------------------------------
- CONCEPTS
- -------------------------------------------------------------------------------
+ ## CONCEPTS
 
  Config Repo: Judo keeps its configuration for each server group in a git repo.
 
@@ -16,18 +14,16 @@ of state, such as EBS volumes, elastic IP addresses and a current EC2 instance
  ID. This allows you to abstract all the elastic EC2 concepts into a more
  traditional concept of a static Server.
 
- -------------------------------------------------------------------------------
- STARTING
- -------------------------------------------------------------------------------
+ ## STARTING
 
  You will need an AWS account with EC2, S3 and SDB all enabled.
 
  Setting up a new judo repo named "my_cloud" would look like this:
 
- $ mkdir my_cloud
- $ cd my_cloud
- $ git init
- $ judo init
+ $ mkdir my_cloud
+ $ cd my_cloud
+ $ git init
+ $ judo init
 
  The 'judo init' command will make a .judo folder to store your EC2 keys and S3
  bucket. It will also make a folder named "default" to hold the default server
@@ -35,28 +31,28 @@ config.
 
  To view all of the groups and servers running in those groups you can type:
 
- $ judo list
- SERVER GROUPS
- default 0 servers
+ $ judo list
+ SERVER GROUPS
+ default 0 servers
 
  To launch a default server you need to cd into the default folder:
 
- $ cd default
- $ judo create my_server_1
- ---> Creating server my_server_1... done (0.6s)
- $ judo list
- SERVER IN GROUP my_server_1
- my_server_1 m1.small ami-bb709dd2 0 volumes
- $ judo start my_server_1
- No config has been committed yet, type 'judo commit'
+ $ cd default
+ $ judo create my_server_1
+ ---> Creating server my_server_1... done (0.6s)
+ $ judo list
+ SERVER IN GROUP my_server_1
+ my_server_1 m1.small ami-bb709dd2 0 volumes
+ $ judo start my_server_1
+ No config has been committed yet, type 'judo commit'
 
  The server has now been created but cannot be launched because the config has
  not been committed. Committing the config loads the config.json and all of the
  scripts and packages it needs to run into your S3 bucket. The config probably
  looks something like this:
 
- $ cat config.json
- {
+ $ cat config.json
+ {
    "key_name":"judo14",
    "instance_size":"m1.small",
    "ami32":"ami-bb709dd2", // public ubuntu 9.10 ami - 32 bit
@@ -64,7 +60,7 @@ looks something like this:
    "user":"ubuntu",
    "security_group":"judo",
    "availability_zone":"us-east-1d"
- }
+ }
 
  The default config uses the public ubuntu 9.10 ami's. It runs in the judo
  security group and a judo key pair (which were made during the init process).
@@ -72,220 +68,211 @@ The user parameter is the user the 'judo ssh' command attempts to ssh in using
  the keypair. Other debian based distros can be used assuming they have current
  enough installations of ruby (1.8.7) and rubygems (1.3.5).
 
- $ judo commit
- Compiling version 1
- a default
- a default/config.json
- Uploading to s3...
- $ judo start my_server_1
- ---> Starting server my_server_1... done (2.3s)
- ---> Acquire hostname... ec2-1-2-3-4.compute-1.amazonaws.com (49.8s)
- ---> Wait for ssh... done (9.8s)
- $ judo list
- SERVER IN GROUP default
- my_server_1 v1 i-80000000 m1.small ami-bb709dd2 running 0 volumes ec2-1-2-3-4.compute-1.amazonaws.com
+ $ judo commit
+ Compiling version 1
+ a default
+ a default/config.json
+ Uploading to s3...
+ $ judo start my_server_1
+ ---> Starting server my_server_1... done (2.3s)
+ ---> Acquire hostname... ec2-1-2-3-4.compute-1.amazonaws.com (49.8s)
+ ---> Wait for ssh... done (9.8s)
+ $ judo list
+ SERVER IN GROUP default
+ my_server_1 v1 i-80000000 m1.small ami-bb709dd2 running 0 volumes ec2-1-2-3-4.compute-1.amazonaws.com
 
  We can now see that 'my_server_1' is up and running with version 1 of the
  config. We can create and start a server in one step with the launch command.
 
- $ judo launch my_server_2
- ---> Creating server my_server_2... done (0.6s)
- ---> Starting server my_server_2... done (1.6s)
- ---> Acquire hostname... ec2-1-2-3-5.compute-1.amazonaws.com (31.1s)
- ---> Wait for ssh... done (6.1s)
+ $ judo launch my_server_2
+ ---> Creating server my_server_2... done (0.6s)
+ ---> Starting server my_server_2... done (1.6s)
+ ---> Acquire hostname... ec2-1-2-3-5.compute-1.amazonaws.com (31.1s)
+ ---> Wait for ssh... done (6.1s)
 
  This will create and start two servers. One named 'my_server_1' and one named
  'my_server_2'. To ssh into 'my_server_1' you can type:
 
- $ judo ssh my_server_1
+ $ judo ssh my_server_1
 
  You can stop all the servers with:
 
- $ judo stop
+ $ judo stop
 
  Note that since no name was specified it will stop all servers in the group.
  You could also have typed:
 
- $ judo stop my_server_1 my_server_2
+ $ judo stop my_server_1 my_server_2
 
- -------------------------------------------------------------------------------
- COMMANDS
- -------------------------------------------------------------------------------
+ ## COMMANDS
 
- NOTE: many servers take an argument of "[SERVERS...]". This indicates that the
+ NOTE: many commands take an argument of "[SERVERS...]". This indicates that the
  command must be run in a group folder (specifying which group of servers to work
  on). Zero or more server names can be used. If no server is named, the
  operation is run on all servers. For instance:
 
- $ judo restart primary_db backup_db
+ $ judo restart primary_db backup_db
 
  This will restart only the servers named 'primary_db' and 'backup_db'. Whereas
 
- $ judo restart
+ $ judo restart
 
  will restart all servers in the group.
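The default-to-all behavior described above can be pictured in a few lines of Ruby. This is an illustrative sketch only, not judo's actual implementation; `resolve_servers` is a hypothetical helper name:

```ruby
# Hypothetical sketch of how "[SERVERS...]" arguments could resolve:
# no names means every server in the current group.
def resolve_servers(group, names)
  return group if names.empty?   # e.g. 'judo restart' hits all servers
  unknown = names - group
  raise ArgumentError, "unknown server(s): #{unknown.join(', ')}" unless unknown.empty?
  names                          # e.g. 'judo restart primary_db backup_db'
end

group = %w[primary_db backup_db web_1]
all_targets  = resolve_servers(group, [])
some_targets = resolve_servers(group, %w[primary_db backup_db])
```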
 
  -------------------------------------------------------------------------------
 
- $ judo create NAME
+ $ judo create NAME
 
  Creates a new named server in the current group. Will allocate EBS and Elastic
  IP's as needed.
 
- $ judo create +N
+ $ judo create +N
 
  Creates N new servers where N is an integer. These servers have generic names
  (group.N). Note: servers with generic names AND no external resources (like
  EBS Volumes or Elastic IPs) will be destroyed when stopped.
 
- $ judo destroy NAME
+ $ judo destroy NAME
 
  Destroy the named server. De-allocates any elastic IP's and destroys the EBS
  volumes.
 
- $ judo start [SERVERS...]
- $ judo stop [SERVERS...]
- $ judo restart [SERVERS...]
+ $ judo start [SERVERS...]
+ $ judo stop [SERVERS...]
+ $ judo restart [SERVERS...]
 
  Starts, stops, or restarts (stops then starts) the given servers.
 
- $ judo launch NAME
- $ judo launch +N
+ $ judo launch NAME
+ $ judo launch +N
 
  Performs a 'judo create' and a 'judo start' in one step.
 
- $ judo ssh [SERVERS...]
+ $ judo ssh [SERVERS...]
 
  SSH's into the servers given.
 
- $ judo list
+ $ judo list
 
  At the top level it will list all of the groups and how many servers are in
  each group. Within a group it will list each server and its current state.
 
- $ judo commit
+ $ judo commit
 
  Commits the current group config and files to S3. New servers launched will
  use this new config.
 
- $ judo console [SERVERS...]
+ $ judo console [SERVERS...]
 
  See the AWS console output for the given servers.
 
- $ judo ips
+ $ judo ips
 
  This command gives you a top down view of all elastic IP addresses allocated
  for the AWS account and what servers or instances they are attached to.
 
- $ judo volumes
+ $ judo volumes
 
  This command gives you a top down view of all EBS volumes allocated for the AWS
  account and what servers or instances they are attached to.
 
- -------------------------------------------------------------------------------
- EXAMPLES
- -------------------------------------------------------------------------------
+ ## EXAMPLES
 
  An example is worth a thousand words.
 
  A couchdb server:
 
- === ./couchdb/config.json
- {
- // dont repeat yourself - import the basic config
- "import" : "default",
- // its a db so we're going to want to have a static ip
- "elastic_ip" : true,
- // only need 1 package
- "packages" : "couchdb",
- "volumes" : { "device" : "/dev/sde1",
- "media" : "ebs",
- "size" : 64,
- "format" : "ext3",
- // this is where couchdb looks for its data by default
- "mount" : "/var/lib/couchdb/0.10.0",
- // make sure the data is owned by the couchdb user
- "user" : "couchdb",
- "group" : "couchdb",
- // bounce couch since the data dir changed
- "after" : "#!/bin/bash\n service couchdb restart\n" }
- }
- ===
+ ### ./couchdb/config.json
+
+ {
+ // don't repeat yourself - import the basic config
+ "import" : "default",
+ // it's a db so we're going to want to have a static ip
+ "elastic_ip" : true,
+ // only need 1 package
+ "packages" : "couchdb",
+ "volumes" : { "device" : "/dev/sde1",
+ "media" : "ebs",
+ "size" : 64,
+ "format" : "ext3",
+ // this is where couchdb looks for its data by default
+ "mount" : "/var/lib/couchdb/0.10.0",
+ // make sure the data is owned by the couchdb user
+ "user" : "couchdb",
+ "group" : "couchdb",
+ // bounce couch since the data dir changed
+ "after" : "#!/bin/bash\n service couchdb restart\n" }
+ }
 
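Note that these config files carry `//` comments, which strict JSON parsers reject. A minimal sketch of reading such a file (my own illustration, not judo's code; it assumes `//` inside a string value only ever appears in a URL, with no whitespace before the slashes):

```ruby
require 'json'

# Strip whole-line and trailing "//" comments before parsing.
# Assumption: "//" inside string values is only part of URLs like
# "http://...", which have no whitespace before "//" and so survive.
def parse_judo_config(text)
  cleaned = text.lines.map { |l| l.sub(%r{(^|\s)//.*$}, '\1') }.join
  JSON.parse(cleaned)
end

config = parse_judo_config(<<~JSON)
  {
    // don't repeat yourself - import the basic config
    "import" : "default",
    "elastic_ip" : true, // static ip for the db
    "packages" : "couchdb"
  }
JSON
```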
  A memcached server:
 
- === ./memcache/config.json
- {
- // dont repeat yourself - import the basic config
- "import" : "default",
- // its a data store so we're going to want to have a static ip
- "elastic_ip" : true,
- // only need 1 package
- "packages" : "memcached",
- "instance_size" : "m1.xlarge",
- "files" : [
- { "file" : "/etc/memcached.conf",
- "template" : "memcached.conf.erb" },
- { "file" : "/etc/default/memcached",
- "source" : "memcached-default" },
- "after" : "#!/bin/bash\n service memcached start\n"
- }
- ===
-
- === ./memcache/files/memcached-default
- # Set this to yes to enable memcached.
- ENABLE_MEMCACHED=yes
- ===
-
- === ./memcache/templates/memcached.conf.erb
- -d
- logfile /var/log/memcached.log
- ## ohai gives memory in Kb so div by 1024 to get megs
- ## use 75% of total ram (* 0.75)
- -m <%= (@system.memory["total"].to_i / 1024 * 0.75).to_i %>
- -u nobody
- ===
+ ### ./memcache/config.json
+
+ {
+ // don't repeat yourself - import the basic config
+ "import" : "default",
+ // it's a data store so we're going to want to have a static ip
+ "elastic_ip" : true,
+ // only need 1 package
+ "packages" : "memcached",
+ "instance_size" : "m1.xlarge",
+ "files" : [
+ { "file" : "/etc/memcached.conf",
+ "template" : "memcached.conf.erb" },
+ { "file" : "/etc/default/memcached",
+ "source" : "memcached-default" } ],
+ "after" : "#!/bin/bash\n service memcached start\n"
+ }
+
+ ### ./memcache/files/memcached-default
+
+ # Set this to yes to enable memcached.
+ ENABLE_MEMCACHED=yes
+
+ ### ./memcache/templates/memcached.conf.erb
+
+ -d
+ logfile /var/log/memcached.log
+ ## ohai gives memory in Kb so div by 1024 to get megs
+ ## use 75% of total ram (* 0.75)
+ -m <%= (@system.memory["total"].to_i / 1024 * 0.75).to_i %>
+ -u nobody
 
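The erb template above derives memcached's `-m` flag from ohai's memory report. The same arithmetic can be run standalone (a sketch using a stubbed hash in place of the real `@system` ohai object, and hash access instead of ohai's method access):

```ruby
require 'erb'

# Stub for the ohai data kuzushi exposes as @system; ohai reports
# memory totals as kB strings such as "1048576kB".
class TemplateScope
  def initialize(system)
    @system = system
  end

  def render(template)
    ERB.new(template).result(binding)
  end
end

# Same arithmetic as memcached.conf.erb above: kB -> MB, then 75%.
template = '-m <%= (@system["memory"]["total"].to_i / 1024 * 0.75).to_i %>'
line = TemplateScope.new("memory" => { "total" => "1048576kB" }).render(template)
# 1048576 kB -> 1024 MB -> 75% -> 768
```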
  A redis server with a 2 disk xfs raid 0:
 
- === ./redis/config.json
- {
- // dont repeat yourself - import the basic config
- "import" : "default",
- "elastic_ip" : true,
- "instance_size" : "m2.xlarge",
- "local_packages" : { "package" : "redis-server_1.2.5-1", "source" : "http://http.us.debian.org/debian/pool/main/r/redis/" },
- "volumes" : [{ "device" : "/dev/sde1",
- "media" : "ebs",
- "scheduler" : "deadline",
- "size" : 16 },
- { "device" : "/dev/sde2",
- "media" : "ebs",
- "scheduler" : "deadline",
- "size" : 16 },
- { "device" : "/dev/md0",
- "media" : "raid",
- "mount" : "/var/lib/redis",
- "drives" : [ "/dev/sde1", "/dev/sde2" ],
- "user" : "redis",
- "group" : "redis",
- "level" : 0,
- "format" : "xfs" }]
- }
- ===
-
- A postgresql server with a raid and a separate write-head log:
-
- -------------------------------------------------------------------------------
- CONFIG - LAUNCHING THE SERVER
- -------------------------------------------------------------------------------
+ ### ./redis/config.json
+
+ {
+ // don't repeat yourself - import the basic config
+ "import" : "default",
+ "elastic_ip" : true,
+ "instance_size" : "m2.xlarge",
+ "local_packages" : { "package" : "redis-server_1.2.5-1", "source" : "http://http.us.debian.org/debian/pool/main/r/redis/" },
+ "volumes" : [{ "device" : "/dev/sde1",
+ "media" : "ebs",
+ "scheduler" : "deadline",
+ "size" : 16 },
+ { "device" : "/dev/sde2",
+ "media" : "ebs",
+ "scheduler" : "deadline",
+ "size" : 16 },
+ { "device" : "/dev/md0",
+ "media" : "raid",
+ "mount" : "/var/lib/redis",
+ "drives" : [ "/dev/sde1", "/dev/sde2" ],
+ "user" : "redis",
+ "group" : "redis",
+ "level" : 0,
+ "format" : "xfs" }]
+ }
+
+ ## CONFIG - LAUNCHING THE SERVER
 
  The easiest way to make a judo config is to start with a working example and
  build from there. Complete documentation of all the options is below. Note:
  you can add keys and values NOT listed here and they will not harm anything, in
  fact they will be accessible (and useful) in the erb templates you may include.
 
- "key_name":"judo123",
+ "key_name":"judo123",
 
  This specifies the name of the EC2 keypair passed to the EC2 instance on
  launch. Normally you never need to set this up as it is setup for you in the
@@ -293,7 +280,7 @@ default config. The system is expecting a registered keypair in this case
  named "keypair123" with a "keypair123.pem" file located in a subfolder named
  "keypairs".
 
- "instance_size":"m1.small",
+ "instance_size":"m1.small",
 
  Specify the instance size for the server type here. See:
  http://aws.amazon.com/ec2/instance-types/
@@ -316,13 +303,13 @@ and name them here.
 
  What zone to launch the server in.
 
- "elastic_ip" : true,
+ "elastic_ip" : true,
 
  If this is true, an elastic IP will be allocated for the server when it is
  created. This means that if the server is rebooted it will keep the same IP
  address.
 
- "import" : "default",
+ "import" : "default",
 
  This option is very important and allows you to inherit the configurations and files
  from other groups. If you wanted to make a group called 'mysql' that was
@@ -332,7 +319,7 @@ a m2.4xlarge instance type you could specify it like this:
  and save yourself a lot of typing. You could further subclass by making a new
  group and importing this config.
 
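One way to picture this inheritance (a hypothetical sketch of the semantics, not judo's implementation; `resolve_import` is an invented helper): the importing group's keys override the imported group's defaults.

```ruby
# Hypothetical illustration of "import" semantics: start from the
# imported group's config and let the local group's keys win.
def resolve_import(groups, name)
  config = groups.fetch(name)
  parent = config["import"]
  base = parent ? resolve_import(groups, parent) : {}
  base.merge(config.reject { |k, _| k == "import" })
end

groups = {
  "default" => { "instance_size" => "m1.small", "user" => "ubuntu" },
  "mysql"   => { "import" => "default", "instance_size" => "m2.4xlarge" },
}
mysql = resolve_import(groups, "mysql")
```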
- "volumes" : [ { "device" : "/dev/sde1", "media" : "ebs", "size" : 64 } ],
+ "volumes" : [ { "device" : "/dev/sde1", "media" : "ebs", "size" : 64 } ],
 
  You can specify one or more volumes for the group. If the media is of type
  "ebs" judo will create an elastic block device with a number of gigabytes
@@ -341,9 +328,7 @@ media is anything other than "ebs" judo will ignore the entry. The EBS drives
  are tied to the server and attached as the specified device when started. Only
  when the server is destroyed are the EBS drives deleted.
 
- -------------------------------------------------------------------------------
- CONFIG - CONTROLLING THE SERVER
- -------------------------------------------------------------------------------
+ ## CONFIG - CONTROLLING THE SERVER
 
  Judo uses kuzushi (a ruby gem) to control the server once it boots and will
  feed the config and files you have committed with 'judo commit' to it. At its
@@ -355,9 +340,9 @@ an array of tools to cover many of the common setup steps to prevent you from
  having to write scripts to reinvent the wheel. The hooks to run your scripts
  come in three forms.
 
- "before" : "script1.sh", // a single script
- "init" : [ "script2.rb", "script3.pl" ], // an array of scripts
- "after" : "#!/bin/bash\n service restart mysql\n", // an inline script
+ "before" : "script1.sh", // a single script
+ "init" : [ "script2.rb", "script3.pl" ], // an array of scripts
+ "after" : "#!/bin/bash\n service mysql restart\n", // an inline script
 
  Each of the hooks can refer to a single script (located in the "scripts" subfolder),
  or a list of scripts, or an inline script which can be embedded in the config data.
@@ -369,7 +354,7 @@ on the server's very first boot. It will be skipped on all subsequent boots.
 
  These three hooks can be added to any hash '{}' in the system.
 
- "files" : [ { "file" : "/etc/apache2/ports.conf" ,
+ "files" : [ { "file" : "/etc/apache2/ports.conf",
  "before" : "stop_apache2.sh",
  "after" : "start_apache2.sh" } ],
 
@@ -380,12 +365,12 @@ formatting to be done we could add an "init" hook as well.
  After running "before" and before running "init" and "after" the following
  hooks will run in the following order:
 
- "packages" : [ "postgresql-8.4", "postgresql-server-dev-8.4", "libpq-dev" ],
+ "packages" : [ "postgresql-8.4", "postgresql-server-dev-8.4", "libpq-dev" ],
 
  Packages listed here will be installed via 'apt-get install'.
 
- "local_packages" : [ "fathomdb_0.1-1" ],
- "local_packages" : [{ "package" : "redis-server_1.2.5-1", "source" : "http://http.us.debian.org/debian/pool/main/r/redis/" }],
+ "local_packages" : [ "fathomdb_0.1-1" ],
+ "local_packages" : [{ "package" : "redis-server_1.2.5-1", "source" : "http://http.us.debian.org/debian/pool/main/r/redis/" }],
 
  The "local_packages" hook is for including debs. Either hand compiled ones you
  have included in the git repo, or ones found in other repos. Judo will include
@@ -400,19 +385,19 @@ http://http.us.debian.org/debian/pool/main/r/redis/redis-server-1.2.5-1_amd64.de
 
  Both types of local packages can be intermixed in config.
 
- "gems" : [ "thin", "rake", "rails", "pg" ],
+ "gems" : [ "thin", "rake", "rails", "pg" ],
 
  The "gems" hook lists gems to be installed on the system on boot via "gem install ..."
 
- "volumes" : [ { "device" : "/dev/sde1",
+ "volumes" : [ { "device" : "/dev/sde1",
  "media" : "ebs",
  "size" : 64,
- "format" : "ext3",
+ "format" : "ext3",
  "scheduler" : "deadline",
  "label" : "/wal",
  "mount" : "/wal",
  "mount_options" : "nodev,nosuid,noatime" },
- { "device" : "/dev/sdf1",
+ { "device" : "/dev/sdf1",
  "media" : "ebs",
  "size" : 128,
  "scheduler" : "cfq" },
@@ -425,7 +410,7 @@ The "gems" hook lists gems to be installed on the system on boot via "gem instal
  "mount" : "/var/lib/stats",
  "user" : "stats",
  "group" : "stats" },
- { "device" : "/dev/md0",
+ { "device" : "/dev/md0",
  "media" : "raid",
  "mount" : "/database",
  "drives" : [ "/dev/sdf1", "/dev/sdf2" ],
@@ -450,11 +435,11 @@ list of "drives", a "level" and a "chunksize".
  Kuzushi will wait for all volumes of media "ebs" to attach before proceeding
  with mounting and formatting.
 
- "files" : [ { "file" : "/etc/postgresql/8.4/main/pg_hba.conf" },
- { "file" : "/etc/hosts",
- "source" : "hosts-database" },
- { "file" : "/etc/postgresql/8.4/main/postgresql.conf",
- "template" : "postgresql.conf-8.4.erb" } ],
+ "files" : [ { "file" : "/etc/postgresql/8.4/main/pg_hba.conf" },
+ { "file" : "/etc/hosts",
+ "source" : "hosts-database" },
+ { "file" : "/etc/postgresql/8.4/main/postgresql.conf",
+ "template" : "postgresql.conf-8.4.erb" } ],
 
  The "files" hook allows you to install files in the system. In the first example
  it will install a pg_hba.conf file. Since no source or template is given it will
@@ -470,7 +455,7 @@ contained in config.json including the data imported in via the "import" hook. There
  will also be a "@system" variable which will have all the system info collected by
  the ruby gem "ohai".
 
- "crontab" : [ { "user" : "root", "file" : "crontab-root" } ],
+ "crontab" : [ { "user" : "root", "file" : "crontab-root" } ],
 
  The "crontab" hook will install a crontab file from the "crontabs" subfolder with
  a simple "crontab -u #{user} #{file}".
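That install step amounts to simple command construction. An illustrative sketch (`crontab_commands` is a hypothetical helper, not kuzushi's actual code):

```ruby
require 'shellwords'

# Build one "crontab -u USER FILE" per entry, with the files taken
# from the "crontabs" subfolder as described above.
def crontab_commands(entries, dir = 'crontabs')
  entries.map do |e|
    "crontab -u #{Shellwords.escape(e['user'])} #{File.join(dir, e['file'])}"
  end
end

cmds = crontab_commands([{ 'user' => 'root', 'file' => 'crontab-root' }])
```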
@@ -484,8 +469,8 @@ the server. First, judo sends a short shell script in the EC2 user_data to
  launch kuzushi. You can see the exact shell script sent by setting the
  environment variable "JUDO_DEBUG" to 1.
 
- $ export JUDO_DEBUG=1
- $ judo launch +1
+ $ export JUDO_DEBUG=1
+ $ judo launch +1
 
  This boot script will fail if you choose an AMI that does not execute the ec2
  user data on boot, or use apt-get, or have rubygems 1.3.5 or higher in its
  package repo.
 
  You can log into the server and watch kuzushi's output in /var/log/kuzushi.log
 
- $ judo start my_server
- ...
- $ judo ssh my_server
- ubuntu:~$ sudo tail -f /var/log/kuzushi.log
+ $ judo start my_server
+ ...
+ $ judo ssh my_server
+ ubuntu:~$ sudo tail -f /var/log/kuzushi.log
 
  If you need to re-run kuzushi manually, the command to do so is either (as root)
 
- # kuzushi init S3_URL
+ # kuzushi init S3_URL
 
  or
 
- # kuzushi start S3_URL
+ # kuzushi start S3_URL
 
  Kuzushi "init" is for a server's first boot and will run "init" hooks, while
  "start" is for all other boots. The S3_URL is the url of the config.json and
  other files committed for this type of server. To see exactly what command
  was run on this specific server boot check the first line of kuzushi.log.
 
- ubuntu:~$ sudo head -n 1 /var/log/kuzushi.log
+ ubuntu:~$ sudo head -n 1 /var/log/kuzushi.log
 
 
  == Meta
data/VERSION CHANGED
@@ -1 +1 @@
- 0.0.7
+ 0.0.8
metadata CHANGED
@@ -5,8 +5,8 @@ version: !ruby/object:Gem::Version
  segments:
  - 0
  - 0
- - 7
- version: 0.0.7
+ - 8
+ version: 0.0.8
  platform: ruby
  authors:
  - Orion Henry
@@ -60,9 +60,9 @@ executables:
  extensions: []
 
  extra_rdoc_files:
- - README.rdoc
+ - README.markdown
  files:
- - README.rdoc
+ - README.markdown
  - Rakefile
  - VERSION
  - bin/judo