judo 0.0.6 → 0.0.7
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- data/README.rdoc +515 -1
- data/Rakefile +0 -1
- data/VERSION +1 -1
- data/bin/judo +2 -2
- data/lib/all.rb +1 -1
- data/lib/group.rb +18 -5
- data/lib/server.rb +28 -3
- data/lib/setup.rb +7 -3
- metadata +6 -18
data/README.rdoc
CHANGED
@@ -1,8 +1,522 @@
= Judo

Judo is a tool for managing a cloud of EC2 servers. It aims to be both simple
to get going and powerful.

-------------------------------------------------------------------------------
CONCEPTS
-------------------------------------------------------------------------------

Config Repo: Judo keeps its configuration for each server group in a git repo.

State Database: Judo keeps cloud state and information on specific servers in SimpleDB.

Server: Judo does not track EC2 instances, but Servers, which are a collection
of state, such as EBS volumes, elastic IP addresses and a current EC2 instance
ID. This allows you to abstract all the elastic EC2 concepts into a more
traditional concept of a static Server.

-------------------------------------------------------------------------------
STARTING
-------------------------------------------------------------------------------

You will need an AWS account with EC2, S3 and SDB all enabled.

Setting up a new judo repo named "my_cloud" would look like this:

    $ mkdir my_cloud
    $ cd my_cloud
    $ git init
    $ judo init

The 'judo init' command will make a .judo folder to store your EC2 keys and S3
bucket. It will also make a folder named "default" to hold the default server
config.

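After 'judo init' the repo looks roughly like this (only the pieces mentioned
above are shown):

    my_cloud/
      .judo/            <- your EC2 keys and S3 bucket settings
      default/          <- the default server group
        config.json
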
To view all of the groups and the servers running in those groups you can type:

    $ judo list
    SERVER GROUPS
    default        0 servers

To launch a default server you need to cd into the default folder:

    $ cd default
    $ judo create my_server_1
    ---> Creating server my_server_1... done (0.6s)
    $ judo list
    SERVER IN GROUP default
    my_server_1    m1.small  ami-bb709dd2  0 volumes
    $ judo start my_server_1
    No config has been committed yet, type 'judo commit'

The server has now been created but cannot be launched because the config has
not been committed. Committing the config loads the config.json and all of the
scripts and packages it needs to run into your S3 bucket. The config probably
looks something like this:

    $ cat config.json
    {
        "key_name":"judo14",
        "instance_size":"m1.small",
        "ami32":"ami-bb709dd2",   // public ubuntu 9.10 ami - 32 bit
        "ami64":"ami-55739e3c",   // public ubuntu 9.10 ami - 64 bit
        "user":"ubuntu",
        "security_group":"judo",
        "availability_zone":"us-east-1d"
    }

The default config uses the public ubuntu 9.10 AMI's. It runs in the judo
security group with the judo key pair (both of which were made during the init
process). The user parameter is the user the 'judo ssh' command attempts to
ssh in as using the keypair. Other debian based distros can be used assuming
they have current enough installations of ruby (1.8.7) and rubygems (1.3.5).

    $ judo commit
    Compiling version 1
    a default
    a default/config.json
    Uploading to s3...
    $ judo start my_server_1
    ---> Starting server my_server_1... done (2.3s)
    ---> Acquire hostname... ec2-1-2-3-4.compute-1.amazonaws.com (49.8s)
    ---> Wait for ssh... done (9.8s)
    $ judo list
    SERVER IN GROUP default
    my_server_1  v1  i-80000000  m1.small  ami-bb709dd2  running  0 volumes  ec2-1-2-3-4.compute-1.amazonaws.com

We can now see that 'my_server_1' is running with version 1 of the config. We
can create and start a server in one step with the launch command.

    $ judo launch my_server_2
    ---> Creating server my_server_2... done (0.6s)
    ---> Starting server my_server_2... done (1.6s)
    ---> Acquire hostname... ec2-1-2-3-5.compute-1.amazonaws.com (31.1s)
    ---> Wait for ssh... done (6.1s)

You now have two servers running: one named 'my_server_1' and one named
'my_server_2'. To ssh into 'my_server_1' you can type:

    $ judo ssh my_server_1

You can stop all the servers with:

    $ judo stop

Note that since no name was specified it will stop all servers in the group.
You could also have typed:

    $ judo stop my_server_1 my_server_2

-------------------------------------------------------------------------------
COMMANDS
-------------------------------------------------------------------------------

NOTE: many commands take an argument of "[SERVERS...]". This indicates that the
command must be run in a group folder (specifying which group of servers to
work on). Zero or more server names can be given. If no server is named, the
operation is run on all servers in the group. For instance:

    $ judo restart primary_db backup_db

This will restart only the servers named 'primary_db' and 'backup_db', whereas

    $ judo restart

will restart all servers in the group.

-------------------------------------------------------------------------------

$ judo create NAME

Creates a new named server in the current group. Will allocate EBS volumes and
Elastic IP's as needed.

$ judo create +N

Creates N new servers where N is an integer. These servers have generic names
(group.N). Note: servers with generic names AND no external resources (like
EBS volumes or Elastic IPs) will be destroyed when stopped.

$ judo destroy NAME

Destroys the named server. De-allocates any elastic IP's and destroys the EBS
volumes.

$ judo start [SERVERS...]
$ judo stop [SERVERS...]
$ judo restart [SERVERS...]

Starts, stops, or restarts (stops then starts) the given servers.

$ judo launch NAME
$ judo launch +N

Performs a 'judo create' and a 'judo start' in one step.

$ judo ssh [SERVERS...]

SSH's into the given servers.

$ judo list

At the top level it will list all of the groups and how many servers are in
each group. Within a group it will list each server and its current state.

$ judo commit

Commits the current group config and files to S3. New servers launched will
use this new config.

$ judo console [SERVERS...]

See the AWS console output for the given servers.

$ judo ips

This command gives you a top down view of all elastic IP addresses allocated
for the AWS account and what servers or instances they are attached to.

$ judo volumes

This command gives you a top down view of all EBS volumes allocated for the
AWS account and what servers or instances they are attached to.

-------------------------------------------------------------------------------
EXAMPLES
-------------------------------------------------------------------------------

An example is worth a thousand words.

A couchdb server:

=== ./couchdb/config.json
    {
        // don't repeat yourself - import the basic config
        "import" : "default",
        // it's a db so we're going to want to have a static ip
        "elastic_ip" : true,
        // only need 1 package
        "packages" : "couchdb",
        "volumes"  : { "device" : "/dev/sde1",
                       "media"  : "ebs",
                       "size"   : 64,
                       "format" : "ext3",
                       // this is where couchdb looks for its data by default
                       "mount"  : "/var/lib/couchdb/0.10.0",
                       // make sure the data is owned by the couchdb user
                       "user"   : "couchdb",
                       "group"  : "couchdb",
                       // bounce couch since the data dir changed
                       "after"  : "#!/bin/bash\n service couchdb restart\n" }
    }
===

A memcached server:

=== ./memcache/config.json
    {
        // don't repeat yourself - import the basic config
        "import" : "default",
        // it's a data store so we're going to want to have a static ip
        "elastic_ip" : true,
        // only need 1 package
        "packages" : "memcached",
        "instance_size" : "m1.xlarge",
        "files" : [ { "file"     : "/etc/memcached.conf",
                      "template" : "memcached.conf.erb" },
                    { "file"     : "/etc/default/memcached",
                      "source"   : "memcached-default" } ],
        "after" : "#!/bin/bash\n service memcached start\n"
    }
===

=== ./memcache/files/memcached-default
    # Set this to yes to enable memcached.
    ENABLE_MEMCACHED=yes
===

=== ./memcache/templates/memcached.conf.erb
    -d
    logfile /var/log/memcached.log
    ## ohai gives memory in Kb so div by 1024 to get megs
    ## use 75% of total ram (* 0.75)
    -m <%= (@system.memory["total"].to_i / 1024 * 0.75).to_i %>
    -u nobody
===

A redis server with a 2 disk xfs raid 0:

=== ./redis/config.json
    {
        // don't repeat yourself - import the basic config
        "import" : "default",
        "elastic_ip" : true,
        "instance_size" : "m2.xlarge",
        "local_packages" : { "package" : "redis-server_1.2.5-1", "source" : "http://http.us.debian.org/debian/pool/main/r/redis/" },
        "volumes" : [ { "device"    : "/dev/sde1",
                        "media"     : "ebs",
                        "scheduler" : "deadline",
                        "size"      : 16 },
                      { "device"    : "/dev/sde2",
                        "media"     : "ebs",
                        "scheduler" : "deadline",
                        "size"      : 16 },
                      { "device"    : "/dev/md0",
                        "media"     : "raid",
                        "mount"     : "/var/lib/redis",
                        "drives"    : [ "/dev/sde1", "/dev/sde2" ],
                        "user"      : "redis",
                        "group"     : "redis",
                        "level"     : 0,
                        "format"    : "xfs" } ]
    }
===

A postgresql server with a raid and a separate write-ahead log:

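One way such a config could be assembled from the pieces documented in the
CONFIG sections below (the group name, instance size, device names, volume
sizes and template name here are illustrative choices, not judo defaults):

=== ./postgres/config.json (sketch)
    {
        // don't repeat yourself - import the basic config
        "import" : "default",
        "elastic_ip" : true,
        "instance_size" : "m1.large",
        "packages" : [ "postgresql-8.4" ],
        "files" : [ { "file" : "/etc/postgresql/8.4/main/pg_hba.conf" },
                    { "file"     : "/etc/postgresql/8.4/main/postgresql.conf",
                      "template" : "postgresql.conf-8.4.erb" } ],
        "volumes" : [ { "device" : "/dev/sde1",
                        // small ebs volume just for the write-ahead log
                        "media"  : "ebs",
                        "size"   : 64,
                        "format" : "ext3",
                        "mount"  : "/wal" },
                      { "device" : "/dev/sdf1", "media" : "ebs", "size" : 128 },
                      { "device" : "/dev/sdf2", "media" : "ebs", "size" : 128 },
                      { "device"    : "/dev/md0",
                        // raid 0 across the two larger ebs volumes for the data
                        "media"     : "raid",
                        "level"     : 0,
                        "chunksize" : 256,
                        "drives"    : [ "/dev/sdf1", "/dev/sdf2" ],
                        "format"    : "xfs",
                        "mount"     : "/database",
                        "user"      : "postgres",
                        "group"     : "postgres" } ]
    }
===
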
-------------------------------------------------------------------------------
CONFIG - LAUNCHING THE SERVER
-------------------------------------------------------------------------------

The easiest way to make a judo config is to start with a working example and
build from there. Complete documentation of all the options is below. Note:
you can add keys and values NOT listed here and they will not harm anything;
in fact they will be accessible (and useful) in the erb templates you may
include.

"key_name":"judo123",

This specifies the name of the EC2 keypair passed to the EC2 instance on
launch. Normally you never need to set this up as it is set up for you in the
default config. The system expects a registered keypair, in this case named
"judo123", with a "judo123.pem" file located in a subfolder named "keypairs".

"instance_size":"m1.small",

Specify the instance size for the server type here. See:
http://aws.amazon.com/ec2/instance-types/

"ami32":"ami-bb709dd2",
"ami64":"ami-55739e3c",
"user":"ubuntu",

This is where you specify the AMI's to use. The defaults (above) are the
ubuntu 9.10 public AMI's. The "user" value is which user has the keypair
bound to it for ssh'ing into the server.

"security_group":"judo",

What security group to launch the server in. A judo group is created for you
which only has port 22 access. Manually create new security groups as needed
and name them here.

"availability_zone":"us-east-1d"

What zone to launch the server in.

"elastic_ip" : true,

If this is true, an elastic IP will be allocated for the server when it is
created. This means that if the server is rebooted it will keep the same IP
address.

"import" : "default",

This option is very important and allows you to inherit the configuration and
files from other groups. If you wanted to make a group called 'mysql' that was
exactly like the default group except it installed the mysql package and ran
on an m2.4xlarge instance type you could specify it like this:

    { "import" : "default", "packages" : [ "mysql" ], "instance_size" : "m2.4xlarge" }

and save yourself a lot of typing. You could further subclass by making a new
group and importing this config.

"volumes" : [ { "device" : "/dev/sde1", "media" : "ebs", "size" : 64 } ],

You can specify one or more volumes for the group. If the media is of type
"ebs" judo will create an elastic block device with a number of gigabytes
specified under size. AWS currently allows values from 1 to 1000. If the
media is anything other than "ebs" judo will ignore the entry. The EBS drives
are tied to the server and attached as the specified device when started. Only
when the server is destroyed are the EBS drives deleted.

-------------------------------------------------------------------------------
CONFIG - CONTROLLING THE SERVER
-------------------------------------------------------------------------------

Judo uses kuzushi (a ruby gem) to control the server once it boots and will
feed it the config and files you have committed with 'judo commit'. At its
core, kuzushi is a tool to run whatever custom scripts you need in order to
put the server into the state you want. If you want to use puppet or chef to
bootstrap your server, put the needed commands into a script and run it. If
you just want to write your own shell script, do it. Beyond that kuzushi has
an array of tools to cover many of the common setup steps to prevent you from
having to write scripts to reinvent the wheel. The hooks to run your scripts
come in three forms.

    "before" : "script1.sh",                            // a single script
    "init"   : [ "script2.rb", "script3.pl" ],          // an array of scripts
    "after"  : "#!/bin/bash\n service mysql restart\n", // an inline script

Each of the hooks can refer to a single script (located in the "scripts"
subfolder), a list of scripts, or an inline script embedded in the config
data. Inline scripts are any string beginning with the magic unix "#!".

The "before" hook runs before all other actions. The "after" hook runs after
all other actions. The "init" hook runs right before the "after" hook but only
on the server's very first boot. It will be skipped on all subsequent boots.

These three hooks can be added to any hash '{}' in the system.

    "files" : [ { "file"   : "/etc/apache2/ports.conf",
                  "before" : "stop_apache2.sh",
                  "after"  : "start_apache2.sh" } ],

This example runs the "stop_apache2.sh" script before installing the
ports.conf file and runs "start_apache2.sh" after installing it. If there was
some one time formatting to be done we could add an "init" hook as well.

After running "before" and before running "init" and "after", the following
hooks will run in the following order:

"packages" : [ "postgresql-8.4", "postgresql-server-dev-8.4", "libpq-dev" ],

Packages listed here will be installed via 'apt-get install'.

"local_packages" : [ "fathomdb_0.1-1" ],
"local_packages" : [{ "package" : "redis-server_1.2.5-1", "source" : "http://http.us.debian.org/debian/pool/main/r/redis/" }],

The "local_packages" hook is for including debs: either hand compiled ones you
have included in the git repo, or ones found in other repos. Judo will include
both the i386 and amd64 versions of the package in the commit.

In the first case judo will look in the local "packages" subfolder for
"fathomdb_0.1-1_i386.deb" as well as "fathomdb_0.1-1_amd64.deb". In the second
case it will attempt to use curl to fetch the following URLs:

    http://http.us.debian.org/debian/pool/main/r/redis/redis-server_1.2.5-1_i386.deb
    http://http.us.debian.org/debian/pool/main/r/redis/redis-server_1.2.5-1_amd64.deb

Both types of local packages can be intermixed in config.

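For example, a single "local_packages" list could carry both forms at once
(reusing the package names from the two examples above):

    "local_packages" : [ "fathomdb_0.1-1",
                         { "package" : "redis-server_1.2.5-1",
                           "source"  : "http://http.us.debian.org/debian/pool/main/r/redis/" } ],
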
"gems" : [ "thin", "rake", "rails", "pg" ],

The "gems" hook lists gems to be installed on the system on boot via "gem install ...".

"volumes" : [ { "device"        : "/dev/sde1",
                "media"         : "ebs",
                "size"          : 64,
                "format"        : "ext3",
                "scheduler"     : "deadline",
                "label"         : "/wal",
                "mount"         : "/wal",
                "mount_options" : "nodev,nosuid,noatime" },
              { "device"    : "/dev/sdf1",
                "media"     : "ebs",
                "size"      : 128,
                "scheduler" : "cfq" },
              { "device"    : "/dev/sdf2",
                "media"     : "ebs",
                "size"      : 128,
                "scheduler" : "cfq" },
              { "media"   : "tmpfs",
                "options" : "size=500M,mode=0744",
                "mount"   : "/var/lib/stats",
                "user"    : "stats",
                "group"   : "stats" },
              { "device"    : "/dev/md0",
                "media"     : "raid",
                "mount"     : "/database",
                "drives"    : [ "/dev/sdf1", "/dev/sdf2" ],
                "level"     : 0,
                "chunksize" : 256,
                "readahead" : 65536,
                "format"    : "xfs",
                "init"      : "init_database.sh" } ],

The most complex and powerful hook is "volumes". While volumes of media type
"ebs" are created when the server is created, the media types "raid" and
"tmpfs" are also supported. If a format is specified, kuzushi will format the
volume on the server's very first boot. Currently "xfs" and "ext3" are the
only formats supported. Using "xfs" will install the "xfsprogs" package
implicitly. If a label is specified it will be set at format time. If "mount"
is specified the volume will be mounted there on boot, with "mount_options" if
specified. A "readahead" can be set to specify the readahead size, as well as
a "scheduler" which can be "noop", "cfq", "deadline" or "anticipatory".
Volumes of type "raid" will implicitly install the "mdadm" package and expect
a list of "drives", a "level" and a "chunksize".

Kuzushi will wait for all volumes of media "ebs" to attach before proceeding
with mounting and formatting.

"files" : [ { "file" : "/etc/postgresql/8.4/main/pg_hba.conf" },
            { "file"   : "/etc/hosts",
              "source" : "hosts-database" },
            { "file"     : "/etc/postgresql/8.4/main/postgresql.conf",
              "template" : "postgresql.conf-8.4.erb" } ],

The "files" hook allows you to install files on the system. In the first
example it will install a pg_hba.conf file. Since no source or template is
given it will look for a file of the same name in the "files" subdirectory.

The second example will install "/etc/hosts" but will pull the file
"hosts-database" from the "files" subfolder.

The third example will dynamically generate the postgresql.conf file from an
erb template. The erb template will have access to two special variables to
help it fill out its proper options. The variable "@config" will have the hash
of data contained in config.json, including the data imported in via the
"import" hook. There will also be a "@system" variable which will have all the
system info collected by the ruby gem "ohai".

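Because any extra keys you put in config.json show up in "@config", a template
can lean on both variables. A small sketch of what such a template might
contain (the group name, the "max_connections" key and the memory arithmetic
are illustrative, not judo defaults):

=== ./postgres/templates/postgresql.conf-8.4.erb (sketch)
    data_directory = '/database'
    ## any custom key from config.json is available via @config
    max_connections = <%= @config["max_connections"] || 100 %>
    ## ohai reports memory in Kb - use roughly a quarter of it, in MB
    shared_buffers = <%= (@system.memory["total"].to_i / 1024 / 4).to_i %>MB
===
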
"crontab" : [ { "user" : "root", "file" : "crontab-root" } ],

The "crontab" hook will install a crontab file from the "crontabs" subfolder
with a simple "crontab -u #{user} #{file}".

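The referenced file is plain crontab syntax, one entry per line; for example
(the job and script path here are illustrative):

=== crontabs/crontab-root (sketch)
    # run a nightly cleanup job at 03:00 as root
    0 3 * * * /usr/local/bin/nightly_cleanup.sh
===
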
-------------------------------------------------------------------------------
CONFIG - DEBUGGING KUZUSHI
-------------------------------------------------------------------------------

If something goes wrong it's good to understand how judo uses kuzushi to
manage the server. First, judo sends a short shell script in the EC2 user_data
to launch kuzushi. You can see the exact shell script sent by setting the
environment variable "JUDO_DEBUG" to 1.

    $ export JUDO_DEBUG=1
    $ judo launch +1

This boot script will fail if you choose an AMI that does not execute the EC2
user data on boot, does not use apt-get, or does not have rubygems 1.3.5 or
higher in its package repo.

You can log into the server and watch kuzushi's output in /var/log/kuzushi.log:

    $ judo start my_server
    ...
    $ judo ssh my_server
    ubuntu:~$ sudo tail -f /var/log/kuzushi.log

If you need to re-run kuzushi manually, the command to do so is either (as root)

    # kuzushi init S3_URL

or

    # kuzushi start S3_URL

Kuzushi "init" is for a server's first boot and will run "init" hooks, while
"start" is for all other boots. The S3_URL is the url of the config.json and
other files committed for this type of server. To see exactly what command
was run on this specific server boot, check the first line of kuzushi.log.

    ubuntu:~$ sudo head -n 1 /var/log/kuzushi.log


== Meta

-Created by Adam Wiggins
+Created by Orion Henry and Adam Wiggins. Forked from the gem 'sumo'.

Patches contributed by Blake Mizerany, Jesse Newland, Gert Goet, and Tim Lossen

data/Rakefile
CHANGED
data/VERSION
CHANGED
@@ -1 +1 @@
-0.0.6
+0.0.7
data/bin/judo
CHANGED
@@ -201,11 +201,11 @@ class CLI < Thor
   end
 
   def ip_to_judo(ip)
-    Judo::Server.all.detect { |s| s.
+    Judo::Server.all.detect { |s| s.elastic_ip == ip }
   end
 
   def instance_id_to_judo(instance_id)
-    Judo::Server.all.detect { |s| s.
+    Judo::Server.all.detect { |s| s.instance_id and s.instance_id == instance_id }
   end
 end
 end
data/lib/all.rb
CHANGED
data/lib/group.rb
CHANGED
@@ -70,7 +70,13 @@ module Judo
       Dir.chdir(tmpdir) do |d|
         attachments.each do |to,from|
           FileUtils.mkdir_p(File.dirname(to))
-
+          if from =~ /^http:\/\//
+            puts "curl '#{from}'"
+            system "curl '#{from}' > #{to}"
+            puts "#{to} is #{File.stat(to).size} bytes"
+          else
+            FileUtils.cp(from,to)
+          end
         end
         File.open("config.json", "w") { |f| f.write(config.to_json) }
         Dir.chdir("..") do
@@ -112,18 +118,25 @@ module Judo
     def extract(config, files)
       config.each do |key,value|
         [value].flatten.each do |v|  ### cover "packages" : ["a","b"], "packages" : "a", "packages":[{ "file" : "foo.pkg"}]
-
-
+          if v.is_a? Hash
+            extract(v, files)
+          else
+            case key
             when *[ "init", "before", "after" ]
               extract_file(:script, v, files) unless v =~ /^#!/
+            when "package"
+              files["packages/#{v}_i386.deb"] = "#{config["source"]}#{v}_i386.deb"
+              files["packages/#{v}_amd64.deb"] = "#{config["source"]}#{v}_amd64.deb"
             when "local_packages"
-              extract_file(:package, "#{v}
+              extract_file(:package, "#{v}_i386.deb", files)
+              extract_file(:package, "#{v}_amd64.deb", files)
             when "template"
               extract_file(:template, v, files)
             when "source"
-              extract_file(:file, v, files) unless config["template"]
+              extract_file(:file, v, files) unless config["template"] or config["package"]
             when "file"
               extract_file(:file, File.basename(v), files) unless config["template"] or config["source"]
+            end
           end
         end
       end
data/lib/server.rb
CHANGED
@@ -1,6 +1,5 @@
 ### NEEDED for new gem launch
 
-### 32 hrs to go - 12:00am Feb 26th - expected completion Mar 2
 ### [X] judo init (2 hrs)
 ### [X] implement real default config - remove special case code (3 hrs)
 ### [X] refactor keypair.pem setup (3 hrs)
@@ -19,6 +18,7 @@
 ### [ ] implement auto security_group creation and setup (6 hrs)
 ### [ ] write some examples - simple postgres/redis/couchdb server (5hrs)
 ### [ ] write new README (4 hrs)
+### [ ] bind kuzushi gem version version
 ### [ ] realase new gem! (1 hr)
 
 ### [ ] user a logger service (1 hr)
@@ -51,6 +51,10 @@ module Judo
     end
 
     def domain
+      self.class.domain
+    end
+
+    def self.domain
       "judo_servers"
     end
 
@@ -62,6 +66,20 @@ module Judo
       Judo::Config.sdb.get_attributes(domain, name)[:attributes]
     end
 
+    def self.fetch_all
+      @@state = {}
+      Judo::Config.sdb.select("select * from #{domain}")[:items].each do |group|
+        group.each do |key,val|
+          @@state[key] = val
+        end
+      end
+      @@state.map { |k,v| Server.new(k,v["group"].first) }
+    end
+
+    def self.all
+      @@all ||= fetch_all
+    end
+
     def super_state
       @@state ||= {}
     end
@@ -148,7 +166,7 @@ module Judo
 
     def allocate_resources
       if config["volumes"]
-        config["volumes"].each do |volume_config|
+        [config["volumes"]].flatten.each do |volume_config|
           device = volume_config["device"]
           if volume_config["media"] == "ebs"
             size = volume_config["size"]
@@ -276,16 +294,23 @@ module Judo
       # validate
 
       ## EC2 launch_instances
+      ud = user_data
+      debug(ud)
       result = Config.ec2.launch_instances(ami,
         :instance_type => config["instance_size"],
         :availability_zone => config["availability_zone"],
         :key_name => config["key_name"],
         :group_ids => security_groups,
-        :user_data =>
+        :user_data => ud).first
 
       update "instance_id" => result[:aws_instance_id], "virgin" => false, "version" => group.version
     end
 
+    def debug(str)
+      return unless ENV['JUDO_DEBUG'] == "1"
+      puts "<JUDO_DEBUG>#{str}</JUDO_DEBUG>"
+    end
+
     def security_groups
       [ config["security_group"] ].flatten
     end
data/lib/setup.rb
CHANGED
@@ -71,13 +71,17 @@ DEFAULT
     end
 
     def setup_default_security_group
-
-
+      begin
+        ec2.create_security_group('judo', 'Judo')
+        ec2.authorize_security_group_IP_ingress("judo", 22, 22,'tcp','0.0.0.0/0')
+      rescue Aws::AwsError => e
+        raise unless e.message =~ /InvalidGroup.Duplicate/
+      end
     end
 
     def setup_bucket
       puts "setting up an s3 bucket"
-
+      Aws::S3.new(@aws_access_id, @aws_secret_key, :logger => Logger.new(nil)).bucket(@s3_bucket, true)
     end
 
     def setup_default_server_group
metadata
CHANGED
@@ -5,8 +5,8 @@ version: !ruby/object:Gem::Version
   segments:
   - 0
   - 0
-  - 6
-  version: 0.0.6
+  - 7
+  version: 0.0.7
 platform: ruby
 authors:
 - Orion Henry
@@ -14,11 +14,11 @@ autorequire:
 bindir: bin
 cert_chain: []
 
-date: 2010-03-
+date: 2010-03-17 00:00:00 -04:00
 default_executable: judo
 dependencies:
 - !ruby/object:Gem::Dependency
-  name:
+  name: aws
   prerelease: false
   requirement: &id001 !ruby/object:Gem::Requirement
     requirements:
@@ -30,7 +30,7 @@ dependencies:
   type: :runtime
   version_requirements: *id001
 - !ruby/object:Gem::Dependency
-  name:
+  name: thor
   prerelease: false
   requirement: &id002 !ruby/object:Gem::Requirement
     requirements:
@@ -42,7 +42,7 @@ dependencies:
   type: :runtime
   version_requirements: *id002
 - !ruby/object:Gem::Dependency
-  name:
+  name: json
   prerelease: false
   requirement: &id003 !ruby/object:Gem::Requirement
     requirements:
@@ -53,18 +53,6 @@ dependencies:
         version: "0"
   type: :runtime
   version_requirements: *id003
-- !ruby/object:Gem::Dependency
-  name: json
-  prerelease: false
-  requirement: &id004 !ruby/object:Gem::Requirement
-    requirements:
-    - - ">="
-      - !ruby/object:Gem::Version
-        segments:
-        - 0
-        version: "0"
-  type: :runtime
-  version_requirements: *id004
 description: The gentle way to manage and control ec2 instances
 email: orion@heroku.com
 executables: