slugforge 4.0.0
- checksums.yaml +7 -0
- data/README.md +316 -0
- data/bin/slugforge +9 -0
- data/lib/slugforge.rb +19 -0
- data/lib/slugforge/build.rb +4 -0
- data/lib/slugforge/build/build_project.rb +31 -0
- data/lib/slugforge/build/export_upstart.rb +85 -0
- data/lib/slugforge/build/package.rb +63 -0
- data/lib/slugforge/cli.rb +125 -0
- data/lib/slugforge/commands.rb +130 -0
- data/lib/slugforge/commands/build.rb +20 -0
- data/lib/slugforge/commands/config.rb +24 -0
- data/lib/slugforge/commands/deploy.rb +383 -0
- data/lib/slugforge/commands/project.rb +21 -0
- data/lib/slugforge/commands/tag.rb +148 -0
- data/lib/slugforge/commands/wrangler.rb +142 -0
- data/lib/slugforge/configuration.rb +125 -0
- data/lib/slugforge/helper.rb +186 -0
- data/lib/slugforge/helper/build.rb +46 -0
- data/lib/slugforge/helper/config.rb +37 -0
- data/lib/slugforge/helper/enumerable.rb +46 -0
- data/lib/slugforge/helper/fog.rb +90 -0
- data/lib/slugforge/helper/git.rb +89 -0
- data/lib/slugforge/helper/path.rb +76 -0
- data/lib/slugforge/helper/project.rb +86 -0
- data/lib/slugforge/models/host.rb +233 -0
- data/lib/slugforge/models/host/fog_host.rb +33 -0
- data/lib/slugforge/models/host/hostname_host.rb +9 -0
- data/lib/slugforge/models/host/ip_address_host.rb +9 -0
- data/lib/slugforge/models/host_group.rb +65 -0
- data/lib/slugforge/models/host_group/aws_tag_group.rb +22 -0
- data/lib/slugforge/models/host_group/ec2_instance_group.rb +21 -0
- data/lib/slugforge/models/host_group/hostname_group.rb +16 -0
- data/lib/slugforge/models/host_group/ip_address_group.rb +16 -0
- data/lib/slugforge/models/host_group/security_group_group.rb +20 -0
- data/lib/slugforge/models/logger.rb +36 -0
- data/lib/slugforge/models/tag_manager.rb +125 -0
- data/lib/slugforge/slugins.rb +125 -0
- data/lib/slugforge/version.rb +9 -0
- data/scripts/post-install.sh +143 -0
- data/scripts/unicorn-shepherd.sh +305 -0
- data/spec/fixtures/array.yaml +3 -0
- data/spec/fixtures/fog_credentials.yaml +4 -0
- data/spec/fixtures/invalid_syntax.yaml +1 -0
- data/spec/fixtures/one.yaml +3 -0
- data/spec/fixtures/two.yaml +3 -0
- data/spec/fixtures/valid.yaml +4 -0
- data/spec/slugforge/commands/deploy_spec.rb +72 -0
- data/spec/slugforge/commands_spec.rb +33 -0
- data/spec/slugforge/configuration_spec.rb +200 -0
- data/spec/slugforge/helper/fog_spec.rb +81 -0
- data/spec/slugforge/helper/git_spec.rb +152 -0
- data/spec/slugforge/models/host_group/aws_tag_group_spec.rb +54 -0
- data/spec/slugforge/models/host_group/ec2_instance_group_spec.rb +51 -0
- data/spec/slugforge/models/host_group/hostname_group_spec.rb +20 -0
- data/spec/slugforge/models/host_group/ip_address_group_spec.rb +54 -0
- data/spec/slugforge/models/host_group/security_group_group_spec.rb +52 -0
- data/spec/slugforge/models/tag_manager_spec.rb +75 -0
- data/spec/spec_helper.rb +37 -0
- data/spec/support/env.rb +3 -0
- data/spec/support/example_groups/configuration_writer.rb +24 -0
- data/spec/support/example_groups/helper_provider.rb +10 -0
- data/spec/support/factories.rb +18 -0
- data/spec/support/fog.rb +15 -0
- data/spec/support/helpers.rb +18 -0
- data/spec/support/mock_logger.rb +6 -0
- data/spec/support/ssh.rb +8 -0
- data/spec/support/streams.rb +13 -0
- data/templates/foreman/master.conf.erb +21 -0
- data/templates/foreman/process-master.conf.erb +2 -0
- data/templates/foreman/process.conf.erb +19 -0
- metadata +344 -0
checksums.yaml
ADDED
@@ -0,0 +1,7 @@
---
SHA1:
  metadata.gz: ec9f92855ff0e829cf999361d243ee156137908c
  data.tar.gz: 9967219bfa3287ae66d4f2e9a9feb2035f7184c7
SHA512:
  metadata.gz: 5bec845153b6efcf95592fcffb2b7c8770af66891e2398170a4a208995a66e9e188b1cb1a8480cc2cb9ddc9ae60d6fcc1fa047a5cbfe626c6e96046380657c12
  data.tar.gz: 92bfa85049c8955748544aa5e1a782706662388c7a759158a195fc10ec5943135ece6274915056ccee4da58ecb22ca1651974f5a6f1358baf92f0eec052738dd
data/README.md
ADDED
@@ -0,0 +1,316 @@
slugforge
=========

## Contents ##
- [Overview](#overview)
- [Installation](#installation)
- [Configuration](#configuration)
- [Typical developer workflow](#typical-developer-workflow)
- [Deploying slugs to production](#deploying-slugs-to-production)
- [Repository format](#repository-format)
- [The slug itself](#the-slug-itself)
- [License](#license)
- [Contributing](#contributing)

## Overview ##

Slugforge is a tool used at Tapjoy to build, manage, and deploy slugs of code for any project that
conforms to a basic structure defined below. The idea is to have a single file that conforms to the "build"
part of a [12 factor app](http://12factor.net). We built this tool after looking at a number of
options in the open source community, but not finding anything that met all of our needs. After
building and using this tool over the past year, we now want to share that work with the world so
that others can benefit from it as well.

A slug is a single file that contains all application code, build artifacts, and dependent binaries
necessary to run the application. This would include bundled gems for a Ruby app, or jars for a Java
app. As per the outline laid out for a 12 factor app, the slug does not include any configuration
for the app. All configuration should be specified as environment variables, which the app should
recognize and which are specified outside of the slug. In practice, the same slug can be used to deploy
the application to a development, testing, QA, or production server, each of which differs only in
its configuration.

## Installation ##

Developers should install slugforge locally to assist with building, managing, and deploying slugs.
After following the instructions below, you can confirm that slugforge is properly installed by
typing `slugforge` and ensuring that the help is displayed.

### Installing from source ###

If you are assisting with the development of Slugforge, or if the installation cannot be completed
using the gem files in Gemfury, you can clone the repositories and build the gems from their source.

1. Ensure that you are using an appropriate Ruby (like 1.9.3-p484), if necessary:

        rvm use 1.9.3-p484

1. Install the Tapjoy customized version of FPM:

        git clone git@github.com:Tapjoy/fpm.git
        cd fpm
        gem build fpm.gemspec
        gem install fpm*.gem --no-ri --no-rdoc
        cd ..

1. Install the slugforge gem:

        git clone git@github.com:Tapjoy/slugforge.git
        cd slugforge
        gem build slugforge.gemspec
        gem install slugforge*.gem --no-ri --no-rdoc
        cd ..

## Configuration ##

### Configuring settings ###

There are a few settings that need to be configured for certain functionality to be available. These can be configured in the environment, on the command line, or in a configuration file.

#### AWS configuration ####

In order to store, retrieve, or query files in S3 buckets, or to be able to specify servers to deploy to by their instance ID, you need to configure your AWS access key and secret access key. As most developers will need these for other tools, we recommend setting them as environment variables:

    export AWS_ACCESS_KEY_ID=<20 character access key>
    export AWS_SECRET_ACCESS_KEY=<40 character secret access key>

In addition to your AWS keys, Slugforge needs to know which S3 bucket you will be using to store your slugs. This can be specified in your configuration file, environment, or on the command line:

* Slugforge configuration file

    Add the following setting to your configuration file:

        aws:
          slug_bucket: <bucket_name>

* Environment variable:

        export SLUG_BUCKET=<bucket_name>

* Command-line parameter:

        --slug-bucket=<bucket_name>

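The configuration-file form above is plain YAML, so the setting can be read with Ruby's standard library. A minimal sketch; the method name and lookup here are illustrative assumptions, not Slugforge's actual configuration code:

```ruby
# Sketch of reading the `aws: slug_bucket:` setting shown above.
# `slug_bucket_from` is a hypothetical helper for illustration only.
require 'yaml'

def slug_bucket_from(yaml_text)
  config = YAML.safe_load(yaml_text) || {}
  config.fetch('aws', {})['slug_bucket']
end

config_text = <<~YAML
  aws:
    slug_bucket: my-slug-bucket
YAML

puts slug_bucket_from(config_text) # prints "my-slug-bucket"
```
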
#### SSH configuration ####

When deploying a slug to a host, Slugforge will create an SSH connection to that host. It will first try a username
specified on the command line, or in your configuration files. If that is not specified, it will look at your standard SSH
configuration for a username. If all that fails, it will use your current username. If that account does not have access to
log into the remote host, you should override the default:

    export SSH_USERNAME=<username>

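The lookup order described above is a first-match-wins chain. A minimal sketch, using illustrative parameter names rather than Slugforge's real API:

```ruby
# Sketch of the username fallback chain described above: command line,
# then configuration file, then ~/.ssh/config, then the current user.
# All names here are illustrative; Slugforge's internals differ.
require 'etc'

def resolve_ssh_username(cli_option: nil, config_file: nil, ssh_config: nil)
  cli_option || config_file || ssh_config || Etc.getlogin
end
```
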
## Typical developer workflow ##

Once a developer has finished making their changes and has tested them locally, they are generally
interested in deploying them to some test environment so that they can verify them in a more
production-like setting. Slugforge helps automate that process, making it simple and repeatable.

### Build a slug with the command line tools ###

NOTE: If any gems with native extensions are used, the slug must be built on the same
architecture that it will be deployed to.

In certain circumstances, it may be useful to build your slug locally. A local slug can generally be built from the
project's root directory by running the following command:

    slugforge build --ruby 1.9.3-p194

This will create a new slug in the current directory. While specifying the ruby version is
optional, the value recommended above is currently appropriate in most cases.

### Tagging your slugs (recommended)

Slugs can be tagged with names, such as the deployment that they are associated with. To tag a
slug on S3 for `myproject` as `stable`, do the following:

    slugforge tag set stable <slug_name> --project myproject

You can tag any slug using a portion of its name. This can be any sequential subset of its characters, as long as it uniquely identifies the slug. To tag a specific slug for the current project as `test`, do the following:

1. Determine the name of the slug:

        slugforge wrangler list

1. Create the `test` tag for the slug using a unique portion of the slug name. Assuming that the slug name
   was `myproject-20130909201513-8b81b614d3.slug`, you might use:

        slugforge tag set test 0909201513

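The uniqueness rule above can be sketched as follows. This is an illustration of the matching behavior, not Slugforge's implementation:

```ruby
# Sketch of partial-name matching: a fragment selects a slug only when
# exactly one slug name contains it. `find_slug` is a hypothetical helper.
def find_slug(fragment, slug_names)
  matches = slug_names.select { |name| name.include?(fragment) }
  case matches.size
  when 0 then raise ArgumentError, "no slug matches #{fragment.inspect}"
  when 1 then matches.first
  else raise ArgumentError, "ambiguous fragment #{fragment.inspect}: #{matches.join(', ')}"
  end
end

slugs = %w[myproject-20130909201513-8b81b614d3.slug
           myproject-20130910141122-0c4e2f91aa.slug]
puts find_slug('0909201513', slugs) # prints "myproject-20130909201513-8b81b614d3.slug"
```
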
### Deploying your slugs

Now that you have your slug, and have a way of referencing it, it's time to deploy it
for testing. The most convenient way of deploying is by tag. You can optionally pass a
list of servers to deploy to, rather than a single server. In addition, if you are in
the local repository for a project, the `--project` option is optional.

When specifying the hosts to deploy to, there are a number of ways to target them. The different
types can be intermixed and will all be OR'ed together to determine the list of target machines:

* IP address: four numbers, joined with dots (eg. 123.45.6.78)
* Host name: a series of words, joined with dots (eg. ec2-123-45-6-78.compute-1.amazonaws.com)
* EC2 instance: an AWS instance name (eg. i-0112358d)
* AWS tag: an AWS tag name and value, joined with equals (eg. cluster=watcher)
* Security group: an AWS security group name (eg. connect-19)

NOTE: When using instance names, tags, or security groups, you need to be
using an AWS access key and secret key that have permissions to view those
resources. This can create conflicts if the S3 bucket that you want to access
is using a different pair of keys.

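The target types above can be told apart by their shape alone. A rough classification sketch; the regexes and their precedence are assumptions for illustration, while Slugforge's host group classes (see `data/lib/slugforge/models/host_group/`) do the real matching:

```ruby
# Classify a deploy target string into one of the categories listed above.
# Illustrative only; patterns are simplified (e.g. no IPv6, no octet range check).
def classify_target(target)
  case target
  when /\A\d{1,3}(\.\d{1,3}){3}\z/ then :ip_address      # eg. 123.45.6.78
  when /\Ai-[0-9a-f]+\z/           then :ec2_instance    # eg. i-0112358d
  when /\A[^=\s]+=[^=\s]+\z/       then :aws_tag         # eg. cluster=watcher
  when /\A[\w-]+(\.[\w-]+)+\z/     then :hostname        # eg. foo.example.com
  else                                  :security_group  # eg. connect-19
  end
end
```
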
#### Deploying by tag name ####

Assuming that you wanted to deploy the `stable` release of `myproject` to the host at
`12.13.14.15`, you would execute:

    slugforge deploy tag stable 12.13.14.15 --project myproject

#### Deploying local slug files ####

If you built your slug locally, you can deploy the file directly from your local machine. To
deploy a file in the current directory called `filename.slug` to server `12.13.14.15`, and install
it as `myproject`, you would execute:

    slugforge deploy file filename.slug 12.13.14.15 --project myproject

## Repository format ##

The repository used to create a slug need only have a few required parts.

### Procfile ###

When the slug is installed, upstart services will be created based on the lines in the Procfile at
the root of the repository. This is exactly the same format as a Heroku/Foreman Procfile. Each line
will be converted to an upstart service which will monitor the lifecycle of each of the app's
processes.

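The Procfile format is one `name: command` entry per line. A minimal parser sketch to show the shape of the file; Foreman's actual parser is more forgiving than this:

```ruby
# Parse Procfile text into a { process_name => command } hash.
# Lines without a "name: command" shape are skipped.
def parse_procfile(text)
  text.each_line.each_with_object({}) do |line, processes|
    name, command = line.strip.split(':', 2)
    next if command.nil? || name.empty?
    processes[name] = command.strip
  end
end

procfile = "web: bundle exec unicorn -c config/unicorn.rb\nworker: bundle exec rake jobs:work\n"
processes = parse_procfile(procfile)
# processes["web"]    == "bundle exec unicorn -c config/unicorn.rb"
# processes["worker"] == "bundle exec rake jobs:work"
```
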
### "build" script ###

The contents of the slug will be the entire repository directory, minus the `.git`, `log`,
and `tmp` directories, in whatever state it is in after running a script found in `deploy/build`. This
script can be written in any language and perform any tasks, but it should put the repository in the
necessary state for packaging such that all processes defined in the `Procfile` will run
successfully. That means that any binaries specified in the `Procfile` should be created, compiled,
downloaded, materialized, conjured, etc. All dependent packages or binaries should be placed in the
repository. For example, a Ruby project might want to call `bundle install --path vendor/bundle` in
order to package all necessary gems for runtime execution.

If no `deploy/build` script exists in your repository, no prepackage build steps will be taken.

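As a sketch, a `deploy/build` script for a Ruby project might look like the following; the steps are hypothetical and entirely project-specific:

```shell
#!/bin/sh
# Hypothetical deploy/build script: vendor all gems into the repository so
# the resulting slug is self-contained. Runs from the repository root.
set -e

# Vendor gems so they ship inside the slug (skipped when there is no Gemfile).
if [ -f Gemfile ]; then
  bundle install --path vendor/bundle
fi

BUILD_DONE=yes
echo "deploy/build complete"
```
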
### Install behavior customization ###

If your slug requires extra fancy post-install setup, you can configure that by adding scripts in
the `deploy` directory at the root of your repo. Slugforge will look for a subset of the scripts run
by the [chef deploy_revision resource](http://docs.opscode.com/resource_deploy.html). It will run
`deploy/before_restart` and `deploy/after_restart` at the times you would expect when
originally starting or restarting the upstart services associated with the app.

This sort of customization should only be app specific and very minimal. All prerequisite OS
configuration and slug install should be managed by a tool like chef or puppet, or baked directly into the AMI.

## The slug itself ##

The slug will be a .slug package which contains the state of the app repo after the build script has
run, as well as the upstart scripts generated by this tool. That's it. El fin. The files will be
installed on the server in `/opt/apps/<app-name>`, where app-name is configured at deploy time,
defaulting to the name of the directory containing the repo.

### Slug storage ###

Slugs will be stored in S3 and downloaded to servers on deploy. The S3 bucket will have project
directories which contain the slugs for any given project. The name of the slug will follow the
format

    <project_name>-<build_time>-<partial_sha>.slug

For example

    project-20130714060000-5bcb55141b.slug

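The name format above can be taken apart mechanically. A parsing sketch; the regex is an assumption based on the format shown, not code from Slugforge:

```ruby
# Parse <project_name>-<build_time>-<partial_sha>.slug names. The greedy
# project capture lets project names themselves contain hyphens, since the
# 14-digit timestamp anchors the split.
SLUG_NAME = /\A(?<project>.+)-(?<build_time>\d{14})-(?<sha>[0-9a-f]+)\.slug\z/

def parse_slug_name(name)
  match = SLUG_NAME.match(name) or raise ArgumentError, "not a slug name: #{name}"
  { project: match[:project], build_time: match[:build_time], sha: match[:sha] }
end

parts = parse_slug_name('project-20130714060000-5bcb55141b.slug')
# parts[:project]    == "project"
# parts[:build_time] == "20130714060000"
# parts[:sha]        == "5bcb55141b"
```
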
### Deploying ###

From a single server's point of view, in order to deploy a new slug to a server, run the command

    curl -O http://s3.amazonaws.com/slugs/<project_name>/<slug_name>.slug && <slug_name>.slug -r -i <deploy_path>

The deploy scripts we use to deploy clusters will coordinate which slug is installed on each server
to allow for isolated deploys, incremental rollout, rollback, etc. The most recently installed slugs
will be cached in `/tmp` on a server to allow for fast rollback.

On installation the upstart scripts will be installed in `/etc/init` and started or restarted.

### Unicorn/Rainbows support ###

Slugs have special handling for unicorn and rainbows to correctly handle the interaction between upstart and
the rolling restart done by sending a USR2 signal to unicorn or rainbows, hereafter referred to as unicorn.

#### What you need to know ####

The unicorn master will be managed correctly by upstart for start, stop, and restart. However, in order for your app to
correctly handle the reload, it needs the following in the unicorn.rb config file.

_NOTE_:

* After a restart the new unicorn master must load all of the rails code. This can take a while, and any code
changes will not be visible until it is done loading and has replaced the old master process.

This needs to be anywhere in the top scope of your config file:

    # Unicorn will respawn on signal USR2 with this command. By default it's the same one it was
    # launched with. In capistrano style deploys this will always be the same revision
    # unless we explicitly point it to the 'current' symlink.
    Unicorn::HttpServer::START_CTX[0] = ::File.join(ENV['GEM_HOME'].gsub(/releases\/[^\/]+/, "current"),'bin','unicorn')

This needs to be the contents of your `before_exec` block. You can have additional config in `before_exec`, but be
careful not to disrupt the logic here.

    before_exec do |server|
      # Read environment settings from .env. This allows the environment to be changed during a
      # unicorn upgrade via USR2. Remove the leading "export " if it's there.
      env_file = File.join(app_dir, '.env')

      if File.exist?(env_file)
        File.foreach(env_file) do |line|
          name, value = line.split('=', 2).map { |v| v.strip }
          name.gsub!(/^export /, '')
          ENV[name] = value
        end
      end

      # In capistrano style deployments, the newly started unicorn master in no-downtime
      # restarts will get the GEM_HOME from the previous one.
      # By pointing it at the 'current' symlink we know we're up to date.
      # No effect in other types of deployments.
      ENV['GEM_HOME'] = ENV['GEM_HOME'].gsub(/releases\/[^\/]+/, "current")

      # Put the updated GEM_HOME bin in the path instead of the specific release directory
      # in capistrano like deployments.
      paths = (ENV["PATH"] || "").split(File::PATH_SEPARATOR)
      paths.unshift ::File.join(ENV['GEM_HOME'], 'bin')
      ENV["PATH"] = paths.uniq.join(File::PATH_SEPARATOR)
    end

All of this is to tell unicorn to use the new location of the `current` symlink created by a new slug install instead of
the old target directory it was originally pointing to. It also will reread the `.env` file in the slug's top level
directory upon restart.

#### The details ####

If your Procfile has a line for either unicorn or rainbows, upon installation the slug will export an upstart
service config file that doesn't start unicorn directly but instead starts a shell script called unicorn-shepherd.sh, which is packaged in the slug.
This script stays alive and gives upstart a constant pid to monitor. The script starts unicorn and waits for the unicorn master
pid to die. If it receives a restart command from upstart, it will send a USR2 signal to the unicorn master and exit. While unicorn
forks a new master and starts rolling new workers, upstart will restart the unicorn-shepherd.sh script, which will find the
new master and wait for it.

## License ##

This software is released under the MIT license. See the `LICENSE` file in the repository for
additional details.

## Contributing ##

Tapjoy is not accepting contributions to this code at this time. We are in the process of
formalizing our Contributor License Agreement and will update this section with those details when
they are available.
data/bin/slugforge
ADDED
data/lib/slugforge.rb
ADDED
@@ -0,0 +1,19 @@
# Make things work while testing crap
git_path = File.expand_path('../../.git', __FILE__)
if File.exists?(git_path)
  slugforge_path = File.expand_path('../../lib', __FILE__)
  $:.unshift(slugforge_path)

  begin
    require 'pry-debugger'
  rescue LoadError
  end
end

require 'active_support/core_ext/string/inflections'
require 'active_support/notifications'

require 'slugforge/configuration'
require 'slugforge/cli'
require 'slugforge/version'
data/lib/slugforge/build/build_project.rb
ADDED
@@ -0,0 +1,31 @@
module Slugforge
  module Build
    class BuildProject < Slugforge::BuildCommand
      desc :call, 'run a project\'s build script'
      def call
        unless File.exists?(build_script)
          logger.say_status :missing, build_script, :yellow
          return true
        end

        logger.say_status :run, build_script
        inside(project_root) do
          with_gemfile(project_path('Gemfile')) do
            FileUtils.chmod("+x", build_script)
            unless execute(build_script)
              raise error_class, "build script #{build_script} failed"
            end
          end
        end
      end
      default_task :call

      private
      def build_script
        project_path('deploy', 'build')
      end
    end
  end
end
data/lib/slugforge/build/export_upstart.rb
ADDED
@@ -0,0 +1,85 @@
module Slugforge
  module Build
    class ExportUpstart < Slugforge::BuildCommand
      desc :call, 'export upstart scripts from the Procfile'
      def call
        unless File.exist?(procfile_path)
          logger.say_status :missing, 'foreman Procfile', :yellow
          return false
        end

        logger.say_status :execute, 'preprocessing foreman templates'
        preprocess_templates
      end
      default_task :call

      private
      def preprocess_templates
        Dir.foreach(foreman_templates_dir) do |template|
          next unless template =~ /\.erb$/

          template "foreman/#{template}", File.join(upstart_templates_dir, template)
        end
      end

      # The template file is processed by ERB twice. Once by chef when putting it down
      # and once by foreman when generating the upstart config. This ERB string goes into
      # the command variable so it will show up as it is here when chef is done with it
      # so foreman can process the logic in this string.
      def template_command
        if unicorn_command
          "<% if process.command.include?('bundle exec #{unicorn_command}') %> deploy/unicorn-shepherd.sh #{unicorn_command} <%= app %>-<%= name %>-<%= num %> <% else %> <%= process.command %> <% end %>"
        else
          "<%= process.command %>"
        end
      end

      # Iterate through the Procfile and see if we have unicorn or rainbows.
      #
      # ===Returns===
      # unicorn|rainbows or nil if neither was found
      #
      # Note: currently doesn't handle if there is both unicorn AND rainbows. Will just return
      # the last one it finds. We could handle this if someone needed it but this is
      # easier for now.
      def unicorn_command
        @unicorn_command ||= begin
          command = nil
          ::File.read(procfile_path).lines do |line|
            if line.include?("bundle exec unicorn")
              command = "unicorn"
            elsif line.include?("bundle exec rainbows")
              command = "rainbows"
            end
          end
          # If we are using unicorn/rainbows, put the unicorn-shepherd script into the repo's
          # deploy directory so we can use it to start unicorn as an upstart service.
          if command
            FileUtils.cp(unicorn_shepherd_path, project_path('deploy'))
          end
          command
        end
      end

      def foreman_templates_dir
        templates_dir('foreman')
      end

      def procfile_path
        project_path('Procfile')
      end

      def unicorn_shepherd_path
        scripts_dir('unicorn-shepherd.sh')
      end

      def upstart_templates_dir
        @repo_templates_dir ||= project_path('deploy', 'upstart-templates').tap do |dir|
          FileUtils.mkdir_p(dir)
        end
      end
    end
  end
end
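The two-pass ERB evaluation described in the `template_command` comment above can be seen in a standalone sketch; the variable names are illustrative:

```ruby
# Pass one (chef's role) substitutes the command string into an outer template;
# because the string is emitted as a plain value, the ERB tags it contains
# survive into the output. Pass two (foreman's role) then evaluates that logic.
require 'erb'

command = "<% if name == 'web' %>unicorn-shepherd.sh<% else %><%= cmd %><% end %>"
pass_one = ERB.new("exec <%= command %>").result(binding)
# pass_one still contains the inner <% if %> ... <% end %> tags verbatim.

name, cmd = 'web', 'rake jobs:work'
pass_two = ERB.new(pass_one).result(binding)
puts pass_two # prints "exec unicorn-shepherd.sh"
```
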