kamal 0.16.0 → 1.0.0
- checksums.yaml +4 -4
- data/README.md +5 -1013
- data/lib/kamal/cli/app.rb +38 -11
- data/lib/kamal/cli/base.rb +8 -0
- data/lib/kamal/cli/build.rb +18 -1
- data/lib/kamal/cli/env.rb +56 -0
- data/lib/kamal/cli/healthcheck/poller.rb +64 -0
- data/lib/kamal/cli/healthcheck.rb +2 -2
- data/lib/kamal/cli/lock.rb +12 -3
- data/lib/kamal/cli/main.rb +14 -3
- data/lib/kamal/cli/prune.rb +3 -2
- data/lib/kamal/cli/server.rb +2 -0
- data/lib/kamal/cli/templates/deploy.yml +12 -1
- data/lib/kamal/commander.rb +21 -8
- data/lib/kamal/commands/accessory.rb +8 -8
- data/lib/kamal/commands/app/assets.rb +51 -0
- data/lib/kamal/commands/app/containers.rb +23 -0
- data/lib/kamal/commands/app/cord.rb +22 -0
- data/lib/kamal/commands/app/execution.rb +27 -0
- data/lib/kamal/commands/app/images.rb +13 -0
- data/lib/kamal/commands/app/logging.rb +18 -0
- data/lib/kamal/commands/app.rb +17 -91
- data/lib/kamal/commands/auditor.rb +3 -1
- data/lib/kamal/commands/base.rb +12 -0
- data/lib/kamal/commands/builder/base.rb +6 -0
- data/lib/kamal/commands/builder.rb +3 -1
- data/lib/kamal/commands/healthcheck.rb +15 -12
- data/lib/kamal/commands/lock.rb +2 -2
- data/lib/kamal/commands/prune.rb +11 -3
- data/lib/kamal/commands/server.rb +5 -0
- data/lib/kamal/commands/traefik.rb +26 -8
- data/lib/kamal/configuration/accessory.rb +14 -2
- data/lib/kamal/configuration/role.rb +112 -19
- data/lib/kamal/configuration/ssh.rb +1 -1
- data/lib/kamal/configuration/volume.rb +22 -0
- data/lib/kamal/configuration.rb +79 -43
- data/lib/kamal/env_file.rb +41 -0
- data/lib/kamal/git.rb +19 -0
- data/lib/kamal/utils.rb +0 -39
- data/lib/kamal/version.rb +1 -1
- metadata +16 -5
- data/lib/kamal/utils/healthcheck_poller.rb +0 -39
data/README.md
CHANGED
@@ -1,1020 +1,12 @@
New in the 1.0.0 README:

# Kamal: Deploy web apps anywhere

From bare metal to cloud VMs, deploy web apps anywhere with zero downtime. Kamal has the dynamic reverse-proxy Traefik hold requests while a new app container is started and the old one is stopped. It works seamlessly across multiple hosts, using SSHKit to execute commands. Originally built for Rails apps, Kamal will work with any type of web app that can be containerized with Docker.

➡️ See [kamal-deploy.org](https://kamal-deploy.org) for documentation on [installation](https://kamal-deploy.org/docs/installation), [configuration](https://kamal-deploy.org/docs/configuration), and [commands](https://kamal-deploy.org/docs/commands).

## Contributing to the documentation

Please help us improve Kamal's documentation on the [basecamp/kamal-site repository](https://github.com/basecamp/kamal-site).

Removed from the 0.16.0 README:

# Kamal
## Installation

If you have a Ruby environment available, you can install Kamal globally with:

```sh
gem install kamal
```

...otherwise, you can run a dockerized version via an alias (add this to your .bashrc or similar to simplify re-use):

```sh
alias kamal='docker run --rm -it -v $HOME/.ssh:/root/.ssh -v /var/run/docker.sock:/var/run/docker.sock -v ${PWD}/:/workdir ghcr.io/basecamp/kamal'
```

Then, inside your app directory, run `kamal init` (or `kamal init --bundle` within Rails 7+ apps where you want a bin/kamal binstub). Now edit the new file `config/deploy.yml`. It could look as simple as this:

```yaml
service: hey
image: 37s/hey
servers:
  - 192.168.0.1
  - 192.168.0.2
registry:
  username: registry-user-name
  password:
    - KAMAL_REGISTRY_PASSWORD
env:
  secret:
    - RAILS_MASTER_KEY
```

Then edit your `.env` file to add your registry password as `KAMAL_REGISTRY_PASSWORD` (and your `RAILS_MASTER_KEY` for production with a Rails app).

Now you're ready to deploy to the servers:

```
kamal setup
```

This will:

1. Connect to the servers over SSH (using root by default, authenticated by your SSH key).
2. Install Docker and curl on any server that might be missing it (using apt-get); root access via SSH is needed for this.
3. Log into the registry both locally and remotely.
4. Build the image using the standard Dockerfile in the root of the application.
5. Push the image to the registry.
6. Pull the image from the registry onto the servers.
7. Ensure Traefik is running and accepting traffic on port 80.
8. Ensure your app responds with `200 OK` to `GET /up` (you must have curl installed inside your app image!).
9. Start a new container with the version of the app that matches the current git version hash.
10. Stop the old container running the previous version of the app.
11. Prune unused images and stopped containers to ensure servers don't fill up.

Voila! All the servers are now serving the app on port 80. If you're just running a single server, you're ready to go. If you're running multiple servers, you need to put a load balancer in front of them. For subsequent deploys, or if your servers already have Docker and curl installed, you can just run `kamal deploy`.
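As a rough sketch of the day-to-day flow after that first `kamal setup`, using only commands covered later in this README:

```bash
# Build, push, and roll out the current git HEAD with zero downtime
kamal deploy

# Inspect what is now running on each host (Traefik and app containers)
kamal details

# If the release turns out to be bad, roll back to a still-available version hash
kamal rollback e5d9d7c2b898289dfbc5f7f1334140d984eedae4
```

The rollback hash is whatever old container is still present on the servers; see the rollback section below for how to list them.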
### Rails <7 usage

Kamal does not need to be in your application's Gemfile to be used. However, if you want to guarantee a specific Kamal version in your CI/CD workflows, you can create a separate Gemfile for Kamal, for example `gemfiles/kamal.gemfile`:

```ruby
source 'https://rubygems.org'

gem 'kamal', '~> 0.14'
```

Bundle with `BUNDLE_GEMFILE=gemfiles/kamal.gemfile bundle`.

After this, Kamal can be used for deployment:

```sh
BUNDLE_GEMFILE=gemfiles/kamal.gemfile bundle exec kamal deploy
```
## Vision

In the past decade+, there's been an explosion in commercial offerings that make deploying web apps easier. Heroku kicked it off with an incredible offering that stayed ahead of the competition seemingly forever. These days we have excellent alternatives like Fly.io and Render. And hosted Kubernetes is making things easier too on AWS, GCP, Digital Ocean, and elsewhere. But these are all offerings that have you renting computers in the cloud at a premium. If you want to run on your own hardware, or even just have a clear migration path to do so in the future, you need to carefully consider how locked in you get to these commercial platforms. Preferably before the bills swallow your business whole!

Kamal seeks to bring the advance in ergonomics pioneered by these commercial offerings to deploying web apps anywhere. Whether that's low-cost cloud options without the managed-service markup from the likes of Digital Ocean, Hetzner, OVH, etc., or it's your own colocated bare metal. To Kamal, it's all the same. Feed the config file a list of IP addresses with vanilla Ubuntu servers that have seen no prep beyond an added SSH key, and you'll be running in literally minutes.

This approach gives you enormous portability. You can have your web app deployed on several clouds with ease like this. Or you can buy the baseline capacity with your own hardware, then deploy to a cloud before a big seasonal spike to get more capacity. When you're not locked into a single provider from a tooling perspective, there are a lot of compelling options available.

Ultimately, Kamal is meant to compress the complexity of going to production using open source tooling that isn't tied to any commercial offering. Not to zero, mind you. You're probably still better off with a fully managed service if basic Linux or Docker is still difficult, but as soon as those concepts are familiar, you'll be ready to go with Kamal.
## Why not just run Capistrano, Kubernetes or Docker Swarm?

Kamal basically is Capistrano for Containers, without the need to carefully prepare servers in advance. No need to ensure that the servers have just the right version of Ruby or other dependencies you need. That all lives in the Docker image now. You can boot a brand new Ubuntu (or whatever) server, add it to the list of servers in Kamal, and it'll be auto-provisioned with Docker, and run right away. Docker's layer caching also speeds up deployments with less mucking about on the server. And the images built for Kamal can be used for CI or later introspection.

Kubernetes is a beast. Running it yourself on your own hardware is not for the faint of heart. It's a fine option if you want to run on someone else's platform, either transparently [like Render](https://thenewstack.io/render-cloud-deployment-with-less-engineering/) or explicitly on AWS/GCP, but if you'd like the freedom to move between cloud and your own hardware, or even mix the two, Kamal is much simpler. You can see everything that's going on, it's just basic Docker commands being called.

Docker Swarm is much simpler than Kubernetes, but it's still built on the same declarative model that uses state reconciliation. Kamal is intentionally designed around imperative commands, like Capistrano.

Ultimately, there are a myriad of ways to deploy web apps, but this is the toolkit we're using at [37signals](https://37signals.com) to bring [HEY](https://www.hey.com) [home from the cloud](https://world.hey.com/dhh/why-we-re-leaving-the-cloud-654b47e0) without losing the advantages of modern containerization tooling.
## Running Kamal from Docker

Kamal is packaged up in a Docker container similarly to [rails/docked](https://github.com/rails/docked). This will allow you to run Kamal (from your application directory) without having to install any dependencies other than Docker. Add the following alias to your profile configuration to make working with the container more convenient:

```bash
alias kamal="docker run -it --rm -v '${PWD}:/workdir' -v '${SSH_AUTH_SOCK}:/ssh-agent' -v /var/run/docker.sock:/var/run/docker.sock -e 'SSH_AUTH_SOCK=/ssh-agent' ghcr.io/basecamp/kamal:latest"
```

Since Kamal uses SSH to establish a remote connection, it will need access to your SSH agent. The above command uses a volume mount to make it available inside the container and configures the SSH agent inside the container to make use of it.
## Configuration

### Using .env file to load required environment variables

Kamal uses [dotenv](https://github.com/bkeepers/dotenv) to automatically load environment variables set in the `.env` file present in the application root. This file can be used to set variables like `KAMAL_REGISTRY_PASSWORD` or database passwords. For this reason, you must ensure that `.env` files are not checked into Git or included in your Dockerfile! The format is just key-value, like:

```bash
KAMAL_REGISTRY_PASSWORD=pw
DB_PASSWORD=secret123
```
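Since the `.env` file holds live credentials, a minimal way to keep it out of version control and out of the image build context (assuming your project also uses a `.dockerignore` file) is:

```bash
# Keep secrets out of Git and out of the Docker build context
echo ".env*" >> .gitignore
echo ".env*" >> .dockerignore
```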
### Using a generated .env file

#### 1Password as a secret store

If you're using a centralized secret store, like 1Password, you can create `.env.erb` as a template which looks up the secrets. Example of a `.env.erb` file:

```erb
<% if (session_token = `op signin --account my-one-password-account --raw`.strip) != "" %># Generated by kamal envify
GITHUB_TOKEN=<%= `gh config get -h github.com oauth_token`.strip %>
KAMAL_REGISTRY_PASSWORD=<%= `op read "op://Vault/Docker Hub/password" -n --session #{session_token}` %>
RAILS_MASTER_KEY=<%= `op read "op://Vault/My App/RAILS_MASTER_SECRET" -n --session #{session_token}` %>
MYSQL_ROOT_PASSWORD=<%= `op read "op://Vault/My App/MYSQL_ROOT_PASSWORD" -n --session #{session_token}` %>
<% else raise ArgumentError, "Session token missing" end %>
```

This template can safely be checked into git. Then everyone deploying the app can run `kamal envify` when they set up the app for the first time, or when passwords change, to get the correct `.env` file.

If you need separate env variables for different destinations, you can set them with `.env.destination.erb` for the template, which will generate `.env.staging` when run with `kamal envify -d staging`.

Note: If you use biometrics with 1Password, you can remove the `session_token`-related parts in the example and just call `op read op://Vault/Docker Hub/password -n`.

#### Bitwarden as a secret store

If you are using an open-source secret store like Bitwarden, you can create `.env.erb` as a template which looks up the secrets.

You can store `SOME_SECRET` in a secure note in the Bitwarden vault:

```
$ bw list items --search SOME_SECRET | jq
? Master password: [hidden]

[
  {
    "object": "item",
    "id": "123123123-1232-4224-222f-234234234234",
    "organizationId": null,
    "folderId": null,
    "type": 2,
    "reprompt": 0,
    "name": "SOME_SECRET",
    "notes": "yyy",
    "favorite": false,
    "secureNote": {
      "type": 0
    },
    "collectionIds": [],
    "revisionDate": "2023-02-28T23:54:47.868Z",
    "creationDate": "2022-11-07T03:16:05.828Z",
    "deletedDate": null
  }
]
```

Extract the `id` of `SOME_SECRET` from the JSON above and use it in the ERB below.

Example `.env.erb` file:

```erb
<% if (session_token=`bw unlock --raw`.strip) != "" %># Generated by kamal envify
SOME_SECRET=<%= `bw get notes 123123123-1232-4224-222f-234234234234 --session #{session_token}` %>
<% else raise ArgumentError, "session_token token missing" end %>
```

Then everyone deploying the app can run `kamal envify` and Kamal will generate `.env`.
### Using another registry than Docker Hub

The default registry is Docker Hub, but you can change it using `registry/server`:

```yaml
registry:
  server: registry.digitalocean.com
  username:
    - DOCKER_REGISTRY_TOKEN
  password:
    - DOCKER_REGISTRY_TOKEN
```

A reference to secret `DOCKER_REGISTRY_TOKEN` will look for `ENV["DOCKER_REGISTRY_TOKEN"]` on the machine running Kamal.

#### Using AWS ECR as the container registry

AWS ECR's access token is only valid for 12 hours. In order to not have to manually regenerate the token every time, you can use ERB in the `deploy.yml` file to shell out to the `aws` CLI command and obtain the token:

```yaml
registry:
  server: <your aws account id>.dkr.ecr.<your aws region id>.amazonaws.com
  username: AWS
  password: <%= %x(aws ecr get-login-password) %>
```

You will need to have the `aws` CLI installed locally for this to work.
### Using a different SSH user than root

The default SSH user is root, but you can change it using `ssh/user`:

```yaml
ssh:
  user: app
```

If you are using a non-root user (`app` in the example above), you need to bootstrap your servers manually before using them with Kamal. On Ubuntu, you'd do:

```bash
sudo apt update
sudo apt upgrade -y
sudo apt install -y docker.io curl git
sudo usermod -a -G docker app
```

### Using a proxy SSH host

If you need to connect to a server through a proxy host, you can use `ssh/proxy`:

```yaml
ssh:
  proxy: "192.168.0.1" # defaults to root as the user
```

Or with a specific user:

```yaml
ssh:
  proxy: "app@192.168.0.1"
```

Also, if you need a specific proxy command to connect to the server:

```yaml
ssh:
  proxy_command: aws ssm start-session --target %h --document-name AWS-StartSSHSession --parameters 'portNumber=%p' --region=us-east-1 ## ssh via aws ssm
```

### Configuring the SSH log level

```yaml
ssh:
  log_level: debug
```

Valid levels are `debug`, `info`, `warn`, `error` and `fatal` (default).
### Using env variables

You can inject env variables into the app containers using `env`:

```yaml
env:
  DATABASE_URL: mysql2://db1/hey_production/
  REDIS_URL: redis://redis1:6379/1
```

### Using secret env variables

If you have env variables that are secret, you can divide the `env` block into `clear` and `secret`:

```yaml
env:
  clear:
    DATABASE_URL: mysql2://db1/hey_production/
    REDIS_URL: redis://redis1:6379/1
  secret:
    - DATABASE_PASSWORD
    - REDIS_PASSWORD
```

The list of secret env variables will be expanded at run time from your local machine. So a reference to a secret `DATABASE_PASSWORD` will look for `ENV["DATABASE_PASSWORD"]` on the machine running Kamal. Just like with build secrets.

If the referenced secret ENVs are missing, the configuration will be halted with a `KeyError` exception.

Note: Marking an ENV as secret currently only redacts its value in the output for Kamal. The ENV is still injected in the clear into the container at runtime.
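As a minimal sketch of how those secrets reach Kamal on the machine you deploy from, you can either export them in your shell or keep them in the `.env` file described above (the values here are placeholders):

```bash
# Either export the secrets before deploying...
export DATABASE_PASSWORD="super-secret"
export REDIS_PASSWORD="also-secret"
kamal deploy

# ...or keep them in .env, which Kamal loads automatically via dotenv
cat >> .env <<'EOF'
DATABASE_PASSWORD=super-secret
REDIS_PASSWORD=also-secret
EOF
```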
### Using volumes

You can add custom volumes into the app containers using `volumes`:

```yaml
volumes:
  - "/local/path:/container/path"
```

### Kamal env variables

The following env variables are set when your container runs:

`KAMAL_CONTAINER_NAME`: this contains the current container name and version.
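For instance, your container's entrypoint or logging could tag output with it. The value format below is only illustrative:

```bash
# Inside the running app container, identify which container/version is serving
echo "Serving from ${KAMAL_CONTAINER_NAME}"
# => e.g. "Serving from hey-web-6ef8a6a84c52..." (illustrative format)
```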
### Using different roles for servers

If your application uses separate hosts for running jobs or other roles beyond the default web role, you can specify these hosts in a dedicated role with a new entrypoint command like so:

```yaml
servers:
  web:
    - 192.168.0.1
    - 192.168.0.2
  job:
    hosts:
      - 192.168.0.3
      - 192.168.0.4
    cmd: bin/jobs
```

Note: By default, Traefik will only be installed and run on the servers in the `web` role (and on all servers if no roles are defined). If you need Traefik on hosts in roles other than `web`, add `traefik: true`:

```yaml
servers:
  web:
    - 192.168.0.1
    - 192.168.0.2
  web2:
    traefik: true
    hosts:
      - 192.168.0.3
      - 192.168.0.4
```

### Using container labels

You can specialize the default Traefik rules by setting labels on the containers that are being started:

```yaml
labels:
  traefik.http.routers.hey-web.rule: Host(`app.hey.com`)
```

Traefik rules are named in the "service-role-destination" format. The default role is `web` if none is specified. If the destination is not specified, it is not included. To give an example, the above rule would become "traefik.http.routers.hey-web-staging.rule" if it was for the "staging" destination.

Note: The backticks are needed to ensure the rule is passed in correctly and not treated as command substitution by Bash!

This allows you to run multiple applications on the same server sharing the same Traefik instance and port.
See https://doc.traefik.io/traefik/routing/routers/#rule for a full list of available routing rules.

The labels can also be applied on a per-role basis:

```yaml
servers:
  web:
    - 192.168.0.1
    - 192.168.0.2
  job:
    hosts:
      - 192.168.0.3
      - 192.168.0.4
    cmd: bin/jobs
    labels:
      my-label: "50"
```
### Using shell expansion

You can use shell expansion to interpolate values from the host machine into labels and env variables with the `${}` syntax.
Anything within the curly braces will be executed on the host machine and the result will be interpolated into the label or env variable.

```yaml
labels:
  host-machine: "${cat /etc/hostname}"

env:
  HOST_DEPLOYMENT_DIR: "${PWD}"
```

Note: Any other occurrence of `$` will be escaped to prevent unwanted shell expansion!

### Using container options

You can specialize the options used to start containers using the `options` definitions:

```yaml
servers:
  web:
    - 192.168.0.1
    - 192.168.0.2
  job:
    hosts:
      - 192.168.0.3
      - 192.168.0.4
    cmd: bin/jobs
    options:
      cap-add: true
      cpu-count: 4
```

That'll start the job containers with `docker run ... --cap-add --cpu-count 4 ...`.
### Setting a minimum version

You can set the minimum Kamal version with:

```yaml
minimum_version: 0.13.3
```

Note: Versions <= 0.13.2 will ignore this setting.

### Configuring logging

You can configure the logging driver and options passed to Docker using `logging`:

```yaml
logging:
  driver: awslogs
  options:
    awslogs-region: "eu-central-2"
    awslogs-group: "my-app"
```

If nothing is configured, the default option `max-size=10m` is used for all containers. The default logging driver of Docker is `json-file`.

### Using a different stop wait time

On a new deploy, each old running container is gracefully shut down with a `SIGTERM`, and after a grace period of 10 seconds a `SIGKILL` is sent.
You can configure this grace period via the `stop_wait_time` option:

```yaml
stop_wait_time: 30
```
### Using remote builder for native multi-arch

If you're developing on ARM64 (like Apple Silicon), but you want to deploy on AMD64 (x86 64-bit), you can use multi-architecture images. By default, Kamal will set up a local buildx configuration that does this through QEMU emulation. But this can be quite slow, especially on the first build.

If you want to speed up this process by using a remote AMD64 host to natively build the AMD64 part of the image, while natively building the ARM64 part locally, you can do so using builder options:

```yaml
builder:
  local:
    arch: arm64
    host: unix:///Users/<%= `whoami`.strip %>/.docker/run/docker.sock
  remote:
    arch: amd64
    host: ssh://root@192.168.0.1
```

Note: You must have Docker running on the remote host being used as a builder. This instance should only be shared for builds using the same registry and credentials.

### Using remote builder for single-arch

If you're developing on ARM64 (like Apple Silicon), want to deploy on AMD64 (x86 64-bit), but don't need to run the image locally (or on other ARM64 hosts), you can configure a remote builder that just targets AMD64. This is a bit faster than building with multi-arch, as there's nothing to build locally.

```yaml
builder:
  remote:
    arch: amd64
    host: ssh://root@192.168.0.1
```

### Using native builder when multi-arch isn't needed

If you're developing on the same architecture as the one you're deploying on, you can speed up the build by forgoing both multi-arch and remote building:

```yaml
builder:
  multiarch: false
```

This is also a good option if you're running Kamal from a CI server that shares architecture with the deployment servers.

### Using a different Dockerfile or context when building

If you need to pass a different Dockerfile or context to the build command (e.g. if you're using a monorepo or you have different Dockerfiles), you can do so in the builder options:

```yaml
# Use a different Dockerfile
builder:
  dockerfile: Dockerfile.xyz

# Set context
builder:
  context: ".."

# Set Dockerfile and context
builder:
  dockerfile: "../Dockerfile.xyz"
  context: ".."
```
### Using multistage builder cache

Docker multistage build cache can single-handedly speed up your builds by a lot. Currently Kamal only supports using the GHA cache or the Registry cache:

```yaml
# Using GHA cache
builder:
  cache:
    type: gha

# Using Registry cache
builder:
  cache:
    type: registry

# Using Registry cache with a different cache image
builder:
  cache:
    type: registry
    # default image name is <image>-build-cache
    image: application-cache-image

# Using Registry cache with additional cache-to options
builder:
  cache:
    type: registry
    options: mode=max,image-manifest=true,oci-mediatypes=true
```

For further insights into build cache optimization, check out the documentation on Docker's official website: https://docs.docker.com/build/cache/.

### Using build secrets for new images

Some images need a secret passed in during build time, like a GITHUB_TOKEN, to give access to private gem repositories. This can be done by having the secret in ENV, then referencing it in the builder configuration:

```yaml
builder:
  secrets:
    - GITHUB_TOKEN
```

This build secret can then be referenced in the Dockerfile:

```dockerfile
# Copy Gemfiles
COPY Gemfile Gemfile.lock ./

# Install dependencies, including private repositories via access token (then remove bundle cache with exposed GITHUB_TOKEN)
RUN --mount=type=secret,id=GITHUB_TOKEN \
  BUNDLE_GITHUB__COM=x-access-token:$(cat /run/secrets/GITHUB_TOKEN) \
  bundle install && \
  rm -rf /usr/local/bundle/cache
```
### Traefik command arguments

Customize the Traefik command line using `args`:

```yaml
traefik:
  args:
    accesslog: true
    accesslog.format: json
```

This starts the Traefik container with `--accesslog=true --accesslog.format=json` arguments.

### Traefik host port binding

Traefik binds to port 80 by default. Specify an alternative port using `host_port`:

```yaml
traefik:
  host_port: 8080
```

### Traefik version, upgrades, and custom images

Kamal runs the traefik:v2.9 image to track Traefik 2.9.x releases.

To pin Traefik to a specific version or an image published to your registry, specify `image`:

```yaml
traefik:
  image: traefik:v2.10.0-rc1
```

This is useful for downgrading Traefik if there's an unexpected breaking change in a minor version release, upgrading Traefik to test forthcoming releases, or running your own Traefik-derived image.

Kamal has not been tested for compatibility with Traefik 3 betas. Please do test it!

### Traefik container configuration

Pass additional Docker configuration for the Traefik container using `options`:

```yaml
traefik:
  options:
    publish:
      - 8080:8080
    volume:
      - /tmp/example.json:/tmp/example.json
    memory: 512m
```

This starts the Traefik container with `--volume /tmp/example.json:/tmp/example.json --publish 8080:8080 --memory 512m` arguments to `docker run`.

### Traefik container labels

Add labels to the Traefik Docker container:

```yaml
traefik:
  labels:
    traefik.enable: true
    traefik.http.routers.dashboard.rule: Host(`traefik.example.com`) && (PathPrefix(`/api`) || PathPrefix(`/dashboard`))
    traefik.http.routers.dashboard.service: api@internal
    traefik.http.routers.dashboard.middlewares: auth
    traefik.http.middlewares.auth.basicauth.users: test:$2y$05$H2o72tMaO.TwY1wNQUV1K.fhjRgLHRDWohFvUZOJHBEtUXNKrqUKi # test:password
```

This labels the Traefik container with `--label traefik.http.routers.dashboard.middlewares="auth"` and so on.

### Traefik alternate entrypoints

You can configure multiple entrypoints for Traefik like so:

```yaml
service: myservice

labels:
  traefik.tcp.routers.other.rule: 'HostSNI(`*`)'
  traefik.tcp.routers.other.entrypoints: otherentrypoint
  traefik.tcp.services.other.loadbalancer.server.port: 9000
  traefik.http.routers.myservice.entrypoints: web
  traefik.http.services.myservice.loadbalancer.server.port: 8080

traefik:
  options:
    publish:
      - 9000:9000
  args:
    entrypoints.web.address: ':80'
    entrypoints.otherentrypoint.address: ':9000'
```

### Rebooting Traefik

If you make changes to Traefik args or labels, you'll need to reboot with:

`kamal traefik reboot`

In production, reboot the Traefik containers one by one with a slower but safer approach, using a rolling reboot:

`kamal traefik reboot --rolling`
### Configuring build args for new images

Build arguments that aren't secret can also be configured:

```yaml
builder:
  args:
    RUBY_VERSION: 3.2.0
```

This build argument can then be used in the Dockerfile:

```dockerfile
ARG RUBY_VERSION
FROM ruby:$RUBY_VERSION-slim as base
```
### Using accessories for database, cache, search services

You can manage your accessory services via Kamal as well. Accessories are long-lived services that your app depends on. They are not updated when you deploy.

```yaml
accessories:
  mysql:
    image: mysql:5.7
    host: 1.1.1.3
    port: 3306
    env:
      clear:
        MYSQL_ROOT_HOST: '%'
      secret:
        - MYSQL_ROOT_PASSWORD
    volumes:
      - /var/lib/mysql:/var/lib/mysql
    options:
      cpus: 4
      memory: "2GB"
  redis:
    image: redis:latest
    roles:
      - web
    port: "36379:6379"
    volumes:
      - /var/lib/redis:/data
  internal-example:
    image: registry.digitalocean.com/user/otherservice:latest
    host: 1.1.1.5
    port: 44444
```

The hosts that the accessories will run on can be specified by hosts or roles:

```yaml
# Single host
mysql:
  host: 1.1.1.1

# Multiple hosts
redis:
  hosts:
    - 1.1.1.1
    - 1.1.1.2

# By role
monitoring:
  roles:
    - web
    - jobs
```

Now run `kamal accessory start mysql` to start the MySQL server on 1.1.1.3. See `kamal accessory` for all the commands possible.

Accessory images must be public or tagged in your private registry.

### Using Cron

You can use a specific container to run your Cron jobs:

```yaml
servers:
  cron:
    hosts:
      - 192.168.0.1
    cmd:
      bash -c "cat config/crontab | crontab - && cron -f"
```

This assumes the Cron settings are stored in `config/crontab`.
### Healthcheck

Kamal uses Docker healthchecks to check the health of your application during deployment. Traefik uses this same healthcheck status to determine when a container is ready to receive traffic.

The healthcheck defaults to testing the HTTP response to the path `/up` on port 3000, up to 7 times. You can tailor this behaviour with the `healthcheck` setting:

```yaml
healthcheck:
  path: /healthz
  port: 4000
  max_attempts: 7
  interval: 20s
```

This will ensure your application is configured with a Traefik label for the healthcheck against `/healthz` and that the pre-deploy healthcheck that Kamal performs is done against the same path on port 4000.

You can also specify a custom healthcheck command, which is useful for non-HTTP services:

```yaml
healthcheck:
  cmd: /bin/check_health
```

The top-level healthcheck configuration applies to all services that use Traefik, by default. You can also specialize the configuration at the role level:

```yaml
servers:
  job:
    hosts: ...
    cmd: bin/jobs
    healthcheck:
      cmd: bin/check
```

The healthcheck allows for an optional `max_attempts` setting, which will attempt the healthcheck up to the specified number of times before failing the deploy. This is useful for applications that take a while to start up. The default is 7.

Note: The HTTP health checks assume that the `curl` command is available inside the container. If that's not the case, use the healthcheck's `cmd` option to specify an alternative check that the container supports.
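Because the default check is an HTTP request made from inside the container, a quick way to sanity-check your image before deploying is to run roughly the same request yourself. This is only a sketch of the idea, not the exact command Kamal configures, and it assumes the default path and port plus a hypothetical image tag:

```bash
# Boot the image locally, then ask /up on port 3000 from inside the container
docker run --rm -d --name healthcheck-test my-app-image   # hypothetical image tag
sleep 5                                                    # give the app a moment to boot
docker exec healthcheck-test curl -f http://localhost:3000/up && echo "healthy"
docker stop healthcheck-test
```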
## Commands

### Running commands on servers

You can execute one-off commands on the servers:

```bash
# Runs command on all servers
kamal app exec 'ruby -v'
App Host: 192.168.0.1
ruby 3.1.3p185 (2022-11-24 revision 1a6b16756e) [x86_64-linux]

App Host: 192.168.0.2
ruby 3.1.3p185 (2022-11-24 revision 1a6b16756e) [x86_64-linux]

# Runs command on primary server
kamal app exec --primary 'cat .ruby-version'
App Host: 192.168.0.1
3.1.3

# Runs Rails command on all servers
kamal app exec 'bin/rails about'
App Host: 192.168.0.1
About your application's environment
Rails version 7.1.0.alpha
Ruby version ruby 3.1.3p185 (2022-11-24 revision 1a6b16756e) [x86_64-linux]
RubyGems version 3.3.26
Rack version 2.2.5
Middleware ActionDispatch::HostAuthorization, Rack::Sendfile, ActionDispatch::Static, ActionDispatch::Executor, Rack::Runtime, Rack::MethodOverride, ActionDispatch::RequestId, ActionDispatch::RemoteIp, Rails::Rack::Logger, ActionDispatch::ShowExceptions, ActionDispatch::DebugExceptions, ActionDispatch::Callbacks, ActionDispatch::Cookies, ActionDispatch::Session::CookieStore, ActionDispatch::Flash, ActionDispatch::ContentSecurityPolicy::Middleware, ActionDispatch::PermissionsPolicy::Middleware, Rack::Head, Rack::ConditionalGet, Rack::ETag, Rack::TempfileReaper
Application root /rails
Environment production
Database adapter sqlite3
Database schema version 20221231233303

App Host: 192.168.0.2
About your application's environment
Rails version 7.1.0.alpha
Ruby version ruby 3.1.3p185 (2022-11-24 revision 1a6b16756e) [x86_64-linux]
RubyGems version 3.3.26
Rack version 2.2.5
Middleware ActionDispatch::HostAuthorization, Rack::Sendfile, ActionDispatch::Static, ActionDispatch::Executor, Rack::Runtime, Rack::MethodOverride, ActionDispatch::RequestId, ActionDispatch::RemoteIp, Rails::Rack::Logger, ActionDispatch::ShowExceptions, ActionDispatch::DebugExceptions, ActionDispatch::Callbacks, ActionDispatch::Cookies, ActionDispatch::Session::CookieStore, ActionDispatch::Flash, ActionDispatch::ContentSecurityPolicy::Middleware, ActionDispatch::PermissionsPolicy::Middleware, Rack::Head, Rack::ConditionalGet, Rack::ETag, Rack::TempfileReaper
Application root /rails
Environment production
Database adapter sqlite3
Database schema version 20221231233303

# Run Rails runner on primary server
kamal app exec -p 'bin/rails runner "puts Rails.application.config.time_zone"'
UTC
```

### Running interactive commands over SSH

You can run interactive commands, like a Rails console or a bash session, on a server (default is primary, use `--hosts` to connect to another):

```bash
# Starts a bash session in a new container made from the most recent app image
kamal app exec -i bash

# Starts a bash session in the currently running container for the app
kamal app exec -i --reuse bash

# Starts a Rails console in a new container made from the most recent app image
kamal app exec -i 'bin/rails console'
```

### Running details to show state of containers

You can see the state of your servers by running `kamal details`:

```
Traefik Host: 192.168.0.1
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6195b2a28c81 traefik "/entrypoint.sh --pr…" 30 minutes ago Up 19 minutes 0.0.0.0:80->80/tcp, :::80->80/tcp traefik

Traefik Host: 192.168.0.2
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
de14a335d152 traefik "/entrypoint.sh --pr…" 30 minutes ago Up 19 minutes 0.0.0.0:80->80/tcp, :::80->80/tcp traefik

App Host: 192.168.0.1
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
badb1aa51db3 registry.digitalocean.com/user/app:6ef8a6a84c525b123c5245345a8483f86d05a123 "/rails/bin/docker-e…" 13 minutes ago Up 13 minutes 3000/tcp chat-6ef8a6a84c525b123c5245345a8483f86d05a123

App Host: 192.168.0.2
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1d3c91ed1f55 registry.digitalocean.com/user/app:6ef8a6a84c525b123c5245345a8483f86d05a123 "/rails/bin/docker-e…" 13 minutes ago Up 13 minutes 3000/tcp chat-6ef8a6a84c525b123c5245345a8483f86d05a123
```

You can also see just info for app containers with `kamal app details` or just for Traefik with `kamal traefik details`.
### Running rollback to fix a bad deploy

If you've discovered a bad deploy, you can quickly roll back by reactivating the old, stopped container. You can see what old containers are available for rollback by running `kamal app containers`. It'll give you a presentation similar to `kamal app details`, but include all the old containers as well. Showing something like this:

```
App Host: 192.168.0.1
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1d3c91ed1f51 registry.digitalocean.com/user/app:6ef8a6a84c525b123c5245345a8483f86d05a123 "/rails/bin/docker-e…" 19 minutes ago Up 19 minutes 3000/tcp chat-6ef8a6a84c525b123c5245345a8483f86d05a123
539f26b28369 registry.digitalocean.com/user/app:e5d9d7c2b898289dfbc5f7f1334140d984eedae4 "/rails/bin/docker-e…" 31 minutes ago Exited (1) 27 minutes ago chat-e5d9d7c2b898289dfbc5f7f1334140d984eedae4

App Host: 192.168.0.2
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
badb1aa51db4 registry.digitalocean.com/user/app:6ef8a6a84c525b123c5245345a8483f86d05a123 "/rails/bin/docker-e…" 19 minutes ago Up 19 minutes 3000/tcp chat-6ef8a6a84c525b123c5245345a8483f86d05a123
6f170d1172ae registry.digitalocean.com/user/app:e5d9d7c2b898289dfbc5f7f1334140d984eedae4 "/rails/bin/docker-e…" 31 minutes ago Exited (1) 27 minutes ago chat-e5d9d7c2b898289dfbc5f7f1334140d984eedae4
```

From the example above, we can see that `e5d9d7c2b898289dfbc5f7f1334140d984eedae4` was the last version, so it's available as a rollback target. We can perform this rollback by running `kamal rollback e5d9d7c2b898289dfbc5f7f1334140d984eedae4`. That'll stop `6ef8a6a84c525b123c5245345a8483f86d05a123` and then start `e5d9d7c2b898289dfbc5f7f1334140d984eedae4`. Because the old container is still available, this is very quick. Nothing to download from the registry.

Note that by default old containers are pruned after 3 days when you run `kamal deploy`.
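Putting the two steps from this section together, a rollback session might look like the following sketch (the version hash is just the one from the example output above):

```bash
# List running and stopped app containers to find a rollback target
kamal app containers

# Reactivate the old container for that version
kamal rollback e5d9d7c2b898289dfbc5f7f1334140d984eedae4
```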
### Running removal to clean up servers

If you wish to remove the entire application, including Traefik, containers, images, and the registry session, you can run `kamal remove`. This will leave the servers clean.

## Locking

Commands that are unsafe to run concurrently will take a deploy lock while they run. The lock is the `kamal_lock-<service>` directory on the primary server.

You can check the lock status with:

```
kamal lock status

Locked by: AN Other at 2023-03-24 09:49:03 UTC
Version: 77f45c0686811c68989d6576748475a60bf53fc2
Message: Automatic deploy lock
```

You can also manually acquire and release the lock:

```
kamal lock acquire -m "Doing maintenance"
```

```
kamal lock release
```

## Rolling deployments

When deploying to large numbers of hosts, you might prefer not to restart your services on every host at the same time.

Kamal's default is to boot new containers on all hosts in parallel. But you can control this by configuring `boot/limit` and `boot/wait` as options:

```yaml
service: myservice

boot:
  limit: 10 # Can also specify as a percentage of total hosts, such as "25%"
  wait: 2
```

When `limit` is specified, containers will be booted on, at most, `limit` hosts at once. Kamal will pause for `wait` seconds between batches.

These settings only apply when booting containers (using `kamal deploy`, or `kamal app boot`). For other commands, Kamal continues to run commands in parallel across all hosts.
## Hooks

You can run custom scripts at specific points with hooks.

Hooks should be stored in the `.kamal/hooks` folder. Running `kamal init` will create that folder and add some sample scripts.

You can change their location by setting `hooks_path` in the configuration file.

If the script returns a non-zero exit code, the command will be aborted.

`KAMAL_*` environment variables are available to the hooks command for fine-grained audit reporting, e.g. for triggering deployment reports or firing a JSON webhook. These variables include:

- `KAMAL_RECORDED_AT` - UTC timestamp in ISO 8601 format, e.g. `2023-04-14T17:07:31Z`
- `KAMAL_PERFORMER` - the local user performing the command (from `whoami`)
- `KAMAL_SERVICE_VERSION` - an abbreviated service and version for use in messages, e.g. app@150b24f
- `KAMAL_VERSION` - the full version being deployed
- `KAMAL_HOSTS` - a comma-separated list of the hosts targeted by the command
- `KAMAL_COMMAND` - the command we are running
- `KAMAL_SUBCOMMAND` - optional: the subcommand we are running
- `KAMAL_DESTINATION` - optional: destination, e.g. "staging"
- `KAMAL_ROLE` - optional: role targeted, e.g. "web"

There are four hooks:

1. pre-connect
   Called before taking the deploy lock. For checks that need to run before connecting to remote hosts - e.g. DNS warming.

2. pre-build
   Used for pre-build checks - e.g. that there are no uncommitted changes or that CI has passed (see the sample pre-build hook after the example below).

3. pre-deploy
   For final checks before deploying, e.g. checking CI completed.

4. post-deploy - run after a deploy, redeploy or rollback.
   This hook is also passed a `KAMAL_RUNTIME` env variable set to the total seconds the deploy took.

   This could be used to broadcast a deployment message, or register the new version with an APM.

The command could look something like:

```bash
#!/usr/bin/env bash
curl -q -d content="[My App] ${KAMAL_PERFORMER} Rolled back to version ${KAMAL_VERSION}" https://3.basecamp.com/XXXXX/integrations/XXXXX/buckets/XXXXX/chats/XXXXX/lines
```

That'll post a line like the following to a preconfigured chatbot in Basecamp:

```
[My App] [dhh] Rolled back to version d264c4e92470ad1bd18590f04466787262f605de
```
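For the pre-build case mentioned above, a hook along these lines (a sketch, not one of the samples `kamal init` generates) would stop a deploy when the working tree has uncommitted changes:

```bash
#!/usr/bin/env bash
# .kamal/hooks/pre-build — abort the deploy if anything is uncommitted
if [ -n "$(git status --porcelain)" ]; then
  echo "Refusing to build ${KAMAL_SERVICE_VERSION}: uncommitted changes present" >&2
  exit 1
fi
```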
Set `--skip_hooks` to avoid running the hooks.
## SSH connection management

Creating SSH connections concurrently can be an issue when deploying to many servers. By default, Kamal will limit concurrent connection starts to 30 at a time.

It also sets a long idle timeout of 900 seconds on connections to prevent reconnection storms after a long idle period, like building an image or waiting for CI.

You can configure both of these settings:

```yaml
sshkit:
  max_concurrent_starts: 10
  pool_idle_timeout: 300
```
## Stage of development

This is beta software. Commands may still move around. But we're live in production at [37signals](https://37signals.com).

## License