cpl 1.0.0 → 1.0.2
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +4 -4
- data/.gitignore +1 -0
- data/CHANGELOG.md +32 -3
- data/Gemfile.lock +4 -4
- data/README.md +140 -178
- data/docs/assets/grafana-alert.png +0 -0
- data/docs/assets/memcached.png +0 -0
- data/docs/assets/sidekiq-pre-stop-hook.png +0 -0
- data/docs/commands.md +14 -2
- data/docs/migrating.md +262 -0
- data/docs/tips.md +177 -0
- data/examples/circleci.yml +8 -9
- data/examples/controlplane.yml +16 -14
- data/lib/command/base.rb +1 -1
- data/lib/command/cleanup_stale_apps.rb +1 -0
- data/lib/command/ps_start.rb +1 -3
- data/lib/command/ps_stop.rb +3 -5
- data/lib/command/ps_wait.rb +35 -0
- data/lib/core/controlplane.rb +20 -0
- data/lib/cpl/version.rb +2 -1
- data/lib/cpl.rb +42 -1
- data/script/update_command_docs +2 -2
- data/templates/daily-task.yml +5 -4
- data/templates/maintenance.yml +4 -3
- data/templates/memcached.yml +3 -2
- data/templates/postgres.yml +3 -2
- data/templates/rails.yml +3 -2
- data/templates/redis.yml +2 -1
- data/templates/sidekiq.yml +13 -4
- metadata +8 -2
data/docs/migrating.md
ADDED
@@ -0,0 +1,262 @@

# Steps to Migrate from Heroku to Control Plane

We recommend following along with
[this example project](https://github.com/shakacode/react-webpack-rails-tutorial).

1. [Clone the Staging Environment](#clone-the-staging-environment)
   - [Review Special Gems](#review-special-gems)
   - [Create a Minimum Bootable Config](#create-a-minimum-bootable-config)
2. [Create the Review App Process](#create-the-review-app-process)
   - [Database for Review Apps](#database-for-review-apps)
   - [Redis and Memcached for Review Apps](#redis-and-memcached-for-review-apps)
3. [Deploy to Production](#deploy-to-production)
## Clone the Staging Environment

By cloning the staging environment on Heroku, you can speed up the initial provisioning of the app on Control Plane
without compromising your current environment.

Consider migrating just the web dyno first, and get other types of dynos working afterward. You can also move the
add-ons to Control Plane later once the app works as expected.

First, create a new Heroku app with all the add-ons, copying the data from the current staging app.

Then, copy project-specific configs to a `.controlplane/` directory at the top of your project. `cpl` will pick those up
depending on which project folder tree it runs in. Thus, this automates running several projects with different configs
without explicitly switching configs.

Edit the `.controlplane/controlplane.yml` file as needed. Note that the `my-app-staging` name used in the examples below
is defined in this file. See
[this example](https://github.com/shakacode/react-webpack-rails-tutorial/blob/master/.controlplane/controlplane.yml).

Before the initial setup, add the templates for the app to the `.controlplane/controlplane.yml` file, using the `setup`
key, e.g.:

```yaml
my-app-staging:
  <<: *common
  setup:
    - gvc
    - redis
    - memcached
    - rails
    - sidekiq
```

Note how the templates correspond to files in the `.controlplane/templates/` directory. These files will be used by the
`cpl setup-app` and `cpl apply-template` commands.

Ensure that env vars point to the Heroku add-ons in the template for the app (`.controlplane/templates/gvc.yml`). See
[this example](https://github.com/shakacode/react-webpack-rails-tutorial/blob/master/.controlplane/templates/gvc.yml).

After that, create a Dockerfile in `.controlplane/Dockerfile` for your deployment. See
[this example](https://github.com/shakacode/react-webpack-rails-tutorial/blob/master/.controlplane/Dockerfile).

You should have a folder structure similar to the following:

```sh
app_main_folder/
  .controlplane/
    Dockerfile        # Your app's Dockerfile, with some Control Plane changes.
    controlplane.yml
    entrypoint.sh     # App-specific - edit as needed.
    templates/
      gvc.yml
      memcached.yml
      rails.yml
      redis.yml
      sidekiq.yml
```

The example
[`.controlplane/` directory](https://github.com/shakacode/react-webpack-rails-tutorial/tree/master/.controlplane)
already contains these files.

Finally, check the app for any Heroku-specific code and update it, such as the `HEROKU_SLUG_COMMIT` env var and other
env vars beginning with `HEROKU_`. You should add some logic to check for the Control Plane equivalents - it might be
worth adding a `CONTROLPLANE` env var to act as a feature flag and help run different code for Heroku and Control Plane
until the migration is complete.
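As a sketch of that feature-flag approach (the helper and the `COMMIT_SHA` fallback name are illustrative, not part of the gem), platform-specific env vars can be wrapped behind a single check:

```ruby
# Hypothetical helper for the migration period; CONTROLPLANE is the feature
# flag suggested above, and the fallback env var names are illustrative.
def on_control_plane?
  ENV.key?("CONTROLPLANE")
end

# Pick the commit SHA from whichever platform the app is running on.
def commit_sha
  on_control_plane? ? ENV["COMMIT_SHA"] : ENV["HEROKU_SLUG_COMMIT"]
end
```

Once the migration is complete, the flag and the Heroku branch can be deleted.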
You might want to [review special gems](#review-special-gems) and
[create a minimum bootable config](#create-a-minimum-bootable-config).

At first, do the deployments from the command line. Then set up CI scripts to trigger the deployment upon merges to
master/main.

Use these commands for the initial setup and deployment:

```sh
# Provision infrastructure (one-time-only for new apps) using templates.
cpl setup-app -a my-app-staging

# Build and push image with auto-tagging, e.g., "my-app-staging:1_456".
cpl build-image -a my-app-staging --commit 456

# Prepare database.
cpl run:detached -a my-app-staging --image latest -- rails db:prepare

# Deploy latest image.
cpl deploy-image -a my-app-staging

# Open app in browser.
cpl open -a my-app-staging
```

Then for promoting code upgrades:

```sh
# Build and push new image with sequential tagging, e.g., "my-app-staging:2".
cpl build-image -a my-app-staging

# Or build and push new image with sequential tagging and commit SHA, e.g., "my-app-staging:2_ABC".
cpl build-image -a my-app-staging --commit ABC

# Run database migrations (or other release tasks) with latest image, while app is still running on previous image.
# This is analogous to the release phase.
cpl run:detached -a my-app-staging --image latest -- rails db:migrate

# Deploy latest image.
cpl deploy-image -a my-app-staging
```

### Review Special Gems

Make sure to review "special" gems which might be related to Heroku, e.g.:

- `rails_autoscale_agent`. It's specific to Heroku, so it must be removed.
- `puma_worker_killer`. In general, it's unnecessary on Control Plane, as Kubernetes containers will restart on their
  own logic and may not restart at all if everything is ok.
- `rack-timeout`. It could possibly be replaced with Control Plane's `timeout` option.

You can use the `CONTROLPLANE` env var to separate the gems, e.g.:

```ruby
# Gemfile
group :staging, :production do
  gem "rack-timeout"

  unless ENV.key?("CONTROLPLANE")
    gem "rails_autoscale_agent"
    gem "puma_worker_killer"
  end
end
```

### Create a Minimum Bootable Config

You can try to create a minimum bootable config to migrate parts of your app gradually. To do that, follow these steps:

1. Rename the existing `application.yml` file to some other name (e.g., `application.old.yml`)
2. Create a new **minimal** `application.yml` file, e.g.:

   ```yaml
   SECRET_KEY_BASE: "123"
   # This should be enabled for `rails s`, not `rails assets:precompile`.
   # DATABASE_URL: postgres://localhost:5432/dbname
   # RAILS_SERVE_STATIC_FILES: "true"

   # You will add whatever env vars are required here later.
   ```

3. Try running `RAILS_ENV=production CONTROLPLANE=true rails assets:precompile`
   (theoretically, this should work without any additional env vars)
4. Fix whatever code needs to be fixed and add missing env vars
   (the fewer env vars are needed, the cleaner the `Dockerfile` will be)
5. Enable the `DATABASE_URL` and `RAILS_SERVE_STATIC_FILES` env vars
6. Try running `RAILS_ENV=production CONTROLPLANE=true rails s`
7. Fix whatever code needs to be fixed and add required env vars to `application.yml`
8. Try running your **production** entrypoint command, e.g.,
   `RAILS_ENV=production RACK_ENV=production CONTROLPLANE=true puma -C config/puma.rb`
9. Fix whatever code needs to be fixed and add required env vars to `application.yml`

Now you should have a minimal bootable config.

Then you can temporarily set the `LOG_LEVEL=debug` env var and disable unnecessary services to help with the process,
e.g.:

```yaml
DISABLE_SPRING: "true"
SCOUT_MONITOR: "false"
RACK_TIMEOUT_SERVICE_TIMEOUT: "0"
```

## Create the Review App Process

Add an entry for review apps to the `.controlplane/controlplane.yml` file. By adding a `match_if_app_name_starts_with`
key with the value `true`, any app that starts with the entry's name will use this config. Doing this allows you to
configure an entry for, e.g., `my-app-review`, and then create review apps starting with that name (e.g.,
`my-app-review-1234`, `my-app-review-5678`, etc.). Here's an example:

```yaml
my-app-review:
  <<: *common
  match_if_app_name_starts_with: true
  setup:
    - gvc
    - redis
    - memcached
    - rails
    - sidekiq
```

In your CI scripts, you can create a review app using some identifier (e.g., the number of the PR on GitHub).

```sh
# On CircleCI, you can use `echo $CIRCLE_PULL_REQUEST | grep -Eo '[0-9]+$'` to extract the number of the PR.
PR_NUM=$(... extract the number of the PR here ...)
echo "export APP_NAME=my-app-review-$PR_NUM" >> $BASH_ENV

# Only create the app if it doesn't exist yet, as we may have multiple triggers for the review app
# (such as when a PR gets updated).
if ! cpl exists -a ${APP_NAME}; then
  cpl setup-app -a ${APP_NAME}
  echo "export NEW_APP=true" >> $BASH_ENV
fi

# The `NEW_APP` env var that we exported above can be used to either reset or migrate the database before deploying.
if [ -n "${NEW_APP}" ]; then
  cpl run:detached 'LOG_LEVEL=warn rails db:reset' -a ${APP_NAME} --image latest
else
  cpl run:detached 'LOG_LEVEL=warn rails db:migrate_and_wait_replica' -a ${APP_NAME} --image latest
fi
```

Then follow the same steps for the initial deployment or code upgrades.

### Database for Review Apps

For the review app resources, these should be handled as env vars in the template for the app
(`.controlplane/templates/gvc.yml`), e.g.:

```yaml
- name: DATABASE_URL
  value: postgres://postgres:XXXXXXXX@cpln-XXXX-staging.XXXXXX.us-east-1.rds.amazonaws.com:5432/APP_GVC
```

Notice that `APP_GVC` is the app name, which is used as the database name on RDS, so that each review app gets its own
database on the one RDS instance used for all review apps, which would be, e.g., `my-app-review-1234`.
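A small sketch of that substitution (assuming, as the template above suggests, that the `APP_GVC` token is replaced with the app name when a template is applied; the host name here is made up):

```ruby
# APP_GVC in a template is replaced with the app name, so each review app
# points at its own database on the shared RDS instance.
template = "postgres://postgres:XXXXXXXX@staging-host:5432/APP_GVC"
app_name = "my-app-review-1234"
database_url = template.gsub("APP_GVC", app_name)
# => "postgres://postgres:XXXXXXXX@staging-host:5432/my-app-review-1234"
```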
### Redis and Memcached for Review Apps

So long as no persistence is needed for Redis and Memcached, we have templates for workloads that should be sufficient
for review apps in the `templates/` directory of this repository. Using these templates results in considerable cost
savings compared to paying for the resources on Heroku.

```yaml
- name: MEMCACHE_SERVERS
  value: memcached.APP_GVC.cpln.local
- name: REDIS_URL
  value: redis://redis.APP_GVC.cpln.local:6379
```

## Deploy to Production

Only try deploying to production once staging and review apps are working well.

For simplicity, keep add-ons running on Heroku initially. You could move over the database to RDS first. However, it's a
bit simpler to isolate any differences in cost and performance by first moving over your compute to Control Plane.

Ensure that your Control Plane compute is in the AWS region `US-EAST-1`; otherwise, you'll have noticeable extra latency
with your calls to resources. You might also have egress charges from Control Plane.

Use the `cpl promote-app-from-upstream` command to promote the staging app to production.
data/docs/tips.md
ADDED
@@ -0,0 +1,177 @@

# Tips

1. [GVCs vs. Orgs](#gvcs-vs-orgs)
2. [RAM](#ram)
3. [Remote IP](#remote-ip)
4. [ENV Values](#env-values)
5. [CI](#ci)
6. [Memcached](#memcached)
7. [Sidekiq](#sidekiq)
   - [Quieting Non-Critical Workers During Deployments](#quieting-non-critical-workers-during-deployments)
   - [Setting Up a Pre Stop Hook](#setting-up-a-pre-stop-hook)
   - [Setting Up a Liveness Probe](#setting-up-a-liveness-probe)
8. [Useful Links](#useful-links)

## GVCs vs. Orgs

- A "GVC" roughly corresponds to a Heroku "app."
- Images are available at the org level.
- Multiple GVCs within an org can use the same image.
- You can have different images within a GVC and even within a workload. This flexibility is one of the key differences
  compared to Heroku apps.

## RAM

Any workload replica that reaches the max memory is terminated and restarted. You can configure alerts for workload
restarts and the percentage of memory used in the Control Plane UX.

Here are the steps for configuring an alert for the percentage of memory used:

1. Navigate to the workload that you want to configure the alert for
2. Click "Metrics" on the left menu to go to Grafana
3. On Grafana, go to the alerting page by clicking on the alert icon in the sidebar
4. Click on "New alert rule"
5. In the "Set a query and alert condition" section, select "Grafana managed alert"
6. There should be a default query named `A`
7. Change the data source of the query to `metrics`
8. Click "Code" on the top right of the query and enter
   `mem_used{workload="workload_name"} / mem_reserved{workload="workload_name"} * 100`
   (replace `workload_name` with the name of the workload)
9. There should be a default expression named `B`, with the type `Reduce`, the function `Last`, and the input `A` (this
   ensures that we're getting the last data point of the query)
10. There should be a default expression named `C`, with the type `Threshold`, and the input `B` (this is where you
    configure the condition for firing the alert, e.g., `IS ABOVE 95`)
11. You can then preview the alert and check if it's firing or not based on the example time range of the query
12. In the "Alert evaluation behavior" section, you can configure how often the alert should be evaluated and for how
    long the condition should be true before firing (for example, you might want the alert only to be fired if the
    percentage has been above `95` for more than 20 seconds)
13. In the "Add details for your alert" section, fill out the name, folder, group, and summary for your alert
14. In the "Notifications" section, you can configure a label for the alert if you're using a custom notification
    policy, but there should be a default root route for all alerts
15. Once you're done, save and exit in the top right of the page
16. Click "Contact points" on the top menu
17. Edit the `grafana-default-email` contact point and add the email where you want to receive notifications
18. You should now receive notifications for the alert in your email

![Grafana alert](assets/grafana-alert.png)

The steps for configuring an alert for workload restarts are almost identical, but the code for the query would be
`container_restarts`.

For more information on Grafana alerts, see: https://grafana.com/docs/grafana/latest/alerting/

## Remote IP

The actual remote IP of the workload container is in the 127.0.0.x network, so that will be the value of the
`REMOTE_ADDR` env var.

However, Control Plane additionally sets the `x-forwarded-for` and `x-envoy-external-address` headers (and others - see:
https://docs.controlplane.com/concepts/security#headers). On Rails, the `ActionDispatch::RemoteIp` middleware should
pick those up and automatically populate `request.remote_ip`.

So `REMOTE_ADDR` should not be used directly, only `request.remote_ip`.
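As a plain-Ruby sketch of what the middleware does (simplified; the real `ActionDispatch::RemoteIp` also filters trusted proxies and guards against spoofed values):

```ruby
# The socket-level REMOTE_ADDR is the local proxy (127.0.0.x), while the real
# client address arrives in the x-forwarded-for header set by Control Plane.
def client_ip(env)
  forwarded = env["HTTP_X_FORWARDED_FOR"].to_s
  return env["REMOTE_ADDR"] if forwarded.empty?

  # Simplified: take the left-most address in the forwarding chain.
  forwarded.split(",").first.strip
end

client_ip("REMOTE_ADDR" => "127.0.0.6")
# => "127.0.0.6"
client_ip("REMOTE_ADDR" => "127.0.0.6",
          "HTTP_X_FORWARDED_FOR" => "203.0.113.9, 127.0.0.6")
# => "203.0.113.9"
```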
## ENV Values

You can store ENV values used by a container (within a workload) within Control Plane at the following levels:

1. Workload Container
2. GVC

For your "review apps," it is convenient to have simple ENVs stored in plain text in your source code. You will want to
keep some ENVs, like Rails' `SECRET_KEY_BASE`, out of your source code. For staging and production apps, you will set
these values directly at the GVC or workload levels, so none of these ENV values are committed to the source code.

For storing ENVs in the source code, we can use a level of indirection so that you can store an ENV value in your source
code like `cpln://secret/my-app-review-env-secrets.SECRET_KEY_BASE` and then have the secret value stored at the org
level, which applies to your GVCs mapped to that org.

Here is how you do this:

1. In the upper left "Manage Org" menu, click on "Secrets"
2. Create a secret with `Secret Type: Dictionary` (e.g., `my-secrets`) and add the secret env vars there
3. In the upper left "Manage GVC" menu, click on "Identities"
4. Create an identity (e.g., `my-identity`)
5. Navigate to the workload that you want to associate with the identity created
6. Click "Identity" on the left menu and select the identity created
7. In the lower left "Access Control" menu, click on "Policies"
8. Create a policy with `Target Kind: Secret` and add a binding with the `reveal` permission for the identity created
9. Use `cpln://secret/...` in the app to access the secret env vars (e.g., `cpln://secret/my-secrets.SOME_VAR`)

## CI

**Note:** Docker builds much slower on Apple Silicon, so try configuring CI to build the images when using Apple
hardware.

Make sure to create a profile on CI before running any `cpln` or `cpl` commands.

```sh
CPLN_TOKEN=...
cpln profile create default --token ${CPLN_TOKEN}
```

Also, log in to the Control Plane Docker repository if building and pushing an image.

```sh
cpln image docker-login
```

## Memcached

On the workload container for Memcached (using the `memcached:alpine` image), configure the command with the args
`-l 0.0.0.0`.

To do this:

1. Navigate to the workload container for Memcached
2. Click "Command" on the top menu
3. Add the args and save

![Memcached](assets/memcached.png)

## Sidekiq

### Quieting Non-Critical Workers During Deployments

To avoid locks in migrations, we can quiet non-critical workers during deployments. Doing this early enough in the CI
allows all workers to finish jobs gracefully before deploying the new image.

There's no need to unquiet the workers, as that will happen automatically after deploying the new image.

```sh
cpl run 'rails runner "Sidekiq::ProcessSet.new.each { |w| w.quiet! unless w[%q(hostname)].start_with?(%q(criticalworker.)) }"' -a my-app
```
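The quieting predicate in the one-liner above is pure, so it can be shown on its own (a readable expansion for illustration, not code from the gem):

```ruby
# A worker is quieted unless its hostname marks it as critical.
def quiet_worker?(hostname)
  !hostname.start_with?("criticalworker.")
end

quiet_worker?("worker-1.local")         # => true
quiet_worker?("criticalworker.1.local") # => false

# With the sidekiq gem loaded, the deploy script then runs:
#   Sidekiq::ProcessSet.new.each { |w| w.quiet! if quiet_worker?(w["hostname"]) }
```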
### Setting Up a Pre Stop Hook

By setting up a pre stop hook in the lifecycle of the workload container for Sidekiq, which sends "QUIET" to the
workers, we can ensure that all workers will finish jobs gracefully before Control Plane stops the replica. That also
works nicely for multiple replicas.

A couple of notes:

- We can't match on the process name, because the process name is `ruby`, not `sidekiq` (hence the `-f` flag, which
  matches against the full command line).
- We need to add a space after `sidekiq`; otherwise, it sends `TSTP` to the `sidekiqswarm` process as well, and for some
  reason, that doesn't work.

So with `^` and `\s`, we guarantee it's sent only to worker processes.

```sh
pkill -TSTP -f ^sidekiq\s
```

To do this:

1. Navigate to the workload container for Sidekiq
2. Click "Lifecycle" on the top menu
3. Add the command and args below "Pre Stop Hook" and save

![Sidekiq pre stop hook](assets/sidekiq-pre-stop-hook.png)

### Setting Up a Liveness Probe

To set up a liveness probe on port 7433, see: https://github.com/arturictus/sidekiq_alive

## Useful Links

- For best practices for the app's Dockerfile, see: https://lipanski.com/posts/dockerfile-ruby-best-practices
- For migrating from Heroku Postgres to RDS, see: https://pelle.io/posts/hetzner-rds-postgres
data/examples/circleci.yml
CHANGED
@@ -8,8 +8,8 @@ build-staging:
       - image: cimg/ruby:3.1-node
     resource_class: large
     environment:
-      CPLN_ORG:
-      APP_NAME:
+      CPLN_ORG: my-org
+      APP_NAME: my-app-staging
     steps:
       - checkout
       - setup_remote_docker:
@@ -21,14 +21,13 @@ build-staging:
             cpln profile create default --token ${CPLN_TOKEN} --org ${CPLN_ORG} --gvc ${APP_NAME}
             cpln image docker-login

-
-            sudo ln -s ~/heroku-to-control-plane/cpl /usr/local/bin/cpl
+            gem install cpl
       - run:
           name: Containerize and push image
           command: cpl build-image -a ${APP_NAME}
       - run:
           name: Database tasks
-          command: cpl run:detached
+          command: cpl run:detached -a ${APP_NAME} --image latest -- rails db:migrate
       - run:
           name: Deploy image
           command: cpl deploy-image -a ${APP_NAME}
@@ -43,7 +42,7 @@ build-review-app:
       - image: cimg/ruby:3.1-node
     resource_class: large
     environment:
-      CPLN_ORG: my-org
+      CPLN_ORG: my-org
     steps:
       - checkout
       - setup_remote_docker:
@@ -65,7 +64,7 @@ build-review-app:
       - run:
           name: Provision review app if needed
           command: |
-            if ! cpl
+            if ! cpl exists -a ${APP_NAME}; then
              cpl setup-app -a ${APP_NAME}
              echo "export NEW_APP=true" >> $BASH_ENV
            fi
@@ -77,9 +76,9 @@ build-review-app:
           name: Database tasks
           command: |
            if [ -n "${NEW_APP}" ]; then
-             cpl run:detached
+             cpl run:detached -a ${APP_NAME} --image latest -- LOG_LEVEL=warn rails db:reset
            else
-             cpl run:detached
+             cpl run:detached -a ${APP_NAME} --image latest -- LOG_LEVEL=warn rails db:migrate
            fi
       - run:
           name: Deploy image
data/examples/controlplane.yml
CHANGED
@@ -1,47 +1,49 @@
+# Keys beginning with "cpln_" correspond to your settings in Control Plane.
+
 aliases:
   common: &common
-    #
-    # Production apps will use a different
-    # keys beginning with CPLN correspond to your settings in Control Plane
+    # Organization name for staging (customize to your needs).
+    # Production apps will use a different organization, specified below, for security.
     cpln_org: my-org-staging

-    # Example apps use only location.
-    # TODO
+    # Example apps use only one location. Control Plane offers the ability to use multiple locations.
+    # TODO: Allow specification of multiple locations.
     default_location: aws-us-east-2

     # Configure the workload name used as a template for one-off scripts, like a Heroku one-off dyno.
     one_off_workload: rails

-    # Workloads that are application itself and are using application
+    # Workloads that are for the application itself and are using application Docker images.
     app_workloads:
       - rails
       - sidekiq

-    # Additional "service type" workloads, using non-application
+    # Additional "service type" workloads, using non-application Docker images.
     additional_workloads:
       - redis
       - postgres
       - memcached

-    # Configure the workload name used when maintenance mode is on (defaults to
+    # Configure the workload name used when maintenance mode is on (defaults to "maintenance")
     maintenance_workload: maintenance

 apps:
   my-app-staging:
-    # Use the values from the common section above
+    # Use the values from the common section above.
     <<: *common
   my-app-review:
     <<: *common
-    #
-    #
+    # If `match_if_app_name_starts_with` is `true`, then use this config for app names starting with this name,
+    # e.g., "my-app-review-pr123", "my-app-review-anything-goes", etc.
     match_if_app_name_starts_with: true
   my-app-production:
     <<: *common
-    # Use a different
+    # Use a different organization for production.
     cpln_org: my-org-production
-    # Allows running command
+    # Allows running the command `cpl promote-app-from-upstream -a my-app-production`
+    # to promote the staging app to production.
     upstream: my-app-staging
   my-app-other:
     <<: *common
-    #
+    # You can specify a different `Dockerfile` relative to the `.controlplane/` directory (defaults to "Dockerfile").
     dockerfile: ../some_other/Dockerfile
data/lib/command/base.rb
CHANGED
@@ -210,7 +210,7 @@ module Command

       # Or special string to indicate no image available
       if matching_items.empty?
-        "#{app_name}:#{NO_IMAGE_AVAILABLE}"
+        name_only ? "#{app_name}:#{NO_IMAGE_AVAILABLE}" : nil
       else
         latest_item = matching_items.max_by { |item| extract_image_number(item["name"]) }
         name_only ? latest_item["name"] : latest_item
data/lib/command/cleanup_stale_apps.rb
CHANGED
@@ -51,6 +51,7 @@ module Command

         images = cp.image_query(app_name)["items"].filter { |item| item["name"].start_with?("#{app_name}:") }
         image = latest_image_from(images, app_name: app_name, name_only: false)
+        next unless image

         created_date = DateTime.parse(image["created"])
         diff_in_days = (now - created_date).to_i
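The two changes above work together: with `name_only: false`, `latest_image_from` now returns `nil` instead of a placeholder string when no image exists, and the caller skips those apps with `next unless image`. A simplified sketch of the changed branch (the gem's `extract_image_number` version logic is reduced to a plain name comparison here):

```ruby
NO_IMAGE_AVAILABLE = "NO_IMAGE_AVAILABLE"

# Simplified from the diff: return the latest image item, a name string, or
# nil/placeholder when there is no image for the app.
def latest_image_from(matching_items, app_name:, name_only: true)
  if matching_items.empty?
    # Callers that need the full item get nil and can `next unless image`.
    name_only ? "#{app_name}:#{NO_IMAGE_AVAILABLE}" : nil
  else
    latest_item = matching_items.max_by { |item| item["name"] }
    name_only ? latest_item["name"] : latest_item
  end
end

latest_image_from([], app_name: "my-app")                   # => "my-app:NO_IMAGE_AVAILABLE"
latest_image_from([], app_name: "my-app", name_only: false) # => nil
```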
data/lib/command/ps_start.rb
CHANGED
@@ -42,9 +42,7 @@ module Command

       @workloads.reverse_each do |workload|
         step("Waiting for workload '#{workload}' to be ready", retry_on_failure: true) do
-          cp.
-          item.dig("status", "ready")
-        end
+          cp.workload_deployments_ready?(workload, expected_status: true)
         end
       end
     end
data/lib/command/ps_stop.rb
CHANGED
@@ -6,7 +6,7 @@ module Command
     OPTIONS = [
       app_option(required: true),
       workload_option,
-      wait_option("workload to be
+      wait_option("workload to not be ready")
     ].freeze
     DESCRIPTION = "Stops workloads in app"
     LONG_DESCRIPTION = <<~DESC
@@ -41,10 +41,8 @@ module Command
       progress.puts

       @workloads.each do |workload|
-        step("Waiting for workload '#{workload}' to be
-        cp.
-        !item.dig("status", "ready")
-        end
+        step("Waiting for workload '#{workload}' to not be ready", retry_on_failure: true) do
+          cp.workload_deployments_ready?(workload, expected_status: false)
         end
       end
     end
data/lib/command/ps_wait.rb
ADDED
@@ -0,0 +1,35 @@
# frozen_string_literal: true

module Command
  class PsWait < Base
    NAME = "ps:wait"
    OPTIONS = [
      app_option(required: true),
      workload_option
    ].freeze
    DESCRIPTION = "Waits for workloads in app to be ready after re-deployment"
    LONG_DESCRIPTION = <<~DESC
      - Waits for workloads in app to be ready after re-deployment
    DESC
    EXAMPLES = <<~EX
      ```sh
      # Waits for all workloads in app.
      cpl ps:wait -a $APP_NAME

      # Waits for a specific workload in app.
      cpl ps:wait -a $APP_NAME -w $WORKLOAD_NAME
      ```
    EX

    def call
      @workloads = [config.options[:workload]] if config.options[:workload]
      @workloads ||= config[:app_workloads] + config[:additional_workloads]

      @workloads.reverse_each do |workload|
        step("Waiting for workload '#{workload}' to be ready", retry_on_failure: true) do
          cp.workload_deployments_ready?(workload, expected_status: true)
        end
      end
    end
  end
end
data/lib/core/controlplane.rb
CHANGED
@@ -149,6 +149,26 @@ class Controlplane # rubocop:disable Metrics/ClassLength
     api.workload_deployments(workload: workload, gvc: gvc, org: org)
   end

+  def workload_deployment_version_ready?(version, next_version, expected_status:)
+    return false unless version["workload"] == next_version
+
+    version["containers"]&.all? do |_, container|
+      ready = container.dig("resources", "replicas") == container.dig("resources", "replicasReady")
+      expected_status == true ? ready : !ready
+    end
+  end
+
+  def workload_deployments_ready?(workload, expected_status:)
+    deployments = fetch_workload_deployments(workload)["items"]
+    deployments.all? do |deployment|
+      next_version = deployment.dig("status", "expectedDeploymentVersion")
+
+      deployment.dig("status", "versions")&.all? do |version|
+        workload_deployment_version_ready?(version, next_version, expected_status: expected_status)
+      end
+    end
+  end
+
   def workload_set_image_ref(workload, container:, image:)
     cmd = "cpln workload update #{workload} #{gvc_org}"
     cmd += " --set spec.containers.#{container}.image=/org/#{config.org}/image/#{image}"