cpl 0.1.0

Files changed (57)
  1. checksums.yaml +7 -0
  2. data/.github/workflows/ci.yml +60 -0
  3. data/.gitignore +14 -0
  4. data/.rspec +1 -0
  5. data/.rubocop.yml +16 -0
  6. data/CHANGELOG.md +5 -0
  7. data/CONTRIBUTING.md +12 -0
  8. data/Gemfile +7 -0
  9. data/Gemfile.lock +104 -0
  10. data/LICENSE +21 -0
  11. data/README.md +318 -0
  12. data/Rakefile +11 -0
  13. data/bin/cpl +6 -0
  14. data/cpl +15 -0
  15. data/cpl.gemspec +42 -0
  16. data/docs/commands.md +219 -0
  17. data/docs/troubleshooting.md +6 -0
  18. data/examples/circleci.yml +106 -0
  19. data/examples/controlplane.yml +44 -0
  20. data/lib/command/base.rb +177 -0
  21. data/lib/command/build_image.rb +25 -0
  22. data/lib/command/config.rb +33 -0
  23. data/lib/command/delete.rb +50 -0
  24. data/lib/command/env.rb +21 -0
  25. data/lib/command/exists.rb +23 -0
  26. data/lib/command/latest_image.rb +18 -0
  27. data/lib/command/logs.rb +29 -0
  28. data/lib/command/open.rb +33 -0
  29. data/lib/command/promote_image.rb +27 -0
  30. data/lib/command/ps.rb +40 -0
  31. data/lib/command/ps_restart.rb +34 -0
  32. data/lib/command/ps_start.rb +34 -0
  33. data/lib/command/ps_stop.rb +34 -0
  34. data/lib/command/run.rb +106 -0
  35. data/lib/command/run_detached.rb +148 -0
  36. data/lib/command/setup.rb +59 -0
  37. data/lib/command/test.rb +26 -0
  38. data/lib/core/config.rb +81 -0
  39. data/lib/core/controlplane.rb +128 -0
  40. data/lib/core/controlplane_api.rb +51 -0
  41. data/lib/core/controlplane_api_cli.rb +10 -0
  42. data/lib/core/controlplane_api_direct.rb +42 -0
  43. data/lib/core/scripts.rb +34 -0
  44. data/lib/cpl/version.rb +5 -0
  45. data/lib/cpl.rb +139 -0
  46. data/lib/main.rb +5 -0
  47. data/postgres.md +436 -0
  48. data/redis.md +112 -0
  49. data/script/generate_commands_docs +60 -0
  50. data/templates/gvc.yml +13 -0
  51. data/templates/identity.yml +2 -0
  52. data/templates/memcached.yml +23 -0
  53. data/templates/postgres.yml +31 -0
  54. data/templates/rails.yml +25 -0
  55. data/templates/redis.yml +20 -0
  56. data/templates/sidekiq.yml +28 -0
  57. metadata +312 -0
data/postgres.md ADDED
@@ -0,0 +1,436 @@
# Migrating a Postgres database from Heroku infrastructure

One of the biggest problems when moving off Heroku infrastructure is migrating the database. And
while migration is rather easy between Heroku-hosted databases or between non-Heroku-hosted databases (Postgres has
native tools for that), it is not easily possible between Heroku and anything outside Heroku,
**as Heroku doesn't allow setting up WAL replication for Postgres**. Period. No replication of any kind out of
Heroku infrastructure for Postgres.

Previously, it was reportedly possible to ask Heroku support to manually set up WAL log shipping, but they
no longer do that. Which leaves only 2 options:

### Option A: the dump and restore way

Nothing problematic here in general **if you can withstand a long application maintenance window**.
You basically need to:

1. enable maintenance mode
2. stop the application completely and wait for all database writes to finish
3. dump the database on Heroku
4. restore the database on RDS
5. start the application
6. disable maintenance mode

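The steps above can be sketched with standard Heroku and Postgres tooling. This is a minimal sketch, not a drop-in script: `my-app`, the `web=0 worker=0` dyno formation, and `$RDS_DATABASE_URL` are placeholders for your own values.

```sh
#!/bin/sh
set -e

# enable maintenance mode and stop all dynos so writes cease
heroku maintenance:on -a my-app
heroku ps:scale web=0 worker=0 -a my-app

# capture a fresh backup on Heroku and download it
heroku pg:backups:capture -a my-app
curl -o latest.dump "$(heroku pg:backups:url -a my-app)"

# restore on RDS in 4 parallel jobs
pg_restore --no-acl --no-owner -j 4 -d "$RDS_DATABASE_URL" latest.dump

# bring the application back up
heroku ps:scale web=1 worker=1 -a my-app
heroku maintenance:off -a my-app
```

The whole maintenance window is the sum of the capture, download, and restore times, which is exactly why this option only suits small databases.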
If the database is small or it is a hobby app, look no further.
However, this is not acceptable for 99% of production apps, as their databases are huge and the maintenance
window should be as short as possible.

Rough timing for a 1 GB database can be (your mileage may vary):

- 2.5h creating the Heroku backup
- 0.5h downloading the backup to EC2
- 13h restoring the backup on RDS (in 4 threads)

**~16h total time, all of it maintenance downtime**

### Option B: the logical replication way

Several logical replication solutions exist for Postgres - Slony, Bucardo, Londiste, Mimeo - but when
you dig deeper, the only viable and up-to-date solution for migrating from Heroku to RDS is Bucardo.

The migration process with Bucardo looks as follows:

1. set up Bucardo on a dedicated EC2 instance
2. dump the Heroku database schema and restore it on RDS - rather fast, as there is no data yet
3. start Bucardo replication - this will install triggers and a special schema in your database
4. wait for replication to catch up - this may take a long time, but the application can keep working as usual
5. enable maintenance mode
6. stop the application completely and wait for replication to finally finish
7. switch the database connection strings
8. start the application
9. disable maintenance mode

Maintenance downtime here can be minutes, not hours or days as in Option A, but there are no free lunches - the process is more complex.

Rough timing for a 1 GB database can be (your mileage may vary):

- whatever setup time, no hurry
- 1.5 days for onetimecopy (in 1 thread) - DDL changes not allowed, but no downtime
- 1-2 min for the database switch, as maintenance downtime

**~2 days total time, ~1-2 min maintenance downtime**

### Some considerations

- DDL changes should be "frozen and postponed" while Bucardo replication is running. There is also a way to stop
replication, update the DDL in both databases, and restart replication; however, a no-DDL window of a day or two seems a
reasonable restriction for production databases versus the potential for errors.

- there is a "speed up" option - restore a dump (with threads) and then run Bucardo to catch up only the deltas - but it
looks unnecessary, as the speed gain is minimal versus the potential for errors. It will not speed things up dramatically, but will just
save a couple of hours of non-maintenance time (which would most probably be spent on the command line anyway), so it is not worth doing.

## Before replication

### Application code changes

Before everything else, we need to recheck the database schema and ensure **that every table has a primary key (PK)
in place**, as Bucardo uses PKs for replication.

> NOTE: theoretically Bucardo can work with unique indexes as well, but having a PK on each table is easy and avoids
unnecessary complications

So, please stop here and do whatever is needed for your application.

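As a quick way to find offending tables, you can list every ordinary table without a primary key straight from the Postgres catalogs. A sketch - it assumes the `heroku` entry from the `pg_service.conf` setup described below, but any connection string works:

```sh
# list ordinary tables ('r') that have no primary-key constraint ('p')
psql service=heroku <<'SQL'
SELECT n.nspname, c.relname
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
  AND n.nspname NOT IN ('pg_catalog', 'information_schema')
  AND NOT EXISTS (
    SELECT 1 FROM pg_constraint con
    WHERE con.conrelid = c.oid AND con.contype = 'p'
  );
SQL
```

An empty result means you are good to proceed.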
### Choosing the database location and EC2 location

All Heroku Postgres databases for the US location run in AWS `us-east-1`. Control Plane, on the other hand,
recommends `us-east-2` as a default. So we need to choose:

- either a simple setup - main database in `us-east-2`
- or a bit more complex - main database in `us-east-2`, replica in `us-east-1` (which can be removed later)

The latter makes sense if your application supports working with replicas. Then read-only `SELECT` queries go to the
read replica and `INSERT/UPDATE` queries go to the main write-enabled database.
This way we keep most read latency to a minimum.

In any case, it is worth considering developing such a mode in the application if you want to scale to more than 1 region.

### Create a new EC2 instance to use for database replication

- it is better to place it in the same AWS region where the RDS database will be (most probably `us-east-2`)
- choose Ubuntu as the OS
- use a bigger instance, e.g. `m6i.4xlarge` - the price doesn't matter much, as the instance will not run for long
- if you will be copying the backup via this instance, choose sufficient space for both the OS and the backup, plus some free space
- create a security group `public-access` with all inbound and outbound traffic allowed. This will also be handy for
the database setup. If you need tighter access controls, that's up to you
- generate a new key pair and save it locally (e.g. `bucardo.pem`); it will be used for the SSH connection. Do not forget to
set the correct permissions, e.g. `chmod 400 ~/Downloads/bucardo.pem`

Once the instance is running on AWS, you can connect to it via SSH as follows:
```sh
ssh ubuntu@1.2.3.4 -i ~/Downloads/bucardo.pem
```
### Creating the RDS instance

- check the `public` box
- pick the `public-access` security group (or whatever you need)
- if you will be restoring from a backup, you can choose a temporarily bigger instance, e.g. `db.r6i.4xlarge`, and
downgrade it later
- if you will be using Bucardo onetimecopy, it is ok to select any instance type you need, as Bucardo does the copying
in a single thread
- it is fairly easy to switch the database instance type afterwards, and it requires only minimal downtime
- storage space needs a careful pick, as it is a) not possible to shrink and b) the auto-expanding storage which AWS
offers (and which is enabled by default) can block database modifications for quite long periods (days)

### Running commands in detached mode on the EC2 instance

Some commands that run on EC2 may take a long time, and we may want to disconnect from the SSH session while the command
continues running, then reconnect later and see the progress - preferably without installing
special tools. This can be accomplished with the `screen` command, e.g.:

```sh
# this will start a background process and return to the terminal (which can be closed)
screen -dmL ...your command...

# check if screen is still running in the background
ps aux | grep -i screen

# see the output log
cat screenlog.0
```

### Installing Postgres and Bucardo on EC2

Now that both RDS and EC2 are running, we can install local Postgres and Bucardo itself. Let's install
Postgres 13 first. It may be possible to install the latest Postgres, but 13 seems the best choice at the moment.

```sh
# update all your packages
sudo apt update
sudo apt upgrade -y

# add the postgres repository
sudo sh -c 'echo "deb [arch=$(dpkg --print-architecture)] http://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" > /etc/apt/sources.list.d/pgdg.list'

# add the postgres repository key
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -

# update again
sudo apt-get update

# install packages
sudo apt-get -y install make postgresql-13 postgresql-client-13 postgresql-plperl-13 libdbd-pg-perl libdbix-safe-perl
```

The Postgres Perl language (`plperl`) as well as the DBD and DBIx packages are needed for Bucardo.

Now that all dependencies are installed, we can install Bucardo from the latest tarball.

```sh
# install Bucardo itself
wget https://bucardo.org/downloads/Bucardo-5.6.0.tar.gz
tar xzf Bucardo-5.6.0.tar.gz
cd Bucardo-5.6.0
perl Makefile.PL
sudo make install

# create dirs and fix permissions
sudo mkdir /var/run/bucardo
sudo mkdir /var/log/bucardo
sudo chown ubuntu /var/run/bucardo/
sudo chown ubuntu /var/log/bucardo
```

After that, Bucardo is installed as a package and runnable, but we still need to configure everything.
Let's start with Postgres. As this is a temporary installation (only for the period of replication),
it is rather safe to `trust` localhost connections (or set it up another way if you prefer).

For this, we need to edit `pg_hba.conf` as follows:
```sh
# edit pg config to make postgres trusted
sudo nano /etc/postgresql/13/main/pg_hba.conf
```

In that file, change the following lines to `trust`:
```sh
# in pg_hba.conf
local all postgres trust
local all all trust
```

and restart Postgres to pick up the changes:
```sh
# restart postgres
sudo systemctl restart postgresql
```

And finally we can install the Bucardo service database on local Postgres and see if everything runs.

```sh
# this will create the local bucardo "service" database
bucardo install

# for option 3 pick `postgres` as the user
# for option 4 pick `postgres` as the database from which the initial connection should be attempted
```

:tada: :tada: :tada: now we have local Postgres and Bucardo running, and we can continue with the external services
configuration.

### Configuring external (Heroku, RDS) database connections

For this, we will use `pg_service.conf`. It does not work in all places, and sometimes we will need to provide
connection properties manually, but for many commands it is very useful.

```sh
# create and edit .pg_service.conf
touch ~/.pg_service.conf
nano ~/.pg_service.conf
```

```ini
# ~/.pg_service.conf

[heroku]
host=ec2-xxx.compute-1.amazonaws.com
port=5432
dbname=xxx
user=xxx
password=xxx

[rds]
host=xxx.us-east-2.rds.amazonaws.com
port=5432
dbname=xxx
user=postgres
password=xxx
```

Test connectivity to the databases with:
```sh
psql service=heroku -c '\l+'
psql service=rds -c '\l+'
```

You will see all databases set up on each server (and their sizes):
```console
Name | Owner | Encoding | Collate | Ctype | Access privileges | Size | Tablespace | Description
-------------------+----------+----------+-------------+-------------+-----------------------+-----------+------------+--------------------------------------------
my-production-db | xxx | UTF8 | en_US.UTF-8 | en_US.UTF-8 | | 821 GB | pg_default |
postgres | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | | 8205 kB | pg_default | default administrative connection database
rdsadmin | rdsadmin | UTF8 | en_US.UTF-8 | en_US.UTF-8 | rdsadmin=CTc/rdsadmin+| No Access | pg_default |
 | | | | | rdstopmgr=Tc/rdsadmin | | |
template0 | rdsadmin | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/rdsadmin +| 8033 kB | pg_default | unmodifiable empty database
 | | | | | rdsadmin=CTc/rdsadmin | | |
template1 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +| 8205 kB | pg_default | default template for new databases
 | | | | | postgres=CTc/postgres | | |
(5 rows)
```

After both databases are connectable, we can proceed with the replication itself.

## Performing replication

:fire: :fire: :fire: **IMPORTANT: from this step on, DDL changes are not allowed** :fire: :fire: :fire:

### Application changes

- temporarily freeze all DDL changes until replication finishes
- temporarily disable all background jobs and services that can be stopped. This will lessen the database
load and especially, where possible, decrease database write operations, which narrows the replication pipe as well.

### Dump and restore the initial schema

This step doesn't take much time (as it is only the database schema, without data),
but it is definitely handy to save all output and closely check it for any errors.

```sh
# save heroku schema to `schema.sql`
pg_dump service=heroku --schema-only --no-acl --no-owner -v > schema.sql

# restore `schema.sql` on RDS
psql service=rds < schema.sql
```

### Configure Bucardo replication

Now that both databases are connectable and have the same schema, we can tell Bucardo what it needs to replicate:
```sh
# add databases
bucardo add db from_db dbhost=xxx dbport=5432 dbuser=xxx dbpass=xxx dbname=xxx
bucardo add db to_db dbhost=xxx dbport=5432 dbuser=postgres dbpass=xxx dbname=xxx

# mark all tables and sequences for replication
bucardo add all tables
bucardo add all sequences
```
Here, Bucardo will connect to the databases and collect object metadata for replication.
After that, we can add the sync itself:

```sh
# add sync itself
bucardo add sync mysync tables=all dbs=from_db,to_db onetimecopy=1
```

The most important option here is `onetimecopy=1`, which tells Bucardo to perform the initial data copy
(when the sync starts). The copy is done *in a single thread* by creating a pipe (via Bucardo) as follows:
```sql
-- on heroku
COPY xxx TO STDOUT
-- on rds
COPY xxx FROM STDIN
```

### Run sync

And now, when everything is ready, we can push the button and go for a long :coffee: or maybe even a weekend.

```sh
# start the Bucardo sync daemon
bucardo start
```

It is also ok to disconnect from SSH, as the Bucardo daemon will continue working in the background.

### Monitor status

To check the progress of the sync (from Bucardo's perspective):
```sh
# overall progress of all syncs
bucardo status

# single sync progress
bucardo status mysync
```

To check what's going on in the databases directly:
```sh
# Bucardo adds a comment to its queries, so it is fairly easy to grep for them
psql service=heroku -c 'select * from pg_stat_activity' | grep -i bucardo
psql service=rds -c 'select * from pg_stat_activity' | grep -i bucardo
```

### After replication catches up, but before the database switch

1. Do a sanity check of the data in the tables. E.g. check:

- table `COUNT`
- min/max of PK ids where applicable
- min/max of `created_at/updated_at` where applicable

2. For the checks in point 1 it is possible to use our checker script that will do this automatically (TODO)

3. Refresh materialized views manually (as they are not synced by Bucardo).
Just go to `psql` and `REFRESH MATERIALIZED VIEW ...`

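For the table-count part of the sanity check, a simple shell loop over both `pg_service.conf` entries is enough. A sketch - the table names here are examples, substitute your own:

```sh
# compare row counts between Heroku and RDS for a few important tables
for t in users orders payments; do
  heroku_count=$(psql service=heroku -t -A -c "SELECT count(*) FROM $t")
  rds_count=$(psql service=rds -t -A -c "SELECT count(*) FROM $t")
  echo "$t: heroku=$heroku_count rds=$rds_count"
done
```

Note that while the application is still writing to Heroku, the counts will never match exactly; they should only be close and converging.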
## Switch databases

:fire: :fire: :fire: **This is the final, non-reversible step** :fire: :fire: :fire:
Before this point, all changes can be easily removed or reversed and the database can stay on Heroku as it was before;
after this switch it is not possible (at least not easily).

So... after the sync catches up, you basically need to:

1. start maintenance mode on heroku with `heroku maintenance:on`
2. scale down and stop all the dynos
3. wait a bit for all queries to finish and for replication to catch up the latest changes
4. detach heroku postgres from `DATABASE_URL`
5. set `DATABASE_URL` to the RDS url (plaintext now)
6. start the dynos
7. wait for their readiness with `heroku ps:wait`
8. stop maintenance with `heroku maintenance:off`
9. :fire: **Now we are fully on RDS, so DDL changes are allowed** :fire:
10. gradually re-enable all background jobs and services that were temporarily stopped

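Steps 1-8 roughly correspond to the following commands. A sketch only: `my-app`, the dyno formation, the RDS URL, and the attachment name `DATABASE` are placeholders - check the real attachment name with `heroku addons -a my-app` first.

```sh
heroku maintenance:on -a my-app
heroku ps:scale web=0 worker=0 -a my-app

# detach the Heroku Postgres attachment so DATABASE_URL becomes a plain config var
heroku addons:detach DATABASE -a my-app
heroku config:set DATABASE_URL='postgres://postgres:xxx@xxx.us-east-2.rds.amazonaws.com:5432/xxx' -a my-app

heroku ps:scale web=1 worker=1 -a my-app
heroku ps:wait -a my-app
heroku maintenance:off -a my-app
```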
## After the switch

Now that we are running on RDS, there is only a single task left on Heroku - make a final backup of the database and save it.

```sh
# capture the backup (will take a lot of time), can be disconnected
heroku pg:backups:capture -a example-app

# get the url of the backup
heroku pg:backups:url bXXXX -a example-app
```

Now you can download it locally or copy it to S3 via EC2, as it will take quite some time and traffic.

```sh
# download the dump to EC2
screen -dmL time curl 'your-url' -o latest.dump

# install the aws cli (in the way recommended by Amazon)
# ...TODO...

# configure aws credentials
aws configure

# check S3 access
aws s3 ls

# upload to S3
screen -dmL time aws s3 cp latest.dump s3://my-dumps-bucket/ --region us-east-1
```

# Refs

https://bucardo.org

https://stackoverflow.com/questions/22264753/linux-how-to-install-dbdpg-module

https://gist.github.com/luizomf/1a7994cf4263e10dce416a75b9180f01

https://www.waytoeasylearn.com/learn/bucardo-installation/

https://gist.github.com/knasyrov/97301801733a31c60521

https://www.cloudnation.nl/inspiratie/blogs/migrating-heroku-postgresql-to-aurora-rds-with-almost-minimal-downtime

https://blog.porter.run/migrating-postgres-from-heroku-to-rds/

https://www.endpointdev.com/blog/2017/06/amazon-aws-upgrades-to-postgres-with/

https://aws.amazon.com/blogs/database/migrating-legacy-postgresql-databases-to-amazon-rds-or-aurora-postgresql-using-bucardo/
data/redis.md ADDED
@@ -0,0 +1,112 @@
# Migrating a Redis database from Heroku infrastructure

**General considerations:**

1. Heroku uses self-signed TLS certificates, which are not verifiable. This needs special handling - setting
TLS verification to `none` - otherwise most apps are not able to connect.

2. We are moving to a private Redis that doesn't have a public URL, so we have to do it from a Control Plane GVC container.

The tool that satisfies those criteria is [Redis-RIOT](https://developer.redis.com/riot/riot-redis/index.html).

**Heroku Redis:**

As the Redis-RIOT docs say, the master Redis should have keyspace notifications set to `KA` to be able to do live replication.
To do that:

```sh
heroku redis:keyspace-notifications -c KA -a my-app
```

Connect to the heroku Redis CLI:
```sh
heroku redis:cli -a my-app
```

**Control Plane Redis:**

Connect to the Control Plane Redis CLI:

```sh
# open a cpl interactive shell
cpl run bash -a my-app

# install the redis CLI if you don't have it in your Docker image
apt-get update
apt-get install redis -y

# connect to the local cloud Redis
redis-cli -u MY_CONTROL_PLANE_REDIS_URL
```

**Useful Redis CLI commands:**

Quick-check the number of keys:
```
info keyspace

# Keyspace
db0:keys=9496,expires=2941,avg_ttl=77670114535
```
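To compare both sides quickly without opening two interactive sessions, `DBSIZE` works too. A sketch - the URL variables are placeholders, and `--insecure` (redis-cli 6+) skips certificate verification for Heroku's self-signed TLS:

```sh
# Heroku side (rediss:// URL with a self-signed certificate)
redis-cli -u "$HEROKU_REDIS_URL" --insecure dbsize

# Control Plane side (run from the GVC container)
redis-cli -u "$CONTROL_PLANE_REDIS_URL" dbsize
```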

**Create a Control Plane sync workload**

```
name: riot-redis

suspend: true
min/max scale: 1/1

firewall: all firewalls off

image: fieldengineering/riot-redis

CPU: 1 Core
RAM: 1 GB

command args:
--info
-u
rediss://...your_heroku_redis_url...
--tls-verify=NONE
replicate
-h
...your_control_plane_redis_host...
--mode
live
```

**Sync process**

1. open a 1st terminal window with the heroku redis CLI, check the keys qty
2. open a 2nd terminal window with the controlplane redis CLI, check the keys qty
3. start the sync container
4. open the logs with `cpl logs -a my-app -w riot-redis`
5. re-check the synced keys qty again
6. stop the sync container

Result:
```
Setting commit interval to default value (1)
Setting commit interval to default value (1)
Job: [SimpleJob: [name=snapshot-replication]] launched with the following parameters: [{}]
Executing step: [snapshot-replication]
Scanning 0% ╺━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0/8891 (0:00:00 / ?)Job: [SimpleJob: [name=scan-reader]] launched with the following parameters: [{}]
Executing step: [scan-reader]
Scanning 61% ━━━━━━━━━━━━━━━━╸━━━━━━━━━━ 5460/8891 (0:00:07 / 0:00:04) 780.0/sStep: [scan-reader] executed in 7s918ms
Closing with items still in queue
Job: [SimpleJob: [name=scan-reader]] completed with the following parameters: [{}] and the following status: [COMPLETED] in 7s925ms
Scanning 100% ━━━━━━━━━━━━━━━━━━━━━━━━━━━ 9482/9482 (0:00:11 / 0:00:00) 862.0/s
Step: [snapshot-replication] executed in 13s333ms
Executing step: [verification]
Verifying 0% ╺━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0/8942 (0:00:00 / ?)Job: [SimpleJob: [name=RedisItemReader]] launched with the following parameters: [{}]
Executing step: [RedisItemReader]
Verifying 2% ╺━━━━━━━━━━━━━━━━━ 220/8942 (0:00:00 / 0:00:19) ?/s >0 T0 ≠Step: [RedisItemReader] executed in 7s521ms
Closing with items still in queue
Job: [SimpleJob: [name=RedisItemReader]] completed with the following parameters: [{}] and the following status: [COMPLETED] in 7s522ms
Verification completed - all OK
Step: [verification] executed in 7s776ms
Job: [SimpleJob: [name=snapshot-replication]] completed with the following parameters: [{}] and the following status: [COMPLETED] in 21s320ms
```

Total sync time: ~1 min
data/script/generate_commands_docs ADDED
@@ -0,0 +1,60 @@
#!/usr/bin/env ruby
# frozen_string_literal: true

require_relative "../lib/cpl"

commands_str_arr = []

commands = Command::Base.all_commands
commands.each do |_command_key, command_class|
  next if command_class::HIDE

  name = command_class::NAME
  usage = command_class::USAGE.empty? ? name : command_class::USAGE
  options = command_class::OPTIONS
  long_description = command_class::LONG_DESCRIPTION
  examples = command_class::EXAMPLES

  command_str = "### `#{name}`\n\n"
  command_str += "#{long_description.strip}\n\n"

  if examples.empty?
    options_str_arr = []
    options.each do |option|
      next unless option[:params][:required]

      options_str_arr.push("#{option[:params][:aliases][0]} $#{option[:params][:banner]}")
    end
    options_str = options_str_arr.join(" ")

    command_str += "```sh\ncpl #{usage}"
    command_str += " #{options_str}" unless options_str.empty?
    command_str += "\n```"
  else
    command_str += examples.strip
  end

  commands_str_arr.push(command_str)
end

commands_str = commands_str_arr.join("\n\n")

commands_path = "#{__dir__}/../docs/commands.md"
commands_data =
  <<~HEREDOC
    <!-- NOTE: This file is automatically generated by running `script/generate_commands_docs`. Do NOT edit it manually. -->

    ### Common Options

    ```
    -a XXX, --app XXX app ref on Control Plane (GVC)
    ```

    This `-a` option is used in most of the commands and will pick all other app configurations from the project-specific
    `.controlplane/controlplane.yml` file.

    ### Commands

    #{commands_str}
  HEREDOC
File.binwrite(commands_path, commands_data)
data/templates/gvc.yml ADDED
@@ -0,0 +1,13 @@
kind: gvc
name: APP_GVC
spec:
  env:
    - name: MEMCACHE_SERVERS
      value: memcached.APP_GVC.cpln.local
    - name: REDIS_URL
      value: redis://redis.APP_GVC.cpln.local:6379
    - name: DATABASE_URL
      value: postgres://postgres:password123@postgres.APP_GVC.cpln.local:5432/APP_GVC
  staticPlacement:
    locationLinks:
      - /org/APP_ORG/location/APP_LOCATION
data/templates/identity.yml ADDED
@@ -0,0 +1,2 @@
kind: identity
name: APP_GVC-identity
data/templates/memcached.yml ADDED
@@ -0,0 +1,23 @@
kind: workload
name: memcached
spec:
  type: standard
  containers:
    - name: memcached
      cpu: 3m
      memory: 10Mi
      args:
        - '-l'
        - 0.0.0.0
      image: 'memcached:alpine'
      ports:
        - number: 11211
          protocol: tcp
  defaultOptions:
    autoscaling:
      metric: latency
      maxScale: 1
    capacityAI: false
  firewallConfig:
    internal:
      inboundAllowType: same-gvc
data/templates/postgres.yml ADDED
@@ -0,0 +1,31 @@
kind: workload
name: postgres
spec:
  type: standard
  containers:
    - name: postgres
      cpu: 50m
      memory: 200Mi
      env:
        - name: PGUSER
          value: postgres
        - name: POSTGRES_PASSWORD
          value: password123
        - name: POSTGRES_USER
          value: postgres
      image: 'postgres:13.8-alpine'
      ports:
        - number: 5432
          protocol: tcp
      volumes:
        - path: /var/lib/postgresql/data
          recoveryPolicy: retain
          uri: 'scratch://postgres-vol'
  defaultOptions:
    autoscaling:
      metric: latency
      maxScale: 1
    capacityAI: false
  firewallConfig:
    internal:
      inboundAllowType: same-gvc