dokku-compose 0.7.0 → 0.9.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (3)
  1. package/README.md +71 -410
  2. package/dist/index.js +191 -91
  3. package/package.json +2 -2
package/README.md CHANGED
@@ -1,10 +1,10 @@
  # dokku-compose
 
- - 📄 **Declarative** -- Define your entire Dokku server in a single YAML file
+ - 📄 **Declarative** -- One YAML file for your entire Dokku server. Git-trackable, reviewable, reproducible
  - 🔁 **Idempotent** -- Run it twice, nothing changes. Safe to re-run anytime
  - 👀 **Dry-run** -- Preview every command before it touches your server
  - 🔍 **Diff** -- See exactly what's out of sync before applying changes
- - 🏗️ **Modular** -- One file per Dokku namespace. Easy to read, extend, and debug
+ - 📤 **Export** -- Reverse-engineer an existing server into a config file
 
  [![Tests](https://github.com/guess/dokku-compose/actions/workflows/tests.yml/badge.svg)](https://github.com/guess/dokku-compose/actions/workflows/tests.yml)
  [![License: MIT](https://img.shields.io/github/license/guess/dokku-compose)](LICENSE)
@@ -16,396 +16,84 @@
 
  ## Why
 
- Configuring a Dokku server means running dozens of imperative commands in the right order. Miss one and your deploy breaks. Change servers and you're starting over.
-
- `dokku-compose` replaces that with a single YAML file. Like Docker Compose, but for Dokku.
-
- ## A Complete Example
-
- ```yaml
- dokku:
-   version: "0.35.12"
-
- plugins:
-   postgres:
-     url: https://github.com/dokku/dokku-postgres.git
-     version: "1.41.0"
-   redis:
-     url: https://github.com/dokku/dokku-redis.git
-
- services:
-   api-postgres:
-     plugin: postgres
-     version: "17-3.5"
-   shared-cache:
-     plugin: redis
-
- networks:
-   - backend-net
-
- domains:
-   - example.com
-
- nginx:
-   client-max-body-size: "50m"
-
- logs:
-   max-size: "50m"
-
- apps:
-   api:
-     build:
-       context: apps/api
-       dockerfile: docker/prod/api/Dockerfile
-     env:
-       APP_ENV: production
-       APP_SECRET: "${SECRET_KEY}"
-     domains:
-       - api.example.com
-     links:
-       - api-postgres
-       - shared-cache
-     networks:
-       - backend-net
-     ports:
-       - "https:4001:4000"
-     ssl:
-       certfile: certs/example.com/fullchain.pem
-       keyfile: certs/example.com/privkey.pem
-     storage:
-       - "/var/lib/dokku/data/storage/api/uploads:/app/uploads"
-     nginx:
-       client-max-body-size: "15m"
-       proxy-read-timeout: "120s"
-     checks:
-       wait-to-retire: 60
-       disabled:
-         - worker
-
-   worker:
-     links:
-       - api-postgres
-       - shared-cache
-     checks: false
-     proxy:
-       enabled: false
- ```
-
- ## Quick Start
-
- ```bash
- # Install
- npm install -g dokku-compose
-
- # Create a starter config
- dokku-compose init myapp
-
- # Preview what will happen
- dokku-compose up --dry-run
-
- # Apply configuration
- dokku-compose up
-
- # Or apply to a remote server over SSH
- DOKKU_HOST=my-server.example.com dokku-compose up
- ```
+ [Dokku](https://dokku.com) is a battle-tested, single-server PaaS — and one of the best platforms for self-hosting. But configuring it means running dozens of imperative commands in the right order. Miss one and your deploy breaks. Change servers and you're starting over.
 
- Requires Node.js >= 18. See the [Installation Reference →](docs/reference/install.md) for requirements and remote execution details.
+ AI agents can generate and deploy code better than ever, but infrastructure as shell history can't be diffed, reviewed, or reproduced.
 
- ## Features
-
- Features are listed roughly in execution order — the sequence `dokku-compose up` follows.
-
- ### Dokku Version Check
-
- Declare the expected Dokku version. A warning is logged if the running version doesn't match.
-
- ```yaml
- dokku:
-   version: "0.35.12"
- ```
-
- ```
- [dokku ] WARN: Version mismatch: running 0.34.0, config expects 0.35.12
- ```
-
- Dokku must be pre-installed on the target server.
-
- ### Application Management
-
- Create and destroy Dokku apps idempotently. If the app already exists, it's skipped.
-
- ```yaml
- apps:
-   api:
-     # per-app configuration goes here
- ```
+ `dokku-compose` makes Dokku declarative. One YAML file. Git-trackable. Agent-friendly.
 
- [Application Management Reference →](docs/reference/apps.md)
+ Like Docker Compose, but for Dokku.
 
- ### Environment Variables
-
- Set config vars per app or globally. All declared vars are converged — orphaned vars are automatically unset.
-
- ```yaml
- apps:
-   api:
-     env:
-       APP_ENV: production
-       APP_SECRET: "${SECRET_KEY}"
- ```
-
- [Environment Variables Reference →](docs/reference/config.md)
-
- ### Build
-
- Configure Dockerfile builds: build context, Dockerfile path, app.json location, and build args. Key names follow docker-compose conventions.
-
- ```yaml
- apps:
-   api:
-     build:
-       context: apps/api
-       dockerfile: docker/prod/api/Dockerfile
-       app_json: docker/prod/api/app.json
-       args:
-         SENTRY_AUTH_TOKEN: "${SENTRY_AUTH_TOKEN}"
- ```
-
- [Build Reference →](docs/reference/builder.md)
-
- ### Docker Options
-
- Add custom Docker options per build phase (`build`, `deploy`, `run`). Options are converged with targeted add/remove — only changed options are modified, preserving entries managed by other resources (e.g. `--link`, `--build-arg`).
-
- ```yaml
- apps:
-   api:
-     docker_options:
-       deploy:
-         - "--shm-size 256m"
-       run:
-         - "--ulimit nofile=12"
- ```
-
- [Docker Options Reference →](docs/reference/docker_options.md)
-
- ### Networks
-
- Create shared Docker networks and configure per-app network properties.
-
- ```yaml
- networks:
-   - backend-net
-
- apps:
-   api:
-     networks: # attach-post-deploy
-       - backend-net
-     network: # other network:set properties
-       attach_post_create:
-         - init-net
-       initial_network: custom-bridge
-       bind_all_interfaces: true
-       tld: internal
- ```
-
- `down --force` clears network settings and destroys declared networks.
-
- [Networks Reference →](docs/reference/network.md)
-
- ### Domains
-
- Configure custom domains per app or globally.
-
- ```yaml
- domains:
-   - example.com
+ ## Quick Start
 
- apps:
-   api:
-     domains:
-       - api.example.com
-       - api.example.co
+ ```bash
+ npm install -g dokku-compose
  ```
 
- [Domains Reference](docs/reference/domains.md)
+ Requires Node.js >= 20. See the [Installation Reference](docs/reference/install.md) for details.
 
- ### Port Mappings
+ ### 1. Export your existing server
 
- Map external ports to container ports using `SCHEME:HOST_PORT:CONTAINER_PORT` format.
+ Point at your Dokku server and generate a config file from its current state:
 
- ```yaml
- apps:
-   api:
-     ports:
-       - "https:4001:4000"
+ ```bash
+ DOKKU_HOST=my-server.example.com dokku-compose export -o dokku-compose.yml
  ```
 
- Comparison is order-insensitive. `down --force` clears port mappings before destroying the app.
-
- [Port Mappings Reference →](docs/reference/ports.md)
+ This produces a complete `dokku-compose.yml` reflecting everything on the server: apps, services, domains, env vars, and more.
 
- ### SSL Certificates
+ ### 2. See what's in sync
 
- Specify cert and key file paths. Idempotent — skips if SSL is already enabled. Set to `false` to remove an existing certificate.
-
- ```yaml
- apps:
-   api:
-     ssl: # add cert (idempotent)
-       certfile: certs/example.com/fullchain.pem
-       keyfile: certs/example.com/privkey.pem
-   worker:
-     ssl: false # remove cert
+ ```bash
+ dokku-compose diff
  ```
 
- [SSL Certificates Reference →](docs/reference/certs.md)
-
- ### Proxy
-
- Enable or disable the proxy for an app, and optionally select the proxy implementation (nginx, caddy, haproxy, traefik). All operations are idempotent.
-
- ```yaml
- apps:
-   api:
-     proxy: true # shorthand enable
-
-   worker:
-     proxy: false # shorthand disable
-
-   caddy-app:
-     proxy:
-       enabled: true
-       type: caddy # proxy:set caddy-app caddy
  ```
+ app: api
+ (in sync)
+ app: worker
+ ~ env: 1 → 2 items
+ + ports: (not set on server)
 
- [Proxy Reference →](docs/reference/proxy.md)
-
- ### Persistent Storage
-
- Declare persistent bind mounts for an app. Mounts are fully converged on each `up` run — new mounts are added, mounts removed from YAML are unmounted, and existing mounts are skipped.
-
- ```yaml
- apps:
-   api:
-     storage:
-       - "/var/lib/dokku/data/storage/api/uploads:/app/uploads"
+ 1 resource(s) out of sync.
  ```
 
- Host directories must exist before mounting. On `down`, declared mounts are unmounted.
-
- [Storage Reference →](docs/reference/storage.md)
-
- ### Nginx Configuration
+ ### 3. Preview changes
 
- Set any nginx property supported by Dokku via a key-value map — per-app or globally.
-
- ```yaml
- nginx: # global defaults
-   client-max-body-size: "50m"
-
- apps:
-   api:
-     nginx: # per-app overrides
-       client-max-body-size: "15m"
-       proxy-read-timeout: "120s"
+ ```bash
+ dokku-compose up --dry-run
  ```
 
- [Nginx Reference →](docs/reference/nginx.md)
-
- ### Zero-Downtime Checks
-
- Configure zero-downtime deploy check properties, disable checks entirely, or control per process type. Properties are idempotent — current values are checked before setting.
-
- ```yaml
- apps:
-   api:
-     checks:
-       wait-to-retire: 60
-       attempts: 5
-       disabled:
-         - worker
-       skipped:
-         - cron
-   worker:
-     checks: false # disable all checks (causes downtime)
  ```
+ [worker ] Setting 2 env var(s)... (dry run)
+ [worker ] Setting ports http:5000:5000... (dry run)
 
- [Zero-Downtime Checks Reference →](docs/reference/checks.md)
-
- ### Log Management
-
- Configure log retention and shipping globally or per-app.
-
- ```yaml
- logs: # global defaults
-   max-size: "50m"
-   vector-sink: "console://?encoding[codec]=json"
-
- apps:
-   api:
-     logs: # per-app overrides
-       max-size: "10m"
+ # Commands that would run:
+ dokku config:set --no-restart worker APP_ENV=production WORKER_COUNT=****
+ dokku ports:set worker http:5000:5000
  ```
 
- [Log Management Reference →](docs/reference/logs.md)
-
- ### Plugins and Services
-
- Install plugins and declare service instances. Services are created before apps during `up` and linked on demand.
-
- ```yaml
- plugins:
-   postgres:
-     url: https://github.com/dokku/dokku-postgres.git
-     version: "1.41.0"
+ ### 4. Apply
 
- services:
-   api-postgres:
-     plugin: postgres
-
- apps:
-   api:
-     links:
-       - api-postgres
+ ```bash
+ dokku-compose up
  ```
 
- [Plugins and Services Reference →](docs/reference/plugins.md)
+ Running `up` again produces no changes — every step checks current state before acting.
 
  ## Commands
 
  | Command | Description |
  |---------|-------------|
- | `dokku-compose init [app...]` | Create a starter `dokku-compose.yml` |
- | `dokku-compose up` | Create/update apps and services to match config |
- | `dokku-compose down --force` | Destroy apps and services (requires `--force`) |
- | `dokku-compose ps` | Show status of configured apps |
- | `dokku-compose validate` | Validate config file offline (no server contact) |
- | `dokku-compose export` | Reverse-engineer server state into YAML |
+ | `dokku-compose up [apps...]` | Create/update apps and services to match config |
+ | `dokku-compose down --force [apps...]` | Destroy apps and services (requires `--force`) |
  | `dokku-compose diff` | Show what's out of sync between config and server |
+ | `dokku-compose export` | Export current server state to YAML |
+ | `dokku-compose ps [apps...]` | Show status of configured apps |
+ | `dokku-compose validate` | Validate config file offline (no server contact) |
+ | `dokku-compose init [apps...]` | Create a starter `dokku-compose.yml` |
 
- ### `ps` — Show Status
-
- Queries each configured app and prints its deploy status:
-
- ```
- $ dokku-compose ps
- api      running
- worker   running
- web      not created
- ```
-
- ### `down` — Tear Down
-
- Destroys apps and their linked services. Requires `--force` as a safety measure. For each app, services are unlinked first, then the app is destroyed. Service instances from the top-level `services:` section are destroyed after all apps.
-
- ```bash
- dokku-compose down --force myapp # Destroy one app and its services
- dokku-compose down --force # Destroy all configured apps
- ```
-
- ## Options
+ ### Options
 
  | Option | Description |
  |--------|-------------|
@@ -415,18 +103,30 @@ dokku-compose down --force # Destroy all configured apps
  | `--fail-fast` | Stop on first error (default: continue to next app) |
  | `--remove-orphans` | Destroy services and networks not in config |
  | `--verbose` | Show git-style +/- diff (diff command only) |
- | `--help` | Show usage |
- | `--version` | Show version |
 
- ## Examples
+ ## Features
 
- ```bash
- dokku-compose up # Configure all apps
- dokku-compose up myapp # Configure one app
- dokku-compose up --dry-run # Preview changes
- dokku-compose down --force myapp # Destroy an app
- dokku-compose ps # Show status
- ```
+ All features are idempotent — running `up` twice produces no changes.
+
+ | Feature | Description | Reference |
+ |---------|-------------|-----------|
+ | Apps | Create and destroy Dokku apps | [apps](docs/reference/apps.md) |
+ | Environment Variables | Set config vars per app or globally, with full convergence | [config](docs/reference/config.md) |
+ | Build | Dockerfile path, build context, app.json, build args | [builder](docs/reference/builder.md) |
+ | Docker Options | Custom Docker options per phase (build/deploy/run) | [docker_options](docs/reference/docker_options.md) |
+ | Networks | Create shared Docker networks, attach to apps | [network](docs/reference/network.md) |
+ | Domains | Configure domains per app or globally | [domains](docs/reference/domains.md) |
+ | Port Mappings | Map external ports to container ports | [ports](docs/reference/ports.md) |
+ | SSL Certificates | Add or remove SSL certs | [certs](docs/reference/certs.md) |
+ | Proxy | Enable/disable proxy, select implementation | [proxy](docs/reference/proxy.md) |
+ | Storage | Persistent bind mounts with full convergence | [storage](docs/reference/storage.md) |
+ | Nginx | Set any nginx property per app or globally | [nginx](docs/reference/nginx.md) |
+ | Zero-Downtime Checks | Configure deploy checks, disable per process type | [checks](docs/reference/checks.md) |
+ | Log Management | Log retention and vector sink configuration | [logs](docs/reference/logs.md) |
+ | Plugins | Install Dokku plugins declaratively | [plugins](docs/reference/plugins.md) |
+ | Postgres | Postgres services with optional S3 backups | [postgres](docs/reference/postgres.md) |
+ | Redis | Redis service instances | [redis](docs/reference/redis.md) |
+ | Service Links | Link postgres/redis services to apps | [plugins](docs/reference/plugins.md#linking-services-to-apps-appsapplinks) |
 
  ## Execution Modes
 
@@ -434,52 +134,11 @@ dokku-compose ps # Show status
  # Run remotely over SSH (recommended)
  DOKKU_HOST=my-server.example.com dokku-compose up
 
- # Run on the Dokku server itself (use DOKKU_HOST=localhost for best compatibility)
+ # Run on the Dokku server itself
 DOKKU_HOST=localhost dokku-compose up
  ```
 
- When `DOKKU_HOST` is set, all Dokku commands are sent over SSH. This is the recommended mode — it works both remotely from your local machine and directly on the server. SSH mode avoids compatibility issues with Dokku's basher environment that can affect some commands when called directly from Node.js. SSH key access to the Dokku server is required.
-
- ## What `up` Does
-
- Idempotently ensures desired state, in order:
-
- 1. Check Dokku version (warn on mismatch)
- 2. Install missing plugins
- 3. Set global config (domains, env vars, nginx defaults)
- 4. Create shared networks
- 5. Create service instances (from top-level `services:`)
- 6. For each app:
-    - Create app (if not exists)
-    - Set domains, link/unlink services, attach networks
-    - Enable/disable proxy, set ports, add SSL, mount storage
-    - Configure nginx, checks, logs, env vars, build, and docker options
-
- Running `up` twice produces no changes — every step checks current state before acting.
-
- `up` is mostly additive. Removing a key (e.g. deleting a `ports:` block) won't remove the corresponding setting from Dokku. The exception is `links:`, which is fully declarative — services not in the list are unlinked. Use `down --force` to fully reset an app, or `--remove-orphans` to destroy services and networks no longer in config.
-
- ## Output
-
- ```
- [networks ] Creating backend-net... done
- [services ] Creating api-postgres (postgres 17-3.5)... done
- [services ] Creating api-redis (redis)... done
- [services ] Creating shared-cache (redis)... done
- [api      ] Creating app... done
- [api      ] Setting domains: api.example.com... done
- [api      ] Linking api-postgres... done
- [api      ] Linking api-redis... done
- [api      ] Linking shared-cache... done
- [api      ] Setting ports https:4001:4000... done
- [api      ] Adding SSL certificate... done
- [api      ] Mounting /var/lib/dokku/data/storage/api/uploads:/app/uploads... done
- [api      ] Setting nginx client-max-body-size=15m... done
- [api      ] Setting checks wait-to-retire=60... done
- [api      ] Setting 2 env var(s)... done
- [worker   ] Creating app... already configured
- [worker   ] Linking shared-cache... already configured
- ```
+ When `DOKKU_HOST` is set, all commands are sent over SSH. This is the recommended mode — it works both remotely and on the server. SSH key access to the Dokku server is required.
 
  ## Architecture
 
@@ -513,7 +172,9 @@ dokku-compose/
  │ │ ├── proxy.ts # dokku proxy:*
  │ │ ├── registry.ts # dokku registry:*
  │ │ ├── scheduler.ts # dokku scheduler:*
- │ │ ├── services.ts # Service instances, links, plugin scripts
+ │ │ ├── postgres.ts # dokku postgres:* (create, backup, export)
+ │ │ ├── redis.ts # dokku redis:* (create, export)
+ │ │ ├── links.ts # Service link resolution across plugins
  │ │ └── storage.ts # dokku storage:*
  │ ├── commands/
  │ │ ├── up.ts # up command orchestration
@@ -542,7 +203,7 @@ bun install
  bun test
 
  # Run a specific module's tests
- bun test src/modules/services.test.ts
+ bun test src/modules/postgres.test.ts
  ```
 
  Tests use [Bun's test runner](https://bun.sh/docs/cli/test) with a mocked `Runner` — no real Dokku server needed.
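The Postgres and Redis feature rows in the updated README correspond to a config-schema split: the old top-level `services:` section (where each service named its `plugin:`) is replaced by dedicated `postgres:` and `redis:` sections. A minimal migration sketch, assuming the 0.9.0 keys mirror the per-service fields (`version`, `image`, `backup`) defined by the `PostgresSchema`/`RedisSchema` in the `dist/index.js` diff below; the exact migration path is otherwise an assumption:

```yaml
# 0.7.0: one services: section, plugin named per service
services:
  api-postgres:
    plugin: postgres
    version: "17-3.5"
  shared-cache:
    plugin: redis

# 0.9.0: one top-level section per plugin; the plugin: key is gone
postgres:
  api-postgres:
    version: "17-3.5"
redis:
  shared-cache: {}
```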
package/dist/index.js CHANGED
@@ -79,12 +79,15 @@ var ServiceBackupSchema = z.object({
79
79
  bucket: z.string(),
80
80
  auth: ServiceBackupAuthSchema
81
81
  });
82
- var ServiceSchema = z.object({
83
- plugin: z.string(),
82
+ var PostgresSchema = z.object({
84
83
  version: z.string().optional(),
85
84
  image: z.string().optional(),
86
85
  backup: ServiceBackupSchema.optional()
87
86
  });
87
+ var RedisSchema = z.object({
88
+ version: z.string().optional(),
89
+ image: z.string().optional()
90
+ });
88
91
  var PluginSchema = z.object({
89
92
  url: z.string().url(),
90
93
  version: z.string().optional()
@@ -95,7 +98,8 @@ var ConfigSchema = z.object({
95
98
  }).optional(),
96
99
  plugins: z.record(z.string(), PluginSchema).optional(),
97
100
  networks: z.array(z.string()).optional(),
98
- services: z.record(z.string(), ServiceSchema).optional(),
101
+ postgres: z.record(z.string(), PostgresSchema).optional(),
102
+ redis: z.record(z.string(), RedisSchema).optional(),
99
103
  apps: z.record(z.string(), AppSchema),
100
104
  domains: z.union([z.array(z.string()), z.literal(false)]).optional(),
101
105
  env: EnvMapSchema.optional(),
@@ -805,7 +809,7 @@ async function exportNetworks(ctx) {
805
809
  return output.split("\n").map((s) => s.trim()).filter((s) => s && !s.startsWith("=====>") && !DOCKER_BUILTIN_NETWORKS.has(s));
806
810
  }
807
811
 
808
- // src/modules/services.ts
812
+ // src/modules/postgres.ts
809
813
  import { createHash as createHash2 } from "crypto";
810
814
  function backupHashKey(serviceName) {
811
815
  return "DOKKU_COMPOSE_BACKUP_HASH_" + serviceName.toUpperCase().replace(/-/g, "_");
@@ -813,22 +817,22 @@ function backupHashKey(serviceName) {
813
817
  function computeBackupHash(backup) {
814
818
  return createHash2("sha256").update(JSON.stringify(backup)).digest("hex");
815
819
  }
816
- async function ensureServices(ctx, services) {
820
+ async function ensurePostgres(ctx, services) {
817
821
  for (const [name, config] of Object.entries(services)) {
818
822
  logAction("services", `Ensuring ${name}`);
819
- const exists = await ctx.check(`${config.plugin}:exists`, name);
823
+ const exists = await ctx.check("postgres:exists", name);
820
824
  if (exists) {
821
825
  logSkip();
822
826
  continue;
823
827
  }
824
- const args = [`${config.plugin}:create`, name];
828
+ const args = ["postgres:create", name];
825
829
  if (config.image) args.push("--image", config.image);
826
830
  if (config.version) args.push("--image-version", config.version);
827
831
  await ctx.run(...args);
828
832
  logDone();
829
833
  }
830
834
  }
831
- async function ensureServiceBackups(ctx, services) {
835
+ async function ensurePostgresBackups(ctx, services) {
832
836
  for (const [name, config] of Object.entries(services)) {
833
837
  if (!config.backup) continue;
834
838
  logAction("services", `Configuring backup for ${name}`);
@@ -840,9 +844,9 @@ async function ensureServiceBackups(ctx, services) {
840
844
  continue;
841
845
  }
842
846
  const { schedule, bucket, auth } = config.backup;
843
- await ctx.run(`${config.plugin}:backup-deauth`, name);
847
+ await ctx.run("postgres:backup-deauth", name);
844
848
  await ctx.run(
845
- `${config.plugin}:backup-auth`,
849
+ "postgres:backup-auth",
846
850
  name,
847
851
  auth.access_key_id,
848
852
  auth.secret_access_key,
@@ -850,86 +854,172 @@ async function ensureServiceBackups(ctx, services) {
850
854
  auth.signature_version,
851
855
  auth.endpoint
852
856
  );
853
- await ctx.run(`${config.plugin}:backup-schedule`, name, schedule, bucket);
857
+ await ctx.run("postgres:backup-schedule", name, schedule, bucket);
854
858
  await ctx.run("config:set", "--global", `${hashKey}=${desiredHash}`);
855
859
  logDone();
856
860
  }
857
861
  }
858
- async function ensureAppLinks(ctx, app, desiredLinks, allServices) {
859
- const desiredSet = new Set(desiredLinks);
860
- for (const [serviceName, serviceConfig] of Object.entries(allServices)) {
861
- const isLinked = await ctx.check(`${serviceConfig.plugin}:linked`, serviceName, app);
862
- const isDesired = desiredSet.has(serviceName);
863
- if (isDesired && !isLinked) {
864
- logAction(app, `Linking ${serviceName}`);
865
- await ctx.run(`${serviceConfig.plugin}:link`, serviceName, app, "--no-restart");
866
- logDone();
867
- } else if (!isDesired && isLinked) {
868
- logAction(app, `Unlinking ${serviceName}`);
869
- await ctx.run(`${serviceConfig.plugin}:unlink`, serviceName, app, "--no-restart");
870
- logDone();
862
+ async function destroyPostgres(ctx, services) {
863
+ for (const [name] of Object.entries(services)) {
864
+ logAction("services", `Destroying ${name}`);
865
+ const exists = await ctx.check("postgres:exists", name);
866
+ if (!exists) {
867
+ logSkip();
868
+ continue;
871
869
  }
870
+ await ctx.run("postgres:destroy", name, "--force");
871
+ logDone();
872
872
  }
873
873
  }
874
- async function destroyAppLinks(ctx, app, links, allServices) {
875
- for (const serviceName of links) {
876
- const config = allServices[serviceName];
877
- if (!config) continue;
878
- const isLinked = await ctx.check(`${config.plugin}:linked`, serviceName, app);
879
- if (isLinked) {
880
- await ctx.run(`${config.plugin}:unlink`, serviceName, app, "--no-restart");
874
+ function parseBackupSchedule(cronLine) {
875
+ const parts = cronLine.trim().split(/\s+/);
876
+ if (parts.length < 8) return null;
877
+ const schedule = parts.slice(0, 5).join(" ");
878
+ const bucket = parts[parts.length - 1];
879
+ return { schedule, bucket };
880
+ }
881
+ async function exportPostgres(ctx) {
882
+ const services = {};
883
+ const listOutput = await ctx.query("postgres:list");
884
+ const lines = listOutput.split("\n").slice(1);
885
+ for (const line of lines) {
886
+ const name = line.trim().split(/\s+/)[0];
887
+ if (!name) continue;
888
+ const infoOutput = await ctx.query("postgres:info", name);
889
+ const versionMatch = infoOutput.match(/Version:\s+(\S+)/);
890
+ if (!versionMatch) continue;
891
+ const versionField = versionMatch[1];
892
+ const colonIdx = versionField.lastIndexOf(":");
893
+ const config = {};
894
+ if (colonIdx > 0) {
895
+ const image = versionField.slice(0, colonIdx);
896
+ const version2 = versionField.slice(colonIdx + 1);
897
+ if (image !== "postgres") config.image = image;
898
+ if (version2) config.version = version2;
899
+ } else {
900
+ config.version = versionField;
881
901
  }
902
+ try {
903
+ const cronOutput = await ctx.query("postgres:backup-schedule-cat", name);
904
+ if (cronOutput) {
905
+ const parsed = parseBackupSchedule(cronOutput);
906
+ if (parsed) {
907
+ config.backup = {
908
+ schedule: parsed.schedule,
909
+ bucket: parsed.bucket,
910
+ auth: {
911
+ access_key_id: "REPLACE_ME",
912
+ secret_access_key: "REPLACE_ME",
913
+ region: "REPLACE_ME",
914
+ signature_version: "REPLACE_ME",
915
+ endpoint: "REPLACE_ME"
916
+ }
917
+ };
918
+ }
919
+ }
920
+ } catch {
921
+ }
922
+ services[name] = config;
882
923
  }
924
+ return services;
883
925
  }
884
- async function destroyServices(ctx, services) {
926
+
927
+ // src/modules/redis.ts
928
+ async function ensureRedis(ctx, services) {
885
929
  for (const [name, config] of Object.entries(services)) {
930
+ logAction("services", `Ensuring ${name}`);
931
+ const exists = await ctx.check("redis:exists", name);
932
+ if (exists) {
933
+ logSkip();
934
+ continue;
935
+ }
936
+ const args = ["redis:create", name];
937
+ if (config.image) args.push("--image", config.image);
938
+ if (config.version) args.push("--image-version", config.version);
939
+ await ctx.run(...args);
940
+ logDone();
941
+ }
942
+ }
943
+ async function destroyRedis(ctx, services) {
944
+ for (const [name] of Object.entries(services)) {
886
945
  logAction("services", `Destroying ${name}`);
887
- const exists = await ctx.check(`${config.plugin}:exists`, name);
946
+ const exists = await ctx.check("redis:exists", name);
888
947
  if (!exists) {
889
948
  logSkip();
890
949
  continue;
891
950
  }
892
- await ctx.run(`${config.plugin}:destroy`, name, "--force");
951
+ await ctx.run("redis:destroy", name, "--force");
893
952
  logDone();
894
953
  }
895
954
  }
896
- var SERVICE_PLUGINS = ["postgres", "redis"];
897
- async function exportServices(ctx) {
955
+ async function exportRedis(ctx) {
898
956
  const services = {};
899
- const pluginOutput = await ctx.query("plugin:list");
900
- const installedPlugins = new Set(
901
- pluginOutput.split("\n").map((line) => line.trim().split(/\s+/)[0]).filter(Boolean)
902
- );
903
- for (const plugin of SERVICE_PLUGINS) {
904
- if (!installedPlugins.has(plugin)) continue;
905
- const listOutput = await ctx.query(`${plugin}:list`);
906
- const lines = listOutput.split("\n").slice(1);
907
- for (const line of lines) {
908
- const name = line.trim().split(/\s+/)[0];
909
- if (!name) continue;
910
- const infoOutput = await ctx.query(`${plugin}:info`, name);
911
- const versionMatch = infoOutput.match(/Version:\s+(\S+)/);
912
- if (!versionMatch) continue;
913
- const versionField = versionMatch[1];
914
- const colonIdx = versionField.lastIndexOf(":");
915
- const config = { plugin };
916
- if (colonIdx > 0) {
917
- const image = versionField.slice(0, colonIdx);
918
- const version2 = versionField.slice(colonIdx + 1);
919
- if (image !== plugin) config.image = image;
920
- if (version2) config.version = version2;
921
- } else {
- config.version = versionField;
- }
- services[name] = config;
+ const listOutput = await ctx.query("redis:list");
+ const lines = listOutput.split("\n").slice(1);
+ for (const line of lines) {
+ const name = line.trim().split(/\s+/)[0];
+ if (!name) continue;
+ const infoOutput = await ctx.query("redis:info", name);
+ const versionMatch = infoOutput.match(/Version:\s+(\S+)/);
+ if (!versionMatch) continue;
+ const versionField = versionMatch[1];
+ const colonIdx = versionField.lastIndexOf(":");
+ const config = {};
+ if (colonIdx > 0) {
+ const image = versionField.slice(0, colonIdx);
+ const version2 = versionField.slice(colonIdx + 1);
+ if (image !== "redis") config.image = image;
+ if (version2) config.version = version2;
+ } else {
+ config.version = versionField;
  }
+ services[name] = config;
  }
  return services;
  }
- async function exportAppLinks(ctx, app, services) {
+
+ // src/modules/links.ts
+ function resolveServicePlugin(name, config) {
+ if (config.postgres?.[name]) return { plugin: "postgres", config: config.postgres[name] };
+ if (config.redis?.[name]) return { plugin: "redis", config: config.redis[name] };
+ return void 0;
+ }
+ function allServices(config) {
+ const entries = [];
+ for (const name of Object.keys(config.postgres ?? {})) entries.push([name, "postgres"]);
+ for (const name of Object.keys(config.redis ?? {})) entries.push([name, "redis"]);
+ return entries;
+ }
+ async function ensureAppLinks(ctx, app, desiredLinks, config) {
+ const desiredSet = new Set(desiredLinks);
+ for (const [serviceName, plugin] of allServices(config)) {
+ const isLinked = await ctx.check(`${plugin}:linked`, serviceName, app);
+ const isDesired = desiredSet.has(serviceName);
+ if (isDesired && !isLinked) {
+ logAction(app, `Linking ${serviceName}`);
+ await ctx.run(`${plugin}:link`, serviceName, app, "--no-restart");
+ logDone();
+ } else if (!isDesired && isLinked) {
+ logAction(app, `Unlinking ${serviceName}`);
+ await ctx.run(`${plugin}:unlink`, serviceName, app, "--no-restart");
+ logDone();
+ }
+ }
+ }
+ async function destroyAppLinks(ctx, app, links, config) {
+ for (const serviceName of links) {
+ const resolved = resolveServicePlugin(serviceName, config);
+ if (!resolved) continue;
+ const isLinked = await ctx.check(`${resolved.plugin}:linked`, serviceName, app);
+ if (isLinked) {
+ await ctx.run(`${resolved.plugin}:unlink`, serviceName, app, "--no-restart");
+ }
+ }
+ }
+ async function exportAppLinks(ctx, app, config) {
  const linked = [];
- for (const [serviceName, config] of Object.entries(services)) {
- const isLinked = await ctx.check(`${config.plugin}:linked`, serviceName, app);
+ for (const [serviceName, plugin] of allServices(config)) {
+ const isLinked = await ctx.check(`${plugin}:linked`, serviceName, app);
  if (isLinked) linked.push(serviceName);
  }
  return linked;
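
> The link-reconciliation code above resolves each app's `links:` entries against the new top-level `postgres:` and `redis:` maps. A minimal config sketch of the shape this code expects (service and app names are illustrative, not from the source):

```yaml
postgres:
  mydb: {}        # reconciled via postgres:link / postgres:unlink

redis:
  cache: {}

apps:
  web:
    links:        # each entry must match a key under postgres: or redis:
      - mydb
      - cache
```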
@@ -976,8 +1066,9 @@ async function runUp(ctx, config, appFilter) {
  if (config.logs !== void 0) await ensureGlobalLogs(ctx, config.logs);
  if (config.nginx !== void 0) await ensureGlobalNginx(ctx, config.nginx);
  if (config.networks) await ensureNetworks(ctx, config.networks);
- if (config.services) await ensureServices(ctx, config.services);
- if (config.services) await ensureServiceBackups(ctx, config.services);
+ if (config.postgres) await ensurePostgres(ctx, config.postgres);
+ if (config.redis) await ensureRedis(ctx, config.redis);
+ if (config.postgres) await ensurePostgresBackups(ctx, config.postgres);
  for (const app of apps) {
  const appConfig = config.apps[app];
  if (!appConfig) continue;
@@ -987,8 +1078,8 @@ async function runUp(ctx, config, appFilter) {
  await reconcile(NetworkProps, ctx, app, appConfig.network);
  await reconcile(Proxy, ctx, app, appConfig.proxy?.enabled);
  await reconcile(Ports, ctx, app, appConfig.ports);
- if (config.services) {
- await ensureAppLinks(ctx, app, appConfig.links ?? [], config.services);
+ if (config.postgres || config.redis) {
+ await ensureAppLinks(ctx, app, appConfig.links ?? [], config);
  }
  await reconcile(Certs, ctx, app, appConfig.ssl);
  await reconcile(Storage, ctx, app, appConfig.storage);
@@ -1028,14 +1119,13 @@ async function runDown(ctx, config, appFilter, opts) {
  for (const app of apps) {
  const appConfig = config.apps[app];
  if (!appConfig) continue;
- if (config.services && appConfig.links) {
- await destroyAppLinks(ctx, app, appConfig.links, config.services);
+ if (appConfig.links && (config.postgres || config.redis)) {
+ await destroyAppLinks(ctx, app, appConfig.links, config);
  }
  await destroyApp(ctx, app);
  }
- if (config.services) {
- await destroyServices(ctx, config.services);
- }
+ if (config.postgres) await destroyPostgres(ctx, config.postgres);
+ if (config.redis) await destroyRedis(ctx, config.redis);
  if (config.networks) {
  for (const net of config.networks) {
  logAction("network", `Destroying ${net}`);
@@ -1088,8 +1178,10 @@ async function runExport(ctx, opts) {
  const apps = opts.appFilter?.length ? opts.appFilter : await exportApps(ctx);
  const networks = await exportNetworks(ctx);
  if (networks.length > 0) config.networks = networks;
- const services = await exportServices(ctx);
- if (Object.keys(services).length > 0) config.services = services;
+ const postgres = await exportPostgres(ctx);
+ if (Object.keys(postgres).length > 0) config.postgres = postgres;
+ const redis = await exportRedis(ctx);
+ if (Object.keys(redis).length > 0) config.redis = redis;
  const prefetched = /* @__PURE__ */ new Map();
  await Promise.all(
  ALL_APP_RESOURCES.filter((r) => !r.forceApply && !r.key.startsWith("_") && r.readAll).map(async (r) => {
@@ -1112,8 +1204,8 @@ async function runExport(ctx, opts) {
  appConfig[resource.key] = value;
  }
  }
- if (Object.keys(services).length > 0) {
- const links = await exportAppLinks(ctx, app, services);
+ if (config.postgres || config.redis) {
+ const links = await exportAppLinks(ctx, app, config);
  if (links.length > 0) appConfig.links = links;
  }
  config.apps[app] = appConfig;
@@ -1189,8 +1281,13 @@ async function computeDiff(ctx, config) {
  }
  result.apps[app] = appDiff;
  }
- for (const [svc, svcConfig] of Object.entries(config.services ?? {})) {
- const exists = await ctx.check(`${svcConfig.plugin}:exists`, svc);
+ for (const svc of Object.keys(config.postgres ?? {})) {
+ const exists = await ctx.check("postgres:exists", svc);
+ result.services[svc] = { status: exists ? "in-sync" : "missing" };
+ if (!exists) result.inSync = false;
+ }
+ for (const svc of Object.keys(config.redis ?? {})) {
+ const exists = await ctx.check("redis:exists", svc);
  result.services[svc] = { status: exists ? "in-sync" : "missing" };
  if (!exists) result.inSync = false;
  }
@@ -1296,25 +1393,28 @@ function validate(filePath) {
  }
  const data = raw;
  if (data?.apps && typeof data.apps === "object") {
- const serviceNames = new Set(
- data?.services && typeof data.services === "object" ? Object.keys(data.services) : []
- );
+ const serviceNames = /* @__PURE__ */ new Set();
+ if (data?.postgres && typeof data.postgres === "object") {
+ for (const name of Object.keys(data.postgres)) serviceNames.add(name);
+ }
+ if (data?.redis && typeof data.redis === "object") {
+ for (const name of Object.keys(data.redis)) serviceNames.add(name);
+ }
  for (const [appName, appCfg] of Object.entries(data.apps)) {
  if (!appCfg?.links) continue;
  for (const link of appCfg.links) {
  if (!serviceNames.has(link)) {
- errors.push(`apps.${appName}.links: service "${link}" not defined in services`);
+ errors.push(`apps.${appName}.links: service "${link}" not defined in postgres or redis`);
  }
  }
  }
  }
- if (data?.services && data?.plugins) {
- const pluginNames = new Set(Object.keys(data.plugins));
- for (const [svcName, svcCfg] of Object.entries(data.services)) {
- if (svcCfg?.plugin && !pluginNames.has(svcCfg.plugin)) {
- warnings.push(`services.${svcName}.plugin: "${svcCfg.plugin}" not declared in plugins (may be pre-installed)`);
- }
- }
+ const pluginNames = data?.plugins ? new Set(Object.keys(data.plugins)) : /* @__PURE__ */ new Set();
+ if (data?.postgres && pluginNames.size > 0 && !pluginNames.has("postgres")) {
+ warnings.push(`postgres: "postgres" plugin not declared in plugins (may be pre-installed)`);
+ }
+ if (data?.redis && pluginNames.size > 0 && !pluginNames.has("redis")) {
+ warnings.push(`redis: "redis" plugin not declared in plugins (may be pre-installed)`);
  }
  return { errors, warnings };
  }
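
> Taken together, these changes replace the generic `services:` map (which needed an explicit `plugin:` field per service) with dedicated top-level `postgres:` and `redis:` sections. A hedged before/after sketch inferred from the validate() and export code above (the service name and version are illustrative):

```yaml
# 0.7.0 and earlier
services:
  mydb:
    plugin: postgres
    version: "16"

# 0.9.0
postgres:
  mydb:
    version: "16"   # optional `image:` is emitted by export when a non-default image is detected
```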
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "dokku-compose",
- "version": "0.7.0",
+ "version": "0.9.0",
  "description": "Docker Compose for Dokku — declare your entire server in a single YAML file.",
  "main": "dist/index.js",
  "exports": "./dist/index.js",
@@ -13,7 +13,7 @@
  ],
  "type": "module",
  "engines": {
- "node": ">=18"
+ "node": ">=20"
  },
  "scripts": {
  "test": "bun test",