@crossdelta/infrastructure 0.10.1 → 0.11.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -3,23 +3,52 @@
3
3
  [![npm version](https://img.shields.io/npm/v/@crossdelta/infrastructure.svg)](https://www.npmjs.com/package/@crossdelta/infrastructure)
4
4
  [![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](https://opensource.org/licenses/MIT)
5
5
 
6
- Pulumi abstractions that turn per-service config objects into complete Kubernetes deployments. You describe **what** each service needs (ports, env, secrets, health checks), the package handles **how** (Deployments, Services, Ingress, TLS, probes, pull secrets).
6
+ Opinionated Pulumi library for deploying microservices to **DigitalOcean Kubernetes (DOKS)**. You describe each service as a typed config object; the package generates Deployments, Services, Ingress, Secrets, health probes, and rolling update strategies.
7
7
 
8
- **What you skip writing:**
9
- - Boilerplate K8s YAML / Pulumi resource declarations per service
10
- - NATS JetStream + cert-manager + NGINX Ingress setup
11
- - Docker registry secrets, health probes, rolling update strategies
12
- - Multi-environment concerns (shared cluster, per-stack namespaces)
8
+ Built for teams running TypeScript microservices on DOKS with NATS JetStream, GHCR, and Pulumi. If that's your stack, this saves real boilerplate. If not, you're probably better off with plain Pulumi or Helm.
9
+
10
+ ## When to use this
11
+
12
+ - You deploy multiple TypeScript services to a **DigitalOcean** Kubernetes cluster
13
+ - You use **Pulumi** (not Terraform, not Helm-only) for infrastructure
14
+ - You want one config object per service instead of ~150 lines of K8s resource declarations
15
+ - You need NATS JetStream, NGINX Ingress, cert-manager, or Caddy as a reverse proxy
16
+
17
+ ## When NOT to use this
18
+
19
+ - **AWS/GCP/Azure**: Only DOKS is implemented. EKS/AKS/GKE are not supported.
20
+ - **Terraform or plain Helm**: This is Pulumi-only.
21
+ - **Non-opinionated setups**: The package makes choices for you (Helm chart versions, deployment strategies, DO-specific annotations). If you need full control, use Pulumi directly.
22
+ - **Single-service projects**: The overhead isn't worth it for one service.
13
23
 
14
24
  ## Install
15
25
 
16
26
  ```bash
17
27
  npm install @crossdelta/infrastructure @pulumi/pulumi @pulumi/kubernetes
28
+ # For DOKS cluster creation:
29
+ npm install @pulumi/digitalocean
30
+ # For NATS stream deployment (optional):
31
+ npm install @crossdelta/cloudevents
18
32
  ```
19
33
 
20
- ## End-to-End Example
34
+ ## What's included
35
+
36
+ | Module | What it does |
37
+ |--------|-------------|
38
+ | **Cluster** | `createDOKSCluster()`, `createVPC()`, `createK8sProviderFromKubeconfig()` |
39
+ | **Workloads** | `deployK8sService()`, `deployK8sServices()`, `createNamespace()`, `createImagePullSecret()` |
40
+ | **NGINX Ingress** | `deployNginxIngress()` via Helm (chart v4.11.3) |
41
+ | **cert-manager** | `deployCertManager()` via Helm (chart v1.16.2) |
42
+ | **NATS** | `deployNats()` via Helm (chart v1.2.6), `buildNatsUrl()` |
43
+ | **Caddy** | `deployCaddy()`, `generateCaddyfile()` for reverse proxy with on-demand TLS |
44
+ | **Streams** | `collectStreamDefinitions()`, `deployStreams()`, `materializeStreams()` (NATS JetStream) |
45
+ | **Service Discovery** | `discoverServiceConfigs()` auto-discovers configs from `infra/services/*.ts` |
46
+ | **Local Dev** | `generateComposeYaml()`, k3d cluster scripts |
47
+ | **CLI** | `generate-env` generates `.env.local` from Pulumi config + service discovery |
48
+
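+ To give a feel for the Caddy module: `deployCaddy()` renders a Caddyfile from typed route objects. The snippet below is a simplified, self-contained sketch of the per-route rendering (mirroring `generateRouteBlock` from the bundled source — it is not the public API, and real routes also support `handles` and `basicAuth`):
+
+ ```typescript
+ // Simplified sketch of how a route becomes a Caddyfile site block
+ // (modeled on the bundled generateRouteBlock; not the exported API).
+ type Route = { hosts: string; upstream?: string; redirect?: string }
+
+ const renderRoute = (route: Route): string => {
+   const body: string[] = []
+   if (route.redirect) {
+     body.push(`  redir ${route.redirect} permanent`)
+   } else if (route.upstream) {
+     body.push(`  reverse_proxy ${route.upstream}`)
+   }
+   return `${route.hosts} {\n${body.join('\n')}\n}`
+ }
+
+ console.log(renderRoute({ hosts: 'api.example.com', upstream: 'orders:3000' }))
+ ```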
49
+ ## Quick Start
21
50
 
22
- One `index.ts` that provisions a cluster, runtime, and all services:
51
+ One `index.ts` that provisions a cluster, runtime components, and all services:
23
52
 
24
53
  ```typescript
25
54
  import {
@@ -31,7 +60,6 @@ import {
31
60
  discoverServiceConfigs,
32
61
  } from '@crossdelta/infrastructure'
33
62
 
34
- // 1. Cluster
35
63
  const vpc = createVPC({ name: 'my-vpc', region: 'fra1' })
36
64
  const { provider } = createDOKSCluster({
37
65
  name: 'my-cluster',
@@ -40,21 +68,19 @@ const { provider } = createDOKSCluster({
40
68
  })
41
69
  createNamespace(provider, 'my-namespace')
42
70
 
43
- // 2. Runtime (toggle what you need)
44
71
  const runtime = deployRuntime(provider, 'my-namespace', {
45
72
  nats: { enabled: true, config: { replicas: 1, jetstream: { enabled: true } } },
46
73
  ingress: { enabled: true },
47
74
  certManager: { enabled: true, config: { email: 'ops@example.com' } },
48
75
  })
49
76
 
50
- // 3. Services (auto-discovered from infra/services/*.ts)
51
77
  const configs = discoverServiceConfigs('services')
52
78
  deployK8sServices(provider, 'my-namespace', configs)
53
79
  ```
54
80
 
55
81
  ## Service Config
56
82
 
57
- Each file in `infra/services/` exports one config. The package derives Deployment, Service, Ingress, Secret, and probes from it:
83
+ Each file in `infra/services/` exports one config object. The package derives all K8s resources from it:
58
84
 
59
85
  ```typescript
60
86
  import { ports, type K8sServiceConfig } from '@crossdelta/infrastructure'
@@ -75,14 +101,14 @@ const config: K8sServiceConfig = {
75
101
  export default config
76
102
  ```
77
103
 
78
- See `K8sServiceConfig` type for all available fields (replicas, volumes, strategy, containerEnv, etc.).
104
+ See `K8sServiceConfig` for all fields: replicas, volumes, strategy, containerEnv, labels, annotations, serviceType, etc.
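+ As a sketch of how those fields combine (values here are placeholders and field shapes may differ — the exported `K8sServiceConfig` type is authoritative):
+
+ ```typescript
+ import { ports, type K8sServiceConfig } from '@crossdelta/infrastructure'
+
+ // Illustrative sketch only — image and values are placeholders; consult the
+ // K8sServiceConfig type before relying on any field shape shown here.
+ const config: K8sServiceConfig = {
+   name: 'orders',
+   image: 'ghcr.io/acme/orders:1.4.0',
+   ports: ports().http(3000).build(),
+   replicas: 2,
+ }
+
+ export default config
+ ```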
79
105
 
80
106
  ## Shared Cluster (Multi-Stack)
81
107
 
82
- When multiple Pulumi stacks share one cluster, use `clusterName`/`vpcName` to pin the DigitalOcean resource name (default appends the stack name):
108
+ Pin DigitalOcean resource names with `clusterName`/`vpcName` to share a cluster across Pulumi stacks:
83
109
 
84
110
  ```typescript
85
- // Stack A (stage) — owns the cluster
111
+ // Stage stack owns the cluster
86
112
  const { provider, kubeconfig } = createDOKSCluster({
87
113
  name: 'my-cluster',
88
114
  clusterName: 'my-cluster',
@@ -91,7 +117,7 @@ const { provider, kubeconfig } = createDOKSCluster({
91
117
  })
92
118
  export const clusterKubeconfig = kubeconfig
93
119
 
94
- // Stack B (production) — references the shared cluster
120
+ // Production stack references stage
95
121
  const stageStack = new StackReference('org/project/stage')
96
122
  const provider = createK8sProviderFromKubeconfig(
97
123
  'production',
@@ -99,9 +125,24 @@ const provider = createK8sProviderFromKubeconfig(
99
125
  )
100
126
  ```
101
127
 
102
- ## generate-env
128
+ ## generate-env CLI
129
+
130
+ Generates `.env.local` from Pulumi secrets and discovered service configs:
131
+
132
+ ```bash
133
+ npx generate-env # defaults to stage stack
134
+ npx generate-env --stack=production
135
+ npx generate-env --no-pulumi # skip Pulumi, only service discovery
136
+ ```
137
+
138
+ ## Limitations
139
+
140
+ - **DOKS only.** No other cloud provider is implemented.
141
+ - **Helm chart versions are pinned.** NGINX v4.11.3, cert-manager v1.16.2, NATS v1.2.6. No override option yet.
142
+ - **Service discovery assumes a convention:** `infra/services/*.ts` files with `export default config`.
143
+ - **`@crossdelta/cloudevents` is a hard dependency** even if you don't use NATS streams (tracked at https://github.com/orderboss/platform/issues/TBD).
144
+ - **Caddy deployment uses the Recreate strategy** (its RWO PVC is incompatible with rolling updates).
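+ The `infra/services/*.ts` convention can be illustrated with a hypothetical sketch (not the package's implementation — `discoverServiceConfigs()` is the real entry point):
+
+ ```typescript
+ import * as fs from 'node:fs'
+ import * as path from 'node:path'
+
+ // Hypothetical illustration of the discovery convention: every .ts file in
+ // the services directory is expected to `export default` one service config.
+ const listServiceConfigFiles = (dir: string): string[] =>
+   fs
+     .readdirSync(dir)
+     .filter((file) => file.endsWith('.ts'))
+     .sort()
+     .map((file) => path.join(dir, file))
+ ```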
103
145
 
104
- Generates `.env.local` from Pulumi secrets (default: `stage` stack) and discovered services. Override with `--stack=production` or `--no-pulumi`.
105
146
 
106
147
  ## License
107
148
 
package/dist/index.cjs CHANGED
@@ -64,6 +64,7 @@ __export(exports_lib, {
64
64
  generateComposeYaml: () => generateComposeYaml,
65
65
  generateComposeSetupScript: () => generateComposeSetupScript,
66
66
  generateComposeProject: () => generateComposeProject,
67
+ generateCaddyfile: () => generateCaddyfile,
67
68
  fromK8sPorts: () => fromK8sPorts,
68
69
  fromAppPlatformPorts: () => fromAppPlatformPorts,
69
70
  dockerHubImage: () => dockerHubImage,
@@ -77,6 +78,7 @@ __export(exports_lib, {
77
78
  deployK8sServices: () => deployK8sServices,
78
79
  deployK8sService: () => deployK8sService,
79
80
  deployCertManager: () => deployCertManager,
81
+ deployCaddy: () => deployCaddy,
80
82
  createVPC: () => createVPC,
81
83
  createPorts: () => createPorts,
82
84
  createPort: () => createPort,
@@ -1131,9 +1133,206 @@ var materializeStreams = (provider, namespace, config) => {
1131
1133
  });
1132
1134
  return job;
1133
1135
  };
1136
+ // lib/runtimes/doks/caddy.ts
1137
+ var k8s5 = __toESM(require("@pulumi/kubernetes"));
1138
+ var CADDY_DEFAULTS = {
1139
+ resources: {
1140
+ requests: { cpu: "50m", memory: "64Mi" },
1141
+ limits: { cpu: "200m", memory: "256Mi" }
1142
+ },
1143
+ storage: { size: "1Gi", storageClass: "do-block-storage" },
1144
+ healthCheck: { port: 80, path: "/healthz" }
1145
+ };
1146
+ var indent = (text, level) => {
1147
+ const prefix = " ".repeat(level);
1148
+ return text.split(`
1149
+ `).map((line) => line.trim() === "" ? "" : `${prefix}${line}`).join(`
1150
+ `);
1151
+ };
1152
+ var basicAuthLines = (basicAuth) => basicAuth ? ["basicauth {", ` ${basicAuth.user} ${basicAuth.hash}`, "}"] : [];
1153
+ var generateHandleBlock = (handle, level, basicAuth) => {
1154
+ const hasPath = handle.path != null;
1155
+ const header = hasPath ? `handle ${handle.path}* {` : "handle {";
1156
+ const body = [
1157
+ ...basicAuthLines(basicAuth)
1158
+ ];
1159
+ if (handle.redirect) {
1160
+ body.push(`redir ${handle.redirect} permanent`);
1161
+ } else if (handle.upstream) {
1162
+ if ((handle.stripPrefix ?? hasPath) && hasPath) {
1163
+ body.push(`uri strip_prefix ${handle.path}`);
1164
+ }
1165
+ body.push(`reverse_proxy ${handle.upstream}`);
1166
+ }
1167
+ const inner = body.map((line) => ` ${line}`).join(`
1168
+ `);
1169
+ return indent(`${header}
1170
+ ${inner}
1171
+ }`, level);
1172
+ };
1173
+ var generateRouteBlock = (route) => {
1174
+ const body = [];
1175
+ if (route.handles && route.handles.length > 0) {
1176
+ for (const handle of route.handles) {
1177
+ body.push(generateHandleBlock(handle, 1, route.basicAuth));
1178
+ }
1179
+ } else {
1180
+ body.push(...basicAuthLines(route.basicAuth).map((line) => ` ${line}`));
1181
+ if (route.redirect) {
1182
+ body.push(` redir ${route.redirect} permanent`);
1183
+ } else if (route.upstream) {
1184
+ body.push(` reverse_proxy ${route.upstream}`);
1185
+ }
1186
+ }
1187
+ return `${route.hosts} {
1188
+ ${body.join(`
1189
+ `)}
1190
+ }`;
1191
+ };
1192
+ var generateGlobalBlock = (config) => {
1193
+ const lines = ["{"];
1194
+ if (config.acmeEmail) {
1195
+ lines.push(` email ${config.acmeEmail}`);
1196
+ }
1197
+ lines.push(" admin off");
1198
+ if (config.onDemandTls) {
1199
+ lines.push(" on_demand_tls {");
1200
+ lines.push(` ask ${config.onDemandTls.askEndpoint}`);
1201
+ lines.push(" }");
1202
+ }
1203
+ lines.push(" servers {");
1204
+ lines.push(" trusted_proxies static private_ranges");
1205
+ lines.push(" }");
1206
+ lines.push("}");
1207
+ return lines.join(`
1208
+ `);
1209
+ };
1210
+ var generateCatchAllBlock = (upstream) => ["https:// {", " tls {", " on_demand", " }", ` reverse_proxy ${upstream}`, "}"].join(`
1211
+ `);
1212
+ var buildResourceSpec = (config, defaults) => {
1213
+ const resources = config ?? defaults ?? CADDY_DEFAULTS.resources;
1214
+ return {
1215
+ requests: resources.requests,
1216
+ limits: resources.limits
1217
+ };
1218
+ };
1219
+ var generateCaddyfile = (config) => {
1220
+ const healthCheck = config.healthCheck ?? CADDY_DEFAULTS.healthCheck;
1221
+ const healthCheckBlock = `:${healthCheck.port} {
1222
+ respond ${healthCheck.path} 200
1223
+ }`;
1224
+ const blocks = [
1225
+ generateGlobalBlock(config),
1226
+ ...config.routes.map(generateRouteBlock),
1227
+ healthCheckBlock,
1228
+ ...config.catchAllUpstream && config.onDemandTls ? [generateCatchAllBlock(config.catchAllUpstream)] : []
1229
+ ];
1230
+ return blocks.join(`
1231
+
1232
+ `) + `
1233
+ `;
1234
+ };
1235
+ var deployCaddy = (provider, namespace, config) => {
1236
+ const name = "caddy";
1237
+ const labels = { app: name, "app.kubernetes.io/name": name, "app.kubernetes.io/managed-by": "pulumi" };
1238
+ const healthCheck = config.healthCheck ?? CADDY_DEFAULTS.healthCheck;
1239
+ const storage = config.storage ?? CADDY_DEFAULTS.storage;
1240
+ const caddyfile = generateCaddyfile(config);
1241
+ const configMap = new k8s5.core.v1.ConfigMap(name, {
1242
+ metadata: { name, namespace, labels },
1243
+ data: { Caddyfile: caddyfile }
1244
+ }, { provider });
1245
+ const persistentVolumeClaim = new k8s5.core.v1.PersistentVolumeClaim(`${name}-data`, {
1246
+ metadata: { name: `${name}-data`, namespace, labels },
1247
+ spec: {
1248
+ accessModes: ["ReadWriteOnce"],
1249
+ storageClassName: storage.storageClass ?? CADDY_DEFAULTS.storage.storageClass,
1250
+ resources: { requests: { storage: storage.size } }
1251
+ }
1252
+ }, { provider });
1253
+ const caddyContainer = {
1254
+ name: "caddy",
1255
+ image: "caddy:2-alpine",
1256
+ ports: [
1257
+ { name: "https", containerPort: 443, protocol: "TCP" },
1258
+ { name: "http", containerPort: 80, protocol: "TCP" }
1259
+ ],
1260
+ resources: buildResourceSpec(config.resources),
1261
+ volumeMounts: [
1262
+ { name: "caddy-data", mountPath: "/data" },
1263
+ { name: "caddyfile", mountPath: "/etc/caddy/Caddyfile", subPath: "Caddyfile" }
1264
+ ],
1265
+ readinessProbe: {
1266
+ httpGet: { path: healthCheck.path, port: healthCheck.port },
1267
+ initialDelaySeconds: 5,
1268
+ periodSeconds: 10
1269
+ },
1270
+ livenessProbe: {
1271
+ httpGet: { path: healthCheck.path, port: healthCheck.port },
1272
+ initialDelaySeconds: 10,
1273
+ periodSeconds: 30
1274
+ }
1275
+ };
1276
+ const containers = [caddyContainer, ...config.sidecars ?? []];
1277
+ const deployment = new k8s5.apps.v1.Deployment(name, {
1278
+ metadata: { name, namespace, labels },
1279
+ spec: {
1280
+ replicas: 1,
1281
+ strategy: { type: "Recreate" },
1282
+ selector: { matchLabels: { app: name } },
1283
+ template: {
1284
+ metadata: { labels },
1285
+ spec: {
1286
+ containers,
1287
+ volumes: [
1288
+ {
1289
+ name: "caddy-data",
1290
+ persistentVolumeClaim: { claimName: `${name}-data` }
1291
+ },
1292
+ {
1293
+ name: "caddyfile",
1294
+ configMap: { name }
1295
+ }
1296
+ ]
1297
+ }
1298
+ }
1299
+ }
1300
+ }, { provider, dependsOn: [configMap, persistentVolumeClaim] });
1301
+ const service = new k8s5.core.v1.Service(name, {
1302
+ metadata: {
1303
+ name,
1304
+ namespace,
1305
+ labels,
1306
+ annotations: {
1307
+ "service.beta.kubernetes.io/do-loadbalancer-healthcheck-path": healthCheck.path,
1308
+ "service.beta.kubernetes.io/do-loadbalancer-healthcheck-protocol": "http",
1309
+ "service.beta.kubernetes.io/do-loadbalancer-healthcheck-port": String(healthCheck.port)
1310
+ }
1311
+ },
1312
+ spec: {
1313
+ type: "LoadBalancer",
1314
+ selector: { app: name },
1315
+ ports: [
1316
+ { name: "https", port: 443, targetPort: 443, protocol: "TCP" },
1317
+ { name: "http", port: 80, targetPort: 80, protocol: "TCP" }
1318
+ ]
1319
+ }
1320
+ }, { provider, dependsOn: [deployment] });
1321
+ const loadBalancerIp = service.status.apply((status) => {
1322
+ const ingress = status?.loadBalancer?.ingress?.[0];
1323
+ return ingress?.ip ?? ingress?.hostname ?? "";
1324
+ });
1325
+ return {
1326
+ deployment,
1327
+ service,
1328
+ persistentVolumeClaim,
1329
+ configMap,
1330
+ loadBalancerIp
1331
+ };
1332
+ };
1134
1333
  // lib/runtimes/doks/cluster.ts
1135
1334
  var digitalocean = __toESM(require("@pulumi/digitalocean"));
1136
- var k8s5 = __toESM(require("@pulumi/kubernetes"));
1335
+ var k8s6 = __toESM(require("@pulumi/kubernetes"));
1137
1336
  var pulumi4 = __toESM(require("@pulumi/pulumi"));
1138
1337
  function createDOKSCluster(config) {
1139
1338
  const stack = pulumi4.getStack();
@@ -1150,7 +1349,7 @@ function createDOKSCluster(config) {
1150
1349
  day: "sunday",
1151
1350
  startTime: "04:00"
1152
1351
  },
1153
- tags: config.tags ?? ["orderboss", `env:${stack}`],
1352
+ tags: config.tags ?? [`env:${stack}`],
1154
1353
  nodePool: {
1155
1354
  name: config.nodePool.name,
1156
1355
  size: config.nodePool.size,
@@ -1170,7 +1369,7 @@ function createDOKSCluster(config) {
1170
1369
  }
1171
1370
  return firstConfig.rawConfig;
1172
1371
  }));
1173
- const provider = new k8s5.Provider(`${config.name}-k8s-provider`, {
1372
+ const provider = new k8s6.Provider(`${config.name}-k8s-provider`, {
1174
1373
  kubeconfig
1175
1374
  });
1176
1375
  return {
@@ -1181,7 +1380,7 @@ function createDOKSCluster(config) {
1181
1380
  };
1182
1381
  }
1183
1382
  function createK8sProviderFromKubeconfig(name, kubeconfig) {
1184
- return new k8s5.Provider(name, { kubeconfig });
1383
+ return new k8s6.Provider(name, { kubeconfig });
1185
1384
  }
1186
1385
  // lib/runtimes/doks/vpc.ts
1187
1386
  var digitalocean2 = __toESM(require("@pulumi/digitalocean"));
@@ -1197,7 +1396,7 @@ function createVPC(config) {
1197
1396
  });
1198
1397
  }
1199
1398
  // lib/runtimes/doks/workloads.ts
1200
- var k8s6 = __toESM(require("@pulumi/kubernetes"));
1399
+ var k8s7 = __toESM(require("@pulumi/kubernetes"));
1201
1400
  var pulumi6 = __toESM(require("@pulumi/pulumi"));
1202
1401
 
1203
1402
  // lib/runtimes/doks/probes.ts
@@ -1226,7 +1425,7 @@ var normalizeK8sConfig = (config) => {
1226
1425
  if (config.containerPort) {
1227
1426
  console.warn(`⚠️ Service "${config.name}": containerPort is deprecated. Use ports instead.
1228
1427
  ` + ` Example: ports: ports().http(${config.containerPort}).build()
1229
- ` + ` See: https://github.com/orderboss/platform/blob/main/packages/infrastructure/README.md#port-configuration`);
1428
+ ` + ` See: https://www.npmjs.com/package/@crossdelta/infrastructure`);
1230
1429
  const ports2 = fromK8sPorts({
1231
1430
  containerPort: config.containerPort,
1232
1431
  additionalPorts: config.additionalPorts
@@ -1268,7 +1467,7 @@ var createImagePullSecret = (provider, namespace, name, config) => {
1268
1467
  }
1269
1468
  });
1270
1469
  });
1271
- return new k8s6.core.v1.Secret(name, {
1470
+ return new k8s7.core.v1.Secret(name, {
1272
1471
  metadata: {
1273
1472
  name,
1274
1473
  namespace,
@@ -1298,7 +1497,7 @@ var buildEnvVars = (config) => {
1298
1497
  })) : [];
1299
1498
  return [portEnv, ...plainEnvVars, ...secretEnvVars];
1300
1499
  };
1301
- var createServiceSecret = (provider, namespace, config, labels) => !config.secrets || Object.keys(config.secrets).length === 0 ? undefined : new k8s6.core.v1.Secret(`${config.name}-secret`, {
1500
+ var createServiceSecret = (provider, namespace, config, labels) => !config.secrets || Object.keys(config.secrets).length === 0 ? undefined : new k8s7.core.v1.Secret(`${config.name}-secret`, {
1302
1501
  metadata: {
1303
1502
  name: `${config.name}-secret`,
1304
1503
  namespace,
@@ -1311,7 +1510,7 @@ var createServiceVolumes = (provider, namespace, config, labels) => {
1311
1510
  if (!config.volumes) {
1312
1511
  return { pvcs: [], volumeMounts: [], volumes: [] };
1313
1512
  }
1314
- const pvcs = config.volumes.map((vol) => new k8s6.core.v1.PersistentVolumeClaim(`${config.name}-${vol.name}`, {
1513
+ const pvcs = config.volumes.map((vol) => new k8s7.core.v1.PersistentVolumeClaim(`${config.name}-${vol.name}`, {
1315
1514
  metadata: {
1316
1515
  name: `${config.name}-${vol.name}`,
1317
1516
  namespace,
@@ -1414,7 +1613,7 @@ var createServiceIngress = (provider, namespace, config, labels, service) => {
1414
1613
  const allHosts = [...primaryHosts, ...additionalHosts];
1415
1614
  const ingressRules = allHosts.length > 0 ? allHosts.map(createRule) : [createRule()];
1416
1615
  const tlsSecretName = config.ingress.tls?.secretName ?? `${config.name}-tls`;
1417
- return new k8s6.networking.v1.Ingress(`${config.name}-ingress`, {
1616
+ return new k8s7.networking.v1.Ingress(`${config.name}-ingress`, {
1418
1617
  metadata: {
1419
1618
  name: config.name,
1420
1619
  namespace,
@@ -1444,7 +1643,7 @@ var deployK8sService = (provider, namespace, config) => {
1444
1643
  const { livenessProbe, readinessProbe } = buildHealthProbes2(normalizedConfig);
1445
1644
  const containerPorts = buildContainerPorts(normalizedConfig);
1446
1645
  const servicePorts = buildServicePorts(normalizedConfig);
1447
- const deployment = new k8s6.apps.v1.Deployment(`${normalizedConfig.name}-deployment`, {
1646
+ const deployment = new k8s7.apps.v1.Deployment(`${normalizedConfig.name}-deployment`, {
1448
1647
  metadata: {
1449
1648
  name: normalizedConfig.name,
1450
1649
  namespace,
@@ -1493,7 +1692,7 @@ var deployK8sService = (provider, namespace, config) => {
1493
1692
  }
1494
1693
  }
1495
1694
  }, { provider, dependsOn: pvcs.length > 0 ? pvcs : undefined });
1496
- const service = new k8s6.core.v1.Service(`${normalizedConfig.name}-service`, {
1695
+ const service = new k8s7.core.v1.Service(`${normalizedConfig.name}-service`, {
1497
1696
  metadata: {
1498
1697
  name: normalizedConfig.name,
1499
1698
  namespace,
@@ -1526,7 +1725,7 @@ var deployK8sServices = (provider, namespace, configs, options) => configs.filte
1526
1725
  results.set(config.name, deployK8sService(provider, namespace, configWithSecret));
1527
1726
  return results;
1528
1727
  }, new Map);
1529
- var createNamespace = (provider, name, labels) => new k8s6.core.v1.Namespace(name, {
1728
+ var createNamespace = (provider, name, labels) => new k8s7.core.v1.Namespace(name, {
1530
1729
  metadata: {
1531
1730
  name,
1532
1731
  labels: {
@@ -1702,7 +1901,7 @@ var generateComposeSetupScript = (services, projectName = "orderboss-local") =>
1702
1901
  };
1703
1902
  // lib/runtimes/local/k3d.ts
1704
1903
  var DEFAULT_K3D_CONFIG = {
1705
- name: "orderboss-local",
1904
+ name: "local",
1706
1905
  servers: 1,
1707
1906
  agents: 1,
1708
1907
  ports: [
@@ -1747,7 +1946,7 @@ var generateK3dCreateCommand = (config = {}) => {
1747
1946
  }
1748
1947
  return parts.join(" ");
1749
1948
  };
1750
- var generateK3dDeleteCommand = (clusterName = "orderboss-local") => `k3d cluster delete ${clusterName}`;
1949
+ var generateK3dDeleteCommand = (clusterName = "local") => `k3d cluster delete ${clusterName}`;
1751
1950
  var getAllPorts2 = (config) => {
1752
1951
  if (!config.ports)
1753
1952
  return [];
@@ -1893,7 +2092,7 @@ var generateLocalSetupScript = (services, options = {}) => {
1893
2092
  for (const service of services) {
1894
2093
  lines.push(`echo -e "\${GREEN}Deploying ${service.name}...\${NC}"`, generateKubectlApplyCommand(service, namespace), "");
1895
2094
  }
1896
- lines.push('echo -e "${GREEN}Local development environment ready!${NC}"', 'echo -e "${YELLOW}Access services at: http://localhost:8080${NC}"', `echo -e "\${YELLOW}Namespace: ${namespace}\${NC}"`, "", "# Useful commands:", `# kubectl get pods -n ${namespace}`, `# kubectl logs -f <pod-name> -n ${namespace}`, `# k3d cluster delete ${k3dConfig.name || "orderboss-local"}`);
2095
+ lines.push('echo -e "${GREEN}Local development environment ready!${NC}"', 'echo -e "${YELLOW}Access services at: http://localhost:8080${NC}"', `echo -e "\${YELLOW}Namespace: ${namespace}\${NC}"`, "", "# Useful commands:", `# kubectl get pods -n ${namespace}`, `# kubectl logs -f <pod-name> -n ${namespace}`, `# k3d cluster delete ${k3dConfig.name || "local"}`);
1897
2096
  return lines.join(`
1898
2097
  `);
1899
2098
  };
package/dist/index.d.ts CHANGED
@@ -19,7 +19,7 @@
19
19
  * })
20
20
  * ```
21
21
  *
22
- * @see https://github.com/orderboss/platform/tree/main/packages/infrastructure
22
+ * @see https://www.npmjs.com/package/@crossdelta/infrastructure
23
23
  */
24
24
  export * from './core';
25
25
  export type { RuntimeDeploymentConfig, RuntimeDeploymentResult } from './core/runtime';
package/dist/index.js CHANGED
@@ -1037,9 +1037,206 @@ var materializeStreams = (provider, namespace, config) => {
1037
1037
  });
1038
1038
  return job;
1039
1039
  };
1040
+ // lib/runtimes/doks/caddy.ts
1041
+ import * as k8s5 from "@pulumi/kubernetes";
1042
+ var CADDY_DEFAULTS = {
1043
+ resources: {
1044
+ requests: { cpu: "50m", memory: "64Mi" },
1045
+ limits: { cpu: "200m", memory: "256Mi" }
1046
+ },
1047
+ storage: { size: "1Gi", storageClass: "do-block-storage" },
1048
+ healthCheck: { port: 80, path: "/healthz" }
1049
+ };
1050
+ var indent = (text, level) => {
1051
+ const prefix = " ".repeat(level);
1052
+ return text.split(`
1053
+ `).map((line) => line.trim() === "" ? "" : `${prefix}${line}`).join(`
1054
+ `);
1055
+ };
1056
+ var basicAuthLines = (basicAuth) => basicAuth ? ["basicauth {", ` ${basicAuth.user} ${basicAuth.hash}`, "}"] : [];
1057
+ var generateHandleBlock = (handle, level, basicAuth) => {
1058
+ const hasPath = handle.path != null;
1059
+ const header = hasPath ? `handle ${handle.path}* {` : "handle {";
1060
+ const body = [
1061
+ ...basicAuthLines(basicAuth)
1062
+ ];
1063
+ if (handle.redirect) {
1064
+ body.push(`redir ${handle.redirect} permanent`);
1065
+ } else if (handle.upstream) {
1066
+ if ((handle.stripPrefix ?? hasPath) && hasPath) {
1067
+ body.push(`uri strip_prefix ${handle.path}`);
1068
+ }
1069
+ body.push(`reverse_proxy ${handle.upstream}`);
1070
+ }
1071
+ const inner = body.map((line) => ` ${line}`).join(`
1072
+ `);
1073
+ return indent(`${header}
1074
+ ${inner}
1075
+ }`, level);
1076
+ };
1077
+ var generateRouteBlock = (route) => {
1078
+ const body = [];
1079
+ if (route.handles && route.handles.length > 0) {
1080
+ for (const handle of route.handles) {
1081
+ body.push(generateHandleBlock(handle, 1, route.basicAuth));
1082
+ }
1083
+ } else {
1084
+ body.push(...basicAuthLines(route.basicAuth).map((line) => ` ${line}`));
1085
+ if (route.redirect) {
1086
+ body.push(` redir ${route.redirect} permanent`);
1087
+ } else if (route.upstream) {
1088
+ body.push(` reverse_proxy ${route.upstream}`);
1089
+ }
1090
+ }
1091
+ return `${route.hosts} {
1092
+ ${body.join(`
1093
+ `)}
1094
+ }`;
1095
+ };
1096
+ var generateGlobalBlock = (config) => {
1097
+ const lines = ["{"];
1098
+ if (config.acmeEmail) {
1099
+ lines.push(` email ${config.acmeEmail}`);
1100
+ }
1101
+ lines.push(" admin off");
1102
+ if (config.onDemandTls) {
1103
+ lines.push(" on_demand_tls {");
1104
+ lines.push(` ask ${config.onDemandTls.askEndpoint}`);
1105
+ lines.push(" }");
1106
+ }
1107
+ lines.push(" servers {");
1108
+ lines.push(" trusted_proxies static private_ranges");
1109
+ lines.push(" }");
1110
+ lines.push("}");
1111
+ return lines.join(`
1112
+ `);
1113
+ };
1114
+ var generateCatchAllBlock = (upstream) => ["https:// {", " tls {", " on_demand", " }", ` reverse_proxy ${upstream}`, "}"].join(`
1115
+ `);
1116
+ var buildResourceSpec = (config, defaults) => {
1117
+ const resources = config ?? defaults ?? CADDY_DEFAULTS.resources;
1118
+ return {
1119
+ requests: resources.requests,
1120
+ limits: resources.limits
1121
+ };
1122
+ };
1123
+ var generateCaddyfile = (config) => {
1124
+ const healthCheck = config.healthCheck ?? CADDY_DEFAULTS.healthCheck;
1125
+ const healthCheckBlock = `:${healthCheck.port} {
1126
+ respond ${healthCheck.path} 200
1127
+ }`;
1128
+ const blocks = [
1129
+ generateGlobalBlock(config),
1130
+ ...config.routes.map(generateRouteBlock),
1131
+ healthCheckBlock,
1132
+ ...config.catchAllUpstream && config.onDemandTls ? [generateCatchAllBlock(config.catchAllUpstream)] : []
1133
+ ];
1134
+ return blocks.join(`
1135
+
1136
+ `) + `
1137
+ `;
1138
+ };
1139
+ var deployCaddy = (provider, namespace, config) => {
1140
+ const name = "caddy";
1141
+ const labels = { app: name, "app.kubernetes.io/name": name, "app.kubernetes.io/managed-by": "pulumi" };
1142
+ const healthCheck = config.healthCheck ?? CADDY_DEFAULTS.healthCheck;
1143
+ const storage = config.storage ?? CADDY_DEFAULTS.storage;
1144
+ const caddyfile = generateCaddyfile(config);
1145
+ const configMap = new k8s5.core.v1.ConfigMap(name, {
1146
+ metadata: { name, namespace, labels },
1147
+ data: { Caddyfile: caddyfile }
1148
+ }, { provider });
1149
+ const persistentVolumeClaim = new k8s5.core.v1.PersistentVolumeClaim(`${name}-data`, {
1150
+ metadata: { name: `${name}-data`, namespace, labels },
1151
+ spec: {
1152
+ accessModes: ["ReadWriteOnce"],
1153
+ storageClassName: storage.storageClass ?? CADDY_DEFAULTS.storage.storageClass,
1154
+ resources: { requests: { storage: storage.size } }
1155
+ }
1156
+ }, { provider });
1157
+ const caddyContainer = {
1158
+ name: "caddy",
1159
+ image: "caddy:2-alpine",
1160
+ ports: [
1161
+ { name: "https", containerPort: 443, protocol: "TCP" },
1162
+ { name: "http", containerPort: 80, protocol: "TCP" }
1163
+ ],
1164
+ resources: buildResourceSpec(config.resources),
1165
+ volumeMounts: [
1166
+ { name: "caddy-data", mountPath: "/data" },
1167
+ { name: "caddyfile", mountPath: "/etc/caddy/Caddyfile", subPath: "Caddyfile" }
1168
+ ],
1169
+ readinessProbe: {
1170
+ httpGet: { path: healthCheck.path, port: healthCheck.port },
1171
+ initialDelaySeconds: 5,
1172
+ periodSeconds: 10
1173
+ },
1174
+ livenessProbe: {
1175
+ httpGet: { path: healthCheck.path, port: healthCheck.port },
1176
+ initialDelaySeconds: 10,
1177
+ periodSeconds: 30
1178
+ }
1179
+ };
1180
+ const containers = [caddyContainer, ...config.sidecars ?? []];
1181
+ const deployment = new k8s5.apps.v1.Deployment(name, {
1182
+ metadata: { name, namespace, labels },
1183
+ spec: {
1184
+ replicas: 1,
1185
+ strategy: { type: "Recreate" },
1186
+ selector: { matchLabels: { app: name } },
1187
+ template: {
1188
+ metadata: { labels },
1189
+ spec: {
1190
+ containers,
1191
+ volumes: [
1192
+ {
1193
+ name: "caddy-data",
1194
+ persistentVolumeClaim: { claimName: `${name}-data` }
1195
+ },
1196
+ {
1197
+ name: "caddyfile",
1198
+ configMap: { name }
1199
+ }
1200
+ ]
1201
+ }
1202
+ }
1203
+ }
1204
+ }, { provider, dependsOn: [configMap, persistentVolumeClaim] });
1205
+ const service = new k8s5.core.v1.Service(name, {
1206
+ metadata: {
1207
+ name,
1208
+ namespace,
1209
+ labels,
1210
+ annotations: {
1211
+ "service.beta.kubernetes.io/do-loadbalancer-healthcheck-path": healthCheck.path,
1212
+ "service.beta.kubernetes.io/do-loadbalancer-healthcheck-protocol": "http",
1213
+ "service.beta.kubernetes.io/do-loadbalancer-healthcheck-port": String(healthCheck.port)
1214
+ }
1215
+ },
1216
+ spec: {
1217
+ type: "LoadBalancer",
1218
+ selector: { app: name },
1219
+ ports: [
1220
+ { name: "https", port: 443, targetPort: 443, protocol: "TCP" },
1221
+ { name: "http", port: 80, targetPort: 80, protocol: "TCP" }
1222
+ ]
1223
+ }
1224
+ }, { provider, dependsOn: [deployment] });
1225
+ const loadBalancerIp = service.status.apply((status) => {
1226
+ const ingress = status?.loadBalancer?.ingress?.[0];
1227
+ return ingress?.ip ?? ingress?.hostname ?? "";
1228
+ });
1229
+ return {
1230
+ deployment,
1231
+ service,
1232
+ persistentVolumeClaim,
1233
+ configMap,
1234
+ loadBalancerIp
1235
+ };
1236
+ };
  // lib/runtimes/doks/cluster.ts
  import * as digitalocean from "@pulumi/digitalocean";
- import * as k8s5 from "@pulumi/kubernetes";
+ import * as k8s6 from "@pulumi/kubernetes";
  import * as pulumi4 from "@pulumi/pulumi";
  function createDOKSCluster(config) {
  const stack = pulumi4.getStack();
@@ -1056,7 +1253,7 @@ function createDOKSCluster(config) {
  day: "sunday",
  startTime: "04:00"
  },
- tags: config.tags ?? ["orderboss", `env:${stack}`],
+ tags: config.tags ?? [`env:${stack}`],
  nodePool: {
  name: config.nodePool.name,
  size: config.nodePool.size,
@@ -1076,7 +1273,7 @@ function createDOKSCluster(config) {
  }
  return firstConfig.rawConfig;
  }));
- const provider = new k8s5.Provider(`${config.name}-k8s-provider`, {
+ const provider = new k8s6.Provider(`${config.name}-k8s-provider`, {
  kubeconfig
  });
  return {
@@ -1087,7 +1284,7 @@ function createDOKSCluster(config) {
  };
  }
  function createK8sProviderFromKubeconfig(name, kubeconfig) {
- return new k8s5.Provider(name, { kubeconfig });
+ return new k8s6.Provider(name, { kubeconfig });
  }
  // lib/runtimes/doks/vpc.ts
  import * as digitalocean2 from "@pulumi/digitalocean";
@@ -1103,7 +1300,7 @@ function createVPC(config) {
  });
  }
  // lib/runtimes/doks/workloads.ts
- import * as k8s6 from "@pulumi/kubernetes";
+ import * as k8s7 from "@pulumi/kubernetes";
  import * as pulumi6 from "@pulumi/pulumi";

  // lib/runtimes/doks/probes.ts
@@ -1132,7 +1329,7 @@ var normalizeK8sConfig = (config) => {
  if (config.containerPort) {
  console.warn(`⚠️ Service "${config.name}": containerPort is deprecated. Use ports instead.
  ` + ` Example: ports: ports().http(${config.containerPort}).build()
- ` + ` See: https://github.com/orderboss/platform/blob/main/packages/infrastructure/README.md#port-configuration`);
+ ` + ` See: https://www.npmjs.com/package/@crossdelta/infrastructure`);
  const ports2 = fromK8sPorts({
  containerPort: config.containerPort,
  additionalPorts: config.additionalPorts
@@ -1174,7 +1371,7 @@ var createImagePullSecret = (provider, namespace, name, config) => {
  }
  });
  });
- return new k8s6.core.v1.Secret(name, {
+ return new k8s7.core.v1.Secret(name, {
  metadata: {
  name,
  namespace,
@@ -1204,7 +1401,7 @@ var buildEnvVars = (config) => {
  })) : [];
  return [portEnv, ...plainEnvVars, ...secretEnvVars];
  };
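The `buildEnvVars` tail above shows the ordering contract: the injected `PORT` variable comes first, then plain env entries, then `secretKeyRef` entries. A hedged sketch of that assembly (the config shape and helper name are assumptions for illustration, not the package's internals):

```typescript
// Minimal K8s EnvVar shape used below.
type EnvVar = { name: string; value?: string; valueFrom?: { secretKeyRef: { name: string; key: string } } };

// Sketch of the documented ordering: PORT, then plain env, then secret-backed env.
const buildEnvVarsSketch = (cfg: {
  name: string;
  port: number;
  env?: Record<string, string>;
  secrets?: Record<string, string>;
}): EnvVar[] => {
  const portEnv: EnvVar = { name: "PORT", value: String(cfg.port) };
  const plainEnvVars = Object.entries(cfg.env ?? {}).map(([name, value]) => ({ name, value }));
  const secretEnvVars = Object.keys(cfg.secrets ?? {}).map((key) => ({
    name: key,
    // Keys resolve against the `${name}-secret` Secret created per service.
    valueFrom: { secretKeyRef: { name: `${cfg.name}-secret`, key } },
  }));
  return [portEnv, ...plainEnvVars, ...secretEnvVars];
};
```

Keeping secrets as `secretKeyRef` entries means secret values never appear in the Deployment spec itself.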
- var createServiceSecret = (provider, namespace, config, labels) => !config.secrets || Object.keys(config.secrets).length === 0 ? undefined : new k8s6.core.v1.Secret(`${config.name}-secret`, {
+ var createServiceSecret = (provider, namespace, config, labels) => !config.secrets || Object.keys(config.secrets).length === 0 ? undefined : new k8s7.core.v1.Secret(`${config.name}-secret`, {
  metadata: {
  name: `${config.name}-secret`,
  namespace,
@@ -1217,7 +1414,7 @@ var createServiceVolumes = (provider, namespace, config, labels) => {
  if (!config.volumes) {
  return { pvcs: [], volumeMounts: [], volumes: [] };
  }
- const pvcs = config.volumes.map((vol) => new k8s6.core.v1.PersistentVolumeClaim(`${config.name}-${vol.name}`, {
+ const pvcs = config.volumes.map((vol) => new k8s7.core.v1.PersistentVolumeClaim(`${config.name}-${vol.name}`, {
  metadata: {
  name: `${config.name}-${vol.name}`,
  namespace,
@@ -1320,7 +1517,7 @@ var createServiceIngress = (provider, namespace, config, labels, service) => {
  const allHosts = [...primaryHosts, ...additionalHosts];
  const ingressRules = allHosts.length > 0 ? allHosts.map(createRule) : [createRule()];
  const tlsSecretName = config.ingress.tls?.secretName ?? `${config.name}-tls`;
- return new k8s6.networking.v1.Ingress(`${config.name}-ingress`, {
+ return new k8s7.networking.v1.Ingress(`${config.name}-ingress`, {
  metadata: {
  name: config.name,
  namespace,
@@ -1350,7 +1547,7 @@ var deployK8sService = (provider, namespace, config) => {
  const { livenessProbe, readinessProbe } = buildHealthProbes2(normalizedConfig);
  const containerPorts = buildContainerPorts(normalizedConfig);
  const servicePorts = buildServicePorts(normalizedConfig);
- const deployment = new k8s6.apps.v1.Deployment(`${normalizedConfig.name}-deployment`, {
+ const deployment = new k8s7.apps.v1.Deployment(`${normalizedConfig.name}-deployment`, {
  metadata: {
  name: normalizedConfig.name,
  namespace,
@@ -1399,7 +1596,7 @@ var deployK8sService = (provider, namespace, config) => {
  }
  }
  }, { provider, dependsOn: pvcs.length > 0 ? pvcs : undefined });
- const service = new k8s6.core.v1.Service(`${normalizedConfig.name}-service`, {
+ const service = new k8s7.core.v1.Service(`${normalizedConfig.name}-service`, {
  metadata: {
  name: normalizedConfig.name,
  namespace,
@@ -1432,7 +1629,7 @@ var deployK8sServices = (provider, namespace, configs, options) => configs.filte
  results.set(config.name, deployK8sService(provider, namespace, configWithSecret));
  return results;
  }, new Map);
- var createNamespace = (provider, name, labels) => new k8s6.core.v1.Namespace(name, {
+ var createNamespace = (provider, name, labels) => new k8s7.core.v1.Namespace(name, {
  metadata: {
  name,
  labels: {
@@ -1608,7 +1805,7 @@ var generateComposeSetupScript = (services, projectName = "orderboss-local") =>
  };
  // lib/runtimes/local/k3d.ts
  var DEFAULT_K3D_CONFIG = {
- name: "orderboss-local",
+ name: "local",
  servers: 1,
  agents: 1,
  ports: [
@@ -1653,7 +1850,7 @@ var generateK3dCreateCommand = (config = {}) => {
  }
  return parts.join(" ");
  };
- var generateK3dDeleteCommand = (clusterName = "orderboss-local") => `k3d cluster delete ${clusterName}`;
+ var generateK3dDeleteCommand = (clusterName = "local") => `k3d cluster delete ${clusterName}`;
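The k3d helpers are plain string builders with no side effects, which makes the rename easy to verify. Re-creating the delete command with the new `"local"` default:

```typescript
// Standalone re-creation of the one-line helper shown in the diff above.
const generateK3dDeleteCommand = (clusterName = "local"): string =>
  `k3d cluster delete ${clusterName}`;

console.log(generateK3dDeleteCommand()); // "k3d cluster delete local"
```

Callers that still pass an explicit cluster name (e.g. `"orderboss-local"`) are unaffected by the default change.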
  var getAllPorts2 = (config) => {
  if (!config.ports)
  return [];
@@ -1799,7 +1996,7 @@ var generateLocalSetupScript = (services, options = {}) => {
  for (const service of services) {
  lines.push(`echo -e "\${GREEN}Deploying ${service.name}...\${NC}"`, generateKubectlApplyCommand(service, namespace), "");
  }
- lines.push('echo -e "${GREEN}Local development environment ready!${NC}"', 'echo -e "${YELLOW}Access services at: http://localhost:8080${NC}"', `echo -e "\${YELLOW}Namespace: ${namespace}\${NC}"`, "", "# Useful commands:", `# kubectl get pods -n ${namespace}`, `# kubectl logs -f <pod-name> -n ${namespace}`, `# k3d cluster delete ${k3dConfig.name || "orderboss-local"}`);
+ lines.push('echo -e "${GREEN}Local development environment ready!${NC}"', 'echo -e "${YELLOW}Access services at: http://localhost:8080${NC}"', `echo -e "\${YELLOW}Namespace: ${namespace}\${NC}"`, "", "# Useful commands:", `# kubectl get pods -n ${namespace}`, `# kubectl logs -f <pod-name> -n ${namespace}`, `# k3d cluster delete ${k3dConfig.name || "local"}`);
  return lines.join(`
  `);
  };
@@ -1826,6 +2023,7 @@ export {
  generateComposeYaml,
  generateComposeSetupScript,
  generateComposeProject,
+ generateCaddyfile,
  fromK8sPorts,
  fromAppPlatformPorts,
  dockerHubImage,
@@ -1839,6 +2037,7 @@ export {
  deployK8sServices,
  deployK8sService,
  deployCertManager,
+ deployCaddy,
  createVPC,
  createPorts,
  createPort,
@@ -0,0 +1,53 @@
+ /**
+  * Caddy reverse proxy deployment for DOKS.
+  *
+  * Replaces nginx-ingress + cert-manager with a single Caddy deployment
+  * that handles routing, TLS (ACME + on-demand), and basic auth.
+  */
+ import * as k8s from '@pulumi/kubernetes';
+ import type { CaddyConfig, CaddyHandle, CaddyResult, CaddyRoute } from './types';
+ export type { CaddyConfig, CaddyHandle, CaddyResult, CaddyRoute };
+ /**
+  * Generate a Caddyfile from a typed configuration.
+  *
+  * Pure function with no side effects. All route types, basic auth,
+  * on-demand TLS, and health checks are supported.
+  *
+  * @example
+  * ```typescript
+  * const caddyfile = generateCaddyfile({
+  *   acmeEmail: 'admin@example.com',
+  *   routes: [
+  *     { hosts: 'example.com', upstream: 'website.my-app.svc.cluster.local:3200' },
+  *     { hosts: 'www.example.com', redirect: 'https://example.com{uri}' },
+  *   ],
+  *   onDemandTls: { askEndpoint: 'http://localhost:8080/ask' },
+  *   catchAllUpstream: 'storefront.my-app.svc.cluster.local:3000',
+  *   healthCheck: { port: 80, path: '/healthz' },
+  * })
+  * ```
+  */
+ export declare const generateCaddyfile: (config: CaddyConfig) => string;
+ /**
+  * Deploy Caddy as a reverse proxy to the cluster.
+  *
+  * Creates:
+  * - Deployment with `caddy:2-alpine` (Strategy: Recreate for RWO PVC)
+  * - LoadBalancer Service with DigitalOcean health check annotations
+  * - PVC for cert persistence (`/data`)
+  * - ConfigMap with the generated Caddyfile
+  * - Optional sidecar containers (e.g., /ask endpoint)
+  *
+  * @example
+  * ```typescript
+  * const caddy = deployCaddy(provider, 'my-app', {
+  *   acmeEmail: 'admin@example.com',
+  *   routes: [
+  *     { hosts: 'example.com', upstream: 'website.my-app.svc.cluster.local:3200' },
+  *   ],
+  *   onDemandTls: { askEndpoint: 'http://localhost:8080/ask' },
+  *   catchAllUpstream: 'storefront.my-app.svc.cluster.local:3000',
+  * })
+  * ```
+  */
+ export declare const deployCaddy: (provider: k8s.Provider, namespace: string, config: CaddyConfig) => CaddyResult;
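Since `generateCaddyfile` is pure, its input can be checked as plain data before any Pulumi resources exist. A config literal of the documented shape (hosts and upstreams are placeholders; the inline types restate only the fields used here, not the package's full `CaddyConfig`):

```typescript
// Structural restatement of the fields used in the example above (assumption:
// the real CaddyConfig/CaddyRoute types from './types' are supersets of these).
type Route = { hosts: string; upstream?: string; redirect?: string };
type Config = {
  acmeEmail?: string;
  routes: Route[];
  onDemandTls?: { askEndpoint: string };
  catchAllUpstream?: string;
};

const config: Config = {
  acmeEmail: "admin@example.com",
  routes: [
    { hosts: "example.com", upstream: "website.my-app.svc.cluster.local:3200" },
    { hosts: "www.example.com", redirect: "https://example.com{uri}" },
  ],
  onDemandTls: { askEndpoint: "http://localhost:8080/ask" },
  catchAllUpstream: "storefront.my-app.svc.cluster.local:3000",
};
```

The same object then feeds both `generateCaddyfile(config)` (for inspection) and `deployCaddy(provider, namespace, config)` (for deployment).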
@@ -26,7 +26,7 @@ export interface DOKSClusterResult {
  * @example
  * ```typescript
  * const { cluster, provider, kubeconfig } = createDOKSCluster({
- *   name: 'orderboss-cluster',
+ *   name: 'my-cluster',
  *   region: 'fra1',
  *   vpcUuid: vpc.id,
  *   nodePool: {
@@ -42,6 +42,8 @@
  *
  * @module
  */
+ export type { CaddyConfig, CaddyHandle, CaddyResult, CaddyRoute } from './caddy';
+ export { deployCaddy, generateCaddyfile } from './caddy';
  export type { CertManagerConfig, CertManagerResult } from './cert-manager';
  export { deployCertManager } from './cert-manager';
  export type { DOKSClusterResult } from './cluster';
@@ -12,7 +12,7 @@ import type { NatsConfig, NatsDeploymentResult } from './types';
  *
  * @example
  * ```typescript
- * const nats = deployNats(provider, 'orderboss', {
+ * const nats = deployNats(provider, 'my-app', {
  *   replicas: 3,
  *   jetstream: {
  *     enabled: true,
@@ -26,7 +26,7 @@ import type { NatsConfig, NatsDeploymentResult } from './types';
  * })
  *
  * // Connect from other services:
- * // nats://nats.orderboss.svc.cluster.local:4222
+ * // nats://nats.my-app.svc.cluster.local:4222
  * ```
  */
  export declare function deployNats(provider: k8s.Provider, namespace: string, config?: NatsConfig): NatsDeploymentResult;
@@ -38,11 +38,11 @@ export declare function deployNats(provider: k8s.Provider, namespace: string, co
  * @example
  * ```typescript
  * // Without auth
- * const url = buildNatsUrl('orderboss') // nats://nats.orderboss.svc.cluster.local:4222
+ * const url = buildNatsUrl('my-app') // nats://nats.my-app.svc.cluster.local:4222
  *
  * // With auth
- * const url = buildNatsUrl('orderboss', { user: 'myuser', password: 'secret' })
- * // nats://myuser:secret@nats.orderboss.svc.cluster.local:4222
+ * const url = buildNatsUrl('my-app', { user: 'myuser', password: 'secret' })
+ * // nats://myuser:secret@nats.my-app.svc.cluster.local:4222
  * ```
  */
  export declare function buildNatsUrl(namespace: string, auth?: {
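The documented behavior of `buildNatsUrl` can be reproduced as a standalone sketch that mirrors the examples in the JSDoc above (the real implementation ships with the package; this restates the URL pattern only):

```typescript
// Mirrors the documented output: nats://[user:password@]nats.<namespace>.svc.cluster.local:4222
const natsUrl = (namespace: string, auth?: { user: string; password: string }): string => {
  const host = `nats.${namespace}.svc.cluster.local:4222`;
  return auth ? `nats://${auth.user}:${auth.password}@${host}` : `nats://${host}`;
};

console.log(natsUrl("my-app")); // "nats://nats.my-app.svc.cluster.local:4222"
```

The namespace is the only variable part, which is why both `deployNats` and `buildNatsUrl` take it as their first argument.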
@@ -1,4 +1,5 @@
  import type * as digitalocean from '@pulumi/digitalocean';
+ import type * as k8s from '@pulumi/kubernetes';
  import type * as pulumi from '@pulumi/pulumi';
  import type { ServicePorts } from '../../core/types';
  /** DigitalOcean Kubernetes Cluster Args (from @pulumi/digitalocean) */
@@ -16,7 +17,7 @@ export type Region = digitalocean.Region;
  * @example
  * ```typescript
  * const cluster = createDOKSCluster({
- *   name: 'orderboss-cluster',
+ *   name: 'my-cluster',
  *   region: 'fra1',
  *   vpcUuid: vpc.id,
  *   nodePool: {
@@ -180,7 +181,7 @@ export interface K8sVolumeMount {
  * // Internal service (not publicly accessible)
  * const config: K8sServiceConfig = {
  *   name: 'orders',
- *   image: 'ghcr.io/orderboss/platform/orders:latest',
+ *   image: 'ghcr.io/my-org/my-app/orders:latest',
  *   containerPort: 4001,
  *   replicas: 2,
  *   env: {
@@ -194,7 +195,7 @@ export interface K8sVolumeMount {
  * // Public service with ingress
  * const config: K8sServiceConfig = {
  *   name: 'storefront',
- *   image: 'ghcr.io/orderboss/platform/storefront:latest',
+ *   image: 'ghcr.io/my-org/my-app/storefront:latest',
  *   containerPort: 3000,
  *   ingress: { path: '/' },
  *   healthCheck: { httpPath: '/health' },
@@ -205,7 +206,7 @@ export interface K8sServiceConfig {
  /** Unique name of the service (used for deployment, service, and labels) */
  name: string;
  /**
- * Container image (e.g., 'ghcr.io/orderboss/platform/storefront:latest').
+ * Container image (e.g., 'ghcr.io/my-org/my-app/storefront:latest').
  * If not specified, auto-generated from pf.registry config + service name.
  */
  image?: string;
@@ -321,7 +322,7 @@ export interface DeployK8sServicesOptions {
  *
  * @example
  * ```typescript
- * const nats = deployNats(provider, 'orderboss', {
+ * const nats = deployNats(provider, 'my-app', {
  *   replicas: 3,
  *   jetstream: {
  *     enabled: true,
@@ -389,7 +390,7 @@ export interface K8sServiceDeploymentResult {
  secret?: unknown;
  /** PVCs for persistent storage */
  pvcs?: unknown[];
- /** Internal service URL (e.g., 'http://orders.orderboss.svc.cluster.local:4001') */
+ /** Internal service URL (e.g., 'http://orders.my-app.svc.cluster.local:4001') */
  internalUrl: pulumi.Output<string>;
  /** Service DNS name within cluster */
  serviceDns: string;
@@ -405,3 +406,67 @@ export interface NatsDeploymentResult {
  /** Service DNS name */
  serviceDns: string;
  }
+ export interface CaddyHandle {
+ /** Path prefix to match (e.g., '/api'). Omit for default/catch-all handle. */
+ path?: string;
+ /** Strip the path prefix before forwarding (default: true when path is set) */
+ stripPrefix?: boolean;
+ /** Upstream address (e.g., 'api-gateway.my-app.svc.cluster.local:4000') */
+ upstream?: string;
+ /** Redirect target instead of proxying (e.g., 'https://studio.example.com') */
+ redirect?: string;
+ }
+ export interface CaddyRoute {
+ /** Hostname(s) for this route (e.g., 'example.com' or 'www.example.com, www.example.de') */
+ hosts: string;
+ /** Request handlers. Multiple handles create nested handle blocks. Single upstream can use shorthand. */
+ handles?: CaddyHandle[];
+ /** Shorthand: single upstream for the entire host (use instead of handles for simple routes) */
+ upstream?: string;
+ /** Shorthand: redirect target for the entire host (use instead of handles for redirect-only routes) */
+ redirect?: string;
+ /** Basic auth credentials for this route */
+ basicAuth?: {
+ user: string;
+ hash: string;
+ };
+ }
+ export interface CaddyConfig {
+ /** Named routes with explicit ACME certs */
+ routes: CaddyRoute[];
+ /** On-demand TLS configuration for dynamic domains */
+ onDemandTls?: {
+ /** URL of the /ask endpoint that validates hostnames */
+ askEndpoint: string;
+ };
+ /** Upstream for the catch-all on-demand TLS block (e.g., storefront) */
+ catchAllUpstream?: string;
+ /** ACME account email for Let's Encrypt */
+ acmeEmail?: string;
+ /** Additional containers in the Caddy pod (e.g., /ask sidecar) */
+ sidecars?: k8s.types.input.core.v1.Container[];
+ /** Resource limits for the Caddy container */
+ resources?: K8sResourceConfig;
+ /** Persistent storage for certs and OCSP cache */
+ storage?: {
+ size: string;
+ storageClass?: string;
+ };
+ /** Health check endpoint for the LoadBalancer */
+ healthCheck?: {
+ port: number;
+ path: string;
+ };
+ }
+ export interface CaddyResult {
+ /** The Kubernetes Deployment */
+ deployment: k8s.apps.v1.Deployment;
+ /** The LoadBalancer Service */
+ service: k8s.core.v1.Service;
+ /** PVC for cert persistence */
+ persistentVolumeClaim: k8s.core.v1.PersistentVolumeClaim;
+ /** ConfigMap containing the Caddyfile */
+ configMap: k8s.core.v1.ConfigMap;
+ /** The LoadBalancer external IP */
+ loadBalancerIp: pulumi.Output<string>;
+ }
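`CaddyRoute` accepts either an explicit `handles` array or the `upstream`/`redirect` shorthands. One plausible normalization of a route into handles (an illustration of the documented semantics, not the package's internal code):

```typescript
// Local restatement of the fields relevant to normalization (assumption:
// subsets of the CaddyHandle/CaddyRoute interfaces declared above).
interface Handle { path?: string; upstream?: string; redirect?: string }
interface Route { hosts: string; handles?: Handle[]; upstream?: string; redirect?: string }

// Explicit handles win; otherwise a shorthand becomes a single catch-all handle.
const normalizeRoute = (route: Route): Handle[] => {
  if (route.handles?.length) return route.handles;
  if (route.upstream) return [{ upstream: route.upstream }];
  if (route.redirect) return [{ redirect: route.redirect }];
  return [];
};
```

This is why simple routes in the examples above can stay one-liners while multi-path hosts spell out `handles`.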
@@ -9,9 +9,9 @@ import type { VPCConfig } from './types';
  * @example
  * ```typescript
  * const vpc = createVPC({
- *   name: 'orderboss-vpc',
+ *   name: 'my-vpc',
  *   region: 'fra1',
- *   description: 'VPC for orderboss platform',
+ *   description: 'VPC for my platform',
  * })
  * ```
  */
@@ -6,7 +6,7 @@ import type { DeployK8sServicesOptions, K8sServiceConfig, K8sServiceDeploymentRe
  *
  * @example
  * ```typescript
- * const secret = createImagePullSecret(provider, 'orderboss', 'ghcr-secret', {
+ * const secret = createImagePullSecret(provider, 'my-app', 'ghcr-secret', {
  *   registry: 'ghcr.io',
  *   username: 'my-org',
  *   password: ghcrToken,
@@ -30,9 +30,9 @@ export declare const createImagePullSecret: (provider: k8s.Provider, namespace:
  *
  * @example
  * ```typescript
- * const result = deployK8sService(provider, 'orderboss', {
+ * const result = deployK8sService(provider, 'my-app', {
  *   name: 'orders',
- *   image: 'ghcr.io/orderboss/platform/orders:latest',
+ *   image: 'ghcr.io/my-org/orders:latest',
  *   containerPort: 4001,
  *   replicas: 2,
  *   env: {
@@ -47,7 +47,7 @@ export declare const createImagePullSecret: (provider: k8s.Provider, namespace:
  * })
  *
  * // Access the service internally:
- * // http://orders.orderboss.svc.cluster.local:4001
+ * // http://orders.my-app.svc.cluster.local:4001
  * ```
  */
  export declare const deployK8sService: (provider: k8s.Provider, namespace: string, config: K8sServiceConfig) => K8sServiceDeploymentResult;
@@ -56,7 +56,7 @@ export declare const deployK8sService: (provider: k8s.Provider, namespace: strin
  *
  * @example
  * ```typescript
- * const results = deployK8sServices(provider, 'orderboss', [
+ * const results = deployK8sServices(provider, 'my-app', [
  *   ordersConfig,
  *   storefrontConfig,
  *   apiGatewayConfig,
@@ -69,7 +69,7 @@ export declare const deployK8sServices: (provider: k8s.Provider, namespace: stri
  *
  * @example
  * ```typescript
- * const ns = createNamespace(provider, 'orderboss')
+ * const ns = createNamespace(provider, 'my-app')
  * ```
  */
  export declare const createNamespace: (provider: k8s.Provider, name: string, labels?: Record<string, string>) => k8s.core.v1.Namespace;
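The internal URL documented throughout these declarations follows the standard in-cluster Service DNS pattern; as a one-line formula:

```typescript
// In-cluster URL pattern shown in the deployK8sService example above:
// http://<service>.<namespace>.svc.cluster.local:<port>
const internalUrl = (name: string, namespace: string, port: number): string =>
  `http://${name}.${namespace}.svc.cluster.local:${port}`;

console.log(internalUrl("orders", "my-app", 4001)); // "http://orders.my-app.svc.cluster.local:4001"
```

Because the namespace is per-stack, the same service configs yield distinct URLs in each environment without extra wiring.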
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@crossdelta/infrastructure",
- "version": "0.10.1",
+ "version": "0.11.1",
  "type": "module",
  "license": "MIT",
  "publishConfig": {