prizmkit 1.0.141 → 1.0.143

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,5 +1,5 @@
  {
- "frameworkVersion": "1.0.141",
- "bundledAt": "2026-03-28T22:33:06.813Z",
- "bundledFrom": "555434f"
+ "frameworkVersion": "1.0.143",
+ "bundledAt": "2026-03-28T23:59:01.821Z",
+ "bundledFrom": "ea51479"
  }
@@ -1,5 +1,5 @@
  {
- "version": "1.0.141",
+ "version": "1.0.143",
  "skills": {
  "prizm-kit": {
  "description": "Full-lifecycle dev toolkit. Covers spec-driven development, Prizm context docs, code quality, debugging, deployment, and knowledge management.",
@@ -113,38 +113,37 @@ Detect user intent from their message, then follow the corresponding workflow:
  - **(2) Background daemon** — pipeline runs fully detached via `launch-bugfix-daemon.sh`. Survives AI CLI session closure.
  - **(3) Manual** — display the final assembled commands only. Do not execute anything. User runs them on their own.

- 5. **Present configuration options**: After execution mode is chosen, show the remaining options with defaults. Ask the user to confirm or override.
+ 5. **Ask configuration options** ⚠️ MANDATORY INTERACTIVE STEP — applies to ALL execution modes (Foreground, Background, AND Manual). You MUST ask the user to configure options and WAIT for their response BEFORE proceeding to step 6. Do NOT skip this step or merge it with step 6.

- **Configuration Options:**
+ Use `AskUserQuestion` to present the following configuration choices. Each question is a separate selectable option:

- | Option | Default | Description |
- |--------|---------|-------------|
- | **Verbose logging** | On | Detailed AI session logs including tool calls and subagent activity |
- | **Max retries** | 3 | Max retry attempts per failed bug |
- | **Session timeout** | None | Per-bug timeout in seconds (e.g. `3600` = 1 hour) |
- | **Bug filter** | All | Run specific bugs: `B-001:B-005` (range), `B-001,B-003` (list), or mixed `B-001,B-005:B-010` |
+ **Question 1 — Verbose logging** (multiSelect: false):
+ - On (default) — Detailed AI session logs including tool calls and subagent activity
+ - Off — Minimal logging

- Default Verbose On.
+ **Question 2 — Max retries** (multiSelect: false):
+ - 3 (default)
+ - 1
+ - 5

- **Environment variable mapping** (for natural language → env var translation):
+ **Question 3 — Session timeout** (multiSelect: false):
+ - None (default) — No timeout
+ - 30 min — `SESSION_TIMEOUT=1800`
+ - 1 hour — `SESSION_TIMEOUT=3600`
+ - 2 hours — `SESSION_TIMEOUT=7200`

- | User says | Environment variable |
- |-----------|---------------------|
- | "timeout 2 hours" | `SESSION_TIMEOUT=7200` |
- | "max 5 retries" | `MAX_RETRIES=5` |
- | "no verbose" / "quiet" | `VERBOSE=0` |
- | "heartbeat every 60s" | `HEARTBEAT_INTERVAL=60` |
+ Note: Bug filter defaults to all bugs (by severity order). If the user selects "Other" on any option, handle their custom input.

- Example presentation to user:
- ```
- Bugfix pipeline will process N bugs in Foreground mode:
- - Verbose: On (subagent detection enabled)
- - Max retries: 3
- - Timeout: none
- - Bugs: all (by severity order)
+ **Environment variable mapping** (for translating user responses → env vars):

- Want to change any options, or proceed with these defaults?
- ```
+ | Config choice | Environment variable |
+ |-----------|---------------------|
+ | Verbose: Off | `VERBOSE=0` |
+ | Verbose: On | `VERBOSE=1` |
+ | Max retries: N | `MAX_RETRIES=N` |
+ | Timeout: value | `SESSION_TIMEOUT=<seconds>` |
+
+ ⚠️ STOP HERE and wait for user response before continuing to step 6.

  6. **Show final command**: Assemble the complete command from execution mode + confirmed configuration, and present it to the user.

@@ -103,53 +103,78 @@ Detect user intent from their message, then follow the corresponding workflow:
  --action status 2>/dev/null
  ```

- 4. **Ask execution mode** (first user decision):
+ 4. **Run environment preflight checks** (database connectivity, migrations, dev server):
+
+ Run the preflight script to auto-detect the database type, verify env vars, test connectivity, and check migration status:
+ ```bash
+ python3 ${SKILL_DIR}/scripts/preflight-check.py feature-list.json
+ ```
+
+ The script:
+ - Reads `global_context.database` from `feature-list.json`, falling back to `.prizmkit/config.json`
+ - Scans `.env.local` / `.env` for connection variables (supports Supabase, PostgreSQL, MySQL, MongoDB, Firebase, and generic `DATABASE_URL`)
+ - Tests connectivity using the appropriate method per database type
+ - Checks migration status (Prisma, Drizzle, Supabase raw SQL, or generic migration directories)
+ - Checks if the dev server is running (from `browser_interaction` URLs)
+ - Outputs `PREFLIGHT ✓` (pass), `PREFLIGHT ⚠` (warning), or `PREFLIGHT ℹ` (info) lines
+ - Exits 0 (all clear), 1 (warnings found), or 2 (error — feature list not found)
+
+ If the script reports `⚠` warnings, present them to the user and ask:
+ > "Environment preflight found issues (listed above). The pipeline can still run, but database-related features may produce code that passes mock tests without real database verification. Continue anyway?"
+
+ Wait for user confirmation. If they want to fix issues first, suggest remediation based on the warnings (apply migrations, configure env vars, check database service status).
+
+ If `global_context.database` is absent and no features mention database keywords, the script skips DB checks automatically.
+
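The exit-code contract above (0 = all clear, 1 = warnings, 2 = error) can be captured in a small dispatch helper. This is an illustrative sketch only, not part of the package; the function and action names are hypothetical:

```python
def preflight_action(exit_code: int) -> str:
    """Map the preflight script's documented exit codes to a next step.

    0 = all clear (proceed), 1 = warnings found (ask the user to confirm),
    anything else = abort (2 means the feature list was not found).
    """
    return {0: "proceed", 1: "ask-user"}.get(exit_code, "abort")

# A warning exit should pause for user confirmation:
print(preflight_action(1))  # → ask-user
```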
+ 5. **Ask execution mode** (first user decision):

  Present the three modes and ask the user to choose:
  - **(1) Foreground** (recommended) — pipeline runs in the current session via `run.sh run`. Visible output and direct error feedback.
  - **(2) Background daemon** — pipeline runs fully detached via `launch-daemon.sh`. Survives AI CLI session closure.
  - **(3) Manual** — display the final assembled commands only. Do not execute anything. User runs them on their own.

- 5. **Present configuration options**: After execution mode is chosen, show the remaining options with defaults. Ask the user to confirm or override.
+ 6. **Ask configuration options** ⚠️ MANDATORY INTERACTIVE STEP — applies to ALL execution modes (Foreground, Background, AND Manual). You MUST ask the user to configure options and WAIT for their response BEFORE proceeding to step 7. Do NOT skip this step or merge it with step 7.
+
+ Use `AskUserQuestion` to present the following configuration choices. Each question is a separate selectable option:
+
+ **Question 1 — Critic review** (multiSelect: false):
+ - Off (default) — Skip adversarial review
+ - On — Enable critic review after planning & implementation (+5-10 min/feature)
+
+ **Question 2 — Verbose logging** (multiSelect: false):
+ - On (default) — Detailed AI session logs including tool calls and subagent activity
+ - Off — Minimal logging

- **Configuration Options:**
+ **Question 3 — Max retries** (multiSelect: false):
+ - 3 (default)
+ - 1
+ - 5

- | Option | Default | Description |
- |--------|---------|-------------|
- | **Critic review** | Off | Adversarial review after planning & implementation. Increases time ~5-10 min/feature |
- | **Verbose logging** | On | Detailed AI session logs including tool calls and subagent activity |
- | **Max retries** | 3 | Max retry attempts per failed feature |
- | **Session timeout** | None | Per-feature timeout in seconds (e.g. `3600` = 1 hour) |
- | **Feature filter** | All | Run specific features: `F-001:F-005` (range), `F-001,F-003` (list), or mixed `F-001,F-005:F-010` |
- | **Browser verify** | Auto | Run playwright-cli verification for features with `browser_interaction`. Auto = run if playwright-cli installed and features have the field |
+ **Question 4 — Session timeout** (multiSelect: false):
+ - None (default) — No timeout
+ - 30 min — `SESSION_TIMEOUT=1800`
+ - 1 hour — `SESSION_TIMEOUT=3600`
+ - 2 hours — `SESSION_TIMEOUT=7200`

- Default Verbose On. Default Critic to Off unless features have `estimated_complexity: "high"` or above.
+ Note: Due to the 4-question limit per `AskUserQuestion` call, Feature filter and Browser verify use their defaults (all features, auto-detect playwright-cli). If the user selects "Other" on any option, handle their custom input.

- **Environment variable mapping** (for natural language → env var translation):
+ Default Critic to Off unless features have `estimated_complexity: "high"` or above (in which case default to On).

- | User says | Environment variable |
+ **Environment variable mapping** (for translating user responses → env vars):
+
+ | Config choice | Environment variable |
  |-----------|---------------------|
- | "timeout 2 hours" | `SESSION_TIMEOUT=7200` |
- | "max 5 retries" | `MAX_RETRIES=5` |
- | "no verbose" / "quiet" | `VERBOSE=0` |
- | "heartbeat every 60s" | `HEARTBEAT_INTERVAL=60` |
- | "enable critic review" | `ENABLE_CRITIC=true` |
+ | Critic: On | `ENABLE_CRITIC=true` |
+ | Verbose: Off | `VERBOSE=0` |
+ | Verbose: On | `VERBOSE=1` |
+ | Max retries: N | `MAX_RETRIES=N` |
+ | Timeout: value | `SESSION_TIMEOUT=<seconds>` |
  | "skip browser verify" | `BROWSER_VERIFY=false` |
  | "enable browser verify" | `BROWSER_VERIFY=true` |

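The mapping table above turns confirmed choices into the env-string that prefixes the assembled command. A hypothetical helper (function and key names are illustrative, not part of the skill) might translate them like this:

```python
def env_for(config: dict) -> str:
    """Build the pipeline env-string from confirmed configuration choices,
    following the env-var mapping table (helper/key names are illustrative)."""
    env = {"VERBOSE": "1" if config.get("verbose", True) else "0"}
    if config.get("critic"):
        env["ENABLE_CRITIC"] = "true"
    if config.get("max_retries") is not None:
        env["MAX_RETRIES"] = str(config["max_retries"])
    if config.get("timeout_seconds"):
        env["SESSION_TIMEOUT"] = str(config["timeout_seconds"])
    if config.get("browser_verify") is not None:
        env["BROWSER_VERIFY"] = "true" if config["browser_verify"] else "false"
    # Sorted for a stable, reproducible command line
    return " ".join(f"{k}={v}" for k, v in sorted(env.items()))

# Critic on, 5 retries, verbose off:
print(env_for({"critic": True, "max_retries": 5, "verbose": False}))
# → ENABLE_CRITIC=true MAX_RETRIES=5 VERBOSE=0
```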
- Example presentation to user:
- ```
- Pipeline will process N features in Foreground mode:
- - Critic: Off
- - Verbose: On (subagent detection enabled)
- - Max retries: 3
- - Timeout: none
- - Features: all
-
- Want to change any options, or proceed with these defaults?
- ```
+ ⚠️ STOP HERE and wait for user response before continuing to step 7.

- 6. **Show final command**: Assemble the complete command from execution mode + confirmed configuration, and present it to the user.
+ 7. **Show final command**: After user confirms configuration in step 6, assemble the complete command from execution mode + user-confirmed configuration, and present it to the user.

  **Foreground command:**
  ```bash
@@ -171,7 +196,7 @@ Detect user intent from their message, then follow the corresponding workflow:
  --env "VERBOSE=1 ENABLE_CRITIC=true MAX_RETRIES=5"
  ```

- **Manual mode**: Print the assembled command(s) and **stop here**. Do not execute anything. Do not proceed to step 7.
+ **Manual mode**: Print the assembled command(s) and **stop here**. Do not execute anything. Do not proceed to step 8.
  ```
  # To run in foreground:
  VERBOSE=1 dev-pipeline/run.sh run feature-list.json
@@ -183,13 +208,13 @@ Detect user intent from their message, then follow the corresponding workflow:
  dev-pipeline/run.sh status feature-list.json
  ```

- 7. **Confirm and launch** (Foreground and Background only — Manual mode ends at step 6):
+ 8. **Confirm and launch** (Foreground and Background only — Manual mode ends at step 7):

  Ask: "Ready to launch the pipeline with the above command?"

- After user confirms, execute the command from step 6.
+ After user confirms, execute the command from step 7.

- 8. **Post-launch** (depends on execution mode):
+ 9. **Post-launch** (depends on execution mode):

  **If foreground**: Pipeline runs to completion in the terminal. After it finishes:
  - Summarize results: total features, succeeded, failed, skipped
@@ -354,6 +379,9 @@ After pipeline completion, if features have `browser_interaction` fields and `pl
  | All features blocked/failed | Show status, suggest daemon-safe recovery: `dev-pipeline/reset-feature.sh <F-XXX> --clean --run feature-list.json` |
  | `playwright-cli` not installed | Browser verification skipped (non-blocking). Suggest: `npm install -g @playwright/cli@latest && playwright-cli install --skills` |
  | Permission denied on script | Run `chmod +x dev-pipeline/launch-daemon.sh dev-pipeline/run.sh` |
+ | `.env.local` missing or incomplete | Warn: database connection variables not found. Suggest creating env file with required connection variables for the project's database |
+ | Database unreachable | Warn: database features will produce mock-only tests. Suggest checking database service status and connection credentials |
+ | Migrations not applied | Warn: tables or schema referenced in migration files not found in database. Suggest applying pending migrations |

  ### Integration Notes

@@ -0,0 +1,462 @@
+ #!/usr/bin/env python3
+ """
+ dev-pipeline environment preflight checker.
+
+ Detects database type from feature-list.json / .prizmkit/config.json,
+ verifies env vars, tests connectivity, and checks migration status.
+
+ Usage:
+     python3 preflight-check.py [feature-list.json]
+
+ Output: PREFLIGHT lines to stdout (✓ / ⚠ / ℹ), JSON summary to stderr.
+ Exit code: 0 = all clear, 1 = warnings found, 2 = error.
+ """
+
+ import json
+ import glob
+ import os
+ import re
+ import subprocess
+ import sys
+
+ # ── Config ──────────────────────────────────────────────────────
+
+ ENV_FILES = [".env.local", ".env", ".env.development.local", ".env.development"]
+
+ # (group_label, regex_pattern) — matched against env var names.
+ # IMPORTANT: use non-capturing groups (?:...) inside patterns so that
+ # group(1) in the scan regex captures the full variable name.
+ DB_VAR_PATTERNS = [
+     ("SUPABASE_URL", r"(?:NEXT_PUBLIC_)?SUPABASE_URL"),
+     ("SUPABASE_KEY", r"(?:NEXT_PUBLIC_)?SUPABASE_(?:ANON_KEY|SERVICE_ROLE_KEY)"),
+     ("DATABASE_URL", r"DATABASE_URL"),
+     ("DB_CONNECTION", r"DB_(?:HOST|CONNECTION|URL|PORT|NAME|USER|PASSWORD)"),
+     ("FIREBASE", r"(?:NEXT_PUBLIC_)?FIREBASE_(?:API_KEY|PROJECT_ID|AUTH_DOMAIN)"),
+     ("MONGODB", r"MONGO(?:DB)?_(?:URI|URL|CONNECTION)"),
+     ("REDIS", r"REDIS_(?:URL|HOST|PORT)"),
+     ("MYSQL", r"(?:MYSQL|PLANETSCALE)_(?:URL|HOST|DATABASE)"),
+     ("POSTGRES", r"(?:PG|POSTGRES(?:QL)?)_(?:URL|HOST|CONNECTION)"),
+ ]
+
+ DB_KEYWORDS = [
+     "migration", "database", "create table", "table", "rls",
+     "storage bucket", "schema", "model", "prisma", "drizzle",
+     "sequelize", "typeorm", "supabase", "firebase", "mongodb",
+     "postgres", "mysql", "sqlite", "redis",
+ ]
+
+ warnings = []
+ passes = []
+ infos = []
+
+
+ def out(level, msg):
+     """Print a preflight line and collect it."""
+     print(f"PREFLIGHT {level} {msg}")
+     if level == "⚠":
+         warnings.append(msg)
+     elif level == "✓":
+         passes.append(msg)
+     else:
+         infos.append(msg)
+
+
+ # ── 1. Detect DB type and DB-related features ──────────────────
+
+ def detect_db(feature_list_path):
+     """Return (db_type_str, list_of_feature_ids_with_db)."""
+     db_str = ""
+     db_features = []
+
+     try:
+         with open(feature_list_path) as f:
+             data = json.load(f)
+     except Exception:
+         return "", []
+
+     db_str = data.get("global_context", {}).get("database", "")
+
+     if not db_str:
+         try:
+             with open(".prizmkit/config.json") as f:
+                 cfg = json.load(f)
+             db_str = cfg.get("tech_stack", {}).get("database", "")
+         except Exception:
+             pass
+
+     if not db_str:
+         return "", []
+
+     for feat in data.get("features", []):
+         desc = (feat.get("description", "") + " " + feat.get("title", "")).lower()
+         if any(k in desc for k in DB_KEYWORDS):
+             db_features.append(feat["id"])
+
+     return db_str, db_features
+
+
+ # ── 2. Scan env files for DB connection vars ────────────────────
+
+ def scan_env_vars():
+     """Return (env_file_used, {var_name: value})."""
+     for env_file in ENV_FILES:
+         if not os.path.isfile(env_file):
+             continue
+         found = {}
+         with open(env_file) as f:
+             content = f.read()
+         for _group, pattern in DB_VAR_PATTERNS:
+             for m in re.finditer(
+                 r"^(?!#)(" + pattern + r")\s*=\s*(.+)",
+                 content,
+                 re.MULTILINE | re.IGNORECASE,
+             ):
+                 var_name = m.group(1)
+                 var_val = m.group(2).strip().strip('"').strip("'")
+                 if var_val:
+                     found[var_name] = var_val
+         if found:
+             return env_file, found
+     return None, {}
+
+
+ # ── 3. Connectivity checks (per DB type) ───────────────────────
+
+ def _curl_code(url, headers=None, timeout=10):
+     """Run curl and return HTTP status code string."""
+     cmd = ["curl", "-s", "-o", "/dev/null", "-w", "%{http_code}",
+            "--max-time", str(timeout), url]
+     for h in (headers or []):
+         cmd += ["-H", h]
+     try:
+         r = subprocess.run(cmd, capture_output=True, text=True, timeout=timeout + 5)
+         return r.stdout.strip()
+     except Exception:
+         return "000"
+
+
+ def _get_var(env_vars, *patterns):
+     """Find first env var matching any pattern."""
+     for pat in patterns:
+         for k, v in env_vars.items():
+             if re.match(pat + "$", k, re.IGNORECASE) and v:
+                 return k, v
+     return None, None
+
+
+ def check_connectivity(db_type, env_vars):
+     """Test database connectivity. Returns True if connected."""
+     dt = db_type.lower()
+
+     # ── Supabase ──
+     if "supabase" in dt:
+         _, url = _get_var(env_vars, r"(?:NEXT_PUBLIC_)?SUPABASE_URL")
+         _, key = _get_var(env_vars, r"(?:NEXT_PUBLIC_)?SUPABASE_ANON_KEY")
+         if not key:
+             _, key = _get_var(env_vars, r"SUPABASE_SERVICE_ROLE_KEY")
+         if not (url and key):
+             out("⚠", "Supabase URL or anon key not found — cannot test connectivity")
+             return False
+         # Find a table from first migration to test against
+         test_table = "profiles"
+         mig_files = sorted(glob.glob("supabase/migrations/*.sql"))
+         if mig_files:
+             with open(mig_files[0]) as f:
+                 for line in f:
+                     m = re.search(r"CREATE TABLE\s+(?:public\.)?(\w+)", line, re.I)
+                     if m:
+                         test_table = m.group(1)
+                         break
+         code = _curl_code(
+             f"{url}/rest/v1/{test_table}?limit=0",
+             [f"apikey: {key}", f"Authorization: Bearer {key}"],
+         )
+         if code == "200":
+             out("✓", "Database API reachable (Supabase)")
+             return True
+         else:
+             out("⚠", f"Database API unreachable (Supabase HTTP {code})")
+             return False
+
+     # ── PostgreSQL / Neon ──
+     if any(k in dt for k in ("postgres", "pg", "neon")):
+         _, db_url = _get_var(env_vars, r"DATABASE_URL", r"POSTGRES(QL)?_URL", r"PG_URL")
+         if not db_url:
+             out("⚠", "No DATABASE_URL found — cannot test PostgreSQL connectivity")
+             return False
+         try:
+             r = subprocess.run(
+                 ["pg_isready", "-d", db_url],
+                 capture_output=True, text=True, timeout=10,
+             )
+             if r.returncode == 0:
+                 out("✓", "PostgreSQL reachable")
+                 return True
+             out("⚠", f"PostgreSQL unreachable: {r.stderr.strip()}")
+             return False
+         except FileNotFoundError:
+             out("ℹ", "pg_isready not installed — cannot verify PostgreSQL connectivity")
+             return False
+         except Exception as e:
+             out("⚠", f"PostgreSQL connectivity test failed: {e}")
+             return False
+
+     # ── MySQL / PlanetScale / MariaDB ──
+     if any(k in dt for k in ("mysql", "planetscale", "mariadb")):
+         _, db_url = _get_var(env_vars, r"DATABASE_URL", r"MYSQL_(URL|HOST)")
+         if not db_url:
+             out("⚠", "No DATABASE_URL found — cannot test MySQL connectivity")
+             return False
+         try:
+             hostport = db_url.split("@")[-1].split("/")[0] if "@" in db_url else db_url
+             host = hostport.split(":")[0]
+             port_args = ["-P", hostport.split(":")[1]] if ":" in hostport else []
+             r = subprocess.run(
+                 ["mysqladmin", "ping", "-h", host] + port_args,
+                 capture_output=True, text=True, timeout=10,
+             )
+             if "alive" in r.stdout.lower():
+                 out("✓", "MySQL reachable")
+                 return True
+             out("⚠", "MySQL unreachable")
+             return False
+         except FileNotFoundError:
+             out("ℹ", "mysqladmin not installed — cannot verify MySQL connectivity")
+             return False
+         except Exception as e:
+             out("⚠", f"MySQL connectivity test failed: {e}")
+             return False
+
+     # ── MongoDB ──
+     if "mongo" in dt:
+         _, db_url = _get_var(env_vars, r"MONGO(DB)?_(URI|URL|CONNECTION)", r"DATABASE_URL")
+         if not db_url:
+             out("⚠", "No MongoDB URI found — cannot test connectivity")
+             return False
+         try:
+             r = subprocess.run(
+                 ["mongosh", "--eval", "db.runCommand({ping:1})", db_url, "--quiet"],
+                 capture_output=True, text=True, timeout=10,
+             )
+             if r.returncode == 0:
+                 out("✓", "MongoDB reachable")
+                 return True
+             out("⚠", "MongoDB unreachable")
+             return False
+         except FileNotFoundError:
+             out("ℹ", "mongosh not installed — cannot verify MongoDB connectivity")
+             return False
+         except Exception as e:
+             out("⚠", f"MongoDB connectivity test failed: {e}")
+             return False
+
+     # ── Firebase ──
+     if "firebase" in dt:
+         _, project_id = _get_var(env_vars, r"(NEXT_PUBLIC_)?FIREBASE_PROJECT_ID")
+         if not project_id:
+             out("⚠", "No Firebase project ID found — cannot test connectivity")
+             return False
+         code = _curl_code(
+             f"https://firestore.googleapis.com/v1/projects/{project_id}/databases/(default)/documents?pageSize=0"
+         )
+         if code in ("200", "401", "403"):
+             out("✓", "Firebase project reachable")
+             return True
+         out("⚠", f"Firebase unreachable (HTTP {code})")
+         return False
+
+     # ── Generic DATABASE_URL fallback ──
+     _, db_url = _get_var(env_vars, r"DATABASE_URL")
+     if db_url and "://" in db_url:
+         proto = db_url.split("://")[0]
+         if proto in ("postgres", "postgresql"):
+             try:
+                 r = subprocess.run(
+                     ["pg_isready", "-d", db_url],
+                     capture_output=True, text=True, timeout=10,
+                 )
+                 ok = r.returncode == 0
+                 out("✓" if ok else "⚠", f"PostgreSQL {'reachable' if ok else 'unreachable'}")
+                 return ok
+             except FileNotFoundError:
+                 out("ℹ", "pg_isready not installed — cannot verify connectivity")
+         elif proto in ("mysql", "mariadb"):
+             out("ℹ", "MySQL DATABASE_URL found — install mysqladmin to verify connectivity")
+         elif proto in ("mongodb", "mongodb+srv"):
+             out("ℹ", "MongoDB DATABASE_URL found — install mongosh to verify connectivity")
+         else:
+             out("ℹ", f"DATABASE_URL found (protocol: {proto}) — cannot auto-verify")
+         return False
+
+     out("ℹ", f"Database type \"{db_type}\" detected but no connection variables found")
+     return False
+
+
+ # ── 4. Migration status ────────────────────────────────────────
+
+ def check_migrations(db_type, env_vars, connected):
+     """Check whether migrations have been applied."""
+     dt = db_type.lower()
+     checked = False
+
+     # ── Prisma ──
+     if os.path.isdir("prisma/migrations") or os.path.isfile("prisma/schema.prisma"):
+         checked = True
+         try:
+             env = os.environ.copy()
+             env.update(env_vars)
+             r = subprocess.run(
+                 ["npx", "prisma", "migrate", "status"],
+                 capture_output=True, text=True, timeout=30, env=env,
+             )
+             if "not yet been applied" in r.stdout.lower():
+                 out("⚠", "Prisma: unapplied migrations detected")
+                 for line in r.stdout.splitlines():
+                     if "not yet been applied" in line.lower() or line.strip().startswith("- "):
+                         out("⚠", f"  {line.strip()}")
+             elif r.returncode == 0:
+                 out("✓", "Prisma: all migrations applied")
+             else:
+                 snippet = (r.stderr or r.stdout or "").strip()[:200]
+                 snippet = re.sub(r"://[^\s]+@", "://[REDACTED]@", snippet)
+                 out("⚠", f"Prisma migrate status failed: {snippet}")
+         except FileNotFoundError:
+             out("ℹ", "npx not found — cannot check Prisma migration status")
+         except Exception as e:
+             out("⚠", f"Prisma check failed: {e}")
+
+     # ── Drizzle ──
+     if os.path.isdir("drizzle") and glob.glob("drizzle/*.sql"):
+         checked = True
+         out("ℹ", "Drizzle migrations found — verify with `npx drizzle-kit push` or `npx drizzle-kit migrate`")
+
+     # ── Supabase raw SQL ──
+     if os.path.isdir("supabase/migrations") and "supabase" in dt:
+         checked = True
+         url = env_vars.get("NEXT_PUBLIC_SUPABASE_URL", env_vars.get("SUPABASE_URL", ""))
+         key = env_vars.get("NEXT_PUBLIC_SUPABASE_ANON_KEY", env_vars.get("SUPABASE_ANON_KEY", ""))
+
+         if url and key and connected:
+             for mig in sorted(glob.glob("supabase/migrations/*.sql")):
+                 with open(mig) as f:
+                     content = f.read()
+                 bn = os.path.basename(mig)
+                 for match in re.finditer(
+                     r"CREATE TABLE\s+(?:(public|auth|storage)\.)?(\w+)",
+                     content, re.I,
+                 ):
+                     schema = (match.group(1) or "public").lower()
+                     tbl = match.group(2)
+                     if schema != "public":
+                         continue  # auth/storage tables not accessible via REST API
+                     code = _curl_code(
+                         f"{url}/rest/v1/{tbl}?limit=0",
+                         [f"apikey: {key}", f"Authorization: Bearer {key}"],
+                         timeout=5,
+                     )
+                     if code == "200":
+                         out("✓", f"{bn}: table '{tbl}' exists")
+                     else:
+                         out("⚠", f"{bn}: table '{tbl}' NOT FOUND — migration may not be applied")
+         else:
+             n = len(glob.glob("supabase/migrations/*.sql"))
+             out("ℹ", f"{n} Supabase migration file(s) found — cannot verify without API connection")
+
+     # ── Knex / Rails / generic ──
+     for mig_dir in ("migrations", "db/migrate", "db/migrations"):
+         if os.path.isdir(mig_dir) and not checked:
+             checked = True
+             n = len(os.listdir(mig_dir))
+             out("ℹ", f"{n} migration file(s) in {mig_dir}/ — verify manually that all are applied")
+
+     if not checked:
+         out("ℹ", "No migration directory detected — skipping migration check")
+
+
+ # ── 5. Dev server ──────────────────────────────────────────────
+
+ def check_dev_server(feature_list_path):
+     """Check if dev server is running (from browser_interaction URLs)."""
+     try:
+         with open(feature_list_path) as f:
+             data = json.load(f)
+     except Exception:
+         return
+     checked_bases = set()
+     for feat in data.get("features", []):
+         bi = feat.get("browser_interaction")
+         if bi and isinstance(bi, dict) and bi.get("url"):
+             m = re.match(r"(https?://[^/]+)", bi["url"])
+             if m:
+                 base = m.group(1)
+                 if base in checked_bases:
+                     continue
+                 checked_bases.add(base)
+                 code = _curl_code(base, timeout=5)
+                 if code in ("200", "302"):
+                     out("✓", f"Dev server reachable at {base}")
+                 else:
+                     out("ℹ", f"Dev server not running at {base} (AI sessions can start it)")
+
+
+ # ── Main ────────────────────────────────────────────────────────
+
+ def main():
+     feature_list = sys.argv[1] if len(sys.argv) > 1 else "feature-list.json"
+
+     if not os.path.isfile(feature_list):
+         print(f"PREFLIGHT ⚠ Feature list not found: {feature_list}")
+         sys.exit(2)
+
+     # 1. Detect database
+     db_type, db_features = detect_db(feature_list)
+     if not db_type:
+         print("PREFLIGHT ℹ No database configured in global_context — skipping DB checks")
+         check_dev_server(feature_list)
+         _print_summary()
+         return
+     if not db_features:
+         print(f"PREFLIGHT ℹ Database: {db_type} (no features reference DB — skipping detailed checks)")
+         check_dev_server(feature_list)
+         _print_summary()
+         return
+
+     print(f"PREFLIGHT ℹ Database: {db_type}")
+     print(f"PREFLIGHT ℹ DB-related features: {', '.join(db_features)}")
+
+     # 2. Env vars
+     env_file, env_vars = scan_env_vars()
+     if not env_file:
+         out("⚠", "No env file found (.env.local, .env, etc.) — database connection will likely fail")
+     elif not env_vars:
+         out("⚠", f"{env_file} exists but no database connection variables detected")
+     else:
+         for var_name in sorted(env_vars.keys()):
+             out("✓", f"{var_name} configured")
+
+     # 3. Connectivity
+     connected = check_connectivity(db_type, env_vars)
+
+     # 4. Migrations
+     check_migrations(db_type, env_vars, connected)
+
+     # 5. Dev server
+     check_dev_server(feature_list)
+
+     _print_summary()
+
+
+ def _print_summary():
+     """Print JSON summary to stderr and set exit code."""
+     summary = {
+         "pass_count": len(passes),
+         "warn_count": len(warnings),
+         "info_count": len(infos),
+         "warnings": warnings,
+     }
+     print(json.dumps(summary), file=sys.stderr)
+     sys.exit(1 if warnings else 0)
+
+
+ if __name__ == "__main__":
+     main()
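The script above writes human-readable `PREFLIGHT` lines to stdout and a JSON summary to stderr, so downstream tooling can consume a finished run without re-running any checks. A minimal parsing sketch (the function name is illustrative and not part of the package; it assumes the JSON summary is the last stderr line, as the script emits it):

```python
import json

def parse_preflight(stdout_text: str, stderr_text: str) -> dict:
    """Summarize a preflight run from its two output streams, relying only
    on the documented `PREFLIGHT` line prefix and the stderr JSON summary."""
    checks = [l for l in stdout_text.splitlines() if l.startswith("PREFLIGHT")]
    summary = json.loads(stderr_text.strip().splitlines()[-1])
    return {
        "checks": checks,
        "warnings": summary.get("warnings", []),
        "ok": summary.get("warn_count", 0) == 0,
    }

# Example with synthetic output:
result = parse_preflight(
    "PREFLIGHT ✓ DATABASE_URL configured\nPREFLIGHT ⚠ PostgreSQL unreachable",
    '{"pass_count": 1, "warn_count": 1, "info_count": 0, '
    '"warnings": ["PostgreSQL unreachable"]}',
)
print(result["ok"])  # → False
```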
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "prizmkit",
- "version": "1.0.141",
+ "version": "1.0.143",
  "description": "Create a new PrizmKit-powered project with clean initialization — no framework dev files, just what you need.",
  "type": "module",
  "bin": {