sneakoscope 0.8.5 → 0.9.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -59,6 +59,19 @@ Research scouts now use named persona-inspired cognitive lenses: Einstein Scout,
 
  For existing 0.7.x users, the visible change is new report-only evidence, not a route personality rewrite. Team still feels like Team, DFix stays ultralight, DB remains conservative, QA-LOOP still dogfoods, PPT stays information-first, imagegen still requires real raster evidence, and Honest Mode remains the final truth pass. The original strong reminder idea became neutral RecallPulse so user-facing prompts stay short, professional, and non-repetitive; hook messages can point at status, but `mission-status-ledger.json` is the durable source when app-visible text disappears. The planning source is `docs/RECALLPULSE_0_8_0_TASKS.md`, and implementation is designed to land in safe task-sized slices before any enforcement promotion.
 
+ ## 0.9.0 Report-Only Decision Lattice
+
+ Sneakoscope 0.9.0 adds a report-only Decision Lattice planner that uses A* over proof-debt signals to explain which route or verification path the pipeline would prefer. It is an evidence and planning surface, not a runtime shortcut: SKS must not claim speedup, fast-lane accuracy, or reduced verification cost from the lattice until replay or scored eval evidence demonstrates those outcomes.
+
+ The lattice integrates with the existing proof-field and `sks pipeline plan` surfaces. Its reports are expected to show the explored frontier, the selected path, and rejected paths with their proof-debt reasons, so reviewers can audit why a route stayed on the full Team/Honest path or why a smaller verification plan was only proposed. Like RecallPulse, this is designed to land as report-only evidence first; route enforcement and performance claims remain gated by later validation.
+
+ Quick checks:
+
+ ```bash
+ sks proof-field scan --json --intent "small CLI change"
+ sks pipeline plan latest --proof-field --json
+ ```
+
  ## Requirements
 
  - Node.js `>=20.11`
@@ -174,7 +187,7 @@ sks codex-lb repair
  sks
  ```
 
- Bare `sks` can also prompt for codex-lb auth; SKS stores the base URL/key in `~/.codex/sks-codex-lb.env`, loads the provider env key for tmux launches, and syncs the macOS user launch environment so the Codex App can see `CODEX_LB_API_KEY` after restart. npm postinstall upgrades resync that stored env file when postinstall is not stopped by a hard harness conflict, and `sks doctor --fix` does the same during repair. If an older SKS release left the codex-lb dashboard key only in the shared Codex `auth.json` login cache, SKS migrates that key back into `~/.codex/sks-codex-lb.env` when a codex-lb provider or env base URL is already recoverable. It does not rewrite the shared Codex `auth.json` login cache by default; set `SKS_CODEX_LB_SYNC_CODEX_LOGIN=1` only if you intentionally want the old API-key login-cache behavior. When codex-lb is active, SKS opens a fresh `sks-codex-lb-*` tmux session and sweeps older detached codex-lb sessions for the same repo before launch so stale Responses API chains are not reused. Configured launch paths, including non-interactive runs, verify that codex-lb can continue a Responses API chain with `previous_response_id`; if that check fails, SKS bypasses codex-lb for that launch with `model_provider="openai"` instead of letting the Codex session fail mid-work.
+ Bare `sks` can also prompt for codex-lb auth; SKS stores the base URL/key in `~/.codex/sks-codex-lb.env`, writes `model_provider = "codex-lb"` into `~/.codex/config.toml` for Codex App routing, loads the provider env key for tmux launches, and syncs the macOS user launch environment so the Codex App can see `CODEX_LB_API_KEY` after restart. npm postinstall upgrades resync that stored env file and restore the Codex App provider selection when postinstall is not stopped by a hard harness conflict, and `sks doctor --fix` does the same during repair. If an older SKS release left the codex-lb dashboard key only in the shared Codex `auth.json` login cache, SKS migrates that key back into `~/.codex/sks-codex-lb.env` when a codex-lb provider or env base URL is already recoverable. It does not rewrite the shared Codex `auth.json` login cache by default; set `SKS_CODEX_LB_SYNC_CODEX_LOGIN=1` only if you intentionally want the old API-key login-cache behavior. When codex-lb is active, SKS opens a fresh `sks-codex-lb-*` tmux session and sweeps older detached codex-lb sessions for the same repo before launch so stale Responses API chains are not reused. Configured launch paths, including non-interactive runs, verify that codex-lb can continue a Responses API chain with `previous_response_id`; if that check fails, SKS bypasses codex-lb for that launch with `model_provider="openai"` instead of letting the Codex session fail mid-work.
 
  If codex-lb provider auth drifts after launch/reinstall, run `sks doctor --fix` or `sks codex-lb repair`; to replace it, run `sks codex-lb reconfigure --host <domain> --api-key <key>`.
 
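For reference, the `~/.codex/sks-codex-lb.env` file that these repair paths read and rewrite appears in the selftest fixtures later in this diff; a representative sketch (the URL and key below are the selftest placeholder values, not real credentials):

```sh
# Sketch of ~/.codex/sks-codex-lb.env, matching the selftest fixtures in this diff.
export CODEX_LB_BASE_URL='https://lb.example.test/backend-api/codex'
export CODEX_LB_API_KEY='sk-test'
```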
@@ -239,7 +252,7 @@ sks code-structure scan --json
 
  `sks recallpulse` is the 0.8.0 report-only RecallPulse utility. It writes `recallpulse-decision.json`, `mission-status-ledger.json`, `route-proof-capsule.json`, `evidence-envelope.json`, `recallpulse-governance-report.json`, `recallpulse-task-goal-ledger.json`, and `recallpulse-eval-report.json` for the current mission. RecallPulse does not replace route gates, Honest Mode, DB safety, imagegen evidence, or TriWiki validation; it records cache hits, hydration needs, duplicate suppression, route-governance risks, and final-summary-ready durable status so later releases can promote only measured improvements. Checklist updates are sequential: every `Txxx` row is treated as a child `$Goal` checkpoint, and `sks recallpulse checklist ... --task T001 --apply` refuses out-of-order checks unless explicitly overridden.
 
- `sks pipeline plan` shows the active route lane, kept/skipped stages, verification commands, and no-unrequested-fallback invariant. `sks proof-field scan` is the lightweight rubric for small changes; risky or broad signals return to the full Team/Honest path.
+ `sks pipeline plan` shows the active route lane, kept/skipped stages, verification commands, and no-unrequested-fallback invariant. The 0.9.0 Decision Lattice augments this planning surface with report-only A*/proof-debt evidence: frontier paths considered, the selected path, and rejected paths with rejection reasons. `sks proof-field scan` remains the lightweight rubric for small changes; risky or broad signals return to the full Team/Honest path, and no speedup claim is valid without replay or eval evidence.
 
  ### Ambiguity Questions
 
@@ -251,7 +264,7 @@ Clarification asks only for ambiguity that changes execution; predictable defaul
  $PPT create a customer proposal deck as HTML/PDF
  ```
 
- `$PPT` seals presentation context before artifact work and grounds design in `design.md`, getdesign inputs, and source material.
+ `$PPT` seals presentation context before artifact work and grounds design in `design.md`, getdesign inputs, and source material. The route loads `imagegen`; when the sealed deck needs generated raster assets or generated slide visual critique, use Codex App `$imagegen`/`gpt-image-2` and record the real output path in the PPT image/review ledgers.
 
  ## Codex App Usage
 
@@ -275,7 +288,7 @@ For headless remotely controllable Codex App/server sessions on Codex CLI 0.130.
  sks codex-app remote-control -- --help
  ```
 
- `sks codex-app check` reports whether the installed Codex CLI is new enough, whether the required app flags are visible, whether Fast/speed-selector config is unlocked, and whether installed OpenAI default plugins such as Browser, Chrome, Computer Use, Documents, Presentations, Spreadsheets, and LaTeX are enabled. codex-lb can remain configured as a custom provider, but SKS keeps it off the top-level Codex App provider setting so native model, speed, and built-in feature UI stay visible. Codex CLI 0.130.0+ app-server/remote-control threads can pick up config changes live; older CLI/TUI sessions should still be restarted after `.codex/config.toml` or MCP/plugin changes.
+ `sks codex-app check` reports whether the installed Codex CLI is new enough, whether the required app flags are visible, whether Fast/speed-selector config is unlocked, and whether installed OpenAI default plugins such as Browser, Chrome, Computer Use, Documents, Presentations, Spreadsheets, and LaTeX are enabled. When codex-lb is configured, SKS keeps it selected as the top-level Codex App provider while still preserving required app flags and plugin settings. Codex CLI 0.130.0+ app-server/remote-control threads can pick up config changes live; older CLI/TUI sessions should still be restarted after `.codex/config.toml` or MCP/plugin changes.
 
  Then open Codex App and use prompt commands directly in the chat. Examples:
 
package/package.json CHANGED
@@ -1,7 +1,7 @@
  {
  "name": "sneakoscope",
  "displayName": "ㅅㅋㅅ",
- "version": "0.8.5",
+ "version": "0.9.0",
  "description": "Sneakoscope Codex: database-safe Codex CLI/App harness with Team, Goal, AutoResearch, TriWiki, and Honest Mode.",
  "type": "module",
  "homepage": "https://github.com/mandarange/Sneakoscope-Codex#readme",
@@ -387,7 +387,7 @@ export async function repairCodexLbAuth(opts = {}) {
  status = await codexLbStatus(opts);
  }
  }
- if (status.env_key_configured && status.base_url && (!status.ok || status.selected || legacyAuthMigrated || hasTopLevelCodexModeLock(currentConfig))) {
+ if (status.env_key_configured && status.base_url && (!status.ok || !status.selected || legacyAuthMigrated || hasTopLevelCodexModeLock(currentConfig))) {
  await ensureDir(path.dirname(status.config_path));
  const next = normalizeCodexFastModeUiConfig(upsertCodexLbConfig(currentConfig, status.base_url));
  await writeTextAtomic(status.config_path, next);
@@ -426,6 +426,7 @@ export async function ensureCodexLbAuthDuringInstall(opts = {}) {
  if (process.env.SKS_SKIP_POSTINSTALL_CODEX_LB_AUTH === '1' && !opts.force) return { status: 'skipped', reason: 'SKS_SKIP_POSTINSTALL_CODEX_LB_AUTH=1' };
  const status = await codexLbStatus(opts);
  if (!status.selected && !status.provider_configured && !status.env_file) return { status: 'not_configured', codex_lb: status };
+ if (status.ok && !status.selected) return repairCodexLbAuth(opts);
  if (!status.ok) {
  if (status.base_url && (status.env_key_configured || status.provider_configured || status.selected || status.env_base_url_configured)) return repairCodexLbAuth(opts);
  return { status: status.env_key_configured ? 'missing_base_url' : 'missing_env_key', codex_lb: status, config_path: status.config_path, env_path: status.env_path };
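Condensed for illustration, the install-time routing in this hunk (including the new early repair when codex-lb is healthy but deselected) behaves roughly like the sketch below. `status` is reduced to plain booleans, and the final `'ok'` arm is an assumption about the unchanged tail of the function, which this diff does not show:

```javascript
// Hypothetical condensation of ensureCodexLbAuthDuringInstall's routing
// from this diff; the trailing 'ok' arm is assumed, not shown in the hunk.
function installAuthAction(status) {
  // Nothing configured at all: leave the install untouched.
  if (!status.selected && !status.provider_configured && !status.env_file) return 'not_configured';
  // New in 0.9.0: a healthy but deselected provider is repaired (reselected).
  if (status.ok && !status.selected) return 'repair';
  if (!status.ok) {
    // Enough recoverable state exists to repair in place.
    if (status.base_url && (status.env_key_configured || status.provider_configured || status.selected || status.env_base_url_configured)) return 'repair';
    return status.env_key_configured ? 'missing_base_url' : 'missing_env_key';
  }
  return 'ok';
}
```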
@@ -559,7 +560,7 @@ async function syncCodexApiKeyLogin(apiKey, opts = {}) {
  }
 
  function upsertCodexLbConfig(text = '', baseUrl) {
- let next = removeTopLevelTomlKeyIfValue(text, 'model_provider', 'codex-lb');
+ let next = upsertTopLevelTomlString(text, 'model_provider', 'codex-lb');
  const block = [
  '[model_providers.codex-lb]',
  'name = "OpenAI"',
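After this change, a repaired `~/.codex/config.toml` carries both the top-level provider selection and the provider table. A sketch assembled from strings visible in this diff — the base URL is the selftest placeholder, the `env_key` value is checked verbatim by the selftests, and the `base_url` field name (plus any fields not shown) is an assumption:

```toml
model_provider = "codex-lb"

[model_providers.codex-lb]
name = "OpenAI"
base_url = "https://lb.example.test/backend-api/codex"
env_key = "CODEX_LB_API_KEY"
```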
@@ -1233,7 +1234,7 @@ export async function selftestCodexLb(tmp) {
  const codexLbConfig = await safeReadText(path.join(codexLbHome, '.codex', 'config.toml'));
  const codexLbEnv = await safeReadText(path.join(codexLbHome, '.codex', 'sks-codex-lb.env'));
  const codexLbAuth = await safeReadText(path.join(codexLbHome, '.codex', 'auth.json'));
- if (!codexLbSetupJson.ok || codexLbSetupJson.base_url !== 'https://lb.example.test/backend-api/codex' || hasTopLevelCodexLbSelected(codexLbConfig) || !codexLbConfig.includes('[model_providers.codex-lb]') || !codexLbEnv.includes("CODEX_LB_BASE_URL='https://lb.example.test/backend-api/codex'") || !codexLbEnv.includes("CODEX_LB_API_KEY='sk-test'") || codexLbSetupJson.codex_environment?.ok !== true || codexLbSetupJson.codex_login?.status !== 'skipped' || codexLbAuth.trim()) throw new Error('selftest: codex-lb setup');
+ if (!codexLbSetupJson.ok || codexLbSetupJson.base_url !== 'https://lb.example.test/backend-api/codex' || !hasTopLevelCodexLbSelected(codexLbConfig) || !codexLbConfig.includes('[model_providers.codex-lb]') || !codexLbEnv.includes("CODEX_LB_BASE_URL='https://lb.example.test/backend-api/codex'") || !codexLbEnv.includes("CODEX_LB_API_KEY='sk-test'") || codexLbSetupJson.codex_environment?.ok !== true || codexLbSetupJson.codex_login?.status !== 'skipped' || codexLbAuth.trim()) throw new Error('selftest: codex-lb setup');
  const codexLbFailLaunchctl = path.join(codexLbFakeBin, 'launchctl-fail');
  await writeTextAtomic(codexLbFailLaunchctl, '#!/bin/sh\necho "launchctl denied" >&2\nexit 7\n');
  await fsp.chmod(codexLbFailLaunchctl, 0o755);
@@ -1242,14 +1243,14 @@ export async function selftestCodexLb(tmp) {
  if (!hasCodexUnstableFeatureWarningSuppression(codexLbConfig)) throw new Error('selftest: codex-lb setup did not suppress Codex unstable feature warning');
  await initProject(codexLbHome, { installScope: 'global', force: true, repair: true });
  const codexLbRepairSetupConfig = await safeReadText(path.join(codexLbHome, '.codex', 'config.toml'));
- if (hasTopLevelCodexLbSelected(codexLbRepairSetupConfig) || !codexLbRepairSetupConfig.includes('[model_providers.codex-lb]') || !codexLbRepairSetupConfig.includes('https://lb.example.test/backend-api/codex') || codexLbRepairSetupConfig.includes('sk-test')) throw new Error('selftest: init codex-lb');
+ if (!hasTopLevelCodexLbSelected(codexLbRepairSetupConfig) || !codexLbRepairSetupConfig.includes('[model_providers.codex-lb]') || !codexLbRepairSetupConfig.includes('https://lb.example.test/backend-api/codex') || codexLbRepairSetupConfig.includes('sk-test')) throw new Error('selftest: init codex-lb');
  if (!hasCodexUnstableFeatureWarningSuppression(codexLbRepairSetupConfig)) throw new Error('selftest: init codex-lb did not suppress Codex unstable feature warning');
  await writeTextAtomic(path.join(codexLbHome, '.codex', 'config.toml'), `${codexLbConfig}\n[mcp_servers.supabase]\nurl = "https://mcp.supabase.com/mcp?project_ref=ref&read_only=true&features=database,docs"\n`);
  const ptmp = path.join(tmp, 'codex-lb-project-config'), prevHome = process.env.HOME;
  try { process.env.HOME = codexLbHome; await initProject(ptmp, { installScope: 'global' }); }
  finally { if (prevHome === undefined) delete process.env.HOME; else process.env.HOME = prevHome; }
  const pcfg = await safeReadText(path.join(ptmp, '.codex', 'config.toml'));
- if (hasTopLevelCodexLbSelected(pcfg) || !pcfg.includes('[model_providers.codex-lb]') || !pcfg.includes('[mcp_servers.supabase]') || !pcfg.includes('read_only=true')) throw new Error('selftest: project codex-lb');
+ if (!hasTopLevelCodexLbSelected(pcfg) || !pcfg.includes('[model_providers.codex-lb]') || !pcfg.includes('[mcp_servers.supabase]') || !pcfg.includes('read_only=true')) throw new Error('selftest: project codex-lb');
  if (!hasCodexUnstableFeatureWarningSuppression(pcfg)) throw new Error('selftest: project codex-lb config did not suppress Codex unstable feature warning');
  await writeTextAtomic(path.join(codexLbHome, '.codex', 'auth.json'), '{"auth_mode":"browser"}\n');
  const codexLbRepair = await runProcess(process.execPath, [path.join(packageRoot(), 'bin', 'sks.mjs'), 'auth', 'repair', '--json'], { cwd: tmp, env: codexLbEnvForSelftest, timeoutMs: 15000, maxOutputBytes: 64 * 1024 });
@@ -1308,7 +1309,7 @@ export async function selftestCodexLb(tmp) {
  const codexLbPostBootstrapConfig = await safeReadText(path.join(codexLbHome, '.codex', 'config.toml'));
  const codexLbLoginCallsAfterBootstrap = await codexLbLoginCallCount(codexLbHome);
  if (!codexLbPostBootstrapAuth.includes('"auth_mode":"browser"') || codexLbPostBootstrapAuth.includes('sk-test') || codexLbLoginCallsAfterBootstrap !== codexLbLoginCallsBeforeBootstrap) throw new Error('selftest: postinstall drift auth');
- if (hasTopLevelCodexLbSelected(codexLbPostBootstrapConfig) || !codexLbPostBootstrapConfig.includes('[model_providers.codex-lb]') || !codexLbPostBootstrapConfig.includes('https://lb.example.test/backend-api/codex') || codexLbPostBootstrapConfig.includes('sk-test')) throw new Error('selftest: postinstall drift config');
+ if (!hasTopLevelCodexLbSelected(codexLbPostBootstrapConfig) || !codexLbPostBootstrapConfig.includes('[model_providers.codex-lb]') || !codexLbPostBootstrapConfig.includes('https://lb.example.test/backend-api/codex') || codexLbPostBootstrapConfig.includes('sk-test')) throw new Error('selftest: postinstall drift config');
  const doctorProject = tmpdir();
  await ensureDir(path.join(doctorProject, '.git'));
  await writeTextAtomic(path.join(doctorProject, 'package.json'), '{"name":"codex-lb-doctor-project","version":"0.0.0"}\n');
@@ -1325,7 +1326,7 @@ export async function selftestCodexLb(tmp) {
  const codexLbDoctorJson = JSON.parse(codexLbDoctorRepair.stdout);
  const codexLbDoctorAuth = await safeReadText(path.join(codexLbHome, '.codex', 'auth.json'));
  const codexLbDoctorConfig = await safeReadText(path.join(codexLbHome, '.codex', 'config.toml'));
- if (!codexLbDoctorJson.repair?.codex_lb?.ok || !codexLbDoctorJson.repair.codex_lb.config_repaired || !codexLbDoctorJson.codex_lb?.ok || !codexLbDoctorAuth.includes('"auth_mode":"browser"') || codexLbDoctorAuth.includes('sk-test') || hasTopLevelCodexLbSelected(codexLbDoctorConfig) || !codexLbDoctorConfig.includes('https://lb.example.test/backend-api/codex') || !hasCodexUnstableFeatureWarningSuppression(codexLbDoctorConfig)) throw new Error('selftest: doctor codex-lb');
+ if (!codexLbDoctorJson.repair?.codex_lb?.ok || !codexLbDoctorJson.repair.codex_lb.config_repaired || !codexLbDoctorJson.codex_lb?.ok || !codexLbDoctorAuth.includes('"auth_mode":"browser"') || codexLbDoctorAuth.includes('sk-test') || !hasTopLevelCodexLbSelected(codexLbDoctorConfig) || !codexLbDoctorConfig.includes('https://lb.example.test/backend-api/codex') || !hasCodexUnstableFeatureWarningSuppression(codexLbDoctorConfig)) throw new Error('selftest: doctor codex-lb');
  const codexLbContext7Bin = path.join(tmp, 'codex-lb-context7-bin');
  await ensureDir(codexLbContext7Bin);
  await writeTextAtomic(path.join(codexLbContext7Bin, 'codex'), '#!/bin/sh\nif [ "$1" = "--version" ]; then echo "codex-cli 99.0.0"; exit 0; fi\nif [ "$CODEX_LB_API_KEY" ]; then echo "context7 leaked CODEX_LB_API_KEY" >&2; exit 77; fi\nif [ "$1" = "mcp" ] && [ "$2" = "list" ]; then echo ""; exit 0; fi\nif [ "$1" = "mcp" ] && [ "$2" = "add" ]; then echo "context7 added"; exit 0; fi\necho "unexpected codex $*" >&2\nexit 2\n');
@@ -1388,7 +1389,7 @@ export async function selftestCodexLb(tmp) {
  const codexLbLegacyDoctorEnv = await safeReadText(path.join(codexLbHome, '.codex', 'sks-codex-lb.env'));
  const codexLbLegacyDoctorConfig = await safeReadText(path.join(codexLbHome, '.codex', 'config.toml'));
  const codexLbLegacyDoctorAuth = await safeReadText(path.join(codexLbHome, '.codex', 'auth.json'));
- if (!codexLbLegacyDoctorJson.repair?.codex_lb?.ok || !codexLbLegacyDoctorJson.repair.codex_lb.legacy_auth_migrated || !codexLbLegacyDoctorEnv.includes("CODEX_LB_API_KEY='sk-legacy-doctor'") || !codexLbLegacyDoctorAuth.includes('"auth_mode":"apikey"') || !codexLbLegacyDoctorAuth.includes('sk-legacy-doctor') || hasTopLevelCodexLbSelected(codexLbLegacyDoctorConfig) || !codexLbLegacyDoctorConfig.includes('env_key = "CODEX_LB_API_KEY"')) throw new Error('selftest: legacy doctor codex-lb restore');
+ if (!codexLbLegacyDoctorJson.repair?.codex_lb?.ok || !codexLbLegacyDoctorJson.repair.codex_lb.legacy_auth_migrated || !codexLbLegacyDoctorEnv.includes("CODEX_LB_API_KEY='sk-legacy-doctor'") || !codexLbLegacyDoctorAuth.includes('"auth_mode":"apikey"') || !codexLbLegacyDoctorAuth.includes('sk-legacy-doctor') || !hasTopLevelCodexLbSelected(codexLbLegacyDoctorConfig) || !codexLbLegacyDoctorConfig.includes('env_key = "CODEX_LB_API_KEY"')) throw new Error('selftest: legacy doctor codex-lb restore');
  await writeTextAtomic(path.join(codexLbHome, '.codex', 'sks-codex-lb.env'), "export CODEX_LB_BASE_URL='https://lb.example.test/backend-api/codex'\n");
  await writeTextAtomic(path.join(codexLbHome, '.codex', 'config.toml'), 'model = "gpt-5.5"\nservice_tier = "fast"\n');
  await writeTextAtomic(path.join(codexLbHome, '.codex', 'auth.json'), '{"auth_mode":"apikey","key":"sk-env-only"}\n');
@@ -1404,7 +1405,7 @@ export async function selftestCodexLb(tmp) {
  const codexLbEnvOnlyPostinstallConfig = await safeReadText(path.join(codexLbHome, '.codex', 'config.toml'));
  const codexLbEnvOnlyPostinstallAuth = await safeReadText(path.join(codexLbHome, '.codex', 'auth.json'));
  const codexLbLoginCallsAfterEnvOnlyPostinstall = await codexLbLoginCallCount(codexLbHome);
- if (!String(codexLbEnvOnlyPostinstall.stdout || '').includes('codex-lb auth: restored from existing Codex login cache') || !codexLbEnvOnlyPostinstallEnv.includes("CODEX_LB_API_KEY='sk-env-only'") || !codexLbEnvOnlyPostinstallConfig.includes('env_key = "CODEX_LB_API_KEY"') || !codexLbEnvOnlyPostinstallAuth.includes('sk-env-only') || codexLbLoginCallsAfterEnvOnlyPostinstall !== codexLbLoginCallsBeforeEnvOnlyPostinstall) throw new Error('selftest: env-only codex-lb postinstall restore');
+ if (!String(codexLbEnvOnlyPostinstall.stdout || '').includes('codex-lb auth: restored from existing Codex login cache') || !codexLbEnvOnlyPostinstallEnv.includes("CODEX_LB_API_KEY='sk-env-only'") || !codexLbEnvOnlyPostinstallConfig.includes('env_key = "CODEX_LB_API_KEY"') || !hasTopLevelCodexLbSelected(codexLbEnvOnlyPostinstallConfig) || !codexLbEnvOnlyPostinstallAuth.includes('sk-env-only') || codexLbLoginCallsAfterEnvOnlyPostinstall !== codexLbLoginCallsBeforeEnvOnlyPostinstall) throw new Error('selftest: env-only codex-lb postinstall restore');
  await writeTextAtomic(path.join(codexLbHome, '.codex', 'sks-codex-lb.env'), "export CODEX_LB_BASE_URL='https://lb.example.test/backend-api/codex'\n");
  await writeTextAtomic(path.join(codexLbHome, '.codex', 'config.toml'), 'model = "gpt-5.5"\nservice_tier = "fast"\n');
  await writeTextAtomic(path.join(codexLbHome, '.codex', 'auth.json'), '{"auth_mode":"apikey","key":"sk-env-only-doctor"}\n');
@@ -1422,7 +1423,7 @@ export async function selftestCodexLb(tmp) {
  const codexLbEnvOnlyDoctorEnv = await safeReadText(path.join(codexLbHome, '.codex', 'sks-codex-lb.env'));
  const codexLbEnvOnlyDoctorConfig = await safeReadText(path.join(codexLbHome, '.codex', 'config.toml'));
  const codexLbEnvOnlyDoctorAuth = await safeReadText(path.join(codexLbHome, '.codex', 'auth.json'));
- if (!codexLbEnvOnlyDoctorJson.repair?.codex_lb?.ok || !codexLbEnvOnlyDoctorJson.repair.codex_lb.legacy_auth_migrated || !codexLbEnvOnlyDoctorEnv.includes("CODEX_LB_API_KEY='sk-env-only-doctor'") || !codexLbEnvOnlyDoctorConfig.includes('env_key = "CODEX_LB_API_KEY"') || !codexLbEnvOnlyDoctorAuth.includes('sk-env-only-doctor')) throw new Error('selftest: env-only doctor codex-lb restore');
+ if (!codexLbEnvOnlyDoctorJson.repair?.codex_lb?.ok || !codexLbEnvOnlyDoctorJson.repair.codex_lb.legacy_auth_migrated || !codexLbEnvOnlyDoctorEnv.includes("CODEX_LB_API_KEY='sk-env-only-doctor'") || !codexLbEnvOnlyDoctorConfig.includes('env_key = "CODEX_LB_API_KEY"') || !hasTopLevelCodexLbSelected(codexLbEnvOnlyDoctorConfig) || !codexLbEnvOnlyDoctorAuth.includes('sk-env-only-doctor')) throw new Error('selftest: env-only doctor codex-lb restore');
  await writeTextAtomic(path.join(codexLbHome, '.codex', 'sks-codex-lb.env'), "export CODEX_LB_API_KEY='sk-test'\n");
  const codexLbMissingCli = await runProcess(process.execPath, [path.join(packageRoot(), 'bin', 'sks.mjs'), 'postinstall'], {
  cwd: tmp,
@@ -1527,7 +1528,7 @@ function hasTopLevelCodexModeLock(text = '') {
  const lines = String(text || '').split('\n');
  const firstTable = lines.findIndex((x) => /^\s*\[.+\]\s*$/.test(x));
  const top = (firstTable === -1 ? lines : lines.slice(0, firstTable)).join('\n');
- return /(^|\n)\s*model_provider\s*=\s*"codex-lb"\s*(\n|$)/.test(top) || /(^|\n)\s*model_reasoning_effort\s*=/.test(top);
+ return /(^|\n)\s*model_reasoning_effort\s*=/.test(top);
  }
 
  function hasDeprecatedCodexHooksFeatureFlag(text = '') {
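The narrowed mode-lock check can be exercised standalone. This sketch copies the updated logic from the hunk above and confirms that a top-level `model_provider = "codex-lb"` no longer counts as a lock, while a top-level `model_reasoning_effort` still does:

```javascript
// Copy of the updated hasTopLevelCodexModeLock from this diff: scan only the
// top-level TOML section (before the first [table]) for model_reasoning_effort.
function hasTopLevelCodexModeLock(text = '') {
  const lines = String(text || '').split('\n');
  const firstTable = lines.findIndex((x) => /^\s*\[.+\]\s*$/.test(x));
  const top = (firstTable === -1 ? lines : lines.slice(0, firstTable)).join('\n');
  return /(^|\n)\s*model_reasoning_effort\s*=/.test(top);
}

console.log(hasTopLevelCodexModeLock('model_provider = "codex-lb"\n'));     // false
console.log(hasTopLevelCodexModeLock('model_reasoning_effort = "high"\n')); // true
console.log(hasTopLevelCodexModeLock('[profiles.a]\nmodel_reasoning_effort = "high"\n')); // false
```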
package/src/cli/main.mjs CHANGED
@@ -1148,7 +1148,8 @@ async function codexLbCommand(action = 'status', args = []) {
  console.log(`Provider: ${status.provider_configured ? 'yes' : 'no'}`);
  console.log(`Env file: ${status.env_file ? status.env_path : 'missing'}`);
  if (status.base_url) console.log(`Base URL: ${status.base_url}`);
- if (!status.ok) console.log('\nRun: sks codex-lb setup --host <domain> --api-key <key>');
+ if (status.ok && !status.selected) console.log('\nRun: sks codex-lb repair to activate codex-lb for Codex App.');
+ else if (!status.ok) console.log('\nRun: sks codex-lb setup --host <domain> --api-key <key>');
  else console.log('\nRepair provider auth: sks codex-lb repair');
  return;
  }
@@ -1498,7 +1499,7 @@ function usage(args = []) {
  openclaw: ['OpenClaw', '', ' sks openclaw install', ' sks openclaw path', ' sks openclaw print SKILL.md', '', 'Installs an OpenClaw skill package under ~/.openclaw/skills/sneakoscope-codex so OpenClaw agents can attach skills: [sneakoscope-codex] with the shell tool and call local SKS commands from a project root.'],
  team: ['Team', '', ' sks team "task" executor:5 reviewer:6 user:1', ' sks team open-tmux latest', ' sks team watch latest', ' sks team lane latest --agent analysis_scout_1 --follow', ' sks team message latest --from analysis_scout_1 --to executor_1 --message "handoff note"', ' sks team cleanup-tmux latest', '', '$Team auto-seals a route contract, opens scout-first tmux lanes when available, then runs scouts -> TriWiki attention -> debate -> runtime graph/inbox -> fresh executors -> review -> cleanup -> reflection -> Honest.'],
  'qa-loop': ['QA-LOOP', '', ' sks qa-loop prepare "QA this app"', ' sks qa-loop answer <MISSION_ID> answers.json', ' sks qa-loop run <MISSION_ID> --max-cycles 8', '', 'Report: YYYY-MM-DD-v<version>-qa-report.md'],
- ppt: ['PPT', '', ' $PPT 투자자용 피치덱을 HTML 기반 PDF로 만들어줘', ' $PPT 우리 SaaS 소개자료 만들어줘', ' sks ppt build latest --json', ' sks ppt status latest --json', '', '$PPT infers delivery context, audience profile, STP strategy, decision context, and 3+ pain-point/solution/aha mappings before source research, design-system work, HTML/PDF export, render QA, fact-ledger validation, and bounded review-loop validation. Independent strategy/render/file-write phases run in parallel where inputs allow and are recorded in ppt-parallel-report.json. The visual system must stay simple, restrained, and information-first; editable source HTML is kept under source-html/, PPT-only temporary build files are cleaned, and installed skills/MCPs outside the $PPT allowlist are ignored. Design uses getdesign-reference plus the built-in PPT design pipeline; Codex App $imagegen/gpt-image-2 and Context7 are conditional only when the sealed PPT contract needs raster assets, slide visual critique, or current external docs. Missing required $imagegen/gpt-image-2 output blocks instead of being simulated.'],
+ ppt: ['PPT', '', ' $PPT 투자자용 피치덱을 HTML 기반 PDF로 만들어줘', ' $PPT 우리 SaaS 소개자료 만들어줘', ' sks ppt build latest --json', ' sks ppt status latest --json', '', '$PPT infers delivery context, audience profile, STP strategy, decision context, and 3+ pain-point/solution/aha mappings before source research, design-system work, HTML/PDF export, render QA, fact-ledger validation, and bounded review-loop validation. Independent strategy/render/file-write phases run in parallel where inputs allow and are recorded in ppt-parallel-report.json. The visual system must stay simple, restrained, and information-first; editable source HTML is kept under source-html/, PPT-only temporary build files are cleaned, and installed skills/MCPs outside the $PPT allowlist are ignored. Design uses getdesign-reference plus the built-in PPT design pipeline; imagegen is a required PPT skill so any needed raster assets or generated slide visual critique must invoke Codex App $imagegen/gpt-image-2 and save real outputs into the mission assets/review evidence paths. Context7 is conditional only when the sealed PPT contract needs current external docs. Missing required $imagegen/gpt-image-2 output blocks instead of being simulated.'],
  'image-ux-review': ['Image UX Review', '', ' $Image-UX-Review localhost 화면을 이미지 생성 리뷰 루프로 검수해줘', ' $UX-Review 이 스크린샷을 gpt-image-2 콜아웃 리뷰로 분석하고 고쳐줘', ' sks image-ux-review status latest --json', '', '$Image-UX-Review captures or receives source UI screenshots, runs Codex App $imagegen/gpt-image-2 to create generated annotated review images with numbered callouts, then extracts those generated images into image-ux-issue-ledger.json. Text-only screenshot critique cannot pass image-ux-review-gate.json; missing generated review images remain an explicit blocker.'],
  goal: ['Goal', '', ' sks goal create "task"', ' sks goal status latest', ' sks goal pause latest', ' sks goal resume latest', ' sks goal clear latest'],
  'codex-app': ['Codex App', '', ' sks bootstrap', ' sks codex-app check', ' sks codex-app remote-control --status', ' sks dollar-commands', ' cat .codex/SNEAKOSCOPE.md'],
@@ -1731,7 +1732,7 @@ async function doctor(args) {
  const skillStatus = await checkRequiredSkills(root);
  const globalSkillStatus = await checkRequiredSkills(null, globalCodexSkillsRoot());
  const codexLb = await codexLbStatus();
- const codexLbReady = (!codexLb.selected && !codexLb.provider_configured && !codexLb.env_file) || codexLb.ok;
+ const codexLbReady = (!codexLb.selected && !codexLb.provider_configured && !codexLb.env_file) || (codexLb.ok && codexLb.selected);
  const guardStatus = await harnessGuardStatus(root);
  const versioningInfo = await versioningStatus(root);
  const codexApp = await codexAppFilesStatus(root, skillStatus, versioningInfo);
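The doctor readiness rule from the hunk above, pulled out for illustration: an untouched machine still counts as ready, while any configured codex-lb state must now be both healthy and actually selected as the Codex App provider. Here `s` stands in for the `codexLbStatus()` result with only the fields the expression reads:

```javascript
// Extracted verbatim from the doctor() change in this diff.
function codexLbReady(s) {
  return (!s.selected && !s.provider_configured && !s.env_file) || (s.ok && s.selected);
}

console.log(codexLbReady({ selected: false, provider_configured: false, env_file: false, ok: false })); // true
console.log(codexLbReady({ selected: false, provider_configured: true, env_file: true, ok: true }));    // false
console.log(codexLbReady({ selected: true, provider_configured: true, env_file: true, ok: true }));     // true
```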
@@ -3650,6 +3651,7 @@ async function selftest() {
  const pptRoute = routePrompt('$PPT 투자자용 피치덱 만들어줘');
  if (pptRoute?.id !== 'PPT') throw new Error('selftest: $PPT did not route to presentation pipeline');
  if (JSON.stringify(pptRoute.requiredSkills) !== JSON.stringify(PPT_PIPELINE_SKILL_ALLOWLIST)) throw new Error(`selftest: PPT route required skills are not allowlisted: ${pptRoute.requiredSkills.join(',')}`);
+ if (!pptRoute.requiredSkills.includes('imagegen')) throw new Error('selftest: PPT route must load imagegen so required PPT raster assets use Codex App $imagegen');
3653
3655
  if (pptRoute.requiredSkills.includes('design-artifact-expert') || pptRoute.requiredSkills.includes('design-ui-editor') || pptRoute.requiredSkills.includes('design-system-builder')) throw new Error('selftest: PPT route still requires generic design skills');
3654
3656
  const pptSchema = buildQuestionSchema('$PPT 투자자용 피치덱 만들어줘');
3655
3657
  const pptSlotIds = pptSchema.slots.map((s) => s.id);
@@ -3661,7 +3663,7 @@ async function selftest() {
  if (!pptSkillText.includes('simple, restrained, and information-first') || !pptSkillText.includes('over-designed decoration') || !pptSkillText.includes(CODEX_APP_IMAGE_GENERATION_DOC_URL) || !pptSkillText.includes(CODEX_IMAGEGEN_REQUIRED_POLICY) || !pptSkillText.includes(AWESOME_DESIGN_MD_REFERENCE.url) || !pptSkillText.includes('only design decision SSOT') || !pptSkillText.includes('instead of treating references as parallel authorities')) throw new Error('selftest: generated PPT skill missing restrained design/imagegen/fused-SSOT guidance');
  if (!pptSkillText.includes('PPT pipeline allowlist') || !pptSkillText.includes('ignore installed skills and MCPs') || !pptSkillText.includes('prevent AI-like generic presentation design') || !pptSkillText.includes('Do not use generic design skills such as design-artifact-expert')) throw new Error('selftest: generated PPT skill missing pipeline allowlist enforcement');
  if (!pptSkillText.includes('source-html/') || !pptSkillText.includes('temporary build files') || !pptSkillText.includes('ppt-parallel-report.json')) throw new Error('selftest: generated PPT skill missing source preservation/temp cleanup/parallel guidance');
- if (!pptSkillText.includes('ppt-fact-ledger.json') || !pptSkillText.includes('ppt-image-asset-ledger.json') || !pptSkillText.includes('direct API fallback') || !pptSkillText.includes('ppt-review-ledger.json') || !pptSkillText.includes('ppt-iteration-report.json') || !pptSkillText.includes('never simulate missing gpt-image-2 output')) throw new Error('selftest: generated PPT skill missing fact/image/review loop anti-fake guidance');
+ if (!pptSkillText.includes('ppt-fact-ledger.json') || !pptSkillText.includes('ppt-image-asset-ledger.json') || !pptSkillText.includes('direct API fallback') || !pptSkillText.includes('ppt-review-ledger.json') || !pptSkillText.includes('ppt-iteration-report.json') || !pptSkillText.includes('never simulate missing gpt-image-2 output') || !pptSkillText.includes('always loads imagegen') || !pptSkillText.includes('immediately invoke Codex App `$imagegen`')) throw new Error('selftest: generated PPT skill missing fact/image/review loop anti-fake guidance');
  if (routeRequiresSubagents(pptRoute, '$PPT 투자자용 피치덱 만들어줘')) throw new Error('selftest: PPT route should not require subagents by default');
  if (!reflectionRequiredForRoute(pptRoute)) throw new Error('selftest: PPT route should require reflection');
  const pptMission = await createMission(tmp, { mode: 'ppt', prompt: '$PPT 투자자용 피치덱 만들어줘' });
@@ -3753,6 +3755,7 @@ async function selftest() {
  const requiredImageBuild = JSON.parse(requiredImageBuildResult.stdout);
  const requiredImageLedger = await readJson(path.join(requiredImagePptMission.dir, PPT_IMAGE_ASSET_LEDGER_ARTIFACT));
  if (requiredImageBuild.ok || requiredImageBuild.gate?.passed || !requiredImageBuild.gate?.image_asset_ledger_created || requiredImageBuild.gate?.image_asset_policy_satisfied !== false || !requiredImageLedger.required || requiredImageLedger.passed || !requiredImageLedger.blockers?.includes('missing_codex_app_imagegen_gpt_image_2_asset_evidence') || requiredImageLedger.generated_count !== 0) throw new Error('selftest: required PPT image assets were not blocked without Codex App imagegen evidence');
+ if (requiredImageLedger.imagegen_execution?.command !== '$imagegen' || requiredImageLedger.imagegen_execution?.required_skill !== 'imagegen' || !requiredImageLedger.assets?.every((asset) => asset.imagegen_invocation?.command === '$imagegen')) throw new Error('selftest: required PPT image assets did not carry Codex App $imagegen invocation instructions');
  const installUxSchema = buildQuestionSchema('SKS first install/bootstrap UX and Context7 MCP setup improvement');
  const installUxSlotIds = installUxSchema.slots.map((s) => s.id);
  if (installUxSchema.domain_hints.includes('uiux') || installUxSlotIds.includes('VISUAL_REGRESSION_REQUIRED')) throw new Error('selftest: CLI UX install prompt should not ask visual UI questions');
@@ -3825,20 +3828,20 @@ async function selftest() {
  if (!harnessReport.forgetting.fixture.passed || !harnessReport.tmux.views.includes('Harness Experiments View') || !harnessReport.reliability.tool_error_taxonomy.includes('Unknown')) throw new Error('selftest: harness growth fixture incomplete');
  const proofField = await proofFieldFixture();
  if (!proofField.validation.ok || !validateProofFieldReport(proofField.report).ok) throw new Error('selftest: proof field report invalid');
- if (!proofField.checks.route_cone_selected || !proofField.checks.cli_cone_selected || !proofField.checks.catastrophic_guard_present || !proofField.checks.negative_release_work_recorded || !proofField.checks.outcome_rubric_present || !proofField.checks.adversarial_lenses_present || !proofField.checks.route_economy_present || !proofField.checks.simplicity_score_usable || !proofField.checks.execution_fast_lane_selected) throw new Error('selftest: proof field fixture checks incomplete');
+ if (!proofField.checks.route_cone_selected || !proofField.checks.cli_cone_selected || !proofField.checks.catastrophic_guard_present || !proofField.checks.negative_release_work_recorded || !proofField.checks.outcome_rubric_present || !proofField.checks.adversarial_lenses_present || !proofField.checks.route_economy_present || !proofField.checks.decision_lattice_present || !proofField.checks.decision_lattice_report_only || !proofField.checks.decision_lattice_selected_path || !proofField.checks.decision_lattice_frontier_present || !proofField.checks.decision_lattice_rejections_present || !proofField.checks.decision_lattice_scoring_formula_present || !proofField.checks.simplicity_score_usable || !proofField.checks.execution_fast_lane_selected) throw new Error('selftest: proof field fixture checks incomplete');
  if (!speedLanePolicyText().includes('proof_field_fast_lane') || !proofField.report.execution_lane?.skip_when_fast?.includes('planning_debate')) throw new Error('selftest: Proof Field speed lane policy missing');
  const fastPipelinePlan = buildPipelinePlan({ route: routePrompt('$Team small CLI help update'), task: 'small CLI help surface update', proofField: proofField.report });
  if (!validatePipelinePlan(fastPipelinePlan).ok || fastPipelinePlan.runtime_lane?.lane !== 'proof_field_fast_lane' || !fastPipelinePlan.skipped_stages.includes('planning_debate') || !fastPipelinePlan.invariants.includes('no_unrequested_fallback_code')) throw new Error('selftest: pipeline plan did not encode fast lane stage skips and fallback guard');
  const broadProofField = await buildProofField(tmp, { intent: 'database security route refactor', changedFiles: ['src/core/db-safety.mjs', 'src/core/routes.mjs', 'src/cli/main.mjs', 'README.md'] });
  const broadPipelinePlan = buildPipelinePlan({ route: routePrompt('$Team database security route refactor'), task: 'database security route refactor', proofField: broadProofField });
  if (!validatePipelinePlan(broadPipelinePlan).ok || broadPipelinePlan.runtime_lane?.lane === 'proof_field_fast_lane' || broadPipelinePlan.skipped_stages.includes('planning_debate')) throw new Error('selftest: pipeline plan did not fail closed for broad/security work');
- if (broadPipelinePlan.route_economy?.mode !== 'report_only' || !broadPipelinePlan.route_economy.active_team_triggers?.includes('broad_change_set') || !broadPipelinePlan.route_economy.verification_stage_cache_key) throw new Error('selftest: route economy projection missing from pipeline plan');
+ if (broadPipelinePlan.route_economy?.mode !== 'report_only' || !broadPipelinePlan.route_economy.active_team_triggers?.includes('broad_change_set') || !broadPipelinePlan.route_economy.verification_stage_cache_key || !broadPipelinePlan.route_economy.decision_lattice?.report_only) throw new Error('selftest: route economy projection missing from pipeline plan');
  const workflowPerf = await runWorkflowPerfBench(tmp, {
  iterations: 2,
  intent: 'small CLI help surface update',
  changedFiles: ['src/cli/maintenance-commands.mjs', 'src/core/routes.mjs']
  });
- if (!validateWorkflowPerfReport(workflowPerf).ok || workflowPerf.metrics.decision_mode !== 'fast_lane' || workflowPerf.metrics.execution_lane !== 'proof_field_fast_lane' || workflowPerf.metrics.pipeline_lane !== 'proof_field_fast_lane' || !workflowPerf.metrics.fast_lane_eligible || !workflowPerf.metrics.fast_lane_allowed || Number(workflowPerf.metrics.simplicity_score) < 0.75 || Number(workflowPerf.metrics.outcome_criteria_passed) < 3) throw new Error('selftest: workflow perf proof field did not produce a valid outcome-scored fast lane report');
+ if (!validateWorkflowPerfReport(workflowPerf).ok || workflowPerf.metrics.decision_mode !== 'fast_lane' || workflowPerf.metrics.execution_lane !== 'proof_field_fast_lane' || workflowPerf.metrics.pipeline_lane !== 'proof_field_fast_lane' || !workflowPerf.metrics.fast_lane_eligible || !workflowPerf.metrics.fast_lane_allowed || !workflowPerf.metrics.decision_lattice_valid || Number(workflowPerf.metrics.decision_lattice_frontier_count) < 1 || Number(workflowPerf.metrics.simplicity_score) < 0.75 || Number(workflowPerf.metrics.outcome_criteria_passed) < 3) throw new Error('selftest: workflow perf proof field did not produce a valid outcome-scored fast lane report');
  if (classifyToolError({ message: 'operation timed out' }) !== 'Timeout' || classifyToolError({ message: 'unclassified weirdness' }) !== 'Unknown') throw new Error('selftest: tool error taxonomy classification');
  const coord = rgbaToWikiCoord({ r: 12, g: 34, b: 56, a: 255 });
  if (coord.schema !== 'sks.wiki-coordinate.v1' || coord.xyzw.length !== 4) throw new Error('selftest: RGBA wiki coordinate conversion');
@@ -969,6 +969,9 @@ export async function proofFieldCommand(sub, args = []) {
  console.log(`Workflow complexity: ${report.workflow_complexity.band} (${report.workflow_complexity.score})`);
  if (report.team_trigger_matrix.active_triggers.length) console.log(`Team triggers: ${report.team_trigger_matrix.active_triggers.join(', ')}`);
  console.log(`Proof cones: ${report.proof_cones.map((cone) => cone.id).join(', ')}`);
+ if (report.decision_lattice?.selected_path?.id) {
+ console.log(`Decision lattice: ${report.decision_lattice.selected_path.id} f=${report.decision_lattice.selected_path.cost?.f ?? 'n/a'} frontier=${report.decision_lattice.frontier?.expanded_order?.length || 0} rejected=${report.decision_lattice.rejected_alternatives?.length || 0}`);
+ }
  console.log(`Verification: ${report.fast_lane_decision.verification.join('; ')}`);
  console.log(`Report: ${path.relative(root, report.report_path)}`);
  }
@@ -401,7 +401,6 @@ async function codexFastModeConfigStatus(opts = {}) {
  if (!config.text) continue;
  const topLevel = topLevelToml(config.text);
  if (/(^|\n)\s*model_reasoning_effort\s*=/.test(topLevel)) blockers.push(`${config.scope}:top_level_model_reasoning_effort`);
- if (/(^|\n)\s*model_provider\s*=\s*"codex-lb"\s*(?:#.*)?(?=\n|$)/.test(topLevel)) blockers.push(`${config.scope}:top_level_codex_lb_provider`);
  if (/(^|\n)\s*fast_default_opt_out\s*=\s*true\s*(?:#.*)?(?=\n|$)/.test(tomlTable(config.text, 'notice'))) blockers.push(`${config.scope}:fast_default_opt_out`);
  }
  const merged = configs.map((config) => config.text).join('\n');
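After this hunk, only a top-level `model_reasoning_effort` key remains a blocker at this check; a top-level `model_provider = "codex-lb"` line no longer is, since 0.9.0 now manages that key itself (see the init.mjs hunks below). A minimal sketch of the surviving check, assuming `topLevelToml` simply takes everything before the first `[table]` header (a simplification of the package helper):

```javascript
// Sketch of the remaining top-level fast-mode blocker check. topLevelToml()
// is approximated here as "everything before the first [table] header";
// the real package helper may differ.
function topLevelBlockers(tomlText, scope) {
  const lines = tomlText.split('\n');
  const firstTable = lines.findIndex((line) => /^\s*\[.+\]\s*$/.test(line));
  const topLevel = (firstTable === -1 ? lines : lines.slice(0, firstTable)).join('\n');
  const blockers = [];
  // Only model_reasoning_effort is flagged after this diff; the codex-lb
  // provider blocker from 0.8.x is gone.
  if (/(^|\n)\s*model_reasoning_effort\s*=/.test(topLevel)) blockers.push(`${scope}:top_level_model_reasoning_effort`);
  return blockers;
}
```

Note the top-level scoping: the same key inside a `[notice]` or other table does not trip this blocker.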
@@ -0,0 +1,481 @@
+ export const DECISION_LATTICE_SCHEMA_VERSION = 1;
+
+ export const DEFAULT_LATTICE_WEIGHTS = Object.freeze({
+ step: 1,
+ proof_debt: 3,
+ risk: 1,
+ friction: 1,
+ info_gain: 1
+ });
+
+ const AXES = Object.freeze(['contract', 'context', 'implementation', 'verification', 'review']);
+
+ const DEFAULT_START = Object.freeze({
+ contract: 0,
+ context: 0,
+ implementation: 0,
+ verification: 0,
+ review: 0
+ });
+
+ const DEFAULT_GOAL = Object.freeze({
+ contract: 2,
+ context: 2,
+ implementation: 2,
+ verification: 2,
+ review: 1
+ });
+
+ const DEFAULT_ACTIONS = Object.freeze([
+ {
+ id: 'seal_contract',
+ label: 'Seal decision contract',
+ delta: { contract: 2 },
+ risk: 0.05,
+ friction: 0.25,
+ info_gain: 0.9,
+ notes: ['Removes ambiguity before route selection.']
+ },
+ {
+ id: 'read_triwiki',
+ label: 'Read bounded TriWiki context',
+ delta: { context: 1 },
+ risk: 0.05,
+ friction: 0.2,
+ info_gain: 0.7,
+ notes: ['Uses compact high-trust recall before editing.']
+ },
+ {
+ id: 'proof_field_scan',
+ label: 'Run proof-field scan',
+ delta: { context: 2, verification: 1 },
+ risk: 0.1,
+ friction: 0.35,
+ info_gain: 0.95,
+ notes: ['Scores route surface and escalation triggers.']
+ },
+ {
+ id: 'minimal_patch',
+ label: 'Implement smallest scoped change',
+ delta: { implementation: 2 },
+ risk: 0.35,
+ friction: 0.35,
+ info_gain: 0.4,
+ notes: ['Touches only the selected proof cone.']
+ },
+ {
+ id: 'focused_verification',
+ label: 'Run focused verification',
+ delta: { verification: 1 },
+ risk: 0.12,
+ friction: 0.45,
+ info_gain: 0.85,
+ notes: ['Checks syntax and behavior for the changed module.']
+ },
+ {
+ id: 'five_lane_review',
+ label: 'Collect five-lane Team review',
+ delta: { review: 1 },
+ risk: 0.2,
+ friction: 1.1,
+ info_gain: 1,
+ notes: ['Satisfies Team review gate for broad missions.']
+ },
+ {
+ id: 'honest_mode',
+ label: 'Run Honest Mode closeout',
+ delta: { verification: 1 },
+ risk: 0.05,
+ friction: 0.2,
+ info_gain: 0.65,
+ notes: ['Binds final claims to evidence and gaps.']
+ }
+ ]);
+
+ const DEFAULT_ROUTE_PATHS = Object.freeze([
+ {
+ id: 'proof_field_fast_lane',
+ label: 'Proof Field Fast Lane',
+ action_ids: ['seal_contract', 'read_triwiki', 'proof_field_scan', 'minimal_patch', 'focused_verification', 'honest_mode'],
+ notes: ['Lowest friction when scope is narrow and risk flags stay low.']
+ },
+ {
+ id: 'balanced_team_lane',
+ label: 'Balanced Team Lane',
+ action_ids: ['seal_contract', 'read_triwiki', 'proof_field_scan', 'minimal_patch', 'focused_verification', 'five_lane_review', 'honest_mode'],
+ notes: ['Adds review evidence while preserving a compact change surface.']
+ },
+ {
+ id: 'full_team_honest_path',
+ label: 'Full Team Honest Path',
+ action_ids: ['seal_contract', 'read_triwiki', 'proof_field_scan', 'five_lane_review', 'minimal_patch', 'focused_verification', 'five_lane_review', 'honest_mode'],
+ notes: ['Heaviest default for broad or release-sensitive missions.']
+ }
+ ]);
+
+ export function buildDecisionLatticeReport(input = {}) {
+ const weights = normalizeWeights(input.weights);
+ const start = normalizeState(input.start_state || input.start || DEFAULT_START);
+ const goal = normalizeState(input.goal_state || input.target_state || input.target || inferredGoal(input));
+ const actions = normalizeActions(input.actions || DEFAULT_ACTIONS);
+ const routePaths = normalizeRoutePaths(input.route_paths || input.candidate_route_paths || DEFAULT_ROUTE_PATHS, actions);
+ const grid = buildConceptualGrid(start, goal, actions);
+ const search = runAStar({ start, goal, actions, weights });
+ const routeCandidates = routePaths.map((routePath, index) => evaluateRoutePath(routePath, index, { start, goal, actions, weights }));
+ const candidates = routeCandidates.concat([{ ...search.selected_path, rank_hint: routeCandidates.length }]).sort(compareCandidates);
+ const selected = selectPath(candidates, search.selected_path);
+ const rejected = candidates
+ .filter((candidate) => candidate.id !== selected.id)
+ .map((candidate) => ({
+ id: candidate.id,
+ label: candidate.label,
+ f: candidate.cost.f,
+ delta_from_selected: round(candidate.cost.f - selected.cost.f),
+ rejection_reasons: rejectionReasons(candidate, selected)
+ }));
+ const report = {
+ schema_version: DECISION_LATTICE_SCHEMA_VERSION,
+ report_only: true,
+ deterministic: true,
+ module: 'decision-lattice',
+ scoring_formula: String(input.scoring_formula || 'f = g + h + risk + friction - info_gain'),
+ research_basis: {
+ model: 'Decision Lattice A* planner',
+ scoring_formula: 'f = g + h + risk + friction - info_gain',
+ proof_debt_heuristic: 'h is weighted remaining lattice debt across contract, context, implementation, verification, and review axes.'
+ },
+ input_summary: {
+ intent: String(input.intent || input.goal || '').trim() || null,
+ weights,
+ start_state: start,
+ goal_state: goal,
+ action_count: actions.length,
+ route_path_count: routePaths.length
+ },
+ heuristic: {
+ id: 'proof_debt',
+ h_start: proofDebt(start, goal, weights),
+ axes: AXES.map((axis) => ({
+ axis,
+ start: start[axis],
+ goal: goal[axis],
+ debt: debtForAxis(start, goal, axis),
+ weighted_debt: round(debtForAxis(start, goal, axis) * weights.proof_debt)
+ }))
+ },
+ conceptual_grid: grid,
+ frontier: search.frontier,
+ candidate_paths: candidates,
+ selected_path: selected,
+ rejected_alternatives: rejected,
+ validation: null
+ };
+ report.validation = validateDecisionLatticeReport(report);
+ return report;
+ }
+
+ function inferredGoal(input = {}) {
+ const goal = { ...DEFAULT_GOAL };
+ if (input.execution_lane?.fast_lane_allowed === true && !(input.team_trigger_matrix?.active_triggers || []).length) {
+ goal.review = 0;
+ }
+ return goal;
+ }
+
+ export function validateDecisionLatticeReport(report = {}) {
+ const issues = [];
+ if (report.schema_version !== DECISION_LATTICE_SCHEMA_VERSION) issues.push('schema_version');
+ if (report.report_only !== true) issues.push('report_only');
+ if (report.deterministic !== true) issues.push('deterministic');
+ if (report.research_basis?.scoring_formula !== 'f = g + h + risk + friction - info_gain') issues.push('scoring_formula');
+ if (!Array.isArray(report.heuristic?.axes) || report.heuristic.axes.length !== AXES.length) issues.push('heuristic_axes');
+ if (!Number.isFinite(Number(report.heuristic?.h_start))) issues.push('heuristic_h_start');
+ if (!Array.isArray(report.conceptual_grid?.cells) || report.conceptual_grid.cells.length < 1) issues.push('conceptual_grid');
+ if (!Array.isArray(report.frontier?.expanded_order) || report.frontier.expanded_order.length < 1) issues.push('frontier_expanded_order');
+ if (!Array.isArray(report.candidate_paths) || report.candidate_paths.length < 1) issues.push('candidate_paths');
+ if (!report.selected_path?.id || !Array.isArray(report.selected_path?.steps)) issues.push('selected_path');
+ if (!Array.isArray(report.rejected_alternatives)) issues.push('rejected_alternatives');
+ if (report.candidate_paths?.some((candidate) => !Number.isFinite(Number(candidate?.cost?.f)))) issues.push('candidate_costs');
+ if (report.selected_path?.cost?.f !== Math.min(...(report.candidate_paths || []).map((candidate) => candidate.cost.f))) issues.push('selected_path_not_min_f');
+ return { ok: issues.length === 0, issues };
+ }
+
+ function normalizeWeights(input = {}) {
+ return {
+ step: positiveNumber(input.step, DEFAULT_LATTICE_WEIGHTS.step),
+ proof_debt: positiveNumber(input.proof_debt, DEFAULT_LATTICE_WEIGHTS.proof_debt),
+ risk: positiveNumber(input.risk, DEFAULT_LATTICE_WEIGHTS.risk),
+ friction: positiveNumber(input.friction, DEFAULT_LATTICE_WEIGHTS.friction),
+ info_gain: positiveNumber(input.info_gain, DEFAULT_LATTICE_WEIGHTS.info_gain)
+ };
+ }
+
+ function normalizeState(input = {}) {
+ const state = {};
+ for (const axis of AXES) state[axis] = clampInt(input[axis], 0, 3);
+ return state;
+ }
+
+ function normalizeActions(input = []) {
+ return input
+ .map((action, index) => ({
+ id: safeId(action.id || `action_${index + 1}`),
+ label: String(action.label || action.id || `Action ${index + 1}`),
+ delta: normalizeDelta(action.delta || {}),
+ risk: nonNegativeNumber(action.risk, 0),
+ friction: nonNegativeNumber(action.friction, 0),
+ info_gain: nonNegativeNumber(action.info_gain, 0),
+ notes: arrayOfStrings(action.notes)
+ }))
+ .filter((action) => AXES.some((axis) => action.delta[axis] > 0))
+ .sort(compareById);
+ }
+
+ function normalizeRoutePaths(input = [], actions = []) {
+ const actionIds = new Set(actions.map((action) => action.id));
+ return input
+ .map((routePath, index) => ({
+ id: safeId(routePath.id || `route_path_${index + 1}`),
+ label: String(routePath.label || routePath.id || `Route Path ${index + 1}`),
+ action_ids: arrayOfStrings(routePath.action_ids || routePath.actions).map(safeId).filter((id) => actionIds.has(id)),
+ notes: arrayOfStrings(routePath.notes)
+ }))
+ .filter((routePath) => routePath.action_ids.length > 0)
+ .sort(compareById);
+ }
+
+ function normalizeDelta(delta = {}) {
+ const out = {};
+ for (const axis of AXES) out[axis] = clampInt(delta[axis], 0, 3);
+ return out;
+ }
+
+ function runAStar({ start, goal, actions, weights }) {
+ const open = [nodeForState(start, { g: 0, h: proofDebt(start, goal, weights), risk: 0, friction: 0, info_gain: 0, steps: [] })];
+ const best = new Map([[stateKey(start), 0]]);
+ const closed = [];
+ const snapshots = [];
+ let selected = open[0];
+
+ while (open.length > 0 && closed.length < 64) {
+ open.sort(compareNodes);
+ const current = open.shift();
+ closed.push(current);
+ snapshots.push({ step: closed.length, current: current.key, f: current.f, open: open.map((node) => node.key).sort() });
+ if (isGoal(current.state, goal)) {
+ selected = current;
+ break;
+ }
+ for (const action of actions) {
+ const nextState = applyAction(current.state, action, goal);
+ const key = stateKey(nextState);
+ const g = round(current.g + weights.step);
+ if (best.has(key) && best.get(key) <= g) continue;
+ best.set(key, g);
+ const risk = round(current.risk + action.risk * weights.risk);
+ const friction = round(current.friction + action.friction * weights.friction);
+ const infoGain = round(current.info_gain + action.info_gain * weights.info_gain);
+ const h = proofDebt(nextState, goal, weights);
+ open.push(nodeForState(nextState, {
+ g,
+ h,
+ risk,
+ friction,
+ info_gain: infoGain,
+ steps: current.steps.concat([stepFromAction(action, nextState)])
+ }));
+ }
+ }
+
+ return {
+ selected_path: pathFromNode('astar_frontier_path', 'A* Frontier Path', selected),
+ frontier: {
+ expanded_order: closed.map((node, index) => ({ index, key: node.key, f: node.f, h: node.h, steps: node.steps.map((step) => step.id) })),
+ open_nodes: open.sort(compareNodes).slice(0, 12).map((node) => ({ key: node.key, f: node.f, h: node.h })),
+ closed_nodes: closed.map((node) => node.key),
+ snapshots
+ }
+ };
+ }
+
+ function evaluateRoutePath(routePath, index, { start, goal, actions, weights }) {
+ const actionById = new Map(actions.map((action) => [action.id, action]));
+ let state = { ...start };
+ let g = 0;
+ let risk = 0;
+ let friction = 0;
+ let infoGain = 0;
+ const steps = [];
+ for (const id of routePath.action_ids) {
+ const action = actionById.get(id);
+ if (!action) continue;
+ g = round(g + weights.step);
+ risk = round(risk + action.risk * weights.risk);
+ friction = round(friction + action.friction * weights.friction);
+ infoGain = round(infoGain + action.info_gain * weights.info_gain);
+ state = applyAction(state, action, goal);
+ steps.push(stepFromAction(action, state));
+ }
+ const h = proofDebt(state, goal, weights);
+ const f = round(g + h + risk + friction - infoGain);
+ return {
+ id: routePath.id,
+ label: routePath.label,
+ rank_hint: index,
+ route: routePath.action_ids,
+ steps,
+ final_state: state,
+ proof_debt: h,
+ complete: isGoal(state, goal),
+ cost: { g, h, risk, friction, info_gain: infoGain, f },
+ notes: routePath.notes
+ };
+ }
+
+ function selectPath(candidates, astarPath) {
+ const complete = candidates.filter((candidate) => candidate.complete);
+ const pool = complete.length ? complete : candidates;
+ const selected = pool.slice().sort(compareCandidates)[0] || astarPath;
+ return selected.cost.f <= astarPath.cost.f ? selected : astarPath;
+ }
+
+ function pathFromNode(id, label, node) {
+ return {
+ id,
+ label,
+ route: node.steps.map((step) => step.id),
+ steps: node.steps,
+ final_state: node.state,
+ proof_debt: node.h,
+ complete: node.h === 0,
+ cost: {
+ g: node.g,
+ h: node.h,
+ risk: node.risk,
+ friction: node.friction,
+ info_gain: node.info_gain,
+ f: node.f
+ },
+ notes: ['Generated by A* frontier expansion over the conceptual lattice.']
+ };
+ }
+
+ function nodeForState(state, input) {
+ const f = round(input.g + input.h + input.risk + input.friction - input.info_gain);
+ return { ...input, state, key: stateKey(state), f };
+ }
+
+ function applyAction(state, action, goal) {
+ const next = {};
+ for (const axis of AXES) next[axis] = Math.min(goal[axis], state[axis] + action.delta[axis]);
+ return next;
+ }
+
+ function proofDebt(state, goal, weights) {
+ return round(AXES.reduce((sum, axis) => sum + debtForAxis(state, goal, axis), 0) * weights.proof_debt);
+ }
+
+ function debtForAxis(state, goal, axis) {
+ return Math.max(0, Number(goal[axis] || 0) - Number(state[axis] || 0));
+ }
+
+ function buildConceptualGrid(start, goal, actions) {
+ return {
+ axes: AXES.map((axis) => ({ axis, start: start[axis], goal: goal[axis], span: Math.max(0, goal[axis] - start[axis]) })),
+ cells: AXES.map((axis) => ({
+ id: `axis_${axis}`,
+ axis,
+ start: start[axis],
+ goal: goal[axis],
+ candidate_actions: actions.filter((action) => action.delta[axis] > 0).map((action) => action.id)
+ })),
+ legend: {
+ g: 'path steps already paid',
+ h: 'remaining proof debt',
+ risk: 'expected safety and integration exposure',
+ friction: 'coordination and verification drag',
+ info_gain: 'uncertainty removed by the step'
+ }
+ };
+ }
+
+ function rejectionReasons(candidate, selected) {
+ const reasons = [];
+ if (!candidate.complete) reasons.push('remaining_proof_debt');
+ if (candidate.cost.risk > selected.cost.risk) reasons.push('higher_risk');
+ if (candidate.cost.friction > selected.cost.friction) reasons.push('higher_friction');
+ if (candidate.cost.info_gain < selected.cost.info_gain) reasons.push('lower_info_gain');
+ if (candidate.cost.f > selected.cost.f) reasons.push('higher_total_f');
+ return reasons.length ? reasons : ['tie_broken_by_deterministic_order'];
+ }
+
+ function compareCandidates(a, b) {
+ return (a.cost.f - b.cost.f)
+ || (a.cost.h - b.cost.h)
+ || (a.cost.risk - b.cost.risk)
+ || (a.cost.friction - b.cost.friction)
+ || (b.cost.info_gain - a.cost.info_gain)
+ || a.id.localeCompare(b.id);
+ }
+
+ function compareNodes(a, b) {
+ return (a.f - b.f)
+ || (a.h - b.h)
+ || (a.risk - b.risk)
+ || (a.friction - b.friction)
+ || (b.info_gain - a.info_gain)
+ || a.key.localeCompare(b.key);
+ }
+
+ function compareById(a, b) {
+ return a.id.localeCompare(b.id);
+ }
+
+ function stateKey(state) {
+ return AXES.map((axis) => `${axis}:${state[axis]}`).join('|');
+ }
+
+ function isGoal(state, goal) {
+ return AXES.every((axis) => state[axis] >= goal[axis]);
+ }
+
+ function stepFromAction(action, state) {
+ return {
+ id: action.id,
+ label: action.label,
+ state_after: state,
+ risk: action.risk,
+ friction: action.friction,
+ info_gain: action.info_gain,
+ notes: action.notes
+ };
+ }
+
+ function arrayOfStrings(value) {
+ if (!Array.isArray(value)) return [];
+ return value.map((item) => String(item || '').trim()).filter(Boolean);
+ }
+
+ function safeId(value) {
+ return String(value || 'item').trim().toLowerCase().replace(/[^a-z0-9_./-]+/g, '_').replace(/^_+|_+$/g, '') || 'item';
+ }
+
+ function clampInt(value, min, max) {
+ const number = Number(value);
+ if (!Number.isFinite(number)) return min;
+ return Math.max(min, Math.min(max, Math.floor(number)));
+ }
+
+ function positiveNumber(value, fallback) {
+ const number = Number(value);
+ return Number.isFinite(number) && number > 0 ? number : fallback;
+ }
+
+ function nonNegativeNumber(value, fallback) {
+ const number = Number(value);
+ return Number.isFinite(number) && number >= 0 ? number : fallback;
+ }
+
+ function round(value) {
+ return Math.round(Number(value || 0) * 1000) / 1000;
+ }
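The module's scoring arithmetic can be exercised in isolation. The sketch below re-derives the `f = g + h + risk + friction - info_gain` formula and the proof-debt heuristic from the defaults above; it is a standalone illustration, not an import of the package module:

```javascript
// Standalone sketch of the lattice scoring formula, using the axis names
// and default proof_debt weight (3) from the module above. Not the
// package's exports; a re-derivation for illustration only.
const AXES = ['contract', 'context', 'implementation', 'verification', 'review'];

// h: weighted remaining proof debt summed across all axes.
function proofDebt(state, goal, proofDebtWeight) {
  const debt = AXES.reduce((sum, axis) => sum + Math.max(0, (goal[axis] || 0) - (state[axis] || 0)), 0);
  return debt * proofDebtWeight;
}

// f-score for a candidate node, mirroring the nodeForState() arithmetic.
function fScore({ g, state, goal, risk, friction, infoGain }, proofDebtWeight = 3) {
  return g + proofDebt(state, goal, proofDebtWeight) + risk + friction - infoGain;
}
```

With the default start (all axes 0), the default goal `{contract: 2, context: 2, implementation: 2, verification: 2, review: 1}`, and `proof_debt` weight 3, the heuristic gives h_start = 3 × 9 = 27, so an empty path scores f = 27; each action then trades a step cost and its risk/friction against the debt it retires and the information it gains.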
package/src/core/fsx.mjs CHANGED
@@ -5,7 +5,7 @@ import os from 'node:os';
  import crypto from 'node:crypto';
  import { spawn } from 'node:child_process';
 
- export const PACKAGE_VERSION = '0.8.5';
+ export const PACKAGE_VERSION = '0.9.0';
  export const DEFAULT_PROCESS_TAIL_BYTES = 256 * 1024;
  export const DEFAULT_PROCESS_TIMEOUT_MS = 30 * 60 * 1000;
 
package/src/core/init.mjs CHANGED
@@ -48,7 +48,7 @@ export function hasTopLevelCodexModeLock(text = '') {
  const firstTable = lines.findIndex((x) => /^\s*\[.+\]\s*$/.test(x));
  const top = (firstTable === -1 ? lines : lines.slice(0, firstTable)).join('\n');
  const model = top.match(/^model\s*=\s*"([^"]+)"/m)?.[1];
- return (Boolean(model) && model !== 'gpt-5.5') || /^model_reasoning_effort\s*=/m.test(top) || /^model_provider\s*=\s*"codex-lb"/m.test(top);
+ return (Boolean(model) && model !== 'gpt-5.5') || /^model_reasoning_effort\s*=/m.test(top);
  }
 
  export function hasDeprecatedCodexHooksFeatureFlag(text = '') {
@@ -502,7 +502,6 @@ function installPolicy(scope, commandPrefix) {
 
  function mergeManagedCodexConfigToml(existingContent = '') {
  let next = removeLegacyTopLevelCodexModeLocks(String(existingContent || '').trimEnd());
- next = removeTopLevelTomlKeyIfValue(next, 'model_provider', 'codex-lb');
  next = removeTomlTableKey(next, 'notice', 'fast_default_opt_out');
  next = removeTomlTableKey(next, 'features', 'codex_hooks');
  next = upsertTopLevelTomlString(next, 'model', 'gpt-5.5');
@@ -548,14 +547,14 @@ async function mergeGlobalCodexConfigIfAvailable(configText = '', configPath = '
  let next = mergeGlobalMcpServers(configText, globalConfig);
  next = mergeGlobalCodexAppRuntimeTables(next, globalConfig);
  if (selectedRe.test(next) && /\[model_providers\.codex-lb\]/.test(next)) {
- return `${removeTopLevelTomlKeyIfValue(next, 'model_provider', 'codex-lb').trim()}\n`;
+ return `${next.trim()}\n`;
  }
  const envPath = path.join(home, '.codex', 'sks-codex-lb.env');
  if (!(await exists(envPath))) return next;
  const envText = await readText(envPath, '');
  const baseUrl = globalConfig.match(/(^|\n)\[model_providers\.codex-lb\][\s\S]*?\n\s*base_url\s*=\s*"([^"]+)"/)?.[2] || parseCodexLbEnvBaseUrl(envText);
- if (!parseCodexLbEnvKey(envText) || !baseUrl || (!selectedRe.test(globalConfig) && !parseCodexLbEnvBaseUrl(envText))) return next;
- next = removeTopLevelTomlKeyIfValue(next, 'model_provider', 'codex-lb');
+ if (!parseCodexLbEnvKey(envText) || !baseUrl) return next;
+ next = upsertTopLevelTomlString(next, 'model_provider', 'codex-lb');
  next = upsertTomlTable(next, 'model_providers.codex-lb', `[model_providers.codex-lb]\nname = "OpenAI"\nbase_url = "${baseUrl}"\nwire_api = "responses"\nenv_key = "CODEX_LB_API_KEY"\nsupports_websockets = true\nrequires_openai_auth = true`);
  return `${next.trim()}\n`;
  }
@@ -916,7 +915,7 @@ export async function installSkills(root) {
  'team': `---\nname: team\ndescription: SKS Team orchestration for $Team/code work; $From-Chat-IMG is the explicit chat-image alias.\n---\n\nUse for $Team/code work. Auto-seal the route contract from prompt, TriWiki/current-code defaults, and conservative policy; do not surface a prequestion sheet. Read pipeline-plan.json or run sks pipeline plan to see the runtime lane, kept/skipped stages, and verification before implementation. Write team-roster.json; team-gate.json needs team_roster_confirmed=true. executor:N means N scouts, N debate voices, then fresh N executors. ${MIN_TEAM_REVIEW_POLICY_TEXT} After consensus, compile team-graph.json, team-runtime-tasks.json, team-decomposition-report.json, and team-inbox/ so worker handoff uses concrete runtime task ids with role/path/domain/lane hints. Refresh/validate TriWiki before debate, implementation, review, and final; consume attention.use_first and hydrate attention.hydrate_first before risky decisions. ${outcomeRubricPolicyText()} ${speedLanePolicyText()} ${solutionScoutPolicyText('fix this broken behavior')} ${skillDreamPolicyText()} Log events and use sks team message for bounded inter-agent communication in transcript/lane panes. Color-coded tmux lanes distinguish overview/scout/planning/execution/review/safety sessions in one tmux window using split panes when tmux is available. $Team/$team plus sks --mad uses the MAD-SKS permission gate module: live server work, normal DB writes, Supabase MCP writes, direct SQL, schema cleanup, and needed migrations are open for the active invocation; only catastrophic DB wipe/all-row/project-management guards remain. End with cleanup-tmux or a cleanup event so follow panes show cleanup and stop; pass team-session-cleanup.json, then reflection and Honest Mode. Parent integrates/verifies.\n\n${chatCaptureIntakeText()}\n`,
  'from-chat-img': `---\nname: from-chat-img\ndescription: Explicit $From-Chat-IMG Team alias for chat screenshot plus attachment analysis.\n---\n\nUse only for From-Chat-IMG/$From-Chat-IMG. It enters the normal Team pipeline. Treat uploads as chat screenshot plus originals. Use Codex Computer Use visual inspection when available, list requirements first, match regions to attachments with confidence, write ${FROM_CHAT_IMG_COVERAGE_ARTIFACT}, ${FROM_CHAT_IMG_CHECKLIST_ARTIFACT}, ${FROM_CHAT_IMG_TEMP_TRIWIKI_ARTIFACT}, and ${FROM_CHAT_IMG_QA_LOOP_ARTIFACT}, then continue Team gates, review, reflection, and Honest Mode. ${CODEX_COMPUTER_USE_ONLY_POLICY} The ledger must account for every visible customer request, screenshot image region, and separate attachment; ${FROM_CHAT_IMG_CHECKLIST_ARTIFACT} must have a checked item for each request, image-region/attachment match, work item, scoped QA-LOOP, and verification step; ${FROM_CHAT_IMG_TEMP_TRIWIKI_ARTIFACT} stores temporary TriWiki-backed session context with expires_after_sessions=${FROM_CHAT_IMG_TEMP_TRIWIKI_SESSIONS}. ${FROM_CHAT_IMG_QA_LOOP_ARTIFACT} must prove QA-LOOP ran over the exact customer-request work-order range after implementation, with every work item covered, post-fix verification complete, and zero unresolved findings. team-gate.json cannot pass From-Chat-IMG completion until unresolved_items is empty, every checklist box is checked, and scoped_qa_loop_completed=true.\n`,
  'qa-loop': `---\nname: qa-loop\ndescription: $QA-LOOP dogfoods UI/API as human proxy with safety gates, Codex Computer Use-only UI evidence, safe fixes, rechecks, and a QA report.\n---\n\nUse only $QA-LOOP. Infer scope, target, mutation policy, and login boundary from the prompt plus TriWiki/current-code defaults; do not surface a prequestion sheet. Credentials are runtime-only; never save secrets. UI-level E2E needs official Codex Computer Use evidence or must be marked unverified; Chrome MCP, Browser Use, Playwright, Selenium, Puppeteer, and other browser automation do not satisfy UI/browser verification. Deployed targets are read-only; destructive removal is forbidden. After answer/run, dogfood real flows, apply safe contract-allowed code/test/docs fixes, recheck, and do not pass qa-gate.json with unresolved findings or without post_fix_verification_complete. Finish qa-ledger, date/version report, gate, completion summary, and Honest Mode.\n`,
- 'ppt': `---\nname: ppt\ndescription: $PPT information-first HTML/PDF presentation pipeline with inferred STP, audience, pain-point, format, research, design-system, and verification contract.\n---\n\nUse only when the user invokes $PPT or asks to create a presentation, deck, slides, pitch deck, proposal deck, HTML presentation, or PDF presentation artifact. Before artifact work, auto-seal presentation-specific answers from prompt, TriWiki/current-code defaults, and conservative policy: delivery context, target audience profile including role/average age/job/industry/topic familiarity/decision power, STP strategy, decision context and objections, and 3+ pain-point to solution mappings with expected aha moments. Do not surface a prequestion sheet. Presentation design must be simple, restrained, and information-first: avoid over-designed decoration, ornamental gradients, nested cards, and effects that compete with the message. Design detail should be embedded through typography hierarchy, spacing, alignment, thin rules, source clarity, and subtle accents. ${pptPipelineAllowlistPolicyText()} Use design.md as the only design decision SSOT. If design.md is missing, use docs/Design-Sys-Prompt.md plus getdesign-reference and curated DESIGN.md examples from ${AWESOME_DESIGN_MD_REFERENCE.url} only as source inputs, then fuse them into route-local PPT style tokens with a recorded design_ssot instead of treating references as parallel authorities. If generated image assets or slide visual critique are needed, use Codex App $imagegen/gpt-image-2 only when that asset/review need is explicitly sealed in the $PPT contract; direct API fallback, placeholder files, and prose-only substitutes do not satisfy the route gate. 
${CODEX_IMAGEGEN_REQUIRED_POLICY} Use web or Context7 evidence only when external facts/libraries/current docs are required by the PPT contract, record verified claims in ppt-fact-ledger.json, record generated image asset plans/results/blockers in ppt-image-asset-ledger.json, then create the PDF plus editable source HTML under source-html/, keep independent strategy/render/file-write phases parallel where inputs allow, record ppt-parallel-report.json, run the bounded ppt-review-policy/ppt-review-ledger/ppt-iteration-report loop, and verify readability, overlap, format fit, source coverage, export state, unsupported-claim status, image-asset completion, review-loop termination, and temporary build files cleanup. Finish with reflection and Honest Mode.\n`,
+ 'ppt': `---\nname: ppt\ndescription: $PPT information-first HTML/PDF presentation pipeline with inferred STP, audience, pain-point, format, research, design-system, and verification contract.\n---\n\nUse only when the user invokes $PPT or asks to create a presentation, deck, slides, pitch deck, proposal deck, HTML presentation, or PDF presentation artifact. Before artifact work, auto-seal presentation-specific answers from prompt, TriWiki/current-code defaults, and conservative policy: delivery context, target audience profile including role/average age/job/industry/topic familiarity/decision power, STP strategy, decision context and objections, and 3+ pain-point to solution mappings with expected aha moments. Do not surface a prequestion sheet. Presentation design must be simple, restrained, and information-first: avoid over-designed decoration, ornamental gradients, nested cards, and effects that compete with the message. Design detail should be embedded through typography hierarchy, spacing, alignment, thin rules, source clarity, and subtle accents. ${pptPipelineAllowlistPolicyText()} Use design.md as the only design decision SSOT. If design.md is missing, use docs/Design-Sys-Prompt.md plus getdesign-reference and curated DESIGN.md examples from ${AWESOME_DESIGN_MD_REFERENCE.url} only as source inputs, then fuse them into route-local PPT style tokens with a recorded design_ssot instead of treating references as parallel authorities. The $PPT route always loads imagegen as a required skill. When the sealed contract needs a generated raster asset or generated slide visual critique, immediately invoke Codex App \`$imagegen\` with gpt-image-2, move/copy the selected output into the mission assets or review evidence path, and record the real file path in ppt-image-asset-ledger.json or ppt-review-ledger.json before building or passing the gate. Direct API fallback, placeholder files, HTML/CSS stand-ins, and prose-only substitutes do not satisfy the route gate. 
${CODEX_IMAGEGEN_REQUIRED_POLICY} Use web or Context7 evidence only when external facts/libraries/current docs are required by the PPT contract, record verified claims in ppt-fact-ledger.json, record generated image asset plans/results/blockers in ppt-image-asset-ledger.json, then create the PDF plus editable source HTML under source-html/, keep independent strategy/render/file-write phases parallel where inputs allow, record ppt-parallel-report.json, run the bounded ppt-review-policy/ppt-review-ledger/ppt-iteration-report loop, and verify readability, overlap, format fit, source coverage, export state, unsupported-claim status, image-asset completion, review-loop termination, and temporary build files cleanup. Finish with reflection and Honest Mode.\n`,
  'computer-use-fast': `---\nname: computer-use-fast\ndescription: Alias for the maximum-speed $Computer-Use/$CU Codex Computer Use lane.\n---\n\nUse the same rules as computer-use: skip Team debate, QA-LOOP clarification, upfront TriWiki refresh, Context7, subagents, and reflection unless explicitly requested. Use Codex Computer Use directly; never substitute Playwright, Chrome MCP, Browser Use, Selenium, Puppeteer, or other browser automation for UI/browser evidence. At the end only, refresh/pack TriWiki, validate it, then provide a concise completion summary plus Honest Mode.\n`,
  'cu': `---\nname: cu\ndescription: Short alias for the maximum-speed $Computer-Use Codex Computer Use lane.\n---\n\nUse the same rules as computer-use. This is a speed lane for focused UI/browser/visual tasks that require Codex Computer Use evidence, with TriWiki refresh/validate and Honest Mode deferred to final closeout.\n`,
  'goal': `---\nname: goal\ndescription: Fast $Goal/$goal bridge overlay for Codex native persisted /goal workflows.\n---\n\nUse when the user invokes $Goal/$goal or asks to persist a workflow with Codex native /goal continuation. Prepare with sks goal create or the $Goal route, write only the lightweight bridge artifacts, then use native Codex /goal create, pause, resume, and clear controls where available. Goal does not replace Team, QA, DB, or other SKS execution routes; continue implementation through the selected route and use Context7 only when external API/library docs are involved. Do not recreate the old no-question loop.\n`,
@@ -110,6 +110,11 @@ export async function runWorkflowPerfBench(root, opts = {}) {
  workflow_complexity_band: proofField?.workflow_complexity?.band || null,
  team_trigger_count: proofField?.team_trigger_matrix?.active_triggers?.length || 0,
  verification_stage_cache_key: proofField?.verification_stage_cache?.cache_key || null,
+ decision_lattice_selected_path: proofField?.decision_lattice?.selected_path?.id || null,
+ decision_lattice_frontier_count: proofField?.decision_lattice?.frontier?.expanded_order?.length || 0,
+ decision_lattice_rejected_alternative_count: proofField?.decision_lattice?.rejected_alternatives?.length || 0,
+ decision_lattice_scoring_formula: proofField?.decision_lattice?.scoring_formula || null,
+ decision_lattice_valid: Boolean(proofField?.decision_lattice?.report_only) && proofValidation.ok,
  outcome_criteria_passed: (proofField?.simplicity_scorecard?.criteria || []).filter((item) => item.passed).length,
  proof_field_valid: proofValidation.ok,
  pipeline_plan_valid: planValidation.ok
@@ -138,7 +143,19 @@ export function validateWorkflowPerfReport(report = {}) {
  if (!Number.isFinite(Number(report.metrics?.workflow_complexity_score))) issues.push('workflow_complexity_score');
  if (!report.metrics?.workflow_complexity_band) issues.push('workflow_complexity_band');
  if (!report.metrics?.verification_stage_cache_key) issues.push('verification_stage_cache_key');
+ if (!report.metrics?.decision_lattice_selected_path) issues.push('decision_lattice_selected_path');
+ if (!Number.isFinite(Number(report.metrics?.decision_lattice_frontier_count))) issues.push('decision_lattice_frontier_count');
+ if (!Number.isFinite(Number(report.metrics?.decision_lattice_rejected_alternative_count))) issues.push('decision_lattice_rejected_alternative_count');
+ if (!report.metrics?.decision_lattice_scoring_formula) issues.push('decision_lattice_scoring_formula');
+ if (report.metrics?.decision_lattice_valid !== true) issues.push('decision_lattice_valid');
  if (!report.proof_field || !validateProofFieldReport(report.proof_field).ok) issues.push('proof_field');
+ else {
+ const lattice = report.proof_field.decision_lattice;
+ if (report.metrics.decision_lattice_selected_path !== lattice?.selected_path?.id) issues.push('decision_lattice_selected_path_mismatch');
+ if (Number(report.metrics.decision_lattice_frontier_count) !== Number(lattice?.frontier?.expanded_order?.length || 0)) issues.push('decision_lattice_frontier_count_mismatch');
+ if (Number(report.metrics.decision_lattice_rejected_alternative_count) !== Number(lattice?.rejected_alternatives?.length || 0)) issues.push('decision_lattice_rejected_alternative_count_mismatch');
+ if (report.metrics.decision_lattice_scoring_formula !== lattice?.scoring_formula) issues.push('decision_lattice_scoring_formula_mismatch');
+ }
  if (!report.pipeline_plan || !validatePipelinePlan(report.pipeline_plan).ok) issues.push('pipeline_plan');
  if (!report.recommendation?.mode) issues.push('recommendation');
  return { ok: issues.length === 0, issues };
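The `else` branch above is a mirror check: each flattened `decision_lattice_*` metric must equal the value recomputed from the attached `proof_field`, or a `*_mismatch` issue is pushed. A reduced sketch of the pattern, with illustrative report shapes:

```javascript
// Mirror-check sketch: flattened metrics vs. the proof-field source of truth.
// Any divergence yields a *_mismatch issue instead of silently trusting metrics.
function latticeMirrorIssues(metrics, lattice) {
  const issues = [];
  if (metrics.decision_lattice_selected_path !== lattice?.selected_path?.id) {
    issues.push('decision_lattice_selected_path_mismatch');
  }
  if (Number(metrics.decision_lattice_frontier_count) !==
      Number(lattice?.frontier?.expanded_order?.length || 0)) {
    issues.push('decision_lattice_frontier_count_mismatch');
  }
  return issues;
}

// Illustrative fixture, not real pipeline output.
const fixtureLattice = { selected_path: { id: 'fast-verify' }, frontier: { expanded_order: ['a', 'b'] } };
```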
@@ -135,11 +135,32 @@ export function validatePipelinePlan(plan = {}) {
  if (!Array.isArray(plan.stages) || !plan.stages.length) issues.push('stages');
  if (!Array.isArray(plan.verification) || !plan.verification.length) issues.push('verification');
  if (!plan.route_economy?.mode) issues.push('route_economy');
+ const routeEconomyLatticeIssues = validateRouteEconomyDecisionLattice(plan.route_economy, plan.proof_field);
+ if (routeEconomyLatticeIssues.length) issues.push(...routeEconomyLatticeIssues.map((issue) => `route_economy.decision_lattice:${issue}`));
  if (plan.no_unrequested_fallback_code !== true || !plan.invariants?.includes('no_unrequested_fallback_code')) issues.push('fallback_guard');
  if (!plan.next_actions?.length) issues.push('next_actions');
  return { ok: issues.length === 0, issues };
  }

+ function validateRouteEconomyDecisionLattice(routeEconomy = {}, proof = {}) {
+ const lattice = routeEconomy.decision_lattice;
+ if (!lattice) return [];
+ const issues = [];
+ if (routeEconomy.report_only !== true || routeEconomy.mode !== 'report_only') issues.push('requires_report_only_route_economy');
+ if (lattice.report_only !== true) issues.push('report_only');
+ if (!lattice.selected_path) issues.push('selected_path');
+ if (!Number.isFinite(Number(lattice.selected_f_score))) issues.push('selected_f_score');
+ if (!Number.isFinite(Number(lattice.frontier_count)) || Number(lattice.frontier_count) < 1) issues.push('frontier_count');
+ if (!Number.isFinite(Number(lattice.rejected_alternatives_count))) issues.push('rejected_alternatives_count');
+ if (proof?.attached && proof.decision_lattice) {
+ const source = proof.decision_lattice;
+ if (lattice.selected_path !== source.selected_path?.id) issues.push('selected_path_mismatch');
+ if (Number(lattice.frontier_count) !== Number(source.frontier?.expanded_order?.length || 0)) issues.push('frontier_count_mismatch');
+ if (Number(lattice.rejected_alternatives_count) !== Number(source.rejected_alternatives?.length || 0)) issues.push('rejected_alternatives_count_mismatch');
+ }
+ return issues;
+ }
+
  function normalizeAmbiguity(value = {}, route) {
  const required = value.required ?? !CLARIFICATION_BYPASS_ROUTES.has(route?.id);
  const slots = Number(value.slots || 0);
@@ -173,7 +194,8 @@ function normalizeProofField(report) {
  contract_clarity: report.contract_clarity || null,
  workflow_complexity: report.workflow_complexity || null,
  team_trigger_matrix: report.team_trigger_matrix || null,
- verification_stage_cache: report.verification_stage_cache || null
+ verification_stage_cache: report.verification_stage_cache || null,
+ decision_lattice: report.decision_lattice || null
  };
  }

@@ -199,6 +221,13 @@ function routeEconomyPlan(proof = {}) {
  team_trigger_count: triggers.length,
  active_team_triggers: triggers,
  verification_stage_cache_key: proof.verification_stage_cache?.cache_key || null,
+ decision_lattice: proof.decision_lattice ? {
+ selected_path: proof.decision_lattice.selected_path?.id || null,
+ selected_f_score: proof.decision_lattice.selected_path?.cost?.f ?? null,
+ frontier_count: proof.decision_lattice.frontier?.expanded_order?.length || 0,
+ rejected_alternatives_count: proof.decision_lattice.rejected_alternatives?.length || 0,
+ report_only: proof.decision_lattice.report_only === true
+ } : null,
  deletion_policy: 'do_not_delete_or_skip_pipeline_stages_until_report_only_metrics_are_calibrated'
  };
  }
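The summary object above projects the full lattice report down to a few scalars for the pipeline plan; note `?? null` on the f-score, so a legitimate score of 0 survives where `|| null` would erase it. A standalone sketch:

```javascript
// Report-only summary projection, as in routeEconomyPlan above.
// `??` (not `||`) keeps an f-score of 0 instead of collapsing it to null.
function latticeSummary(lattice) {
  if (!lattice) return null;
  return {
    selected_path: lattice.selected_path?.id || null,
    selected_f_score: lattice.selected_path?.cost?.f ?? null,
    frontier_count: lattice.frontier?.expanded_order?.length || 0,
    rejected_alternatives_count: lattice.rejected_alternatives?.length || 0,
    report_only: lattice.report_only === true
  };
}
```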
@@ -313,7 +342,7 @@ export function promptPipelineContext(prompt, route = null) {
  if (reflectionRequiredForRoute(route)) lines.push(reflectionInstructionText());
  if (route?.id === 'Team') lines.push(`Team route: scouts, TriWiki refresh, debate, consensus, runtime graph compile with concrete task ids and worker inboxes, close planning agents, fresh executors, minimum ${MIN_TEAM_REVIEWER_LANES}-lane review/integration, ${TEAM_SESSION_CLEANUP_ARTIFACT}, reflection, and Honest Mode. ${MIN_TEAM_REVIEW_POLICY_TEXT}`);
  if (route?.id === 'Goal') lines.push('Goal route: write SKS goal bridge artifacts, then use Codex native /goal persistence for create, pause, resume, and clear continuation controls.');
- if (route?.id === 'PPT') lines.push(`PPT route: before design or PDF work, infer and seal delivery context, audience profile including average age/job/industry, STP strategy, decision context, and at least three pain-point to solution mappings from the prompt, TriWiki/current-code defaults, and conservative policy. Keep the visual system simple, restrained, and information-first; design detail should come from hierarchy, spacing, alignment, rules, and subtle accents rather than decorative overdesign. ${pptPipelineAllowlistPolicyText()} If generated image assets or slide visual critique are needed, required evidence must come from Codex App $imagegen/gpt-image-2 (${CODEX_APP_IMAGE_GENERATION_DOC_URL}); direct API fallback, placeholders, and prose-only substitutes do not satisfy the route gate. ${CODEX_IMAGEGEN_REQUIRED_POLICY} Then build source ledger, fact ledger, image asset ledger, storyboard with aha moments, style tokens, editable source HTML under source-html/, PDF artifact, render QA, bounded review ledger/iteration report, PPT-only temporary build file cleanup, and ppt-parallel-report.json so independent strategy/render/file-write phases stay parallel-friendly, then reflection and Honest Mode.`);
+ if (route?.id === 'PPT') lines.push(`PPT route: before design or PDF work, infer and seal delivery context, audience profile including average age/job/industry, STP strategy, decision context, and at least three pain-point to solution mappings from the prompt, TriWiki/current-code defaults, and conservative policy. Keep the visual system simple, restrained, and information-first; design detail should come from hierarchy, spacing, alignment, rules, and subtle accents rather than decorative overdesign. ${pptPipelineAllowlistPolicyText()} If generated image assets or slide visual critique are needed, actively invoke the loaded imagegen skill through Codex App $imagegen/gpt-image-2 (${CODEX_APP_IMAGE_GENERATION_DOC_URL}), save the selected raster output into the mission assets/review evidence path, and record that real path before build/final. Direct API fallback, placeholders, HTML/CSS stand-ins, and prose-only substitutes do not satisfy the route gate. ${CODEX_IMAGEGEN_REQUIRED_POLICY} Then build source ledger, fact ledger, image asset ledger, storyboard with aha moments, style tokens, editable source HTML under source-html/, PDF artifact, render QA, bounded review ledger/iteration report, PPT-only temporary build file cleanup, and ppt-parallel-report.json so independent strategy/render/file-write phases stay parallel-friendly, then reflection and Honest Mode.`);
  if (route?.id === 'ImageUXReview') lines.push(`Image UX Review route: ${imageUxReviewPipelinePolicyText()} Use ${IMAGE_UX_REVIEW_POLICY_ARTIFACT}, ${IMAGE_UX_REVIEW_SCREEN_INVENTORY_ARTIFACT}, ${IMAGE_UX_REVIEW_GENERATED_REVIEW_LEDGER_ARTIFACT}, ${IMAGE_UX_REVIEW_ISSUE_LEDGER_ARTIFACT}, ${IMAGE_UX_REVIEW_ITERATION_REPORT_ARTIFACT}, and ${IMAGE_UX_REVIEW_GATE_ARTIFACT} as the route evidence set. The route may suggest safe fixes only when the user requested fixing; otherwise report findings and blockers.`);
  if (route?.id === 'AutoResearch') lines.push('AutoResearch route: load autoresearch-loop plus seo-geo-optimizer when SEO/GEO, discoverability, README, npm, GitHub stars, ranking, or AI-search visibility is relevant.');
  if (route?.id === 'DB') lines.push('DB route: scan/check database risk first; destructive DB operations remain forbidden.');
package/src/core/ppt.mjs CHANGED
@@ -1,7 +1,7 @@
  import path from 'node:path';
  import fsp from 'node:fs/promises';
  import { nowIso, readJson, sha256, writeJsonAtomic, writeTextAtomic } from './fsx.mjs';
- import { AWESOME_DESIGN_MD_REFERENCE, DESIGN_SYSTEM_SSOT, GETDESIGN_REFERENCE, PPT_CONDITIONAL_SKILL_ALLOWLIST, PPT_PIPELINE_MCP_ALLOWLIST, PPT_PIPELINE_SKILL_ALLOWLIST } from './routes.mjs';
+ import { AWESOME_DESIGN_MD_REFERENCE, CODEX_APP_IMAGE_GENERATION_DOC_URL, CODEX_IMAGEGEN_EVIDENCE_SOURCE, DESIGN_SYSTEM_SSOT, GETDESIGN_REFERENCE, PPT_CONDITIONAL_SKILL_ALLOWLIST, PPT_PIPELINE_MCP_ALLOWLIST, PPT_PIPELINE_SKILL_ALLOWLIST } from './routes.mjs';

  export const PPT_AUDIENCE_STRATEGY_ARTIFACT = 'ppt-audience-strategy.json';
  export const PPT_GATE_ARTIFACT = 'ppt-gate.json';
@@ -482,18 +482,30 @@ export function planPptImageAssets(contract = {}, storyboard = buildPptStoryboar
  ].filter(Boolean);
  return selected.slice(0, maxAssets).map(({ request, page }, index) => {
  const id = compactId('ppt-image', `${index + 1}:${request || page?.claim || page?.kind}`);
+ const prompt = buildImageAssetPrompt({ contract, page, request, styleTokens });
+ const relPath = path.join(PPT_ASSET_DIR, `${safeFileSlug(id)}.png`);
  return {
  id,
  slide: page?.number || index + 1,
  role: index === 0 ? 'hero_visual' : 'supporting_visual',
  status: 'planned',
- prompt: buildImageAssetPrompt({ contract, page, request, styleTokens }),
+ prompt,
  model: 'gpt-image-2',
  size: cleanText(contract.answers?.PRESENTATION_IMAGE_SIZE, '1536x1024'),
  quality: cleanText(contract.answers?.PRESENTATION_IMAGE_QUALITY, 'medium'),
  output_format: 'png',
- rel_path: path.join(PPT_ASSET_DIR, `${safeFileSlug(id)}.png`),
- html_src: `../${path.join(PPT_ASSET_DIR, `${safeFileSlug(id)}.png`)}`
+ rel_path: relPath,
+ html_src: `../${relPath}`,
+ imagegen_invocation: {
+ required_skill: 'imagegen',
+ command: '$imagegen',
+ surface: 'codex_app_builtin_image_generation',
+ evidence_source: CODEX_IMAGEGEN_EVIDENCE_SOURCE,
+ model: 'gpt-image-2',
+ tool_mode: 'built_in_image_gen',
+ prompt,
+ save_policy: `After generation, move or copy the selected output into ${relPath} and record output_path.`
+ }
  };
  });
  }
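Each planned asset derives its on-disk path from its id. `safeFileSlug` and `PPT_ASSET_DIR` are defined elsewhere in ppt.mjs, so this sketch substitutes assumed versions just to show the shape of `rel_path`/`html_src`:

```javascript
// Assumed slug rules and directory constant -- illustrative only; the real
// safeFileSlug/PPT_ASSET_DIR definitions are outside this diff.
function safeFileSlug(id) {
  return String(id).toLowerCase().replace(/[^a-z0-9]+/g, '-').replace(/^-+|-+$/g, '');
}

const PPT_ASSET_DIR = 'assets';
const relPath = `${PPT_ASSET_DIR}/${safeFileSlug('ppt-image:1 Hero!')}.png`;
// source-html/ pages sit one level below the mission root, hence the ../ prefix.
const htmlSrc = `../${relPath}`;
```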
@@ -541,7 +553,16 @@ export async function buildPptImageAssetLedger(dir, contract = {}, storyboard =
  contract_hash: contract.sealed_hash || null,
  required,
  policy: 'Required PPT image resources must be generated through Codex App $imagegen/gpt-image-2 and recorded as real output files; direct API fallback, fabricated files, and placeholder ledgers do not satisfy this gate.',
- codex_app_imagegen_doc: 'https://developers.openai.com/codex/app/features#image-generation',
+ codex_app_imagegen_doc: CODEX_APP_IMAGE_GENERATION_DOC_URL,
+ imagegen_execution: {
+ required_skill: 'imagegen',
+ command: '$imagegen',
+ surface: 'codex_app_builtin_image_generation',
+ evidence_source: CODEX_IMAGEGEN_EVIDENCE_SOURCE,
+ model: 'gpt-image-2',
+ tool_mode: 'built_in_image_gen',
+ output_requirement: 'Generated raster files must be copied into the mission assets/ directory and referenced by output_path.'
+ },
  provider: {
  model: 'gpt-image-2',
  surface: 'codex_app_$imagegen',
@@ -559,7 +580,7 @@
  required
  ? 'The sealed PPT contract requires generated image assets; missing Codex App $imagegen/gpt-image-2 output blocks the PPT gate.'
  : 'No generated image asset requirement was detected; assets remain optional and are not generated to avoid unrequested API cost.',
- 'Run Codex App $imagegen/gpt-image-2 for each blocked asset, place the generated raster under assets/, then rerun the PPT build so existing generated files are verified.'
+ 'Invoke the loaded imagegen skill with Codex App $imagegen/gpt-image-2 for each blocked asset, place the generated raster under assets/, then rerun the PPT build so existing generated files are verified.'
  ]
  };
  }
@@ -592,8 +613,12 @@ export function buildPptReviewPolicy(contract = {}, storyboard = buildPptStorybo
  },
  visual_review: {
  model: 'gpt-image-2',
+ required_skill: 'imagegen',
+ command: '$imagegen',
+ surface: 'codex_app_builtin_image_generation',
+ evidence_source: CODEX_IMAGEGEN_EVIDENCE_SOURCE,
  persona: '대한민국 TOSS UI/UX 시니어 총괄 디자이너',
- codex_app_imagegen_doc: 'https://developers.openai.com/codex/app/features#image-generation',
+ codex_app_imagegen_doc: CODEX_APP_IMAGE_GENERATION_DOC_URL,
  model_doc: 'https://developers.openai.com/api/docs/models/gpt-image-2',
  mode: explicitlyRequired ? 'required_by_contract' : 'codex_app_when_available',
  required_for_gate: explicitlyRequired,
@@ -685,7 +710,7 @@ export function buildPptReviewLedger({ contract = {}, storyboard, styleTokens, f
  title: 'Required gpt-image-2 visual review evidence missing',
  detail: 'The sealed PPT contract explicitly requested image/gpt-image-2 visual critique, but no Codex App imagegen review evidence was supplied.',
  source: 'codex_app_imagegen_gate',
- action: 'Run the bounded gpt-image-2 slide review loop in Codex App and record evidence paths before final output.'
+ action: 'Invoke the loaded imagegen skill through Codex App $imagegen/gpt-image-2, run the bounded slide review loop, and record evidence paths before final output.'
  }));
  }
  const blocking = issues.filter((issue) => ['P0', 'P1'].includes(issue.severity));
@@ -1,5 +1,6 @@
  import path from 'node:path';
  import { nowIso, readText, rel, runProcess, sha256, writeJsonAtomic } from './fsx.mjs';
+ import { buildDecisionLatticeReport, validateDecisionLatticeReport } from './decision-lattice.mjs';

  export const PROOF_FIELD_SCHEMA_VERSION = 1;
  export const FAST_LANE_MIN_SCORE = 0.75;
@@ -88,6 +89,20 @@ export async function buildProofField(root, opts = {}) {
  const verificationStageCache = verificationStageCachePlan({ sourceHash, changedFiles, verification: fastLane.verification });
  const simplicity = outcomeScorecard({ intent, changedFiles, selectedCones, risk, negativeWork, fastLane, workflowComplexity });
  const executionLane = executionLaneDecision({ fastLane, simplicity, workflowComplexity, teamTriggerMatrix });
+ const decisionLattice = normalizeDecisionLatticeReport(await buildDecisionLatticeReport({
+ intent,
+ changed_files: changedFiles,
+ proof_cones: selectedCones,
+ risk,
+ contract_clarity: contractClarity,
+ workflow_complexity: workflowComplexity,
+ team_trigger_matrix: teamTriggerMatrix,
+ verification_stage_cache: verificationStageCache,
+ simplicity_scorecard: simplicity,
+ fast_lane_decision: fastLane,
+ execution_lane: executionLane,
+ scoring_formula: 'simplicity_scorecard.score + contract_clarity.score - workflow_complexity.score - active_team_trigger_penalty'
+ }));
  return {
  schema_version: PROOF_FIELD_SCHEMA_VERSION,
  generated_at: nowIso(),
@@ -102,6 +117,7 @@ export async function buildProofField(root, opts = {}) {
  workflow_complexity: workflowComplexity,
  team_trigger_matrix: teamTriggerMatrix,
  verification_stage_cache: verificationStageCache,
+ decision_lattice: decisionLattice,
  simplicity_scorecard: simplicity,
  execution_lane: executionLane,
  proof_cones: selectedCones,
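decision-lattice.mjs itself is not included in this diff, so the A* internals behind `buildDecisionLatticeReport` are not visible here. As a hedged sketch of the report-only idea, order candidate verification paths by f = g + h over proof-debt-style costs and record the expansion order for the report; the graph, edge costs, and zero heuristic below are entirely illustrative:

```javascript
// Report-only A* sketch over candidate verification paths. Node names, edge
// costs, and the zero heuristic are illustrative assumptions, not the shipped
// decision-lattice.mjs implementation (which this diff does not include).
function selectPath(edges, start, goal, h) {
  const open = [{ id: start, g: 0, path: [start] }];
  const expandedOrder = [];            // recorded for the report's frontier
  const bestG = new Map([[start, 0]]);
  while (open.length) {
    // Pop the node with the lowest f = g + h.
    open.sort((a, b) => (a.g + h(a.id)) - (b.g + h(b.id)));
    const current = open.shift();
    expandedOrder.push(current.id);
    if (current.id === goal) {
      return { selected_path: current.path, f: current.g + h(goal), expanded_order: expandedOrder };
    }
    for (const [to, cost] of edges[current.id] || []) {
      const g = current.g + cost;
      if (g < (bestG.get(to) ?? Infinity)) {
        bestG.set(to, g);
        open.push({ id: to, g, path: [...current.path, to] });
      }
    }
  }
  return null; // no path: the report would record an unreachable goal instead
}
```

The selected path, its f-score, and `expanded_order` map directly onto the `selected_path`, `selected_f_score`, and `frontier.expanded_order` fields summarized elsewhere in this diff.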
@@ -133,6 +149,8 @@ export function validateProofFieldReport(report = {}) {
  if (!Number.isFinite(Number(report.workflow_complexity?.score))) issues.push('workflow_complexity');
  if (!Array.isArray(report.team_trigger_matrix?.triggers)) issues.push('team_trigger_matrix');
  if (report.verification_stage_cache?.report_only !== true || !report.verification_stage_cache?.cache_key) issues.push('verification_stage_cache');
+ const latticeValidation = validateDecisionLatticeReport(report.decision_lattice);
+ if (!latticeValidation.ok) issues.push(`decision_lattice:${latticeValidation.issues.join('|')}`);
  if (!report.execution_lane?.lane) issues.push('execution_lane');
  if (report.execution_lane?.lane === SPEED_LANE_POLICY.fast_lane && report.execution_lane?.score < FAST_LANE_MIN_SCORE) issues.push('execution_lane_score');
  if (!Array.isArray(report.proof_cones)) issues.push('proof_cones');
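The validation lines added here fold the sub-report's issues into the parent issue list as one `decision_lattice:`-prefixed, `|`-joined entry. A minimal sketch of that folding, using a hypothetical stand-in validator (the real `validateDecisionLatticeReport` lives in `decision-lattice.mjs`; only the `{ ok, issues }` return shape is taken from the diff's call site):

```javascript
// Hypothetical stand-in: returns { ok, issues } like the real validator is
// assumed to, based on how the diff consumes it.
function validateDecisionLatticeReport(report = {}) {
  const issues = [];
  if (report.report_only !== true) issues.push('report_only');
  if (!report.selected_path?.id) issues.push('selected_path');
  return { ok: issues.length === 0, issues };
}

// Parent validation folds the sub-report's issues under one prefixed entry,
// so a single array slot carries the whole lattice failure detail.
const issues = [];
const latticeValidation = validateDecisionLatticeReport({ report_only: false });
if (!latticeValidation.ok) issues.push(`decision_lattice:${latticeValidation.issues.join('|')}`);
console.log(issues); // → [ 'decision_lattice:report_only|selected_path' ]
```

Keeping the lattice failures in one prefixed slot means callers that only count `issues.length` see one failure per sub-report, while reviewers still get the detail after the colon.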
@@ -158,12 +176,25 @@ export async function proofFieldFixture() {
    outcome_rubric_present: report.outcome_rubric.length === OUTCOME_RUBRIC.length,
    adversarial_lenses_present: report.outcome_rubric.every((item) => item.adversarial_lens) && report.simplicity_scorecard.criteria.every((item) => item.adversarial_lens),
    route_economy_present: report.contract_clarity?.report_only === true && report.workflow_complexity?.report_only === true && report.team_trigger_matrix?.report_only === true && report.verification_stage_cache?.report_only === true,
+   decision_lattice_present: validateDecisionLatticeReport(report.decision_lattice).ok,
+   decision_lattice_report_only: report.decision_lattice?.report_only === true,
+   decision_lattice_selected_path: Boolean(report.decision_lattice?.selected_path?.id),
+   decision_lattice_frontier_present: Array.isArray(report.decision_lattice?.frontier?.expanded_order) && report.decision_lattice.frontier.expanded_order.length > 0,
+   decision_lattice_rejections_present: Array.isArray(report.decision_lattice?.rejected_alternatives),
+   decision_lattice_scoring_formula_present: Boolean(report.decision_lattice?.scoring_formula),
    simplicity_score_usable: Number(report.simplicity_scorecard?.score) >= FAST_LANE_MIN_SCORE,
    execution_fast_lane_selected: report.execution_lane?.lane === SPEED_LANE_POLICY.fast_lane
    }
  };
  }
 
+ function normalizeDecisionLatticeReport(report = {}) {
+   return {
+     ...report,
+     scoring_formula: report.scoring_formula || report.research_basis?.scoring_formula || null
+   };
+ }
+
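The new helper backfills `scoring_formula` from the lattice report's `research_basis` when the top-level field is absent, so downstream fixture checks always see a stable key. A small demonstration of the fallback chain (the helper is copied from the diff; the sample reports are made up):

```javascript
// Copied from the diff: top-level field wins, research_basis is the
// fallback, and null is the explicit floor for missing formulas.
function normalizeDecisionLatticeReport(report = {}) {
  return {
    ...report,
    scoring_formula: report.scoring_formula || report.research_basis?.scoring_formula || null
  };
}

console.log(normalizeDecisionLatticeReport({ scoring_formula: 'a + b' }).scoring_formula); // 'a + b'
console.log(normalizeDecisionLatticeReport({ research_basis: { scoring_formula: 'c - d' } }).scoring_formula); // 'c - d'
console.log(normalizeDecisionLatticeReport({}).scoring_formula); // null
```

Normalizing to `null` rather than leaving the key `undefined` keeps the serialized proof-field report shape identical whether or not the lattice supplied a formula.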
  async function gitChangedFiles(root) {
    const result = await runProcess('git', ['diff', '--name-only', 'HEAD', '--'], { cwd: root, timeoutMs: 10000, maxOutputBytes: 128 * 1024 });
    if (result.code !== 0) return [];
@@ -92,18 +92,14 @@ export const RECOMMENDED_DESIGN_REFERENCES = [GETDESIGN_REFERENCE, AWESOME_DESIG
 
  export const PPT_PIPELINE_SKILL_ALLOWLIST = Object.freeze([
    'ppt',
+   'imagegen',
    'getdesign-reference',
    'prompt-pipeline',
    REFLECTION_SKILL_NAME,
    'honest-mode'
  ]);
 
- export const PPT_CONDITIONAL_SKILL_ALLOWLIST = Object.freeze([
-   {
-     skill: 'imagegen',
-     condition: 'only_when_the_sealed_ppt_contract_explicitly_requires_generated_raster_assets'
-   }
- ]);
+ export const PPT_CONDITIONAL_SKILL_ALLOWLIST = Object.freeze([]);
 
  export const PPT_PIPELINE_MCP_ALLOWLIST = Object.freeze([
    {
@@ -113,7 +109,10 @@ export const PPT_PIPELINE_MCP_ALLOWLIST = Object.freeze([
  ]);
 
  export function pptPipelineAllowlistPolicyText() {
-   return `PPT pipeline allowlist: during $PPT design/render work, ignore installed skills and MCPs that are not explicitly part of the $PPT pipeline. The purpose is to prevent AI-like generic presentation design: decorative gradients, nested cards, vague SaaS visuals, and style choices not grounded in the audience, source material, getdesign reference, or the project design SSOT. Required skills are ${PPT_PIPELINE_SKILL_ALLOWLIST.join(', ')}. Do not use generic design skills such as design-artifact-expert, design-ui-editor, or design-system-builder for $PPT just because they are installed. $PPT design must use getdesign-reference plus the built-in PPT design implementation pipeline: ${DESIGN_SYSTEM_SSOT.authority_file} when present, ${DESIGN_SYSTEM_SSOT.builder_prompt} as the builder prompt when missing, and route-local ppt-style-tokens.json as the fused design projection. Conditional skills/MCPs are allowed only when their condition is sealed in the contract: ${PPT_CONDITIONAL_SKILL_ALLOWLIST.map((entry) => `${entry.skill}=${entry.condition}`).join('; ')}; ${PPT_PIPELINE_MCP_ALLOWLIST.map((entry) => `${entry.mcp}=${entry.condition}`).join('; ')}. Fact, image, and review evidence are first-class artifacts: gather user-provided context and required web/Context7 evidence into ppt-fact-ledger.json, block unsupported critical external claims, plan required image resources through ppt-image-asset-ledger.json, then run a bounded review loop recorded in ppt-review-policy.json, ppt-review-ledger.json, and ppt-iteration-report.json. Required raster asset or generated visual-review evidence must come from Codex App $imagegen/gpt-image-2; direct API fallback, placeholder files, and prose-only substitutes do not satisfy the route gate. The review loop caps full-deck passes at 2, slide retries at 2, requires P0/P1 issue count to be zero, targets score >= 0.88, and stops when improvement delta is below 0.03 or evidence is missing. For Codex App visual critique, use imagegen/gpt-image-2 (${CODEX_APP_IMAGE_GENERATION_DOC_URL}) when required or available; never simulate missing gpt-image-2 output. If required image-review evidence is unavailable, record the blocker instead of passing the gate. ${CODEX_IMAGEGEN_REQUIRED_POLICY}`;
+   const conditionalSkills = PPT_CONDITIONAL_SKILL_ALLOWLIST.length
+     ? PPT_CONDITIONAL_SKILL_ALLOWLIST.map((entry) => `${entry.skill}=${entry.condition}`).join('; ')
+     : 'none';
+   return `PPT pipeline allowlist: during $PPT design/render work, ignore installed skills and MCPs that are not explicitly part of the $PPT pipeline. The purpose is to prevent AI-like generic presentation design: decorative gradients, nested cards, vague SaaS visuals, and style choices not grounded in the audience, source material, getdesign reference, or the project design SSOT. Required skills are ${PPT_PIPELINE_SKILL_ALLOWLIST.join(', ')}. The imagegen skill is required for $PPT so Codex App can invoke official built-in $imagegen/gpt-image-2 for every generated raster asset or generated visual-review image; do not route PPT imagery through direct API fallback. Do not use generic design skills such as design-artifact-expert, design-ui-editor, or design-system-builder for $PPT just because they are installed. $PPT design must use getdesign-reference plus the built-in PPT design implementation pipeline: ${DESIGN_SYSTEM_SSOT.authority_file} when present, ${DESIGN_SYSTEM_SSOT.builder_prompt} as the builder prompt when missing, and route-local ppt-style-tokens.json as the fused design projection. Conditional skills/MCPs are allowed only when their condition is sealed in the contract: ${conditionalSkills}; ${PPT_PIPELINE_MCP_ALLOWLIST.map((entry) => `${entry.mcp}=${entry.condition}`).join('; ')}. Fact, image, and review evidence are first-class artifacts: gather user-provided context and required web/Context7 evidence into ppt-fact-ledger.json, block unsupported critical claims, plan required image resources through ppt-image-asset-ledger.json, then run a bounded review loop recorded in ppt-review-policy.json, ppt-review-ledger.json, and ppt-iteration-report.json. Required raster asset or generated visual-review evidence must come from Codex App $imagegen/gpt-image-2; direct API fallback, placeholder files, and prose-only substitutes do not satisfy the route gate. The review loop caps full-deck passes at 2, slide retries at 2, requires P0/P1 issue count to be zero, targets score >= 0.88, and stops when improvement delta is below 0.03 or evidence is missing. For Codex App visual critique, invoke $imagegen/gpt-image-2 (${CODEX_APP_IMAGE_GENERATION_DOC_URL}) when required; never simulate missing gpt-image-2 output. If required image-review evidence is unavailable, record the blocker instead of passing the gate. ${CODEX_IMAGEGEN_REQUIRED_POLICY}`;
  }
 
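With the conditional allowlist now empty, the new ternary keeps the policy sentence readable by interpolating `none` instead of an empty string. A sketch of that rendering, extracted into a hypothetical `renderConditionalSkills` helper under the assumption (taken from the removed lines) that entries keep the `{ skill, condition }` shape:

```javascript
const PPT_CONDITIONAL_SKILL_ALLOWLIST = Object.freeze([]);

// Hypothetical helper mirroring the ternary in pptPipelineAllowlistPolicyText:
// an empty allowlist renders as 'none' rather than a dangling empty clause.
function renderConditionalSkills(allowlist) {
  return allowlist.length
    ? allowlist.map((entry) => `${entry.skill}=${entry.condition}`).join('; ')
    : 'none';
}

console.log(renderConditionalSkills(PPT_CONDITIONAL_SKILL_ALLOWLIST)); // 'none'
console.log(renderConditionalSkills([{ skill: 'imagegen', condition: 'sealed_in_contract' }])); // 'imagegen=sealed_in_contract'
```

The fallback matters because the string is embedded mid-sentence in the policy text; without it, an empty list would leave "sealed in the contract: ;" in the prompt.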
  export function getdesignReferencePolicyText() {