@voybio/ace-swarm 0.2.3 → 0.2.5

package/README.md CHANGED
@@ -2,24 +2,25 @@
 
 Autonomous Coding Entity (ACE) is a local MCP server and CLI for agent-assisted coding.
 
- `ace-swarm` provides Claude Code, Codex, Cursor, VS Code, and local-model workflows with a shared runtime: durable workspace state, typed handoffs, scheduler support, change intelligence, and operator tooling. All state lives in a single binary file — `.agents/ACE/ace-state.ace` — so the workspace keeps one source of truth instead of a forest of generated state files.
+ `ace-swarm` provides Claude Code, Codex, Cursor, VS Code, and provider-backed workflows with a shared runtime: durable workspace state, typed handoffs, scheduler support, change intelligence, and operator tooling. All state lives in a single binary file — `.agents/ACE/ace-state.ace` — so the workspace keeps one source of truth instead of a forest of generated state files.
 
- Modern coding agents can already read files, write code, run commands, and call tools. The missing layer is coordination that survives restarts, model switches, and multi-agent work. ACE keeps the workflow in the store and projects only the files external clients still need. For local models, ACE provides task context, tool surfaces, status events, evidence trails, and resumable state.
+ Modern coding agents can already read files, write code, run commands, and call tools. The missing layer is coordination that survives restarts, model switches, and multi-agent work. ACE keeps the workflow in the store and projects only the files external clients still need. For provider-backed runs, ACE provides task context, tool surfaces, status events, evidence trails, and resumable state.
 
 ## ACE Manifesto
 
 - Agent work should survive restarts, handoffs, and review.
 - Handoffs should be typed, evidence-linked, and validated.
 - Shared workflow state should live in the workspace, not only in chat history.
- - Local models should get the same structure and observability as hosted models.
+ - ACE-Orchestrator is the default entrypoint, and every provider can delegate through the full agent set.
+ - Provider-backed runs should get the same structure and observability.
 - Bootstrap should be contained and reversible: ACE writes into `.agents/ACE/ace-state.ace`.
 
- ### Serving local models
+ ### Serving model providers
 
- ACE provides local models with the context they cannot infer on their own: task state, tool surfaces, status events, evidence trails, and the next move in the loop.
+ ACE provides model providers with the context they cannot infer on their own: task state, tool surfaces, status events, evidence trails, and the next move in the loop.
 
- - `ace init --llm ollama` or `ace init --llm llama.cpp` records a local runtime profile in the store.
- - `ace doctor` discovers the endpoint, checks the selected model, and keeps the runtime honest.
+ - `ace init --llm <provider>` records the selected runtime profile in the store.
+ - `ace doctor` validates local or hosted runtime wiring, checks the selected model when the provider exposes listings, and keeps the runtime honest.
 - `ace mcp` and `ace tui` expose the same workspace truth to the host, the terminal, and the model.
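A minimal sketch of that loop against a local Ollama runtime (assuming an Ollama daemon is already running; only flags documented for `ace init` and `ace doctor` are used):

```bash
# Record a local runtime profile in the store, verify it, then serve MCP.
ace init --llm ollama
ace doctor --scan
ace mcp
```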
 
 ## Why This Exists Now
@@ -28,7 +29,7 @@ Frontier models are getting more capable and more restricted at the same time. P
 - Claude Code proved the terminal-native agent loop.
 - OpenAI connectors and MCP widened the tool surface.
- - Ollama and llama.cpp keep local models in play.
+ - Ollama and llama.cpp keep local runtimes in play.
 - MCP remains the common protocol surface across clients and runtimes.
 
 ACE sits underneath that stack. It does not compete with the host you already use. It provides that host with a durable local runtime.
@@ -128,6 +129,15 @@ ace mcp
 If you already have a local runtime running, `ace doctor --scan` will probe common Ollama and llama.cpp endpoints and write the selected profile back into the store.
 
+ Hosted provider path with Codex:
+
+ ```bash
+ npx -y ace-swarm turnkey --project "My Project" --llm codex --model gpt-5
+ export OPENAI_API_KEY=...
+ ace doctor --llm codex --model gpt-5
+ ace mcp
+ ```
+
 ## What Bootstrap Writes
 
 ACE bootstrap is intentionally contained. The store is authoritative at `.agents/ACE/ace-state.ace`; only a few host-facing files land in the workspace when requested.
@@ -205,8 +215,8 @@ In that setup, the host remains the interface. ACE becomes the state contract un
 
 Local models are good at generating text. They usually need help seeing the workspace, hearing the handoff trail, and remembering what changed two turns ago. ACE closes that gap.
 
- - `ace init --llm ollama` or `ace init --llm llama.cpp` seeds the profile inside `.agents/ACE/ace-state.ace`.
- - `ace doctor` verifies that the selected runtime is reachable and configured.
+ - `ace init --llm <provider>` seeds the profile inside `.agents/ACE/ace-state.ace`.
+ - `ace doctor` verifies that the selected runtime is configured and, when possible, reachable.
 - `ace tui` can talk to either local runtime and surface the same workspace state the MCP server sees.
 - The ACE model bridge gives local runs tool selection, execution flow, and persisted state instead of a fragile one-shot chat loop.
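A short companion sketch for an already-seeded profile, using only the commands listed above:

```bash
# Re-check the configured runtime, then open the operator surface on the same store.
ace doctor
ace tui
```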
 
@@ -236,7 +246,7 @@ If you want local agents with memory, evidence, and handoff discipline, this is
 | `ace turnkey [options]` | project the store-backed host surface |
 | `ace mcp` / `ace serve` | start the MCP server over stdio |
 | `ace tui [options]` | open the terminal operator surface |
- | `ace doctor [options]` | validate local-model readiness |
+ | `ace doctor [options]` | validate ACE runtime readiness |
 | `ace mcp-config [--client <name>|--all]` | print baked client snippets for optional install |
 | `ace paths` | show resolved package and workspace paths |
 +---------------------------------------------+---------------------------------------------------+
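As one worked example of the table above (the client name is just one of the supported values):

```bash
# Print the baked MCP snippet for a single client, then show where ACE resolves its paths.
ace mcp-config --client codex
ace paths
```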
@@ -1,9 +1,15 @@
 ---
 name: ACE-Init
 description: Bootstrap-only initializer that materializes ACE state, validates readiness, and hands control to ACE-Orchestrator.
- target: vscode, codex
+ target: vscode, codex, claude, cursor, antigravity, copilot, gemini
 tools:
 - vscode
+ - codex
+ - claude
+ - cursor
+ - antigravity
+ - copilot
+ - gemini
 - execute
 - read
 - edit
@@ -1,11 +1,14 @@
 ---
 name: ACE-Orchestrator
 description: Global flow-orchestration contract for routing venture, UX, and engineering work across swarm and composable agents.
- target: vscode, codex
+ target: vscode, codex, claude, cursor, antigravity, copilot, gemini
 tools:
 - vscode
 - codex
 - claude
+ - cursor
+ - antigravity
+ - copilot
 - gemini
 - execute
 - read
@@ -173,6 +173,20 @@ Confidence: <score>% (<label>)
 
 ---
 
+ ## Validation Surface
+
+ - Run this skill against at least one known-pass baseline and one known-fail fixture before trusting a promotion decision.
+ - Validation command example: execute the configured eval harness runner and confirm deterministic suite counts in `agent-state/EVAL_REPORT.md`.
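One possible shape for that check, assuming the suite runs through npm and the report lands at the path named above (the grep pattern is illustrative):

```bash
# Run the configured suite, then confirm the eval report actually records suite counts.
npm test --silent
grep -i "suite" agent-state/EVAL_REPORT.md
```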
180
+
181
+ ---
182
+
183
+ ## Portability Notes
184
+
185
+ - Keep this skill vendor-neutral: it should work across Codex, Claude, Cursor, and Antigravity runtimes.
186
+ - Avoid provider-specific assumptions in suite execution or report parsing; rely on ACE artifacts as the source of truth.
187
+
188
+ ---
189
+
176
190
  ## Anti-Patterns
177
191
 
178
192
  | Anti-Pattern | Correct Behavior |
@@ -132,6 +132,20 @@ Also use when:
 
 ---
 
+ ## Validation Surface
+
+ - Run this check with one valid and one intentionally broken handoff payload to verify deterministic PASS/FAIL behavior.
+ - Validation command example: execute the handoff lint path used by the runtime and confirm schema, route, and evidence checks all emit rule-level outcomes.
+
+ ---
+
+ ## Portability Notes
+
+ - Keep lint output schema and rule IDs stable so different clients can consume results uniformly.
+ - Do not depend on provider-specific behavior; handoff validation must rely only on ACE artifacts and schemas.
+
+ ---
+
 ## Anti-Patterns
 
 | Anti-Pattern | Correct Behavior |
@@ -156,6 +156,20 @@ No incident closure without ALL of:
 
 ---
 
+ ## Validation Surface
+
+ - Run an incident simulation with synthetic `GATE_FAILED` events and verify severity classification, owner assignment, and timeline reconstruction are deterministic.
+ - Validation command example: replay a known event stream and confirm `global-state/INCIDENTS.md` and `agent-state/INCIDENT_TIMELINE.md` contain matching evidence-linked rows.
+
+ ---
+
+ ## Portability Notes
+
+ - Incident lifecycle states and emitted event names must remain stable across clients.
+ - Keep the protocol provider-agnostic: all decisions should be derivable from ACE state artifacts and event logs.
+
+ ---
+
 ## Anti-Patterns
 
 | Anti-Pattern | Correct Behavior |
@@ -161,6 +161,20 @@ Generated: <ISO8601>
 
 ---
 
+ ## Validation Surface
+
+ - Run this skill on a fixture with known duplicates and contradictions, then verify the reconciliation report counts match expected values.
+ - Validation command example: execute memory curation and confirm no source logs are modified while `MEMORY_INDEX.md` updates with provenance links.
+
+ ---
+
+ ## Portability Notes
+
+ - Preserve schema and section labels so downstream consumers can parse curated memory artifacts consistently.
+ - Keep curation logic independent of model/provider specifics; evidence linkage is the portable contract.
+
+ ---
+
 ## Anti-Patterns
 
 | Anti-Pattern | Correct Behavior |
@@ -160,6 +160,20 @@ Reason: <one sentence>
 
 ---
 
+ ## Validation Surface
+
+ - Validate this skill with one known-pass and one known-fail release packet to confirm gate outcomes are deterministic.
+ - Validation command example: run release sentry checks and verify `RELEASE_DECISION.md` always includes explicit gate evidence rows.
+
+ ---
+
+ ## Portability Notes
+
+ - Keep decision enums and gate names stable so any client can consume release outputs consistently.
+ - Avoid provider-specific assumptions; release readiness must be computed from ACE artifacts only.
+
+ ---
+
 ## Anti-Patterns
 
 | Anti-Pattern | Correct Behavior |
@@ -158,6 +158,20 @@ Every risk in `RISKS.md` must contain these fields:
 
 ---
 
+ ## Validation Surface
+
+ - Run this skill against a fixture risk set with expected probability-impact scores and verify computed tiers match exactly.
+ - Validation command example: execute risk quantification and confirm HIGH/CRITICAL entries always include owner, mitigation, and verification condition.
+
+ ---
+
+ ## Portability Notes
+
+ - Keep scoring scales and tier thresholds explicit and stable across clients.
+ - Quantification must remain vendor-neutral and based on ACE state artifacts, not model-specific behavior.
+
+ ---
+
 ## Anti-Patterns
 
 | Anti-Pattern | Correct Behavior |
@@ -151,6 +151,20 @@ Use when:
 
 ---
 
+ ## Validation Surface
+
+ - Validate schema updates against both old and new sample payload fixtures before promoting contract changes.
+ - Validation command example: run JSON-schema validation for backward and forward compatibility samples and record results in `EVIDENCE_LOG.md`.
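A hedged sketch of that compatibility check, assuming `ajv-cli` is installed; the schema and fixture paths are illustrative:

```bash
# Validate an old and a new sample payload against the updated contract schema.
ajv validate -s schemas/contract.schema.json -d fixtures/payload.old.json
ajv validate -s schemas/contract.schema.json -d fixtures/payload.new.json
```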
+
+ ---
+
+ ## Portability Notes
+
+ - Preserve stable schema IDs, version fields, and migration metadata so any ACE client can consume updates.
+ - Keep evolution rules provider-agnostic; compatibility decisions must be artifact-driven, not model-driven.
+
+ ---
+
 ## Anti-Patterns
 
 | Anti-Pattern | Correct Behavior |
@@ -161,6 +161,20 @@ Overall: <PASS | WARN | CRITICAL>
 
 ---
 
+ ## Validation Surface
+
+ - Run audits against fixtures with known contradictions to verify deterministic CRITICAL/WARN classification.
+ - Validation command example: execute the state audit flow and confirm every checklist row maps to a concrete pass/fail artifact reference.
+
+ ---
+
+ ## Portability Notes
+
+ - Keep report structure and checklist IDs stable so multiple clients can parse outputs consistently.
+ - State drift checks must remain independent of model/provider implementation details.
+
+ ---
+
 ## Anti-Patterns
 
 | Anti-Pattern | Correct Behavior |
@@ -2,6 +2,6 @@
 "id": "gate-correctness",
 "type": "executable",
 "invariant": "All required tests pass",
- "command": "cd ace-mcp-server && npm test --silent",
+ "command": "if [ -f ace-mcp-server/package.json ]; then cd ace-mcp-server && npm test --silent; elif [ -f package.json ]; then npm test --silent; else echo 'package.json not found for gate-correctness' && exit 1; fi",
 "evidence_requirement": "Command output snippet with exit code 0"
 }
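To exercise the updated gate command by hand, it can be run verbatim and judged on its exit code, matching the gate's evidence requirement:

```bash
# Run the gate-correctness command and surface its exit code.
bash -c "if [ -f ace-mcp-server/package.json ]; then cd ace-mcp-server && npm test --silent; elif [ -f package.json ]; then npm test --silent; else echo 'package.json not found for gate-correctness' && exit 1; fi"
echo "exit code: $?"
```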
package/dist/cli.js CHANGED
@@ -13,7 +13,8 @@ import { DiscoveryRepository } from "./store/repositories/discovery-repository.j
 import { getWorkspaceStorePath, readStoreBlobSync, readStoreJsonSync, } from "./store/store-snapshot.js";
 import { readFileSync } from "node:fs";
 import { runTui } from "./tui/index.js";
- import { buildOpenAiCompatibleBaseUrl, DEFAULT_LLAMA_CPP_MODEL, DEFAULT_OLLAMA_MODEL, discoverProviderContext, normalizeLocalBaseUrl, scanLocalModelRuntimes, } from "./tui/provider-discovery.js";
+ import { buildOpenAiCompatibleBaseUrl, buildProviderDoctorCommands, defaultModelForProvider, discoverProviderContext, isLocalLlmProvider, normalizeLocalBaseUrl, normalizeProvider, scanLocalModelRuntimes, } from "./tui/provider-discovery.js";
+ import { diagnoseChatRuntimeConfig, OpenAICompatibleClient, } from "./tui/openai-compatible.js";
 function printHelp() {
 console.log(`ACE Swarm CLI
 
@@ -23,7 +24,8 @@ Usage:
 ace tui [options] Launch interactive TUI dashboard
 ace init [options] Bootstrap the ACE store into current workspace
 ace turnkey [options] Project minimal workspace bootstrap stubs from the ACE store
- ace doctor [options] Validate local LLM + MCP readiness
+ ace doctor [options] Validate ACE runtime + MCP readiness
+ ace cache [options] Cache ACE artifacts into ace-state.ace and optionally clean projections
 ace mcp-config [options] Print global/client MCP config snippet(s) from store
 ace preconfig Write .mcp-config/ bundle for all supported clients to workspace root
 ace paths Show resolved package/workspace paths
@@ -40,18 +42,22 @@ Options for init:
 --force Overwrite scaffolded files if they already exist
 --mcp-config Also write .vscode/mcp.json workspace bridge
 --client-config-bundle Also write minimal workspace host stubs (AGENTS.md, CLAUDE.md, .cursorrules, .github/copilot-instructions.md)
- --llm <provider> ollama|llama.cpp
- --model <name> Model name for local provider
- --base-url <url> Local runtime base URL override
+ --llm <provider> ollama|llama.cpp|codex|claude|gemini|copilot|...
+ --model <name> Model name for the selected provider
+ --base-url <url> Runtime base URL override (local or OpenAI-compatible)
 --ollama-url <url> Legacy alias for --base-url
 
 Options for doctor:
- --llm <provider> ollama|llama.cpp (default: auto from ${ACE_ROOT_REL}/ace-state.ace)
+ --llm <provider> ollama|llama.cpp|codex|claude|gemini|copilot|... (default: auto from ${ACE_ROOT_REL}/ace-state.ace)
 --model <name> Model name override
- --base-url <url> Local runtime base URL override
+ --base-url <url> Runtime base URL override
 --ollama-url <url> Legacy alias for --base-url
 --scan Probe common local Ollama + llama.cpp endpoints when URL is unset
 
+ Options for cache:
+ --dry-run Preview what would be cached and cleaned (no writes/deletes)
+ --no-clean Keep workspace ACE artifacts after caching them into ace-state.ace
+
 Options for mcp-config:
 --client <name> codex|vscode|claude|cursor|antigravity
 --all Print all client snippets for optional global install
@@ -92,14 +98,14 @@ function readBaseUrlFlag(args) {
 }
 function parseLlmOptions(args) {
 const profile = readLlmProfile();
- const selected = readFlagValue(args, "--llm")?.trim().toLowerCase() ?? profile?.provider?.trim().toLowerCase();
+ const selected = normalizeProvider(readFlagValue(args, "--llm")?.trim() ?? profile?.provider?.trim());
 if (!selected)
 return {};
 if (!ALL_LLM_PROVIDERS.includes(selected)) {
 throw new Error(`Unsupported LLM provider: ${selected}`);
 }
 const provider = selected;
- const defaultModel = provider === "llama.cpp" ? DEFAULT_LLAMA_CPP_MODEL : DEFAULT_OLLAMA_MODEL;
+ const defaultModel = defaultModelForProvider(provider);
 const llmModel = readFlagValue(args, "--model")?.trim() || profile?.model || defaultModel;
 const llmBaseUrl = readBaseUrlFlag(args) ?? normalizeLocalBaseUrl(profile?.base_url);
 return {
@@ -123,30 +129,17 @@ async function writeLlmProfile(profile) {
 payload.default_api_key = "ollama";
 }
 }
- const doctorCommands = profile.provider === "ollama"
- ? [
- "ollama serve",
- `ollama pull ${profile.model}`,
- ...(profile.baseUrl ? [`curl -s ${profile.baseUrl}/api/tags`] : []),
- ]
- : [
- "# Start llama-server separately, for example:",
- "# llama-server -m /path/to/model.gguf --port 8080",
- ...(profile.baseUrl ? [`curl -s ${buildOpenAiCompatibleBaseUrl(profile.baseUrl)}/models`] : []),
- ];
+ const doctorCommands = buildProviderDoctorCommands(profile.provider, profile.model, profile.baseUrl);
 const store = await openStore(storePath);
 try {
 await store.setJSON("state/runtime/llm_profile", payload);
 await store.setBlob("state/runtime/doctor_checks.md", [
- `# ACE + ${profile.provider} Doctor Checks`,
+ `# ACE + ${profile.provider} Runtime Checks`,
 "",
- "Run these commands to verify local-model readiness:",
+ "Run these commands to verify ACE runtime readiness:",
 "",
 "```bash",
 ...doctorCommands,
- profile.baseUrl
- ? `ace doctor --llm ${profile.provider} --model ${profile.model} --base-url ${profile.baseUrl}`
- : `ace doctor --llm ${profile.provider} --model ${profile.model} --scan`,
 "```",
 "",
 ].join("\n"));
@@ -202,7 +195,7 @@ async function runInit(args, mode = "init") {
 includeClientConfigBundle,
 llm: llm.llmProvider ?? undefined,
 model: llm.llmModel ?? undefined,
- ollamaUrl: llm.llmBaseUrl ?? undefined,
+ baseUrl: llm.llmBaseUrl ?? undefined,
 });
 const astIndex = refreshAstgrepIndex({
 scope: ".",
@@ -237,7 +230,10 @@ async function runInit(args, mode = "init") {
 if (llm.llmProvider) {
 console.log(`LLM provider: ${llm.llmProvider}`);
 console.log(`Model: ${llm.llmModel}`);
- console.log(`Base URL: ${llm.llmBaseUrl ?? "(not set; run ace doctor --scan or pass --base-url)"}`);
+ console.log(`Base URL: ${llm.llmBaseUrl ??
+ (isLocalLlmProvider(llm.llmProvider)
+ ? "(not set; run ace doctor --scan or pass --base-url)"
+ : "(not set; provider defaults or env may still apply)")}`);
 console.log(`LLM profile path: ${storeResult.storePath}#state/runtime/llm_profile`);
 }
 if (storeResult.materialized.length > 0) {
@@ -324,7 +320,6 @@ async function listLocalRuntimeModels(provider, baseUrl) {
 async function runDoctor(args) {
 const llm = parseLlmOptions(args);
 const checks = [];
- const shouldScan = args.includes("--scan") || !llm.llmBaseUrl;
 const mcpConfigPaths = [wsPath(".vscode", "mcp.json")];
 const hasWorkspaceMcpConfig = mcpConfigPaths.some((path) => fileExists(path));
 const hasStoredMcpConfig = ALL_MCP_CLIENTS.some((client) => typeof readStoredMcpSnippet(client) === "string");
@@ -345,11 +340,12 @@ async function runDoctor(args) {
 ok: hasProfile,
 detail: hasProfile
 ? profilePath
- : `Missing ${ACE_ROOT_REL}/ace-state.ace#state/runtime/llm_profile. Run \`ace init --llm ollama\` or \`ace doctor --scan\`.`,
+ : `Missing ${ACE_ROOT_REL}/ace-state.ace#state/runtime/llm_profile. Run \`ace init --llm <provider>\` or \`ace doctor --scan\` for a local runtime.`,
 });
 let provider = llm.llmProvider;
 let model = llm.llmModel;
 let baseUrl = llm.llmBaseUrl;
+ const shouldScan = args.includes("--scan") || (!provider && !baseUrl) || (isLocalLlmProvider(provider) && !baseUrl);
 if (shouldScan) {
 const scanned = await scanLocalModelRuntimes({
 workspaceRoot: WORKSPACE_ROOT,
@@ -361,7 +357,7 @@ async function runDoctor(args) {
 if (chosen) {
 provider = chosen.provider ?? provider;
 baseUrl = baseUrl ?? chosen.baseUrl;
- model = model || chosen.models[0] || (provider === "llama.cpp" ? DEFAULT_LLAMA_CPP_MODEL : DEFAULT_OLLAMA_MODEL);
+ model = model || chosen.models[0] || defaultModelForProvider(provider);
 checks.push({
 name: "Runtime endpoint discovered",
 ok: true,
@@ -369,7 +365,7 @@ async function runDoctor(args) {
 });
 const writtenProfilePath = await writeLlmProfile({
 provider: provider ?? chosen.provider,
- model: model ?? (provider === "llama.cpp" ? DEFAULT_LLAMA_CPP_MODEL : DEFAULT_OLLAMA_MODEL),
+ model: model ?? defaultModelForProvider(provider ?? chosen.provider),
 baseUrl,
 });
 checks.push({
@@ -387,19 +383,19 @@ async function runDoctor(args) {
 }
 }
 if (!provider) {
- throw new Error(`No local runtime provider configured. Use --llm <provider>, bootstrap one into ${ACE_ROOT_REL}/ace-state.ace#state/runtime/llm_profile, or run \`ace doctor --scan\`.`);
+ throw new Error(`No runtime provider configured. Use --llm <provider>, bootstrap one into ${ACE_ROOT_REL}/ace-state.ace#state/runtime/llm_profile, or run \`ace doctor --scan\` for a local runtime.`);
 }
 if (!model) {
- model = provider === "llama.cpp" ? DEFAULT_LLAMA_CPP_MODEL : DEFAULT_OLLAMA_MODEL;
+ model = defaultModelForProvider(provider);
 }
- if (!baseUrl) {
+ if (isLocalLlmProvider(provider) && !baseUrl) {
 checks.push({
 name: "Base URL configured",
 ok: false,
 detail: "No local runtime base URL is configured. Pass --base-url or run `ace doctor --scan` to discover one.",
 });
 const failed = checks.filter((check) => !check.ok);
- console.log("ACE Doctor Report (local runtime)");
+ console.log("ACE Doctor Report");
 console.log(`Workspace: ${WORKSPACE_ROOT}`);
 console.log(`Provider: ${provider}`);
 console.log(`Model: ${model}`);
@@ -415,48 +411,94 @@ async function runDoctor(args) {
 return;
 }
 let modelNames = [];
- try {
- modelNames = await listLocalRuntimeModels(provider, baseUrl);
+ if (isLocalLlmProvider(provider)) {
+ try {
+ modelNames = await listLocalRuntimeModels(provider, baseUrl);
+ checks.push({
+ name: "Runtime endpoint reachable",
+ ok: true,
+ detail: provider === "ollama"
+ ? `${baseUrl}/api/tags responded with ${modelNames.length} models`
+ : `${buildOpenAiCompatibleBaseUrl(baseUrl)}/models responded with ${modelNames.length} models`,
+ });
+ }
+ catch (error) {
+ checks.push({
+ name: "Runtime endpoint reachable",
+ ok: false,
+ detail: `${baseUrl} request failed: ${error instanceof Error ? error.message : String(error)}`,
+ });
+ }
+ const hasModel = modelNames.includes(model);
+ await recordDiscoveryProfile({
+ provider,
+ baseUrl: baseUrl,
+ models: modelNames,
+ selectedModel: model,
+ available: modelNames.length > 0,
+ });
 checks.push({
- name: "Runtime endpoint reachable",
- ok: true,
+ name: provider === "ollama" ? "Requested model installed" : "Requested model served",
+ ok: hasModel,
 detail: provider === "ollama"
- ? `${baseUrl}/api/tags responded with ${modelNames.length} models`
- : `${buildOpenAiCompatibleBaseUrl(baseUrl)}/models responded with ${modelNames.length} models`,
+ ? hasModel
+ ? `${model} found`
+ : `${model} missing (run: ollama pull ${model})`
+ : hasModel
+ ? `${model} found`
+ : `${model} not reported by llama.cpp (available: ${modelNames.join(", ") || "none"})`,
 });
 }
- catch (error) {
+ else {
+ const diagnosis = diagnoseChatRuntimeConfig(provider, model, baseUrl ? { baseUrl } : undefined);
 checks.push({
- name: "Runtime endpoint reachable",
- ok: false,
- detail: `${baseUrl} request failed: ${error instanceof Error ? error.message : String(error)}`,
+ name: "Runtime config ready",
+ ok: diagnosis.ok,
+ detail: diagnosis.message,
 });
+ if (diagnosis.ok) {
+ try {
+ const client = new OpenAICompatibleClient({
+ providerConfigs: baseUrl ? { [provider]: { baseUrl } } : undefined,
+ });
+ modelNames = await client.listModels(provider);
+ checks.push({
+ name: "Provider endpoint reachable",
+ ok: true,
+ detail: `${provider} /models responded with ${modelNames.length} models`,
+ });
+ }
+ catch (error) {
+ checks.push({
+ name: "Provider endpoint reachable",
+ ok: false,
+ detail: `${provider} model listing failed: ${error instanceof Error ? error.message : String(error)}`,
+ });
+ }
+ if (modelNames.length > 0) {
+ const requestedModel = provider === "copilot" ? model.replace(/^copilot\//, "") : model;
+ const hasModel = modelNames.includes(requestedModel);
+ checks.push({
+ name: "Requested model available",
+ ok: hasModel,
+ detail: hasModel
+ ? `${requestedModel} found`
+ : `${requestedModel} not reported by ${provider} (available: ${modelNames.join(", ") || "none"})`,
+ });
+ }
+ }
 }
- const hasModel = modelNames.includes(model);
- await recordDiscoveryProfile({
- provider,
- baseUrl,
- models: modelNames,
- selectedModel: model,
- available: modelNames.length > 0,
- });
- checks.push({
- name: provider === "ollama" ? "Requested model installed" : "Requested model served",
- ok: hasModel,
- detail: provider === "ollama"
- ? hasModel
- ? `${model} found`
- : `${model} missing (run: ollama pull ${model})`
- : hasModel
- ? `${model} found`
- : `${model} not reported by llama.cpp (available: ${modelNames.join(", ") || "none"})`,
- });
 const failed = checks.filter((check) => !check.ok);
- console.log("ACE Doctor Report (local runtime)");
+ console.log("ACE Doctor Report");
 console.log(`Workspace: ${WORKSPACE_ROOT}`);
 console.log(`Provider: ${provider}`);
 console.log(`Model: ${model}`);
- console.log(`Base URL: ${baseUrl}`);
+ if (isLocalLlmProvider(provider)) {
+ console.log(`Base URL: ${baseUrl}`);
+ }
+ else if (baseUrl) {
+ console.log(`Base URL override: ${baseUrl}`);
+ }
 console.log("");
 for (const check of checks) {
 const status = check.ok ? "PASS" : "FAIL";
@@ -643,6 +685,26 @@ async function main() {
 console.log(`Store compacted: ${result.before} → ${result.after} bytes (saved ${savedKb}KB)`);
 return;
 }
+ if (command === "cache") {
+ const { cacheWorkspaceArtifacts } = await import("./store/cache-workspace.js");
+ const dryRun = args.includes("--dry-run");
+ const clean = !args.includes("--no-clean");
+ const result = await cacheWorkspaceArtifacts(WORKSPACE_ROOT, { dryRun, clean });
+ console.log("ACE cache complete");
+ console.log(`Workspace: ${WORKSPACE_ROOT}`);
+ console.log(`Store: ${result.storePath}`);
+ console.log(`Scanned files: ${result.scanned_files}`);
+ console.log(`Cached files: ${result.cached_files}`);
+ console.log(`Kept projected files: ${result.kept_projected_files}`);
+ console.log(`Skipped files: ${result.skipped_files}`);
+ console.log(`Removed files: ${result.removed_files}`);
+ if (result.warnings.length > 0) {
+ console.warn(`Warnings (${result.warnings.length}):`);
+ for (const warning of result.warnings)
+ console.warn(` ! ${warning}`);
+ }
+ return;
+ }
 if (command === "help" || command === "--help" || command === "-h") {
 printHelp();
 return;
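A minimal usage sketch for the new `ace cache` command, using only the flags documented in the help text above:

```bash
# Preview what would be cached and cleaned, then perform the real run.
ace cache --dry-run
ace cache
```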
package/dist/helpers.d.ts CHANGED
@@ -22,7 +22,7 @@ export declare const ACE_HOST_CLAUDE_REL = "CLAUDE.md";
 export declare const ACE_HOST_CURSOR_RULES_REL = ".cursorrules";
 export declare const ALL_MCP_CLIENTS: readonly ["codex", "vscode", "claude", "cursor", "antigravity"];
 export type McpClient = (typeof ALL_MCP_CLIENTS)[number];
- export declare const ALL_LLM_PROVIDERS: readonly ["ollama", "llama.cpp"];
+ export declare const ALL_LLM_PROVIDERS: readonly ["ollama", "llama.cpp", "codex", "claude", "gemini", "copilot"];
 export type LlmProvider = (typeof ALL_LLM_PROVIDERS)[number];
 export declare const ALL_AGENTS: readonly ["orchestrator", "vos", "ui", "coders", "astgrep", "skeptic", "ops", "research", "spec", "builder", "qa", "docs", "memory", "security", "observability", "eval", "release"];
 export type AgentRole = (typeof ALL_AGENTS)[number];
@@ -66,6 +66,7 @@ export interface BootstrapResult {
 skipped: string[];
 }
 export declare function mapAceWorkspaceRelativePath(path: string): string;
+ export declare function isProjectedAceWorkspacePath(filePath: string): boolean;
 export declare function acePath(...segments: string[]): string;
 export declare function wsPath(...segments: string[]): string;
 /** Normalize a path for validation: workspace-relative, forward slashes. */
@@ -79,6 +80,7 @@ export declare function fileExists(filePath: string): boolean;
 export declare function resolveWorkspaceWritePath(filePath: string): string;
 export declare function resolveWorkspaceReadPath(filePath: string): string;
 export declare function resolveWorkspaceArtifactPath(filePath: string, mode?: "read" | "write"): string;
+ export declare function resolveStoreFallbackKeysForPath(filePath: string): string[];
 export declare function classifyPathSource(path?: string): ArtifactSource;
 /** Safely read a workspace file, returning either text or a not-found/access message. */
 export declare function safeRead(filePath: string): string;