@voybio/ace-swarm 0.2.3 → 0.2.4

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -2,24 +2,25 @@
 
 Autonomous Coding Entity (ACE) is a local MCP server and CLI for agent-assisted coding.
 
- `ace-swarm` provides Claude Code, Codex, Cursor, VS Code, and local-model workflows with a shared runtime: durable workspace state, typed handoffs, scheduler support, change intelligence, and operator tooling. All state lives in a single binary file — `.agents/ACE/ace-state.ace` — so the workspace keeps one source of truth instead of a forest of generated state files.
+ `ace-swarm` provides Claude Code, Codex, Cursor, VS Code, and provider-backed workflows with a shared runtime: durable workspace state, typed handoffs, scheduler support, change intelligence, and operator tooling. All state lives in a single binary file — `.agents/ACE/ace-state.ace` — so the workspace keeps one source of truth instead of a forest of generated state files.
 
- Modern coding agents can already read files, write code, run commands, and call tools. The missing layer is coordination that survives restarts, model switches, and multi-agent work. ACE keeps the workflow in the store and projects only the files external clients still need. For local models, ACE provides task context, tool surfaces, status events, evidence trails, and resumable state.
+ Modern coding agents can already read files, write code, run commands, and call tools. The missing layer is coordination that survives restarts, model switches, and multi-agent work. ACE keeps the workflow in the store and projects only the files external clients still need. For provider-backed runs, ACE provides task context, tool surfaces, status events, evidence trails, and resumable state.
 
 ## ACE Manifesto
 
 - Agent work should survive restarts, handoffs, and review.
 - Handoffs should be typed, evidence-linked, and validated.
 - Shared workflow state should live in the workspace, not only in chat history.
- - Local models should get the same structure and observability as hosted models.
+ - ACE-Orchestrator is the default entrypoint, and every provider can delegate through the full agent set.
+ - Provider-backed runs should get the same structure and observability.
 - Bootstrap should be contained and reversible: ACE writes into `.agents/ACE/ace-state.ace`.
 
- ### Serving local models
+ ### Serving model providers
 
- ACE provides local models with the context they cannot infer on their own: task state, tool surfaces, status events, evidence trails, and the next move in the loop.
+ ACE provides model providers with the context they cannot infer on their own: task state, tool surfaces, status events, evidence trails, and the next move in the loop.
 
- - `ace init --llm ollama` or `ace init --llm llama.cpp` records a local runtime profile in the store.
- - `ace doctor` discovers the endpoint, checks the selected model, and keeps the runtime honest.
+ - `ace init --llm <provider>` records the selected runtime profile in the store.
+ - `ace doctor` validates local or hosted runtime wiring, checks the selected model when the provider exposes listings, and keeps the runtime honest.
 - `ace mcp` and `ace tui` expose the same workspace truth to the host, the terminal, and the model.
 
 ## Why This Exists Now
@@ -28,7 +29,7 @@ Frontier models are getting more capable and more restricted at the same time. P
 
 - Claude Code proved the terminal-native agent loop.
 - OpenAI connectors and MCP widened the tool surface.
- - Ollama and llama.cpp keep local models in play.
+ - Ollama and llama.cpp keep local runtimes in play.
 - MCP remains the common protocol surface across clients and runtimes.
 
 ACE sits underneath that stack. It does not compete with the host you already use. It provides that host with a durable local runtime.
@@ -128,6 +129,15 @@ ace mcp
 
 If you already have a local runtime running, `ace doctor --scan` will probe common Ollama and llama.cpp endpoints and write the selected profile back into the store.
 
+ Hosted provider path with Codex:
+
+ ```bash
+ npx -y ace-swarm turnkey --project "My Project" --llm codex --model gpt-5
+ export OPENAI_API_KEY=...
+ ace doctor --llm codex --model gpt-5
+ ace mcp
+ ```
+
 ## What Bootstrap Writes
 
 ACE bootstrap is intentionally contained. The store is authoritative at `.agents/ACE/ace-state.ace`; only a few host-facing files land in the workspace when requested.
@@ -205,8 +215,8 @@ In that setup, the host remains the interface. ACE becomes the state contract un
 
 Local models are good at generating text. They usually need help seeing the workspace, hearing the handoff trail, and remembering what changed two turns ago. ACE closes that gap.
 
- - `ace init --llm ollama` or `ace init --llm llama.cpp` seeds the profile inside `.agents/ACE/ace-state.ace`.
- - `ace doctor` verifies that the selected runtime is reachable and configured.
+ - `ace init --llm <provider>` seeds the profile inside `.agents/ACE/ace-state.ace`.
+ - `ace doctor` verifies that the selected runtime is configured and, when possible, reachable.
 - `ace tui` can talk to either local runtime and surface the same workspace state the MCP server sees.
 - The ACE model bridge gives local runs tool selection, execution flow, and persisted state instead of a fragile one-shot chat loop.
 
@@ -236,7 +246,7 @@ If you want local agents with memory, evidence, and handoff discipline, this is
 | `ace turnkey [options]` | project the store-backed host surface |
 | `ace mcp` / `ace serve` | start the MCP server over stdio |
 | `ace tui [options]` | open the terminal operator surface |
- | `ace doctor [options]` | validate local-model readiness |
+ | `ace doctor [options]` | validate ACE runtime readiness |
 | `ace mcp-config [--client <name>|--all]` | print baked client snippets for optional install |
 | `ace paths` | show resolved package and workspace paths |
 +---------------------------------------------+---------------------------------------------------+
@@ -1,9 +1,15 @@
 ---
 name: ACE-Init
 description: Bootstrap-only initializer that materializes ACE state, validates readiness, and hands control to ACE-Orchestrator.
- target: vscode, codex
+ target: vscode, codex, claude, cursor, antigravity, copilot, gemini
 tools:
 - vscode
+ - codex
+ - claude
+ - cursor
+ - antigravity
+ - copilot
+ - gemini
 - execute
 - read
 - edit
@@ -1,11 +1,14 @@
 ---
 name: ACE-Orchestrator
 description: Global flow-orchestration contract for routing venture, UX, and engineering work across swarm and composable agents.
- target: vscode, codex
+ target: vscode, codex, claude, cursor, antigravity, copilot, gemini
 tools:
 - vscode
 - codex
 - claude
+ - cursor
+ - antigravity
+ - copilot
 - gemini
 - execute
 - read
package/dist/cli.js CHANGED
@@ -13,7 +13,8 @@ import { DiscoveryRepository } from "./store/repositories/discovery-repository.j
 import { getWorkspaceStorePath, readStoreBlobSync, readStoreJsonSync, } from "./store/store-snapshot.js";
 import { readFileSync } from "node:fs";
 import { runTui } from "./tui/index.js";
- import { buildOpenAiCompatibleBaseUrl, DEFAULT_LLAMA_CPP_MODEL, DEFAULT_OLLAMA_MODEL, discoverProviderContext, normalizeLocalBaseUrl, scanLocalModelRuntimes, } from "./tui/provider-discovery.js";
+ import { buildOpenAiCompatibleBaseUrl, buildProviderDoctorCommands, defaultModelForProvider, discoverProviderContext, isLocalLlmProvider, normalizeLocalBaseUrl, normalizeProvider, scanLocalModelRuntimes, } from "./tui/provider-discovery.js";
+ import { diagnoseChatRuntimeConfig, OpenAICompatibleClient, } from "./tui/openai-compatible.js";
 function printHelp() {
 console.log(`ACE Swarm CLI
 
@@ -23,7 +24,7 @@ Usage:
 ace tui [options] Launch interactive TUI dashboard
 ace init [options] Bootstrap the ACE store into current workspace
 ace turnkey [options] Project minimal workspace bootstrap stubs from the ACE store
- ace doctor [options] Validate local LLM + MCP readiness
+ ace doctor [options] Validate ACE runtime + MCP readiness
 ace mcp-config [options] Print global/client MCP config snippet(s) from store
 ace preconfig Write .mcp-config/ bundle for all supported clients to workspace root
 ace paths Show resolved package/workspace paths
@@ -40,15 +41,15 @@ Options for init:
 --force Overwrite scaffolded files if they already exist
 --mcp-config Also write .vscode/mcp.json workspace bridge
 --client-config-bundle Also write minimal workspace host stubs (AGENTS.md, CLAUDE.md, .cursorrules, .github/copilot-instructions.md)
- --llm <provider> ollama|llama.cpp
- --model <name> Model name for local provider
- --base-url <url> Local runtime base URL override
+ --llm <provider> ollama|llama.cpp|codex|claude|gemini|copilot|...
+ --model <name> Model name for the selected provider
+ --base-url <url> Runtime base URL override (local or OpenAI-compatible)
 --ollama-url <url> Legacy alias for --base-url
 
 Options for doctor:
- --llm <provider> ollama|llama.cpp (default: auto from ${ACE_ROOT_REL}/ace-state.ace)
+ --llm <provider> ollama|llama.cpp|codex|claude|gemini|copilot|... (default: auto from ${ACE_ROOT_REL}/ace-state.ace)
 --model <name> Model name override
- --base-url <url> Local runtime base URL override
+ --base-url <url> Runtime base URL override
 --ollama-url <url> Legacy alias for --base-url
 --scan Probe common local Ollama + llama.cpp endpoints when URL is unset
 
@@ -92,14 +93,14 @@ function readBaseUrlFlag(args) {
 }
 function parseLlmOptions(args) {
 const profile = readLlmProfile();
- const selected = readFlagValue(args, "--llm")?.trim().toLowerCase() ?? profile?.provider?.trim().toLowerCase();
+ const selected = normalizeProvider(readFlagValue(args, "--llm")?.trim() ?? profile?.provider?.trim());
 if (!selected)
 return {};
 if (!ALL_LLM_PROVIDERS.includes(selected)) {
 throw new Error(`Unsupported LLM provider: ${selected}`);
 }
 const provider = selected;
- const defaultModel = provider === "llama.cpp" ? DEFAULT_LLAMA_CPP_MODEL : DEFAULT_OLLAMA_MODEL;
+ const defaultModel = defaultModelForProvider(provider);
 const llmModel = readFlagValue(args, "--model")?.trim() || profile?.model || defaultModel;
 const llmBaseUrl = readBaseUrlFlag(args) ?? normalizeLocalBaseUrl(profile?.base_url);
 return {
@@ -123,30 +124,17 @@ async function writeLlmProfile(profile) {
 payload.default_api_key = "ollama";
 }
 }
- const doctorCommands = profile.provider === "ollama"
- ? [
- "ollama serve",
- `ollama pull ${profile.model}`,
- ...(profile.baseUrl ? [`curl -s ${profile.baseUrl}/api/tags`] : []),
- ]
- : [
- "# Start llama-server separately, for example:",
- "# llama-server -m /path/to/model.gguf --port 8080",
- ...(profile.baseUrl ? [`curl -s ${buildOpenAiCompatibleBaseUrl(profile.baseUrl)}/models`] : []),
- ];
+ const doctorCommands = buildProviderDoctorCommands(profile.provider, profile.model, profile.baseUrl);
 const store = await openStore(storePath);
 try {
 await store.setJSON("state/runtime/llm_profile", payload);
 await store.setBlob("state/runtime/doctor_checks.md", [
- `# ACE + ${profile.provider} Doctor Checks`,
+ `# ACE + ${profile.provider} Runtime Checks`,
 "",
- "Run these commands to verify local-model readiness:",
+ "Run these commands to verify ACE runtime readiness:",
 "",
 "```bash",
 ...doctorCommands,
- profile.baseUrl
- ? `ace doctor --llm ${profile.provider} --model ${profile.model} --base-url ${profile.baseUrl}`
- : `ace doctor --llm ${profile.provider} --model ${profile.model} --scan`,
 "```",
 "",
 ].join("\n"));
@@ -202,7 +190,7 @@ async function runInit(args, mode = "init") {
 includeClientConfigBundle,
 llm: llm.llmProvider ?? undefined,
 model: llm.llmModel ?? undefined,
- ollamaUrl: llm.llmBaseUrl ?? undefined,
+ baseUrl: llm.llmBaseUrl ?? undefined,
 });
 const astIndex = refreshAstgrepIndex({
 scope: ".",
@@ -237,7 +225,10 @@ async function runInit(args, mode = "init") {
 if (llm.llmProvider) {
 console.log(`LLM provider: ${llm.llmProvider}`);
 console.log(`Model: ${llm.llmModel}`);
- console.log(`Base URL: ${llm.llmBaseUrl ?? "(not set; run ace doctor --scan or pass --base-url)"}`);
+ console.log(`Base URL: ${llm.llmBaseUrl ??
+ (isLocalLlmProvider(llm.llmProvider)
+ ? "(not set; run ace doctor --scan or pass --base-url)"
+ : "(not set; provider defaults or env may still apply)")}`);
 console.log(`LLM profile path: ${storeResult.storePath}#state/runtime/llm_profile`);
 }
 if (storeResult.materialized.length > 0) {
@@ -324,7 +315,6 @@ async function listLocalRuntimeModels(provider, baseUrl) {
 async function runDoctor(args) {
 const llm = parseLlmOptions(args);
 const checks = [];
- const shouldScan = args.includes("--scan") || !llm.llmBaseUrl;
 const mcpConfigPaths = [wsPath(".vscode", "mcp.json")];
 const hasWorkspaceMcpConfig = mcpConfigPaths.some((path) => fileExists(path));
 const hasStoredMcpConfig = ALL_MCP_CLIENTS.some((client) => typeof readStoredMcpSnippet(client) === "string");
@@ -345,11 +335,12 @@ async function runDoctor(args) {
 ok: hasProfile,
 detail: hasProfile
 ? profilePath
- : `Missing ${ACE_ROOT_REL}/ace-state.ace#state/runtime/llm_profile. Run \`ace init --llm ollama\` or \`ace doctor --scan\`.`,
+ : `Missing ${ACE_ROOT_REL}/ace-state.ace#state/runtime/llm_profile. Run \`ace init --llm <provider>\` or \`ace doctor --scan\` for a local runtime.`,
 });
 let provider = llm.llmProvider;
 let model = llm.llmModel;
 let baseUrl = llm.llmBaseUrl;
+ const shouldScan = args.includes("--scan") || (!provider && !baseUrl) || (isLocalLlmProvider(provider) && !baseUrl);
 if (shouldScan) {
 const scanned = await scanLocalModelRuntimes({
 workspaceRoot: WORKSPACE_ROOT,
@@ -361,7 +352,7 @@ async function runDoctor(args) {
 if (chosen) {
 provider = chosen.provider ?? provider;
 baseUrl = baseUrl ?? chosen.baseUrl;
- model = model || chosen.models[0] || (provider === "llama.cpp" ? DEFAULT_LLAMA_CPP_MODEL : DEFAULT_OLLAMA_MODEL);
+ model = model || chosen.models[0] || defaultModelForProvider(provider);
 checks.push({
 name: "Runtime endpoint discovered",
 ok: true,
@@ -369,7 +360,7 @@ async function runDoctor(args) {
 });
 const writtenProfilePath = await writeLlmProfile({
 provider: provider ?? chosen.provider,
- model: model ?? (provider === "llama.cpp" ? DEFAULT_LLAMA_CPP_MODEL : DEFAULT_OLLAMA_MODEL),
+ model: model ?? defaultModelForProvider(provider ?? chosen.provider),
 baseUrl,
 });
 checks.push({
@@ -387,19 +378,19 @@ async function runDoctor(args) {
 }
 }
 if (!provider) {
- throw new Error(`No local runtime provider configured. Use --llm <provider>, bootstrap one into ${ACE_ROOT_REL}/ace-state.ace#state/runtime/llm_profile, or run \`ace doctor --scan\`.`);
+ throw new Error(`No runtime provider configured. Use --llm <provider>, bootstrap one into ${ACE_ROOT_REL}/ace-state.ace#state/runtime/llm_profile, or run \`ace doctor --scan\` for a local runtime.`);
 }
 if (!model) {
- model = provider === "llama.cpp" ? DEFAULT_LLAMA_CPP_MODEL : DEFAULT_OLLAMA_MODEL;
+ model = defaultModelForProvider(provider);
 }
- if (!baseUrl) {
+ if (isLocalLlmProvider(provider) && !baseUrl) {
 checks.push({
 name: "Base URL configured",
 ok: false,
 detail: "No local runtime base URL is configured. Pass --base-url or run `ace doctor --scan` to discover one.",
 });
 const failed = checks.filter((check) => !check.ok);
- console.log("ACE Doctor Report (local runtime)");
+ console.log("ACE Doctor Report");
 console.log(`Workspace: ${WORKSPACE_ROOT}`);
 console.log(`Provider: ${provider}`);
 console.log(`Model: ${model}`);
@@ -415,48 +406,94 @@ async function runDoctor(args) {
 return;
 }
 let modelNames = [];
- try {
- modelNames = await listLocalRuntimeModels(provider, baseUrl);
+ if (isLocalLlmProvider(provider)) {
+ try {
+ modelNames = await listLocalRuntimeModels(provider, baseUrl);
+ checks.push({
+ name: "Runtime endpoint reachable",
+ ok: true,
+ detail: provider === "ollama"
+ ? `${baseUrl}/api/tags responded with ${modelNames.length} models`
+ : `${buildOpenAiCompatibleBaseUrl(baseUrl)}/models responded with ${modelNames.length} models`,
+ });
+ }
+ catch (error) {
+ checks.push({
+ name: "Runtime endpoint reachable",
+ ok: false,
+ detail: `${baseUrl} request failed: ${error instanceof Error ? error.message : String(error)}`,
+ });
+ }
+ const hasModel = modelNames.includes(model);
+ await recordDiscoveryProfile({
+ provider,
+ baseUrl: baseUrl,
+ models: modelNames,
+ selectedModel: model,
+ available: modelNames.length > 0,
+ });
 checks.push({
- name: "Runtime endpoint reachable",
- ok: true,
+ name: provider === "ollama" ? "Requested model installed" : "Requested model served",
+ ok: hasModel,
 detail: provider === "ollama"
- ? `${baseUrl}/api/tags responded with ${modelNames.length} models`
- : `${buildOpenAiCompatibleBaseUrl(baseUrl)}/models responded with ${modelNames.length} models`,
+ ? hasModel
+ ? `${model} found`
+ : `${model} missing (run: ollama pull ${model})`
+ : hasModel
+ ? `${model} found`
+ : `${model} not reported by llama.cpp (available: ${modelNames.join(", ") || "none"})`,
 });
 }
- catch (error) {
+ else {
+ const diagnosis = diagnoseChatRuntimeConfig(provider, model, baseUrl ? { baseUrl } : undefined);
 checks.push({
- name: "Runtime endpoint reachable",
- ok: false,
- detail: `${baseUrl} request failed: ${error instanceof Error ? error.message : String(error)}`,
+ name: "Runtime config ready",
+ ok: diagnosis.ok,
+ detail: diagnosis.message,
 });
+ if (diagnosis.ok) {
+ try {
+ const client = new OpenAICompatibleClient({
+ providerConfigs: baseUrl ? { [provider]: { baseUrl } } : undefined,
+ });
+ modelNames = await client.listModels(provider);
+ checks.push({
+ name: "Provider endpoint reachable",
+ ok: true,
+ detail: `${provider} /models responded with ${modelNames.length} models`,
+ });
+ }
+ catch (error) {
+ checks.push({
+ name: "Provider endpoint reachable",
+ ok: false,
+ detail: `${provider} model listing failed: ${error instanceof Error ? error.message : String(error)}`,
+ });
+ }
+ if (modelNames.length > 0) {
+ const requestedModel = provider === "copilot" ? model.replace(/^copilot\//, "") : model;
+ const hasModel = modelNames.includes(requestedModel);
+ checks.push({
+ name: "Requested model available",
+ ok: hasModel,
+ detail: hasModel
+ ? `${requestedModel} found`
+ : `${requestedModel} not reported by ${provider} (available: ${modelNames.join(", ") || "none"})`,
+ });
+ }
+ }
 }
- const hasModel = modelNames.includes(model);
- await recordDiscoveryProfile({
- provider,
- baseUrl,
- models: modelNames,
- selectedModel: model,
- available: modelNames.length > 0,
- });
- checks.push({
- name: provider === "ollama" ? "Requested model installed" : "Requested model served",
- ok: hasModel,
- detail: provider === "ollama"
- ? hasModel
- ? `${model} found`
- : `${model} missing (run: ollama pull ${model})`
- : hasModel
- ? `${model} found`
- : `${model} not reported by llama.cpp (available: ${modelNames.join(", ") || "none"})`,
- });
 const failed = checks.filter((check) => !check.ok);
- console.log("ACE Doctor Report (local runtime)");
+ console.log("ACE Doctor Report");
 console.log(`Workspace: ${WORKSPACE_ROOT}`);
 console.log(`Provider: ${provider}`);
 console.log(`Model: ${model}`);
- console.log(`Base URL: ${baseUrl}`);
+ if (isLocalLlmProvider(provider)) {
+ console.log(`Base URL: ${baseUrl}`);
+ }
+ else if (baseUrl) {
+ console.log(`Base URL override: ${baseUrl}`);
+ }
 console.log("");
 for (const check of checks) {
 const status = check.ok ? "PASS" : "FAIL";
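
For orientation, a minimal runnable sketch of the new scan predicate in `runDoctor` (logic copied from the hunks above; `isLocalLlmProvider` is inlined here so the snippet stands alone):

```js
// Probe local endpoints only when asked, when nothing is configured yet,
// or when a local provider is missing its endpoint.
const isLocalLlmProvider = (p) => p === "ollama" || p === "llama.cpp";

function shouldScan(args, provider, baseUrl) {
    return args.includes("--scan")
        || (!provider && !baseUrl)
        || (isLocalLlmProvider(provider) && !baseUrl);
}

console.log(shouldScan([], undefined, undefined)); // true  — nothing configured yet
console.log(shouldScan([], "ollama", undefined));  // true  — local provider, no endpoint
console.log(shouldScan([], "codex", undefined));   // false — hosted providers skip the probe
console.log(shouldScan(["--scan"], "codex", undefined)); // true — explicit flag still wins
```
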
package/dist/helpers.d.ts CHANGED
@@ -22,7 +22,7 @@ export declare const ACE_HOST_CLAUDE_REL = "CLAUDE.md";
 export declare const ACE_HOST_CURSOR_RULES_REL = ".cursorrules";
 export declare const ALL_MCP_CLIENTS: readonly ["codex", "vscode", "claude", "cursor", "antigravity"];
 export type McpClient = (typeof ALL_MCP_CLIENTS)[number];
- export declare const ALL_LLM_PROVIDERS: readonly ["ollama", "llama.cpp"];
+ export declare const ALL_LLM_PROVIDERS: readonly ["ollama", "llama.cpp", "codex", "claude", "gemini", "copilot"];
 export type LlmProvider = (typeof ALL_LLM_PROVIDERS)[number];
 export declare const ALL_AGENTS: readonly ["orchestrator", "vos", "ui", "coders", "astgrep", "skeptic", "ops", "research", "spec", "builder", "qa", "docs", "memory", "security", "observability", "eval", "release"];
 export type AgentRole = (typeof ALL_AGENTS)[number];
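
A quick sketch of how the widened provider tuple gates `--llm` validation (mirrors `parseLlmOptions` in `dist/cli.js` above; the array is inlined from the declaration so the snippet runs standalone):

```js
const ALL_LLM_PROVIDERS = ["ollama", "llama.cpp", "codex", "claude", "gemini", "copilot"];

function assertProvider(selected) {
    if (selected && !ALL_LLM_PROVIDERS.includes(selected)) {
        throw new Error(`Unsupported LLM provider: ${selected}`);
    }
    return selected;
}

assertProvider("codex");      // accepted as of 0.2.4
// assertProvider("mistral"); // would throw: Unsupported LLM provider: mistral
```
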
package/dist/helpers.js CHANGED
@@ -9,7 +9,7 @@ import { buildHostInstructionText } from "./ace-server-instructions.js";
 import { isInside, normalizeRelPath } from "./shared.js";
 import { getWorkspaceStorePath, listStoreKeysSync, parseVirtualStorePath, readStoreBlobSync, readVirtualStorePathSync, toVirtualStorePath, } from "./store/store-snapshot.js";
 import { isOperationalArtifactPath, operationalArtifactKey, writeStoreBlobSync, writeOperationalArtifactSync, } from "./store/store-artifacts.js";
- import { buildOpenAiCompatibleBaseUrl, DEFAULT_LLAMA_CPP_MODEL, DEFAULT_OLLAMA_MODEL, normalizeLocalBaseUrl, } from "./tui/provider-discovery.js";
+ import { buildProviderDoctorCommands, buildOpenAiCompatibleBaseUrl, defaultModelForProvider, normalizeLocalBaseUrl, normalizeProvider, } from "./tui/provider-discovery.js";
 export { isInside, isReadError, normalizeRelPath, looksLikeSwarmHandoffPath } from "./shared.js";
 const __filename = fileURLToPath(import.meta.url);
 const __dirname = dirname(__filename);
@@ -128,7 +128,14 @@ export const ALL_MCP_CLIENTS = [
 "cursor",
 "antigravity",
 ];
- export const ALL_LLM_PROVIDERS = ["ollama", "llama.cpp"];
+ export const ALL_LLM_PROVIDERS = [
+ "ollama",
+ "llama.cpp",
+ "codex",
+ "claude",
+ "gemini",
+ "copilot",
+ ];
 export const ALL_AGENTS = [
 "orchestrator",
 "vos",
@@ -1325,10 +1332,11 @@ export function bootstrapAceWorkspace(options = {}) {
 const projectName = options.projectName?.trim() || "ACE Project";
 const includeMcpConfig = options.includeMcpConfig ?? true;
 const includeClientConfigBundle = options.includeClientConfigBundle ?? true;
- const llmProvider = options.llmProvider;
+ const normalizedProvider = normalizeProvider(options.llmProvider);
+ const llmProvider = normalizedProvider;
 const llmModel = options.llmModel?.trim() ||
 options.ollamaModel?.trim() ||
- (llmProvider === "llama.cpp" ? DEFAULT_LLAMA_CPP_MODEL : DEFAULT_OLLAMA_MODEL);
+ defaultModelForProvider(llmProvider);
 const llmBaseUrl = normalizeLocalBaseUrl(options.llmBaseUrl ?? options.ollamaBaseUrl);
 if (llmProvider && !ALL_LLM_PROVIDERS.includes(llmProvider)) {
 throw new Error(`Unsupported llmProvider: ${llmProvider}`);
@@ -1654,12 +1662,14 @@ export function bootstrapAceWorkspace(options = {}) {
 ...(llmProvider
 ? [
 "",
- "## Local Model Profile",
+ "## LLM Runtime Profile",
 `- \`${ACE_LLM_ROOT_REL}/llm-profile.json\` (generated for ${llmProvider})`,
- `- \`${ACE_LLM_ROOT_REL}/doctor-checks.md\` (commands to verify local runtime)`,
+ `- \`${ACE_LLM_ROOT_REL}/doctor-checks.md\` (commands to verify provider runtime)`,
 llmBaseUrl
 ? `- Run \`ace doctor --llm ${llmProvider} --model ${llmModel} --base-url ${llmBaseUrl}\``
- : `- Run \`ace doctor --llm ${llmProvider} --model ${llmModel} --scan\` to discover a local endpoint`,
+ : llmProvider === "ollama" || llmProvider === "llama.cpp"
+ ? `- Run \`ace doctor --llm ${llmProvider} --model ${llmModel} --scan\` to discover a local endpoint`
+ : `- Run \`ace doctor --llm ${llmProvider} --model ${llmModel}\` after exporting the provider credentials`,
 ]
 : []),
 ].join("\n"), result, force);
@@ -1678,27 +1688,14 @@ export function bootstrapAceWorkspace(options = {}) {
 }
 }
 writeIfMissingOrForced(wsPath(".ace", "llm-profile.json"), JSON.stringify(profilePayload, null, 2), result, force);
- const doctorCommands = llmProvider === "ollama"
- ? [
- "ollama serve",
- `ollama pull ${llmModel}`,
- ...(llmBaseUrl ? [`curl -s ${llmBaseUrl}/api/tags`] : []),
- ]
- : [
- "# Start llama-server separately, for example:",
- "# llama-server -m /path/to/model.gguf --port 8080",
- ...(llmBaseUrl ? [`curl -s ${buildOpenAiCompatibleBaseUrl(llmBaseUrl)}/models`] : []),
- ];
+ const doctorCommands = buildProviderDoctorCommands(llmProvider, llmModel, llmBaseUrl);
 writeIfMissingOrForced(wsPath(".ace", "doctor-checks.md"), [
- `# ACE + ${llmProvider} Doctor Checks`,
+ `# ACE + ${llmProvider} Runtime Checks`,
 "",
- "Run these commands to verify local-model readiness:",
+ "Run these commands to verify ACE runtime readiness:",
 "",
 "```bash",
 ...doctorCommands,
- llmBaseUrl
- ? `ace doctor --llm ${llmProvider} --model ${llmModel} --base-url ${llmBaseUrl}`
- : `ace doctor --llm ${llmProvider} --model ${llmModel} --scan`,
 "```",
 "",
 `MCP is still configured separately via \`${ACE_VSCODE_ROOT_REL}/mcp.json\` or \`${ACE_BUNDLE_ROOT_REL}/*\`.`,
@@ -1,6 +1,5 @@
- import { existsSync, readFileSync } from "node:fs";
 import { resolve } from "node:path";
- import { ROLE_ENUM, scoreDomains } from "./shared.js";
+ import { ROLE_ENUM } from "./shared.js";
 import { resolveWorkspaceRoot } from "./helpers.js";
 import { executeAceInternalTool } from "./ace-internal-tools.js";
 import { ModelBridge } from "./model-bridge.js";
@@ -28,61 +27,17 @@ function normalizeRoleCandidate(input) {
 ? candidate
 : undefined;
 }
- function parseRoleFromRoutingSummary(summary) {
- const match = summary.match(/\*\*Primary Swarm Agent\(s\):\*\*\s*([^\n]+)/i);
- if (!match?.[1])
- return undefined;
- const firstAgent = match[1]
- .split(",")
- .map((entry) => entry.trim())
- .find(Boolean);
- return normalizeRoleCandidate(firstAgent);
- }
- function hasAlignedTaskContract(workspaceRoot) {
- const required = [
- "agent-state/TASK.md",
- "agent-state/SCOPE.md",
- "agent-state/QUALITY_GATES.md",
- "agent-state/HANDOFF.json",
- ];
- return required.every((relativePath) => {
- const absolute = resolve(workspaceRoot, relativePath);
- if (!existsSync(absolute))
- return false;
- return readFileSync(absolute, "utf-8").trim().length > 0;
- });
+ function fallbackRoleForTask() {
+ return "orchestrator";
 }
- function fallbackRoleForTask(task, workspaceRoot) {
- if (!hasAlignedTaskContract(workspaceRoot)) {
- return "orchestrator";
- }
- const { domain } = scoreDomains(task);
- switch (domain) {
- case "engineering":
- return "coders";
- case "venture":
- return "vos";
- case "ux":
- return "ui";
- default:
- return "orchestrator";
- }
- }
- async function resolveRole(task, workspaceRoot, sessionId, requestedRole) {
+ async function resolveRole(task, sessionId, requestedRole) {
 const explicitRole = normalizeRoleCandidate(requestedRole);
 if (explicitRole) {
 return { role: explicitRole, routingSummary: undefined };
 }
 const routing = await executeAceInternalTool("route_task", { description: task, domain: "unknown" }, sessionId);
 const routingSummary = extractTextContent(routing);
- const routedRole = parseRoleFromRoutingSummary(routingSummary);
- const fallbackRole = fallbackRoleForTask(task, workspaceRoot);
- return {
- role: routedRole && !(routedRole === "orchestrator" && fallbackRole !== "orchestrator")
- ? routedRole
- : fallbackRole,
- routingSummary,
- };
+ return { role: fallbackRoleForTask(), routingSummary };
 }
 function resolveTier(requested, provider, model, role) {
 if (requested && requested !== "auto")
@@ -152,7 +107,7 @@ export async function runLocalModelTask(options) {
 ollamaUrl: options.ollamaUrl,
 });
 const bridge = new ModelBridge(options.clients ?? createDefaultModelBridgeClients(runtime));
- const { role, routingSummary } = await resolveRole(options.task, runtime.workspaceRoot, undefined, options.role);
+ const { role, routingSummary } = await resolveRole(options.task, undefined, options.role);
 const tier = resolveTier(options.tier, runtime.provider, runtime.model, role);
 const result = await bridge.run({
 task: options.task,
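
The net effect of the `resolveRole` rewrite, condensed into a runnable sketch (the real `normalizeRoleCandidate` also strips an `ace-` prefix and maps `coder` to `coders`; this stub only trims and lowercases):

```js
const normalizeRoleCandidate = (input) => input?.trim().toLowerCase() || undefined;

function resolveRoleSketch(requestedRole, routingSummary) {
    const explicitRole = normalizeRoleCandidate(requestedRole);
    if (explicitRole) return { role: explicitRole, routingSummary: undefined };
    // route_task output is kept as advisory context only; the role itself
    // now always falls back to the orchestrator.
    return { role: "orchestrator", routingSummary };
}

console.log(resolveRoleSketch("QA"));                  // { role: "qa", routingSummary: undefined }
console.log(resolveRoleSketch(undefined, "summary"));  // { role: "orchestrator", routingSummary: "summary" }
```
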
@@ -24,6 +24,7 @@ export interface BootstrapOptions {
 mcpServerPath?: string;
 llm?: string;
 model?: string;
+ baseUrl?: string;
 ollamaUrl?: string;
 }
 export interface BootstrapResult {
@@ -27,7 +27,7 @@ import { ProjectionManager } from "./materializers/projection-manager.js";
 import { TodoSyncer } from "./materializers/todo-syncer.js";
 import { OPERATIONAL_ARTIFACT_REL_PATHS, operationalArtifactKey, } from "./store-artifacts.js";
 import { STORE_VERSION } from "./types.js";
- import { DEFAULT_LLAMA_CPP_MODEL, DEFAULT_OLLAMA_MODEL, } from "../tui/provider-discovery.js";
+ import { buildProviderDoctorCommands, defaultModelForProvider, normalizeLocalBaseUrl, normalizeProvider, } from "../tui/provider-discovery.js";
 const __dirname = dirname(fileURLToPath(import.meta.url));
 function getAssetsRoot() {
 // Resolve relative to this file's location in the installed package
@@ -40,45 +40,32 @@ function getAssetsRoot() {
 async function applyLlmRuntimeConfig(store, opts) {
 if (!opts.llm)
 return;
- const configuredModel = opts.model ?? (opts.llm === "llama.cpp" ? DEFAULT_LLAMA_CPP_MODEL : DEFAULT_OLLAMA_MODEL);
+ const provider = normalizeProvider(opts.llm) ?? opts.llm;
+ const configuredModel = opts.model ?? defaultModelForProvider(provider);
+ const configuredBaseUrl = normalizeLocalBaseUrl(opts.baseUrl ?? opts.ollamaUrl);
 const profilePayload = {
- provider: opts.llm,
+ provider,
 model: configuredModel,
 generated_at: new Date().toISOString(),
 };
- if (opts.ollamaUrl) {
- profilePayload.base_url = opts.ollamaUrl;
- if (opts.llm === "ollama") {
- profilePayload.api_compat_base_url = opts.ollamaUrl.endsWith("/v1")
- ? opts.ollamaUrl
- : `${opts.ollamaUrl}/v1`;
+ if (configuredBaseUrl) {
+ profilePayload.base_url = configuredBaseUrl;
+ if (provider === "ollama") {
+ profilePayload.api_compat_base_url = configuredBaseUrl.endsWith("/v1")
+ ? configuredBaseUrl
+ : `${configuredBaseUrl}/v1`;
 profilePayload.default_api_key = "ollama";
 }
 }
 await store.setJSON("state/runtime/llm_profile", profilePayload);
- const doctorCommands = opts.llm === "ollama"
- ? [
- "ollama serve",
- `ollama pull ${configuredModel}`,
- ...(opts.ollamaUrl ? [`curl -s ${opts.ollamaUrl}/api/tags`] : []),
- ]
- : [
- "# Start llama-server separately, for example:",
- "# llama-server -m /path/to/model.gguf --port 8080",
- ...(opts.ollamaUrl
- ? [`curl -s ${opts.ollamaUrl.endsWith("/v1") ? opts.ollamaUrl : `${opts.ollamaUrl}/v1`}/models`]
- : []),
- ];
+ const doctorCommands = buildProviderDoctorCommands(provider, configuredModel, configuredBaseUrl);
 await store.setBlob("state/runtime/doctor_checks.md", [
- `# ACE + ${opts.llm} Doctor Checks`,
+ `# ACE + ${provider} Runtime Checks`,
 "",
- "Run these commands to verify local-model readiness:",
+ "Run these commands to verify ACE runtime readiness:",
 "",
 "```bash",
 ...doctorCommands,
- opts.ollamaUrl
- ? `ace doctor --llm ${opts.llm} --model ${configuredModel} --base-url ${opts.ollamaUrl}`
- : `ace doctor --llm ${opts.llm} --model ${configuredModel} --scan`,
 "```",
 "",
 ].join("\n"));
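
For reference, an approximation of the profile that `applyLlmRuntimeConfig` now writes to `state/runtime/llm_profile` for an Ollama bootstrap (field values follow the code above, assuming `normalizeLocalBaseUrl` returns this URL unchanged; the timestamp is illustrative):

```js
// ace init --llm ollama --base-url http://localhost:11434
const profilePayload = {
    provider: "ollama",
    model: "llama3.1:8b",                             // defaultModelForProvider("ollama")
    generated_at: "2025-01-01T00:00:00.000Z",         // new Date().toISOString()
    base_url: "http://localhost:11434",
    api_compat_base_url: "http://localhost:11434/v1", // "/v1" appended when absent
    default_api_key: "ollama",
};
```
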
@@ -52,25 +52,6 @@ function extractToolTextContent(result) {
 .filter(Boolean)
 .join("\n");
 }
- function normalizeRoleCandidate(input) {
- if (!input)
- return undefined;
- const normalized = input.trim().toLowerCase().replace(/^ace-/, "").replace(/[^a-z0-9]+/g, "");
- const candidate = normalized === "coder" ? "coders" : normalized;
- return ROLE_ENUM.options.includes(candidate)
- ? candidate
- : undefined;
- }
- function parseRoleFromRoutingSummary(summary) {
- const match = summary.match(/\*\*Primary Swarm Agent\(s\):\*\*\s*([^\n]+)/i);
- if (!match?.[1])
- return undefined;
- const firstAgent = match[1]
- .split(",")
- .map((entry) => entry.trim())
- .find(Boolean);
- return normalizeRoleCandidate(firstAgent);
- }
 function mapRoleToTaskType(role) {
 switch (role) {
 case "vos":
@@ -91,9 +72,8 @@ function extractHandoffId(text) {
 return lineMatch?.[1];
 }
 async function buildOrchestratorSteps(task, sessionId) {
- const routing = await executeAceInternalTool("route_task", { description: task, domain: "unknown" }, sessionId);
- const routedRole = parseRoleFromRoutingSummary(extractToolTextContent(routing)) ?? "orchestrator";
- return [{ role: routedRole, task }];
+ void sessionId;
+ return [{ role: "orchestrator", task }];
 }
 function appendUniqueNote(target, note) {
 if (!target.includes(note)) {
@@ -566,9 +546,9 @@ export function registerAgentTools(server) {
 ],
 };
 });
- server.tool("run_local_model", "Offload a governed ACE subtask to a local model bridge and return the result", {
- task: z.string().describe("Task to execute with the local ACE model bridge"),
- role: ROLE_ENUM.optional().describe("Optional ACE role; defaults to route_task"),
+ server.tool("run_local_model", "Offload a governed ACE subtask to the provider-backed ACE bridge and return the result", {
+ task: z.string().describe("Task to execute with the ACE model bridge"),
+ role: ROLE_ENUM.optional().describe("Optional ACE role; defaults to orchestrator"),
 max_turns: z
 .number()
 .int()
@@ -582,15 +562,15 @@ export function registerAgentTools(server) {
 provider: z
 .string()
 .optional()
- .describe("Optional provider override; otherwise discovered from local runtime context"),
+ .describe("Optional provider override; otherwise discovered from workspace/runtime context"),
 model: z
 .string()
 .optional()
- .describe("Optional model override; otherwise discovered from local runtime context"),
+ .describe("Optional model override; otherwise discovered from workspace/runtime context"),
 base_url: z
 .string()
 .optional()
- .describe("Optional local runtime base URL override"),
+ .describe("Optional provider base URL override"),
 ollama_url: z
 .string()
 .optional()
@@ -621,7 +601,7 @@ export function registerAgentTools(server) {
 {
 type: "text",
 text: [
- "## Local Model Run",
+ "## Delegated Run",
 `- role: ${delegated.role}`,
 `- provider: ${delegated.runtime.provider}`,
 `- model: ${delegated.runtime.model}`,
@@ -655,7 +635,7 @@ export function registerAgentTools(server) {
 ],
 };
 });
- server.tool("run_orchestrator", "Execute a supervised plan via model bridge child runs; when steps are omitted, auto-planning falls back to a single route_task-derived step", {
+ server.tool("run_orchestrator", "Execute a supervised plan via model bridge child runs; when steps are omitted, the plan starts with ACE-Orchestrator", {
 task: z.string().describe("The task to decompose and execute"),
 steps: z
 .array(z.object({
@@ -675,7 +655,7 @@ export function registerAgentTools(server) {
 .describe("Optional ACE tool allowlist for the step"),
 }))
 .optional()
- .describe("Pre-defined steps; if omitted, the orchestrator derives a single routed step"),
+ .describe("Pre-defined steps; if omitted, the orchestrator starts with a single ACE-Orchestrator step"),
 execution_mode: z
 .enum(["sequential", "scheduled"])
 .optional()
@@ -689,15 +669,15 @@ export function registerAgentTools(server) {
 provider: z
 .string()
 .optional()
- .describe("Optional provider override; otherwise discovered from local runtime context"),
+ .describe("Optional provider override; otherwise discovered from workspace/runtime context"),
 model: z
 .string()
 .optional()
- .describe("Optional model override; otherwise discovered from local runtime context"),
+ .describe("Optional model override; otherwise discovered from workspace/runtime context"),
 base_url: z
 .string()
 .optional()
- .describe("Optional local runtime base URL override"),
+ .describe("Optional provider base URL override"),
 ollama_url: z
 .string()
 .optional()
@@ -715,7 +695,7 @@ export function registerAgentTools(server) {
 ollamaUrl: ollama_url,
 });
 const sessionId = typeof extra?.sessionId === "string" ? extra.sessionId : undefined;
- const planSource = Array.isArray(steps) && steps.length > 0 ? "explicit_steps" : "route_task_single_step";
+ const planSource = Array.isArray(steps) && steps.length > 0 ? "explicit_steps" : "orchestrator_default_step";
 const planSteps = Array.isArray(steps) && steps.length > 0
 ? steps
 : await buildOrchestratorSteps(task, sessionId);
@@ -840,8 +820,8 @@ export function registerAgentTools(server) {
 text: JSON.stringify({
 runtime,
 plan_source: planSource,
- planning_note: planSource === "route_task_single_step"
- ? "Auto-planning currently falls back to a single route_task-derived step. Pass explicit steps for multi-step orchestration."
+ planning_note: planSource === "orchestrator_default_step"
+ ? "Auto-planning currently starts with ACE-Orchestrator. Pass explicit steps for multi-step orchestration."
 : null,
 plan: supervised.plan,
 step_summaries,
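
A short sketch of what `run_orchestrator` now auto-plans when `steps` is omitted (assembled from `buildOrchestratorSteps` and the `planSource` logic above):

```js
const task = "Wire provider-backed doctor checks";
const steps = undefined; // caller passed no explicit plan

const planSource = Array.isArray(steps) && steps.length > 0
    ? "explicit_steps"
    : "orchestrator_default_step";
const planSteps = [{ role: "orchestrator", task }]; // buildOrchestratorSteps(task, sessionId)

console.log(planSource, planSteps);
// orchestrator_default_step [ { role: 'orchestrator', task: 'Wire provider-backed doctor checks' } ]
```
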
@@ -4,9 +4,9 @@
 */
 import { z } from "zod";
 import { bootstrapStoreWorkspace } from "./store/bootstrap-store.js";
- import { ACE_ROOT_REL, ACE_TASKS_ROOT_REL, ALL_MCP_CLIENTS, ALL_AGENTS, COMPOSABLE_AGENTS, SWARM_AGENTS, SWARM_SUBAGENT_MAP, WORKSPACE_ROOT, classifyPathSource, detectAssetDrift, getAllMcpServerConfigSnippets, getAgentInstructionPath, getAgentManifestPath, getKernelArtifactPath, getMcpClientInstallHint, getMcpServerConfigSnippet, getTaskArtifactPath, isSwarmRole, listAvailableSkills, normalizePathForValidation, safeRead, safeWrite, withFileLock, wsPath, } from "./helpers.js";
+ import { ACE_ROOT_REL, ACE_TASKS_ROOT_REL, ALL_MCP_CLIENTS, ALL_LLM_PROVIDERS, ALL_AGENTS, COMPOSABLE_AGENTS, SWARM_AGENTS, SWARM_SUBAGENT_MAP, WORKSPACE_ROOT, classifyPathSource, detectAssetDrift, getAllMcpServerConfigSnippets, getAgentInstructionPath, getAgentManifestPath, getKernelArtifactPath, getMcpClientInstallHint, getMcpServerConfigSnippet, getTaskArtifactPath, isSwarmRole, listAvailableSkills, normalizePathForValidation, safeRead, safeWrite, withFileLock, wsPath, } from "./helpers.js";
 import { getRoleTitle, MCP_CLIENT_ENUM, scoreDomains, } from "./shared.js";
- import { DEFAULT_LLAMA_CPP_MODEL, DEFAULT_OLLAMA_MODEL, } from "./tui/provider-discovery.js";
+ import { defaultModelForProvider, } from "./tui/provider-discovery.js";
 import { refreshAstgrepIndex } from "./astgrep-index.js";
 import { scanWorkspaceDelta } from "./index-store.js";
 import { getOrRefreshKanbanSnapshot } from "./kanban.js";
@@ -289,7 +289,7 @@ export function registerFrameworkTools(server) {
 }
 const routingMap = {
 venture: {
- swarm_agents: ["vos"],
+ swarm_agents: ["orchestrator"],
 subagents: [
 "astgrep",
 "research",
@@ -298,11 +298,11 @@ export function registerFrameworkTools(server) {
 "skeptic",
 "ops",
 ],
- pipeline: "VOS -> UI -> Coders (as needed) with composable Research/Spec/Skeptic/Ops support",
- prompt: "ace-vos",
+ pipeline: "Orchestrator -> VOS/UI/Coders as needed with composable Research/Spec/Skeptic/Ops support",
+ prompt: "ace-orchestrator",
 },
 ux: {
- swarm_agents: ["ui"],
+ swarm_agents: ["orchestrator"],
 subagents: [
 "astgrep",
 "research",
@@ -313,11 +313,11 @@ export function registerFrameworkTools(server) {
 "skeptic",
 "ops",
 ],
- pipeline: "UI -> Coders with composable Spec/Builder/QA and Skeptic/Ops sidecars",
- prompt: "ace-ui",
+ pipeline: "Orchestrator -> UI/Coders with composable Spec/Builder/QA and Skeptic/Ops sidecars",
+ prompt: "ace-orchestrator",
 },
 engineering: {
- swarm_agents: ["coders"],
+ swarm_agents: ["orchestrator"],
 subagents: [
 "astgrep",
 "spec",
@@ -331,8 +331,8 @@ export function registerFrameworkTools(server) {
 "skeptic",
 "ops",
 ],
- pipeline: "Coders with composable Spec -> Builder -> QA -> Docs and Skeptic/Ops guards",
- prompt: "ace-coders",
+ pipeline: "Orchestrator -> Coders with composable Spec -> Builder -> QA -> Docs and Skeptic/Ops guards",
+ prompt: "ace-orchestrator",
 },
 mixed: {
 swarm_agents: ["orchestrator"],
@@ -486,7 +486,8 @@ export function registerFrameworkTools(server) {
 `**Primary Swarm Agent(s):** ${activeRoute.swarm_agents
 .map((agent) => `ACE-${getRoleTitle(agent)}`)
 .join(", ")}`,
- "**Hierarchy Rule:** Top-level routing stays locked to ACE-Orchestrator, ACE-VOS, ACE-UI, or ACE-Coders. Composable agents are delegated specialists, not peer replacements.",
+ "**Default Entry Agent:** ACE-Orchestrator",
+ "**Hierarchy Rule:** Top-level work starts with ACE-Orchestrator. VOS, UI, and Coders are delegated specialists, not peer replacements.",
 `**Preflight Owner:** ACE-Orchestrator`,
 `**Task Contract:** ${taskContract.ok ? "aligned" : "attention required"}`,
 `**Composable Subagents (Universal):** ${[...COMPOSABLE_AGENTS].join(", ")}`,
@@ -523,7 +524,7 @@ export function registerFrameworkTools(server) {
 };
 });
 // ── Bootstrap ─────────────────────────────────────────────────────
- server.tool("bootstrap_state", "Bootstrap ACE framework files (state, tasks, instructions, skills, scripts, MCP configs, optional local LLM profile)", {
+ server.tool("bootstrap_state", "Bootstrap ACE framework files (state, tasks, instructions, skills, scripts, MCP configs, optional runtime profile)", {
 project_name: z
 .string()
 .optional()
@@ -541,18 +542,28 @@ export function registerFrameworkTools(server) {
 .optional()
 .describe("Write minimal workspace host stubs (AGENTS.md, CLAUDE.md, .cursorrules, .github/copilot-instructions.md)"),
 llm_provider: z
- .enum(["ollama", "llama.cpp"])
+ .enum(ALL_LLM_PROVIDERS)
+ .optional()
+ .describe("Optional LLM runtime provider to scaffold"),
+ llm_model: z
+ .string()
+ .optional()
+ .describe("Model name for generated runtime profile"),
+ llm_base_url: z
+ .string()
 .optional()
- .describe("Optional local LLM profile provider to scaffold"),
+ .describe("Runtime base URL for generated provider profile"),
 ollama_model: z
 .string()
 .optional()
- .describe("Model name for generated local profile (legacy field name kept for compatibility)"),
+ .describe("Legacy alias for llm_model"),
 ollama_base_url: z
 .string()
 .optional()
- .describe("Local runtime base URL for generated profile (legacy field name kept for compatibility)"),
- }, async ({ project_name, force, include_mcp_config, include_client_config_bundle, llm_provider, ollama_model, ollama_base_url, }) => {
+ .describe("Legacy alias for llm_base_url"),
+ }, async ({ project_name, force, include_mcp_config, include_client_config_bundle, llm_provider, llm_model, llm_base_url, ollama_model, ollama_base_url, }) => {
+ const resolvedLlmModel = llm_model ?? ollama_model;
+ const resolvedLlmBaseUrl = llm_base_url ?? ollama_base_url;
 // Store-first bootstrap — initializes ace-state.ace, bakes host bundles into store,
 // and optionally materializes minimal workspace stubs.
 const storeResult = await bootstrapStoreWorkspace({
@@ -562,8 +573,8 @@ export function registerFrameworkTools(server) {
 includeMcpConfig: include_mcp_config,
 includeClientConfigBundle: include_client_config_bundle,
 llm: llm_provider ?? undefined,
- model: ollama_model ?? undefined,
- ollamaUrl: ollama_base_url ?? undefined,
+ model: resolvedLlmModel ?? undefined,
+ baseUrl: resolvedLlmBaseUrl ?? undefined,
 });
 const astIndex = refreshAstgrepIndex({
 scope: ".",
@@ -601,8 +612,8 @@ export function registerFrameworkTools(server) {
 materialized: storeResult.materialized.length,
 force: Boolean(force),
 llm_provider: llm_provider ?? null,
- llm_model: ollama_model ?? null,
- llm_base_url: ollama_base_url ?? null,
+ llm_model: resolvedLlmModel ?? null,
+ llm_base_url: resolvedLlmBaseUrl ?? null,
 indexed_files: delta.snapshot.file_count,
 index_truncated: delta.truncated,
 ast_index_scope: astIndex.scope,
@@ -645,10 +656,13 @@ export function registerFrameworkTools(server) {
 ...(llm_provider
 ? [
 "",
- "## Local LLM Profile",
+ "## LLM Runtime Profile",
 `- provider: ${llm_provider}`,
- `- model: ${ollama_model ?? (llm_provider === "llama.cpp" ? DEFAULT_LLAMA_CPP_MODEL : DEFAULT_OLLAMA_MODEL)}`,
- `- base_url: ${ollama_base_url ?? "discover via ace doctor --scan or set explicitly"}`,
+ `- model: ${resolvedLlmModel ?? defaultModelForProvider(llm_provider)}`,
+ `- base_url: ${resolvedLlmBaseUrl ??
+ (llm_provider === "ollama" || llm_provider === "llama.cpp"
+ ? "discover via ace doctor --scan or set explicitly"
+ : "optional; use provider defaults or set explicitly")}`,
 `- profile_path: ${storeResult.storePath}#state/runtime/llm_profile`,
 `- doctor_path: ${storeResult.storePath}#state/runtime/doctor_checks.md`,
 ]
package/dist/tui/index.js CHANGED
@@ -17,7 +17,7 @@ import { OpenAICompatibleClient, diagnoseChatRuntimeConfig, } from "./openai-com
 import { detectColorLevel, write, cursor, screen, fg, style } from "./renderer.js";
 import { ALL_AGENTS, WORKSPACE_ROOT } from "../helpers.js";
 import { backfillHandoffsIntoScheduler } from "../tools-handoff.js";
- import { DEFAULT_OLLAMA_MODEL, inferProviderFromModel, normalizeLocalBaseUrl, } from "./provider-discovery.js";
+ import { DEFAULT_OLLAMA_MODEL, defaultModelForProvider, inferProviderFromModel, normalizeLocalBaseUrl, } from "./provider-discovery.js";
 import { resolveAceStateLayout } from "../ace-state-resolver.js";
 import { withLocalModelRuntimeRepository, } from "../store/repositories/local-model-runtime-repository.js";
 const DASHBOARD_CONTROLS = ["provider", "model", "chat", "logs", "refresh"];
@@ -56,7 +56,7 @@ export class AceTui {
 const workspaceRoot = options.workspaceRoot ?? WORKSPACE_ROOT;
 this.workspaceRoot = workspaceRoot;
 this.provider = this.normalizeProvider(options.provider ?? inferProviderFromModel(options.model) ?? "ollama") ?? "ollama";
- this.model = (options.model ?? DEFAULT_OLLAMA_MODEL).trim();
+ this.model = (options.model ?? defaultModelForProvider(this.provider)).trim();
 // Initialize modules
 const colorLevel = detectColorLevel();
 for (const [provider, baseUrl] of Object.entries(options.providerBaseUrls ?? {})) {
@@ -758,7 +758,7 @@ export class AceTui {
 this.model = DEFAULT_OLLAMA_MODEL;
 }
 else {
- this.model = `${this.provider}/default`;
+ this.model = defaultModelForProvider(this.provider);
 }
 this.config.set("model", this.model);
 this.telemetry.setModel(this.model);
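
The TUI model fallback is now provider-aware; a usage sketch (module path as in the import above, hosted defaults from `DEFAULT_HOSTED_MODELS` later in this diff):

```js
import { defaultModelForProvider } from "./provider-discovery.js";

console.log(defaultModelForProvider("ollama"));    // "llama3.1:8b"
console.log(defaultModelForProvider("llama.cpp")); // "local-model"
console.log(defaultModelForProvider("codex"));     // "gpt-5"
console.log(defaultModelForProvider("copilot"));   // "copilot/gpt-5-mini"
// Previously the non-Ollama fallback was the literal `${provider}/default`.
```
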
@@ -28,24 +28,8 @@ function extractStatusField(statusRaw, label) {
 const match = statusRaw.match(pattern);
 return match?.[1] ? clip(oneLine(match[1])) : undefined;
 }
- function determineRole(task, preferredRole) {
- const lowered = task.toLowerCase();
- if (preferredRole && preferredRole !== "orchestrator") {
- return preferredRole;
- }
- if (/\b(doc|docs|readme|changelog|guide)\b/.test(lowered))
- return "docs";
- if (/\b(test|qa|verify|regression|assert|review)\b/.test(lowered))
- return "qa";
- if (/\b(ui|design|layout|ux|css|frontend)\b/.test(lowered))
- return "ui";
- if (/\b(research|investigate|compare|audit|analyze|analysis)\b/.test(lowered))
- return "research";
- if (/\b(spec|schema|contract|interface)\b/.test(lowered))
- return "spec";
- if (/\b(build|implement|fix|patch|code|refactor|wire|edit)\b/.test(lowered))
- return "coders";
- return "orchestrator";
+ function determineRole(preferredRole) {
+ return preferredRole?.trim() || "orchestrator";
 }
 export function shouldSynthesizeShortPlan(task) {
 const lowered = task.toLowerCase();
@@ -103,17 +87,15 @@ export function buildAcePreflightPacket(input) {
 ? "thin"
 : "healthy";
 const preflightState = blockers.length > 0 ? "blocked" : warnings.length > 0 || blockedByStatus ? "attention_required" : "ready";
- const recommendedRole = determineRole(input.task, input.preferredRole);
+ const recommendedRole = determineRole(input.preferredRole);
 const synthesize = shouldSynthesizeShortPlan(input.task);
 const recommendedNextAction = preflightState === "blocked"
 ? "validate_framework"
 : synthesize
 ? "run_orchestrator"
- : quartetHealth !== "healthy"
- ? "recall_context"
- : recommendedRole === "orchestrator"
- ? "route_task"
- : "recall_context";
+ : recommendedRole === "orchestrator"
+ ? "run_orchestrator"
+ : "recall_context";
 return {
 session_id: input.sessionId,
 workspace_root: input.workspaceRoot,
@@ -154,7 +136,7 @@ export function buildStartupNudge(preflight, ledger) {
 const text = action === "validate_framework"
 ? "ACE state is incomplete. Run validate_framework before free-form execution."
 : action === "run_orchestrator"
- ? "This looks multi-step. Prefer run_orchestrator over a single free-form reply."
+ ? "Start with run_orchestrator so ACE-Orchestrator can route the work."
 : action === "route_task"
 ? "Role selection is still ambiguous. Route the task before deeper work."
 : "Load ACE context first with recall_context before deeper work.";
@@ -14,6 +14,8 @@
 */
 export declare const DEFAULT_OLLAMA_MODEL = "llama3.1:8b";
 export declare const DEFAULT_LLAMA_CPP_MODEL = "local-model";
+ export declare const LOCAL_LLM_PROVIDERS: readonly ["ollama", "llama.cpp"];
+ export type LocalLlmProvider = (typeof LOCAL_LLM_PROVIDERS)[number];
 export interface ProviderDiscoveryOptions {
 workspaceRoot: string;
 cliProvider?: string;
@@ -53,6 +55,10 @@ export declare function normalizeProvider(input: string | undefined): string | u
 export declare function normalizeLocalBaseUrl(url: string | undefined): string | undefined;
 export declare function buildOpenAiCompatibleBaseUrl(baseUrl: string): string;
 export declare function inferProviderFromModel(model: string | undefined): string | undefined;
+ export declare function isLocalLlmProvider(providerInput: string | undefined): providerInput is LocalLlmProvider;
+ export declare function defaultModelForProvider(providerInput: string | undefined): string;
+ export declare function providerEnvPrefix(providerInput: string | undefined): string;
+ export declare function buildProviderDoctorCommands(providerInput: string | undefined, modelInput: string | undefined, baseUrlInput?: string): string[];
 export declare function parseJsoncLoose(raw: string): unknown;
 export declare function discoverProviderContext(options: ProviderDiscoveryOptions): ProviderDiscoveryResult;
 export declare function scanLocalModelRuntimes(options?: LocalRuntimeScanOptions): Promise<LocalRuntimeScanResult>;
@@ -18,7 +18,14 @@ import { join, resolve } from "node:path";
 import { readStoreJsonSync } from "../store/store-snapshot.js";
 export const DEFAULT_OLLAMA_MODEL = "llama3.1:8b";
 export const DEFAULT_LLAMA_CPP_MODEL = "local-model";
- const LOCAL_RUNTIME_PROVIDER_PREFERENCE = ["ollama", "llama.cpp"];
+ const DEFAULT_HOSTED_MODELS = {
+ codex: "gpt-5",
+ claude: "claude-3-7-sonnet",
+ gemini: "gemini-2.5-pro",
+ copilot: "copilot/gpt-5-mini",
+ };
+ export const LOCAL_LLM_PROVIDERS = ["ollama", "llama.cpp"];
+ const LOCAL_RUNTIME_PROVIDER_PREFERENCE = LOCAL_LLM_PROVIDERS;
 const PROVIDER_PREFERENCE = [
 ...LOCAL_RUNTIME_PROVIDER_PREFERENCE,
 "codex",
@@ -95,12 +102,69 @@ export function inferProviderFromModel(model) {
 }
 return undefined;
 }
- function defaultModelForProvider(provider) {
- if (provider === "ollama")
+ export function isLocalLlmProvider(providerInput) {
+ const provider = normalizeProvider(providerInput);
+ return provider === "ollama" || provider === "llama.cpp";
+ }
+ export function defaultModelForProvider(providerInput) {
+ const provider = normalizeProvider(providerInput);
+ if (provider === "ollama" || !provider)
 return DEFAULT_OLLAMA_MODEL;
 if (provider === "llama.cpp")
 return DEFAULT_LLAMA_CPP_MODEL;
- return `${provider}/default`;
+ return DEFAULT_HOSTED_MODELS[provider] ?? DEFAULT_HOSTED_MODELS.codex;
+ }
+ export function providerEnvPrefix(providerInput) {
+ const provider = normalizeProvider(providerInput);
+ if (!provider)
+ return "LLM";
+ if (provider === "llama.cpp")
+ return "LLAMA_CPP";
+ return provider.toUpperCase().replace(/[^A-Z0-9]+/g, "_");
+ }
+ export function buildProviderDoctorCommands(providerInput, modelInput, baseUrlInput) {
+ const provider = normalizeProvider(providerInput) ?? "ollama";
+ const model = modelInput?.trim() || defaultModelForProvider(provider);
+ const baseUrl = normalizeLocalBaseUrl(baseUrlInput);
+ if (provider === "ollama") {
+ return [
+ "ollama serve",
+ `ollama pull ${model}`,
+ ...(baseUrl ? [`curl -s ${baseUrl}/api/tags`] : []),
+ baseUrl
+ ? `ace doctor --llm ${provider} --model ${model} --base-url ${baseUrl}`
+ : `ace doctor --llm ${provider} --model ${model} --scan`,
+ ];
+ }
+ if (provider === "llama.cpp") {
+ return [
+ "# Start llama-server separately, for example:",
+ "# llama-server -m /path/to/model.gguf --port 8080",
+ ...(baseUrl ? [`curl -s ${buildOpenAiCompatibleBaseUrl(baseUrl)}/models`] : []),
+ baseUrl
+ ? `ace doctor --llm ${provider} --model ${model} --base-url ${baseUrl}`
+ : `ace doctor --llm ${provider} --model ${model} --scan`,
+ ];
+ }
+ if (provider === "codex") {
+ return [
+ "export OPENAI_API_KEY=<token>",
+ ...(baseUrl ? [`export OPENAI_BASE_URL=${baseUrl}`] : []),
+ `ace doctor --llm ${provider} --model ${model}${baseUrl ? ` --base-url ${baseUrl}` : ""}`,
+ ];
+ }
+ if (provider === "copilot") {
+ return [
+ "gh auth login # or export GITHUB_TOKEN=<token>",
+ `ace doctor --llm ${provider} --model ${model}`,
+ ];
+ }
+ const prefix = providerEnvPrefix(provider);
+ return [
+ `export ${prefix}_BASE_URL=${baseUrl ?? "<openai-compatible-base-url>"}`,
+ `export ${prefix}_API_KEY=<token>`,
+ `ace doctor --llm ${provider} --model ${model}${baseUrl ? ` --base-url ${baseUrl}` : ""}`,
+ ];
 }
 function looksLikeModel(value) {
 const v = value.trim().toLowerCase();
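
Usage sketch for `buildProviderDoctorCommands`, with outputs traced through the branches above (assuming `normalizeLocalBaseUrl` passes a plain `http://` URL through unchanged):

```js
import { buildProviderDoctorCommands } from "./tui/provider-discovery.js";

buildProviderDoctorCommands("codex", undefined, undefined);
// → [ "export OPENAI_API_KEY=<token>",
//     "ace doctor --llm codex --model gpt-5" ]

buildProviderDoctorCommands("ollama", "llama3.1:8b", "http://localhost:11434");
// → [ "ollama serve",
//     "ollama pull llama3.1:8b",
//     "curl -s http://localhost:11434/api/tags",
//     "ace doctor --llm ollama --model llama3.1:8b --base-url http://localhost:11434" ]
```
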
@@ -326,6 +390,8 @@ export function discoverProviderContext(options) {
 process.env.LLM_MODEL?.trim() ||
 undefined;
 const envGenericBaseUrl = normalizeLocalBaseUrl(process.env.ACE_TUI_BASE_URL ?? process.env.ACE_LLM_BASE_URL ?? process.env.LLM_BASE_URL);
+ const openAiBaseUrl = normalizeLocalBaseUrl(process.env.OPENAI_BASE_URL);
+ const genericBaseUrl = envGenericBaseUrl ?? openAiBaseUrl;
 const cliProvider = normalizeProvider(options.cliProvider);
 const cliModel = options.cliModel?.trim() || undefined;
 const cliBaseUrl = normalizeLocalBaseUrl(options.cliBaseUrl ?? options.cliOllamaUrl);
@@ -355,9 +421,18 @@ export function discoverProviderContext(options) {
 inferProviderFromModel(profileModel);
 }
 if (!provider) {
- provider = "ollama";
- if (settingHints.length > 0) {
- notes.push("runtime_default=ollama (VS Code model hints are discovery-only)");
+ const genericOpenAiSignals = Boolean(genericBaseUrl) ||
+ Boolean(process.env.OPENAI_API_KEY?.trim()) ||
+ Boolean(process.env.CODEX_API_KEY?.trim());
+ if (genericOpenAiSignals) {
+ provider = "codex";
+ notes.push("runtime_default=codex (generic OpenAI-compatible env detected)");
+ }
+ else {
+ provider = "ollama";
+ if (settingHints.length > 0) {
+ notes.push("runtime_default=ollama (VS Code model hints are discovery-only)");
+ }
 }
 }
 setProviderBaseUrl(providerBaseUrls, profileProvider, profileBaseUrl);
@@ -366,8 +441,8 @@ export function discoverProviderContext(options) {
 if (!providerBaseUrls.has(provider) && cliBaseUrl) {
 providerBaseUrls.set(provider, cliBaseUrl);
 }
- if (!providerBaseUrls.has(provider) && envGenericBaseUrl) {
- providerBaseUrls.set(provider, envGenericBaseUrl);
+ if (!providerBaseUrls.has(provider) && genericBaseUrl) {
+ providerBaseUrls.set(provider, genericBaseUrl);
 }
 if (!providerBaseUrls.has(provider) && profileBaseUrl && profileProvider === provider) {
 providerBaseUrls.set(provider, profileBaseUrl);
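
Finally, a sketch of the new provider defaulting in `discoverProviderContext` (the result's field names are an assumption; the hunk above only shows the `provider` and `notes` assignments):

```js
import { discoverProviderContext } from "./tui/provider-discovery.js";

// With no stored profile or CLI flags, but a generic OpenAI-compatible signal
// (OPENAI_API_KEY, CODEX_API_KEY, or OPENAI_BASE_URL), discovery now defaults
// to "codex" and records why; without any signal it still falls back to "ollama".
process.env.OPENAI_API_KEY = "sk-example";
const ctx = discoverProviderContext({ workspaceRoot: process.cwd() });
// expected: ctx.provider === "codex"
// expected note: "runtime_default=codex (generic OpenAI-compatible env detected)"
```
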
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
 "name": "@voybio/ace-swarm",
- "version": "0.2.3",
+ "version": "0.2.4",
 "description": "ACE Framework MCP server and CLI — single-file ACEPACK state, local-model serving, agent orchestration, and host compliance enforcement.",
 "type": "module",
 "main": "dist/index.js",