@swarmclawai/swarmclaw 1.5.43 → 1.5.44

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -389,6 +389,15 @@ Operational docs: https://swarmclaw.ai/docs/observability
 
  ## Releases
 
+ ### v1.5.44 Highlights
+
+ - **Model lists refreshed across every provider**: dropdowns now lead with the April-2026 flagship models instead of mid-2025 names. OpenAI goes to GPT-5.4 / 5.4-mini / 5.4-nano / 5.3 / o3-mini. Google and Gemini CLI lead with Gemini 3.1 Pro, Gemini 3 Flash, and 3.1 Flash-Lite, keeping 2.5 as a legacy fallback. xAI jumps from Grok 3 to Grok 4 plus the Grok 4 / 4.1 Fast reasoning and non-reasoning variants. Groq drops the deprecated `deepseek-r1-distill-llama-70b` and leads with Llama 4 Maverick, Llama 4 Scout, Kimi K2, and gpt-oss 120b/20b. Mistral moves to Magistral 1.2, Devstral 2, Codestral, and Mistral Small 4. Fireworks / Nebius / DeepInfra now lead with DeepSeek V3.2, Kimi K2.5, and Qwen 3 235B instead of the older R1-0528 checkpoint. Anthropic and Claude CLI reorder Opus 4.6 / Sonnet 4.6 / Haiku 4.5 newest-first. OpenCode Web refreshes its `providerID/modelID` seed list.
+ - **OpenRouter default set expanded**: was one model (`openai/gpt-4.1-mini`). Now ten flagship routes including `openrouter/auto`, Claude 4.6 Opus / Sonnet / Haiku, GPT-5.4, Gemini 3.1 Pro / 3 Flash, Grok 4, DeepSeek V3.2, and Llama 4 Maverick. Much better first-run experience for the "provider that routes to every other provider".
+ - **`DEFAULT_AGENTS` models refreshed**: 11 starter-agent models updated to match the new flagship lineups (OpenAI → GPT-5.4, xAI → Grok 4, Google / Gemini CLI → Gemini 3.1 Pro, Groq → Llama 4 Maverick, Fireworks / Nebius / DeepInfra → DeepSeek V3.2, OpenCode Web / Copilot CLI → Claude Sonnet 4.6, OpenRouter → Claude Sonnet 4.6). Starter agents created from the setup wizard now default to the right model out of the box.
+ - **Starter-agent tool bundles now include `droid_cli` and `copilot_cli`**: these delegation backends were added in v1.5.37 and v1.5.3 respectively but never made it into `STARTER_AGENT_TOOLS` / `BUILDER_AGENT_TOOLS`. Every starter kit (Sidekick, Researcher, Builder, Reviewer, Operator, OpenClaw fleet) now picks them up on new workspace creation.
+ - **DeepSeek note**: `deepseek-chat` and `deepseek-reasoner` remain the recommended model names — they are stable aliases that auto-track the current `V3.2` weights. No action required.
+ - **Registry sanity test**: added `provider-models.test.ts`, which asserts every provider declares a non-empty deduplicated models array, matching metadata keys, and a working `handler.streamChat`. Guards against future copy-paste regressions in the registry.
+
  ### v1.5.43 Highlights
 
  - **`/api/version` no longer 500s in Docker**: the route used to shell out to `git` at runtime, which fails in the production image because `.git/` is not copied. The route now returns 200 with `{ source: 'package', version }` from `package.json` when git metadata is unavailable, and `{ source: 'git', version, commit, ... }` when it is. `/api/version/update` short-circuits on Docker-style installs with a clear `no_git_metadata` reason instead of an opaque 500. ([#41](https://github.com/swarmclawai/swarmclaw/issues/41) Bug 1, reported by [@SteamedFish](https://github.com/SteamedFish).)
@@ -415,15 +424,6 @@ Operational docs: https://swarmclaw.ai/docs/observability
  - **Classifier timeout raised to 10 s**: 2 s was too tight for Ollama Cloud with a fully-configured agent (observed 4–6 s calls). Result caching means the latency tax only applies to first-seen messages.
  - **Reflection memories dedup across runs**: the supervisor reflection writer now compares candidate notes against recent (last 7 days) reflection memories for the same agent and skips ones that have already been stored, stopping the ~7-per-turn rediscovery churn on top of the within-run dedup shipped in v1.5.38.
 
- ### v1.5.39 Highlights
-
- - **Agents default to scoped tool access**: new agents (and existing agents whose `tools` list is non-empty) now only see the tools they've been given in the system prompt. This trims ~3 k input tokens per turn — an observed CEO/coordinator agent with 14 tools and 4 loaded skills went from 62 k to 38 k chars of system prompt. Opt back into the old firehose by toggling **Universal tool access** in the agent sheet's new "Context & Tool Access" section. Memory, context management, and `ask_human` are always included regardless of the scoped list.
- - **Pinned skills budget hardening**: one long markdown skill was eating 24 k of a 62 k prompt. Inlined pinned-skill content is now capped at 3 k chars with a pointer to `use_skill` action="load" for the full guide, and auto-attached *learned* skills get a dedicated sub-budget (max 6 skills / 8 k chars) so they cannot dominate the main pinned-skills section.
- - **OpenClaw chat fast-fails on dangling credentials**: v1.5.38 added gateway-side fast-fail; the chat streaming path now does the same, emitting a clear `err` event naming the missing credential instead of dialing the gateway unauthenticated and waiting 120 s for the timeout.
- - **Queue: orphan-recovery auto-heals stale checkouts**: pre-1.5.38 storage could leave `queued` tasks with a stale `checkoutRunId` that `checkoutTask()` refused forever. Orphan recovery now clears the stale id in the same sweep that re-queues the task, and `reconcileFinishedRunningTasks` / agent-not-found / capability-mismatch paths also null out the checkout when they terminally fail a task.
- - **Perf ring buffer raised to 2 000 entries**: queue/task repository events fire ~20 Hz during task processing and were evicting chat-execution/prompt perf entries out of the 200-entry buffer before they could be read. The larger buffer lets the perf viewer actually show a full turn.
- - **Tests**: added regression tests for pre-1.5.38 stale-checkout orphan recovery and for the scoped-tool-access algorithm.
-
  Older releases: https://swarmclaw.ai/docs/release-notes
 
  - GitHub releases: https://github.com/swarmclawai/swarmclaw/releases
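The git-to-`package.json` fallback described in the v1.5.43 note above can be sketched in a few lines. This is an illustration only, not the package's actual route code; `resolveVersion` and its shape are hypothetical, though the `{ source, version }` payloads mirror the note.

```typescript
import { execSync } from 'node:child_process'

type VersionInfo =
  | { source: 'package'; version: string }
  | { source: 'git'; version: string; commit: string }

// Hypothetical helper: prefer git metadata, degrade to package.json
// instead of letting a missing .git/ turn into a 500.
function resolveVersion(pkgVersion: string): VersionInfo {
  try {
    const commit = execSync('git rev-parse HEAD', {
      stdio: ['ignore', 'pipe', 'ignore'],
    }).toString().trim()
    return { source: 'git', version: pkgVersion, commit }
  } catch {
    // Docker-style install: no git metadata available.
    return { source: 'package', version: pkgVersion }
  }
}
```

Either branch returns a 200-able payload, which is the whole point of the fix.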
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@swarmclawai/swarmclaw",
- "version": "1.5.43",
+ "version": "1.5.44",
  "description": "Build and run autonomous AI agents with OpenClaw, Hermes, multiple model providers, orchestration, delegation, memory, skills, schedules, and chat connectors.",
  "main": "electron-dist/main.js",
  "license": "MIT",
@@ -55,7 +55,7 @@ export const PROVIDERS: Record<string, BuiltinProviderConfig> = {
  'claude-cli': {
  id: 'claude-cli',
  name: 'Claude Code CLI',
- models: ['claude-sonnet-4-6', 'claude-opus-4-6', 'claude-haiku-4-5-20251001', 'claude-sonnet-4-5-20250514'],
+ models: ['claude-opus-4-6', 'claude-sonnet-4-6', 'claude-haiku-4-5'],
  requiresApiKey: false,
  requiresEndpoint: false,
  handler: { streamChat: streamClaudeCliChat },
@@ -63,7 +63,7 @@ export const PROVIDERS: Record<string, BuiltinProviderConfig> = {
  'codex-cli': {
  id: 'codex-cli',
  name: 'OpenAI Codex CLI',
- models: ['gpt-5.3-codex', 'gpt-5.2-codex', 'gpt-5.1-codex', 'gpt-5-codex', 'gpt-5-codex-mini'],
+ models: ['gpt-5.4-codex', 'gpt-5.3-codex', 'gpt-5.2-codex', 'gpt-5.1-codex', 'gpt-5-codex-mini'],
  requiresApiKey: false,
  requiresEndpoint: false,
  handler: { streamChat: streamCodexCliChat },
@@ -71,7 +71,7 @@ export const PROVIDERS: Record<string, BuiltinProviderConfig> = {
  openai: {
  id: 'openai',
  name: 'OpenAI',
- models: ['gpt-4o', 'gpt-4o-mini', 'gpt-4.1', 'gpt-4.1-mini', 'gpt-4.1-nano', 'o3', 'o3-mini', 'o4-mini'],
+ models: ['gpt-5.4', 'gpt-5.4-mini', 'gpt-5.4-nano', 'gpt-5.3', 'o3-mini', 'gpt-4.1', 'gpt-4.1-mini'],
  requiresApiKey: true,
  requiresEndpoint: false,
  handler: { streamChat: streamOpenAiChat },
@@ -79,7 +79,15 @@ export const PROVIDERS: Record<string, BuiltinProviderConfig> = {
  openrouter: {
  id: 'openrouter',
  name: 'OpenRouter',
- models: ['openai/gpt-4.1-mini'],
+ models: [
+ 'openrouter/auto',
+ 'anthropic/claude-opus-4.6', 'anthropic/claude-sonnet-4.6', 'anthropic/claude-haiku-4.5',
+ 'openai/gpt-5.4', 'openai/gpt-5.4-mini',
+ 'google/gemini-3.1-pro', 'google/gemini-3-flash',
+ 'x-ai/grok-4',
+ 'deepseek/deepseek-v3.2',
+ 'meta-llama/llama-4-maverick-17b-128e-instruct',
+ ],
  requiresApiKey: true,
  requiresEndpoint: false,
  defaultEndpoint: 'https://openrouter.ai/api/v1',
@@ -96,7 +104,7 @@ export const PROVIDERS: Record<string, BuiltinProviderConfig> = {
  anthropic: {
  id: 'anthropic',
  name: 'Anthropic',
- models: ['claude-sonnet-4-6', 'claude-opus-4-6', 'claude-haiku-4-5-20251001'],
+ models: ['claude-opus-4-6', 'claude-sonnet-4-6', 'claude-haiku-4-5'],
  requiresApiKey: true,
  requiresEndpoint: false,
  handler: { streamChat: streamAnthropicChat },
@@ -132,7 +140,7 @@ export const PROVIDERS: Record<string, BuiltinProviderConfig> = {
  'opencode-cli': {
  id: 'opencode-cli',
  name: 'OpenCode CLI',
- models: ['claude-sonnet-4-6', 'gpt-4.1', 'gemini-2.5-pro', 'gemini-2.5-flash'],
+ models: ['claude-opus-4-6', 'claude-sonnet-4-6', 'gpt-5.4', 'gemini-3.1-pro', 'gemini-3-flash'],
  requiresApiKey: false,
  requiresEndpoint: false,
  handler: { streamChat: streamOpenCodeCliChat },
@@ -142,7 +150,11 @@ export const PROVIDERS: Record<string, BuiltinProviderConfig> = {
  name: 'OpenCode Web',
  // OpenCode addresses models as `providerID/modelID`. Free-text entry is
  // supported; these defaults seed the dropdown with common combinations.
- models: ['anthropic/claude-sonnet-4-5', 'anthropic/claude-opus-4-5', 'openai/gpt-4.1', 'openai/o4-mini', 'google/gemini-2.5-pro'],
+ models: [
+ 'anthropic/claude-opus-4-6', 'anthropic/claude-sonnet-4-6', 'anthropic/claude-haiku-4-5',
+ 'openai/gpt-5.4', 'openai/gpt-5.4-mini',
+ 'google/gemini-3.1-pro', 'google/gemini-3-flash',
+ ],
  requiresApiKey: false,
  optionalApiKey: true,
  requiresEndpoint: true,
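The `providerID/modelID` addressing mentioned in the OpenCode Web comment splits on the first slash, since model ids may themselves contain slashes. A minimal sketch of that split; `parseModelRef` is a hypothetical helper, not a SwarmClaw export:

```typescript
// Hypothetical helper (not part of the package): split an OpenCode-style
// 'providerID/modelID' slug on the FIRST slash only, so model ids that
// contain slashes survive intact.
function parseModelRef(ref: string): { providerID: string; modelID: string } {
  const i = ref.indexOf('/')
  if (i <= 0 || i === ref.length - 1) {
    throw new Error(`expected 'providerID/modelID', got "${ref}"`)
  }
  return { providerID: ref.slice(0, i), modelID: ref.slice(i + 1) }
}
```

For example, `parseModelRef('anthropic/claude-opus-4-6')` yields provider `anthropic` and model `claude-opus-4-6`.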
@@ -152,7 +164,7 @@ export const PROVIDERS: Record<string, BuiltinProviderConfig> = {
  'gemini-cli': {
  id: 'gemini-cli',
  name: 'Gemini CLI',
- models: ['gemini-2.5-pro', 'gemini-2.5-flash', 'gemini-2.5-flash-lite'],
+ models: ['gemini-3.1-pro', 'gemini-3-flash', 'gemini-3.1-flash-lite', 'gemini-2.5-pro', 'gemini-2.5-flash'],
  requiresApiKey: false,
  requiresEndpoint: false,
  handler: { streamChat: streamGeminiCliChat },
@@ -160,7 +172,7 @@ export const PROVIDERS: Record<string, BuiltinProviderConfig> = {
  'copilot-cli': {
  id: 'copilot-cli',
  name: 'GitHub Copilot CLI',
- models: ['claude-sonnet-4-5', 'gpt-4.1', 'gemini-3-pro'],
+ models: ['claude-sonnet-4-6', 'gpt-5.4', 'gemini-3.1-pro'],
  requiresApiKey: false,
  requiresEndpoint: false,
  handler: { streamChat: streamCopilotCliChat },
@@ -202,7 +214,7 @@ export const PROVIDERS: Record<string, BuiltinProviderConfig> = {
  google: {
  id: 'google',
  name: 'Google Gemini',
- models: ['gemini-2.5-pro', 'gemini-2.5-flash', 'gemini-2.5-flash-lite'],
+ models: ['gemini-3.1-pro', 'gemini-3-flash', 'gemini-3.1-flash-lite', 'gemini-2.5-pro', 'gemini-2.5-flash'],
  requiresApiKey: true,
  requiresEndpoint: false,
  defaultEndpoint: 'https://generativelanguage.googleapis.com/v1beta/openai',
@@ -219,6 +231,9 @@ export const PROVIDERS: Record<string, BuiltinProviderConfig> = {
  deepseek: {
  id: 'deepseek',
  name: 'DeepSeek',
+ // Stable aliases: 'deepseek-chat' is the non-thinking mode of the latest
+ // V-series (currently V3.2), 'deepseek-reasoner' is the thinking mode.
+ // DeepSeek rotates the underlying weights without changing these names.
  models: ['deepseek-chat', 'deepseek-reasoner'],
  requiresApiKey: true,
  requiresEndpoint: false,
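The alias scheme in the DeepSeek comment can be made explicit as data. The mapping below is illustrative only: the names and the tracked checkpoint come from the comment itself, and since DeepSeek rotates the weights server-side, the `tracks` value goes stale by design.

```typescript
// Illustrative only: callers pin the stable alias; the tracked checkpoint
// changes underneath without a client-side rename.
const DEEPSEEK_ALIASES = {
  'deepseek-chat': { mode: 'non-thinking', tracks: 'V3.2' },
  'deepseek-reasoner': { mode: 'thinking', tracks: 'V3.2' },
} as const

type DeepSeekAlias = keyof typeof DEEPSEEK_ALIASES

// Narrowing guard: is this model id one of the stable aliases?
function isDeepSeekAlias(model: string): model is DeepSeekAlias {
  return model in DEEPSEEK_ALIASES
}
```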
@@ -236,7 +251,15 @@ export const PROVIDERS: Record<string, BuiltinProviderConfig> = {
  groq: {
  id: 'groq',
  name: 'Groq',
- models: ['llama-3.3-70b-versatile', 'deepseek-r1-distill-llama-70b', 'qwen-qwq-32b', 'gemma2-9b-it'],
+ models: [
+ 'meta-llama/llama-4-maverick-17b-128e-instruct',
+ 'meta-llama/llama-4-scout-17b-16e-instruct',
+ 'moonshotai/kimi-k2-instruct-0905',
+ 'openai/gpt-oss-120b',
+ 'openai/gpt-oss-20b',
+ 'qwen/qwen3-32b',
+ 'llama-3.3-70b-versatile',
+ ],
  requiresApiKey: true,
  requiresEndpoint: false,
  defaultEndpoint: 'https://api.groq.com/openai/v1',
@@ -253,7 +276,14 @@ export const PROVIDERS: Record<string, BuiltinProviderConfig> = {
  together: {
  id: 'together',
  name: 'Together AI',
- models: ['meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8', 'deepseek-ai/DeepSeek-R1', 'Qwen/Qwen2.5-72B-Instruct'],
+ models: [
+ 'meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8',
+ 'meta-llama/Llama-4-Scout-17B-16E-Instruct',
+ 'deepseek-ai/DeepSeek-V3.2',
+ 'deepseek-ai/DeepSeek-R1',
+ 'Qwen/Qwen3-235B-A22B-Instruct',
+ 'moonshotai/Kimi-K2-Instruct-0905',
+ ],
  requiresApiKey: true,
  requiresEndpoint: false,
  defaultEndpoint: 'https://api.together.xyz/v1',
@@ -270,7 +300,16 @@ export const PROVIDERS: Record<string, BuiltinProviderConfig> = {
  mistral: {
  id: 'mistral',
  name: 'Mistral AI',
- models: ['mistral-large-latest', 'mistral-small-latest', 'magistral-medium-2506', 'devstral-small-latest'],
+ models: [
+ 'magistral-medium-1.2',
+ 'magistral-small-1.2',
+ 'devstral-medium',
+ 'devstral-small-1.1',
+ 'codestral-latest',
+ 'mistral-small-4',
+ 'mistral-large-latest',
+ 'ministral-3b-latest',
+ ],
  requiresApiKey: true,
  requiresEndpoint: false,
  defaultEndpoint: 'https://api.mistral.ai/v1',
@@ -287,7 +326,7 @@ export const PROVIDERS: Record<string, BuiltinProviderConfig> = {
  xai: {
  id: 'xai',
  name: 'xAI (Grok)',
- models: ['grok-3', 'grok-3-fast', 'grok-3-mini', 'grok-3-mini-fast'],
+ models: ['grok-4', 'grok-4-fast-reasoning', 'grok-4-fast-non-reasoning', 'grok-4-1-fast-reasoning', 'grok-4-1-fast-non-reasoning'],
  requiresApiKey: true,
  requiresEndpoint: false,
  defaultEndpoint: 'https://api.x.ai/v1',
@@ -304,7 +343,13 @@ export const PROVIDERS: Record<string, BuiltinProviderConfig> = {
  fireworks: {
  id: 'fireworks',
  name: 'Fireworks AI',
- models: ['accounts/fireworks/models/deepseek-r1-0528', 'accounts/fireworks/models/llama-v3p3-70b-instruct', 'accounts/fireworks/models/qwen3-235b-a22b'],
+ models: [
+ 'accounts/fireworks/models/deepseek-v3p2',
+ 'accounts/fireworks/models/kimi-k2-instruct-0905',
+ 'accounts/fireworks/models/glm-5',
+ 'accounts/fireworks/models/qwen3-235b-a22b',
+ 'accounts/fireworks/models/llama-v3p3-70b-instruct',
+ ],
  requiresApiKey: true,
  requiresEndpoint: false,
  defaultEndpoint: 'https://api.fireworks.ai/inference/v1',
@@ -321,7 +366,13 @@ export const PROVIDERS: Record<string, BuiltinProviderConfig> = {
  nebius: {
  id: 'nebius',
  name: 'Nebius',
- models: ['deepseek-ai/DeepSeek-R1-0528', 'Qwen/Qwen3-235B-A22B', 'meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8'],
+ models: [
+ 'deepseek-ai/DeepSeek-V3.2',
+ 'moonshotai/Kimi-K2-Instruct',
+ 'Qwen/Qwen3-235B-A22B',
+ 'meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8',
+ 'deepseek-ai/DeepSeek-R1',
+ ],
  requiresApiKey: true,
  requiresEndpoint: false,
  defaultEndpoint: 'https://api.tokenfactory.nebius.com/v1',
@@ -338,7 +389,13 @@ export const PROVIDERS: Record<string, BuiltinProviderConfig> = {
  deepinfra: {
  id: 'deepinfra',
  name: 'DeepInfra',
- models: ['deepseek-ai/DeepSeek-R1-0528', 'meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8', 'Qwen/Qwen3-235B-A22B'],
+ models: [
+ 'deepseek-ai/DeepSeek-V3.2',
+ 'moonshotai/Kimi-K2-Instruct',
+ 'Qwen/Qwen3-235B-A22B',
+ 'meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8',
+ 'deepseek-ai/DeepSeek-R1',
+ ],
  requiresApiKey: true,
  requiresEndpoint: false,
  defaultEndpoint: 'https://api.deepinfra.com/v1/openai',
@@ -364,7 +421,7 @@ export const PROVIDERS: Record<string, BuiltinProviderConfig> = {
  'gemini-3-flash-preview', 'gemma3',
  'devstral-2', 'devstral-small-2', 'ministral-3', 'mistral-large-3',
  'gpt-oss', 'cogito-2.1', 'rnj-1', 'nemotron-3-nano',
- 'llama3.3', 'llama3.2', 'llama3.1',
+ 'llama3.3', 'llama3.2',
  ],
  requiresApiKey: false,
  optionalApiKey: true,
@@ -0,0 +1,44 @@
+ import { describe, it } from 'node:test'
+ import assert from 'node:assert/strict'
+ import { PROVIDERS } from '@/lib/providers'
+
+ describe('PROVIDERS model list sanity', () => {
+ it('every provider declares a non-empty models array', () => {
+ for (const [id, entry] of Object.entries(PROVIDERS)) {
+ assert.ok(Array.isArray(entry.models), `${id}: models must be an array`)
+ assert.ok(entry.models.length > 0, `${id}: models must be non-empty`)
+ }
+ })
+
+ it('every model id is a non-empty trimmed string', () => {
+ for (const [id, entry] of Object.entries(PROVIDERS)) {
+ for (const model of entry.models) {
+ assert.equal(typeof model, 'string', `${id}: model entries must be strings`)
+ assert.ok(model.length > 0, `${id}: model id must be non-empty`)
+ assert.equal(model, model.trim(), `${id}: model id must be trimmed (got "${model}")`)
+ }
+ }
+ })
+
+ it('no duplicate model ids within a single provider', () => {
+ for (const [id, entry] of Object.entries(PROVIDERS)) {
+ const seen = new Set<string>()
+ for (const model of entry.models) {
+ assert.ok(!seen.has(model), `${id}: duplicate model id "${model}"`)
+ seen.add(model)
+ }
+ }
+ })
+
+ it('every provider declares the required metadata fields', () => {
+ for (const [id, entry] of Object.entries(PROVIDERS)) {
+ assert.equal(typeof entry.id, 'string', `${id}: id must be a string`)
+ assert.equal(entry.id, id, `${id}: id field must match registry key`)
+ assert.equal(typeof entry.name, 'string', `${id}: name must be a string`)
+ assert.ok(entry.name.length > 0, `${id}: name must be non-empty`)
+ assert.equal(typeof entry.requiresApiKey, 'boolean', `${id}: requiresApiKey must be boolean`)
+ assert.equal(typeof entry.requiresEndpoint, 'boolean', `${id}: requiresEndpoint must be boolean`)
+ assert.equal(typeof entry.handler?.streamChat, 'function', `${id}: handler.streamChat must be a function`)
+ }
+ })
+ })
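The duplicate check in the test above is a single `Set` sweep; the same idea as a standalone sketch (`findDuplicateModels` is a hypothetical helper, not part of the package):

```typescript
// Hypothetical helper: report model ids that occur more than once,
// in first-seen order, mirroring the test's Set-based sweep.
function findDuplicateModels(models: string[]): string[] {
  const seen = new Set<string>()
  const dupes: string[] = []
  for (const m of models) {
    if (seen.has(m) && !dupes.includes(m)) dupes.push(m)
    seen.add(m)
  }
  return dupes
}
```

A non-empty return is exactly the copy-paste regression the registry test exists to catch.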
@@ -361,6 +361,8 @@ export const STARTER_AGENT_TOOLS = [
  'codex_cli',
  'opencode_cli',
  'gemini_cli',
+ 'copilot_cli',
+ 'droid_cli',
  'cursor_cli',
  'qwen_code_cli',
  'openclaw_workspace',
@@ -545,6 +547,8 @@ const BUILDER_AGENT_TOOLS = [
  'codex_cli',
  'opencode_cli',
  'gemini_cli',
+ 'copilot_cli',
+ 'droid_cli',
  'cursor_cli',
  'qwen_code_cli',
  ]
@@ -761,21 +765,21 @@ export const DEFAULT_AGENTS: Record<SetupProvider, DefaultAgentConfig> = {
  name: 'OpenCode Web',
  description: 'A helpful assistant powered by a remote OpenCode HTTP server.',
  systemPrompt: SWARMCLAW_ASSISTANT_PROMPT,
- model: 'anthropic/claude-sonnet-4-5',
+ model: 'anthropic/claude-sonnet-4-6',
  tools: STARTER_AGENT_TOOLS,
  },
  'gemini-cli': {
  name: 'Gemini CLI',
  description: 'A helpful assistant powered by Gemini CLI.',
  systemPrompt: SWARMCLAW_ASSISTANT_PROMPT,
- model: 'gemini-2.5-pro',
+ model: 'gemini-3.1-pro',
  tools: STARTER_AGENT_TOOLS,
  },
  'copilot-cli': {
  name: 'Copilot CLI',
  description: 'A helpful assistant powered by GitHub Copilot CLI.',
  systemPrompt: SWARMCLAW_ASSISTANT_PROMPT,
- model: 'claude-sonnet-4-5',
+ model: 'claude-sonnet-4-6',
  tools: STARTER_AGENT_TOOLS,
  },
  'droid-cli': {
@@ -817,21 +821,21 @@ export const DEFAULT_AGENTS: Record<SetupProvider, DefaultAgentConfig> = {
  name: 'Atlas',
  description: 'A helpful GPT-powered assistant.',
  systemPrompt: SWARMCLAW_ASSISTANT_PROMPT,
- model: 'gpt-4o',
+ model: 'gpt-5.4',
  tools: STARTER_AGENT_TOOLS,
  },
  openrouter: {
  name: 'Router',
  description: 'A helpful assistant powered through OpenRouter.',
  systemPrompt: SWARMCLAW_ASSISTANT_PROMPT,
- model: 'openai/gpt-4.1-mini',
+ model: 'anthropic/claude-sonnet-4.6',
  tools: STARTER_AGENT_TOOLS,
  },
  google: {
  name: 'Gemini',
  description: 'A helpful Gemini-powered assistant.',
  systemPrompt: SWARMCLAW_ASSISTANT_PROMPT,
- model: 'gemini-2.5-pro',
+ model: 'gemini-3.1-pro',
  tools: STARTER_AGENT_TOOLS,
  },
  deepseek: {
@@ -845,7 +849,7 @@ export const DEFAULT_AGENTS: Record<SetupProvider, DefaultAgentConfig> = {
  name: 'Bolt',
  description: 'A low-latency assistant powered by Groq.',
  systemPrompt: SWARMCLAW_ASSISTANT_PROMPT,
- model: 'llama-3.3-70b-versatile',
+ model: 'meta-llama/llama-4-maverick-17b-128e-instruct',
  tools: STARTER_AGENT_TOOLS,
  },
  together: {
@@ -866,28 +870,28 @@ export const DEFAULT_AGENTS: Record<SetupProvider, DefaultAgentConfig> = {
  name: 'Grok',
  description: 'A helpful assistant powered by xAI Grok.',
  systemPrompt: SWARMCLAW_ASSISTANT_PROMPT,
- model: 'grok-3',
+ model: 'grok-4',
  tools: STARTER_AGENT_TOOLS,
  },
  fireworks: {
  name: 'Spark',
  description: 'A helpful assistant powered by Fireworks AI.',
  systemPrompt: SWARMCLAW_ASSISTANT_PROMPT,
- model: 'accounts/fireworks/models/deepseek-r1-0528',
+ model: 'accounts/fireworks/models/deepseek-v3p2',
  tools: STARTER_AGENT_TOOLS,
  },
  nebius: {
  name: 'Nebius Agent',
  description: 'A helpful assistant powered by Nebius.',
  systemPrompt: SWARMCLAW_ASSISTANT_PROMPT,
- model: 'deepseek-ai/DeepSeek-R1-0528',
+ model: 'deepseek-ai/DeepSeek-V3.2',
  tools: STARTER_AGENT_TOOLS,
  },
  deepinfra: {
  name: 'DeepInfra Agent',
  description: 'A helpful assistant powered by DeepInfra.',
  systemPrompt: SWARMCLAW_ASSISTANT_PROMPT,
- model: 'deepseek-ai/DeepSeek-R1-0528',
+ model: 'deepseek-ai/DeepSeek-V3.2',
  tools: STARTER_AGENT_TOOLS,
  },
  ollama: {