agent-worker 0.1.0 → 0.3.0

package/README.md ADDED
@@ -0,0 +1,448 @@
1
+ # agent-worker
2
+
3
+ CLI and SDK for creating and managing AI agent sessions with multiple backends.
4
+
5
+ ## Installation
6
+
7
+ ```bash
8
+ npm install -g agent-worker
9
+ # or
10
+ bun add -g agent-worker
11
+ ```
12
+
13
+ ## CLI Usage
14
+
15
+ ### Session Management
16
+
17
+ ```bash
18
+ # Create a session (SDK backend, default)
19
+ agent-worker session new -m anthropic/claude-sonnet-4-5
20
+
21
+ # Create with system prompt
22
+ agent-worker session new -s "You are a code reviewer."
23
+
24
+ # Create with system prompt from file
25
+ agent-worker session new -f ./prompts/reviewer.txt
26
+
27
+ # Create named session
28
+ agent-worker session new -n my-session
29
+
30
+ # Create with Claude CLI backend
31
+ agent-worker session new -b claude
32
+
33
+ # List all sessions
34
+ agent-worker session list
35
+
36
+ # Switch default session
37
+ agent-worker session use my-session
38
+
39
+ # Check session status
40
+ agent-worker session status
41
+
42
+ # End session
43
+ agent-worker session end
44
+
45
+ # End specific session
46
+ agent-worker session end my-session
47
+
48
+ # End all sessions
49
+ agent-worker session end --all
50
+ ```
51
+
52
+ ### Sending Messages
53
+
54
+ ```bash
55
+ # Send to current session (async by default - returns immediately)
56
+ agent-worker send "Analyze this codebase"
57
+
58
+ # Send to specific session
59
+ agent-worker send "Explain recursion" --to my-session
60
+
61
+ # Send and wait for response (synchronous mode)
62
+ agent-worker send "What is 2+2?" --wait
63
+
64
+ # Send with debug logging (useful for troubleshooting)
65
+ agent-worker send "Debug this" --debug
66
+
67
+ # View conversation messages
68
+ agent-worker peek # Show last 10 messages (default)
69
+ agent-worker peek --last 5 # Show last 5 messages
70
+ agent-worker peek --all # Show all messages
71
+ agent-worker peek --find "error" # Search messages containing "error"
72
+
73
+ # View token usage
74
+ agent-worker stats
75
+
76
+ # Export transcript
77
+ agent-worker export > transcript.json
78
+
79
+ # Clear messages (keep session)
80
+ agent-worker clear
81
+ ```
82
+
83
+ **Understanding Message Flow:**
84
+
85
+ The CLI supports two modes for sending messages:
86
+
87
+ 1. **Asynchronous (default)**: The command returns immediately after sending. The agent processes in the background. Use `peek` to view the response.
88
+ ```bash
89
+ # Send message (returns immediately)
90
+ agent-worker send "Analyze this large codebase"
91
+ # Output: "Message sent. Use 'peek' command to view response."
92
+
93
+ # View response later
94
+ agent-worker peek --last 2
95
+ ```
96
+
97
+ 2. **Synchronous (`--wait`)**: The command waits for the agent to fully process the message and return a response. This is best for quick questions when you need immediate results.
98
+ ```bash
99
+ agent-worker send "What is 2+2?" --wait
100
+ # Waits for response, then prints: 4
101
+ ```
102
+
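Conceptually, the only difference between the two modes is whether the caller awaits the background work. The sketch below models this with a hypothetical `SketchSession` class (an illustration, not the package's internals): `send` enqueues and returns immediately, `sendAndWait` resolves once the reply has landed, and `peek` reads the transcript.

```typescript
// Illustration of async vs. --wait semantics. All names here are hypothetical.
type Message = { role: 'user' | 'assistant'; content: string }

class SketchSession {
  private messages: Message[] = []
  private queue: Promise<void> = Promise.resolve()

  // Stand-in for the backend call: replies after a short delay.
  private async process(prompt: string): Promise<string> {
    await new Promise((r) => setTimeout(r, 10))
    return `reply to: ${prompt}`
  }

  // Async mode (default): enqueue and return immediately; use peek() later.
  send(prompt: string): void {
    this.messages.push({ role: 'user', content: prompt })
    this.queue = this.queue.then(async () => {
      const reply = await this.process(prompt)
      this.messages.push({ role: 'assistant', content: reply })
    })
  }

  // Sync mode (--wait): resolve once the reply has landed.
  async sendAndWait(prompt: string): Promise<string> {
    this.send(prompt)
    await this.queue
    return this.messages[this.messages.length - 1].content
  }

  // peek --last N
  peek(last = 10): Message[] {
    return this.messages.slice(-last)
  }
}
```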
103
+ **Troubleshooting:**
104
+
105
+ If `send` appears stuck or times out, use `--debug` to see what's happening:
106
+
107
+ ```bash
108
+ agent-worker send "test message" --debug
109
+ ```
110
+
111
+ Debug output shows:
112
+ - Session lookup and connection status
113
+ - Socket communication
114
+ - Request/response timing
115
+ - Error details
116
+
117
+ **Backend Limitations:**
118
+
119
+ The CLI supports two backend types:
120
+
121
+ 1. **SDK Backend (default)**: Full-featured, works in all environments. Requires `ANTHROPIC_API_KEY` environment variable.
122
+
123
+ 2. **Claude CLI Backend (`-b claude`)**: Uses `claude -p` for non-interactive mode.
124
+ - **Known limitation**: May not work properly within the Claude Code environment itself due to command interception
125
+ - **Timeout**: Async requests time out after 60 seconds to prevent indefinite hangs
126
+ - **Recommended use**: Normal terminal environments outside Claude Code
127
+ - **For testing**: Use SDK backend instead when developing within Claude Code
128
+
129
+ If you see messages stuck in `(processing...)` state for more than 60 seconds, it indicates a backend issue. The message will automatically update to show a timeout error.
130
+
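A guard like the 60-second timeout described above can be built with `Promise.race`. This is a sketch of the pattern, not the package's actual code; the `withTimeout` helper name is an assumption.

```typescript
// Reject if `work` does not settle within `ms` milliseconds (sketch only).
async function withTimeout<T>(work: Promise<T>, ms: number): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error(`timed out after ${ms}ms`)), ms)
  })
  try {
    // Whichever settles first wins; the loser is ignored.
    return await Promise.race([work, timeout])
  } finally {
    if (timer) clearTimeout(timer) // always clean up the pending timer
  }
}
```

Wrapping the backend request this way is what turns a message stuck in `(processing...)` into a visible timeout error instead of an indefinite hang.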
131
+ ### Tool Management (SDK Backend Only)
132
+
133
+ ```bash
134
+ # Add a tool
135
+ agent-worker tool add get_weather \
136
+ -d "Get weather for a location" \
137
+ -p "location:string:City name"
138
+
139
+ # Add tool requiring approval
140
+ agent-worker tool add delete_file \
141
+ -d "Delete a file" \
142
+ -p "path:string:File path" \
143
+ --needs-approval
144
+
145
+ # Import tools from file
146
+ agent-worker tool import ./my-tools.ts
147
+
148
+ # Mock tool response (for testing)
149
+ agent-worker tool mock get_weather '{"temp": 72, "condition": "sunny"}'
150
+
151
+ # List registered tools
152
+ agent-worker tool list
153
+ ```
154
+
155
+ ### Approval Workflow
156
+
157
+ ```bash
158
+ # Check pending approvals
159
+ agent-worker pending
160
+
161
+ # Approve a tool call
162
+ agent-worker approve <approval-id>
163
+
164
+ # Deny with reason
165
+ agent-worker deny <approval-id> -r "Path not allowed"
166
+ ```
167
+
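Under the hood, an approval workflow like this amounts to a queue of pending tool calls keyed by id. The sketch below illustrates that shape; the class name and types are assumptions, not the package's API.

```typescript
// Minimal pending/approve/deny bookkeeping (illustration only).
type PendingCall = { id: string; tool: string; args: unknown }
type Decision = { approved: boolean; reason?: string }

class ApprovalQueue {
  private pending = new Map<string, PendingCall>()
  private decisions = new Map<string, Decision>()

  // A tool marked --needs-approval lands here instead of executing.
  request(call: PendingCall): void {
    this.pending.set(call.id, call)
  }

  // `pending` command: show what is waiting.
  list(): PendingCall[] {
    return [...this.pending.values()]
  }

  approve(id: string): void {
    if (!this.pending.delete(id)) throw new Error(`unknown approval id: ${id}`)
    this.decisions.set(id, { approved: true })
  }

  deny(id: string, reason: string): void {
    if (!this.pending.delete(id)) throw new Error(`unknown approval id: ${id}`)
    this.decisions.set(id, { approved: false, reason })
  }

  decisionFor(id: string): Decision | undefined {
    return this.decisions.get(id)
  }
}
```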
168
+ ### Agent Skills
169
+
170
+ Agent skills provide reusable instructions and methodologies that agents can access on demand. Compatible with the [Agent Skills](https://agentskills.io) ecosystem.
171
+
172
+ ```bash
173
+ # Load skills from default directories (.agents/skills, .claude/skills, ~/.agents/skills)
174
+ agent-worker session new
175
+
176
+ # Add a specific skill directory
177
+ agent-worker session new --skill ./my-skills/custom-skill
178
+
179
+ # Scan additional directories for skills
180
+ agent-worker session new --skill-dir ./team-skills --skill-dir ~/shared-skills
181
+
182
+ # Combine multiple options
183
+ agent-worker session new \
184
+ --skill ./my-skills/dive \
185
+ --skill-dir ~/company-skills
186
+
187
+ # Import skills from Git repositories (temporary, session-scoped)
188
+ agent-worker session new --import-skill vercel-labs/agent-skills:dive
189
+ agent-worker session new --import-skill lidessen/skills:{memory,orientation}
190
+ agent-worker session new --import-skill gitlab:myorg/skills@v1.0.0:custom
191
+ ```
192
+
193
+ **Import Skill Spec Format:**
194
+
195
+ The `--import-skill` option supports temporary skill imports from Git repositories. Skills are cloned to a session-specific temp directory and cleaned up when the session ends.
196
+
197
+ ```
198
+ [provider:]owner/repo[@ref]:{skill1,skill2,...}
199
+ ```
200
+
201
+ Examples:
202
+ - `vercel-labs/agent-skills` - Import all skills from GitHub main branch
203
+ - `vercel-labs/agent-skills:dive` - Import single skill
204
+ - `vercel-labs/agent-skills:{dive,memory}` - Import multiple skills (brace expansion)
205
+ - `vercel-labs/agent-skills@v1.0.0:dive` - Import from specific tag/branch
206
+ - `gitlab:myorg/skills:custom` - Import from GitLab
207
+ - `gitee:org/repo@dev:{a,b}` - Import from Gitee dev branch
208
+
209
+ Supported providers: `github` (default), `gitlab`, `gitee`
210
+
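The grammar above can be parsed in three steps: strip an optional provider prefix, split off an optional skill list (single name or brace expansion), then take `owner/repo` with an optional `@ref`. A sketch of that logic follows; the actual parser in the package may differ.

```typescript
// Parse "[provider:]owner/repo[@ref]:{skill1,skill2,...}" (sketch only).
const PROVIDERS = ['github', 'gitlab', 'gitee'] as const
type Provider = (typeof PROVIDERS)[number]

interface SkillSpec {
  provider: Provider
  owner: string
  repo: string
  ref?: string
  skills: string[] // empty means "import all skills"
}

function parseSkillSpec(spec: string): SkillSpec {
  let rest = spec
  let provider: Provider = 'github'

  // Optional provider prefix, e.g. "gitlab:owner/repo..."
  const head = rest.split(':', 1)[0]
  if ((PROVIDERS as readonly string[]).includes(head)) {
    provider = head as Provider
    rest = rest.slice(head.length + 1)
  }

  // Optional ":skills" suffix: single name or "{a,b,...}" brace list
  let skills: string[] = []
  const colon = rest.indexOf(':')
  if (colon !== -1) {
    const tail = rest.slice(colon + 1)
    skills = tail.startsWith('{') && tail.endsWith('}')
      ? tail.slice(1, -1).split(',').map((s) => s.trim())
      : [tail]
    rest = rest.slice(0, colon)
  }

  // "owner/repo" with optional "@ref"
  const [repoPart, ref] = rest.split('@')
  const [owner, repo] = repoPart.split('/')
  if (!owner || !repo) throw new Error(`Invalid skill spec: ${spec}`)
  return { provider, owner, repo, ref, skills }
}
```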
211
+ **Skills Support by Backend:**
212
+
213
+ | Backend | Skills Support | How It Works | `--import-skill` |
214
+ |---------|----------------|--------------|------------------|
215
+ | **SDK** (default) | ✅ Full | Skills loaded as a tool that agents can call | ✅ Supported |
216
+ | **Claude CLI** | ✅ Full | Loads from `.claude/skills/` and `~/.claude/skills/` | ⚠️ Manual install required |
217
+ | **Codex CLI** | ✅ Full | Loads from `.agents/skills/`, `~/.codex/skills/`, `~/.agents/skills/` | ⚠️ Manual install required |
218
+ | **Cursor CLI** | ✅ Full | Loads from `.agents/skills/` and `~/.cursor/skills/` | ⚠️ Manual install required |
219
+
220
+ **Notes:**
221
+ - **SDK Backend**: Skills work through the Skills tool, allowing dynamic file reading. `--import-skill` is fully supported.
222
+ - **CLI Backends** (claude, codex, cursor): Skills are loaded from filesystem locations by the CLI tool itself. To use `--import-skill` with these backends, install skills manually using `npx skills add <repo> --global`.
223
+ - If you specify `--import-skill` with a CLI backend, agent-worker will show a warning and suggest using SDK backend or manual installation.
224
+
225
+ **Default Skill Directories:**
226
+ - `.agents/skills/` - Project-level skills (all backends)
227
+ - `.claude/skills/` - Claude Code project skills
228
+ - `.cursor/skills/` - Cursor project skills
229
+ - `~/.agents/skills/` - User-level global skills (all backends)
230
+ - `~/.claude/skills/` - User-level Claude skills
231
+ - `~/.codex/skills/` - User-level Codex skills
232
+
233
+ **Using Skills in Sessions:**
234
+
235
+ Once loaded, agents can interact with skills via the `Skills` tool:
236
+
237
+ ```typescript
238
+ // List available skills
239
+ Skills({ operation: 'list' })
240
+
241
+ // View a skill's complete instructions
242
+ Skills({ operation: 'view', skillName: 'dive' })
243
+
244
+ // Read skill reference files
245
+ Skills({
246
+ operation: 'readFile',
247
+ skillName: 'dive',
248
+ filePath: 'references/search-strategies.md'
249
+ })
250
+ ```
251
+
252
+ **Installing Skills:**
253
+
254
+ Use the [skills CLI](https://github.com/vercel-labs/skills) to install skills:
255
+
256
+ ```bash
257
+ # Install from GitHub
258
+ npx skills add vercel-labs/agent-skills
259
+
260
+ # Or use the official skills tool
261
+ npm install -g @agentskills/cli
262
+ skills add vercel-labs/agent-skills
263
+ ```
264
+
265
+ ### Backends
266
+
267
+ ```bash
268
+ # Check available backends
269
+ agent-worker backends
270
+
271
+ # Check SDK providers
272
+ agent-worker providers
273
+ ```
274
+
275
+ | Backend | Command | Best For |
276
+ |---------|---------|----------|
277
+ | SDK (default) | `session new -m provider/model` | Full control, tool injection, mocking |
278
+ | Claude CLI | `session new -b claude` | Use existing Claude installation |
279
+ | Codex | `session new -b codex` | OpenAI Codex workflows |
280
+ | Cursor | `session new -b cursor` | Cursor Agent integration |
281
+
282
+ > **⚠️ Important:** Different backends have different capabilities. CLI backends (claude, codex, cursor) don't support:
283
+ > - Dynamic tool management (`tool_add`, `tool_mock`, `tool_import`)
284
+ > - Approval system (`approve`, `deny`)
285
+ > - `--import-skill` (use `npx skills add --global` instead)
286
+ >
287
+ > See [BACKEND_COMPATIBILITY.md](./BACKEND_COMPATIBILITY.md) for a complete feature comparison.
288
+
289
+ ### Model Formats (SDK Backend)
290
+
291
+ ```bash
292
+ # Gateway format (recommended)
293
+ agent-worker session new -m anthropic/claude-sonnet-4-5
294
+ agent-worker session new -m openai/gpt-5.2
295
+
296
+ # Provider-only (uses frontier model)
297
+ agent-worker session new -m anthropic
298
+ agent-worker session new -m openai
299
+
300
+ # Direct provider format
301
+ agent-worker session new -m deepseek:deepseek-chat
302
+ ```
303
+
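The resolution rules for the three formats can be sketched as follows. The default models shown are examples taken from this README, not a guaranteed list, and the function is an illustration rather than the package's implementation.

```typescript
// Resolve a model identifier into gateway vs. direct-provider access (sketch).
type Resolved =
  | { kind: 'gateway'; id: string }                       // provider/model via AI Gateway
  | { kind: 'direct'; provider: string; model: string }   // provider:model

const DEFAULTS: Record<string, string> = {
  anthropic: 'claude-sonnet-4-5',
  openai: 'gpt-5.2',
}

function resolveModel(modelId: string): Resolved {
  // 1. Gateway format: provider/model-name
  if (modelId.includes('/')) return { kind: 'gateway', id: modelId }

  // 2. Provider-only: expand to the provider's frontier model via the gateway
  if (!modelId.includes(':')) {
    const model = DEFAULTS[modelId]
    if (!model) throw new Error(`Unknown provider: ${modelId}`)
    return { kind: 'gateway', id: `${modelId}/${model}` }
  }

  // 3. Direct provider format: provider:model-name
  const i = modelId.indexOf(':')
  const provider = modelId.slice(0, i)
  const model = modelId.slice(i + 1)
  if (!model) throw new Error(`Invalid model identifier: ${modelId}`)
  return { kind: 'direct', provider, model }
}
```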
304
+ ## SDK Usage
305
+
306
+ ### Basic Session
307
+
308
+ ```typescript
309
+ import { AgentSession } from 'agent-worker'
310
+
311
+ const session = new AgentSession({
312
+ model: 'anthropic/claude-sonnet-4-5',
313
+ system: 'You are a helpful assistant.',
314
+ tools: [/* your tools */]
315
+ })
316
+
317
+ // Send message
318
+ const response = await session.send('Hello')
319
+ console.log(response.content)
320
+ console.log(response.toolCalls)
321
+ console.log(response.usage)
322
+
323
+ // Stream response
324
+ for await (const chunk of session.sendStream('Tell me a story')) {
325
+ process.stdout.write(chunk)
326
+ }
327
+
328
+ // Get state for persistence
329
+ const state = session.getState()
330
+ ```
331
+
332
+ ### With Skills
333
+
334
+ ```typescript
335
+ import {
336
+ AgentSession,
337
+ SkillsProvider,
338
+ createSkillsTool
339
+ } from 'agent-worker'
340
+
341
+ // Setup skills
342
+ const skillsProvider = new SkillsProvider()
343
+ await skillsProvider.scanDirectory('.agents/skills')
344
+ await skillsProvider.scanDirectory('~/my-skills')
345
+
346
+ // Or add individual skills
347
+ await skillsProvider.addSkill('./custom-skills/my-skill')
348
+
349
+ // Create session with Skills tool
350
+ const session = new AgentSession({
351
+ model: 'anthropic/claude-sonnet-4-5',
352
+ system: 'You are a helpful assistant.',
353
+ tools: [
354
+ createSkillsTool(skillsProvider),
355
+ // ... other tools
356
+ ]
357
+ })
358
+
359
+ // Agent can now access skills
360
+ const response = await session.send(
361
+ 'What skills are available? Use the dive skill to analyze this codebase.'
362
+ )
363
+ ```
364
+
365
+ ### With Imported Skills
366
+
367
+ ```typescript
368
+ import {
369
+ AgentSession,
370
+ SkillsProvider,
371
+ SkillImporter,
372
+ createSkillsTool
373
+ } from 'agent-worker'
374
+
375
+ // Setup skills provider
376
+ const skillsProvider = new SkillsProvider()
377
+
378
+ // Import skills from Git repositories
379
+ const sessionId = 'my-session-123'
380
+ const importer = new SkillImporter(sessionId)
381
+
382
+ // Import from GitHub
383
+ await importer.import('vercel-labs/agent-skills:dive')
384
+ await importer.import('lidessen/skills:{memory,orientation}')
385
+
386
+ // Or import multiple specs at once
387
+ await importer.importMultiple([
388
+ 'vercel-labs/agent-skills:{dive,react}',
389
+ 'gitlab:myorg/skills@v1.0.0:custom'
390
+ ])
391
+
392
+ // Add imported skills to provider
393
+ await skillsProvider.addImportedSkills(importer)
394
+
395
+ // Create session
396
+ const session = new AgentSession({
397
+ model: 'anthropic/claude-sonnet-4-5',
398
+ system: 'You are a helpful assistant.',
399
+ tools: [createSkillsTool(skillsProvider)]
400
+ })
401
+
402
+ // Don't forget cleanup when done
403
+ process.on('exit', async () => {
404
+ await importer.cleanup()
405
+ })
406
+ ```
407
+
408
+ ## Common Patterns
409
+
410
+ ### Prompt Testing
411
+
412
+ ```bash
413
+ agent-worker session new -f ./my-prompt.txt -n test
414
+ agent-worker send "Test case 1: ..." --to test
415
+ agent-worker send "Test case 2: ..." --to test
416
+ agent-worker peek --to test
417
+ agent-worker session end test
418
+ ```
419
+
420
+ ### Tool Development with Mocks
421
+
422
+ ```bash
423
+ agent-worker session new -n dev
424
+ agent-worker tool add my_api -d "Call my API" -p "endpoint:string"
425
+ agent-worker tool mock my_api '{"status": "ok"}'
426
+ agent-worker send "Call my API at /users"
427
+ # Update mock, test error handling
428
+ agent-worker tool mock my_api '{"status": "error", "code": 500}'
429
+ agent-worker send "Call my API at /users"
430
+ ```
431
+
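What `tool mock` provides is essentially a registry that short-circuits a tool call to a canned JSON payload. A sketch of that idea (illustrative, not the package's implementation):

```typescript
// Tool registry where a mock, once set, takes priority over the real handler.
type ToolHandler = (args: unknown) => unknown

class ToolRegistry {
  private handlers = new Map<string, ToolHandler>()
  private mocks = new Map<string, unknown>()

  // tool add <name> ...
  add(name: string, handler: ToolHandler): void {
    this.handlers.set(name, handler)
  }

  // tool mock <name> '<json>': later calls return this payload verbatim
  mock(name: string, json: string): void {
    this.mocks.set(name, JSON.parse(json))
  }

  call(name: string, args: unknown): unknown {
    if (this.mocks.has(name)) return this.mocks.get(name)
    const handler = this.handlers.get(name)
    if (!handler) throw new Error(`Unknown tool: ${name}`)
    return handler(args)
  }
}
```

Re-running `tool mock` with a new payload, as in the error-handling step above, simply overwrites the stored entry.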
432
+ ### Multi-Model Comparison
433
+
434
+ ```bash
435
+ agent-worker session new -m anthropic/claude-sonnet-4-5 -n claude
436
+ agent-worker session new -m openai/gpt-5.2 -n gpt
437
+ agent-worker send "Explain recursion" --to claude
438
+ agent-worker send "Explain recursion" --to gpt
439
+ ```
440
+
441
+ ## Requirements
442
+
443
+ - Node.js 18+ or Bun
444
+ - API key for chosen provider (e.g., `ANTHROPIC_API_KEY`, `OPENAI_API_KEY`)
445
+
446
+ ## License
447
+
448
+ MIT
@@ -1,7 +1,185 @@
1
- import { i as createModelAsync, r as createModel } from "./models-FOOpWB91.mjs";
2
- import { generateText } from "ai";
1
+ import { gateway, generateText } from "ai";
3
2
  import { spawn } from "node:child_process";
4
3
 
4
+ //#region src/models.ts
5
+ const providerCache = {};
6
+ /**
7
+ * Lazy load a provider, caching the result
8
+ * Supports custom baseURL and apiKey for providers using compatible APIs (e.g., MiniMax using Claude API)
9
+ */
10
+ async function loadProvider(name, packageName, exportName, options) {
11
+ if (name in providerCache) return providerCache[name];
12
+ try {
13
+ const module = await import(packageName);
14
+ if (options?.baseURL || options?.apiKeyEnvVar) {
15
+ const createProvider = module[`create${exportName.charAt(0).toUpperCase() + exportName.slice(1)}`];
16
+ if (createProvider) {
17
+ const providerOptions = {};
18
+ if (options.baseURL) providerOptions.baseURL = options.baseURL;
19
+ if (options.apiKeyEnvVar) providerOptions.apiKey = process.env[options.apiKeyEnvVar];
20
+ providerCache[name] = createProvider(providerOptions);
21
+ return providerCache[name];
22
+ }
23
+ }
24
+ providerCache[name] = module[exportName];
25
+ return providerCache[name];
26
+ } catch {
27
+ providerCache[name] = null;
28
+ return null;
29
+ }
30
+ }
31
+ /**
32
+ * Parse model identifier and return the appropriate provider model
33
+ *
34
+ * Supports three formats:
35
+ *
36
+ * 1. Provider-only format: provider
37
+ * Uses first model from FRONTIER_MODELS via gateway
38
+ * Examples: anthropic → anthropic/claude-sonnet-4-5, openai → openai/gpt-5.2
39
+ *
40
+ * 2. Gateway format: provider/model-name
41
+ * Uses Vercel AI Gateway (requires AI_GATEWAY_API_KEY)
42
+ * Examples: anthropic/claude-sonnet-4-5, openai/gpt-5.2, deepseek/deepseek-chat
43
+ *
44
+ * 3. Direct provider format: provider:model-name
45
+ * Requires installing the specific @ai-sdk/provider package
46
+ * Examples: anthropic:claude-sonnet-4-5, openai:gpt-5.2, deepseek:deepseek-chat
47
+ */
48
+ function createModel(modelId) {
49
+ if (modelId.includes("/")) return gateway(modelId);
50
+ if (!modelId.includes(":")) {
51
+ const provider = modelId;
52
+ if (provider in FRONTIER_MODELS) {
53
+ const defaultModel = FRONTIER_MODELS[provider][0];
54
+ return gateway(`${provider}/${defaultModel}`);
55
+ }
56
+ throw new Error(`Unknown provider: ${modelId}. Supported: ${Object.keys(FRONTIER_MODELS).join(", ")}`);
57
+ }
58
+ const colonIndex = modelId.indexOf(":");
59
+ const provider = modelId.slice(0, colonIndex);
60
+ const modelName = modelId.slice(colonIndex + 1);
61
+ if (!modelName) throw new Error(`Invalid model identifier: ${modelId}. Model name is required.`);
62
+ if (provider in providerCache && providerCache[provider]) return providerCache[provider](modelName);
63
+ throw new Error(`Provider '${provider}' not loaded. Use gateway format (${provider}/${modelName}) or call createModelAsync() for direct provider access.`);
64
+ }
65
+ /**
66
+ * Async version of createModel - supports lazy loading of direct providers
67
+ * Use this when you need direct provider access (provider:model format)
68
+ */
69
+ async function createModelAsync(modelId) {
70
+ if (modelId.includes("/")) return gateway(modelId);
71
+ if (!modelId.includes(":")) {
72
+ const provider = modelId;
73
+ if (provider in FRONTIER_MODELS) {
74
+ const defaultModel = FRONTIER_MODELS[provider][0];
75
+ return gateway(`${provider}/${defaultModel}`);
76
+ }
77
+ throw new Error(`Unknown provider: ${modelId}. Supported: ${Object.keys(FRONTIER_MODELS).join(", ")}`);
78
+ }
79
+ const colonIndex = modelId.indexOf(":");
80
+ const provider = modelId.slice(0, colonIndex);
81
+ const modelName = modelId.slice(colonIndex + 1);
82
+ if (!modelName) throw new Error(`Invalid model identifier: ${modelId}. Model name is required.`);
83
+ const providerConfigs = {
84
+ anthropic: {
85
+ package: "@ai-sdk/anthropic",
86
+ export: "anthropic"
87
+ },
88
+ openai: {
89
+ package: "@ai-sdk/openai",
90
+ export: "openai"
91
+ },
92
+ deepseek: {
93
+ package: "@ai-sdk/deepseek",
94
+ export: "deepseek"
95
+ },
96
+ google: {
97
+ package: "@ai-sdk/google",
98
+ export: "google"
99
+ },
100
+ groq: {
101
+ package: "@ai-sdk/groq",
102
+ export: "groq"
103
+ },
104
+ mistral: {
105
+ package: "@ai-sdk/mistral",
106
+ export: "mistral"
107
+ },
108
+ xai: {
109
+ package: "@ai-sdk/xai",
110
+ export: "xai"
111
+ },
112
+ minimax: {
113
+ package: "@ai-sdk/anthropic",
114
+ export: "anthropic",
115
+ options: {
116
+ baseURL: "https://api.minimax.chat/v1",
117
+ apiKeyEnvVar: "MINIMAX_API_KEY"
118
+ }
119
+ }
120
+ };
121
+ const config = providerConfigs[provider];
122
+ if (!config) throw new Error(`Unknown provider: ${provider}. Supported: ${Object.keys(providerConfigs).join(", ")}. Or use gateway format: provider/model (e.g., openai/gpt-5.2)`);
123
+ const providerFn = await loadProvider(provider, config.package, config.export, config.options);
124
+ if (!providerFn) throw new Error(`Install ${config.package} to use ${provider} models directly`);
125
+ return providerFn(modelName);
126
+ }
127
+ /**
128
+ * List of supported providers for direct access
129
+ * Note: minimax uses Claude-compatible API via @ai-sdk/anthropic with custom baseURL
130
+ */
131
+ const SUPPORTED_PROVIDERS = [
132
+ "anthropic",
133
+ "openai",
134
+ "deepseek",
135
+ "google",
136
+ "groq",
137
+ "mistral",
138
+ "xai",
139
+ "minimax"
140
+ ];
141
+ /**
142
+ * Default provider when none specified
143
+ */
144
+ const DEFAULT_PROVIDER = "anthropic";
145
+ /**
146
+ * Get the default model identifier (provider/model format)
147
+ * Uses the first model from the default provider
148
+ */
149
+ function getDefaultModel() {
150
+ return `${DEFAULT_PROVIDER}/${FRONTIER_MODELS[DEFAULT_PROVIDER][0]}`;
151
+ }
152
+ /**
153
+ * Frontier models for each provider (as of 2026-02)
154
+ * Only includes the latest/best models, no legacy versions
155
+ *
156
+ * Note: Some models may be placeholders for testing or future releases.
157
+ * Always verify model availability with the provider before production use.
158
+ */
159
+ const FRONTIER_MODELS = {
160
+ anthropic: [
161
+ "claude-sonnet-4-5",
162
+ "claude-haiku-4-5",
163
+ "claude-opus-4-5"
164
+ ],
165
+ openai: ["gpt-5.2", "gpt-5.2-codex"],
166
+ google: [
167
+ "gemini-3-pro-preview",
168
+ "gemini-2.5-flash",
169
+ "gemini-2.5-pro"
170
+ ],
171
+ deepseek: ["deepseek-chat", "deepseek-reasoner"],
172
+ groq: ["meta-llama/llama-4-scout-17b-16e-instruct", "deepseek-r1-distill-llama-70b"],
173
+ mistral: [
174
+ "mistral-large-latest",
175
+ "pixtral-large-latest",
176
+ "magistral-medium-2506"
177
+ ],
178
+ xai: ["grok-4", "grok-4-fast-reasoning"],
179
+ minimax: ["MiniMax-M2"]
180
+ };
181
+
182
+ //#endregion
5
183
  //#region src/backends/claude-cli.ts
6
184
  /**
7
185
  * Claude Code CLI backend
@@ -20,7 +198,12 @@ var ClaudeCliBackend = class {
20
198
  return new Promise((resolve, reject) => {
21
199
  const proc = spawn("claude", args, {
22
200
  cwd: this.options.cwd,
23
- env: process.env
201
+ env: process.env,
202
+ stdio: [
203
+ "ignore",
204
+ "pipe",
205
+ "pipe"
206
+ ]
24
207
  });
25
208
  let stdout = "";
26
209
  let stderr = "";
@@ -378,4 +561,4 @@ async function listBackends() {
378
561
  }
379
562
 
380
563
  //#endregion
381
- export { CursorCliBackend as a, SdkBackend as i, createBackend as n, CodexCliBackend as o, listBackends as r, ClaudeCliBackend as s, checkBackends as t };
564
+ export { CursorCliBackend as a, FRONTIER_MODELS as c, createModelAsync as d, getDefaultModel as f, SdkBackend as i, SUPPORTED_PROVIDERS as l, createBackend as n, CodexCliBackend as o, listBackends as r, ClaudeCliBackend as s, checkBackends as t, createModel as u };
@@ -1,4 +1,3 @@
1
- import "./models-FOOpWB91.mjs";
2
- import { a as CursorCliBackend, i as SdkBackend, n as createBackend, o as CodexCliBackend, r as listBackends, s as ClaudeCliBackend, t as checkBackends } from "./backends-BklSbwcH.mjs";
1
+ import { a as CursorCliBackend, i as SdkBackend, n as createBackend, o as CodexCliBackend, r as listBackends, s as ClaudeCliBackend, t as checkBackends } from "./backends-BZ866Ij9.mjs";
3
2
 
4
3
  export { listBackends };