@easynet/agent-llm 1.0.2 → 1.0.3

package/README.md CHANGED
@@ -1,101 +1,62 @@
- # Agent LLM
+ # @easynet/agent-llm

- **LLM component for the Agent ecosystem** (standalone: no dependency on agent-tool or agent-memory). Supports **OpenAI-compatible** format (`/v1/chat/completions`) and **extension providers** loaded by npm package name. Multi-instance: each LLM has an **id** and **type** (chat / image). Supports **baseURL** for Azure, Ollama, Groq, and other /v1-compatible endpoints. Has its own **llm** section in agent.yaml.
+ Load an LLM from **llm.yaml** (or your config) and use it from the command line or in a LangChain agent. Supports OpenAI and OpenAI-compatible endpoints (Ollama, Groq, Azure, etc.).

- ## Install
+ ## Command line
+
+ Install and run with a question:

  ```bash
- cd agent-llm
- npm install
+ npx @easynet/agent-llm "hi"
+ npx @easynet/agent-llm "What is 10 + 20?"
  ```
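Per the usage notes in `dist/cli.d.ts` at the end of this diff, the CLI also accepts a `--config` flag pointing at a specific YAML file:

```bash
npx @easynet/agent-llm --config ./config/llm.yaml "hi"
```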

- ## Config (agent.yaml llm or models section)
-
- **provider** supports `openai`, `openai-compatible`, and providers registered by extensions (see **llm.type**). Optional **base_url** / **baseURL** for Groq, Ollama, and other compatible endpoints.
+ Config is read from **llm.yaml** or **config/llm.yaml** in the current directory (or parent). See [config/llm.yaml.example](config/llm.yaml.example). Placeholders like `${OPENAI_API_KEY}` are replaced from the environment.
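A minimal sketch of inspecting what the loader resolves, assuming `loadLlmConfig` is re-exported from the package root the way `createAgentLlM` is (it is exported by the bundled chunk later in this diff):

```ts
import { loadLlmConfig } from "@easynet/agent-llm";

// Only whole-string placeholders of the form ${VAR} are substituted
// (see substituteEnv in the bundled chunk below).
process.env.OPENAI_API_KEY = "sk-example"; // hypothetical value, for illustration
const section = loadLlmConfig("config/llm.yaml");
console.log(section); // apiKey resolves to "sk-example"
// Pass { substituteEnv: false } to keep the raw "${OPENAI_API_KEY}" string.
```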

- **Flat format** (id, base_url, name, options):
+ ## Use in a LangChain agent

- ```yaml
- llm:
-   default: strong
-   strong:
-     provider: openai
-     base_url: https://api.groq.com/openai
-     name: openai/gpt-oss-120b
-     options:
-       apiKey: ${GROQ_API_KEY}
-       temperature: 0.7
-   medium:
-     provider: openai
-     base_url: http://192.168.0.201:11434
-     name: qwen3:4b
-     options:
-       apiKey: ${API_KEY}
- ```
+ **1.** Get the LLM from config with `createAgentLlM()`:

- **Single object** (one default chat):
+ ```ts
+ import { createAgentLlM } from "@easynet/agent-llm";
+ import { createAgent, tool } from "langchain";
+ import { z } from "zod";

- ```yaml
- llm:
-   provider: openai
-   model: gpt-4o-mini
-   temperature: 0
-   apiKey: ${OPENAI_API_KEY}
-   # base_url: https://api.groq.com/openai
+ const llm = createAgentLlM();
  ```
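`createAgentLlM()` throws if no config file is found (see `createAgentLlM` in the bundled chunk later in this diff), so a wrapper may want to guard the call; a sketch, reusing the import above:

```ts
let llm: ReturnType<typeof createAgentLlM>;
try {
  llm = createAgentLlM();
} catch (err) {
  // e.g. "No LLM config at .../config/llm.yaml. Add llm.yaml or config/llm.yaml, or pass configPath."
  console.error((err as Error).message);
  process.exit(1);
}
```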

- **Multi-instance** (instances array with id, type, provider, model); see llm.yaml.example. Use **llm.type** to load extension packages that register additional providers.
-
- ## LangChain adapter
-
- Build a LangChain ChatModel from the llm section of agent.yaml (for `createAgent`, etc.):
+ **2.** Create your agent with LangChain’s `createAgent` and your tools:

- - **`createChatModelFromLlmConfig({ llmSection })`** — returns `ChatOpenAI` (openai) or an extension-registered ChatModel for other providers.
- - Extension packages can register ChatModel factories via `registerChatModelProvider`; call **loadLLMExtensions** before use.
+ ```ts
+ const add = tool(
+   (input: { a: number; b: number }) => String(input.a + input.b),
+   { name: "add", description: "Add two numbers.", schema: z.object({ a: z.number(), b: z.number() }) }
+ );

- The agent package re-exports these; you can also depend on `@easynet/agent-llm` directly.
+ const agent = createAgent({ model: llm as unknown as Parameters<typeof createAgent>[0]["model"], tools: [add] });
+ ```

- ## Usage
+ **3.** Invoke with messages:

  ```ts
- import { createLLMRegistry } from "@easynet/agent-llm";
-
- const config = await loadAgentConfig("agent.yaml");
- const registry = createLLMRegistry({ llmSection: config.llm });
-
- const defaultId = registry.defaultId();
- if (defaultId) {
-   const llm = registry.get(defaultId)!;
-   const result = await llm.chat([
-     { role: "user", content: "Hello" },
-   ]);
-   console.log(result.content);
- }
-
- const imageLlm = registry.get("image");
- if (imageLlm?.generateImage) {
-   const img = await imageLlm.generateImage({ prompt: "A cat" });
-   console.log(img.url);
- }
- ```
-
- ## API
+ import { HumanMessage } from "@langchain/core/messages";

- - **createLLMRegistry({ llmSection })** — create registry from agent config llm section
- - **registry.get(id)** — get ILLMClient by id
- - **registry.defaultId()** — default LLM id
- - **registry.ids()** — all ids
- - **ILLMClient.chat(messages)** — chat (type=chat)
- - **ILLMClient.generateImage?(options)** — image generation (type=image)
+ const result = await agent.invoke({ messages: [new HumanMessage("What is 10 + 20?")] });
+ console.log(result.messages?.slice(-1)[0]?.content);
+ ```

- ## Ecosystem
+ Full example: [examples/langchain-react-agent.ts](examples/langchain-react-agent.ts).

- - **agent** — orchestration; agent.yaml has llm section and uses agent-llm’s createLLMRegistry(config.llm).
- - **agent-llm** — this repo: OpenAI-compatible and extension providers; config parsing and multi-instance chat/image API.
+ ## Config

- ## Publishing to npm
+ Put **llm.yaml** (or **config/llm.yaml**) in your project. Example:

- Releases are published as **@easynet/agent-llm** to https://www.npmjs.com via GitHub Actions (patch-only, starting at 0.0.1).
+ ```yaml
+ llm:
+   provider: openai
+   model: gpt-4o-mini
+   temperature: 0
+   apiKey: ${OPENAI_API_KEY}
+   # baseURL: https://api.openai.com/v1  # or Ollama, Groq, etc.
+ ```
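The bundled parser (`src/config.ts` in the chunk below) also still accepts the flat multi-model layout from 1.0.2: named entries under the top-level `llm` key plus a `default` pointer. A sketch, reusing values from the removed 1.0.2 example:

```yaml
llm:
  default: strong
  strong:
    provider: openai
    base_url: https://api.groq.com/openai
    name: openai/gpt-oss-120b
    options:
      apiKey: ${GROQ_API_KEY}
  medium:
    provider: openai
    base_url: http://192.168.0.201:11434
    name: qwen3:4b
```

`createAgentLlM()` then builds the ChatModel from the `default` entry (`strong`).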

- 1. In this repo: **Settings → Secrets and variables → Actions**, add secret **NPM_TOKEN** (npm token with Automation or Publish permission, allowed to bypass 2FA). Create at https://www.npmjs.com/settings/~/tokens.
- 2. Push to **master** or run **Actions → Release → Run workflow** to trigger the release. The workflow runs tests, builds, then semantic-release to bump patch and publish to npm.
+ Optional: pass a path when calling `createAgentLlM({ configPath: "/path/to/llm.yaml" })`.
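The same flow is also available piecewise through the exported helpers; a sketch, assuming the chunk's exports below are re-exported from the package root:

```ts
import { loadLlmConfig, createChatModelFromLlmConfig } from "@easynet/agent-llm";

const llmSection = loadLlmConfig("/path/to/llm.yaml"); // null if missing or no top-level llm key
if (llmSection == null) throw new Error("LLM config not found");
const model = createChatModelFromLlmConfig({ llmSection });
console.log((await model.invoke("hi")).content); // ChatOpenAI, or an extension ChatModel
```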
package/dist/chunk-3Z3KXKNU.js ADDED
@@ -0,0 +1,422 @@
+ // src/config.ts
+ // Parse an llm section into normalized LLMConfig[] plus a default id.
+ // Accepts flat entries keyed by name, an instances[] array, or a single object.
+ var DEFAULT_LLM_ID = "default";
+ var RESERVED_KEYS = /* @__PURE__ */ new Set([
+   "default",
+   "instances",
+   "catalog",
+   "provider",
+   "model",
+   "temperature",
+   "apiKey",
+   "baseURL",
+   "base_url",
+   "type",
+   "id"
+ ]);
+ function parseLlmSection(section) {
+   if (section == null || typeof section !== "object") {
+     return { defaultId: DEFAULT_LLM_ID, configs: [] };
+   }
+   if (Array.isArray(section)) {
+     const configs = section.filter((i) => i != null && typeof i === "object").map((item, i) => normalizeLlmConfig({ ...item, id: item.id ?? item.name ?? String(i) })).filter((c) => c != null);
+     const defaultId = configs.length > 0 ? configs[0].id : DEFAULT_LLM_ID;
+     return { defaultId, configs };
+   }
+   const s = section;
+   const flatEntries = Object.entries(s).filter(
+     ([k, v]) => !RESERVED_KEYS.has(k) && v != null && typeof v === "object" && !Array.isArray(v)
+   );
+   if (flatEntries.length > 0) {
+     const configs = [];
+     for (const [id, entry] of flatEntries) {
+       const c = entryToLlmConfig(id, entry);
+       if (c) configs.push(c);
+     }
+     const defaultId = typeof s.default === "string" && s.default && flatEntries.some(([k]) => k === s.default) ? s.default : configs.length > 0 ? configs[0].id : DEFAULT_LLM_ID;
+     return { defaultId, configs };
+   }
+   if (Array.isArray(s.instances)) {
+     const configs = s.instances.filter((i) => i != null && typeof i === "object").map((i) => normalizeLlmConfig(i)).filter((c) => c != null);
+     const defaultId = typeof s.default === "string" && s.default ? s.default : configs.length > 0 ? configs[0].id : DEFAULT_LLM_ID;
+     return { defaultId, configs };
+   }
+   if (typeof s.provider === "string" || typeof s.model === "string" || typeof s.name === "string") {
+     const one = singleObjectToLlmConfig(s);
+     return { defaultId: one.id, configs: [one] };
+   }
+   return { defaultId: DEFAULT_LLM_ID, configs: [] };
+ }
+ var EXTENSION_OPTION_KEYS = ["featureKey", "tenant", "authToken", "verifySSL", "bypassAuth", "host", "resolveHost", "timeoutMs", "options"];
+ function entryToLlmConfig(id, entry) {
+   const opts = entry.options;
+   const baseURL = typeof entry.base_url === "string" ? entry.base_url : typeof entry.baseURL === "string" ? entry.baseURL : void 0;
+   const model = typeof entry.name === "string" ? entry.name : typeof entry.model === "string" ? entry.model : void 0;
+   const provider = typeof entry.provider === "string" && entry.provider ? entry.provider : "openai";
+   const config = {
+     id,
+     type: "chat",
+     provider,
+     model,
+     temperature: typeof opts?.temperature === "number" ? opts.temperature : typeof entry.temperature === "number" ? entry.temperature : void 0,
+     apiKey: typeof opts?.apiKey === "string" ? opts.apiKey : typeof entry.apiKey === "string" ? entry.apiKey : void 0,
+     baseURL
+   };
+   if (typeof entry.type === "string" && entry.type === "image") config.type = "image";
+   if (opts && typeof opts === "object") config.options = opts;
+   for (const k of EXTENSION_OPTION_KEYS) {
+     if (entry[k] !== void 0) config[k] = entry[k];
+     else if (opts && opts[k] !== void 0) config[k] = opts[k];
+   }
+   return config;
+ }
+ function singleObjectToLlmConfig(s) {
+   const one = {
+     id: DEFAULT_LLM_ID,
+     type: "chat",
+     provider: typeof s.provider === "string" ? s.provider : "openai",
+     model: typeof s.model === "string" ? s.model : typeof s.name === "string" ? s.name : void 0,
+     temperature: typeof s.temperature === "number" ? s.temperature : void 0,
+     apiKey: typeof s.apiKey === "string" ? s.apiKey : void 0,
+     baseURL: typeof s.baseURL === "string" ? s.baseURL : typeof s.base_url === "string" ? s.base_url : void 0
+   };
+   Object.keys(s).forEach((k) => {
+     if (!["id", "type", "provider", "model", "name", "temperature", "apiKey", "baseURL", "base_url", "default", "instances"].includes(k)) {
+       one[k] = s[k];
+     }
+   });
+   return one;
+ }
+ function normalizeLlmConfig(o) {
+   const id = typeof o.id === "string" && o.id ? o.id : DEFAULT_LLM_ID;
+   const type = o.type === "image" ? "image" : "chat";
+   const provider = typeof o.provider === "string" && o.provider ? o.provider : "openai";
+   const opts = o.options;
+   const config = {
+     id,
+     type,
+     provider,
+     model: typeof o.model === "string" ? o.model : typeof o.name === "string" ? o.name : void 0,
+     temperature: typeof o.temperature === "number" ? o.temperature : typeof opts?.temperature === "number" ? opts.temperature : void 0,
+     apiKey: typeof o.apiKey === "string" ? o.apiKey : typeof opts?.apiKey === "string" ? opts.apiKey : void 0,
+     baseURL: typeof o.baseURL === "string" ? o.baseURL : typeof o.base_url === "string" ? o.base_url : void 0
+   };
+   Object.keys(o).forEach((k) => {
+     if (!["id", "type", "provider", "model", "name", "temperature", "apiKey", "baseURL", "base_url"].includes(k)) {
+       config[k] = o[k];
+     }
+   });
+   return config;
+ }
+
+ // src/loadLlmConfig.ts
+ // Load LLM config from YAML (e.g. config/llm.yaml); ${VAR} placeholders
+ // are substituted from process.env.
+ import { readFileSync, existsSync } from "fs";
+ import { parse as parseYaml } from "yaml";
+ function substituteEnv(obj) {
+   if (obj === null || obj === void 0) return obj;
+   if (typeof obj === "string") {
+     const m = obj.match(/^\$\{(\w+)\}$/);
+     return m ? process.env[m[1]] ?? obj : obj;
+   }
+   if (Array.isArray(obj)) return obj.map(substituteEnv);
+   if (typeof obj === "object") {
+     const out = {};
+     for (const [k, v] of Object.entries(obj)) out[k] = substituteEnv(v);
+     return out;
+   }
+   return obj;
+ }
+ function parseLlmYaml(content, options = {}) {
+   const { substituteEnv: doSub = true } = options;
+   const parsed = parseYaml(content);
+   const llm = parsed?.llm;
+   if (llm == null) return void 0;
+   return doSub ? substituteEnv(llm) : llm;
+ }
+ function loadLlmConfig(filePath, options = {}) {
+   if (!existsSync(filePath)) return null;
+   try {
+     const raw = readFileSync(filePath, "utf8");
+     const llm = parseLlmYaml(raw, options);
+     return llm ?? null;
+   } catch {
+     return null;
+   }
+ }
+
+ // src/providers/openai.ts
+ // OpenAI-compatible chat (/v1/chat/completions) and image clients; baseURL
+ // covers Azure, local proxies, and other compatible endpoints.
+ import OpenAI from "openai";
+ function getApiKey(config) {
+   const key = config.apiKey ?? process.env.OPENAI_API_KEY ?? "";
+   if (!key) throw new Error("OpenAI-compatible apiKey required (config.apiKey or OPENAI_API_KEY)");
+   return key;
+ }
+ function createOpenAIClientOptions(config) {
+   const opts = { apiKey: getApiKey(config) };
+   if (typeof config.baseURL === "string" && config.baseURL) opts.baseURL = config.baseURL;
+   return opts;
+ }
+ function serializeMessage(m) {
+   if (m.role === "tool")
+     return { role: "tool", content: m.content, tool_call_id: m.tool_call_id };
+   if (m.role === "assistant" && "tool_calls" in m && m.tool_calls?.length) {
+     return {
+       role: "assistant",
+       content: m.content ?? null,
+       tool_calls: m.tool_calls.map((tc) => ({
+         id: tc.id,
+         type: "function",
+         function: { name: tc.function.name, arguments: tc.function.arguments }
+       }))
+     };
+   }
+   return { role: m.role, content: m.content };
+ }
+ function createOpenAIChatClient(config) {
+   const client = new OpenAI(createOpenAIClientOptions(config));
+   const model = config.model ?? process.env.OPENAI_MODEL ?? "gpt-4o-mini";
+   const temperature = config.temperature ?? 0;
+   return {
+     id: config.id,
+     type: "chat",
+     async chat(messages) {
+       const resp = await client.chat.completions.create({
+         model,
+         temperature,
+         messages: messages.map((m) => ({ role: m.role, content: m.content }))
+       });
+       const content = resp.choices[0]?.message?.content ?? "";
+       const usage = resp.usage ? { promptTokens: resp.usage.prompt_tokens, completionTokens: resp.usage.completion_tokens } : void 0;
+       return { content, usage };
+     },
+     async chatWithTools(messages, tools, _options) {
+       const resp = await client.chat.completions.create({
+         model,
+         temperature,
+         messages: messages.map(serializeMessage),
+         tools: tools.map((t) => ({
+           type: "function",
+           function: {
+             name: t.function.name,
+             description: t.function.description,
+             parameters: t.function.parameters ?? void 0
+           }
+         }))
+       });
+       const msg = resp.choices[0]?.message;
+       const usage = resp.usage ? { promptTokens: resp.usage.prompt_tokens, completionTokens: resp.usage.completion_tokens } : void 0;
+       return {
+         message: {
+           role: "assistant",
+           content: msg?.content ?? null,
+           tool_calls: msg?.tool_calls?.map((tc) => ({
+             id: tc.id,
+             type: "function",
+             function: {
+               name: tc.function?.name ?? "",
+               arguments: tc.function?.arguments ?? ""
+             }
+           }))
+         },
+         usage
+       };
+     }
+   };
+ }
+ function createOpenAIImageClient(config) {
+   const client = new OpenAI(createOpenAIClientOptions(config));
+   const model = config.model ?? "dall-e-3";
+   return {
+     id: config.id,
+     type: "image",
+     async chat() {
+       throw new Error("OpenAI image model does not support chat; use generateImage()");
+     },
+     async generateImage(options) {
+       const resp = await client.images.generate({
+         model,
+         prompt: options.prompt,
+         size: options.size ?? "1024x1024",
+         n: options.n ?? 1,
+         response_format: "url"
+       });
+       const url = resp.data?.[0]?.url ?? void 0;
+       return { url };
+     }
+   };
+ }
+ function createOpenAIClient(config) {
+   if (config.type === "image") return createOpenAIImageClient(config);
+   return createOpenAIChatClient(config);
+ }
+
+ // src/providers/index.ts
+ var OPENAI_COMPATIBLE = "openai-compatible";
+ function createOpenAICompat(config) {
+   return createOpenAIClient(config);
+ }
+ var PROVIDERS = {
+   openai: createOpenAICompat,
+   [OPENAI_COMPATIBLE]: createOpenAICompat
+ };
+ function createClient(config) {
+   const p = (config.provider ?? "").toLowerCase();
+   const fn = PROVIDERS[p];
+   if (!fn) {
+     const supported = [.../* @__PURE__ */ new Set([...Object.keys(PROVIDERS), "extension providers"])].sort().join(", ");
+     throw new Error(
+       `Unsupported LLM provider: ${config.provider}. Supported: ${supported}.`
+     );
+   }
+   return fn(config);
+ }
+ function registerProvider(name, factory) {
+   PROVIDERS[name.toLowerCase()] = factory;
+ }
+
+ // src/factory.ts
+ // Create an LLM registry from the llm section; each client is keyed by id.
+ function createLLMRegistry(options) {
+   const { defaultId, configs } = parseLlmSection(options.llmSection);
+   const map = /* @__PURE__ */ new Map();
+   for (const config of configs) {
+     try {
+       const client = createClient(config);
+       map.set(config.id, client);
+     } catch (err) {
+       console.warn(`[agent-llm] Skip LLM "${config.id}": ${err instanceof Error ? err.message : String(err)}`);
+     }
+   }
+   return {
+     get(id) {
+       return map.get(id);
+     },
+     defaultId() {
+       if (map.has(defaultId)) return defaultId;
+       return map.size > 0 ? [...map.keys()][0] : void 0;
+     },
+     ids() {
+       return [...map.keys()];
+     }
+   };
+ }
+
+ // src/chatModelRegistry.ts
+ // LangChain ChatModel factories by provider name; extensions register via
+ // registerChatModelProvider, llmAdapter looks them up via getChatModelFactory.
+ var CHAT_MODEL_FACTORIES = /* @__PURE__ */ new Map();
+ function registerChatModelProvider(providerName, factory) {
+   CHAT_MODEL_FACTORIES.set(providerName.toLowerCase(), factory);
+ }
+ function getChatModelFactory(providerName) {
+   return CHAT_MODEL_FACTORIES.get(providerName.toLowerCase());
+ }
+
+ // src/llmAdapter.ts
+ // Build a LangChain ChatModel from the llm section: an extension-registered
+ // ChatModel when available, otherwise ChatOpenAI.
+ import { ChatOpenAI } from "@langchain/openai";
+ var DEFAULT_MODEL = "gpt-4o-mini";
+ function createChatModelFromLlmConfig(options) {
+   const { llmSection, modelEnv, apiKeyEnv } = options;
+   const { defaultId, configs } = parseLlmSection(llmSection ?? null);
+   const defaultConfig = configs.find((c) => c.id === defaultId) ?? configs[0];
+   if (!defaultConfig) {
+     const model2 = modelEnv ?? process.env.OPENAI_MODEL ?? DEFAULT_MODEL;
+     const apiKey2 = apiKeyEnv ?? process.env.OPENAI_API_KEY;
+     return new ChatOpenAI({
+       model: model2,
+       temperature: 0,
+       ...apiKey2 ? { apiKey: apiKey2 } : {}
+     });
+   }
+   const provider = defaultConfig.provider ?? "openai";
+   const chatModelFactory = getChatModelFactory(provider);
+   if (chatModelFactory) {
+     const config = {
+       ...defaultConfig,
+       model: modelEnv ?? defaultConfig.model,
+       temperature: typeof defaultConfig.temperature === "number" ? defaultConfig.temperature : 0
+     };
+     return chatModelFactory(config);
+   }
+   const model = modelEnv ?? defaultConfig?.model ?? process.env.OPENAI_MODEL ?? DEFAULT_MODEL;
+   let apiKey = apiKeyEnv ?? defaultConfig?.apiKey ?? process.env.OPENAI_API_KEY;
+   let baseURL = defaultConfig?.baseURL;
+   // The OpenAI client appends e.g. /chat/completions to baseURL; Ollama and
+   // other compatible APIs expect /v1/chat/completions.
+   if (baseURL && !baseURL.replace(/\/$/, "").endsWith("/v1")) {
+     baseURL = baseURL.replace(/\/$/, "") + "/v1";
+   }
+   // The OpenAI client throws if apiKey is undefined; Ollama and many
+   // compatible endpoints accept a dummy key.
+   if (baseURL && apiKey === void 0) {
+     apiKey = "ollama";
+   }
+   const temperature = typeof defaultConfig?.temperature === "number" ? defaultConfig.temperature : 0;
+   const constructorOptions = {
+     model,
+     temperature,
+     ...apiKey ? { apiKey } : {},
+     ...baseURL ? { configuration: { baseURL } } : {}
+   };
+   return new ChatOpenAI(constructorOptions);
+ }
+
+ // src/createAgentLlM.ts
+ // Resolve the config path (llm.yaml in cwd, then config/llm.yaml in cwd or
+ // the parent directory) and return a LangChain ChatModel.
+ import { join } from "path";
+ import { existsSync as existsSync2 } from "fs";
+ function resolveDefaultConfigPath() {
+   const cwd = process.cwd();
+   if (existsSync2(join(cwd, "llm.yaml"))) return join(cwd, "llm.yaml");
+   if (existsSync2(join(cwd, "config", "llm.yaml"))) return join(cwd, "config", "llm.yaml");
+   const parentConfig = join(cwd, "..", "config", "llm.yaml");
+   if (existsSync2(parentConfig)) return parentConfig;
+   return join(cwd, "config", "llm.yaml");
+ }
+ function createAgentLlM(options = {}) {
+   const configPath = options.configPath ?? resolveDefaultConfigPath();
+   const llmSection = loadLlmConfig(configPath);
+   if (llmSection == null) {
+     throw new Error(`No LLM config at ${configPath}. Add llm.yaml or config/llm.yaml, or pass configPath.`);
+   }
+   return createChatModelFromLlmConfig({ llmSection });
+ }
+
+ // src/loadLLMExtensions.ts
+ // Dynamically import extension packages by npm name; each must export
+ // registerLLMExtension(). A package is loaded at most once.
+ var loadedPackages = /* @__PURE__ */ new Set();
+ var DEFAULT_EXTENSIONS = ["wallee-llm"];
+ function resolveLLMExtensionPackages(types) {
+   const typeList = types == null ? [] : Array.isArray(types) ? types : [types];
+   const packages = typeList.filter(
+     (t) => typeof t === "string" && t.length > 0
+   );
+   return packages.length > 0 ? packages : DEFAULT_EXTENSIONS;
+ }
+ async function loadLLMExtensions(extensionPackages) {
+   const packages = extensionPackages ?? DEFAULT_EXTENSIONS;
+   for (const pkg of packages) {
+     if (loadedPackages.has(pkg)) continue;
+     loadedPackages.add(pkg);
+     try {
+       const m = await import(
+         /* @vite-ignore */
+         pkg
+       );
+       if (typeof m.registerLLMExtension === "function") {
+         m.registerLLMExtension();
+       }
+     } catch {
+       // extension not installed or failed to load
+     }
+   }
+ }
+
+ export {
+   parseLlmSection,
+   substituteEnv,
+   parseLlmYaml,
+   loadLlmConfig,
+   createOpenAIChatClient,
+   createOpenAIImageClient,
+   createOpenAIClient,
+   createClient,
+   registerProvider,
+   createLLMRegistry,
+   registerChatModelProvider,
+   getChatModelFactory,
+   createChatModelFromLlmConfig,
+   createAgentLlM,
+   resolveLLMExtensionPackages,
+   loadLLMExtensions
+ };
+ //# sourceMappingURL=chunk-3Z3KXKNU.js.map
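For context on the extension hooks above: a package loaded by `loadLLMExtensions` must export `registerLLMExtension()`, which would typically register both a raw client factory and a LangChain ChatModel factory. A hypothetical sketch (the package name, provider id, and the re-exported type names are assumptions):

```ts
// hypothetical-llm-extension/index.ts
import { registerProvider, registerChatModelProvider } from "@easynet/agent-llm";
import type { LLMConfig, ChatMessage } from "@easynet/agent-llm";
import { ChatOpenAI } from "@langchain/openai";

export function registerLLMExtension(): void {
  // Raw client, used by createLLMRegistry() / createClient().
  registerProvider("my-provider", (config: LLMConfig) => ({
    id: config.id,
    type: "chat" as const,
    async chat(messages: ChatMessage[]) {
      // Call the provider's backend here; stubbed for the sketch.
      return { content: `echo: ${messages[messages.length - 1]?.content ?? ""}` };
    },
  }));
  // LangChain ChatModel, used by createChatModelFromLlmConfig().
  registerChatModelProvider("my-provider", (config) =>
    new ChatOpenAI({ model: config.model ?? "gpt-4o-mini", apiKey: config.apiKey })
  );
}
```

After `await loadLLMExtensions(["hypothetical-llm-extension"])`, a config with `provider: my-provider` routes through these factories.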
package/dist/chunk-3Z3KXKNU.js.map ADDED
package/dist/cli.d.ts ADDED
@@ -0,0 +1,9 @@
+ #!/usr/bin/env node
+ /**
+  * CLI for @easynet/agent-llm: run the LangChain agent with a query.
+  * Usage: agent-llm "your question"
+  *    or: agent-llm --config ./config/llm.yaml "hi"
+  *    or: npx @easynet/agent-llm "hi"
+  */
+ export {};
+ //# sourceMappingURL=cli.d.ts.map
package/dist/cli.d.ts.map ADDED
@@ -0,0 +1 @@
+ {"version":3,"file":"cli.d.ts","sourceRoot":"","sources":["../src/cli.ts"],"names":[],"mappings":";AACA;;;;;GAKG"}