@node-llm/core 1.6.1 → 1.7.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. It is provided for informational purposes only and reflects the changes between the two versions as they appear in their public registry.
- package/README.md +64 -249
- package/dist/aliases.d.ts +2 -0
- package/dist/aliases.d.ts.map +1 -1
- package/dist/aliases.js +3 -1
- package/dist/chat/Chat.d.ts +10 -1
- package/dist/chat/Chat.d.ts.map +1 -1
- package/dist/chat/Chat.js +17 -3
- package/dist/chat/ChatOptions.d.ts +2 -1
- package/dist/chat/ChatOptions.d.ts.map +1 -1
- package/dist/chat/ChatResponse.d.ts +5 -2
- package/dist/chat/ChatResponse.d.ts.map +1 -1
- package/dist/chat/ChatResponse.js +5 -2
- package/dist/chat/ChatStream.d.ts.map +1 -1
- package/dist/chat/ChatStream.js +15 -1
- package/dist/errors/index.d.ts +19 -0
- package/dist/errors/index.d.ts.map +1 -1
- package/dist/errors/index.js +28 -0
- package/dist/index.d.ts +2 -0
- package/dist/index.d.ts.map +1 -1
- package/dist/index.js +1 -0
- package/dist/models/models.d.ts.map +1 -1
- package/dist/models/models.js +109 -20
- package/dist/providers/Provider.d.ts +32 -0
- package/dist/providers/Provider.d.ts.map +1 -1
- package/dist/providers/anthropic/Chat.d.ts.map +1 -1
- package/dist/providers/anthropic/Chat.js +23 -1
- package/dist/providers/anthropic/Streaming.d.ts.map +1 -1
- package/dist/providers/anthropic/Streaming.js +15 -0
- package/dist/providers/anthropic/types.d.ts +7 -1
- package/dist/providers/anthropic/types.d.ts.map +1 -1
- package/dist/providers/deepseek/Chat.d.ts.map +1 -1
- package/dist/providers/deepseek/Chat.js +9 -2
- package/dist/providers/deepseek/Streaming.d.ts.map +1 -1
- package/dist/providers/deepseek/Streaming.js +3 -2
- package/dist/providers/gemini/Chat.d.ts.map +1 -1
- package/dist/providers/gemini/Chat.js +18 -2
- package/dist/providers/gemini/Streaming.d.ts.map +1 -1
- package/dist/providers/gemini/Streaming.js +17 -2
- package/dist/providers/gemini/types.d.ts +4 -0
- package/dist/providers/gemini/types.d.ts.map +1 -1
- package/dist/providers/openai/Chat.d.ts.map +1 -1
- package/dist/providers/openai/Chat.js +18 -3
- package/dist/providers/openai/Streaming.d.ts.map +1 -1
- package/dist/providers/openai/Streaming.js +11 -3
- package/package.json +1 -1
package/README.md
CHANGED
@@ -1,312 +1,127 @@
+# @node-llm/core
+
 <p align="left">
   <a href="https://node-llm.eshaiju.com/">
     <img src="https://node-llm.eshaiju.com/assets/images/logo.jpg" alt="NodeLLM logo" width="300" />
   </a>
 </p>

-# NodeLLM
-
-**An architectural layer for integrating Large Language Models in Node.js.**
-
-**Provider-agnostic by design.**
-
-Integrating multiple LLM providers often means juggling different SDKs, API styles, and update cycles. NodeLLM provides a single, unified, production-oriented API for interacting with over 540+ models across multiple providers (OpenAI, Gemini, Anthropic, DeepSeek, OpenRouter, Ollama, etc.) that stays consistent even when providers change.
-
-<p align="left">
-  <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/openai.svg" height="28" />
-  <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/openai-text.svg" height="22" />
-
-  <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/anthropic-text.svg" height="18" />
-
-  <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/gemini-color.svg" height="28" />
-  <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/gemini-text.svg" height="20" />
-
-  <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/deepseek-color.svg" height="28" />
-  <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/deepseek-text.svg" height="20" />
-
-  <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/openrouter.svg" height="28" />
-  <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/openrouter-text.svg" height="22" />
-
-  <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/ollama.svg" height="28" />
-  <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/ollama-text.svg" height="18" />
-</p>
-
-<br/>
-
 [](https://www.npmjs.com/package/@node-llm/core)
 [](https://opensource.org/licenses/MIT)
 [](https://www.typescriptlang.org/)

-
-
-## 🛑 What NodeLLM is NOT
-
-NodeLLM represents a clear architectural boundary between your system and LLM vendors.
-
-NodeLLM is **NOT**:
+**The production-grade LLM engine for Node.js. Provider-agnostic by design.**

--
-- A prompt-engineering framework
-- An agent playground or experimental toy
+`@node-llm/core` provides a single, unified API for interacting with over 540+ models across all major providers. It is built for developers who need stable infrastructure, standard streaming, and automated tool execution without vendor lock-in.

 ---

-##
-
-Direct integrations often become tightly coupled to specific providers, making it difficult to adapt as models evolve. **LLMs should be treated as infrastructure**, and NodeLLM helps you build a stable foundation that persists regardless of which model is currently "state of the art."
-
-NodeLLM helps solve **architectural problems**, not just provide API access. It serves as the core integration layer for LLMs in the Node.js ecosystem.
-
-### Strategic Goals
-
-- **Provider Isolation**: Decouple your services from vendor SDKs.
-- **Production-Ready**: Native support for streaming, automatic retries, and unified error handling.
-- **Predictable API**: Consistent behavior for Tools, Vision, and Structured Outputs across all models, **now including full parity for streaming**.
-
----
-
-## ⚡ The Architectural Path
-
-```ts
-import { NodeLLM } from "@node-llm/core";
-
-// 1. Zero-Config (NodeLLM automatically reads NODELLM_PROVIDER and API keys)
-const chat = NodeLLM.chat("gpt-4o");
-
-// 2. Chat (High-level request/response)
-const response = await chat.ask("Explain event-driven architecture");
-console.log(response.content);
-
-// 3. Streaming (Standard AsyncIterator)
-for await (const chunk of chat.stream("Explain event-driven architecture")) {
-  process.stdout.write(chunk.content);
-}
-```
-
-### 🎯 Real-World Example: Brand Perception Checker
-
-Built with NodeLLM - Multi-provider AI analysis, tool calling, and structured outputs working together.
+## 🚀 Key Features

-**
+- **Unified API**: One interface for OpenAI, Anthropic, Gemini, DeepSeek, OpenRouter, and Ollama.
+- **Automated Tool Loops**: Recursive tool execution handled automatically—no manual loops required.
+- **Streaming + Tools**: Seamlessly execute tools and continue the stream with the final response.
+- **Structured Output**: Native Zod support for rigorous schema validation (`.withSchema()`).
+- **Multimodal engine**: Built-in handling for Vision, Audio (Whisper), and Video (Gemini).
+- **Security-First**: Integrated circuit breakers for timeouts, max tokens, and infinite tool loops.

 ---

-##
-
-NodeLLM provides a flexible, **lazy-initialized** configuration system designed for enterprise usage. It is safe for ESM and resolved only when your first request is made, eliminating the common `dotenv` race condition.
-
-```ts
-// Recommended for multi-provider pipelines
-const llm = createLLM({
-  openaiApiKey: process.env.OPENAI_API_KEY,
-  anthropicApiKey: process.env.ANTHROPIC_API_KEY,
-  ollamaApiBase: process.env.OLLAMA_API_BASE
-});
-
-// Support for Custom Endpoints (e.g., Azure or LocalAI)
-const llm = createLLM({
-  openaiApiKey: process.env.AZURE_KEY,
-  openaiApiBase: "https://your-resource.openai.azure.com/openai/deployments/..."
-});
-```
-
-**[Full Configuration Guide →](docs/getting_started/configuration.md)**
+## 📋 Supported Providers

-
+| Provider       | Supported Features                                                |
+| :------------- | :---------------------------------------------------------------- |
+| **OpenAI**     | Chat, Streaming, Tools, Vision, Audio, Images, Reasoning (o1/o3)  |
+| **Anthropic**  | Chat, Streaming, Tools, Vision, PDF Support (Claude 3.5)          |
+| **Gemini**     | Chat, Streaming, Tools, Vision, Audio, Video, Embeddings          |
+| **DeepSeek**   | Chat (V3), Reasoning (R1), Streaming + Tools                      |
+| **OpenRouter** | 540+ models via a single API with automatic capability detection  |
+| **Ollama**     | Local LLM inference with full Tool and Vision support             |

 ---

-##
-
-### 💬 Unified Chat
+## ⚡ Quick Start

-
+### Installation

-```
-
-
-// Uses NODELLM_PROVIDER from environment (defaults to GPT-4o)
-const chat = NodeLLM.chat();
-await chat.ask("Hello world");
+```bash
+npm install @node-llm/core
 ```

-###
+### Basic Chat & Streaming

-
+NodeLLM automatically reads your API keys from environment variables (e.g., `OPENAI_API_KEY`).

 ```ts
-
-  files: ["./screenshot.png", "https://example.com/spec.pdf"]
-});
-```
+import { createLLM } from "@node-llm/core";

-
-
-Define tools once;`NodeLLM` manages the recursive execution loop for you, keeping your controller logic clean. **Works seamlessly with both regular chat and streaming!**
-
-```ts
-import { Tool, z } from "@node-llm/core";
+const llm = createLLM({ provider: "openai" });

-//
-
-
-  description = "Get current weather";
-  schema = z.object({ location: z.string() });
+// 1. Standard Request
+const res = await llm.chat("gpt-4o").ask("What is the speed of light?");
+console.log(res.content);

-
-
-
+// 2. Real-time Streaming
+for await (const chunk of llm.chat().stream("Tell me a long story")) {
+  process.stdout.write(chunk.content);
 }
-
-// Now the model can use it automatically
-await chat.withTool(WeatherTool).ask("What's the weather in Tokyo?");
-
-// Lifecycle Hooks for Error & Flow Control
-chat.onToolCallError((call, err) => "STOP");
 ```

-
-
-### 🔍 Comprehensive Debug Logging
+### Structured Output (Zod)

-
-
-```ts
-// Set environment variable
-process.env.NODELLM_DEBUG = "true";
-
-// Now see detailed logs for every API call:
-// [NodeLLM] [OpenAI] Request: POST https://api.openai.com/v1/chat/completions
-// { "model": "gpt-4o", "messages": [...] }
-// [NodeLLM] [OpenAI] Response: 200 OK
-// { "id": "chatcmpl-123", ... }
-```
-
-**Covers:** Chat, Streaming, Images, Embeddings, Transcription, Moderation - across all providers!
-
-### ✨ Structured Output
-
-Get type-safe, validated JSON back using **Zod** schemas.
+Stop parsing markdown. Get typed objects directly.

 ```ts
 import { z } from "@node-llm/core";
-const Product = z.object({ name: z.string(), price: z.number() });
-
-const res = await chat.withSchema(Product).ask("Generate a gadget");
-console.log(res.parsed.name); // Full type-safety
-```
-
-### 🎨 Image Generation

-
-
-
-
-
-
-```ts
-await NodeLLM.transcribe("meeting-recording.wav");
-```
-
-### 💾 Persistence Layer
-
-Automatically track chat history, tool executions, and API metrics with **@node-llm/orm**.
-
-```ts
-import { createChat } from "@node-llm/orm/prisma";
+const PlayerSchema = z.object({
+  name: z.string(),
+  powerLevel: z.number(),
+  abilities: z.array(z.string())
+});

-
-const
+const chat = llm.chat("gpt-4o-mini").withSchema(PlayerSchema);
+const response = await chat.ask("Generate a random RPG character");

-
-// -> Saves User Message
-// -> Saves Assistant Response
-// -> Tracks Token Usage & Cost
-// -> Logs Tool Calls & Results
+console.log(response.parsed.name); // Fully typed!
 ```

-
-
-Run multiple providers in parallel safely without global configuration side effects using isolated contexts.
-
-```ts
-const [gpt, claude] = await Promise.all([
-  // Each call branch off into its own isolated context
-  NodeLLM.withProvider("openai").chat("gpt-4o").ask(prompt),
-  NodeLLM.withProvider("anthropic").chat("claude-3-5-sonnet").ask(prompt)
-]);
-```
+---

-
+## 🛡️ Security Circuit Breakers

-
+NodeLLM protects your production environment with four built-in safety pillars:

 ```ts
-const
-
+const llm = createLLM({
+  requestTimeout: 15000, // 15s DoS Protection
+  maxTokens: 4096, // Cost Protection
+  maxRetries: 3, // Retry Storm Protection
+  maxToolCalls: 5 // Infinite Loop Protection
+});
 ```

 ---

-##
+## 💾 Ecosystem

-
-| :-------------------- | :---------------------------- | :-------------------------- | :------------------------ |
-| **Provider Logic**    | Transparently Handled         | Exposed to your code        | **Low Coupling**          |
-| **Streaming**         | Standard `AsyncIterator`      | Vendor-specific Events      | **Predictable Data Flow** |
-| **Streaming + Tools** | Automated Execution           | Manual implementation       | **Seamless UX**           |
-| **Tool Loops**        | Automated Recursion           | Manual implementation       | **Reduced Boilerplate**   |
-| **Files/Vision**      | Intelligent Path/URL handling | Base64/Buffer management    | **Cleaner Service Layer** |
-| **Configuration**     | Centralized & Global          | Per-instance initialization | **Easier Lifecycle Mgmt** |
+Looking for persistence? use **[@node-llm/orm](https://www.npmjs.com/package/@node-llm/orm)**.

-
-
-## 📋 Supported Providers
-
-| Provider | Supported Features |
-| :------- | :----------------- |
-| <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/openai.svg" height="18"> **OpenAI** | Chat, **Streaming + Tools**, Vision, Audio, Images, Transcription, **Reasoning** |
-| <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/gemini-color.svg" height="18"> **Gemini** | Chat, **Streaming + Tools**, Vision, Audio, Video, Embeddings |
-| <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/anthropic-text.svg" height="12"> **Anthropic** | Chat, **Streaming + Tools**, Vision, PDF, Structured Output |
-| <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/deepseek-color.svg" height="18"> **DeepSeek** | Chat (V3), **Reasoning (R1)**, **Streaming + Tools** |
-| <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/openrouter.svg" height="18"> **OpenRouter** | **Aggregator**, Chat, Streaming, Tools, Vision, Embeddings, **Reasoning** |
-| <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/ollama.svg" height="18"> **Ollama** | **Local Inference**, Chat, Streaming, Tools, Vision, Embeddings |
+- Automatically saves chat history to PostgreSQL/MySQL/SQLite via Prisma.
+- Tracks tool execution results and API metrics (latency, cost, tokens).

 ---

-## 📚 Documentation
-
-```bash
-npm install @node-llm/core
-```
-
-**[View Full Documentation ↗](https://node-llm.eshaiju.com/)**
-
-### 🍿 Try the Live Demo
-
-Want to see it in action? Run this in your terminal:
-
-```bash
-git clone https://github.com/node-llm/node-llm.git
-cd node-llm
-npm install
-npm run demo
-```
-
----
-
-## 🤝 Contributing
-
-We welcome contributions! Please see our **[Contributing Guide](CONTRIBUTING.md)** for more details on how to get started.
-
----
+## 📚 Full Documentation

-
+Visit **[node-llm.eshaiju.com](https://node-llm.eshaiju.com/)** for:

-
+- [Deep Dive into Tool Calling](https://node-llm.eshaiju.com/core-features/tools)
+- [Multi-modal Vision & Audio Guide](https://node-llm.eshaiju.com/core-features/multimodal)
+- [Custom Provider Plugin System](https://node-llm.eshaiju.com/advanced/custom-providers)

 ---

-##
+## License

-MIT © [NodeLLM
+MIT © [NodeLLM Contributors]
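The rewritten README still advertises automated tool loops but no longer carries the old worked example. For orientation, here is a minimal sketch reassembled from the snippet removed above; the diff truncates the tool class definition, so the `extends Tool` wrapper and the `execute` handler are assumptions, not confirmed API:

```ts
import { NodeLLM, Tool, z } from "@node-llm/core";

// Reassembled from the removed README snippet; only `description` and
// `schema` appear verbatim in the diff. The class wrapper and the
// `execute` handler below are assumptions.
class WeatherTool extends Tool {
  description = "Get current weather";
  schema = z.object({ location: z.string() });

  async execute({ location }: { location: string }) {
    return `Sunny in ${location}`; // stub result for illustration
  }
}

const chat = NodeLLM.chat("gpt-4o");

// NodeLLM runs the recursive tool-execution loop automatically.
await chat.withTool(WeatherTool).ask("What's the weather in Tokyo?");

// Lifecycle hook from the removed snippet: abort the loop on tool errors.
chat.onToolCallError((call, err) => "STOP");
```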
package/dist/aliases.d.ts
CHANGED
@@ -279,6 +279,7 @@ declare const _default: {
         readonly openrouter: "z-ai/glm-4.6:exacto";
     };
     readonly "glm-4.7": {
+        readonly gemini: "zai-org/glm-4.7-maas";
         readonly openrouter: "z-ai/glm-4.7";
     };
     readonly "glm-z1-32b-free": {
@@ -362,6 +363,7 @@ declare const _default: {
     };
     readonly "gpt-5.1-codex-max": {
         readonly openai: "gpt-5.1-codex-max";
+        readonly openrouter: "openai/gpt-5.1-codex-max";
     };
     readonly "gpt-5.1-codex-mini": {
         readonly openai: "gpt-5.1-codex-mini";
package/dist/aliases.d.ts.map
CHANGED
@@ -1 +1 @@
-
{"version":3,"file":"aliases.d.ts","sourceRoot":"","sources":["../src/aliases.ts"],"names":[],"mappings":"
+
{"version":3,"file":"aliases.d.ts","sourceRoot":"","sources":["../src/aliases.ts"],"names":[],"mappings":";;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;AAAA,wBAqnBW"}
package/dist/aliases.js
CHANGED
@@ -279,6 +279,7 @@ export default {
         "openrouter": "z-ai/glm-4.6:exacto"
     },
     "glm-4.7": {
+        "gemini": "zai-org/glm-4.7-maas",
         "openrouter": "z-ai/glm-4.7"
     },
     "glm-z1-32b-free": {
@@ -361,7 +362,8 @@ export default {
         "openrouter": "openai/gpt-5.1-codex"
     },
     "gpt-5.1-codex-max": {
-        "openai": "gpt-5.1-codex-max"
+        "openai": "gpt-5.1-codex-max",
+        "openrouter": "openai/gpt-5.1-codex-max"
     },
     "gpt-5.1-codex-mini": {
         "openai": "gpt-5.1-codex-mini",
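Both changes are additive alias mappings: a canonical model name resolves to a provider-specific id at request time. What the updated table now contains, shown via a deep import of the compiled module (illustrative only; the package's public entry points may expose this differently):

```ts
// Illustrative deep import of the compiled alias table.
import aliases from "@node-llm/core/dist/aliases.js";

console.log(aliases["glm-4.7"]);
// -> { gemini: "zai-org/glm-4.7-maas", openrouter: "z-ai/glm-4.7" }

console.log(aliases["gpt-5.1-codex-max"]);
// -> { openai: "gpt-5.1-codex-max", openrouter: "openai/gpt-5.1-codex-max" }
```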
package/dist/chat/Chat.d.ts
CHANGED
@@ -1,7 +1,7 @@
 import { Message } from "./Message.js";
 import { ContentPart } from "./Content.js";
 import { ChatOptions } from "./ChatOptions.js";
-import { Provider, Usage, ChatChunk } from "../providers/Provider.js";
+import { Provider, Usage, ChatChunk, ThinkingConfig } from "../providers/Provider.js";
 import { Stream } from "../streaming/Stream.js";
 import { ToolResolvable } from "./Tool.js";
 import { Schema } from "../schema/Schema.js";
@@ -15,6 +15,7 @@ export interface AskOptions {
     headers?: Record<string, string>;
     maxToolCalls?: number;
     requestTimeout?: number;
+    thinking?: ThinkingConfig;
     signal?: AbortSignal;
 }
 import { ChatResponseString } from "./ChatResponse.js";
@@ -106,6 +107,14 @@ export declare class Chat {
      * Can accept a Schema object or a Zod schema/JSON Schema directly.
      */
     withSchema(schema: Schema | z.ZodType<unknown> | Record<string, unknown> | null): this;
+    /**
+     * Enable and configure extended thinking for reasoning models.
+     */
+    withThinking(config: ThinkingConfig): this;
+    /**
+     * Shortcut to set thinking effort.
+     */
+    withEffort(effort: "low" | "medium" | "high" | "none"): this;
     onNewMessage(handler: () => void): this;
     onEndMessage(handler: (message: ChatResponseString) => void): this;
     onToolCall(handler: (toolCall: unknown) => void): this;
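These declarations are the new public thinking surface in 1.7.0: a `thinking` field on `AskOptions` plus the chainable `withThinking`/`withEffort` builders. A minimal usage sketch based only on the signatures above (provider, model name, and prompt are illustrative; whether `thinking` comes back populated depends on the model):

```ts
import { createLLM } from "@node-llm/core";

const llm = createLLM({ provider: "anthropic" });

// withEffort("high") is the declared shortcut for withThinking({ effort: "high" }).
const chat = llm.chat("claude-sonnet-4").withEffort("high");

const res = await chat.ask("Prove that sqrt(2) is irrational.");
console.log(res.content);
console.log(res.thinking); // ThinkingResult | undefined, new on responses in 1.7.0
```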
package/dist/chat/Chat.d.ts.map
CHANGED
@@ -1 +1 @@
-
{"version":3,"file":"Chat.d.ts","sourceRoot":"","sources":["../../src/chat/Chat.ts"],"names":[],"mappings":"AACA,OAAO,EAAE,OAAO,EAAE,MAAM,cAAc,CAAC;AACvC,OAAO,EACL,WAAW,EAIZ,MAAM,cAAc,CAAC;AACtB,OAAO,EAAE,WAAW,EAAE,MAAM,kBAAkB,CAAC;AAC/C,OAAO,
+
{"version":3,"file":"Chat.d.ts","sourceRoot":"","sources":["../../src/chat/Chat.ts"],"names":[],"mappings":"AACA,OAAO,EAAE,OAAO,EAAE,MAAM,cAAc,CAAC;AACvC,OAAO,EACL,WAAW,EAIZ,MAAM,cAAc,CAAC;AACtB,OAAO,EAAE,WAAW,EAAE,MAAM,kBAAkB,CAAC;AAC/C,OAAO,EACL,QAAQ,EACR,KAAK,EACL,SAAS,EAET,cAAc,EACf,MAAM,0BAA0B,CAAC;AAGlC,OAAO,EAAE,MAAM,EAAE,MAAM,wBAAwB,CAAC;AAChD,OAAO,EAAkB,cAAc,EAAE,MAAM,WAAW,CAAC;AAC3D,OAAO,EAAE,MAAM,EAAE,MAAM,qBAAqB,CAAC;AAE7C,OAAO,EAAE,CAAC,EAAE,MAAM,KAAK,CAAC;AAExB,OAAO,EAAE,iBAAiB,EAAE,MAAM,iBAAiB,CAAC;AAMpD,MAAM,WAAW,UAAU;IACzB,MAAM,CAAC,EAAE,MAAM,EAAE,CAAC;IAClB,KAAK,CAAC,EAAE,MAAM,EAAE,CAAC;IACjB,WAAW,CAAC,EAAE,MAAM,CAAC;IACrB,SAAS,CAAC,EAAE,MAAM,CAAC;IACnB,OAAO,CAAC,EAAE,MAAM,CAAC,MAAM,EAAE,MAAM,CAAC,CAAC;IACjC,YAAY,CAAC,EAAE,MAAM,CAAC;IACtB,cAAc,CAAC,EAAE,MAAM,CAAC;IACxB,QAAQ,CAAC,EAAE,cAAc,CAAC;IAC1B,MAAM,CAAC,EAAE,WAAW,CAAC;CACtB;AAED,OAAO,EAAE,kBAAkB,EAAE,MAAM,mBAAmB,CAAC;AAEvD,qBAAa,IAAI;IAMb,OAAO,CAAC,QAAQ,CAAC,QAAQ;IACzB,OAAO,CAAC,KAAK;IACb,OAAO,CAAC,QAAQ,CAAC,OAAO;IAP1B,OAAO,CAAC,QAAQ,CAAiB;IACjC,OAAO,CAAC,cAAc,CAAiB;IACvC,OAAO,CAAC,QAAQ,CAAW;gBAGR,QAAQ,EAAE,QAAQ,EAC3B,KAAK,EAAE,MAAM,EACJ,OAAO,GAAE,WAAgB,EAC1C,WAAW,GAAE;QAAE,QAAQ,EAAE,MAAM,CAAC;QAAC,OAAO,EAAE,MAAM,CAAA;KAAgC;IA6BlF;;OAEG;IACH,IAAI,OAAO,IAAI,SAAS,OAAO,EAAE,CAEhC;IAED,IAAI,OAAO,IAAI,MAAM,CAEpB;IAED;;OAEG;IACH,IAAI,UAAU,IAAI,KAAK,CAuBtB;IAED;;;OAGG;IACH,QAAQ,CAAC,IAAI,EAAE,cAAc,GAAG,IAAI;IAIpC;;;;;;;OAOG;IACH,SAAS,CAAC,KAAK,EAAE,cAAc,EAAE,EAAE,OAAO,CAAC,EAAE;QAAE,OAAO,CAAC,EAAE,OAAO,CAAA;KAAE,GAAG,IAAI;IAkBzE;;;;OAIG;IACH,gBAAgB,CAAC,WAAW,EAAE,MAAM,EAAE,OAAO,CAAC,EAAE;QAAE,OAAO,CAAC,EAAE,OAAO,CAAA;KAAE,GAAG,IAAI;IAW5E;;OAEG;IACH,gBAAgB,CAAC,WAAW,EAAE,MAAM,EAAE,OAAO,CAAC,EAAE;QAAE,OAAO,CAAC,EAAE,OAAO,CAAA;KAAE,GAAG,IAAI;IAI5E;;OAEG;IACH,MAAM,CAAC,WAAW,EAAE,MAAM,EAAE,OAAO,CAAC,EAAE;QAAE,OAAO,CAAC,EAAE,OAAO,CAAA;KAAE,GAAG,IAAI;IAIlE;;;OAGG;IACH,GAAG,CAAC,IAAI,EAAE,MAAM,GAAG,WAAW,GAAG,QAAQ,GAAG,WAAW,EAAE,OAAO,EAAE,MAAM,GAAG,IAAI;IAS/E;;;OAGG;IACH,eAAe,CAAC,IAAI,EAAE,MAAM,GAAG,IAAI;IAKnC;;OAEG;IACH,SAAS,CAAC,KAAK,EAAE,MAAM,GAAG,IAAI;IAK9B;;;OAGG;IACH,kBAAkB,CAAC,OAAO,EAAE;QAC1B,OAAO,CAAC,EAAE,MAAM,CAAC,MAAM,EAAE,MAAM,CAAC,CAAC;QACjC,cAAc,CAAC,EAAE,OAAO,CAAC;KAC1B,GAAG,IAAI;IAUR;;;OAGG;IACH,UAAU,CAAC,MAAM,EAAE,MAAM,CAAC,MAAM,EAAE,OAAO,CAAC,GAAG,IAAI;IAKjD;;;OAGG;IACH,UAAU,CAAC,MAAM,EAAE,MAAM,GAAG,CAAC,CAAC,OAAO,CAAC,OAAO,CAAC,GAAG,MAAM,CAAC,MAAM,EAAE,OAAO,CAAC,GAAG,IAAI,GAAG,IAAI;IAgBtF;;OAEG;IACH,YAAY,CAAC,MAAM,EAAE,cAAc,GAAG,IAAI;IAK1C;;OAEG;IACH,UAAU,CAAC,MAAM,EAAE,KAAK,GAAG,QAAQ,GAAG,MAAM,GAAG,MAAM,GAAG,IAAI;IAM5D,YAAY,CAAC,OAAO,EAAE,MAAM,IAAI,GAAG,IAAI;IAKvC,YAAY,CAAC,OAAO,EAAE,CAAC,OAAO,EAAE,kBAAkB,KAAK,IAAI,GAAG,IAAI;IAKlE,UAAU,CAAC,OAAO,EAAE,CAAC,QAAQ,EAAE,OAAO,KAAK,IAAI,GAAG,IAAI;IAItD,YAAY,CAAC,OAAO,EAAE,CAAC,MAAM,EAAE,OAAO,KAAK,IAAI,GAAG,IAAI;IAItD;;OAEG;IACH,eAAe,CAAC,OAAO,EAAE,CAAC,QAAQ,EAAE,OAAO,KAAK,IAAI,GAAG,IAAI;IAK3D;;OAEG;IACH,aAAa,CAAC,OAAO,EAAE,CAAC,QAAQ,EAAE,OAAO,EAAE,MAAM,EAAE,OAAO,KAAK,IAAI,GAAG,IAAI;IAK1E,eAAe,CACb,OAAO,EAAE,CACP,QAAQ,EAAE,OAAO,EACjB,KAAK,EAAE,KAAK,KACT,MAAM,GAAG,UAAU,GAAG,OAAO,GAAG,IAAI,GAAG,OAAO,CAAC,MAAM,GAAG,UAAU,GAAG,OAAO,GAAG,IAAI,CAAC,GACxF,IAAI;IAKP;;;;;OAKG;IACH,iBAAiB,CAAC,IAAI,EAAE,iBAAiB,GAAG,IAAI;IAKhD;;;OAGG;IACH,iBAAiB,CAAC,OAAO,EAAE,CAAC,QAAQ,EAAE,OAAO,KAAK,OAAO,CAAC,OAAO,CAAC,GAAG,OAAO,GAAG,IAAI;IAKnF;;;OAGG;IACH,aAAa,CAAC,OAAO,EAAE,CAAC,QAAQ,EAAE,OAAO,EAAE,KAAK,OAAO,CAAC,OAAO,EAAE,GAAG,IAAI,CAAC,GAAG,IAAI;IAKhF;;;OAGG;IACH,aAAa,CACX,OAAO,EAAE,CAAC,QAAQ,EAAE,kBAAkB,KAAK,OAAO,CAAC,kBAAkB,GAAG,IAAI,CAAC,GAC5E,IAAI;IAKP;;OAEG;IACG,GAAG,CAAC,OAAO,EAAE,MAAM
,GAAG,WAAW,EAAE,EAAE,OAAO,CAAC,EAAE,UAAU,GAAG,OAAO,CAAC,kBAAkB,CAAC;IAuQ7F;;OAEG;IACH,MAAM,CAAC,OAAO,EAAE,MAAM,GAAG,WAAW,EAAE,EAAE,OAAO,GAAE,UAAe,GAAG,MAAM,CAAC,SAAS,CAAC;IAWpF;;OAEG;IACH,OAAO,CAAC,aAAa;CAoDtB"}
package/dist/chat/Chat.js
CHANGED
@@ -203,6 +203,19 @@ export class Chat {
         }
         return this;
     }
+    /**
+     * Enable and configure extended thinking for reasoning models.
+     */
+    withThinking(config) {
+        this.options.thinking = { ...this.options.thinking, ...config };
+        return this;
+    }
+    /**
+     * Shortcut to set thinking effort.
+     */
+    withEffort(effort) {
+        return this.withThinking({ effort });
+    }
     // --- Event Handlers ---
     onNewMessage(handler) {
         this.options.onNewMessage = handler;
@@ -313,6 +326,7 @@ export class Chat {
             headers: { ...this.options.headers, ...options?.headers },
             response_format: responseFormat, // Pass to provider
             requestTimeout: options?.requestTimeout ?? this.options.requestTimeout ?? config.requestTimeout,
+            thinking: options?.thinking ?? this.options.thinking,
             signal: options?.signal,
             ...this.options.params
         };
@@ -344,7 +358,7 @@ export class Chat {
         this.options.onNewMessage();
         let response = await this.executor.executeChat(executeOptions);
         trackUsage(response.usage);
-        let assistantMessage = new ChatResponseString(response.content ?? "", response.usage ?? { input_tokens: 0, output_tokens: 0, total_tokens: 0 }, this.model, this.provider.id, response.reasoning, response.tool_calls);
+        let assistantMessage = new ChatResponseString(response.content ?? "", response.usage ?? { input_tokens: 0, output_tokens: 0, total_tokens: 0 }, this.model, this.provider.id, response.thinking, response.reasoning, response.tool_calls);
         // --- Content Policy Hooks (Output - Turn 1) ---
         if (this.options.onAfterResponse) {
             const result = await this.options.onAfterResponse(assistantMessage);
@@ -428,7 +442,7 @@ export class Chat {
             ...this.options.params
         });
         trackUsage(response.usage);
-        assistantMessage = new ChatResponseString(response.content ?? "", response.usage ?? { input_tokens: 0, output_tokens: 0, total_tokens: 0 }, this.model, this.provider.id, response.reasoning);
+        assistantMessage = new ChatResponseString(response.content ?? "", response.usage ?? { input_tokens: 0, output_tokens: 0, total_tokens: 0 }, this.model, this.provider.id, response.thinking, response.reasoning);
         // --- Content Policy Hooks (Output - Tool Turns) ---
         if (this.options.onAfterResponse) {
             const result = await this.options.onAfterResponse(assistantMessage);
@@ -448,7 +462,7 @@ export class Chat {
         }
         // For the final return, we might want to aggregate reasoning too if it happened in multiple turns?
         // Usually reasoning only happens once or we just want the last one.
-        return new ChatResponseString(assistantMessage.toString() || "", totalUsage, this.model, this.provider.id, assistantMessage.reasoning, response.tool_calls);
+        return new ChatResponseString(assistantMessage.toString() || "", totalUsage, this.model, this.provider.id, assistantMessage.thinking, assistantMessage.reasoning, response.tool_calls);
     }
     /**
      * Streams the model's response to a user question.
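Two details of the implementation are worth noting: `withThinking` shallow-merges into any previously set config, and the request builder resolves `options?.thinking ?? this.options.thinking`, so a per-call `AskOptions.thinking` overrides the chat-level default. A sketch of that precedence (provider, model, and prompts are illustrative):

```ts
import { createLLM } from "@node-llm/core";

const llm = createLLM({ provider: "openai" });
const chat = llm.chat("gpt-5.1").withThinking({ effort: "medium" });

// Uses the chat-level default ({ effort: "medium" }).
await chat.ask("Summarize the changelog.");

// The per-call config wins for this request only.
await chat.ask("Audit this proof for gaps.", { thinking: { effort: "high" } });
```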
package/dist/chat/ChatOptions.d.ts
CHANGED
@@ -3,7 +3,7 @@ import { ToolResolvable } from "./Tool.js";
 import { Schema } from "../schema/Schema.js";
 import { ChatResponseString } from "./ChatResponse.js";
 import { ToolExecutionMode } from "../constants.js";
-import { ResponseFormat } from "../providers/Provider.js";
+import { ResponseFormat, ThinkingConfig } from "../providers/Provider.js";
 export interface ChatOptions {
     systemPrompt?: string;
     messages?: Message[];
@@ -23,6 +23,7 @@ export interface ChatOptions {
     provider?: string;
     maxToolCalls?: number;
     requestTimeout?: number;
+    thinking?: ThinkingConfig;
     toolExecution?: ToolExecutionMode;
     onConfirmToolCall?: (toolCall: unknown) => Promise<boolean> | boolean;
     onBeforeRequest?: (messages: Message[]) => Promise<Message[] | void>;
package/dist/chat/ChatOptions.d.ts.map
CHANGED
@@ -1 +1 @@
-
{"version":3,"file":"ChatOptions.d.ts","sourceRoot":"","sources":["../../src/chat/ChatOptions.ts"],"names":[],"mappings":"AAAA,OAAO,EAAE,OAAO,EAAE,MAAM,cAAc,CAAC;AACvC,OAAO,EAAE,cAAc,EAAE,MAAM,WAAW,CAAC;AAC3C,OAAO,EAAE,MAAM,EAAE,MAAM,qBAAqB,CAAC;AAC7C,OAAO,EAAE,kBAAkB,EAAE,MAAM,mBAAmB,CAAC;AACvD,OAAO,EAAE,iBAAiB,EAAE,MAAM,iBAAiB,CAAC;AACpD,OAAO,EAAE,cAAc,EAAE,MAAM,0BAA0B,CAAC;
+
{"version":3,"file":"ChatOptions.d.ts","sourceRoot":"","sources":["../../src/chat/ChatOptions.ts"],"names":[],"mappings":"AAAA,OAAO,EAAE,OAAO,EAAE,MAAM,cAAc,CAAC;AACvC,OAAO,EAAE,cAAc,EAAE,MAAM,WAAW,CAAC;AAC3C,OAAO,EAAE,MAAM,EAAE,MAAM,qBAAqB,CAAC;AAC7C,OAAO,EAAE,kBAAkB,EAAE,MAAM,mBAAmB,CAAC;AACvD,OAAO,EAAE,iBAAiB,EAAE,MAAM,iBAAiB,CAAC;AACpD,OAAO,EAAE,cAAc,EAAE,cAAc,EAAE,MAAM,0BAA0B,CAAC;AAE1E,MAAM,WAAW,WAAW;IAC1B,YAAY,CAAC,EAAE,MAAM,CAAC;IACtB,QAAQ,CAAC,EAAE,OAAO,EAAE,CAAC;IACrB,KAAK,CAAC,EAAE,cAAc,EAAE,CAAC;IACzB,WAAW,CAAC,EAAE,MAAM,CAAC;IACrB,SAAS,CAAC,EAAE,MAAM,CAAC;IACnB,YAAY,CAAC,EAAE,MAAM,IAAI,CAAC;IAC1B,YAAY,CAAC,EAAE,CAAC,OAAO,EAAE,kBAAkB,KAAK,IAAI,CAAC;IACrD,eAAe,CAAC,EAAE,CAAC,QAAQ,EAAE,OAAO,KAAK,IAAI,CAAC;IAC9C,aAAa,CAAC,EAAE,CAAC,QAAQ,EAAE,OAAO,EAAE,MAAM,EAAE,OAAO,KAAK,IAAI,CAAC;IAC7D,eAAe,CAAC,EAAE,CAChB,QAAQ,EAAE,OAAO,EACjB,KAAK,EAAE,KAAK,KACT,MAAM,GAAG,UAAU,GAAG,OAAO,GAAG,IAAI,GAAG,OAAO,CAAC,MAAM,GAAG,UAAU,GAAG,OAAO,GAAG,IAAI,CAAC,CAAC;IAC1F,OAAO,CAAC,EAAE,MAAM,CAAC,MAAM,EAAE,MAAM,CAAC,CAAC;IACjC,MAAM,CAAC,EAAE,MAAM,CAAC;IAChB,cAAc,CAAC,EAAE,cAAc,CAAC;IAChC,MAAM,CAAC,EAAE,MAAM,CAAC,MAAM,EAAE,OAAO,CAAC,CAAC;IACjC,iBAAiB,CAAC,EAAE,OAAO,CAAC;IAC5B,QAAQ,CAAC,EAAE,MAAM,CAAC;IAClB,YAAY,CAAC,EAAE,MAAM,CAAC;IACtB,cAAc,CAAC,EAAE,MAAM,CAAC;IACxB,QAAQ,CAAC,EAAE,cAAc,CAAC;IAC1B,aAAa,CAAC,EAAE,iBAAiB,CAAC;IAClC,iBAAiB,CAAC,EAAE,CAAC,QAAQ,EAAE,OAAO,KAAK,OAAO,CAAC,OAAO,CAAC,GAAG,OAAO,CAAC;IACtE,eAAe,CAAC,EAAE,CAAC,QAAQ,EAAE,OAAO,EAAE,KAAK,OAAO,CAAC,OAAO,EAAE,GAAG,IAAI,CAAC,CAAC;IACrE,eAAe,CAAC,EAAE,CAAC,QAAQ,EAAE,kBAAkB,KAAK,OAAO,CAAC,kBAAkB,GAAG,IAAI,CAAC,CAAC;CACxF"}
package/dist/chat/ChatResponse.d.ts
CHANGED
@@ -1,4 +1,4 @@
-import { Usage } from "../providers/Provider.js";
+import { Usage, ThinkingResult } from "../providers/Provider.js";
 import { ToolCall } from "./Tool.js";
 /**
  * Enhanced string that includes token usage metadata.
@@ -8,9 +8,10 @@ export declare class ChatResponseString extends String {
     readonly usage: Usage;
     readonly model: string;
     readonly provider: string;
+    readonly thinking?: ThinkingResult | undefined;
     readonly reasoning?: string | null | undefined;
     readonly tool_calls?: ToolCall[] | undefined;
-    constructor(content: string, usage: Usage, model: string, provider: string, reasoning?: string | null | undefined, tool_calls?: ToolCall[] | undefined);
+    constructor(content: string, usage: Usage, model: string, provider: string, thinking?: ThinkingResult | undefined, reasoning?: string | null | undefined, tool_calls?: ToolCall[] | undefined);
     get input_tokens(): number;
     get output_tokens(): number;
     get total_tokens(): number;
@@ -32,6 +33,7 @@ export declare class ChatResponseString extends String {
         usage: Usage;
         model: string;
         provider: string;
+        thinking: ThinkingResult | undefined;
         reasoning: string | null | undefined;
         tool_calls: ToolCall[] | undefined;
     };
@@ -42,6 +44,7 @@ export declare class ChatResponseString extends String {
         usage: Usage;
         model: string;
         provider: string;
+        thinking: ThinkingResult | undefined;
         reasoning: string | null | undefined;
         tool_calls: ToolCall[] | undefined;
     };
package/dist/chat/ChatResponse.d.ts.map
CHANGED
@@ -1 +1 @@
-
{"version":3,"file":"ChatResponse.d.ts","sourceRoot":"","sources":["../../src/chat/ChatResponse.ts"],"names":[],"mappings":"AAAA,OAAO,EAAE,KAAK,EAAE,MAAM,0BAA0B,CAAC;
+
{"version":3,"file":"ChatResponse.d.ts","sourceRoot":"","sources":["../../src/chat/ChatResponse.ts"],"names":[],"mappings":"AAAA,OAAO,EAAE,KAAK,EAAE,cAAc,EAAE,MAAM,0BAA0B,CAAC;AACjE,OAAO,EAAE,QAAQ,EAAE,MAAM,WAAW,CAAC;AAErC;;;GAGG;AACH,qBAAa,kBAAmB,SAAQ,MAAM;aAG1B,KAAK,EAAE,KAAK;aACZ,KAAK,EAAE,MAAM;aACb,QAAQ,EAAE,MAAM;aAChB,QAAQ,CAAC,EAAE,cAAc;aACzB,SAAS,CAAC,EAAE,MAAM,GAAG,IAAI;aACzB,UAAU,CAAC,EAAE,QAAQ,EAAE;gBANvC,OAAO,EAAE,MAAM,EACC,KAAK,EAAE,KAAK,EACZ,KAAK,EAAE,MAAM,EACb,QAAQ,EAAE,MAAM,EAChB,QAAQ,CAAC,EAAE,cAAc,YAAA,EACzB,SAAS,CAAC,EAAE,MAAM,GAAG,IAAI,YAAA,EACzB,UAAU,CAAC,EAAE,QAAQ,EAAE,YAAA;IAKzC,IAAI,YAAY,WAEf;IACD,IAAI,aAAa,WAEhB;IACD,IAAI,YAAY,WAEf;IACD,IAAI,aAAa,uBAEhB;IACD,IAAI,IAAI,uBAEP;IACD,IAAI,UAAU,uBAEb;IACD,IAAI,WAAW,uBAEd;IAGD,IAAI,WAAW,WAEd;IACD,IAAI,YAAY,WAEf;IACD,IAAI,WAAW,WAEd;IACD,IAAI,YAAY,uBAEf;IAED,IAAI,OAAO,IAAI,MAAM,CAEpB;IAED,IAAI,QAAQ,IAAI,MAAM,CAErB;IAED;;;OAGG;IACH,IAAI,IAAI;;;;;;;MASP;IAED;;OAEG;IACH,IAAI,GAAG;;;;;;;MAEN;IAED,QAAQ;IAIR;;OAEG;IACH,WAAW,CAAC,UAAU,EAAE,MAAM,GAAG,kBAAkB;IAYnD;;;OAGG;IACH,IAAI,MAAM,IAAI,OAAO,CAMpB;CACF"}
package/dist/chat/ChatResponse.js
CHANGED
@@ -6,13 +6,15 @@ export class ChatResponseString extends String {
     usage;
     model;
     provider;
+    thinking;
     reasoning;
     tool_calls;
-    constructor(content, usage, model, provider, reasoning, tool_calls) {
+    constructor(content, usage, model, provider, thinking, reasoning, tool_calls) {
         super(content);
         this.usage = usage;
         this.model = model;
         this.provider = provider;
+        this.thinking = thinking;
         this.reasoning = reasoning;
         this.tool_calls = tool_calls;
     }
@@ -65,6 +67,7 @@ export class ChatResponseString extends String {
             usage: this.usage,
             model: this.model,
             provider: this.provider,
+            thinking: this.thinking,
             reasoning: this.reasoning,
             tool_calls: this.tool_calls
         };
@@ -82,7 +85,7 @@ export class ChatResponseString extends String {
      * Return a new ChatResponseString with modified content but preserved metadata.
      */
     withContent(newContent) {
-        return new ChatResponseString(newContent, this.usage, this.model, this.provider, this.reasoning, this.tool_calls);
+        return new ChatResponseString(newContent, this.usage, this.model, this.provider, this.thinking, this.reasoning, this.tool_calls);
     }
     /**
      * Attempt to parse the content as JSON.
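Because `thinking` is inserted positionally between `provider` and `reasoning`, code that constructs `ChatResponseString` directly (rather than receiving one from `ask()`) must add the new argument. A sketch of the new ordering; the deep `dist/` import path is illustrative and assumes deep imports are reachable in your setup:

```ts
import { ChatResponseString } from "@node-llm/core/dist/chat/ChatResponse.js";

// 1.6.x: (content, usage, model, provider, reasoning, tool_calls)
// 1.7.0: (content, usage, model, provider, thinking, reasoning, tool_calls)
const msg = new ChatResponseString(
  "42",
  { input_tokens: 10, output_tokens: 1, total_tokens: 11 },
  "gpt-4o",
  "openai",
  undefined, // thinking (ThinkingResult), populated by thinking-capable models
  null,      // reasoning
  undefined  // tool_calls
);

console.log(msg.thinking); // undefined in this stub
```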
package/dist/chat/ChatStream.d.ts.map
CHANGED
@@ -1 +1 @@
-
{"version":3,"file":"ChatStream.d.ts","sourceRoot":"","sources":["../../src/chat/ChatStream.ts"],"names":[],"mappings":"AAAA,OAAO,EAAE,OAAO,EAAE,MAAM,cAAc,CAAC;AACvC,OAAO,EACL,WAAW,EAIZ,MAAM,cAAc,CAAC;AACtB,OAAO,EAAE,WAAW,EAAE,MAAM,kBAAkB,CAAC;AAC/C,OAAO,EAAE,QAAQ,EAAE,SAAS,
+
{"version":3,"file":"ChatStream.d.ts","sourceRoot":"","sources":["../../src/chat/ChatStream.ts"],"names":[],"mappings":"AAAA,OAAO,EAAE,OAAO,EAAE,MAAM,cAAc,CAAC;AACvC,OAAO,EACL,WAAW,EAIZ,MAAM,cAAc,CAAC;AACtB,OAAO,EAAE,WAAW,EAAE,MAAM,kBAAkB,CAAC;AAC/C,OAAO,EAAE,QAAQ,EAAE,SAAS,EAAyB,MAAM,0BAA0B,CAAC;AAEtF,OAAO,EAAE,MAAM,EAAE,MAAM,wBAAwB,CAAC;AAGhD,OAAO,EAAE,UAAU,EAAE,MAAM,WAAW,CAAC;AASvC;;;GAGG;AACH,qBAAa,UAAU;IAKnB,OAAO,CAAC,QAAQ,CAAC,QAAQ;IACzB,OAAO,CAAC,QAAQ,CAAC,KAAK;IACtB,OAAO,CAAC,QAAQ,CAAC,OAAO;IAN1B,OAAO,CAAC,QAAQ,CAAY;IAC5B,OAAO,CAAC,cAAc,CAAY;gBAGf,QAAQ,EAAE,QAAQ,EAClB,KAAK,EAAE,MAAM,EACb,OAAO,GAAE,WAAgB,EAC1C,QAAQ,CAAC,EAAE,OAAO,EAAE,EACpB,cAAc,CAAC,EAAE,OAAO,EAAE;IA6B5B,IAAI,OAAO,IAAI,SAAS,OAAO,EAAE,CAEhC;IAED,MAAM,CAAC,OAAO,EAAE,MAAM,GAAG,WAAW,EAAE,EAAE,OAAO,GAAE,UAAe,GAAG,MAAM,CAAC,SAAS,CAAC;CA4QrF"}
|