@node-llm/core 1.5.2 → 1.6.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +84 -110
- package/dist/aliases.js +239 -239
- package/dist/chat/Chat.d.ts +18 -17
- package/dist/chat/Chat.d.ts.map +1 -1
- package/dist/chat/Chat.js +63 -42
- package/dist/chat/ChatOptions.d.ts +8 -9
- package/dist/chat/ChatOptions.d.ts.map +1 -1
- package/dist/chat/ChatResponse.d.ts +1 -1
- package/dist/chat/ChatResponse.d.ts.map +1 -1
- package/dist/chat/ChatResponse.js +22 -8
- package/dist/chat/ChatStream.d.ts.map +1 -1
- package/dist/chat/ChatStream.js +16 -22
- package/dist/chat/Content.d.ts +3 -3
- package/dist/chat/Content.d.ts.map +1 -1
- package/dist/chat/Content.js +3 -6
- package/dist/chat/Message.d.ts +3 -1
- package/dist/chat/Message.d.ts.map +1 -1
- package/dist/chat/Role.d.ts.map +1 -1
- package/dist/chat/Tool.d.ts +15 -15
- package/dist/chat/Tool.d.ts.map +1 -1
- package/dist/chat/Tool.js +9 -7
- package/dist/chat/ToolHandler.d.ts +4 -3
- package/dist/chat/ToolHandler.d.ts.map +1 -1
- package/dist/chat/ToolHandler.js +10 -14
- package/dist/chat/Validation.d.ts.map +1 -1
- package/dist/chat/Validation.js +13 -6
- package/dist/config.d.ts +4 -0
- package/dist/config.d.ts.map +1 -1
- package/dist/config.js +80 -25
- package/dist/constants.js +1 -1
- package/dist/errors/index.d.ts +21 -7
- package/dist/errors/index.d.ts.map +1 -1
- package/dist/errors/index.js +14 -0
- package/dist/index.d.ts +1 -1
- package/dist/index.d.ts.map +1 -1
- package/dist/index.js +1 -1
- package/dist/llm.d.ts +44 -46
- package/dist/llm.d.ts.map +1 -1
- package/dist/llm.js +206 -134
- package/dist/model_aliases.d.ts.map +1 -1
- package/dist/models/ModelRegistry.d.ts.map +1 -1
- package/dist/models/ModelRegistry.js +12 -10
- package/dist/models/PricingRegistry.d.ts +31 -0
- package/dist/models/PricingRegistry.d.ts.map +1 -0
- package/dist/models/PricingRegistry.js +101 -0
- package/dist/models/models.d.ts.map +1 -1
- package/dist/models/models.js +5954 -7352
- package/dist/models/types.d.ts +37 -34
- package/dist/models/types.d.ts.map +1 -1
- package/dist/moderation/Moderation.d.ts.map +1 -1
- package/dist/moderation/Moderation.js +15 -5
- package/dist/providers/BaseProvider.d.ts +12 -8
- package/dist/providers/BaseProvider.d.ts.map +1 -1
- package/dist/providers/BaseProvider.js +17 -7
- package/dist/providers/Provider.d.ts +20 -5
- package/dist/providers/Provider.d.ts.map +1 -1
- package/dist/providers/anthropic/AnthropicProvider.d.ts +1 -1
- package/dist/providers/anthropic/AnthropicProvider.d.ts.map +1 -1
- package/dist/providers/anthropic/AnthropicProvider.js +3 -3
- package/dist/providers/anthropic/Capabilities.d.ts +2 -1
- package/dist/providers/anthropic/Capabilities.d.ts.map +1 -1
- package/dist/providers/anthropic/Capabilities.js +3 -20
- package/dist/providers/anthropic/Chat.d.ts.map +1 -1
- package/dist/providers/anthropic/Chat.js +27 -17
- package/dist/providers/anthropic/Errors.d.ts.map +1 -1
- package/dist/providers/anthropic/Errors.js +5 -2
- package/dist/providers/anthropic/Models.d.ts.map +1 -1
- package/dist/providers/anthropic/Models.js +6 -6
- package/dist/providers/anthropic/Streaming.d.ts.map +1 -1
- package/dist/providers/anthropic/Streaming.js +17 -12
- package/dist/providers/anthropic/Utils.js +8 -5
- package/dist/providers/anthropic/index.d.ts.map +1 -1
- package/dist/providers/anthropic/index.js +4 -3
- package/dist/providers/anthropic/types.d.ts +11 -4
- package/dist/providers/anthropic/types.d.ts.map +1 -1
- package/dist/providers/deepseek/Capabilities.d.ts +7 -5
- package/dist/providers/deepseek/Capabilities.d.ts.map +1 -1
- package/dist/providers/deepseek/Capabilities.js +9 -5
- package/dist/providers/deepseek/Chat.d.ts.map +1 -1
- package/dist/providers/deepseek/Chat.js +10 -9
- package/dist/providers/deepseek/DeepSeekProvider.d.ts +1 -1
- package/dist/providers/deepseek/DeepSeekProvider.d.ts.map +1 -1
- package/dist/providers/deepseek/DeepSeekProvider.js +4 -4
- package/dist/providers/deepseek/Models.d.ts.map +1 -1
- package/dist/providers/deepseek/Models.js +7 -7
- package/dist/providers/deepseek/Streaming.d.ts.map +1 -1
- package/dist/providers/deepseek/Streaming.js +11 -8
- package/dist/providers/deepseek/index.d.ts.map +1 -1
- package/dist/providers/deepseek/index.js +5 -4
- package/dist/providers/gemini/Capabilities.d.ts +5 -33
- package/dist/providers/gemini/Capabilities.d.ts.map +1 -1
- package/dist/providers/gemini/Capabilities.js +7 -30
- package/dist/providers/gemini/Chat.d.ts.map +1 -1
- package/dist/providers/gemini/Chat.js +24 -19
- package/dist/providers/gemini/ChatUtils.d.ts.map +1 -1
- package/dist/providers/gemini/ChatUtils.js +10 -10
- package/dist/providers/gemini/Embeddings.d.ts.map +1 -1
- package/dist/providers/gemini/Embeddings.js +2 -2
- package/dist/providers/gemini/Errors.d.ts.map +1 -1
- package/dist/providers/gemini/Errors.js +5 -2
- package/dist/providers/gemini/GeminiProvider.d.ts +1 -1
- package/dist/providers/gemini/GeminiProvider.d.ts.map +1 -1
- package/dist/providers/gemini/GeminiProvider.js +3 -3
- package/dist/providers/gemini/Image.d.ts.map +1 -1
- package/dist/providers/gemini/Image.js +8 -8
- package/dist/providers/gemini/Models.d.ts.map +1 -1
- package/dist/providers/gemini/Models.js +6 -6
- package/dist/providers/gemini/Streaming.d.ts.map +1 -1
- package/dist/providers/gemini/Streaming.js +18 -14
- package/dist/providers/gemini/Transcription.d.ts.map +1 -1
- package/dist/providers/gemini/Transcription.js +11 -11
- package/dist/providers/gemini/index.d.ts +1 -1
- package/dist/providers/gemini/index.d.ts.map +1 -1
- package/dist/providers/gemini/index.js +5 -4
- package/dist/providers/gemini/types.d.ts +4 -4
- package/dist/providers/gemini/types.d.ts.map +1 -1
- package/dist/providers/ollama/Capabilities.d.ts.map +1 -1
- package/dist/providers/ollama/Capabilities.js +6 -2
- package/dist/providers/ollama/Models.js +1 -1
- package/dist/providers/ollama/OllamaProvider.d.ts +1 -1
- package/dist/providers/ollama/OllamaProvider.d.ts.map +1 -1
- package/dist/providers/ollama/OllamaProvider.js +2 -2
- package/dist/providers/ollama/index.d.ts +1 -1
- package/dist/providers/ollama/index.d.ts.map +1 -1
- package/dist/providers/ollama/index.js +7 -3
- package/dist/providers/openai/Capabilities.d.ts +2 -1
- package/dist/providers/openai/Capabilities.d.ts.map +1 -1
- package/dist/providers/openai/Capabilities.js +9 -21
- package/dist/providers/openai/Chat.d.ts.map +1 -1
- package/dist/providers/openai/Chat.js +18 -14
- package/dist/providers/openai/Embedding.d.ts.map +1 -1
- package/dist/providers/openai/Embedding.js +11 -7
- package/dist/providers/openai/Errors.d.ts.map +1 -1
- package/dist/providers/openai/Errors.js +5 -2
- package/dist/providers/openai/Image.d.ts.map +1 -1
- package/dist/providers/openai/Image.js +6 -6
- package/dist/providers/openai/Models.d.ts +1 -1
- package/dist/providers/openai/Models.d.ts.map +1 -1
- package/dist/providers/openai/Models.js +12 -8
- package/dist/providers/openai/Moderation.d.ts.map +1 -1
- package/dist/providers/openai/Moderation.js +6 -6
- package/dist/providers/openai/OpenAIProvider.d.ts +2 -3
- package/dist/providers/openai/OpenAIProvider.d.ts.map +1 -1
- package/dist/providers/openai/OpenAIProvider.js +4 -4
- package/dist/providers/openai/Streaming.d.ts.map +1 -1
- package/dist/providers/openai/Streaming.js +18 -13
- package/dist/providers/openai/Transcription.d.ts.map +1 -1
- package/dist/providers/openai/Transcription.js +15 -12
- package/dist/providers/openai/index.d.ts +1 -1
- package/dist/providers/openai/index.d.ts.map +1 -1
- package/dist/providers/openai/index.js +6 -5
- package/dist/providers/openai/types.d.ts +1 -1
- package/dist/providers/openai/utils.js +2 -2
- package/dist/providers/openrouter/Capabilities.d.ts +3 -3
- package/dist/providers/openrouter/Capabilities.d.ts.map +1 -1
- package/dist/providers/openrouter/Capabilities.js +21 -24
- package/dist/providers/openrouter/Models.d.ts.map +1 -1
- package/dist/providers/openrouter/Models.js +20 -16
- package/dist/providers/openrouter/OpenRouterProvider.d.ts.map +1 -1
- package/dist/providers/openrouter/OpenRouterProvider.js +1 -1
- package/dist/providers/openrouter/index.d.ts +1 -1
- package/dist/providers/openrouter/index.d.ts.map +1 -1
- package/dist/providers/openrouter/index.js +6 -5
- package/dist/providers/registry.d.ts +18 -2
- package/dist/providers/registry.d.ts.map +1 -1
- package/dist/providers/registry.js +17 -2
- package/dist/providers/utils.js +1 -1
- package/dist/schema/Schema.d.ts +3 -3
- package/dist/schema/Schema.d.ts.map +1 -1
- package/dist/schema/Schema.js +2 -2
- package/dist/schema/to-json-schema.d.ts +1 -1
- package/dist/schema/to-json-schema.d.ts.map +1 -1
- package/dist/streaming/Stream.d.ts.map +1 -1
- package/dist/streaming/Stream.js +3 -3
- package/dist/utils/Binary.d.ts.map +1 -1
- package/dist/utils/Binary.js +32 -23
- package/dist/utils/FileLoader.d.ts.map +1 -1
- package/dist/utils/FileLoader.js +25 -4
- package/dist/utils/audio.js +2 -2
- package/dist/utils/fetch.d.ts.map +1 -1
- package/dist/utils/fetch.js +14 -4
- package/dist/utils/logger.d.ts +3 -3
- package/dist/utils/logger.d.ts.map +1 -1
- package/dist/utils/logger.js +2 -2
- package/package.json +1 -1
package/README.md
CHANGED
@@ -1,16 +1,16 @@
 <p align="left">
   <a href="https://node-llm.eshaiju.com/">
-    <img src="
+    <img src="docs/assets/images/logo.jpg" alt="NodeLLM logo" width="300" />
   </a>
 </p>
 
 # NodeLLM
 
-**An
+**An architectural layer for integrating Large Language Models in Node.js.**
 
 **Provider-agnostic by design.**
 
-
+Integrating multiple LLM providers often means juggling different SDKs, API styles, and update cycles. NodeLLM provides a single, unified, production-oriented API for interacting with over 540+ models across multiple providers (OpenAI, Gemini, Anthropic, DeepSeek, OpenRouter, Ollama, etc.) that stays consistent even when providers change.
 
 <p align="left">
   <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/openai.svg" height="28" />
@@ -44,22 +44,24 @@ Most LLM SDKs **tightly couple** your application to vendors, APIs, and churn. N
 NodeLLM represents a clear architectural boundary between your system and LLM vendors.
 
 NodeLLM is **NOT**:
-
-
-
+
+- A wrapper around a single provider SDK (like `openai` or `@google/generative-ai`)
+- A prompt-engineering framework
+- An agent playground or experimental toy
 
 ---
 
 ## 🏗️ Why NodeLLM?
 
-
+Direct integrations often become tightly coupled to specific providers, making it difficult to adapt as models evolve. **LLMs should be treated as infrastructure**, and NodeLLM helps you build a stable foundation that persists regardless of which model is currently "state of the art."
 
-NodeLLM
+NodeLLM helps solve **architectural problems**, not just provide API access. It serves as the core integration layer for LLMs in the Node.js ecosystem.
 
 ### Strategic Goals
+
 - **Provider Isolation**: Decouple your services from vendor SDKs.
-- **Production-Ready**: Native support for streaming, retries, and unified error handling.
-- **Predictable API**: Consistent behavior for Tools, Vision, and Structured Outputs across all models
+- **Production-Ready**: Native support for streaming, automatic retries, and unified error handling.
+- **Predictable API**: Consistent behavior for Tools, Vision, and Structured Outputs across all models, **now including full parity for streaming**.
 
 ---
 
@@ -68,11 +70,10 @@ NodeLLM exists to solve **architectural problems**, not just provide API access.
 ```ts
 import { NodeLLM } from "@node-llm/core";
 
-// 1.
-NodeLLM.
+// 1. Zero-Config (NodeLLM automatically reads NODELLM_PROVIDER and API keys)
+const chat = NodeLLM.chat("gpt-4o");
 
 // 2. Chat (High-level request/response)
-const chat = NodeLLM.chat("gpt-4o");
 const response = await chat.ask("Explain event-driven architecture");
 console.log(response.content);
 
@@ -82,32 +83,34 @@ for await (const chunk of chat.stream("Explain event-driven architecture")) {
 }
 ```
 
+### 🎯 Real-World Example: Brand Perception Checker
+
+Built with NodeLLM - Multi-provider AI analysis, tool calling, and structured outputs working together.
+
+**[View Example →](examples/brand-perception-checker/)**
 
 ---
 
 ## 🔧 Strategic Configuration
 
-NodeLLM provides a flexible configuration system designed for enterprise usage
+NodeLLM provides a flexible, **lazy-initialized** configuration system designed for enterprise usage. It is safe for ESM and resolved only when your first request is made, eliminating the common `dotenv` race condition.
 
 ```ts
 // Recommended for multi-provider pipelines
-
-
-
-
+const llm = createLLM({
+  openaiApiKey: process.env.OPENAI_API_KEY,
+  anthropicApiKey: process.env.ANTHROPIC_API_KEY,
+  ollamaApiBase: process.env.OLLAMA_API_BASE
 });
 
-// Switch providers at the framework level
-NodeLLM.configure({ provider: "anthropic" });
-
 // Support for Custom Endpoints (e.g., Azure or LocalAI)
-
+const llm = createLLM({
   openaiApiKey: process.env.AZURE_KEY,
-  openaiApiBase: "https://your-resource.openai.azure.com/openai/deployments/..."
+  openaiApiBase: "https://your-resource.openai.azure.com/openai/deployments/..."
 });
 ```
 
-**[Full Configuration Guide →](
+**[Full Configuration Guide →](docs/getting_started/configuration.md)**
 
 ---
 
@@ -116,21 +119,29 @@ NodeLLM.configure({
 ## 🔮 Capabilities
 
 ### 💬 Unified Chat
+
 Stop rewriting code for every provider. `NodeLLM` normalizes inputs and outputs into a single, predictable mental model.
+
 ```ts
-
+import { NodeLLM } from "@node-llm/core";
+
+// Uses NODELLM_PROVIDER from environment (defaults to GPT-4o)
+const chat = NodeLLM.chat();
 await chat.ask("Hello world");
 ```
 
 ### 👁️ Smart Vision & Files
-
+
+Pass images, PDFs, or audio files directly to **both `ask()` and `stream()`**. We handle the heavy lifting: fetching remote URLs, base64 encoding, and MIME type mapping.
+
 ```ts
-await chat.ask("Analyze this interface", {
-  files: ["./screenshot.png", "https://example.com/spec.pdf"]
+await chat.ask("Analyze this interface", {
+  files: ["./screenshot.png", "https://example.com/spec.pdf"]
 });
 ```
 
 ### 🛠️ Auto-Executing Tools
+
 Define tools once;`NodeLLM` manages the recursive execution loop for you, keeping your controller logic clean. **Works seamlessly with both regular chat and streaming!**
 
 ```ts
@@ -142,45 +153,24 @@ class WeatherTool extends Tool {
   description = "Get current weather";
   schema = z.object({ location: z.string() });
 
-  async
-    return `Sunny in ${location}`;
+  async execute({ location }) {
+    return `Sunny in ${location}`;
   }
 }
 
 // Now the model can use it automatically
 await chat.withTool(WeatherTool).ask("What's the weather in Tokyo?");
-```
-
-### 🛡️ Loop Protection & Resource Limits
-Prevent runaway costs, infinite loops, and hanging requests with comprehensive protection against resource exhaustion.
-
-NodeLLM provides **defense-in-depth** security that you can configure globally or per-request:
-
-```ts
-// 1. Global config
-NodeLLM.configure({
-  requestTimeout: 30000, // Timeout requests after 30 seconds (default)
-  maxToolCalls: 5, // Stop after 5 sequential tool execution turns
-  maxRetries: 2, // Retry provider-level errors up to 2 times
-  maxTokens: 4096 // Limit output to 4K tokens (default)
-});
 
-//
-
-  requestTimeout: 120000, // 2 minutes for this request
-  maxToolCalls: 10,
-  maxTokens: 8192 // 8K tokens for this request
-});
+// Lifecycle Hooks for Error & Flow Control
+chat.onToolCallError((call, err) => "STOP");
 ```
 
-**
-- **`requestTimeout`**: Prevents DoS attacks and hanging requests
-- **`maxToolCalls`**: Prevents infinite tool execution loops
-- **`maxRetries`**: Prevents retry storms during outages
-- **`maxTokens`**: Prevents excessive output and cost overruns
+**[Full Tool Calling Guide →](https://node-llm.eshaiju.com/core-features/tool-calling)**
 
 ### 🔍 Comprehensive Debug Logging
+
 Enable detailed logging for all API requests and responses across every feature and provider:
+
 ```ts
 // Set environment variable
 process.env.NODELLM_DEBUG = "true";
@@ -191,46 +181,13 @@ process.env.NODELLM_DEBUG = "true";
 // [NodeLLM] [OpenAI] Response: 200 OK
 // { "id": "chatcmpl-123", ... }
 ```
-**Covers:** Chat, Streaming, Images, Embeddings, Transcription, Moderation - across all providers!
-
-### 🛡️ Content Policy Hooks
-NodeLLM provides pluggable hooks to implement custom security, compliance, and moderation logic. Instead of hard-coded rules, you can inject your own policies at the edge.
-
-- **`beforeRequest()`**: Intercept and modify messages before they hit the LLM (e.g., PII detection/redaction).
-- **`afterResponse()`**: Process the final response before it returns to your code (e.g., output masking or compliance checks).
-
-```ts
-chat
-  .beforeRequest(async (messages) => {
-    // Detect PII and redact
-    return redactSSN(messages);
-  })
-  .afterResponse(async (response) => {
-    // Ensure output compliance
-    return response.withContent(maskSensitiveData(response.content));
-  });
-```
-
-### 🧱 Smart Context Isolation
-Stop worrying about prompt injection or instruction drift. NodeLLM automatically separates system instructions from the conversation history, providing a higher level of protection and strictness.
 
-
-- **Smart Model Mapping**: Automatically uses OpenAI's modern `developer` role for compatible models (GPT-4o, o1, o3) while safely falling back to the standard `system` role for older or local models (Ollama, DeepSeek, etc.).
-- **Universal Context**: Instructions stay separated internally, ensuring they are always prioritized by the model and never accidentally overridden by user messages.
-- **Provider Agnostic**: Write instructions once; NodeLLM handles the specific role requirements for every major provider (OpenAI, Anthropic, Gemini).
-
-### 🔍 Observability & Tool Auditing
-For enterprise compliance, NodeLLM provides deep visibility into the tool execution lifecycle. You can monitor, log, and audit every step of a tool's execution.
-
-```ts
-chat
-  .onToolCallStart((call) => log(`Starting tool: ${call.function.name}`))
-  .onToolCallEnd((call, res) => log(`Tool ${call.id} finished with: ${res}`))
-  .onToolCallError((call, err) => alert(`Tool ${call.function.name} failed: ${err.message}`));
-```
+**Covers:** Chat, Streaming, Images, Embeddings, Transcription, Moderation - across all providers!
 
 ### ✨ Structured Output
+
 Get type-safe, validated JSON back using **Zod** schemas.
+
 ```ts
 import { z } from "@node-llm/core";
 const Product = z.object({ name: z.string(), price: z.number() });
@@ -240,27 +197,33 @@ console.log(res.parsed.name); // Full type-safety
 ```
 
 ### 🎨 Image Generation
+
 ```ts
 await NodeLLM.paint("A cyberpunk city in rain");
 ```
 
 ### 🎤 Audio Transcription
+
 ```ts
 await NodeLLM.transcribe("meeting-recording.wav");
 ```
 
 ### ⚡ Scoped Parallelism
+
 Run multiple providers in parallel safely without global configuration side effects using isolated contexts.
+
 ```ts
 const [gpt, claude] = await Promise.all([
   // Each call branch off into its own isolated context
   NodeLLM.withProvider("openai").chat("gpt-4o").ask(prompt),
-  NodeLLM.withProvider("anthropic").chat("claude-3-5-sonnet").ask(prompt)
+  NodeLLM.withProvider("anthropic").chat("claude-3-5-sonnet").ask(prompt)
 ]);
 ```
 
 ### 🧠 Deep Reasoning
+
 Direct access to the thought process of models like **DeepSeek R1** or **OpenAI o1/o3** using the `.reasoning` field.
+
 ```ts
 const res = await NodeLLM.chat("deepseek-reasoner").ask("Solve this logical puzzle");
 console.log(res.reasoning); // Chain-of-thought
@@ -270,27 +233,27 @@ console.log(res.reasoning); // Chain-of-thought
 
 ## 🚀 Why use this over official SDKs?
 
-| Feature
-|
-| **Provider Logic**
-| **Streaming**
-| **Streaming + Tools** | Automated Execution
-| **Tool Loops**
-| **Files/Vision**
-| **Configuration**
+| Feature               | NodeLLM                       | Official SDKs               | Architectural Impact      |
+| :-------------------- | :---------------------------- | :-------------------------- | :------------------------ |
+| **Provider Logic**    | Transparently Handled         | Exposed to your code        | **Low Coupling**          |
+| **Streaming**         | Standard `AsyncIterator`      | Vendor-specific Events      | **Predictable Data Flow** |
+| **Streaming + Tools** | Automated Execution           | Manual implementation       | **Seamless UX**           |
+| **Tool Loops**        | Automated Recursion           | Manual implementation       | **Reduced Boilerplate**   |
+| **Files/Vision**      | Intelligent Path/URL handling | Base64/Buffer management    | **Cleaner Service Layer** |
+| **Configuration**     | Centralized & Global          | Per-instance initialization | **Easier Lifecycle Mgmt** |
 
 ---
 
 ## 📋 Supported Providers
 
-| Provider
-|
-| <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/openai.svg" height="18"> **OpenAI**
-| <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/gemini-color.svg" height="18"> **Gemini**
-| <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/anthropic-text.svg" height="12"> **Anthropic** | Chat, **Streaming + Tools**, Vision, PDF, Structured Output
-| <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/deepseek-color.svg" height="18"> **DeepSeek**
-| <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/openrouter.svg" height="18"> **OpenRouter**
-| <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/ollama.svg" height="18"> **Ollama**
+| Provider | Supported Features |
+| :----------------------------------------------------------------------------------------------------------------------------------- | :------------------------------------------------------------------------------- |
+| <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/openai.svg" height="18"> **OpenAI** | Chat, **Streaming + Tools**, Vision, Audio, Images, Transcription, **Reasoning** |
+| <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/gemini-color.svg" height="18"> **Gemini** | Chat, **Streaming + Tools**, Vision, Audio, Video, Embeddings |
+| <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/anthropic-text.svg" height="12"> **Anthropic** | Chat, **Streaming + Tools**, Vision, PDF, Structured Output |
+| <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/deepseek-color.svg" height="18"> **DeepSeek** | Chat (V3), **Reasoning (R1)**, **Streaming + Tools** |
+| <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/openrouter.svg" height="18"> **OpenRouter** | **Aggregator**, Chat, Streaming, Tools, Vision, Embeddings, **Reasoning** |
+| <img src="https://registry.npmmirror.com/@lobehub/icons-static-svg/latest/files/icons/ollama.svg" height="18"> **Ollama** | **Local Inference**, Chat, Streaming, Tools, Vision, Embeddings |
 
 ---
 
@@ -302,11 +265,22 @@ npm install @node-llm/core
 
 **[View Full Documentation ↗](https://node-llm.eshaiju.com/)**
 
+### 🍿 Try the Live Demo
+
+Want to see it in action? Run this in your terminal:
+
+```bash
+git clone https://github.com/node-llm/node-llm.git
+cd node-llm
+npm install
+npm run demo
+```
+
 ---
 
 ## 🤝 Contributing
 
-We welcome contributions! Please see our **[Contributing Guide](
+We welcome contributions! Please see our **[Contributing Guide](CONTRIBUTING.md)** for more details on how to get started.
 
 ---
 
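The headline change in this README diff is tool calling parity for streaming. Below is a minimal sketch of how the snippets above compose; it assumes `Tool` is exported from the package root alongside `NodeLLM` and `z`, that the builder methods chain, and that stream chunks expose a `content` field — none of which is confirmed by this diff.

```ts
import { NodeLLM, Tool, z } from "@node-llm/core"; // `Tool` export is an assumption

// Tool definition mirroring the README snippet above.
class WeatherTool extends Tool {
  description = "Get current weather";
  schema = z.object({ location: z.string() });

  async execute({ location }) {
    return `Sunny in ${location}`;
  }
}

const chat = NodeLLM.chat("gpt-4o")
  .withTool(WeatherTool)                   // auto-executing tool loop
  .onToolCallError((call, err) => "STOP"); // lifecycle hook from the new README

// "Streaming + Tools: Automated Execution" — the tool loop runs inside the stream.
for await (const chunk of chat.stream("What's the weather in Tokyo?")) {
  process.stdout.write(chunk.content ?? ""); // chunk shape assumed, not confirmed
}
```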