llmjs2 1.0.8 → 1.1.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/LICENSE ADDED
@@ -0,0 +1,21 @@
+ MIT License
+
+ Copyright (c) 2026 littlellmjs
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
package/README.md CHANGED
@@ -1,129 +1,357 @@
  # llmjs2

- `llmjs2` is a lightweight llm wrapper for building simple / personal AI applications.
+ A lightweight LLM library for Node.js for building simple / personal AI applications.

- ## Features
+ ## Supported Providers

- - Zero runtime dependencies
- - ESM-only
- - OpenAI-compatible `messages` schema
- - Automatic provider routing via `provider/model-name`
- - Default fallback to `https://api.ollama.com`
- - Clear connection and model errors
+ - **Ollama** - Connect to Ollama's cloud API
+ - **OpenRouter** - Access multiple LLM models through OpenRouter

  ## Installation

- No dependencies are required. Install or link the package as usual:
-
  ```bash
  npm install llmjs2
  ```

  ## Usage

- ```js
+ llmjs2 supports three calling conventions:
+
+ ### Simple API (Auto-Detection)
+
+ ```javascript
  import { completion } from 'llmjs2';

- const response = await completion('ollama/llama3', 'Write a short poem about Node.js.');
- console.log(response);
+ // Just provide a prompt - the library handles the rest
+ const result = await completion('Explain the use of llmjs2');
+
+ // Or provide a model and prompt
+ const resultWithModel = await completion('ollama/minimax-m2.5:cloud', 'Explain the use of llmjs2');
  ```

- Or with a full options object:
+ **How it works:**
+ - Looks for `OLLAMA_API_KEY` and `OPEN_ROUTER_API_KEY` environment variables
+ - If only one is set, uses that provider
+ - If both are set, randomly chooses one
+ - Uses `OLLAMA_DEFAULT_MODEL` or defaults to `minimax-m2.5:cloud` for Ollama
+ - Uses `OPEN_ROUTER_DEFAULT_MODEL` or defaults to `openrouter/free` for OpenRouter
+ - If a model is provided, uses that model instead of the default
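The selection rules above can be sketched roughly as follows. This is an illustrative sketch of the documented behavior, not the library's actual internals; `pickProvider` is a hypothetical name.

```javascript
// Sketch of the auto-detection rules described above (illustrative only).
function pickProvider(env = process.env) {
  const hasOllama = Boolean(env.OLLAMA_API_KEY);
  const hasOpenRouter = Boolean(env.OPEN_ROUTER_API_KEY);

  if (!hasOllama && !hasOpenRouter) {
    throw new Error('No API key found: set OLLAMA_API_KEY or OPEN_ROUTER_API_KEY');
  }
  if (hasOllama && hasOpenRouter) {
    // Both keys set: choose one at random
    return Math.random() < 0.5 ? 'ollama' : 'openrouter';
  }
  return hasOllama ? 'ollama' : 'openrouter';
}
```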
+
+ ### Function-Based API

- ```js
+ ```javascript
  import { completion } from 'llmjs2';

- const response = await completion({
-   model: 'ollama/llama3',
-   messages: [
-     { role: 'system', content: 'You are a helpful assistant.' },
-     { role: 'user', content: 'Summarize the benefits of zero dependencies.' },
-   ],
- });
+ // Using Ollama
+ const resultOllama = await completion('ollama/minimax-m2.5:cloud', 'Explain the use of llmjs2', 'your-api-key');

- console.log(response);
+ // Using OpenRouter
+ const resultOR = await completion('openrouter/openrouter/free', 'Explain the use of llmjs2', 'your-api-key');
  ```

- ## generate()
-
- The library also exposes `generate()` for prompt-based flows with optional images, references, system instructions, and tools.
+ ### Object-Based API

- ```js
- import { generate } from 'llmjs2';
+ ```javascript
+ import { completion } from 'llmjs2';

- const response = await generate({
-   model: 'ollama/llama3',
-   userPrompt: 'Describe this picture.',
-   images: ['https://example.com/image.png'],
-   references: ['Some reference text about the image.'],
-   systemPrompt: 'You are a visual assistant.',
-   tools: [
-     {
-       name: 'get_weather',
-       description: 'Get the current weather for a location',
-       parameters: {
-         location: { type: 'string', required: true, description: 'City and state' },
-       },
-       handler: ({ location }) => `Weather in ${location}: Sunny`,
-     },
+ // Using Ollama with system message
+ const resultOllama = await completion({
+   model: 'ollama/minimax-m2.5:cloud',
+   messages: [
+     { role: 'system', content: 'You are a helpful AI assistant.' },
+     { role: 'user', content: 'Explain the use of llmjs2.' }
    ],
+   apiKey: 'your-api-key' // optional
  });

- console.log(response);
+ // Using OpenRouter with system message
+ const resultOR = await completion({
+   model: 'openrouter/openrouter/free',
+   messages: [
+     { role: 'system', content: 'You are a helpful AI assistant.' },
+     { role: 'user', content: 'Explain the use of llmjs2.' }
+   ],
+   apiKey: 'your-api-key' // optional
+ });
  ```

- You can also pass a message history directly:
+ ## Tools Support
+
+ llmjs2 supports function calling (tools) through the object-based API:

- ```js
- import { generate } from 'llmjs2';
+ ```javascript
+ import { completion } from 'llmjs2';

- const response = await generate({
-   model: 'ollama/llama3',
+ const result = await completion({
+   model: 'openrouter/openrouter/free',
    messages: [
-     { role: 'system', content: 'You are a helpful assistant.' },
-     { role: 'user', content: 'Use a tool if needed.' },
+     { role: 'user', content: 'What is the weather like in Paris?' }
    ],
    tools: [
      {
-       name: 'get_weather',
-       description: 'Get the current weather for a location',
-       parameters: {
-         location: { type: 'string', required: true, description: 'City and state' },
-       },
-       handler: ({ location }) => `Weather in ${location}: Sunny`,
-     },
-   ],
+       type: 'function',
+       function: {
+         name: 'get_weather',
+         description: 'Get the current weather in a given location',
+         parameters: {
+           type: 'object',
+           properties: {
+             location: {
+               type: 'string',
+               description: 'The city and state, e.g. San Francisco, CA'
+             },
+             unit: {
+               type: 'string',
+               enum: ['celsius', 'fahrenheit'],
+               description: 'The temperature unit to use'
+             }
+           },
+           required: ['location']
+         }
+       }
+     }
+   ]
  });

- console.log(response);
+ // Result when tools are used:
+ // {
+ //   content: '',
+ //   tool_calls: [
+ //     {
+ //       id: 'call_123',
+ //       type: 'function',
+ //       function: {
+ //         name: 'get_weather',
+ //         arguments: '{"location": "Paris, France"}'
+ //       }
+ //     }
+ //   ]
+ // }
  ```
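Given the result shape shown above, the caller is responsible for executing the requested function. A minimal dispatch sketch, assuming a caller-defined `handlers` map (the weather implementation here is a stand-in, not part of llmjs2):

```javascript
// Sketch: dispatching tool calls from a completion result of the shape above.
const handlers = {
  // Stand-in implementation for illustration only.
  get_weather: ({ location, unit = 'celsius' }) =>
    `Weather in ${location}: 21 degrees ${unit}`,
};

function runToolCalls(result) {
  // Plain-string results (no tool use) yield no calls to run.
  if (!result || !Array.isArray(result.tool_calls)) return [];
  return result.tool_calls.map((call) => {
    const args = JSON.parse(call.function.arguments);
    const handler = handlers[call.function.name];
    if (!handler) throw new Error(`No handler for tool: ${call.function.name}`);
    return handler(args);
  });
}
```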

- ## Configuration
+ ## API Key Configuration

- The library resolves connection details in this order:
+ You can provide API keys in four ways:

- 1. Explicit config via `options.ollamaBaseUrl` / `options.ollamaApiKey`
- 2. Environment variables `OLLAMA_BASE_URL` and `OLLAMA_API_KEY`
- 3. Default fallback `https://api.ollama.com`
+ ### 1. Simple API (Environment Variables)

- Example:
+ ```bash
+ export OLLAMA_API_KEY=your-ollama-api-key
+ export OPEN_ROUTER_API_KEY=your-openrouter-api-key
+
+ # Optional: Set default models
+ export OLLAMA_DEFAULT_MODEL=minimax-m2.5:cloud
+ export OPEN_ROUTER_DEFAULT_MODEL=openrouter/free
+ ```
+
+ ```javascript
+ const result = await completion('Your prompt');
+ ```
+
+ ### 2. Direct Parameter (Function API)

- ```js
- const response = await completion({
-   model: 'ollama/llama3',
-   prompt: 'What is llmjs2?',
-   ollamaBaseUrl: 'https://my-ollama-proxy.local',
-   ollamaApiKey: process.env.OLLAMA_API_KEY,
+ ```javascript
+ const result = await completion('ollama/minimax-m2.5:cloud', 'Your prompt', 'your-api-key');
+ ```
+
+ ### 3. Object Property (Object API)
+
+ ```javascript
+ const result = await completion({
+   model: 'ollama/minimax-m2.5:cloud',
+   messages: [{ role: 'user', content: 'Your prompt' }],
+   apiKey: 'your-api-key'
  });
  ```

+ ### 4. Environment Variables (Function/Object API)
+
+ ```bash
+ export OLLAMA_API_KEY=your-ollama-api-key
+ export OPEN_ROUTER_API_KEY=your-openrouter-api-key
+ ```
+
+ ```javascript
+ // Function API
+ const result = await completion('ollama/minimax-m2.5:cloud', 'Your prompt');
+
+ // Object API
+ const resultFromObject = await completion({
+   model: 'ollama/minimax-m2.5:cloud',
+   messages: [{ role: 'user', content: 'Your prompt' }]
+ });
+ ```
+
+ ## Model Format
+
+ Models must be specified in the format: `provider/model_name`
+
+ The provider is the text before the first `/`, and the model name is everything after it.
+
+ Examples:
+ - `ollama/minimax-m2.5:cloud`
+ - `ollama/llama2`
+ - `openrouter/openrouter/free`
+ - `openrouter/meta-llama/llama-2-70b-chat`
+
197
+ ## Messages Format (Object API)
198
+
199
+ The `messages` parameter is an array of message objects with the following structure:
200
+
201
+ ```javascript
202
+ [
203
+ { role: 'system', content: 'You are a helpful AI assistant.' },
204
+ { role: 'user', content: 'What is the capital of France?' },
205
+ { role: 'assistant', content: 'The capital of France is Paris.' },
206
+ { role: 'user', content: 'What is its population?' }
207
+ ]
208
+ ```
209
+
210
+ **Supported roles:**
211
+ - `system` - System instructions
212
+ - `user` - User messages
213
+ - `assistant` - Assistant responses
214
+
215
+ ## Tools Format (Object API)
216
+
217
+ The `tools` parameter is an array of tool definitions:
218
+
219
+ ```javascript
220
+ [
221
+ {
222
+ type: 'function',
223
+ function: {
224
+ name: 'function_name',
225
+ description: 'Description of what the function does',
226
+ parameters: {
227
+ type: 'object',
228
+ properties: {
229
+ param1: {
230
+ type: 'string',
231
+ description: 'Description of parameter'
232
+ }
233
+ },
234
+ required: ['param1']
235
+ }
236
+ }
237
+ }
238
+ ]
239
+ ```
240
+
120
241
  ## Error Handling
121
242
 
122
- - `llmjs2: Could not connect to [URL]. Check your OLLAMA_BASE_URL.`
123
- - `llmjs2: Model "[name]" not found on provider "[provider]".`
124
- - `llmjs2: Unsupported provider "[provider]".`
243
+ The library throws descriptive errors for:
244
+ - Missing or invalid parameters
245
+ - Missing API keys
246
+ - API request failures
247
+ - Invalid response formats
248
+ - Request timeouts (60 seconds)
249
+ - Invalid tools format
250
+
251
+ ```javascript
252
+ try {
253
+ const result = await completion('Your prompt');
254
+ } catch (error) {
255
+ console.error('Completion failed:', error.message);
256
+ }
257
+ ```
+
+ ## Example Programs
+
+ ### Main Example
+
+ An example program exercising real-world usage is included in `example.js`. To run it:
+
+ ```bash
+ # Set your API keys
+ export OLLAMA_API_KEY=your-ollama-api-key
+ export OPEN_ROUTER_API_KEY=your-openrouter-api-key
+
+ # Run the example
+ node example.js
+ ```
+
+ The example program will:
+ - Test simple API (auto-detection)
+ - Test simple API with model
+ - Test Ollama with function-based API
+ - Test Ollama with object-based API
+ - Test Ollama with tools
+ - Test OpenRouter with function-based API
+ - Test OpenRouter with object-based API
+ - Test OpenRouter with tools
+ - Display results and test summary
+
+ ## API Reference
+
+ ### completion(prompt)
+
+ **Simple API (Prompt Only)**
+
+ **Parameters:**
+ - `prompt` (string): The prompt to send to the LLM
+
+ **Returns:**
+ - `Promise<string>`: The completion result
+
+ **Behavior:**
+ - Auto-detects provider based on available API keys
+ - Uses `OLLAMA_DEFAULT_MODEL` or defaults to `minimax-m2.5:cloud` for Ollama
+ - Uses `OPEN_ROUTER_DEFAULT_MODEL` or defaults to `openrouter/free` for OpenRouter
+ - Randomly chooses provider if both API keys are set
+
+ ### completion(model, prompt)
+
+ **Simple API (Model and Prompt)**
+
+ **Parameters:**
+ - `model` (string): Model identifier in format "provider/model_name"
+ - `prompt` (string): The prompt to send to the LLM
+
+ **Returns:**
+ - `Promise<string>`: The completion result
+
+ **Behavior:**
+ - Auto-detects provider based on available API keys
+ - Uses the provided model instead of the default
+ - Randomly chooses provider if both API keys are set
+
+ ### completion(model, prompt, apiKey)
+
+ **Function-Based API**
+
+ **Parameters:**
+ - `model` (string): Model identifier in format "provider/model_name"
+ - `prompt` (string): The prompt to send to the LLM
+ - `apiKey` (string, optional): API key (falls back to environment variables)
+
+ **Returns:**
+ - `Promise<string>`: The completion result
+
+ ### completion(options)
+
+ **Object-Based API**
+
+ **Parameters:**
+ - `options` (object): Configuration object
+   - `model` (string): Model identifier in format "provider/model_name"
+   - `messages` (array): Array of message objects with role and content
+   - `apiKey` (string, optional): API key (falls back to environment variables)
+   - `tools` (array, optional): Array of tool definitions
+
+ **Returns:**
+ - `Promise<string|object>`: The completion result (string or object with tool calls)
+
+ **Throws:**
+ - Error if model format is invalid
+ - Error if prompt/messages is missing
+ - Error if API key is not provided
+ - Error if API request fails
+ - Error if request times out (60 seconds)
+ - Error if tools format is invalid

- ## Notes
+ ## License

- - Node.js 18.0.0 or later is required for native `fetch` support.
- - No hyper-parameters such as `temperature` or `max_tokens` are exposed in the high-level API.
+ MIT