llmjs2 1.3.9 → 1.6.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (49)
  1. package/README.md +31 -476
  2. package/chain/AGENT_STEP_README.md +102 -0
  3. package/chain/README.md +257 -0
  4. package/chain/WORKFLOW_README.md +85 -0
  5. package/chain/agent-step-example.js +232 -0
  6. package/chain/docs/AGENT.md +126 -0
  7. package/chain/docs/GRAPH.md +490 -0
  8. package/chain/examples.js +314 -0
  9. package/chain/index.js +31 -0
  10. package/chain/lib/agent.js +338 -0
  11. package/chain/lib/flow/agent-step.js +119 -0
  12. package/chain/lib/flow/edge.js +24 -0
  13. package/chain/lib/flow/flow.js +76 -0
  14. package/chain/lib/flow/graph.js +331 -0
  15. package/chain/lib/flow/index.js +7 -0
  16. package/chain/lib/flow/step.js +63 -0
  17. package/chain/lib/memory/in-memory.js +117 -0
  18. package/chain/lib/memory/index.js +36 -0
  19. package/chain/lib/memory/lance-memory.js +225 -0
  20. package/chain/lib/memory/sqlite-memory.js +309 -0
  21. package/chain/simple-agent-step-example.js +168 -0
  22. package/chain/workflow-example-usage.js +70 -0
  23. package/chain/workflow-example.json +59 -0
  24. package/core/README.md +485 -0
  25. package/core/cli.js +275 -0
  26. package/core/docs/BASIC_USAGE.md +62 -0
  27. package/core/docs/CLI.md +104 -0
  28. package/{docs → core/docs}/GET_STARTED.md +129 -129
  29. package/{docs → core/docs}/GUARDRAILS_GUIDE.md +734 -734
  30. package/{docs → core/docs}/README.md +47 -47
  31. package/core/docs/ROUTER_GUIDE.md +199 -0
  32. package/{docs → core/docs}/SERVER_MODE.md +358 -350
  33. package/core/index.js +115 -0
  34. package/{providers → core/providers}/ollama.js +14 -6
  35. package/{providers → core/providers}/openai.js +14 -6
  36. package/{providers → core/providers}/openrouter.js +14 -6
  37. package/core/router.js +252 -0
  38. package/{server.js → core/server.js} +15 -5
  39. package/package.json +43 -27
  40. package/cli.js +0 -195
  41. package/docs/BASIC_USAGE.md +0 -296
  42. package/docs/CLI.md +0 -455
  43. package/docs/ROUTER_GUIDE.md +0 -402
  44. package/index.js +0 -267
  45. package/router.js +0 -273
  46. package/test-completion.js +0 -99
  47. package/test.js +0 -246
  48. /package/{config.yaml → core/config.yaml} +0 -0
  49. /package/{logger.js → core/logger.js} +0 -0
package/core/README.md ADDED
@@ -0,0 +1,485 @@
# llmjs2

A unified Node.js library for connecting to multiple Large Language Model (LLM) providers: OpenAI, Ollama, and OpenRouter.

## Features

- **Unified API**: Single interface for OpenAI, Ollama, and OpenRouter
- **Auto-detection**: Automatically chooses available providers based on API keys
- **Intelligent Router**: Load balancing and multiple routing strategies
- **Guardrails System**: Content filtering, logging, rate limiting, and custom processing
- **OpenAI-Compatible Server**: OpenAI-style chat completions schema
- **CLI Interface**: Command-line server management with configuration files
- **Enterprise Security**: Input validation, robust error handling, error sanitization, and safe defaults
- **Minimal Dependencies**: Node.js core modules plus one external dependency (`yaml`)
- **TypeScript-free**: Pure JavaScript, no compilation required
- **Production-ready**: Comprehensive testing and security auditing

## Installation

```bash
npm install llmjs2
```

Or for global CLI usage:

```bash
npm install -g llmjs2
```

## Quick Test

Try the sample configuration:

```bash
# Start server with sample config
llmjs2 --config config.yaml --port 3001

# Test the API
curl -X POST http://localhost:3001/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages":[{"role":"user","content":"Hello!"}]}'

# Response format:
# {
#   "id": "chatcmpl-123456",
#   "object": "chat.completion",
#   "created": 1640995200,
#   "model": "ollama/minimax-m2.5:cloud",
#   "messages": [
#     {"role": "user", "content": "Hello!"},
#     {"role": "assistant", "content": "Hi there!"}
#   ]
# }
```

## Programmatic Configuration

For advanced users, you can configure the llmjs2 router programmatically instead of using YAML:

```bash
npm run router:example
```

See `core/docs/ROUTER_GUIDE.md` for complete routing examples and guardrail configuration.

## AI Chat App

Experience llmjs2 with a simple terminal-based chat interface:

```bash
npm run chat
```

Features:
- Conversational chat with message history
- Automatic model routing (random selection)
- Shows which model was used for each response
- Simple guardrails (logging)
- Graceful exit with "exit", "quit", or "bye"

The chat app uses the same router configuration as the programmatic examples but provides an interactive chat experience.

See `core/docs/CLI.md` and `core/docs/SERVER_MODE.md` for configuration and server examples.

## Quick Start

```javascript
import { completion } from 'llmjs2';

// Set API keys
process.env.OPENAI_API_KEY = 'your-openai-key';
process.env.OLLAMA_API_KEY = 'your-ollama-key';
process.env.OPEN_ROUTER_API_KEY = 'your-openrouter-key';

// Simple completion
const response = await completion('Hello, how are you?');
console.log(response);
```

## API Keys Setup

Set your API keys as environment variables:

```bash
export OPENAI_API_KEY=your_openai_api_key
export OLLAMA_API_KEY=your_ollama_api_key
export OPEN_ROUTER_API_KEY=your_openrouter_api_key
```

## Usage Patterns

### 1. Simple API (Auto-detection)

```javascript
import { completion } from 'llmjs2';

// Auto-selects provider based on available API keys
// Cycles through: Ollama → OpenRouter → OpenAI → Ollama...
const response = await completion('Explain quantum physics simply');
```

**Auto-Selection Logic:**
- Checks for `OLLAMA_API_KEY`, `OPEN_ROUTER_API_KEY`, `OPENAI_API_KEY`
- Cycles sequentially through available providers
- Uses default model for each provider
- Falls back gracefully if keys are missing

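The cycle described above can be sketched roughly as follows. This is a simplified illustration of the behavior, not llmjs2's internals; `PROVIDER_ORDER` and `nextProvider` are hypothetical names:

```javascript
// Hypothetical sketch of auto-selection: filter providers by which
// API keys are configured, then advance a cursor on every call.
const PROVIDER_ORDER = [
  ['ollama', 'OLLAMA_API_KEY'],
  ['openrouter', 'OPEN_ROUTER_API_KEY'],
  ['openai', 'OPENAI_API_KEY'],
];

let cursor = 0;

function nextProvider(env = process.env) {
  const available = PROVIDER_ORDER
    .filter(([, key]) => env[key])
    .map(([name]) => name);
  if (available.length === 0) {
    throw new Error('No provider API keys configured');
  }
  const provider = available[cursor % available.length];
  cursor += 1; // next call moves to the next available provider
  return provider;
}
```

With only `OLLAMA_API_KEY` and `OPENAI_API_KEY` set, successive calls alternate between the two available providers.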
### 2. Provider-Specific Model

```javascript
// OpenAI
const openaiResponse = await completion({
  model: 'openai/gpt-4',
  messages: [{ role: 'user', content: 'Write a haiku about coding' }]
});

// Ollama
const ollamaResponse = await completion({
  model: 'ollama/minimax-m2.5:cloud',
  messages: [{ role: 'user', content: 'What is AI?' }]
});

// OpenRouter
const openrouterResponse = await completion({
  model: 'openrouter/openrouter/free',
  messages: [{ role: 'user', content: 'Tell me a joke' }]
});
```

### 3. Advanced Object API

```javascript
const response = await completion({
  model: 'openai/gpt-3.5-turbo',
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'What is the capital of France?' }
  ],
  temperature: 0.7,
  maxTokens: 100
});
```

## Configuration

### Default Models

Set default models for each provider:

```bash
export OPENAI_DEFAULT_MODEL=gpt-4
export OLLAMA_DEFAULT_MODEL=minimax-m2.5:cloud
export OPEN_ROUTER_DEFAULT_MODEL=openrouter/free
```

### Base URLs

Customize API endpoints:

```bash
export OPENAI_BASE_URL=https://api.openai.com/v1
export OLLAMA_BASE_URL=https://ollama.com/api/chat
export OPEN_ROUTER_BASE_URL=https://openrouter.ai/api/v1/chat/completions
```

## Error Handling

```javascript
import { completion } from 'llmjs2';

try {
  const response = await completion('Tell me a joke');
  console.log(response);
} catch (error) {
  console.error('Error:', error.message);
}
```

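For transient failures such as network errors or provider rate limits, you may want retries around `completion()`. llmjs2 does not document a built-in retry helper, so here is a small generic one (`withRetry` is our own name) you could wrap around any async call:

```javascript
// Generic retry with exponential backoff (not part of llmjs2).
// Usage: const response = await withRetry(() => completion('Tell me a joke'));
async function withRetry(fn, attempts = 3, baseDelayMs = 500) {
  let lastError;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (i < attempts - 1) {
        // Wait 500ms, 1000ms, 2000ms, ... between attempts
        await new Promise(resolve => setTimeout(resolve, baseDelayMs * 2 ** i));
      }
    }
  }
  throw lastError; // all attempts failed
}
```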
## Conversations

```javascript
import { completion } from 'llmjs2';

const messages = [
  { role: 'system', content: 'You are a helpful coding assistant.' },
  { role: 'user', content: 'How do I reverse a string in JavaScript?' }
];

let response = await completion({ model: 'openai/gpt-4', messages });
console.log('Assistant:', response);

// Continue conversation
messages.push({ role: 'assistant', content: response });
messages.push({ role: 'user', content: 'Can you show me with an example?' });

response = await completion({ model: 'openai/gpt-4', messages });
console.log('Assistant:', response);
```

## Function Calling (Tools)

```javascript
import { completion } from 'llmjs2';

const weatherTool = {
  type: 'function',
  function: {
    name: 'get_weather',
    description: 'Get current weather for a location',
    parameters: {
      type: 'object',
      properties: {
        location: { type: 'string', description: 'City name' }
      },
      required: ['location']
    }
  }
};

const response = await completion({
  model: 'openai/gpt-4',
  messages: [{ role: 'user', content: 'What is the weather in Paris?' }],
  tools: [weatherTool]
});

if (response.tool_calls) {
  console.log('Tool calls:', response.tool_calls);
}
```

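To complete the loop, you execute the requested tool yourself and send the result back as a `tool` message. The sketch below follows the OpenAI tool-calling convention (`tool_call_id`, JSON-encoded `arguments`); the handler table and its stub weather result are hypothetical, and you should verify the exact field names your provider returns:

```javascript
// Map tool names to local implementations. get_weather matches the
// weatherTool declared above; the returned data here is a stub.
const toolHandlers = {
  get_weather: ({ location }) => JSON.stringify({ location, tempC: 18 }),
};

// Turn the model's tool_calls into 'tool' messages for the next request.
function runToolCalls(toolCalls) {
  return toolCalls.map(call => ({
    role: 'tool',
    tool_call_id: call.id,
    content: toolHandlers[call.function.name](
      JSON.parse(call.function.arguments)
    ),
  }));
}
```

Append the resulting messages to the conversation and call `completion()` again so the model can produce its final answer.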
## Router System

Intelligent model routing with load balancing and multiple strategies:

```javascript
import { router } from 'llmjs2';

const modelList = [
  {
    model_name: 'gpt-3.5-turbo',
    llm_params: {
      model: 'ollama/chatgpt-v-2',
      api_key: process.env.OLLAMA_API_KEY
    }
  },
  {
    model_name: 'gpt-3.5-turbo',
    llm_params: {
      model: 'openai/gpt-3.5-turbo',
      api_key: process.env.OPENAI_API_KEY
    }
  }
];

// Load balancing across models with the same name (default strategy)
const route = router(modelList);
const response = await route.completion({
  model: 'gpt-3.5-turbo',
  messages: [{ role: 'user', content: 'Hello!' }]
});

// Or pick a routing strategy explicitly
const randomRouter = router(modelList, 'random');
const sequentialRouter = router(modelList, 'sequential');
```

**Routing Strategies:**
- `default`: Load balance across models with the same name
- `random`: Randomly select from all models
- `sequential`: Cycle through models in order

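Conceptually, the `default` strategy's selection step amounts to something like this. This is a hypothetical sketch over the `modelList` shape shown above, not llmjs2's actual code:

```javascript
// Pick randomly among all entries registered under the requested
// model_name, so traffic spreads across same-named deployments.
function pickModel(modelList, requestedName) {
  const candidates = modelList.filter(m => m.model_name === requestedName);
  if (candidates.length === 0) {
    throw new Error(`No model registered as "${requestedName}"`);
  }
  return candidates[Math.floor(Math.random() * candidates.length)].llm_params;
}
```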
## Guardrails System

Add custom logic before and after LLM calls for content filtering, logging, and processing:

```javascript
import { router } from 'llmjs2';

const route = router(modelList);

route.setGuardrails([
  {
    name: 'content_filter',
    mode: 'pre_call',
    code: (processId, input) => {
      // Filter inappropriate content
      const filteredMessages = input.messages.map(msg => ({
        ...msg,
        content: msg.content.replace(/badword/gi, '****')
      }));
      return { ...input, messages: filteredMessages };
    }
  },
  {
    name: 'response_logger',
    mode: 'post_call',
    code: (processId, result) => {
      console.log(`[${processId}] Response:`, result);
      return result;
    }
  }
]);
```

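The features list also mentions rate limiting. One way to express it with the same `pre_call` guardrail shape is a small factory like this; it is a sketch that assumes a thrown error aborts the call, so verify that against llmjs2's guardrail error handling before relying on it:

```javascript
// Build a pre_call guardrail that rejects calls once the per-minute
// budget is spent. The counter state lives in the closure.
function makeRateLimiter(maxPerMinute) {
  let windowStart = Date.now();
  let count = 0;
  return {
    name: 'rate_limiter',
    mode: 'pre_call',
    code: (processId, input) => {
      const now = Date.now();
      if (now - windowStart >= 60_000) {
        windowStart = now; // start a fresh one-minute window
        count = 0;
      }
      count += 1;
      if (count > maxPerMinute) {
        throw new Error(`[${processId}] rate limit exceeded`);
      }
      return input; // pass the request through unchanged
    },
  };
}
```

Usage would look like `route.setGuardrails([makeRateLimiter(60)]);`.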
## Server Mode

Run an API server that returns responses with metadata and message arrays:

```javascript
import { router, app } from 'llmjs2';

const route = router(modelList);
app.use(route);
app.listen(3000);
```

Or use the CLI:

```bash
llmjs2 --config config.yaml --port 3000
```

## CLI Interface

Manage servers from the command line:

```bash
# Start server with defaults
llmjs2

# Use configuration file
llmjs2 --config config.yaml

# Custom port and host
llmjs2 --port 8080 --host 0.0.0.0

# Get help
llmjs2 --help
```

## Configuration Files

Use YAML for advanced configuration:

```yaml
model_list:
  - model_name: premium
    llm_params:
      model: openrouter/openai/gpt-4
      api_key: os.environ/OPEN_ROUTER_API_KEY

  - model_name: standard
    llm_params:
      model: ollama/minimax-m2.5:cloud
      api_key: os.environ/OLLAMA_API_KEY

guardrails:
  - name: content_filter
    mode: pre_call
    code: |
      (processId, input) => {
        // Content filtering logic
        return input;
      }

router_settings:
  routing_strategy: random
```

**Note**: Model names in the configuration use the format `[provider]/[actual-model-name]` (e.g., `openai/gpt-4`, `ollama/minimax-m2.5:cloud`). The `[provider]/` prefix is used for routing and is automatically stripped when sending requests to LLM providers.

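The prefix split described in the note can be illustrated with a hypothetical helper (llmjs2 does this internally; `parseModel` is not an export):

```javascript
// Split '[provider]/[model]' on the FIRST slash only, because model
// names (e.g. OpenRouter's 'openai/gpt-4') may themselves contain slashes.
function parseModel(id) {
  const slash = id.indexOf('/');
  if (slash === -1) {
    return { provider: null, model: id };
  }
  return { provider: id.slice(0, slash), model: id.slice(slash + 1) };
}
```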
## Security Features

- **No API key logging**: Sensitive information is never logged
- **Input validation**: All inputs are validated and sanitized
- **Error sanitization**: Error messages don't leak sensitive data
- **Timeout protection**: Requests time out to prevent hanging
- **HTTPS only**: All communications use HTTPS

## Testing

Run the test suite:

```bash
npm test
```

Test basic completion functionality:

```bash
npm run test:completion
```

## Logging

Configure logging levels for LLM provider requests and responses:

```bash
# Show all logs (DEBUG, WARN, INFO, ERROR)
LLMJS2_LOG=debug node your-script.js

# Show INFO and ERROR logs only
LLMJS2_LOG=info node your-script.js

# Show WARN, INFO, and ERROR logs
LLMJS2_LOG=warn node your-script.js

# Show ERROR logs only
LLMJS2_LOG=error node your-script.js

# Examples
LLMJS2_LOG=debug npm run chat
LLMJS2_LOG=info npm run test:completion
```

### Log Levels

- **DEBUG**: All logs including detailed request/response data
- **INFO**: LLM provider communication and important events
- **WARN**: Warnings and important notifications
- **ERROR**: Errors and failures

### Log Format

```
[TIMESTAMP] [LEVEL] MESSAGE
DATA_OBJECT (JSON formatted)
```

### Example Output

```
[2026-04-13T10:14:58.123Z] [INFO] LLMJS2 📤 Sending to LLM provider
{
  "source": "completion",
  "provider": "ollama",
  "model": "minimax-m2.5:cloud",
  "apiKey": "2620e31dea...",
  "messages": [{"role": "user", "content": "Hello!"}]
}

[2026-04-13T10:15:01.456Z] [INFO] LLMJS2 📥 Received from LLM provider
{
  "source": "completion",
  "content": "Hello! How can I help you?",
  "role": "assistant",
  "usage": {"prompt_eval_count": 10, "eval_count": 15}
}
```

## License

MIT

## Contributing

Contributions welcome! Please ensure all tests pass and add tests for new features.

## Support

For issues and questions, please create an issue on GitHub.