llmjs2 1.3.9 → 1.7.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (50)
  1. package/README.md +31 -476
  2. package/chain/AGENT_STEP_README.md +102 -0
  3. package/chain/README.md +257 -0
  4. package/chain/WORKFLOW_README.md +85 -0
  5. package/chain/agent-step-example.js +232 -0
  6. package/chain/docs/AGENT.md +126 -0
  7. package/chain/docs/GRAPH.md +490 -0
  8. package/chain/examples.js +314 -0
  9. package/chain/index.js +31 -0
  10. package/chain/lib/agent.js +338 -0
  11. package/chain/lib/flow/agent-step.js +119 -0
  12. package/chain/lib/flow/edge.js +24 -0
  13. package/chain/lib/flow/flow.js +76 -0
  14. package/chain/lib/flow/graph.js +331 -0
  15. package/chain/lib/flow/index.js +7 -0
  16. package/chain/lib/flow/step.js +63 -0
  17. package/chain/lib/memory/in-memory.js +117 -0
  18. package/chain/lib/memory/index.js +36 -0
  19. package/chain/lib/memory/lance-memory.js +225 -0
  20. package/chain/lib/memory/sqlite-memory.js +309 -0
  21. package/chain/simple-agent-step-example.js +168 -0
  22. package/chain/workflow-example-usage.js +70 -0
  23. package/chain/workflow-example.json +59 -0
  24. package/core/README.md +485 -0
  25. package/core/cli.js +275 -0
  26. package/core/docs/BASIC_USAGE.md +62 -0
  27. package/core/docs/CLI.md +104 -0
  28. package/{docs → core/docs}/GET_STARTED.md +129 -129
  29. package/{docs → core/docs}/GUARDRAILS_GUIDE.md +734 -734
  30. package/{docs → core/docs}/README.md +47 -47
  31. package/core/docs/ROUTER_GUIDE.md +199 -0
  32. package/{docs → core/docs}/SERVER_MODE.md +358 -350
  33. package/core/index.js +115 -0
  34. package/{providers → core/providers}/ollama.js +14 -6
  35. package/{providers → core/providers}/openai.js +14 -6
  36. package/core/providers/openrouter.js +206 -0
  37. package/core/router.js +252 -0
  38. package/{server.js → core/server.js} +15 -5
  39. package/package.json +46 -27
  40. package/cli.js +0 -195
  41. package/docs/BASIC_USAGE.md +0 -296
  42. package/docs/CLI.md +0 -455
  43. package/docs/ROUTER_GUIDE.md +0 -402
  44. package/index.js +0 -267
  45. package/providers/openrouter.js +0 -113
  46. package/router.js +0 -273
  47. package/test-completion.js +0 -99
  48. package/test.js +0 -246
  49. package/{config.yaml → core/config.yaml} +0 -0
  50. package/{logger.js → core/logger.js} +0 -0
package/README.md CHANGED
@@ -1,476 +1,31 @@
- # llmjs2
-
- A unified Node.js library for connecting to multiple Large Language Model (LLM) providers: OpenAI, Ollama, and OpenRouter.
-
- **Features:**
- - **Unified API**: Single interface for OpenAI, Ollama, and OpenRouter
- - **Intelligent Router**: Load balancing and multiple routing strategies
- - **Guardrails System**: Content filtering, logging, rate limiting, and custom processing
- - **OpenAI-Compatible Server**: Drop-in replacement for OpenAI API clients
- - **CLI Interface**: Command-line server management with configuration files
- - **Enterprise Security**: Input validation, error sanitization, and safe defaults
- - **Zero External Dependencies**: Pure Node.js implementation
-
- ## Features
-
- - **Unified API**: Single interface for OpenAI, Ollama, and OpenRouter
- - **Auto-detection**: Automatically chooses available providers based on API keys
- - **Enterprise-grade**: Robust error handling, input validation, and security measures
- - **Zero dependencies**: Uses only Node.js built-in modules
- - **TypeScript-free**: Pure JavaScript, no compilation required
- - **Production-ready**: Comprehensive testing and security auditing
-
- ## Installation
-
- ```bash
- npm install llmjs2
- ```
-
- Or for global CLI usage:
-
- ```bash
- npm install -g llmjs2
- ```
-
- ## Quick Test
-
- Try the sample configuration:
-
- ```bash
- # Start server with sample config
- llmjs2 --config config.yaml --port 3001
-
- # Test the API
- curl -X POST http://localhost:3001/v1/chat/completions \
-   -H "Content-Type: application/json" \
-   -d '{"messages":[{"role":"user","content":"Hello!"}]}'
-
- # Response format:
- # {
- #   "id": "chatcmpl-123456",
- #   "object": "chat.completion",
- #   "created": 1640995200,
- #   "model": "ollama/minimax-m2.5:cloud",
- #   "messages": [
- #     {"role": "user", "content": "Hello!"},
- #     {"role": "assistant", "content": "Hi there!"}
- #   ]
- # }
- ```
-
- ## Programmatic Configuration
-
- For advanced users, you can configure the llmjs2 router programmatically instead of using YAML:
-
- ```bash
- npm run router:example
- ```
-
- See `server-config.js` for a complete example of configuring models, guardrails, and routing in JavaScript code, with direct completion usage.
-
- ## AI Chat App
-
- Experience llmjs2 with a simple terminal-based chat interface:
-
- ```bash
- npm run chat
- ```
-
- Features:
- - Conversational chat with message history
- - Automatic model routing (random selection)
- - Shows which model was used for each response
- - Simple guardrails (logging)
- - Graceful exit with "exit", "quit", or "bye"
-
- The chat app uses the same router configuration as the programmatic examples but provides an interactive chat experience.
-
- See `CONFIG_README.md` for detailed configuration examples.
-
- ## Quick Start
-
- ```javascript
- import { completion } from 'llmjs2';
-
- // Set API keys
- process.env.OPENAI_API_KEY = 'your-openai-key';
- process.env.OLLAMA_API_KEY = 'your-ollama-key';
- process.env.OPEN_ROUTER_API_KEY = 'your-openrouter-key';
-
- // Simple completion
- const response = await completion('Hello, how are you?');
- console.log(response);
- ```
-
- ## API Keys Setup
-
- Set your API keys as environment variables:
-
- ```bash
- export OPENAI_API_KEY=your_openai_api_key
- export OLLAMA_API_KEY=your_ollama_api_key
- export OPEN_ROUTER_API_KEY=your_openrouter_api_key
- ```
-
- ## Usage Patterns
-
- ### 1. Simple API (Auto-detection)
-
- ```javascript
- import { completion } from 'llmjs2';
-
- // Auto-selects provider based on available API keys
- // Cycles through: Ollama → OpenRouter → OpenAI → Ollama...
- const response = await completion('Explain quantum physics simply');
- ```
-
- **Auto-Selection Logic:**
- - Checks for `OLLAMA_API_KEY`, `OPEN_ROUTER_API_KEY`, `OPENAI_API_KEY`
- - Cycles sequentially through available providers
- - Uses default model for each provider
- - Falls back gracefully if keys are missing
-
- ### 2. Provider-Specific Model
-
- ```javascript
- // OpenAI
- const openaiResponse = await completion('openai/gpt-4', 'Write a haiku about coding');
-
- // Ollama
- const ollamaResponse = await completion('ollama/minimax-m2.5:cloud', 'What is AI?');
-
- // OpenRouter
- const openrouterResponse = await completion('openrouter/openrouter/free', 'Tell me a joke');
- ```
-
- ### 3. Advanced Object API
-
- ```javascript
- const response = await completion({
-   model: 'openai/gpt-3.5-turbo',
-   messages: [
-     { role: 'system', content: 'You are a helpful assistant.' },
-     { role: 'user', content: 'What is the capital of France?' }
-   ],
-   temperature: 0.7,
-   maxTokens: 100
- });
- ```
-
- ## Configuration
-
- ### Default Models
-
- Set default models for each provider:
-
- ```bash
- export OPENAI_DEFAULT_MODEL=gpt-4
- export OLLAMA_DEFAULT_MODEL=minimax-m2.5:cloud
- export OPEN_ROUTER_DEFAULT_MODEL=openrouter/free
- ```
-
- ### Base URLs
-
- Customize API endpoints:
-
- ```bash
- export OPENAI_BASE_URL=https://api.openai.com/v1
- export OLLAMA_BASE_URL=https://ollama.com/api/chat
- export OPEN_ROUTER_BASE_URL=https://openrouter.ai/api/v1/chat/completions
- ```
-
- ## Error Handling
-
- ```javascript
- import { completion } from 'llmjs2';
-
- try {
-   const response = await completion('Tell me a joke');
-   console.log(response);
- } catch (error) {
-   console.error('Error:', error.message);
- }
- ```
-
- ## Conversations
-
- ```javascript
- import { completion } from 'llmjs2';
-
- const messages = [
-   { role: 'system', content: 'You are a helpful coding assistant.' },
-   { role: 'user', content: 'How do I reverse a string in JavaScript?' }
- ];
-
- let response = await completion({ model: 'openai/gpt-4', messages });
- console.log('Assistant:', response);
-
- // Continue conversation
- messages.push({ role: 'assistant', content: response });
- messages.push({ role: 'user', content: 'Can you show me with an example?' });
-
- response = await completion({ model: 'openai/gpt-4', messages });
- console.log('Assistant:', response);
- ```
-
- ## Function Calling (Tools)
-
- ```javascript
- import { completion } from 'llmjs2';
-
- const weatherTool = {
-   type: 'function',
-   function: {
-     name: 'get_weather',
-     description: 'Get current weather for a location',
-     parameters: {
-       type: 'object',
-       properties: {
-         location: { type: 'string', description: 'City name' }
-       },
-       required: ['location']
-     }
-   }
- };
-
- const response = await completion({
-   model: 'openai/gpt-4',
-   messages: [{ role: 'user', content: 'What is the weather in Paris?' }],
-   tools: [weatherTool]
- });
-
- if (response.tool_calls) {
-   console.log('Tool calls:', response.tool_calls);
- }
- ```
-
- ## Router System
-
- Intelligent model routing with load balancing and multiple strategies:
-
- ```javascript
- import { router } from 'llmjs2';
-
- const modelList = [
-   {
-     model_name: 'gpt-3.5-turbo',
-     llm_params: {
-       model: 'ollama/chatgpt-v-2',
-       api_key: process.env.OLLAMA_API_KEY
-     }
-   },
-   {
-     model_name: 'gpt-3.5-turbo',
-     llm_params: {
-       model: 'openai/gpt-3.5-turbo',
-       api_key: process.env.OPENAI_API_KEY
-     }
-   }
- ];
-
- // Load balancing across models with same name
- const route = router(modelList, 'random');
- const response = await route.completion({
-   model: 'gpt-3.5-turbo',
-   messages: [{ role: 'user', content: 'Hello!' }]
- });
-
- // Auto-routing with different strategies
- const randomRouter = router(modelList, 'random');
- const sequentialRouter = router(modelList, 'sequential');
- ```
-
- **Routing Strategies:**
- - `default`: Load balance across models with same name
- - `random`: Randomly select from all models
- - `sequential`: Cycle through models in order
-
- ## Guardrails System
-
- Add custom logic before and after LLM calls for content filtering, logging, and processing:
-
- ```javascript
- import { router } from 'llmjs2';
-
- const route = router(modelList);
-
- route.setGuardrails([
-   {
-     name: 'content_filter',
-     mode: 'pre_call',
-     code: (processId, input) => {
-       // Filter inappropriate content
-       const filteredMessages = input.messages.map(msg => ({
-         ...msg,
-         content: msg.content.replace(/badword/gi, '****')
-       }));
-       return { ...input, messages: filteredMessages };
-     }
-   },
-   {
-     name: 'response_logger',
-     mode: 'post_call',
-     code: (processId, result) => {
-       console.log(`[${processId}] Response:`, result);
-       return result;
-     }
-   }
- ]);
- ```
-
- ## Server Mode
-
- Run an API server that returns responses with metadata and message arrays:
-
- ```javascript
- import { router, app } from 'llmjs2';
-
- const route = router(modelList);
- app.use(route);
- app.listen(3000);
- ```
-
- Or use the CLI:
-
- ```bash
- llmjs2 --config config.yaml --port 3000
- ```
-
- ## CLI Interface
-
- Manage servers from the command line:
-
- ```bash
- # Start server with defaults
- llmjs2
-
- # Use configuration file
- llmjs2 --config config.yaml
-
- # Custom port and host
- llmjs2 --port 8080 --host 0.0.0.0
-
- # Get help
- llmjs2 --help
- ```
-
- ## Configuration Files
-
- Use YAML for advanced configuration:
-
- ```yaml
- model_list:
-   - model_name: premium
-     llm_params:
-       model: openrouter/openai/gpt-4
-       api_key: os.environ/OPEN_ROUTER_API_KEY
-
-   - model_name: standard
-     llm_params:
-       model: ollama/minimax-m2.5:cloud
-       api_key: os.environ/OLLAMA_API_KEY
-
- guardrails:
-   - name: content_filter
-     mode: pre_call
-     code: |
-       (processId, input) => {
-         // Content filtering logic
-         return input;
-       }
-
- router_settings:
-   routing_strategy: random
- ```
-
- **Note**: Model names in the configuration use the format `[provider]/[actual-model-name]` (e.g., `openai/gpt-4`, `ollama/minimax-m2.5:cloud`). The `[provider]/` prefix is used for routing and is automatically stripped when sending requests to LLM providers.
-
- ## Security Features
-
- - **No API key logging**: Sensitive information is never logged
- - **Input validation**: All inputs are validated and sanitized
- - **Error sanitization**: Error messages don't leak sensitive data
- - **Timeout protection**: Requests timeout to prevent hanging
- - **HTTPS only**: All communications use HTTPS
-
- ## Testing
-
- Run the test suite:
-
- ```bash
- npm test
- ```
-
- Test basic completion functionality:
-
- ```bash
- npm run test:completion
- ```
-
- ## Logging
-
- Configure logging levels for LLM provider requests and responses:
-
- ```bash
- # Show all logs (DEBUG, WARN, INFO, ERROR)
- LLMJS2_LOG=debug node your-script.js
-
- # Show INFO and ERROR logs only
- LLMJS2_LOG=info node your-script.js
-
- # Show WARN, INFO, and ERROR logs
- LLMJS2_LOG=warn node your-script.js
-
- # Show ERROR logs only
- LLMJS2_LOG=error node your-script.js
-
- # Examples
- LLMJS2_LOG=debug npm run chat
- LLMJS2_LOG=info npm run test:completion
- ```
-
- ### Log Levels
-
- - **DEBUG**: All logs including detailed request/response data
- - **INFO**: LLM provider communication and important events
- - **WARN**: Warnings and important notifications
- - **ERROR**: Errors and failures
-
- ### Log Format
-
- ```
- [TIMESTAMP] [LEVEL] MESSAGE
- DATA_OBJECT (JSON formatted)
- ```
-
- ### Example Output
-
- ```
- [2026-04-13T10:14:58.123Z] [INFO] LLMJS2 📤 Sending to LLM provider
- {
-   "source": "completion",
-   "provider": "ollama",
-   "model": "minimax-m2.5:cloud",
-   "apiKey": "2620e31dea...",
-   "messages": [{"role": "user", "content": "Hello!"}]
- }
-
- [2026-04-13T10:15:01.456Z] [INFO] LLMJS2 📥 Received from LLM provider
- {
-   "source": "completion",
-   "content": "Hello! How can I help you?",
-   "role": "assistant",
-   "usage": {"prompt_eval_count": 10, "eval_count": 15}
- }
- ```
-
- ## License
-
- MIT
-
- ## Contributing
-
- Contributions welcome! Please ensure all tests pass and add tests for new features.
-
- ## Support
-
- For issues and questions, please create an issue on GitHub.
+ # llmjs2
+
+ llmjs2 is a unified Node.js library for OpenAI, Ollama, and OpenRouter with routing, guardrails, and an OpenAI-style server mode.
+
+ ## Quick Links
+
+ - Core README: core/README.md
+ - CLI Guide: core/docs/CLI.md
+ - Server Mode: core/docs/SERVER_MODE.md
+ - Router Guide: core/docs/ROUTER_GUIDE.md
+
+ ## Install
+
+ ```bash
+ npm install llmjs2
+ ```
+
+ ## Basic Usage
+
+ ```javascript
+ const { completion } = require('llmjs2');
+
+ (async () => {
+   const result = await completion({
+     model: 'openai/gpt-4',
+     messages: [{ role: 'user', content: 'Say hello in one sentence.' }]
+   });
+
+   console.log(result);
+ })();
+ ```
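The `model: 'openai/gpt-4'` string above follows the library's `[provider]/[actual-model-name]` convention: the prefix selects a backend for routing and is stripped before the request is sent to the provider. As a rough sketch of that parsing rule (`parseModel` is a hypothetical helper, not a function exported by the package), splitting on the first `/` keeps nested OpenRouter names intact:

```javascript
// Hypothetical sketch of the '[provider]/[actual-model-name]' split
// described in the README; not the package's actual implementation.
function parseModel(modelString) {
  const slash = modelString.indexOf('/');
  if (slash === -1) {
    // No prefix: leave provider selection to auto-detection.
    return { provider: null, model: modelString };
  }
  return {
    provider: modelString.slice(0, slash),
    // Everything after the FIRST '/' is the provider-side model name,
    // so nested names like 'openrouter/openai/gpt-4' survive intact.
    model: modelString.slice(slash + 1)
  };
}

console.log(parseModel('openai/gpt-4'));
// { provider: 'openai', model: 'gpt-4' }
console.log(parseModel('openrouter/openai/gpt-4'));
// { provider: 'openrouter', model: 'openai/gpt-4' }
```

Splitting only on the first slash is what makes three-part names routable: the router consumes one prefix and forwards the rest verbatim.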
package/chain/AGENT_STEP_README.md ADDED
@@ -0,0 +1,102 @@
+ # AgentStep Examples
+
+ This directory contains example programs demonstrating how to use AgentStep in llmjs-chain workflows.
+
+ ## Files
+
+ - **`simple-agent-step-example.js`** - Complete example with mock agents (fast execution)
+ - **`agent-step-example.js`** - Advanced example with real AI agents (slower but comprehensive)
+
+ ## Running the Examples
+
+ ### Simple Example (Recommended for Testing)
+ ```bash
+ node simple-agent-step-example.js
+ ```
+
+ **Features Demonstrated:**
+ - ✅ Function-based input/output mapping
+ - ✅ Template-based input mapping (`{{variable}}` syntax)
+ - ✅ Default mapping behavior
+ - ✅ Sequential workflow execution
+ - ✅ Parallel agent execution
+ - ✅ Context passing between steps
+ - ✅ Mock agents for fast testing
+
+ ### Advanced Example (Real AI)
+ ```bash
+ node agent-step-example.js
+ ```
+
+ **Features Demonstrated:**
+ - ✅ Real AI agent integration
+ - ✅ Multi-agent content creation pipeline
+ - ✅ Complex input/output mapping
+ - ✅ Tool usage within agents
+ - ✅ Error handling
+
+ ## Key AgentStep Features
+
+ ### Input Mapping Options
+
+ ```javascript
+ // 1. Function mapper - full control
+ inputMapper: (context) => `Process: ${JSON.stringify(context.data)}`
+
+ // 2. Template mapper - simple interpolation
+ inputMapper: 'Analyze {{data}} from step {{stepName}}'
+
+ // 3. Default mapper - automatic context conversion
+ // (no inputMapper specified)
+ ```
+
+ ### Output Mapping Options
+
+ ```javascript
+ // Custom transformation
+ outputMapper: (agentResponse, context) => ({
+   result: agentResponse,
+   processedAt: new Date(),
+   confidence: 0.95
+ })
+
+ // Default mapping returns: { response, agent, timestamp }
+ ```
+
+ ### Workflow Integration
+
+ ```javascript
+ const workflow = new Graph({ name: 'ai-workflow' })
+   .step(agentStep1, agentStep2, regularStep)
+   .edge(agentStep1, agentStep2)
+   .edge(agentStep2, regularStep)
+   .compile();
+
+ const result = await workflow.run(initialContext);
+ ```
+
+ ## Use Cases
+
+ - **Content Creation**: Multi-agent writing and editing pipelines
+ - **Data Analysis**: Specialized agents for different analysis stages
+ - **Quality Assurance**: AI-powered validation and testing
+ - **Decision Making**: Agent-based routing and recommendations
+ - **Integration Workflows**: Mix of AI and traditional processing steps
+
+ ## Performance Notes
+
+ - **Mock Agents**: Use for development and testing (fast)
+ - **Real Agents**: Production use with proper rate limiting
+ - **Parallel Execution**: Multiple agents can run simultaneously
+ - **Context Size**: Large contexts may impact performance
+
+ ## Integration with Existing Workflows
+
+ AgentStep is fully compatible with:
+ - Regular Step classes
+ - Graph.load() JSON workflows
+ - Conditional edges
+ - Parallel execution
+ - Memory systems
+
+ Start with the simple example to understand the API, then move to real agents for production use!
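The AgentStep docs above describe a template input mapper that expands `{{variable}}` placeholders from the step context. A minimal, hypothetical sketch of that behavior (`renderTemplate` is illustrative only, not the llmjs-chain implementation):

```javascript
// Illustrative sketch of '{{variable}}' template interpolation as
// described for AgentStep's template inputMapper; assumptions: unknown
// placeholders are left untouched and object values are JSON-serialized.
function renderTemplate(template, context) {
  return template.replace(/\{\{(\w+)\}\}/g, (match, key) => {
    if (!(key in context)) return match; // keep unknown placeholders as-is
    const value = context[key];
    // Serialize objects so they stay readable inside the agent prompt.
    return typeof value === 'object' ? JSON.stringify(value) : String(value);
  });
}

const prompt = renderTemplate('Analyze {{data}} from step {{stepName}}', {
  data: { rows: 3 },
  stepName: 'ingest'
});
console.log(prompt); // Analyze {"rows":3} from step ingest
```

A single-pass `String.prototype.replace` with a callback is enough here; it visits each placeholder once, so values containing `{{...}}` are not re-expanded.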