llmjs2 1.3.8 → 1.6.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (49)
  1. package/README.md +31 -476
  2. package/chain/AGENT_STEP_README.md +102 -0
  3. package/chain/README.md +257 -0
  4. package/chain/WORKFLOW_README.md +85 -0
  5. package/chain/agent-step-example.js +232 -0
  6. package/chain/docs/AGENT.md +126 -0
  7. package/chain/docs/GRAPH.md +490 -0
  8. package/chain/examples.js +314 -0
  9. package/chain/index.js +31 -0
  10. package/chain/lib/agent.js +338 -0
  11. package/chain/lib/flow/agent-step.js +119 -0
  12. package/chain/lib/flow/edge.js +24 -0
  13. package/chain/lib/flow/flow.js +76 -0
  14. package/chain/lib/flow/graph.js +331 -0
  15. package/chain/lib/flow/index.js +7 -0
  16. package/chain/lib/flow/step.js +63 -0
  17. package/chain/lib/memory/in-memory.js +117 -0
  18. package/chain/lib/memory/index.js +36 -0
  19. package/chain/lib/memory/lance-memory.js +225 -0
  20. package/chain/lib/memory/sqlite-memory.js +309 -0
  21. package/chain/simple-agent-step-example.js +168 -0
  22. package/chain/workflow-example-usage.js +70 -0
  23. package/chain/workflow-example.json +59 -0
  24. package/core/README.md +485 -0
  25. package/core/cli.js +275 -0
  26. package/core/docs/BASIC_USAGE.md +62 -0
  27. package/core/docs/CLI.md +104 -0
  28. package/{docs → core/docs}/GET_STARTED.md +129 -129
  29. package/{docs → core/docs}/GUARDRAILS_GUIDE.md +734 -734
  30. package/{docs → core/docs}/README.md +47 -47
  31. package/core/docs/ROUTER_GUIDE.md +199 -0
  32. package/{docs → core/docs}/SERVER_MODE.md +358 -350
  33. package/core/index.js +115 -0
  34. package/{providers → core/providers}/ollama.js +14 -6
  35. package/{providers → core/providers}/openai.js +14 -6
  36. package/{providers → core/providers}/openrouter.js +14 -6
  37. package/core/router.js +252 -0
  38. package/{server.js → core/server.js} +15 -5
  39. package/package.json +43 -27
  40. package/cli.js +0 -195
  41. package/docs/BASIC_USAGE.md +0 -296
  42. package/docs/CLI.md +0 -455
  43. package/docs/ROUTER_GUIDE.md +0 -402
  44. package/index.js +0 -265
  45. package/router.js +0 -273
  46. package/test-completion.js +0 -99
  47. package/test.js +0 -246
  48. package/{config.yaml → core/config.yaml} +0 -0
  49. package/{logger.js → core/logger.js} +0 -0
package/docs/CLI.md DELETED
@@ -1,455 +0,0 @@
- # CLI Guide
-
- Use the llmjs2 command-line interface to manage servers and configurations without writing code.
-
- ## Installation
-
- If you installed llmjs2 globally, the `llmjs2` command is available everywhere:
-
- ```bash
- npm install -g llmjs2
- ```
-
- ## Basic Usage
-
- ### Start Server with Defaults
-
- ```bash
- llmjs2
- ```
-
- This starts a server on `http://localhost:3000` using:
-
- - Environment variables for API keys
- - Default models
- - No configuration file
-
- ### Custom Server Options
-
- ```bash
- # Custom port
- llmjs2 --port 8080
-
- # Custom host
- llmjs2 --host 0.0.0.0
-
- # Custom port and host
- llmjs2 --port 8080 --host 0.0.0.0
-
- # Use configuration file
- llmjs2 --config my-config.yaml
-
- # Combine options
- llmjs2 --config my-config.yaml --port 8080 --host 0.0.0.0
- ```
-
- ### Get Help
-
- ```bash
- llmjs2 --help
- ```
-
- Output:
-
- ```
- 🤖 llmjs2 - OpenAI-Compatible API Server
-
- USAGE:
-   llmjs2 [options]
-
- DESCRIPTION:
-   Starts an OpenAI-compatible API server with intelligent routing and guardrails.
-   Supports model load balancing, content filtering, and custom request processing.
-   The server listens for POST requests to /v1/chat/completions.
-
- OPTIONS:
-   -c, --config <file>   YAML config file with models, guardrails, and routing
-   -p, --port <port>     Port to listen on (default: 3000)
-   -H, --host <host>     Host to bind to (default: localhost)
-   -h, --help            Show this help message
-
- EXAMPLES:
-   llmjs2
-   llmjs2 --config config.yaml
-   llmjs2 --config config.yaml --port 8080 --host 0.0.0.0
- ```
-
- ## Configuration Files
-
- Use YAML configuration files to define models, guardrails, and router settings for advanced LLM routing and processing.
-
- ### Configuration with Guardrails
-
- Create `config.yaml` with models, guardrails, and routing settings:
-
- ```yaml
- model_list:
-   - model_name: premium
-     llm_params:
-       model: openrouter/openai/gpt-4-turbo-preview
-       api_key: os.environ/PROD_OPEN_ROUTER_API_KEY
-
-   - model_name: standard
-     llm_params:
-       model: ollama/minimax-m2.5:cloud
-       api_key: os.environ/PROD_OLLAMA_API_KEY
-
- guardrails:
-   - name: "content_filter"
-     mode: "pre_call"
-     code: |
-       (processId, input) => {
-         // Filter inappropriate content before LLM call
-         const { model, messages } = input;
-         const filteredMessages = messages.map(msg => ({
-           ...msg,
-           content: msg.content.replace(/badword/gi, '****')
-         }));
-         return { model, messages: filteredMessages };
-       }
-
-   - name: "response_logger"
-     mode: "post_call"
-     code: |
-       (processId, result) => {
-         console.log(`[${processId}] Response logged:`, typeof result);
-         return result;
-       }
-
- router_settings:
-   routing_strategy: "random"
- ```
-
- Run with configuration:
-
- ```bash
- llmjs2 --config config.yaml
- ```
-
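The guardrail snippets embedded in the YAML above are ordinary JavaScript functions, so they can be sketched and tested outside the config file. A minimal sketch: the function body is copied from the `content_filter` example above, while the process id and message values in the invocation are made up for illustration.

```javascript
// Sketch of the pre_call guardrail from the config above.
// A pre_call guardrail receives (processId, input) and returns the
// (possibly modified) input that is forwarded to the LLM.
const contentFilter = (processId, input) => {
  const { model, messages } = input;
  const filteredMessages = messages.map(msg => ({
    ...msg,
    content: msg.content.replace(/badword/gi, '****')
  }));
  return { model, messages: filteredMessages };
};

// Invocation with hypothetical values:
const out = contentFilter('proc-1', {
  model: 'standard',
  messages: [{ role: 'user', content: 'this contains badword twice: BADWORD' }]
});
console.log(out.messages[0].content);
// → 'this contains **** twice: ****'
```

Because the filter is case-insensitive (`/gi`), both spellings are masked; the original `input` object is left untouched since `map` builds new message objects.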
- ## Environment Variables
-
- The CLI uses the same environment variables as the library:
-
- ```bash
- # Required: API Keys
- export OLLAMA_API_KEY=your_ollama_key
- export OPEN_ROUTER_API_KEY=your_openrouter_key
-
- # Optional: Default Models
- export OLLAMA_DEFAULT_MODEL=minimax-m2.5:cloud
- export OPEN_ROUTER_DEFAULT_MODEL=openrouter/free
-
- # Optional: Server Settings
- export PORT=3000
- export HOST=localhost
- ```
-
- ### Using .env Files
-
- Create a `.env` file in your project directory:
-
- ```bash
- # .env
- OLLAMA_API_KEY=your_ollama_key_here
- OPEN_ROUTER_API_KEY=your_openrouter_key_here
- OLLAMA_DEFAULT_MODEL=minimax-m2.5:cloud
- OPEN_ROUTER_DEFAULT_MODEL=openrouter/free
- PORT=3000
- HOST=localhost
- ```
-
- The CLI will automatically load `.env` files.
-
- ## Example Configurations
-
- ### Development Setup
-
- ```yaml
- # dev-config.yaml
- model_list:
-   - model_name: fast-dev
-     llm_params:
-       model: openrouter/openrouter/free
-       api_key: os.environ/OPEN_ROUTER_API_KEY
-
- guardrails:
-   - name: "dev_logger"
-     mode: "post_call"
-     code: |
-       (processId, result) => {
-         console.log(`[DEV] ${processId}:`, typeof result);
-         return result;
-       }
-
- router_settings:
-   routing_strategy: "sequential"
- ```
-
- ```bash
- llmjs2 --config dev-config.yaml --port 3001
- ```
-
- ### Production Setup
-
- ```yaml
- # prod-config.yaml
- model_list:
-   - model_name: premium
-     llm_params:
-       model: openrouter/openai/gpt-4-turbo-preview
-       api_key: os.environ/PROD_OPEN_ROUTER_API_KEY
-
-   - model_name: standard
-     llm_params:
-       model: ollama/minimax-m2.5:cloud
-       api_key: os.environ/PROD_OLLAMA_API_KEY
-
- guardrails:
-   - name: "content_filter"
-     mode: "pre_call"
-     code: |
-       (processId, input) => {
-         const { model, messages } = input;
-         const filteredMessages = messages.map(msg => ({
-           ...msg,
-           content: msg.content.replace(/badword/gi, '****')
-         }));
-         return { model, messages: filteredMessages };
-       }
-
-   - name: "rate_limiter"
-     mode: "pre_call"
-     code: |
-       (processId, input) => {
-         // Implement rate limiting logic
-         return input;
-       }
-
- router_settings:
-   routing_strategy: "random"
- ```
-
- ```bash
- llmjs2 --config prod-config.yaml
- ```
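The `rate_limiter` guardrail above is only a stub. One way its body might be filled in is a fixed-window counter; the sketch below is an illustration under assumed policy values (a 1-minute window, 30 calls), not llmjs2's own implementation — a pre_call guardrail simply passes the input through or throws to reject the request.

```javascript
// Hedged sketch of what the "rate_limiter" pre_call stub might do.
// WINDOW_MS and MAX_CALLS are assumed policy values for illustration.
const WINDOW_MS = 60_000; // 1-minute window
const MAX_CALLS = 30;     // max calls per window
let windowStart = Date.now();
let calls = 0;

const rateLimiter = (processId, input) => {
  const now = Date.now();
  if (now - windowStart >= WINDOW_MS) {
    windowStart = now; // start a new window
    calls = 0;
  }
  calls += 1;
  if (calls > MAX_CALLS) {
    // Rejecting by throwing; the error message tags the process id.
    throw new Error(`[${processId}] rate limit exceeded`);
  }
  return input; // pass the request through unchanged
};
```

A fixed window is the simplest policy; a sliding window or token bucket behaves more smoothly under bursts, at the cost of tracking per-call timestamps.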
-
- ### Multi-Environment Setup
-
- ```yaml
- # config.yaml
- model_list:
-   - model_name: default
-     llm_params:
-       model: ollama/minimax-m2.5:cloud
-       api_key: os.environ/OLLAMA_API_KEY
-
-   - model_name: premium
-     llm_params:
-       model: openrouter/openai/gpt-4
-       api_key: os.environ/OPEN_ROUTER_API_KEY
-
- guardrails:
-   - name: "environment_logger"
-     mode: "post_call"
-     code: |
-       (processId, result) => {
-         console.log(`[${process.env.NODE_ENV || 'development'}] ${processId}:`, typeof result);
-         return result;
-       }
-
- router_settings:
-   routing_strategy: "random"
- ```
-
- ## Testing Your Configuration
-
- ### Test with cURL
-
- ```bash
- # Start server
- llmjs2 --config config.yaml --port 3001
-
- # Test direct model routing
- curl -X POST http://localhost:3001/v1/chat/completions \
-   -H "Content-Type: application/json" \
-   -d '{
-     "model": "default",
-     "messages": [{"role": "user", "content": "Hello!"}]
-   }'
-
- # Test automatic routing (uses router strategy from config)
- curl -X POST http://localhost:3001/v1/chat/completions \
-   -H "Content-Type: application/json" \
-   -d '{
-     "messages": [{"role": "user", "content": "Hello!"}]
-   }'
- ```
-
- ### Test Specific Models
-
- ```bash
- # Test premium model
- curl -X POST http://localhost:3001/v1/chat/completions \
-   -H "Content-Type: application/json" \
-   -H "Authorization: Bearer test-key" \
-   -d '{
-     "model": "premium",
-     "messages": [{"role": "user", "content": "Hello premium!"}]
-   }'
- ```
-
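The same requests can be issued from Node.js (18+, where `fetch` is built in). A minimal sketch mirroring the cURL calls above; `buildChatRequest` is a hypothetical helper written for this example, not part of llmjs2, and the network call is guarded because it needs a running server.

```javascript
// Build the same request as the cURL examples above.
// buildChatRequest is a hypothetical helper for illustration.
function buildChatRequest(baseUrl, body) {
  return {
    url: `${baseUrl}/v1/chat/completions`,
    options: {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(body),
    },
  };
}

const req = buildChatRequest('http://localhost:3001', {
  model: 'premium',
  messages: [{ role: 'user', content: 'Hello premium!' }],
});

// Actually sending it requires a server started with `llmjs2 --config
// config.yaml --port 3001`, so the call is guarded behind an env flag:
if (process.env.LLMJS2_SMOKE_TEST) {
  fetch(req.url, req.options)
    .then(res => res.json())
    .then(json => console.log(json.choices?.[0]?.message?.content));
}
```

Omitting `model` from the body exercises automatic routing, exactly as in the second cURL example.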
- ## Advanced CLI Usage
-
- ### Running in Background
-
- ```bash
- # Linux/Mac
- llmjs2 --config config.yaml > server.log 2>&1 &
-
- # Windows (PowerShell)
- Start-Job -ScriptBlock { llmjs2 --config config.yaml } -Name llmjs2-server
- ```
-
- ### Process Management
-
- ```bash
- # Find process
- ps aux | grep llmjs2
-
- # Kill process (replace PID)
- kill PID
-
- # Or use pkill
- pkill -f llmjs2
- ```
-
- ### Systemd Service (Linux)
-
- Create `/etc/systemd/system/llmjs2.service`:
-
- ```ini
- [Unit]
- Description=llmjs2 API Server
- After=network.target
-
- [Service]
- Type=simple
- User=your-user
- WorkingDirectory=/path/to/your/app
- ExecStart=/usr/bin/llmjs2 --config config.yaml
- Restart=always
- RestartSec=10
- Environment=OLLAMA_API_KEY=your_key
- Environment=OPEN_ROUTER_API_KEY=your_key
-
- [Install]
- WantedBy=multi-user.target
- ```
-
- Enable and start:
-
- ```bash
- sudo systemctl enable llmjs2
- sudo systemctl start llmjs2
- sudo systemctl status llmjs2
- ```
-
- ## Troubleshooting CLI Issues
-
- ### Configuration File Errors
-
- **YAML syntax error:**
-
- ```bash
- # Check your YAML syntax
- yamllint config.yaml
- ```
-
- **File not found:**
-
- ```bash
- # Check file exists and path is correct
- ls -la config.yaml
- llmjs2 --config ./config.yaml
- ```
-
- ### Environment Variable Issues
-
- **API key not found:**
-
- ```bash
- # Check environment variables are set
- echo $OLLAMA_API_KEY
- echo $OPEN_ROUTER_API_KEY
-
- # Or use a .env file
- echo "OLLAMA_API_KEY=your_key" > .env
- ```
-
- ### Port Issues
-
- **Port already in use:**
-
- ```bash
- # Check what's using the port
- lsof -i :3000
-
- # Use a different port
- llmjs2 --port 8080
- ```
-
- ### Permission Issues
-
- **Cannot bind to port:**
-
- ```bash
- # Try ports above 1024
- llmjs2 --port 8080
-
- # Or run with sudo (not recommended for production)
- sudo llmjs2 --port 80
- ```
-
- ## Next Steps
-
- - **[Get Started](GET_STARTED.md)** - Basic setup and first steps
- - **[Basic Usage](BASIC_USAGE.md)** - Learn different API patterns
- - **[Server Mode](SERVER_MODE.md)** - Advanced server configuration
-
- The CLI makes it easy to deploy and manage llmjs2 servers in any environment!