llmjs2 1.3.9 → 1.6.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (49)
  1. package/README.md +31 -476
  2. package/chain/AGENT_STEP_README.md +102 -0
  3. package/chain/README.md +257 -0
  4. package/chain/WORKFLOW_README.md +85 -0
  5. package/chain/agent-step-example.js +232 -0
  6. package/chain/docs/AGENT.md +126 -0
  7. package/chain/docs/GRAPH.md +490 -0
  8. package/chain/examples.js +314 -0
  9. package/chain/index.js +31 -0
  10. package/chain/lib/agent.js +338 -0
  11. package/chain/lib/flow/agent-step.js +119 -0
  12. package/chain/lib/flow/edge.js +24 -0
  13. package/chain/lib/flow/flow.js +76 -0
  14. package/chain/lib/flow/graph.js +331 -0
  15. package/chain/lib/flow/index.js +7 -0
  16. package/chain/lib/flow/step.js +63 -0
  17. package/chain/lib/memory/in-memory.js +117 -0
  18. package/chain/lib/memory/index.js +36 -0
  19. package/chain/lib/memory/lance-memory.js +225 -0
  20. package/chain/lib/memory/sqlite-memory.js +309 -0
  21. package/chain/simple-agent-step-example.js +168 -0
  22. package/chain/workflow-example-usage.js +70 -0
  23. package/chain/workflow-example.json +59 -0
  24. package/core/README.md +485 -0
  25. package/core/cli.js +275 -0
  26. package/core/docs/BASIC_USAGE.md +62 -0
  27. package/core/docs/CLI.md +104 -0
  28. package/{docs → core/docs}/GET_STARTED.md +129 -129
  29. package/{docs → core/docs}/GUARDRAILS_GUIDE.md +734 -734
  30. package/{docs → core/docs}/README.md +47 -47
  31. package/core/docs/ROUTER_GUIDE.md +199 -0
  32. package/{docs → core/docs}/SERVER_MODE.md +358 -350
  33. package/core/index.js +115 -0
  34. package/{providers → core/providers}/ollama.js +14 -6
  35. package/{providers → core/providers}/openai.js +14 -6
  36. package/{providers → core/providers}/openrouter.js +14 -6
  37. package/core/router.js +252 -0
  38. package/{server.js → core/server.js} +15 -5
  39. package/package.json +43 -27
  40. package/cli.js +0 -195
  41. package/docs/BASIC_USAGE.md +0 -296
  42. package/docs/CLI.md +0 -455
  43. package/docs/ROUTER_GUIDE.md +0 -402
  44. package/index.js +0 -267
  45. package/router.js +0 -273
  46. package/test-completion.js +0 -99
  47. package/test.js +0 -246
  48. /package/{config.yaml → core/config.yaml} +0 -0
  49. /package/{logger.js → core/logger.js} +0 -0
package/{docs → core/docs}/README.md
@@ -1,47 +1,47 @@
- # llmjs2 Documentation
-
- Welcome to the llmjs2 documentation! This folder contains all the documentation for using and understanding the llmjs2 library.
-
- ## Documentation Overview
-
- ### 📖 Getting Started
-
- - **[GET_STARTED.md](GET_STARTED.md)** - Quick setup guide for new users (5 minutes to first completion)
-
- ### 🔧 Usage Guides
-
- - **[BASIC_USAGE.md](BASIC_USAGE.md)** - Core API patterns, configuration, and common use cases
- - **[ROUTER_GUIDE.md](ROUTER_GUIDE.md)** - Model routing and load balancing
- - **[GUARDRAILS_GUIDE.md](GUARDRAILS_GUIDE.md)** - Content filtering and request processing
- - **[SERVER_MODE.md](SERVER_MODE.md)** - Run llmjs2 as an OpenAI-compatible API server with routing
- - **[CLI.md](CLI.md)** - Command-line interface for server management
-
- ## Quick Navigation
-
- ### New to llmjs2?
-
- Start with **[GET_STARTED.md](GET_STARTED.md)** to get up and running quickly.
-
- ### Want to use the API directly?
-
- Check out **[BASIC_USAGE.md](BASIC_USAGE.md)** for different API patterns and examples.
-
- ### Need routing and load balancing?
-
- See **[ROUTER_GUIDE.md](ROUTER_GUIDE.md)** for intelligent model routing.
-
- ### Need content filtering or custom processing?
-
- See **[GUARDRAILS_GUIDE.md](GUARDRAILS_GUIDE.md)** for guardrails and request processing.
-
- ### Need to set up a server?
-
- See **[SERVER_MODE.md](SERVER_MODE.md)** for OpenAI-compatible server setup.
-
- ### Prefer command-line tools?
-
- **[CLI.md](CLI.md)** covers the command-line interface and configuration files.
-
- ## Contributing
-
- Documentation improvements are welcome! Please ensure any changes maintain consistency across all documentation files.
+ # llmjs2 Documentation
+
+ Welcome to the llmjs2 documentation! This folder contains all the documentation for using and understanding the llmjs2 library.
+
+ ## Documentation Overview
+
+ ### 📖 Getting Started
+
+ - **[GET_STARTED.md](GET_STARTED.md)** - Quick setup guide for new users (5 minutes to first completion)
+
+ ### 🔧 Usage Guides
+
+ - **[BASIC_USAGE.md](BASIC_USAGE.md)** - Core API patterns, configuration, and common use cases
+ - **[ROUTER_GUIDE.md](ROUTER_GUIDE.md)** - Model routing and load balancing
+ - **[GUARDRAILS_GUIDE.md](GUARDRAILS_GUIDE.md)** - Content filtering and request processing
+ - **[SERVER_MODE.md](SERVER_MODE.md)** - Run llmjs2 as an OpenAI-compatible API server with routing
+ - **[CLI.md](CLI.md)** - Command-line interface for server management
+
+ ## Quick Navigation
+
+ ### New to llmjs2?
+
+ Start with **[GET_STARTED.md](GET_STARTED.md)** to get up and running quickly.
+
+ ### Want to use the API directly?
+
+ Check out **[BASIC_USAGE.md](BASIC_USAGE.md)** for different API patterns and examples.
+
+ ### Need routing and load balancing?
+
+ See **[ROUTER_GUIDE.md](ROUTER_GUIDE.md)** for intelligent model routing.
+
+ ### Need content filtering or custom processing?
+
+ See **[GUARDRAILS_GUIDE.md](GUARDRAILS_GUIDE.md)** for guardrails and request processing.
+
+ ### Need to set up a server?
+
+ See **[SERVER_MODE.md](SERVER_MODE.md)** for OpenAI-compatible server setup.
+
+ ### Prefer command-line tools?
+
+ **[CLI.md](CLI.md)** covers the command-line interface and configuration files.
+
+ ## Contributing
+
+ Documentation improvements are welcome! Please ensure any changes maintain consistency across all documentation files.
package/core/docs/ROUTER_GUIDE.md (new file)
@@ -0,0 +1,199 @@
+ # llmjs2 Router Usage Guide
+
+ The llmjs2 router provides intelligent model routing and load balancing, allowing you to distribute requests across multiple model deployments with different strategies.
+
+ ## Overview
+
+ The router system enables:
+
+ - **Load balancing** across models with the same name
+ - **Multiple routing strategies** (default, random, sequential)
+ - **Provider-agnostic routing** with a unified API
+ - **Flexible model configuration** for different providers
+
+ ## Quick Start
+
+ ### Basic Setup
+
+ ```javascript
+ import { router } from 'llmjs2';
+
+ // Define your model deployments
+ const modelList = [
+   {
+     "model_name": "gpt-3.5-turbo",
+     "llm_params": {
+       "model": "ollama/chatgpt-v-2",
+       "api_key": process.env.OLLAMA_API_KEY,
+       "api_base": process.env.OLLAMA_API_BASE
+     }
+   },
+   {
+     "model_name": "openai-turbo",
+     "llm_params": {
+       "model": "gpt-3.5-turbo",
+       "api_key": process.env.OPENAI_API_KEY
+     }
+   },
+   {
+     "model_name": "gpt-4",
+     "llm_params": {
+       "model": "ollama/gpt-4",
+       "api_key": process.env.OLLAMA_API_KEY,
+       "api_base": process.env.OLLAMA_API_BASE
+     }
+   }
+ ];
+
+ // Create routers with different strategies
+ const defaultRouter = router(modelList);
+ const randomRouter = router(modelList, 'random');
+ const sequentialRouter = router(modelList, 'sequential');
+ ```
+
+ ### Basic Usage
+
+ ```javascript
+ // Route to a specific model
+ const response = await defaultRouter.completion({
+   model: "gpt-3.5-turbo",
+   messages: [{"role": "user", "content": "Hey, how's it going?"}]
+ });
+
+ // Auto-route with the random strategy
+ const randomResponse = await randomRouter.completion({
+   messages: [{"role": "user", "content": "Hey, how's it going?"}]
+ });
+
+ // Auto-route with the sequential strategy
+ const seqResponse = await sequentialRouter.completion({
+   messages: [{"role": "user", "content": "Hey, how's it going?"}]
+ });
+ ```
+
+ ## Model Configuration
+
+ ### Model List Format
+
+ Each model in the list is defined with:
+
+ ```javascript
+ {
+   "model_name": "string",   // Alias for routing (can have multiple providers)
+   "llm_params": {           // Provider-specific parameters
+     "model": "string",      // Actual model identifier for the provider
+     "api_key": "string",    // API key (can use environment variables)
+     "api_base": "string?",  // Custom API base URL (optional)
+     // ... other provider-specific params
+   }
+ }
+ ```
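
Editor's note: since several list entries may share one `model_name`, the list behaves like an alias table. The sketch below is illustrative only (the `groupByAlias` helper is hypothetical, not part of llmjs2) and shows what the router load-balances over when an alias maps to multiple deployments:

```javascript
// Hypothetical helper (not part of llmjs2): index a model list by its
// "model_name" alias. Entries sharing an alias form one load-balancing pool.
function groupByAlias(modelList) {
  const pools = new Map();
  for (const entry of modelList) {
    if (!pools.has(entry.model_name)) pools.set(entry.model_name, []);
    pools.get(entry.model_name).push(entry.llm_params);
  }
  return pools;
}

// Two deployments registered under the same alias:
const pools = groupByAlias([
  { model_name: "gpt-3.5-turbo", llm_params: { model: "ollama/chatgpt-v-2" } },
  { model_name: "gpt-3.5-turbo", llm_params: { model: "gpt-3.5-turbo" } },
  { model_name: "gpt-4", llm_params: { model: "ollama/gpt-4" } }
]);
console.log(pools.get("gpt-3.5-turbo").length); // 2
```

A request naming `model: "gpt-3.5-turbo"` would then pick one entry from that pool.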
+
+ ### Supported Providers
+
+ #### Ollama
+
+ ```javascript
+ {
+   "model_name": "my-ollama-model",
+   "llm_params": {
+     "model": "ollama/llama2",
+     "api_key": process.env.OLLAMA_API_KEY,
+     "api_base": process.env.OLLAMA_API_BASE
+   }
+ }
+ ```
+
+ #### OpenRouter
+
+ ```javascript
+ {
+   "model_name": "my-openrouter-model",
+   "llm_params": {
+     "model": "openrouter/free-model",
+     "api_key": process.env.OPEN_ROUTER_API_KEY
+   }
+ }
+ ```
+
+ #### OpenAI
+
+ ```javascript
+ {
+   "model_name": "my-openai-model",
+   "llm_params": {
+     "model": "openai/gpt-4",
+     "api_key": process.env.OPENAI_API_KEY
+   }
+ }
+ ```
+
+ ## Routing Strategies
+
+ ### Default Strategy
+
+ When no strategy is specified, the router uses sequential selection across all available models for auto-routing (requests that do not name a specific model).
+
+ ```javascript
+ const route = router(modelList); // or router(modelList, 'default')
+
+ // Auto-route with sequential selection (cycles through all models)
+ const autoResponse = await route.completion({
+   messages: [...]
+ });
+
+ // Routes to one of the models with model_name="gpt-3.5-turbo" (load balancing)
+ const namedResponse = await route.completion({
+   model: "gpt-3.5-turbo",
+   messages: [...]
+ });
+ ```
+
+ ### Random Strategy
+
+ Randomly selects from the available models when no specific model is requested.
+
+ ```javascript
+ const route = router(modelList, 'random');
+
+ // Randomly selects from ALL models in the list
+ const response = await route.completion({
+   messages: [...]
+ });
+ ```
+
+ ### Sequential Strategy
+
+ Cycles through the models in order, one per request.
+
+ ```javascript
+ const route = router(modelList, 'sequential');
+
+ // Uses the first model, then the second, then the third, and so on
+ const response1 = await route.completion({ messages: [...] }); // model 1
+ const response2 = await route.completion({ messages: [...] }); // model 2
+ const response3 = await route.completion({ messages: [...] }); // model 3
+ // ... cycles back to model 1
+ ```
+
+ ## Error Handling
+
+ ```javascript
+ try {
+   const response = await route.completion({
+     model: "non-existent-model",
+     messages: [{"role": "user", "content": "Hello"}]
+   });
+ } catch (error) {
+   if (error.message.includes('Model not found')) {
+     console.log('Model not configured in router');
+   } else if (error.message.includes('API key')) {
+     console.log('Provider API key missing');
+   } else {
+     console.log('Routing error:', error.message);
+   }
+ }
+ ```
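
Editor's note: since the guide's error handling dispatches on substrings of `error.message`, those checks can be collected into one helper. This is a hedged sketch; `classifyRouterError` is not a llmjs2 API, just an illustration of centralizing the string matching:

```javascript
// Hypothetical helper (not part of llmjs2): map an error message to a
// coarse category so callers don't repeat the substring checks.
function classifyRouterError(error) {
  const msg = String((error && error.message) || "");
  if (msg.includes("Model not found")) return "unknown-model";
  if (msg.includes("API key")) return "missing-api-key";
  return "routing-error";
}

console.log(classifyRouterError(new Error("Model not found: foo"))); // "unknown-model"
```

A caller could then `switch` on the category instead of chaining `includes` checks in every `catch` block.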
+
+ ##