llmjs2 1.3.1 → 1.3.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,6 +1,6 @@
 # llmjs2
 
-A unified, enterprise-grade Node.js library for connecting to multiple Large Language Model (LLM) providers: OpenAI, Ollama, and OpenRouter.
+A unified Node.js library for connecting to multiple Large Language Model (LLM) providers: OpenAI, Ollama, and OpenRouter.
 
 **Features:**
 - **Unified API**: Single interface for OpenAI, Ollama, and OpenRouter
@@ -37,11 +37,8 @@ npm install -g llmjs2
 Try the sample configuration:
 
 ```bash
-# Validate configuration
-node validate-config.js
-
 # Start server with sample config
-node cli.js --config config.yaml --port 3001
+llmjs2 --config config.yaml --port 3001
 
 # Test the API
 curl -X POST http://localhost:3001/v1/chat/completions \
@@ -59,8 +56,34 @@ curl -X POST http://localhost:3001/v1/chat/completions \
 # {"role": "assistant", "content": "Hi there!"}
 # ]
 # }
+
+## Programmatic Configuration
+
+For advanced users, you can configure llmjs2 router programmatically instead of using YAML:
+
+```bash
+npm run router:example
 ```
 
+See `server-config.js` for a complete example of configuring models, guardrails, and routing in JavaScript code, with direct completion usage.
+
+## AI Chat App
+
+Experience llmjs2 with a simple terminal-based chat interface:
+
+```bash
+npm run chat
+```
+
+Features:
+- Conversational chat with message history
+- Automatic model routing (random selection)
+- Shows which model was used for each response
+- Simple guardrails (logging)
+- Graceful exit with "exit", "quit", or "bye"
+
+The chat app uses the same router configuration as the programmatic examples but provides an interactive chat experience.
+
 See `CONFIG_README.md` for detailed configuration examples.
 
 ## Quick Start
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "llmjs2",
-  "version": "1.3.1",
+  "version": "1.3.2",
   "description": "A unified Node.js library for connecting to multiple LLM providers: OpenAI, Ollama, and OpenRouter",
   "main": "index.js",
   "type": "commonjs",
@@ -8,6 +8,8 @@
     "test": "node test.js",
     "start": "node cli.js",
     "server": "node cli.js",
+    "router:example": "node server-config.js",
+    "chat": "node chat-app.js",
     "lint": "echo 'No linting configured'",
     "typecheck": "echo 'No TypeScript configured'"
   },
package/CONFIG_README.md DELETED
@@ -1,98 +0,0 @@
-# llmjs2 Configuration Guide
-
-This directory contains sample configuration files for testing the llmjs2 library.
-
-## Files
-
-- `config.yaml` - Comprehensive sample configuration with models, guardrails, and routing
-- `.env` - Sample environment variables (add your API keys here)
-- `validate-config.js` - Script to validate your configuration
-
-## Quick Start
-
-1. **Add your API keys** to `.env`:
-   ```bash
-   # Uncomment and add your keys
-   OPENAI_API_KEY=your_actual_openai_key
-   # OLLAMA_API_KEY and OPEN_ROUTER_API_KEY are already set for testing
-   ```
-
-2. **Validate the configuration**:
-   ```bash
-   node validate-config.js
-   ```
-
-3. **Start the server**:
-   ```bash
-   node cli.js --config config.yaml --port 3001
-   ```
-
-4. **Test the API**:
-   ```bash
-   curl -X POST http://localhost:3001/v1/chat/completions \
-     -H "Content-Type: application/json" \
-     -d '{"messages":[{"role":"user","content":"Hello!"}]}'
-   ```
-
-## Configuration Structure
-
-### Model List
-Defines available models and their providers:
-- `model_name`: Alias for routing (can have multiple providers)
-- `llm_params`: Provider-specific configuration
-  - `model`: Full model identifier in format `[provider]/[actual-model-name]` (e.g., `openai/gpt-4`, `ollama/minimax-m2.5:cloud`)
-  - `api_key`: API key (supports `os.environ/VAR_NAME` syntax)
-  - `api_base`: Optional custom API endpoint
-
-**Important**: The `[provider]/` prefix is used internally for routing. When requests are sent to LLM providers, only the `[actual-model-name]` part is used.
-
-### Guardrails
-Custom processing logic for requests/responses:
-- `name`: Unique identifier
-- `mode`: `pre_call` (before LLM) or `post_call` (after LLM)
-- `code`: JavaScript function as string
-
-### Router Settings
-- `routing_strategy`: `default`, `random`, or `sequential`
-
-## Testing Different Features
-
-### Load Balancing
-The config includes multiple models with the same `model_name` (like `free-model`) to demonstrate load balancing.
-
-### Guardrails
-Try sending requests with inappropriate content to see filtering in action:
-```bash
-curl -X POST http://localhost:3001/v1/chat/completions \
-  -H "Content-Type: application/json" \
-  -d '{"messages":[{"role":"user","content":"This contains badword"}]}'
-```
-
-### Different Routing Strategies
-Edit `router_settings.routing_strategy` to test:
-- `random`: Random model selection
-- `sequential`: Cycle through models
-- `default`: Load balance same model names
-
-## Environment Variables
-
-The configuration supports environment variables:
-- `os.environ/VAR_NAME` syntax in YAML
-- `.env` file loaded automatically
-- Standard environment variables override defaults
-
-## Troubleshooting
-
-- **"Model not found"**: Check that API keys are set in `.env`
-- **"Rate limit exceeded"**: The config includes rate limiting (10 requests/minute)
-- **Server won't start**: Run `node validate-config.js` to check configuration
-- **API errors**: Check server logs for detailed error messages
-
-## Production Usage
-
-For production:
-1. Remove test API keys from `.env`
-2. Add your actual API keys
-3. Configure proper rate limiting and content filtering
-4. Set up proper logging and monitoring
-5. Use environment-specific config files (dev/prod)
package/validate-config.js DELETED
@@ -1,87 +0,0 @@
-#!/usr/bin/env node
-
-// Test script to validate config.yaml loading
-const yaml = require('yaml');
-const fs = require('fs');
-const path = require('path');
-
-function loadAndValidateConfig() {
-  const configPath = path.join(__dirname, 'config.yaml');
-
-  if (!fs.existsSync(configPath)) {
-    console.error('❌ config.yaml not found');
-    return false;
-  }
-
-  try {
-    const configContent = fs.readFileSync(configPath, 'utf8');
-    const config = yaml.parse(configContent);
-
-    console.log('✅ Config file loaded successfully');
-    console.log(`📋 Found ${config.model_list?.length || 0} models`);
-    console.log(`🛡️ Found ${config.guardrails?.length || 0} guardrails`);
-    console.log(`🔀 Routing strategy: ${config.router_settings?.routing_strategy || 'default'}`);
-
-    // Validate model list
-    if (!config.model_list || !Array.isArray(config.model_list)) {
-      console.error('❌ model_list must be an array');
-      return false;
-    }
-
-    // Check each model
-    const modelNames = [];
-    for (const model of config.model_list) {
-      if (!model.model_name || !model.llm_params?.model) {
-        console.error('❌ Each model must have model_name and llm_params.model');
-        return false;
-      }
-      modelNames.push(model.model_name);
-    }
-
-    console.log(`📊 Model names: ${[...new Set(modelNames)].join(', ')}`);
-
-    // Validate guardrails
-    if (config.guardrails) {
-      for (const guardrail of config.guardrails) {
-        if (!guardrail.name || !guardrail.mode || !guardrail.code) {
-          console.error(`❌ Guardrail "${guardrail.name || 'unnamed'}" missing required fields`);
-          return false;
-        }
-        if (!['pre_call', 'post_call'].includes(guardrail.mode)) {
-          console.error(`❌ Guardrail "${guardrail.name}" has invalid mode: ${guardrail.mode}`);
-          return false;
-        }
-      }
-    }
-
-    // Test environment variable resolution
-    const testModel = config.model_list[0];
-    if (testModel.llm_params.api_key?.startsWith('os.environ/')) {
-      const envVar = testModel.llm_params.api_key.replace('os.environ/', '');
-      const envValue = process.env[envVar];
-      console.log(`🔑 API key for ${testModel.llm_params.model}: ${envValue ? '✅ Set' : '❌ Not set'}`);
-    }
-
-    console.log('🎉 Config validation passed!');
-    return true;
-
-  } catch (error) {
-    console.error('❌ Config validation failed:', error.message);
-    return false;
-  }
-}
-
-// Run validation
-console.log('🔍 Validating config.yaml...\n');
-const success = loadAndValidateConfig();
-
-if (!success) {
-  process.exit(1);
-}
-
-console.log('\n💡 To test the server, run:');
-console.log('  node cli.js --config config.yaml --port 3001');
-console.log('\n📖 To test with curl:');
-console.log('  curl -X POST http://localhost:3001/v1/chat/completions \\');
-console.log('    -H "Content-Type: application/json" \\');
-console.log('    -d \'{"messages":[{"role":"user","content":"Hello!"}]}\'');
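The deleted `CONFIG_README.md` describes the `config.yaml` structure only in prose. As a rough sketch of what such a file might look like, assembled from that guide's field descriptions (the model names, placeholder key, and guardrail snippet below are illustrative assumptions, not the sample config shipped in the package):

```yaml
# Hypothetical sketch based on the CONFIG_README.md field descriptions;
# values are placeholders, not the package's actual sample config.
model_list:
  - model_name: free-model              # alias used for routing
    llm_params:
      model: openai/gpt-4               # [provider]/[actual-model-name]
      api_key: os.environ/OPENAI_API_KEY
  - model_name: free-model              # same alias -> load balancing
    llm_params:
      model: ollama/minimax-m2.5:cloud
      api_base: http://localhost:11434  # optional custom endpoint

guardrails:
  - name: log-requests
    mode: pre_call                      # or post_call (after the LLM)
    code: "(req) => { console.log(req.messages); return req; }"

router_settings:
  routing_strategy: random              # default | random | sequential
```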