unified-ai-router 3.4.3 → 3.4.4

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (3)
  1. package/package.json +1 -1
  2. package/provider.js +2 -1
  3. package/readme.md +17 -3
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "unified-ai-router",
- "version": "3.4.3",
+ "version": "3.4.4",
  "description": "A unified interface for multiple LLM providers with automatic fallback. This project includes an OpenAI-compatible server and a deployable Telegram bot with a Mini App interface. It supports major providers like OpenAI, Google, Grok, and more, ensuring reliability and flexibility for your AI applications.",
  "license": "ISC",
  "author": "mlibre",
package/provider.js CHANGED
@@ -24,7 +24,8 @@ module.exports = [
  process.env.OPENROUTER_API_KEY,
  process.env.OPENROUTER_API_KEY_2,
  process.env.OPENROUTER_API_KEY_3,
- process.env.OPENROUTER_API_KEY_4
+ process.env.OPENROUTER_API_KEY_4,
+ process.env.OPENROUTER_API_KEY_5
  ],
  model: "qwen/qwen3-coder:free",
  apiUrl: "https://openrouter.ai/api/v1",
package/readme.md CHANGED
@@ -2,8 +2,8 @@
 
  Unified AI Router is a comprehensive toolkit for AI applications, featuring:
 
- - An OpenAI-compatible server for seamless API integration
- - A unified interface for multiple LLM providers with automatic fallback
+ - An **OpenAI-compatible** server for seamless API integration
+ - A **unified interface** for multiple LLM providers with **automatic fallback**
 
  It supports all the OpenAI-compatible servers, including major providers like OpenAI, Google, Grok, Litellm, Vllm, Ollama and more, ensuring reliability and flexibility.
 
@@ -77,6 +77,19 @@ const response = await llm.chatCompletion(messages, {
  console.log(response);
  ```
 
+ You can also provide an array of API keys for a single provider definition.
+
+ ```javascript
+ const providers = [
+   {
+     name: "openai",
+     apiKey: [process.env.OPENAI_API_KEY_1, process.env.OPENAI_API_KEY_2],
+     model: "gpt-4",
+     apiUrl: "https://api.openai.com/v1"
+   }
+ ];
+ ```
+
  ### 🔌 OpenAI-Compatible Server
 
  The OpenAI-compatible server provides a drop-in replacement for the OpenAI API. It routes requests through the unified router with fallback logic, ensuring high availability.
@@ -90,7 +103,7 @@ The server uses the provider configurations defined in [provider.js](provider.js
 
  2. Edit `.env` and add your API keys for the desired providers (see [🔑 API Keys](#-api-keys) for sources).
 
- 3. Configure your providers in `provider.js`. Add new provider or modify existing ones with the appropriate `name`, `apiKey` (referencing the corresponding env variable), `model`, and `apiUrl` for the providers you want to use.
+ 3. Configure your providers in `provider.js`. Add new provider or modify existing ones with the appropriate `name`, `apiKey`, `model`, and `apiUrl` for the providers you want to use.
 
  To start the server locally, run:
 
@@ -105,6 +118,7 @@ The server listens at `http://localhost:3000/` and supports the following OpenAI
  - `GET /v1/models` - List available models
  - `GET /models` - List available models
  - `GET /health` - Health check
+ - `GET /v1/providers/status` - Check the status of all configured providers
 
  ### 🧪 Testing
 
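As a usage note, the newly documented status route can be exercised like the other endpoints once the server is running at the address the readme gives (`http://localhost:3000/`). A minimal sketch; the shape of the JSON it returns is not shown in this diff:

```javascript
// Sketch only: call the new provider-status endpoint of a locally running
// unified-ai-router server. Requires Node 18+ for the global fetch.
// The structure of the response body is not documented in this diff.
async function checkProviderStatus () {
  const res = await fetch("http://localhost:3000/v1/providers/status");
  if (!res.ok) {
    throw new Error(`Status check failed with HTTP ${res.status}`);
  }
  return res.json();
}

checkProviderStatus().then(console.log).catch(console.error);
```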