unified-ai-router 3.3.9 → 3.3.11

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -2,8 +2,6 @@
  
  This page focuses on how to configure the router for local development and production deployments: setting environment variables (`.env`) and customizing `provider.js`.
  
- ---
-
  ## Goals
  
  * Explain which environment variables the project expects and best practices for storing them.
@@ -11,8 +9,6 @@ This page focuses on how to configure the router for local development and produ
  * Provide examples for local, staging, and cloud deployments (Render.com, ...).
  * Troubleshooting tips when providers fail or models are not found.
  
- ---
-
  ## .env (environment variables)
  
  The repository includes a `.env.example` file with common keys. Copy it to `.env` and fill in the keys for the providers you plan to use:
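The copy-and-fill step above can be sanity-checked before starting the router. Below is a minimal Node sketch that mirrors the "skipping provider" behaviour mentioned later in this diff; the key names `OPENAI_API_KEY` and `GEMINI_API_KEY` are assumptions, so match them to the providers you actually configure:

```javascript
// Illustrative pre-flight check: list provider API keys that are still
// unset, since their providers would be skipped by the router.
// The key names below are assumptions, not the package's required set.
const requiredKeys = ["OPENAI_API_KEY", "GEMINI_API_KEY"];
const missing = requiredKeys.filter((key) => !process.env[key]);

if (missing.length > 0) {
  console.warn(`Providers may be skipped; missing keys: ${missing.join(", ")}`);
}
```

Running this before `npm start` surfaces configuration gaps without waiting for a request to fail over.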
@@ -38,8 +34,6 @@ cp .env.example .env
  * For cloud deployments, set the same variables in your provider’s environment configuration (Render, etc.).
  * Rotate keys regularly and use least-privileged keys where the provider supports them.
  
- ---
-
  ## `provider.js` — how it works
  
  `provider.js` exports an **ordered array** of provider configuration objects. The router will attempt each provider in array order and fall back automatically if one fails.
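The next hunk's context line confirms that `provider.js` ends in a `module.exports = [` array. A hedged sketch of what such an ordered list might look like (the field names, base URLs, and model names below are illustrative assumptions, not the package's documented schema):

```javascript
// Hypothetical provider.js: an ordered array of provider configs.
// The router described above tries index 0 first and falls back in order.
// All field names and values here are assumptions for illustration.
const providers = [
  {
    name: "openai",
    apiKey: process.env.OPENAI_API_KEY, // provider is skipped if unset
    baseURL: "https://api.openai.com/v1",
    model: "gpt-4o-mini",
  },
  {
    name: "gemini", // fallback if the first provider fails
    apiKey: process.env.GEMINI_API_KEY,
    baseURL: "https://generativelanguage.googleapis.com/v1beta/openai/",
    model: "gemini-1.5-flash",
  },
];

module.exports = providers;
```

Because order determines fallback priority, put the cheapest or most reliable provider first; consult the shipped `provider.js` for the real field names.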
@@ -105,22 +99,16 @@ module.exports = [
  }
  ```
  
- ---
-
  ## Model selection and compatibility
  
  * Choose a `model` that the provider actually exposes. The router attempts to list models via the provider client using `client.models.list()`; if the model is not found, it warns in the logs.
  * Some providers require different model name formats (e.g. `models/gpt-4` vs `gpt-4`). If in doubt, query the provider’s models endpoint or check their docs.
  
- ---
-
  ## Tool-calling and streaming
  
  * If you plan to use **tools** (the project supports OpenAI-style tool metadata), pass `tools` into `chatCompletion` calls and make sure the chosen provider supports tool-calling. Not all providers do.
  * Streaming is enabled by passing `stream: true` to the endpoint or API call. Ensure the provider supports SSE/streaming and the model supports streaming.
  
- ---
-
  ## Local testing & examples
  
  * Non-streaming test:
@@ -141,15 +129,11 @@ node tests/openai-server-stream.js
  node tests/chat.js
  ```
  
- ---
-
  ## Deployment tips
  
  * **Render**: Add the same env variables to the service settings. Use `npm start` as the start command (the project's `package.json` already sets this).
  * If you change `.env` or `provider.js`, restart the Node process.
  
- ---
-
  ## Troubleshooting
  
  * `Skipping provider ... due to missing API key` — check `.env` and deployment env configuration.
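The tool-calling and streaming options diffed above can be combined into a single OpenAI-style request body, sketched below. The shape follows the public OpenAI Chat Completions format; the model name and the `get_weather` tool are made-up examples, not part of this package:

```javascript
// Hypothetical OpenAI-style request body combining `stream: true` with
// OpenAI-style tool metadata, as the docs above describe.
const body = {
  model: "gpt-4o-mini", // must be a model the chosen provider actually exposes
  stream: true, // requires SSE/streaming support on the provider side
  messages: [{ role: "user", content: "What's the weather in Paris?" }],
  tools: [
    {
      type: "function",
      function: {
        name: "get_weather", // made-up tool for illustration
        description: "Look up the current weather for a city",
        parameters: {
          type: "object",
          properties: { city: { type: "string" } },
          required: ["city"],
        },
      },
    },
  ],
};
```

A body like this could be POSTed to the project's OpenAI-compatible endpoint or passed to `chatCompletion`, subject to the provider-support caveats noted above.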
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "unified-ai-router",
- "version": "3.3.9",
+ "version": "3.3.11",
  "description": "A unified interface for multiple LLM providers with automatic fallback. This project includes an OpenAI-compatible server and a deployable Telegram bot with a Mini App interface. It supports major providers like OpenAI, Google, Grok, and more, ensuring reliability and flexibility for your AI applications.",
  "license": "ISC",
  "author": "mlibre",