unified-ai-router 3.3.10 → 3.3.11

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -9,8 +9,6 @@ This page focuses on how to configure the router for local development and produ
  * Provide examples for local, staging, and cloud deployments (Render.com, ...).
  * Troubleshooting tips when providers fail or models are not found.

- ---
-
  ## .env (environment variables)

  The repository includes a `.env.example` file with common keys. Copy it to `.env` and fill the keys for the providers you plan to use:
@@ -36,8 +34,6 @@ cp .env.example .env
  * For cloud deployments, set the same variables in your provider’s environment configuration (Render, etc.).
  * Rotate keys regularly and use least-privileged keys where the provider supports them.

- ---
-
  ## `provider.js` — how it works

  `provider.js` exports an **ordered array** of provider configuration objects. The router will attempt each provider in array order and fall back automatically if one fails.
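The ordered-array-with-fallback behavior described in that hunk can be sketched as follows. This is a minimal illustration, not the project's actual schema: the field names (`name`, `apiKey`, `call`) and the error handling are assumptions.

```javascript
// Hypothetical, minimal illustration of ordered-provider fallback.
// Field names (name, apiKey, call) are illustrative, not the real schema.
const providers = [
  // First provider fails here to demonstrate the fallback path.
  { name: "openai", apiKey: process.env.OPENAI_API_KEY, call: async () => { throw new Error("rate limited"); } },
  { name: "google", apiKey: process.env.GOOGLE_API_KEY, call: async () => "hello from fallback" },
];

// Try each configured provider in array order; return the first success.
async function chatWithFallback(providerList) {
  let lastError;
  for (const p of providerList) {
    try {
      return await p.call();
    } catch (err) {
      lastError = err; // remember why this provider failed, then fall through
    }
  }
  throw lastError ?? new Error("no providers configured");
}
```

Because the array is ordered, putting your cheapest or most reliable provider first determines the default path; the rest only run on failure.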
@@ -103,22 +99,16 @@ module.exports = [
  }
  ```

- ---
-
  ## Model selection and compatibility

  * Choose a `model` that the provider actually exposes. The router attempts to list models via the provider client using `client.models.list()` — if the model is not found it will warn in logs.
  * Some providers require different model name formats (e.g. `models/gpt-4` vs `gpt-4`). If in doubt, query the provider’s models endpoint or check their docs.
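The warn-if-missing check described in those bullets can be sketched like this. The function name, the warning text, and the prefix handling are illustrative stand-ins, with `modelIds` representing whatever the router collects from `client.models.list()`:

```javascript
// Warn (rather than fail) when a configured model is not in the
// provider's model list — a sketch of the check described above.
// Function name and message are illustrative, not the router's code.
function checkModelAvailable(modelIds, wanted) {
  // Some providers prefix model names (e.g. "models/gpt-4" vs "gpt-4"),
  // so also accept a prefixed match.
  const found = modelIds.some((id) => id === wanted || id === `models/${wanted}`);
  if (!found) {
    console.warn(`Model "${wanted}" not found in provider model list`);
  }
  return found;
}
```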

- ---
-
  ## Tool-calling and streaming

  * If you plan to use **tools** (the project supports OpenAI-style tool metadata), pass `tools` into `chatCompletion` calls and make sure the chosen provider supports tool-calling. Not all providers do.
  * Streaming is enabled by passing `stream: true` to the endpoint or API call. Ensure the provider supports SSE/streaming and the model supports streaming.
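Streamed responses with `stream: true` are typically consumed as an async iterable of chunks carrying partial deltas. The chunk shape below follows the OpenAI-style streaming format; the generator is a stand-in for a real provider call, not this project's code:

```javascript
// Stand-in for a provider streaming call (stream: true). A real client
// would yield SSE-backed chunks; here we fake OpenAI-style delta chunks.
async function* fakeStream() {
  const pieces = ["Hel", "lo, ", "world"];
  for (const text of pieces) {
    yield { choices: [{ delta: { content: text } }] };
  }
}

// Accumulate delta.content from each chunk into the full reply.
async function collectStream(stream) {
  let reply = "";
  for await (const chunk of stream) {
    reply += chunk.choices[0]?.delta?.content ?? "";
  }
  return reply;
}
```

The `?? ""` guard matters in practice: the final chunk of an OpenAI-style stream carries a finish reason with no `content` delta.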

- ---
-
  ## Local testing & examples

  * Non-streaming test:
@@ -139,15 +129,11 @@ node tests/openai-server-stream.js
  node tests/chat.js
  ```

- ---
-
  ## Deployment tips

  * **Render**: Add the same env variables to service settings. Use `npm start` as the start command (project `package.json` already sets this).
  * If you change `.env` or `provider.js`, restart the Node process.

- ---
-
  ## Troubleshooting

  * `Skipping provider ... due to missing API key` — check `.env` and deployment env configuration.
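A "skipping provider" message like the one in the troubleshooting bullet usually comes from a startup check along these lines. The field names (`apiKeyEnv`) and the exact message are illustrative assumptions, not the router's actual code:

```javascript
// Drop providers whose API key env var is unset, logging a warning for
// each — the kind of check behind a "Skipping provider" message.
// Field names and the exact message are illustrative.
function filterConfiguredProviders(providerList, env = process.env) {
  return providerList.filter((p) => {
    if (!env[p.apiKeyEnv]) {
      console.warn(`Skipping provider ${p.name} due to missing API key`);
      return false;
    }
    return true;
  });
}
```

So if a provider you expect to run is being skipped, confirm the corresponding variable is actually set in the environment the Node process sees, not just in a local `.env` that the deployment never loads.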
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "unified-ai-router",
- "version": "3.3.10",
+ "version": "3.3.11",
  "description": "A unified interface for multiple LLM providers with automatic fallback. This project includes an OpenAI-compatible server and a deployable Telegram bot with a Mini App interface. It supports major providers like OpenAI, Google, Grok, and more, ensuring reliability and flexibility for your AI applications.",
  "license": "ISC",
  "author": "mlibre",