@khanglvm/llm-router 1.0.9 → 1.1.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/CHANGELOG.md +22 -0
- package/README.md +36 -6
- package/package.json +1 -1
- package/src/cli/router-module.js +669 -91
- package/src/cli-entry.js +33 -5
- package/src/node/config-workflows.js +30 -2
- package/src/node/listen-port.js +43 -0
- package/src/node/port-reclaim.js +7 -1
- package/src/node/start-command.js +222 -19
- package/src/node/startup-manager.js +4 -1
- package/src/runtime/codex-request-transformer.js +120 -0
- package/src/runtime/config.js +62 -9
- package/src/runtime/handler/provider-call.js +10 -0
- package/src/runtime/subscription-auth.js +349 -0
- package/src/runtime/subscription-constants.js +30 -0
- package/src/runtime/subscription-provider.js +361 -0
- package/src/runtime/subscription-tokens.js +131 -0
package/CHANGELOG.md
CHANGED
@@ -5,6 +5,27 @@ All notable changes to this project will be documented in this file.
 The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
 and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 
+## [1.1.0] - 2026-03-04
+
+### Added
+- Added full `config --operation=upsert-provider` UX support for subscription providers:
+  - `--type=subscription`
+  - `--subscription-type=chatgpt-codex`
+  - `--subscription-profile=<name>`
+- Added subscription provider coverage tests for config workflows and runtime provider-call behavior.
+- Added `.gitignore` rules for local IDE and deploy temp artifacts (`.idea/`, `.llm-router.deploy.*.wrangler.toml`).
+
+### Changed
+- Updated config summaries and AI-help guidance to include subscription provider details and setup commands.
+- Updated README setup guide with explicit ChatGPT Codex subscription onboarding flow.
+
+### Fixed
+- Fixed subscription status command import path so `llm-router subscription status` works reliably.
+- Fixed subscription provider request path to run standard request translation/mapping before OAuth-backed provider call.
+- Fixed subscription provider config validation and normalization:
+  - subscription providers no longer require `baseUrl`
+  - predefined ChatGPT Codex model list is enforced during normalization.
+
 ## [1.0.9] - 2026-03-03
 
 ### Added
@@ -16,6 +37,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 - Refactored CLI deploy/runtime handler code into focused modules with cleaner boundaries.
 - Updated provider-call timeout handling to support both `AbortSignal.timeout` and `AbortController` fallback.
 - Documented Worker safety defaults and switched README release/security links to canonical GitHub URLs.
+- Added local start port resolution via `--port`, `LLM_ROUTER_PORT`, or generic `PORT` env variables.
 
 ## [1.0.8] - 2026-02-28
 
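The port-resolution entry above describes a precedence chain (`--port`, then `LLM_ROUTER_PORT`, then generic `PORT`). A minimal sketch of that chain follows; it is a hypothetical illustration, not the package's actual `src/node/listen-port.js` code, and the default port value is an assumption.

```javascript
// Hypothetical sketch of the documented precedence: --port beats
// LLM_ROUTER_PORT, which beats generic PORT. Fallback is illustrative.
function resolveListenPort(cliPort, env = process.env, fallback = 8787) {
  for (const value of [cliPort, env.LLM_ROUTER_PORT, env.PORT]) {
    const port = Number.parseInt(value, 10);
    if (Number.isInteger(port) && port > 0 && port < 65536) {
      return port; // first valid candidate wins
    }
  }
  return fallback; // nothing usable was supplied
}

console.log(resolveListenPort("3001", {})); // → 3001
console.log(resolveListenPort(undefined, { PORT: "8080" })); // → 8080
```

Invalid or out-of-range candidates (e.g. `PORT=70000`) are skipped rather than rejected outright, so a bad generic `PORT` cannot mask a valid `--port`.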
package/README.md
CHANGED
@@ -24,7 +24,7 @@ Run `llm-router ai-help` first, then set up and operate llm-router for me using
 
 ## Main Workflow
 
-1. Add
+1. Add providers + models into llm-router (standard API-key providers or OAuth subscription providers)
 2. Optionally, group models as alias with load balancing and auto fallback support
 3. Start llm-router server, point your coding tool API and model to llm-router
 
@@ -77,10 +77,32 @@ Then follow this order.
 Flow:
 1. `Config manager`
 2. `Add/Edit provider`
-3.
-
+3. Select provider type:
+   - `standard` -> endpoint + API key + model list
+   - `subscription` -> OAuth profile for predefined ChatGPT Codex models
+4. Enter provider details
 5. Save
 
+### 1b) Add Subscription Provider (ChatGPT Codex)
+Commandline example:
+
+```bash
+llm-router config \
+  --operation=upsert-provider \
+  --provider-id=chatgpt \
+  --name="ChatGPT Subscription" \
+  --type=subscription \
+  --subscription-type=chatgpt-codex \
+  --subscription-profile=default
+
+llm-router subscription login --profile=default
+llm-router subscription status --profile=default
+```
+
+Notes:
+- `chatgpt-codex` subscription providers use predefined model IDs managed by llm-router releases.
+- No provider API key or endpoint probing is required for this provider type.
+
 ### 2) Configure Model Fallback (Optional)
 Flow:
 1. `Config manager`
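The subscription-provider rules above (no API key or endpoint, predefined model list enforced) could be normalized roughly as follows. This is a speculative sketch: the function name, config shape, and model IDs are assumptions for illustration, not the package's actual validation code.

```javascript
// Illustrative predefined list: the real model IDs ship with
// llm-router releases and may differ.
const CODEX_MODELS = ["gpt-5-codex"];

// Hypothetical normalizer: standard providers must carry a baseUrl,
// while subscription providers skip it and get the predefined
// model list enforced instead of any user-supplied one.
function normalizeProvider(provider) {
  if (provider.type === "subscription") {
    if (provider.subscriptionType !== "chatgpt-codex") {
      throw new Error(`Unsupported subscription type: ${provider.subscriptionType}`);
    }
    // No baseUrl or apiKey required; models are pinned.
    return { ...provider, models: [...CODEX_MODELS] };
  }
  if (!provider.baseUrl) {
    throw new Error("Standard providers require a baseUrl");
  }
  return provider;
}
```

This mirrors the changelog's fix: a subscription provider without `baseUrl` normalizes cleanly, while a standard provider without one is still rejected.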
@@ -127,10 +149,18 @@ Flow:
 llm-router start
 ```
 
+Custom port (optional):
+
+```bash
+llm-router start --port=3001
+# or
+LLM_ROUTER_PORT=3001 llm-router start
+```
+
 Local endpoints:
-- Unified: `http://127.0.0.1
-- Anthropic-style: `http://127.0.0.1
-- OpenAI-style: `http://127.0.0.1
+- Unified: `http://127.0.0.1:<port>/route`
+- Anthropic-style: `http://127.0.0.1:<port>/anthropic`
+- OpenAI-style: `http://127.0.0.1:<port>/openai`
 
 ## Connect your coding tool
 
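The three local endpoints in the README diff are all derived from the resolved port. A tiny hypothetical helper (the function name is assumed and is not part of the package's API) shows the mapping:

```javascript
// Hypothetical helper mirroring the README's endpoint list.
function localEndpoints(port) {
  const base = `http://127.0.0.1:${port}`;
  return {
    unified: `${base}/route`,       // unified routing endpoint
    anthropic: `${base}/anthropic`, // Anthropic-style API surface
    openai: `${base}/openai`,       // OpenAI-style API surface
  };
}

console.log(localEndpoints(3001).openai); // → http://127.0.0.1:3001/openai
```

A coding tool would then be pointed at whichever surface matches its expected API style, e.g. the `/openai` URL for OpenAI-compatible clients.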