@khanglvm/llm-router 1.1.1 → 1.2.0
- package/CHANGELOG.md +17 -0
- package/README.md +17 -13
- package/package.json +1 -1
- package/src/cli/router-module.js +1289 -511
- package/src/runtime/codex-request-transformer.js +284 -28
- package/src/runtime/codex-response-transformer.js +433 -0
- package/src/runtime/config.js +9 -7
- package/src/runtime/handler/provider-call.js +83 -2
- package/src/runtime/subscription-auth.js +31 -4
- package/src/runtime/subscription-constants.js +11 -7
- package/src/runtime/subscription-provider.js +159 -32
package/CHANGELOG.md
CHANGED

@@ -5,6 +5,23 @@ All notable changes to this project will be documented in this file.
 The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
 and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 
+## [1.2.0] - 2026-03-04
+
+### Added
+- Added Codex Responses API compatibility layer:
+  - request transformation into Codex Responses payload shape
+  - response transformation from Codex responses/SSE events to OpenAI Chat Completions-compatible output
+  - dedicated runtime tests for request + response transformation coverage
+- Added explicit project-level ignore for local `AGENTS.md`.
+
+### Changed
+- Improved TUI/CLI operation reports to user-friendly structured layouts and tables across provider/model-alias/rate-limit/config flows and operational actions.
+- Improved startup/deploy/worker-key/status outputs to avoid raw config variable style and show friendly fields.
+- Updated subscription auth/provider flow behavior and tests for more robust OAuth/Codex subscription handling.
+
+### Fixed
+- Fixed migration/reporting test expectations and summary rendering stability after report format refactor.
+
 ## [1.1.1] - 2026-03-04
 
 ### Fixed
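The 1.2.0 changelog entry above describes transforming requests into the Codex Responses payload shape. As a rough, hypothetical sketch of that kind of Chat Completions → Responses mapping (the function and any field names beyond the public OpenAI shapes are illustrative assumptions, not the package's actual `codex-request-transformer.js` code):

```javascript
// Hypothetical sketch: convert an OpenAI Chat Completions-style request
// into a Responses-API-style payload. This is NOT llm-router's actual
// transformer; it only illustrates the shape change the changelog names.
function chatToResponsesPayload(chatRequest) {
  return {
    model: chatRequest.model,
    // Chat "messages" become Responses "input" items.
    input: chatRequest.messages.map((m) => ({
      role: m.role,
      content: [{ type: 'input_text', text: String(m.content) }],
    })),
    stream: Boolean(chatRequest.stream),
  };
}

const payload = chatToResponsesPayload({
  model: 'gpt-5',
  messages: [{ role: 'user', content: 'hello' }],
  stream: true,
});
console.log(JSON.stringify(payload));
```

The real transformer also has to carry over sampling parameters, tool definitions, and multi-part message content, which this sketch omits.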
package/README.md
CHANGED

@@ -77,10 +77,15 @@ Then follow this order.
 Flow:
 1. `Config manager`
 2. `Add/Edit provider`
-3. Select provider
-   - `
-   - `
-4.
+3. Select provider auth mode:
+   - `API Key` -> endpoint + API key + model list
+   - `OAuth` -> browser OAuth + editable model list
+4. For `OAuth`:
+   - Choose subscription provider (`ChatGPT` for now)
+   - Enter Friendly Name and Provider ID
+   - Complete browser OAuth login inside this same flow
+   - Edit model list (pre-filled defaults; you can add/remove)
+   - llm-router live-tests every selected model before save
 5. Save
 
 ### 1b) Add Subscription Provider (ChatGPT Codex)
@@ -90,18 +95,14 @@ Commandline example:
 llm-router config \
   --operation=upsert-provider \
   --provider-id=chatgpt \
-  --name="
-  --type=subscription
-  --subscription-type=chatgpt-codex \
-  --subscription-profile=default
-
-llm-router subscription login --profile=default
-llm-router subscription status --profile=default
+  --name="GPT Sub" \
+  --type=subscription
 ```
 
 Notes:
-- 
-- 
+- OAuth login is run during provider upsert (browser flow by default).
+- `chatgpt-codex` is the current subscription type and its default model list is prefilled, but editable.
+- No provider API key or endpoint probe input is required for subscription mode.
 
 ### 2) Configure Model Fallback (Optional)
 Flow:
@@ -161,6 +162,7 @@ Local endpoints:
 - Unified: `http://127.0.0.1:<port>/route`
 - Anthropic-style: `http://127.0.0.1:<port>/anthropic`
 - OpenAI-style: `http://127.0.0.1:<port>/openai`
+- OpenAI Responses-style: `http://127.0.0.1:<port>/openai/v1/responses` (Codex CLI-compatible)
 
 ## Connect your coding tool
 
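The new Responses-style endpoint speaks the OpenAI Responses event format, and the changelog notes a transformer from such responses/SSE events back to Chat Completions-compatible output (`codex-response-transformer.js`). As a minimal hypothetical sketch of one such event mapping (field names follow the public OpenAI shapes; this is not llm-router's internal code, which also handles tool calls, usage, and completion events):

```javascript
// Hypothetical sketch: map a single Responses-API SSE event to an OpenAI
// Chat Completions streaming chunk. Only the text-delta case is shown;
// other event types are ignored here for brevity.
function responsesEventToChatChunk(event, model) {
  if (event.type !== 'response.output_text.delta') return null;
  return {
    object: 'chat.completion.chunk',
    model,
    choices: [
      { index: 0, delta: { content: event.delta }, finish_reason: null },
    ],
  };
}

const chunk = responsesEventToChatChunk(
  { type: 'response.output_text.delta', delta: 'Hel' },
  'gpt-5'
);
console.log(JSON.stringify(chunk));
```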
@@ -190,6 +192,8 @@ When local server is running:
 
 No stop/start cycle needed.
 
+Config/status outputs are shown in structured table layouts for easier operator review.
+
 ## Cloudflare Worker (Hosted)
 
 Use when you want a hosted endpoint instead of local server.