@khanglvm/llm-router 1.1.0 → 1.2.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md CHANGED
@@ -5,6 +5,30 @@ All notable changes to this project will be documented in this file.
  The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
  and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 
+ ## [1.2.0] - 2026-03-04
+
+ ### Added
+ - Added a Codex Responses API compatibility layer:
+   - request transformation into the Codex Responses payload shape
+   - response transformation from Codex Responses/SSE events into OpenAI Chat Completions-compatible output
+   - dedicated runtime tests covering request and response transformation
+ - Added an explicit project-level ignore for the local `AGENTS.md`.
+
+ ### Changed
+ - Reworked TUI/CLI operation reports into user-friendly structured layouts and tables across provider, model-alias, rate-limit, and config flows and operational actions.
+ - Reworked startup/deploy/worker-key/status outputs to show friendly fields instead of raw config-variable dumps.
+ - Updated subscription auth/provider flow behavior and tests for more robust OAuth/Codex subscription handling.
+
+ ### Fixed
+ - Fixed migration/reporting test expectations and summary-rendering stability after the report-format refactor.
+
+ ## [1.1.1] - 2026-03-04
+
+ ### Fixed
+ - Upgraded `@levu/snap` to `^0.3.12`, which declares the previously missing runtime dependency `picocolors`.
+   - Fixes the global-install runtime error:
+     `Cannot find package 'picocolors' imported from .../@levu/snap/dist/...`
 
  ## [1.1.0] - 2026-03-04
 
  ### Added
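The response-transformation layer the 1.2.0 changelog describes can be sketched roughly as follows. This is a hypothetical illustration, not the package's code: the function name is invented, and the event shapes follow the public OpenAI Responses streaming API (`response.output_text.delta`, `response.completed`) rather than anything confirmed from llm-router's source.

```javascript
// Hypothetical sketch of the Responses -> Chat Completions translation the
// changelog describes. Not the package's implementation; event names and
// shapes are taken from the public OpenAI Responses streaming API.
function toChatCompletionChunk(event, model) {
  if (event.type === "response.output_text.delta") {
    // Text deltas map onto the assistant delta content of a stream chunk.
    return {
      object: "chat.completion.chunk",
      model,
      choices: [{ index: 0, delta: { content: event.delta }, finish_reason: null }],
    };
  }
  if (event.type === "response.completed") {
    // The terminal event becomes an empty delta carrying finish_reason "stop".
    return {
      object: "chat.completion.chunk",
      model,
      choices: [{ index: 0, delta: {}, finish_reason: "stop" }],
    };
  }
  return null; // other event types are not translated in this sketch
}
```

A real compatibility layer would also carry over IDs, usage accounting, and tool-call events; this sketch covers only the text-streaming path.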
package/README.md CHANGED
@@ -77,10 +77,15 @@ Then follow this order.
  Flow:
  1. `Config manager`
  2. `Add/Edit provider`
- 3. Select provider type:
-    - `standard` -> endpoint + API key + model list
-    - `subscription` -> OAuth profile for predefined ChatGPT Codex models
- 4. Enter provider details
+ 3. Select provider auth mode:
+    - `API Key` -> endpoint + API key + model list
+    - `OAuth` -> browser OAuth + editable model list
+ 4. For `OAuth`:
+    - Choose a subscription provider (`ChatGPT` for now)
+    - Enter a Friendly Name and a Provider ID
+    - Complete the browser OAuth login inside the same flow
+    - Edit the model list (pre-filled defaults; you can add/remove)
+    - llm-router live-tests every selected model before saving
  5. Save
 
  ### 1b) Add Subscription Provider (ChatGPT Codex)
@@ -90,18 +95,14 @@ Commandline example:
  llm-router config \
  --operation=upsert-provider \
  --provider-id=chatgpt \
- --name="ChatGPT Subscription" \
- --type=subscription \
- --subscription-type=chatgpt-codex \
- --subscription-profile=default
-
- llm-router subscription login --profile=default
- llm-router subscription status --profile=default
+ --name="GPT Sub" \
+ --type=subscription
  ```
 
  Notes:
- - `chatgpt-codex` subscription providers use predefined model IDs managed by llm-router releases.
- - No provider API key or endpoint probing is required for this provider type.
+ - OAuth login runs during the provider upsert (browser flow by default).
+ - `chatgpt-codex` is the current subscription type; its default model list is pre-filled but editable.
+ - No provider API key or endpoint-probe input is required for subscription mode.
 
  ### 2) Configure Model Fallback (Optional)
  Flow:
@@ -161,6 +162,7 @@ Local endpoints:
  - Unified: `http://127.0.0.1:<port>/route`
  - Anthropic-style: `http://127.0.0.1:<port>/anthropic`
  - OpenAI-style: `http://127.0.0.1:<port>/openai`
+ - OpenAI Responses-style: `http://127.0.0.1:<port>/openai/v1/responses` (Codex CLI-compatible)
 
  ## Connect your coding tool
 
@@ -190,6 +192,8 @@ When local server is running:
 
  No stop/start cycle needed.
 
+ Config/status outputs are rendered as structured tables for easier operator review.
+
  ## Cloudflare Worker (Hosted)
 
  Use when you want a hosted endpoint instead of a local server.
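The new Responses-style endpoint added in this README can be exercised with a request built like the sketch below. This is a hypothetical illustration: the URL path comes from the README diff above, but the payload shape (`model` + `input`), the example port, and the model name are assumptions based on the public OpenAI Responses API, not confirmed llm-router behavior.

```javascript
// Hypothetical sketch: build a request for the new /openai/v1/responses
// endpoint. The path is from the README; the JSON body shape is assumed
// from the public OpenAI Responses API. Port and model are placeholders.
function buildResponsesRequest(port, model, prompt) {
  return {
    url: `http://127.0.0.1:${port}/openai/v1/responses`,
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model, input: prompt, stream: true }),
  };
}
```

A Codex CLI pointed at this base URL would issue requests of roughly this shape, which llm-router then routes to the configured provider.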
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "@khanglvm/llm-router",
-   "version": "1.1.0",
+   "version": "1.2.0",
    "description": "Single gateway endpoint for multi-provider LLMs with unified OpenAI+Anthropic format and seamless fallback",
    "keywords": [
      "llm-router",
@@ -30,7 +30,7 @@
    "test:provider-smoke": "node ./scripts/provider-smoke-suite.mjs"
  },
  "dependencies": {
-   "@levu/snap": "^0.3.11"
+   "@levu/snap": "^0.3.12"
  },
  "devDependencies": {
    "wrangler": "^4.68.1"
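The `@levu/snap` bump matters even though `^0.3.11` already permitted 0.3.12: existing lockfiles pinned to 0.3.11 (which lacked the `picocolors` dependency) still satisfy the old range, so raising the floor forces a re-resolve. A hand-rolled sketch of caret semantics for `0.x.y` versions (not the real npm `semver` implementation) shows why:

```javascript
// Minimal sketch of npm caret-range semantics for 0.x.y versions (hand-rolled,
// not the `semver` package): ^0.x.y pins the minor and sets a patch floor,
// i.e. ^0.3.11 means >=0.3.11 <0.4.0.
function satisfiesCaret0x(version, range) {
  const [vMaj, vMin, vPat] = version.split(".").map(Number);
  const [rMaj, rMin, rPat] = range.slice(1).split(".").map(Number);
  return vMaj === 0 && vMaj === rMaj && vMin === rMin && vPat >= rPat;
}
```

Under `^0.3.11` a locked 0.3.11 still matches; under `^0.3.12` it no longer does, so installs pick up the fixed release.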