@ljoukov/llm 3.0.1 → 3.0.2
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +17 -11
- package/dist/index.cjs +748 -26
- package/dist/index.cjs.map +1 -1
- package/dist/index.d.cts +4 -3
- package/dist/index.d.ts +4 -3
- package/dist/index.js +747 -26
- package/dist/index.js.map +1 -1
- package/package.json +5 -6
package/README.md
CHANGED
@@ -9,7 +9,7 @@ Unified TypeScript wrapper over:
 
 - **OpenAI Responses API** (`openai`)
 - **Google Gemini via Vertex AI** (`@google/genai`)
-- **Fireworks chat-completions models** (`kimi-k2.5`, `glm-5`, `minimax-m2.1`)
+- **Fireworks chat-completions models** (`kimi-k2.5`, `glm-5`, `minimax-m2.1`, `gpt-oss-120b`)
 - **ChatGPT subscription models** via `chatgpt-*` model ids (reuses Codex auth store, or a token provider)
 
 Designed around a single streaming API that yields:
@@ -34,6 +34,8 @@ See Node.js docs on environment variables and dotenv files: https://nodejs.org/a
 ### OpenAI
 
 - `OPENAI_API_KEY`
+- `OPENAI_RESPONSES_WEBSOCKET_MODE` (`auto` | `off` | `only`, default: `auto`)
+- `OPENAI_BASE_URL` (optional; defaults to `https://api.openai.com/v1`)
 
 ### Gemini (Vertex AI)
 
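The two variables added above come with documented defaults. A minimal sketch of reading them, assuming Node-style `process.env` access; the variable names and defaults come from the README, while the `readOpenAiConfig` helper and its validation are illustrative, not the package's actual internals:

```typescript
// Illustrative only: resolves the README-documented defaults for the
// new OpenAI environment variables. Not the package implementation.
type WebSocketMode = "auto" | "off" | "only";

function readOpenAiConfig(env: Record<string, string | undefined>): {
  apiKey: string | undefined;
  websocketMode: WebSocketMode;
  baseUrl: string;
} {
  // README: `auto` | `off` | `only`, default `auto`
  const mode = env.OPENAI_RESPONSES_WEBSOCKET_MODE ?? "auto";
  if (mode !== "auto" && mode !== "off" && mode !== "only") {
    throw new Error(`invalid OPENAI_RESPONSES_WEBSOCKET_MODE: ${mode}`);
  }
  return {
    apiKey: env.OPENAI_API_KEY,
    websocketMode: mode,
    // README: optional; defaults to https://api.openai.com/v1
    baseUrl: env.OPENAI_BASE_URL ?? "https://api.openai.com/v1",
  };
}
```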
@@ -86,20 +88,24 @@ refresh-token rotation and serves short-lived access tokens.
 - `CHATGPT_AUTH_TOKEN_PROVIDER_URL` (example: `https://chatgpt-auth.<your-domain>`)
 - `CHATGPT_AUTH_API_KEY` (shared secret; sent as `Authorization: Bearer ...` and `x-chatgpt-auth: ...`)
 - `CHATGPT_AUTH_TOKEN_PROVIDER_STORE` (`kv` or `d1`, defaults to `kv`)
+- `CHATGPT_RESPONSES_WEBSOCKET_MODE` (`auto` | `off` | `only`, default: `auto`)
 
-This repo includes a Cloudflare Workers token provider implementation in `chatgpt-auth
+This repo includes a Cloudflare Workers token provider implementation in `workers/chatgpt-auth/`.
 
-
+If `CHATGPT_AUTH_TOKEN_PROVIDER_URL` + `CHATGPT_AUTH_API_KEY` are set, `chatgpt-*` models will fetch tokens from the
+token provider and will not read the local Codex auth store.
 
-
-npm run chatgpt-auth:seed -- --worker-url https://chatgpt-auth.<your-domain>
-```
+### Responses transport
 
-
-
+For OpenAI and `chatgpt-*` model paths, this library now tries **Responses WebSocket transport first** and falls back
+to HTTP/SSE automatically when needed.
 
-
-
+- `auto` (default): try WebSocket first, then fall back to SSE
+- `off`: use SSE only
+- `only`: require WebSocket (no fallback)
+
+When fallback is triggered by an unsupported WebSocket upgrade response (for example `426`), the library keeps using
+SSE for the rest of the process to avoid repeated failing upgrade attempts.
 
 ## Usage
 
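The transport policy described in that hunk (WebSocket-first with a process-wide fall-back to SSE after a rejected upgrade) can be sketched as follows. This is an assumption-laden illustration of the README's wording, not the package's code; the `chooseTransport` / `onUpgradeRejected` names and the sticky flag are hypothetical:

```typescript
// Illustrative sketch of the documented fallback policy: WebSocket-first
// in `auto` mode, with SSE made "sticky" for the rest of the process once
// an upgrade is rejected (e.g. HTTP 426 Upgrade Required).
type WebSocketMode = "auto" | "off" | "only";

let sseSticky = false; // set once, after a failed upgrade, for the whole process

function chooseTransport(mode: WebSocketMode): "websocket" | "sse" {
  if (mode === "off") return "sse"; // SSE only
  if (mode === "only") return "websocket"; // no fallback; failures surface to the caller
  return sseSticky ? "sse" : "websocket"; // auto: try WebSocket first
}

function onUpgradeRejected(status: number): void {
  // Avoid repeated failing upgrade attempts for the rest of the process.
  if (status === 426) sseSticky = true;
}
```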
@@ -272,7 +278,7 @@ console.log(result.text);
 
 ### Fireworks
 
-Use Fireworks model ids directly (for example `kimi-k2.5`, `glm-5`, `minimax-m2.1`):
+Use Fireworks model ids directly (for example `kimi-k2.5`, `glm-5`, `minimax-m2.1`, `gpt-oss-120b`):
 
 ```ts
 import { generateText } from "@ljoukov/llm";