@push.rocks/smartai 2.2.0 → 4.0.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/readme.md CHANGED
@@ -17,7 +17,7 @@ For reporting bugs, issues, or security vulnerabilities, please visit [community
  - **🔌 One function, eight providers** — `getModel()` returns a standard `LanguageModelV3`. Switch providers by changing a string.
  - **🧱 Built on Vercel AI SDK** — Uses `ai` v6 under the hood. Your model works with `generateText()`, `streamText()`, tool calling, structured output, and everything else in the AI SDK ecosystem.
  - **🏠 Custom Ollama provider** — A full `LanguageModelV3` implementation for Ollama with support for `think` mode, `num_ctx`, auto-tuned temperature for Qwen models, and native tool calling.
- - **💰 Anthropic prompt caching** — Automatic `cacheControl` middleware reduces cost and latency on repeated calls. Enabled by default, opt out with `promptCaching: false`.
+ - **💰 Prompt caching** — Anthropic cache-control middleware is enabled by default; provider cache helpers are available for agent/session integrations.
  - **📦 Modular subpath exports** — Vision, audio, image, document, and research capabilities ship as separate imports. Only import what you need.
  - **⚡ Zero lock-in** — Your code uses standard AI SDK types. Swap providers without touching application logic.
 
@@ -107,6 +107,61 @@ console.log(result.text);
 
  OpenAI `reasoningEffort` supports `'none'`, `'minimal'`, `'low'`, `'medium'`, `'high'`, and `'xhigh'`. Model IDs are accepted as strings, so new IDs like `'gpt-5.5'` can be used before upstream model unions are updated.
 
+ ### OpenAI ChatGPT / Codex Auth
+
+ SmartAI can request ChatGPT subscription-backed Codex credentials with OpenAI's device-code flow. The returned credentials are passed to `getModel()` through `openAiChatGptAuth`; SmartAI then routes OpenAI model calls through the ChatGPT Codex backend with the required account headers.
+
+ ```typescript
+ import {
+   completeOpenAiChatGptDeviceCodeLogin,
+   getModel,
+   requestOpenAiChatGptDeviceCode,
+ } from '@push.rocks/smartai';
+
+ const deviceCode = await requestOpenAiChatGptDeviceCode();
+ console.log(`Open ${deviceCode.verificationUrl} and enter ${deviceCode.userCode}`);
+
+ const openAiChatGptAuth = await completeOpenAiChatGptDeviceCodeLogin(deviceCode);
+ const model = getModel({
+   provider: 'openai',
+   model: 'gpt-5.5',
+   openAiChatGptAuth,
+ });
+ ```
+
+ Use `refreshOpenAiChatGptTokenData(openAiChatGptAuth)` before stored credentials expire, or after receiving an unauthorized response.
+
+ Node.js consumers can inspect and resolve local ChatGPT auth files through the Node-only subpath. This supports SmartAI's canonical auth file, OpenCode's `~/.local/share/opencode/auth.json`, and Codex's `~/.codex/auth.json` without exposing token values in inspection results.
+
+ ```typescript
+ import {
+   inspectOpenAiChatGptAuthSources,
+   resolveOpenAiChatGptAuth,
+ } from '@push.rocks/smartai/openai-chatgpt-auth';
+
+ const sources = await inspectOpenAiChatGptAuthSources({
+   sources: ['smartai', 'opencode', 'codex'],
+ });
+
+ const resolved = await resolveOpenAiChatGptAuth({
+   sources: ['smartai', 'opencode', 'codex'],
+   refresh: 'ifNeeded',
+   writeBack: {
+     smartai: true,
+     opencode: false,
+     codex: false,
+   },
+ });
+
+ if (resolved) {
+   const model = getModel({
+     provider: 'openai',
+     model: 'gpt-5.5',
+     openAiChatGptAuth: resolved.tokenData,
+   });
+ }
+ ```
+
  ### Re-exported AI SDK Functions
 
  SmartAI re-exports the most commonly used functions from `ai` for convenience:
@@ -250,9 +305,9 @@ console.log(result.text);
  - **Streaming with reasoning** — `doStream()` emits proper `reasoning-start`, `reasoning-delta`, `reasoning-end` parts alongside text.
  - **All Ollama options** — `num_ctx`, `top_k`, `top_p`, `repeat_penalty`, `num_predict`, `stop`, `seed`.
 
- ## 💰 Anthropic Prompt Caching
+ ## 💰 Prompt Caching
 
- When using the Anthropic provider, SmartAI automatically wraps the model with caching middleware that adds `cacheControl: { type: 'ephemeral' }` to the last system message and last user message. This can significantly reduce cost and latency for repeated calls with the same system prompt.
+ When using the Anthropic provider, SmartAI automatically wraps the model with caching middleware. The middleware follows the same breakpoint strategy used by opencode: cache the first two system messages and the two most recent non-system messages. This can significantly reduce cost and latency for repeated agent calls with stable system/tool context.
 
  ```typescript
  // Caching enabled by default
@@ -271,6 +326,17 @@ const modelNoCaching = getModel({
  });
  ```
 
+ Longer Anthropic cache TTL is opt-in:
+
+ ```typescript
+ const modelWithOneHourCache = getModel({
+   provider: 'anthropic',
+   model: 'claude-sonnet-4-5-20250929',
+   apiKey: process.env.ANTHROPIC_TOKEN,
+   promptCaching: { retention: '1h' },
+ });
+ ```
+
  You can also use the middleware directly:
 
  ```typescript
@@ -281,6 +347,23 @@ const middleware = createAnthropicCachingMiddleware();
  const cachedModel = wrapLanguageModel({ model: baseModel, middleware });
  ```
 
+ For agent frameworks, SmartAI exports lower-level helpers:
+
+ ```typescript
+ import {
+   applySmartAiCacheProviderOptions,
+   createSmartAiCachingMiddleware,
+ } from '@push.rocks/smartai';
+
+ const providerOptions = applySmartAiCacheProviderOptions({
+   provider: 'openai',
+   sessionId: 'stable-session-id',
+   cache: 'auto',
+ });
+ ```
+
+ OpenAI request-level cache affinity is only added when a stable `sessionId` or explicit cache `key` is provided. Extended OpenAI retention (`'24h'`) is opt-in.
+
  ## 📦 Subpath Exports
 
  SmartAI provides specialized capabilities as separate subpath imports. Each one is a focused utility that takes a model (or API key) and does one thing well.
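The breakpoint strategy described in the prompt-caching section above can be sketched as a standalone function. This is an illustrative re-implementation for clarity, not the package's actual middleware code:

```typescript
type Msg = { role: 'system' | 'user' | 'assistant'; content: string };

// Pick cache-breakpoint indices: the first two system messages plus the
// two most recent non-system messages (sketch of the documented strategy).
function cacheBreakpoints(messages: Msg[]): number[] {
  const systemIdx = messages
    .map((m, i) => (m.role === 'system' ? i : -1))
    .filter((i) => i >= 0)
    .slice(0, 2);
  const nonSystemIdx = messages
    .map((m, i) => (m.role !== 'system' ? i : -1))
    .filter((i) => i >= 0)
    .slice(-2);
  return [...new Set([...systemIdx, ...nonSystemIdx])].sort((a, b) => a - b);
}

const convo: Msg[] = [
  { role: 'system', content: 'tool definitions' },
  { role: 'system', content: 'persona' },
  { role: 'user', content: 'first question' },
  { role: 'assistant', content: 'first answer' },
  { role: 'user', content: 'follow-up' },
];
console.log(cacheBreakpoints(convo)); // [0, 1, 3, 4]
```

Because the breakpoints track the conversation tail, each new turn reuses the cached system/tool prefix while moving the trailing breakpoints forward.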
@@ -3,6 +3,6 @@
  */
  export const commitinfo = {
    name: '@push.rocks/smartai',
-   version: '2.2.0',
+   version: '4.0.0',
    description: 'Provider registry and capability utilities for ai-sdk (Vercel AI SDK). Core export returns LanguageModel; subpath exports provide vision, audio, image, document and research capabilities.'
  }
package/ts/index.ts CHANGED
@@ -1,6 +1,13 @@
  export { getModel, getModelSetup } from './smartai.classes.smartai.js';
  export type {
    IOpenAiProviderOptions,
+   IOpenAiChatGptAuthCredentials,
+   IOpenAiChatGptAuthOptions,
+   IOpenAiChatGptCompleteDeviceCodeOptions,
+   IOpenAiChatGptDeviceCode,
+   IOpenAiChatGptDeviceCodePollOptions,
+   IOpenAiChatGptTokenData,
+   IOpenAiChatGptTokenInfo,
    ISmartAiModelSetup,
    ISmartAiOptions,
    TOpenAiReasoningEffort,
@@ -9,9 +16,41 @@ export type {
    TSmartAiProviderOptions,
    IOllamaModelOptions,
    LanguageModelV3,
+   LanguageModelV3Prompt,
  } from './smartai.interfaces.js';
  export { createAnthropicCachingMiddleware } from './smartai.middleware.anthropic.js';
+ export {
+   applySmartAiCacheProviderOptions,
+   applySmartAiPromptCaching,
+   createSmartAiCachingMiddleware,
+   getSmartAiCacheProviderOptions,
+   getSmartAiMessageCacheProviderOptions,
+   mergeSmartAiProviderOptions,
+   resolveSmartAiCacheProvider,
+ } from './smartai.cache.js';
+ export type {
+   ISmartAiCacheOptions,
+   TSmartAiCacheRetention,
+   TSmartAiCacheSetting,
+   TSmartAiMessageCacheProvider,
+ } from './smartai.cache.js';
  export { createOllamaModel } from './smartai.provider.ollama.js';
+ export {
+   OPENAI_CHATGPT_AUTH_ISSUER,
+   OPENAI_CHATGPT_CLIENT_ID,
+   OPENAI_CHATGPT_CODEX_BASE_URL,
+   OPENAI_CHATGPT_DEFAULT_ORIGINATOR,
+   OpenAiChatGptAuthError,
+   completeOpenAiChatGptDeviceCodeLogin,
+   createOpenAiChatGptProviderSettings,
+   ensureOpenAiChatGptWorkspaceAllowed,
+   exchangeOpenAiChatGptAuthorizationCode,
+   parseOpenAiChatGptTokenInfo,
+   pollOpenAiChatGptDeviceCode,
+   refreshOpenAiChatGptTokenData,
+   requestOpenAiChatGptDeviceCode,
+ } from './smartai.auth.openai.js';
+ export type { IOpenAiChatGptAuthorizationCode } from './smartai.auth.openai.js';
 
  // Re-export commonly used ai-sdk functions for consumer convenience
  export { generateText, streamText, tool, jsonSchema } from 'ai';
@@ -0,0 +1,312 @@
+ import type {
+   IOpenAiChatGptAuthCredentials,
+   IOpenAiChatGptAuthOptions,
+   IOpenAiChatGptCompleteDeviceCodeOptions,
+   IOpenAiChatGptDeviceCode,
+   IOpenAiChatGptDeviceCodePollOptions,
+   IOpenAiChatGptTokenData,
+   IOpenAiChatGptTokenInfo,
+ } from './smartai.interfaces.js';
+
+ export const OPENAI_CHATGPT_AUTH_ISSUER = 'https://auth.openai.com';
+ export const OPENAI_CHATGPT_CLIENT_ID = 'app_EMoamEEZ73f0CkXaXp7hrann';
+ export const OPENAI_CHATGPT_CODEX_BASE_URL = 'https://chatgpt.com/backend-api/codex';
+ export const OPENAI_CHATGPT_DEFAULT_ORIGINATOR = 'smartai';
+
+ const DEVICE_CODE_TIMEOUT_MS = 15 * 60 * 1000;
+
+ export class OpenAiChatGptAuthError extends Error {
+   public status?: number;
+   public body?: string;
+
+   constructor(message: string, options: { status?: number; body?: string } = {}) {
+     super(message);
+     this.name = 'OpenAiChatGptAuthError';
+     this.status = options.status;
+     this.body = options.body;
+   }
+ }
+
+ export interface IOpenAiChatGptAuthorizationCode {
+   authorizationCode: string;
+   codeChallenge: string;
+   codeVerifier: string;
+ }
+
+ interface IOpenAiChatGptTokenResponse {
+   id_token?: unknown;
+   access_token?: unknown;
+   refresh_token?: unknown;
+ }
+
+ function getFetch(options: IOpenAiChatGptAuthOptions): typeof fetch {
+   const fetchFunction = options.fetch ?? globalThis.fetch;
+   if (!fetchFunction) {
+     throw new OpenAiChatGptAuthError('fetch is not available for OpenAI ChatGPT authentication.');
+   }
+   return fetchFunction;
+ }
+
+ function getIssuer(options: IOpenAiChatGptAuthOptions): string {
+   return (options.issuer ?? OPENAI_CHATGPT_AUTH_ISSUER).replace(/\/+$/, '');
+ }
+
+ function getClientId(options: IOpenAiChatGptAuthOptions): string {
+   return options.clientId ?? OPENAI_CHATGPT_CLIENT_ID;
+ }
+
+ function asString(value: unknown, name: string): string {
+   if (typeof value !== 'string' || value.length === 0) {
+     throw new OpenAiChatGptAuthError(`OpenAI ChatGPT auth response is missing ${name}.`);
+   }
+   return value;
+ }
+
+ function asOptionalString(value: unknown): string | undefined {
+   return typeof value === 'string' && value.length > 0 ? value : undefined;
+ }
+
+ function asIntervalSeconds(value: unknown): number {
+   const interval = typeof value === 'number' ? value : Number.parseInt(String(value ?? ''), 10);
+   if (!Number.isFinite(interval) || interval <= 0) {
+     throw new OpenAiChatGptAuthError('OpenAI ChatGPT device-code response has an invalid interval.');
+   }
+   return interval;
+ }
+
+ async function readJson(response: Response, context: string): Promise<unknown> {
+   const body = await response.text();
+   if (!response.ok) {
+     throw new OpenAiChatGptAuthError(`${context} failed with status ${response.status}.`, {
+       status: response.status,
+       body,
+     });
+   }
+
+   try {
+     return body ? JSON.parse(body) : {};
+   } catch (error) {
+     throw new OpenAiChatGptAuthError(`${context} returned invalid JSON: ${(error as Error).message}`, {
+       status: response.status,
+       body,
+     });
+   }
+ }
+
+ async function postJson(url: string, body: unknown, options: IOpenAiChatGptAuthOptions): Promise<unknown> {
+   const response = await getFetch(options)(url, {
+     method: 'POST',
+     headers: { 'Content-Type': 'application/json' },
+     body: JSON.stringify(body),
+   });
+   return readJson(response, `POST ${url}`);
+ }
+
+ async function postForm(url: string, body: URLSearchParams, options: IOpenAiChatGptAuthOptions): Promise<unknown> {
+   const response = await getFetch(options)(url, {
+     method: 'POST',
+     headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
+     body: body.toString(),
+   });
+   return readJson(response, `POST ${url}`);
+ }
+
+ function sleep(ms: number): Promise<void> {
+   return new Promise((resolve) => setTimeout(resolve, ms));
+ }
+
+ function parseJwtPayload(jwt: string): Record<string, unknown> {
+   const parts = jwt.split('.');
+   if (parts.length !== 3 || !parts[1]) {
+     throw new OpenAiChatGptAuthError('OpenAI ChatGPT auth returned an invalid token.');
+   }
+
+   try {
+     return JSON.parse(Buffer.from(parts[1], 'base64url').toString('utf8')) as Record<string, unknown>;
+   } catch (error) {
+     throw new OpenAiChatGptAuthError(`OpenAI ChatGPT token could not be parsed: ${(error as Error).message}`);
+   }
+ }
+
+ export function parseOpenAiChatGptTokenInfo(token: string): IOpenAiChatGptTokenInfo {
+   const claims = parseJwtPayload(token);
+   const profile = claims['https://api.openai.com/profile'] as Record<string, unknown> | undefined;
+   const auth = claims['https://api.openai.com/auth'] as Record<string, unknown> | undefined;
+   const expiresAtSeconds = typeof claims.exp === 'number' ? claims.exp : undefined;
+
+   return {
+     email: asOptionalString(claims.email) ?? asOptionalString(profile?.email),
+     chatgptPlanType: asOptionalString(auth?.chatgpt_plan_type),
+     chatgptUserId: asOptionalString(auth?.chatgpt_user_id) ?? asOptionalString(auth?.user_id),
+     chatgptAccountId: asOptionalString(auth?.chatgpt_account_id),
+     chatgptAccountIsFedramp: auth?.chatgpt_account_is_fedramp === true,
+     expiresAt: expiresAtSeconds ? new Date(expiresAtSeconds * 1000).toISOString() : undefined,
+     rawJwt: token,
+   };
+ }
+
+ function createTokenData(
+   response: IOpenAiChatGptTokenResponse,
+   existingTokenData?: IOpenAiChatGptTokenData,
+ ): IOpenAiChatGptTokenData {
+   const accessToken = asOptionalString(response.access_token) ?? existingTokenData?.accessToken;
+   const refreshToken = asOptionalString(response.refresh_token) ?? existingTokenData?.refreshToken;
+   const idToken = asOptionalString(response.id_token) ?? existingTokenData?.idToken;
+   if (!accessToken) {
+     throw new OpenAiChatGptAuthError('OpenAI ChatGPT auth response is missing access_token.');
+   }
+   if (!refreshToken) {
+     throw new OpenAiChatGptAuthError('OpenAI ChatGPT auth response is missing refresh_token.');
+   }
+   const tokenInfo = parseOpenAiChatGptTokenInfo(idToken ?? accessToken);
+
+   return {
+     accessToken,
+     refreshToken,
+     idToken,
+     accountId: tokenInfo.chatgptAccountId,
+     tokenInfo,
+   };
+ }
+
+ export async function requestOpenAiChatGptDeviceCode(
+   options: IOpenAiChatGptAuthOptions = {},
+ ): Promise<IOpenAiChatGptDeviceCode> {
+   const issuer = getIssuer(options);
+   const response = await postJson(`${issuer}/api/accounts/deviceauth/usercode`, {
+     client_id: getClientId(options),
+   }, options) as Record<string, unknown>;
+
+   return {
+     verificationUrl: `${issuer}/codex/device`,
+     userCode: asString(response.user_code ?? response.usercode, 'user_code'),
+     deviceAuthId: asString(response.device_auth_id, 'device_auth_id'),
+     intervalSeconds: asIntervalSeconds(response.interval),
+   };
+ }
+
+ export async function pollOpenAiChatGptDeviceCode(
+   deviceCode: IOpenAiChatGptDeviceCode,
+   options: IOpenAiChatGptDeviceCodePollOptions = {},
+ ): Promise<IOpenAiChatGptAuthorizationCode> {
+   const issuer = getIssuer(options);
+   const pollUrl = `${issuer}/api/accounts/deviceauth/token`;
+   const timeoutMs = options.timeoutMs ?? DEVICE_CODE_TIMEOUT_MS;
+   const sleepFunction = options.sleep ?? sleep;
+   const startedAt = Date.now();
+
+   while (Date.now() - startedAt < timeoutMs) {
+     const response = await getFetch(options)(pollUrl, {
+       method: 'POST',
+       headers: { 'Content-Type': 'application/json' },
+       body: JSON.stringify({
+         device_auth_id: deviceCode.deviceAuthId,
+         user_code: deviceCode.userCode,
+       }),
+     });
+
+     if (response.ok) {
+       const body = await readJson(response, `POST ${pollUrl}`) as Record<string, unknown>;
+       return {
+         authorizationCode: asString(body.authorization_code, 'authorization_code'),
+         codeChallenge: asString(body.code_challenge, 'code_challenge'),
+         codeVerifier: asString(body.code_verifier, 'code_verifier'),
+       };
+     }
+
+     if (response.status !== 403 && response.status !== 404) {
+       const body = await response.text();
+       throw new OpenAiChatGptAuthError(`OpenAI ChatGPT device-code polling failed with status ${response.status}.`, {
+         status: response.status,
+         body,
+       });
+     }
+
+     await response.arrayBuffer().catch(() => undefined);
+     const remaining = timeoutMs - (Date.now() - startedAt);
+     await sleepFunction(Math.min(deviceCode.intervalSeconds * 1000, Math.max(remaining, 0)));
+   }
+
+   throw new OpenAiChatGptAuthError('OpenAI ChatGPT device-code login timed out.');
+ }
+
+ export async function exchangeOpenAiChatGptAuthorizationCode(
+   authorizationCode: IOpenAiChatGptAuthorizationCode,
+   options: IOpenAiChatGptAuthOptions = {},
+ ): Promise<IOpenAiChatGptTokenData> {
+   const issuer = getIssuer(options);
+   const response = await postForm(`${issuer}/oauth/token`, new URLSearchParams({
+     grant_type: 'authorization_code',
+     code: authorizationCode.authorizationCode,
+     redirect_uri: `${issuer}/deviceauth/callback`,
+     client_id: getClientId(options),
+     code_verifier: authorizationCode.codeVerifier,
+   }), options) as IOpenAiChatGptTokenResponse;
+
+   return createTokenData(response);
+ }
+
+ export function ensureOpenAiChatGptWorkspaceAllowed(
+   tokenData: IOpenAiChatGptTokenData,
+   forcedChatGptWorkspaceId?: string,
+ ): void {
+   if (!forcedChatGptWorkspaceId) {
+     return;
+   }
+   if (tokenData.tokenInfo.chatgptAccountId !== forcedChatGptWorkspaceId) {
+     throw new OpenAiChatGptAuthError(`OpenAI ChatGPT login is restricted to workspace ${forcedChatGptWorkspaceId}.`);
+   }
+ }
+
+ export async function completeOpenAiChatGptDeviceCodeLogin(
+   deviceCode: IOpenAiChatGptDeviceCode,
+   options: IOpenAiChatGptCompleteDeviceCodeOptions = {},
+ ): Promise<IOpenAiChatGptTokenData> {
+   const authorizationCode = await pollOpenAiChatGptDeviceCode(deviceCode, options);
+   const tokenData = await exchangeOpenAiChatGptAuthorizationCode(authorizationCode, options);
+   ensureOpenAiChatGptWorkspaceAllowed(tokenData, options.forcedChatGptWorkspaceId);
+   return tokenData;
+ }
+
+ export async function refreshOpenAiChatGptTokenData(
+   tokenData: IOpenAiChatGptTokenData,
+   options: IOpenAiChatGptAuthOptions = {},
+ ): Promise<IOpenAiChatGptTokenData> {
+   const issuer = getIssuer(options);
+   const response = await postJson(`${issuer}/oauth/token`, {
+     client_id: getClientId(options),
+     grant_type: 'refresh_token',
+     refresh_token: tokenData.refreshToken,
+   }, options) as IOpenAiChatGptTokenResponse;
+
+   return createTokenData({
+     id_token: response.id_token ?? tokenData.idToken,
+     access_token: response.access_token ?? tokenData.accessToken,
+     refresh_token: response.refresh_token ?? tokenData.refreshToken,
+   }, tokenData);
+ }
+
+ export function createOpenAiChatGptProviderSettings(credentials: IOpenAiChatGptAuthCredentials): {
+   apiKey: string;
+   baseURL: string;
+   headers: Record<string, string>;
+ } {
+   const accountId = credentials.accountId ?? credentials.tokenInfo?.chatgptAccountId;
+   const isFedrampAccount = credentials.tokenInfo?.chatgptAccountIsFedramp === true;
+   const headers: Record<string, string> = {
+     originator: credentials.originator ?? OPENAI_CHATGPT_DEFAULT_ORIGINATOR,
+   };
+
+   if (accountId) {
+     headers['ChatGPT-Account-ID'] = accountId;
+   }
+   if (isFedrampAccount) {
+     headers['X-OpenAI-Fedramp'] = 'true';
+   }
+
+   return {
+     apiKey: credentials.accessToken,
+     baseURL: credentials.baseUrl ?? OPENAI_CHATGPT_CODEX_BASE_URL,
+     headers,
+   };
+ }
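
The token-claim decoding used by `parseOpenAiChatGptTokenInfo` above can be exercised without the package by hand-building a JWT whose payload carries the same claims. A minimal standalone sketch (Node `Buffer`; all claim values below are made up):

```typescript
// Build a fake three-part JWT carrying the claims the parser reads,
// then decode the payload the same way parseJwtPayload does above.
const b64url = (value: object): string =>
  Buffer.from(JSON.stringify(value)).toString('base64url');

const payload = {
  email: 'dev@example.com',
  exp: 1700000000,
  'https://api.openai.com/auth': {
    chatgpt_plan_type: 'pro',
    chatgpt_account_id: 'acct_123',
  },
};
const fakeJwt = `${b64url({ alg: 'none' })}.${b64url(payload)}.sig`;

// Decode the middle segment (the payload) back into claims.
const claims = JSON.parse(
  Buffer.from(fakeJwt.split('.')[1], 'base64url').toString('utf8'),
) as typeof payload;
const auth = claims['https://api.openai.com/auth'];

console.log(claims.email);                              // dev@example.com
console.log(auth.chatgpt_account_id);                   // acct_123
console.log(new Date(claims.exp * 1000).toISOString()); // 2023-11-14T22:13:20.000Z
```

The same round trip is what lets the module derive `accountId` and `expiresAt` locally, without an extra network call to an introspection endpoint.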