copilot-api-plus 1.0.40 → 1.0.42

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,7 +1,5 @@
  # Copilot API Plus

- > **Fork of [ericc-ch/copilot-api](https://github.com/ericc-ch/copilot-api)** with bug fixes and improvements.
-
  Converts GitHub Copilot, OpenCode Zen, Google Antigravity, and other AI services into **OpenAI**- and **Anthropic**-compatible APIs, for seamless integration with tools such as [Claude Code](https://docs.anthropic.com/en/docs/claude-code/overview) and [opencode](https://github.com/sst/opencode).

  ---
@@ -18,7 +16,9 @@
  - [Claude Code Integration](#-claude-code-集成)
  - [opencode Integration](#-opencode-集成)
  - [API Endpoints](#-api-端点)
- - [Command-Line Reference](#-命令行参考)
+ - [API Key Authentication](#-api-key-认证)
+ - [Technical Details](#-技术细节)
+ - [Command-Line Reference](#️-命令行参考)
  - [Docker Deployment](#-docker-部署)
  - [FAQ](#-常见问题)

@@ -36,6 +36,10 @@
  | ⚡ **Rate Limiting** | Built-in request throttling to avoid tripping upstream limits |
  | 🌐 **Proxy Support** | HTTP/HTTPS proxies, with persisted configuration |
  | 🐳 **Docker Support** | Complete Docker deployment setup |
+ | 🔑 **API Key Authentication** | Optional API key auth to protect publicly exposed deployments |
+ | ✂️ **Smart Context Compression** | Automatically truncates prompts that exceed the model's token limit, keeping system messages and the most recent turns |
+ | 🔍 **Smart Model Matching** | Handles model-name format differences (date suffixes, dash/dot version numbers, etc.) |
+ | 🔁 **Antigravity Endpoint Failover** | Automatic switching between two endpoints, per-model-family rate-limit tracking, exponential-backoff retries |

  ---

@@ -558,6 +562,7 @@ curl http://localhost:4141/v1/messages \
  | `--github-token` | `-g` | - | Provide a GitHub token directly |
  | `--show-token` | - | false | Show token info |
  | `--proxy-env` | - | false | Read proxy settings from environment variables |
+ | `--api-key` | - | - | API key auth (may be given multiple times) |

  ### proxy Command Options

@@ -604,6 +609,72 @@ npx copilot-api-plus@latest antigravity clear # Clear all accounts

  ---

+ ## 🔑 API Key Authentication
+
+ If you expose the service to the public internet, you can enable API key authentication to protect the endpoints:
+
+ ```bash
+ # Single key
+ npx copilot-api-plus@latest start --api-key my-secret-key
+
+ # Multiple keys
+ npx copilot-api-plus@latest start --api-key key1 --api-key key2
+ ```
+
624
+ Once enabled, every request must carry the API key:
+
+ ```bash
+ # OpenAI format - via the Authorization header
+ curl http://localhost:4141/v1/chat/completions \
+ -H "Authorization: Bearer my-secret-key" \
+ -H "Content-Type: application/json" \
+ -d '{"model": "claude-sonnet-4", "messages": [{"role": "user", "content": "Hello"}]}'
+
+ # Anthropic format - via the x-api-key header
+ curl http://localhost:4141/v1/messages \
+ -H "x-api-key: my-secret-key" \
+ -H "Content-Type: application/json" \
+ -d '{"model": "claude-sonnet-4", "max_tokens": 1024, "messages": [{"role": "user", "content": "Hello"}]}'
+ ```
639
+
+ When using it from Claude Code, just set `ANTHROPIC_AUTH_TOKEN` to your API key.
+
+ ---
+
+ ## 🔧 Technical Details
+
+ ### Smart Context Compression
+
+ When the prompt token count exceeds the model's context-window limit, the proxy layer automatically truncates messages so the upstream API does not return a 400 error:
+
+ - **System/developer messages are kept**: messages with the system and developer roles are always preserved
+ - **Recent turns are kept**: the oldest messages are dropped first, preserving the most recent context
+ - **Tool calls stay grouped**: an assistant message's tool_calls and the corresponding tool result messages form one unit and are never split apart
+ - **5% safety margin**: the effective limit is 95% of the model's context window, to avoid boundary cases
+
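The truncation strategy described in this section can be sketched roughly as follows. This is a minimal illustration, not the package's actual implementation: the `Msg` type, the `truncate` name, and the token-counting callback are all assumptions.

```typescript
// Illustrative sketch only - types and names are hypothetical.
type Msg = { role: string; content: string };

function truncate(
  messages: Msg[],
  countTokens: (msgs: Msg[]) => number,
  contextWindow: number,
): Msg[] {
  const limit = Math.floor(contextWindow * 0.95); // 5% safety margin
  // System/developer messages are always preserved.
  const kept = messages.filter((m) => m.role === "system" || m.role === "developer");
  const rest = messages.filter((m) => m.role !== "system" && m.role !== "developer");
  const tail: Msg[] = [];
  // Walk backwards so the most recent turns survive.
  for (let i = rest.length - 1; i >= 0; ) {
    let start = i;
    // Keep contiguous tool results together with the assistant message that issued them.
    while (start > 0 && rest[start].role === "tool") start--;
    const group = rest.slice(start, i + 1);
    if (countTokens([...kept, ...group, ...tail]) > limit) break;
    tail.unshift(...group);
    i = start - 1;
  }
  return [...kept, ...tail];
}
```

Dropping whole groups from the front means an orphaned tool result can never appear without its originating tool call, which many upstream APIs reject.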
+ ### Smart Model-Name Matching
+
+ Anthropic-style model names (e.g. `claude-opus-4-6`) can differ in format from the IDs in Copilot's model list. The proxy uses multi-strategy exact matching:
+
+ | Strategy | Example |
+ |------|------|
+ | Exact match | `claude-opus-4-6` → `claude-opus-4-6` |
+ | Strip date suffix | `claude-opus-4-6-20251101` → `claude-opus-4-6` |
+ | Dash → Dot | `claude-opus-4-5` → `claude-opus-4.5` |
+ | Dot → Dash | `claude-opus-4.5` → `claude-opus-4-5` |
+
+ For the Anthropic endpoint (`/v1/messages`), the name is first converted via `translateModelName` (including legacy mappings such as `claude-3-5-sonnet` → `claude-sonnet-4.5`), and only then matched with the strategies above.
+
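The table above amounts to generating a small set of exact candidate spellings and checking each against the model list. A hedged sketch, where `candidates`, `matchModel`, and the regexes are illustrative assumptions rather than the package's real `findModel` code:

```typescript
// Illustrative sketch of multi-strategy exact matching; names and regexes are assumptions.
function candidates(name: string): string[] {
  const noDate = name.replace(/-\d{8}$/, "");                   // strip trailing date suffix
  const dashToDot = noDate.replace(/-(\d+)-(\d+)$/, "-$1.$2");  // claude-opus-4-5 -> claude-opus-4.5
  const dotToDash = noDate.replace(/-(\d+)\.(\d+)$/, "-$1-$2"); // claude-opus-4.5 -> claude-opus-4-5
  return [...new Set([name, noDate, dashToDot, dotToDash])];
}

function matchModel(requested: string, available: string[]): string | undefined {
  for (const c of candidates(requested)) {
    if (available.includes(c)) return c; // exact variants only, no fuzzy family fallback
  }
  return undefined;
}
```

Note that every candidate is still an exact string comparison; there is no "closest model in the same family" guessing.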
+ ### Antigravity Endpoint Failover
+
+ Google Antigravity mode has built-in reliability safeguards:
+
+ - **Automatic dual-endpoint switching**: two endpoints (daily sandbox and production); if one fails, requests automatically switch to the other
+ - **Per-model-family rate tracking**: rate-limit state is tracked separately for the Gemini and Claude model families
+ - **Exponential-backoff retries**: throttling errors such as 429/503 are retried with backoff; short waits stay on the same endpoint, long waits switch endpoints
+
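The retry policy above can be pictured like this. This is an illustrative sketch under assumed names; the real endpoint URLs, attempt limits, switch threshold, and error shapes in the package may differ.

```typescript
// Illustrative sketch - endpoint list, thresholds, and error shape are assumptions.
async function withFailover<T>(
  endpoints: string[],
  call: (endpoint: string) => Promise<T>,
  baseDelayMs = 1000,
): Promise<T> {
  let ep = 0;
  for (let attempt = 0; ; attempt++) {
    try {
      return await call(endpoints[ep]);
    } catch (err) {
      const status = (err as { status?: number }).status;
      // Only retry throttling-style errors, and give up after a few attempts.
      if (attempt >= 5 || (status !== 429 && status !== 503)) throw err;
      const delayMs = Math.min(baseDelayMs * 2 ** attempt, 30_000); // exponential backoff
      // Short waits retry the same endpoint; long waits switch to the other one.
      if (delayMs > 4 * baseDelayMs) ep = (ep + 1) % endpoints.length;
      await new Promise((r) => setTimeout(r, delayMs));
    }
  }
}
```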
+ ---
+
  ## 🐳 Docker Deployment

  ### Quick Start
@@ -15,7 +15,7 @@ function getZenAuthPath() {
  async function saveZenAuth(auth) {
  await ensurePaths();
  const authPath = getZenAuthPath();
- await (await import("node:fs/promises")).writeFile(authPath, JSON.stringify(auth, null, 2), "utf-8");
+ await (await import("node:fs/promises")).writeFile(authPath, JSON.stringify(auth, null, 2), "utf8");
  consola.success("Zen API key saved to", authPath);
  }
  /**
@@ -30,7 +30,7 @@ async function loadZenAuth() {
  } catch {
  return null;
  }
- const content = await fs.readFile(authPath, "utf-8");
+ const content = await fs.readFile(authPath);
  return JSON.parse(content);
  } catch {
  return null;
@@ -74,4 +74,4 @@ async function setupZenApiKey(force = false) {

  //#endregion
  export { clearZenAuth, getZenAuthPath, loadZenAuth, saveZenAuth, setupZenApiKey };
- //# sourceMappingURL=auth-T55-Bhoo.js.map
+ //# sourceMappingURL=auth-BrdL89xk.js.map
@@ -0,0 +1 @@
+ {"version":3,"file":"auth-BrdL89xk.js","names":[],"sources":["../src/services/zen/auth.ts"],"sourcesContent":["/**\n * OpenCode Zen Authentication\n *\n * Handles API key authentication for OpenCode Zen.\n * API keys are created at https://opencode.ai/zen\n */\n\nimport consola from \"consola\"\n\nimport { PATHS, ensurePaths } from \"~/lib/paths\"\n\nexport interface ZenAuth {\n apiKey: string\n}\n\nconst ZEN_AUTH_FILENAME = \"zen-auth.json\"\n\n/**\n * Get the path to the Zen auth file\n */\nexport function getZenAuthPath(): string {\n return `${PATHS.DATA_DIR}/${ZEN_AUTH_FILENAME}`\n}\n\n/**\n * Save Zen API key to file\n */\nexport async function saveZenAuth(auth: ZenAuth): Promise<void> {\n await ensurePaths()\n const authPath = getZenAuthPath()\n const fs = await import(\"node:fs/promises\")\n await fs.writeFile(authPath, JSON.stringify(auth, null, 2), \"utf8\")\n consola.success(\"Zen API key saved to\", authPath)\n}\n\n/**\n * Load Zen API key from file\n */\nexport async function loadZenAuth(): Promise<ZenAuth | null> {\n try {\n const authPath = getZenAuthPath()\n const fs = await import(\"node:fs/promises\")\n\n // Check if file exists\n try {\n await fs.access(authPath)\n } catch {\n return null\n }\n\n const content = await fs.readFile(authPath)\n return JSON.parse(content) as ZenAuth\n } catch {\n return null\n }\n}\n\n/**\n * Clear Zen API key\n */\nexport async function clearZenAuth(): Promise<void> {\n try {\n const authPath = getZenAuthPath()\n const fs = await import(\"node:fs/promises\")\n await fs.unlink(authPath)\n consola.success(\"Zen API key cleared\")\n } catch {\n // File might not exist, ignore\n }\n}\n\n/**\n * Setup Zen API key interactively\n */\nexport async function setupZenApiKey(force = false): Promise<string> {\n const existingAuth = await loadZenAuth()\n\n if (existingAuth && !force) {\n consola.info(\"Using existing Zen API key\")\n return existingAuth.apiKey\n }\n\n consola.info(\"OpenCode Zen gives you access to all the best 
coding models\")\n consola.info(\"Get your API key at: https://opencode.ai/zen\")\n consola.info(\"\")\n\n const apiKey = await consola.prompt(\"Enter your OpenCode Zen API key:\", {\n type: \"text\",\n })\n\n if (!apiKey || typeof apiKey !== \"string\") {\n throw new Error(\"API key is required\")\n }\n\n // Validate the API key by fetching models\n try {\n const response = await fetch(\"https://opencode.ai/zen/v1/models\", {\n headers: {\n Authorization: `Bearer ${apiKey}`,\n },\n })\n\n if (!response.ok) {\n throw new Error(\n `Invalid API key: ${response.status} ${response.statusText}`,\n )\n }\n\n consola.success(\"API key validated successfully\")\n } catch (error) {\n consola.error(\"Failed to validate API key:\", error)\n throw error\n }\n\n await saveZenAuth({ apiKey })\n return apiKey\n}\n"],"mappings":";;;;AAeA,MAAM,oBAAoB;;;;AAK1B,SAAgB,iBAAyB;AACvC,QAAO,GAAG,MAAM,SAAS,GAAG;;;;;AAM9B,eAAsB,YAAY,MAA8B;AAC9D,OAAM,aAAa;CACnB,MAAM,WAAW,gBAAgB;AAEjC,QADW,MAAM,OAAO,qBACf,UAAU,UAAU,KAAK,UAAU,MAAM,MAAM,EAAE,EAAE,OAAO;AACnE,SAAQ,QAAQ,wBAAwB,SAAS;;;;;AAMnD,eAAsB,cAAuC;AAC3D,KAAI;EACF,MAAM,WAAW,gBAAgB;EACjC,MAAM,KAAK,MAAM,OAAO;AAGxB,MAAI;AACF,SAAM,GAAG,OAAO,SAAS;UACnB;AACN,UAAO;;EAGT,MAAM,UAAU,MAAM,GAAG,SAAS,SAAS;AAC3C,SAAO,KAAK,MAAM,QAAQ;SACpB;AACN,SAAO;;;;;;AAOX,eAAsB,eAA8B;AAClD,KAAI;EACF,MAAM,WAAW,gBAAgB;AAEjC,SADW,MAAM,OAAO,qBACf,OAAO,SAAS;AACzB,UAAQ,QAAQ,sBAAsB;SAChC;;;;;AAQV,eAAsB,eAAe,QAAQ,OAAwB;CACnE,MAAM,eAAe,MAAM,aAAa;AAExC,KAAI,gBAAgB,CAAC,OAAO;AAC1B,UAAQ,KAAK,6BAA6B;AAC1C,SAAO,aAAa;;AAGtB,SAAQ,KAAK,8DAA8D;AAC3E,SAAQ,KAAK,+CAA+C;AAC5D,SAAQ,KAAK,GAAG;CAEhB,MAAM,SAAS,MAAM,QAAQ,OAAO,oCAAoC,EACtE,MAAM,QACP,CAAC;AAEF,KAAI,CAAC,UAAU,OAAO,WAAW,SAC/B,OAAM,IAAI,MAAM,sBAAsB;AAIxC,KAAI;EACF,MAAM,WAAW,MAAM,MAAM,qCAAqC,EAChE,SAAS,EACP,eAAe,UAAU,UAC1B,EACF,CAAC;AAEF,MAAI,CAAC,SAAS,GACZ,OAAM,IAAI,MACR,oBAAoB,SAAS,OAAO,GAAG,SAAS,aACjD;AAGH,UAAQ,QAAQ,iCAAiC;UAC1C,OAAO;AACd,UAAQ,MAAM,+BAA+B,MAAM;AACnD,QAAM;;AAGR,OAAM,YAAY,EAAE,QAAQ,CAAC;AAC7B,QAAO"}
@@ -1,4 +1,4 @@
  import "./paths-CVYLp61D.js";
- import { clearZenAuth, getZenAuthPath, loadZenAuth, saveZenAuth, setupZenApiKey } from "./auth-T55-Bhoo.js";
+ import { clearZenAuth, getZenAuthPath, loadZenAuth, saveZenAuth, setupZenApiKey } from "./auth-BrdL89xk.js";

  export { loadZenAuth, setupZenApiKey };
package/dist/main.js CHANGED
@@ -3,9 +3,9 @@ import { PATHS, ensurePaths } from "./paths-CVYLp61D.js";
  import { state } from "./state-CcLGr8VN.js";
  import { GITHUB_API_BASE_URL, copilotBaseUrl, copilotHeaders, githubHeaders } from "./get-user-BzIEATcF.js";
  import { HTTPError, forwardError } from "./error-CvU5otz-.js";
- import { cacheModels, cacheVSCodeVersion, clearGithubToken, isNullish, setupCopilotToken, setupGitHubToken, sleep } from "./token-ClgudjZm.js";
+ import { cacheModels, cacheVSCodeVersion, clearGithubToken, findModel, isNullish, setupCopilotToken, setupGitHubToken, sleep } from "./token-B777vbx8.js";
  import { clearAntigravityAuth, disableCurrentAccount, getAntigravityAuthPath, getApiKey, getCurrentProjectId, getValidAccessToken, rotateAccount } from "./auth-CWGl6kMf.js";
- import { clearZenAuth, getZenAuthPath } from "./auth-T55-Bhoo.js";
+ import { clearZenAuth, getZenAuthPath } from "./auth-BrdL89xk.js";
  import { getAntigravityModels, getAntigravityUsage, isThinkingModel } from "./get-models-uEbEgq0L.js";
  import { createRequire } from "node:module";
  import { defineCommand, runMain } from "citty";
@@ -1194,6 +1194,15 @@ const apiKeyAuthMiddleware = async (c, next) => {
  //#endregion
  //#region src/lib/model-logger.ts
  /**
+ * Global token usage store for passing usage info from handlers to logger.
+ * Handlers call setTokenUsage() when usage is available,
+ * logger reads and clears it after await next().
+ */
+ let pendingTokenUsage;
+ function setTokenUsage(usage) {
+ pendingTokenUsage = usage;
+ }
+ /**
  * Get timestamp string in format HH:mm:ss
  */
  function getTime() {
@@ -1207,6 +1216,15 @@ function formatDuration(ms) {
  return `${(ms / 1e3).toFixed(1)}s`;
  }
  /**
+ * Format token usage for log output
+ */
+ function formatTokenUsage(usage) {
+ const parts = [`in:${usage.inputTokens}`, `out:${usage.outputTokens}`];
+ if (usage.cacheReadTokens) parts.push(`cache_read:${usage.cacheReadTokens}`);
+ if (usage.cacheCreationTokens) parts.push(`cache_create:${usage.cacheCreationTokens}`);
+ return parts.join(" ");
+ }
+ /**
  * Extract model name from request body
  */
  async function extractModel(c) {
@@ -1221,7 +1239,7 @@ async function extractModel(c) {
  *
  * Output format:
  * [model] HH:mm:ss <-- METHOD /path
- * [model] HH:mm:ss --> METHOD /path STATUS DURATION
+ * [model] HH:mm:ss --> METHOD /path STATUS DURATION [in:N out:N]
  */
  function modelLogger() {
  return async (c, next) => {
@@ -1233,12 +1251,16 @@ function modelLogger() {
  if (method === "POST" && c.req.header("content-type")?.includes("json")) model = await extractModel(c);
  const modelPrefix = model ? `[${model}] ` : "";
  const startTime = getTime();
+ pendingTokenUsage = void 0;
  console.log(`${modelPrefix}${startTime} <-- ${method} ${fullPath}`);
  const start$1 = Date.now();
  await next();
  const duration = Date.now() - start$1;
  const endTime = getTime();
- console.log(`${modelPrefix}${endTime} --> ${method} ${fullPath} ${c.res.status} ${formatDuration(duration)}`);
+ const usage = pendingTokenUsage;
+ pendingTokenUsage = void 0;
+ const usageSuffix = usage ? ` [${formatTokenUsage(usage)}]` : "";
+ console.log(`${modelPrefix}${endTime} --> ${method} ${fullPath} ${c.res.status} ${formatDuration(duration)}${usageSuffix}`);
  };
  }

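The handler-to-logger handoff added in this hunk is a "set before responding, read after `await next()`" pattern. A self-contained sketch with hypothetical types (not the package's exact code):

```typescript
// Minimal sketch of the pending-usage handoff between a handler and logger middleware.
type TokenUsage = { inputTokens: number; outputTokens: number; cacheReadTokens?: number };

let pendingTokenUsage: TokenUsage | undefined;
const setTokenUsage = (usage: TokenUsage) => { pendingTokenUsage = usage; };

function formatTokenUsage(u: TokenUsage): string {
  const parts = [`in:${u.inputTokens}`, `out:${u.outputTokens}`];
  if (u.cacheReadTokens) parts.push(`cache_read:${u.cacheReadTokens}`);
  return parts.join(" ");
}

// Logger middleware: clear the store before the handler runs, read it after it finishes.
async function logged(handler: () => Promise<void>): Promise<string> {
  pendingTokenUsage = undefined;
  await handler(); // the handler may call setTokenUsage()
  const usage = pendingTokenUsage;
  pendingTokenUsage = undefined;
  return usage ? ` [${formatTokenUsage(usage)}]` : "";
}
```

Because requests are processed one at a time per `await next()` frame, a single module-level slot suffices; clearing it both before and after the handler keeps stale usage from leaking into the next log line.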
@@ -2957,9 +2979,12 @@ const createChatCompletions = async (payload) => {
  //#region src/routes/chat-completions/handler.ts
  /**
  * Calculate token count, log it, and auto-truncate if needed.
+ *
+ * Uses multi-strategy exact matching via findModel() to handle
+ * mismatches between requested and available model names.
  */
  async function processPayloadTokens(payload) {
- const selectedModel = state.models?.data.find((model) => model.id === payload.model);
+ const selectedModel = findModel(payload.model);
  if (!selectedModel) {
  consola.warn("No model selected, skipping token count calculation");
  return payload;
@@ -2991,12 +3016,28 @@ async function handleCompletion$1(c) {
  const response = await createChatCompletions(payload);
  if (isNonStreaming$1(response)) {
  consola.debug("Non-streaming response:", JSON.stringify(response));
+ if (response.usage) setTokenUsage({
+ inputTokens: response.usage.prompt_tokens,
+ outputTokens: response.usage.completion_tokens,
+ cacheReadTokens: response.usage.prompt_tokens_details?.cached_tokens
+ });
  return c.json(response);
  }
  consola.debug("Streaming response");
  return streamSSE(c, async (stream) => {
  for await (const chunk of response) {
  consola.debug("Streaming chunk:", JSON.stringify(chunk));
+ try {
+ const sseChunk = chunk;
+ if (sseChunk.data && sseChunk.data !== "[DONE]") {
+ const parsed = JSON.parse(sseChunk.data);
+ if (parsed.usage) setTokenUsage({
+ inputTokens: parsed.usage.prompt_tokens ?? 0,
+ outputTokens: parsed.usage.completion_tokens ?? 0,
+ cacheReadTokens: parsed.usage.prompt_tokens_details?.cached_tokens
+ });
+ }
+ } catch {}
  await stream.writeSSE(chunk);
  }
  });
@@ -3085,14 +3126,6 @@ function translateModelName(model) {
  "claude-3-5-haiku": "claude-haiku-4.5",
  "claude-3-haiku": "claude-haiku-4.5"
  })) if (modelBase.startsWith(oldFormat) && supportedModels.includes(newFormat)) return newFormat;
- let modelFamily = null;
- if (model.includes("opus")) modelFamily = "opus";
- else if (model.includes("sonnet")) modelFamily = "sonnet";
- else if (model.includes("haiku")) modelFamily = "haiku";
- if (modelFamily) {
- const familyModel = supportedModels.find((m) => m.includes(modelFamily));
- if (familyModel) return familyModel;
- }
  return model;
  }
  function translateAnthropicMessagesToOpenAI(anthropicMessages, system) {
@@ -3260,16 +3293,21 @@ function getAnthropicToolUseBlocks(toolCalls) {
  //#endregion
  //#region src/routes/messages/count-tokens-handler.ts
  /**
- * Handles token counting for Anthropic messages
+ * Handles token counting for Anthropic messages.
+ *
+ * Uses multi-strategy model matching:
+ * 1. findModel(translatedName) — translated Copilot name with format variants
+ * 2. findModel(originalName) — original Anthropic name with format variants
  */
  async function handleCountTokens(c) {
  try {
  const anthropicBeta = c.req.header("anthropic-beta");
  const anthropicPayload = await c.req.json();
  const openAIPayload = translateToOpenAI(anthropicPayload);
- const selectedModel = state.models?.data.find((model) => model.id === anthropicPayload.model);
+ const translatedModelName = translateModelName(anthropicPayload.model);
+ const selectedModel = findModel(translatedModelName) ?? findModel(anthropicPayload.model);
  if (!selectedModel) {
- consola.warn("Model not found, returning default token count");
+ consola.warn(`Model not found for "${anthropicPayload.model}" (translated: "${translatedModelName}"), returning default token count`);
  return c.json({ input_tokens: 1 });
  }
  const tokenCount = await getTokenCount(openAIPayload, selectedModel);
@@ -3420,9 +3458,12 @@ function translateChunkToAnthropicEvents(chunk, state$1) {
  //#region src/routes/messages/handler.ts
  /**
  * Auto-truncate OpenAI payload if prompt tokens exceed model limit.
+ *
+ * Uses multi-strategy exact matching via findModel() to handle
+ * mismatches between Anthropic and Copilot model naming conventions.
  */
  async function autoTruncatePayload(payload) {
- const selectedModel = state.models?.data.find((model) => model.id === payload.model);
+ const selectedModel = findModel(payload.model);
  if (!selectedModel) {
  consola.warn("No model selected for Anthropic endpoint, skipping auto-truncation");
  return payload;
@@ -3443,6 +3484,12 @@ async function handleCompletion(c) {
  const response = await createChatCompletions(openAIPayload);
  if (isNonStreaming(response)) {
  const anthropicResponse = translateToAnthropic(response);
+ setTokenUsage({
+ inputTokens: anthropicResponse.usage.input_tokens,
+ outputTokens: anthropicResponse.usage.output_tokens,
+ cacheReadTokens: anthropicResponse.usage.cache_read_input_tokens,
+ cacheCreationTokens: anthropicResponse.usage.cache_creation_input_tokens
+ });
  return c.json(anthropicResponse);
  }
  return streamSSE(c, async (stream) => {
@@ -3457,6 +3504,11 @@ async function handleCompletion(c) {
  if (!rawEvent.data) continue;
  const chunk = JSON.parse(rawEvent.data);
  const events$1 = translateChunkToAnthropicEvents(chunk, streamState);
+ if (chunk.usage) setTokenUsage({
+ inputTokens: chunk.usage.prompt_tokens - (chunk.usage.prompt_tokens_details?.cached_tokens ?? 0),
+ outputTokens: chunk.usage.completion_tokens,
+ cacheReadTokens: chunk.usage.prompt_tokens_details?.cached_tokens
+ });
  for (const event of events$1) await stream.writeSSE({
  event: event.type,
  data: JSON.stringify(event)
@@ -3982,7 +4034,7 @@ async function runServer(options$1) {
  state.zenApiKey = options$1.zenApiKey;
  consola.info("Using provided Zen API key");
  } else {
- const { setupZenApiKey, loadZenAuth } = await import("./auth-CCwbOOQN.js");
+ const { setupZenApiKey, loadZenAuth } = await import("./auth-pRwfByMF.js");
  const existingAuth = await loadZenAuth();
  if (existingAuth) {
  state.zenApiKey = existingAuth.apiKey;
@@ -4002,7 +4054,7 @@ async function runServer(options$1) {
  }
  if (hasApiKey()) {
  consola.info("Using Gemini API Key for authentication (from GEMINI_API_KEY)");
- consola.info("API Key: " + getApiKey$1()?.slice(0, 10) + "...");
+ consola.info(`API Key: ${getApiKey$1()?.slice(0, 10) ?? ""}...`);
  } else {
  const existingAuth = await loadAntigravityAuth();
  if (!existingAuth || existingAuth.accounts.length === 0) {
@@ -4048,7 +4100,7 @@ async function runServer(options$1) {
  const { HTTPError: HTTPError$1 } = await import("./error-CsShqJjE.js");
  if (error instanceof HTTPError$1 && error.response.status === 401) {
  consola.error("Failed to get Copilot token - GitHub token may be invalid or Copilot access revoked");
- const { clearGithubToken: clearGithubToken$1 } = await import("./token-sYqHiHEd.js");
+ const { clearGithubToken: clearGithubToken$1 } = await import("./token-CCg0yU7a.js");
  await clearGithubToken$1();
  consola.info("Please restart to re-authenticate");
  }