@jeffreycao/copilot-api 1.4.2 → 1.4.4

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/LICENSE CHANGED
@@ -1,6 +1,6 @@
  MIT License
 
- Copyright (c) 2025 Erick Christian Purwanto
+ Copyright (c) 2025-, Erick Christian Purwanto, and a number of other contributors
 
  Permission is hereby granted, free of charge, to any person obtaining a copy
  of this software and associated documentation files (the "Software"), to deal
package/README.md CHANGED
@@ -22,7 +22,22 @@
  ---
 
  > [!NOTE]
- > [opencode](https://github.com/sst/opencode) already ships with a built-in GitHub Copilot provider, so you may not need this project for basic usage. This proxy is still useful if you want OpenCode to talk to Copilot through `@ai-sdk/anthropic`, preserve Anthropic Messages semantics for tool use, prefer the native Messages API over plain Chat Completions for Claude-family models, use `gpt-5.4` phase-aware commentary, or fine-tune premium-request usage with small-model fallbacks.
+ > [opencode](https://github.com/sst/opencode) already ships with a built-in GitHub Copilot provider, so you may not need this project for basic usage. This proxy is still useful if you want OpenCode to talk to Copilot through `@ai-sdk/anthropic`, preserve Anthropic Messages semantics for tool use, prefer the native Messages API over the Chat Completions API for Claude-family models, use GPT phase-aware commentary, or optimize premium requests.
+
+ ---
+ ## Important Notes
+
+ > [!IMPORTANT]
+ > **Before using, please be aware of the following:**
+ >
+ > 1. **Claude Code model ID configuration:** When using with Claude Code, configure the model ID as `claude-opus-4-6` or `claude-opus-4.6` (without the `[1m]` suffix; exceeding GitHub Copilot's context window limit by too much may lead to a ban).
+ >
+ > 2. **Recommended for opencode:** When using with opencode, we recommend starting with the opencode OAuth app. This approach behaves identically to opencode's built-in GitHub Copilot provider, with no Terms of Service risk:
+ > ```sh
+ > npx @jeffreycao/copilot-api@latest --oauth-app=opencode start
+ > ```
+ >
+ > 3. **Disable multi-agent when using codex:** If you're using codex via GitHub Copilot, it's recommended to disable the multi-agent feature. When using codex, GitHub Copilot currently charges based on the last message having the `user` role, and the billing logic has not yet been adjusted for this.
 
  ---
 
@@ -36,10 +51,10 @@ Compared with routing everything through plain Chat Completions compatibility, t
 
  - **OpenAI & Anthropic Compatibility**: Exposes GitHub Copilot as an OpenAI-compatible (`/v1/responses`, `/v1/chat/completions`, `/v1/models`, `/v1/embeddings`) and Anthropic-compatible (`/v1/messages`) API.
  - **Anthropic-First Routing for Claude Models**: When a model supports Copilot's native `/v1/messages` endpoint, the proxy prefers it over `/responses` or `/chat/completions`, preserving Anthropic-style `tool_use` / `tool_result` flows and more Claude-native behavior.
- - **Fewer Unnecessary Premium Requests**: Reduces wasted premium usage by routing warmup and compact/background requests to `smallModel`, merging `tool_result` follow-ups back into the tool flow, and treating resumed tool turns as continuation traffic instead of fresh premium interactions.
+ - **Fewer Unnecessary Premium Requests**: Reduces wasted premium usage by routing warmup requests to `smallModel`, merging `tool_result` follow-ups back into the tool flow, and treating resumed tool turns as continuation traffic instead of fresh premium interactions.
  - **Phase-Aware `gpt-5.4` and `gpt-5.3-codex`**: These models can emit user-friendly commentary before deeper reasoning or tool use, so long-running coding actions are easier to understand instead of appearing as a sudden tool burst.
  - **Claude Native Beta Support**: On the Messages API path, supports Anthropic-native capabilities such as `interleaved-thinking`, `advanced-tool-use`, and `context-management`, which are difficult or unavailable through plain Chat Completions compatibility.
- - **Subagent Marker Integration**: Optional Claude Code and opencode plugins can inject `__SUBAGENT_MARKER__...` and propagate `x-session-id` so subagent traffic keeps the correct root session and agent/user semantics.
+ - **Subagent Marker Integration**: Claude Code and opencode plugins can inject `__SUBAGENT_MARKER__...` and propagate `x-session-id` so subagent traffic keeps the correct root session and agent/user semantics.
  - **OpenCode via `@ai-sdk/anthropic`**: Point OpenCode at this proxy as an Anthropic provider so Anthropic Messages semantics, premium-request optimizations, and Claude-native behavior are preserved end to end.
  - **Claude Code Integration**: Easily configure and launch [Claude Code](https://docs.anthropic.com/en/docs/claude-code/overview) to use Copilot as its backend with a simple command-line flag (`--claude-code`).
  - **Usage Dashboard**: A web-based dashboard to monitor your Copilot API usage, view quotas, and see detailed statistics.
@@ -52,6 +67,8 @@ Compared with routing everything through plain Chat Completions compatibility, t
  - **GitHub Enterprise Support**: Connect to GHE.com by setting `COPILOT_API_ENTERPRISE_URL` environment variable (e.g., `company.ghe.com`) or using `--enterprise-url=company.ghe.com` command line option.
  - **Custom Data Directory**: Change the default data directory (where tokens and config are stored) by setting `COPILOT_API_HOME` environment variable or using `--api-home=/path/to/dir` command line option.
  - **Multi-Provider Anthropic Proxy Routes**: Add global provider configs and call external Anthropic-compatible APIs via `/:provider/v1/messages` and `/:provider/v1/models`.
+ - **Accurate Claude Token Counting**: Optionally forward `/v1/messages/count_tokens` requests for Claude models to Anthropic's free token counting endpoint for exact counts instead of GPT tokenizer estimation.
+ - **GPT Context Management**: Configurable context compaction for long-running GPT conversations via `responsesApiContextManagementModels`, reducing unnecessary premium requests when approaching token limits. See [Configuration](#configuration-configjson) for details.
 
  ## Better Agent Semantics
 
@@ -72,7 +89,6 @@ Supported `anthropic-beta` values are filtered and forwarded on the native Messa
  The proxy includes request-accounting safeguards designed for tool-heavy coding workflows:
 
  - tool-less warmup or probe requests can be forced onto `smallModel` so background checks do not spend premium usage;
- - compact/background requests can be downgraded to `smallModel` automatically;
  - mixed `tool_result` + reminder text blocks are merged back into the `tool_result` flow instead of being counted like fresh user turns;
  - `x-initiator` is derived from the latest message or item, not stale assistant history.
 
@@ -90,7 +106,24 @@ For subagent-based clients, this project can preserve root session context and c
  The marker flow uses `__SUBAGENT_MARKER__...` inside a `<system-reminder>` block together with root `x-session-id` propagation. When a marker is detected, the proxy can keep the parent session identity, infer `x-initiator: agent`, and tag the interaction as subagent traffic instead of a fresh top-level request.
 
- Optional marker producers are included for both Claude Code and opencode; see [Subagent Marker Integration](#subagent-marker-integration-optional) below for setup details.
+ Plugin integrations are included for both Claude Code and opencode; see [Plugin Integrations](#plugin-integrations) below for setup details.
+
+ ### Accurate Claude token counting
+
+ By default, `/v1/messages/count_tokens` estimates Claude token counts using the GPT `o200k_base` tokenizer with a 1.15x multiplier. This consistently underestimates actual Claude token usage, which can cause tools like Claude Code to compact too late and hit "prompt token count exceeds limit" errors.
+
+ When an Anthropic API key is configured, the proxy forwards Claude model token counting requests to [Anthropic's real `/v1/messages/count_tokens` endpoint](https://docs.anthropic.com/en/docs/build-with-claude/token-counting) instead. This returns exact counts and eliminates the estimation mismatch. Non-Claude models and failures fall back to the GPT tokenizer estimation automatically.
+
+ **Setup:**
+
+ 1. Create an Anthropic API account at [console.anthropic.com](https://console.anthropic.com) and add a minimum $5 credit balance (required to activate the API key, but the token counting endpoint itself is free)
+ 2. Create an API key from Settings > API Keys
+ 3. Configure the key via **one** of:
+    - `config.json`: set `"anthropicApiKey": "sk-ant-..."`
+    - Environment variable: `ANTHROPIC_API_KEY=sk-ant-...`
+
+ > [!NOTE]
+ > Anthropic's `/v1/messages/count_tokens` endpoint is **free** (no per-token cost). It is rate-limited to 100 RPM at Tier 1. The $5 credit purchase is only needed to activate API access — the token counting calls themselves cost nothing.
 
  ## Demo
 
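The fallback estimation the README describes (GPT `o200k_base` count scaled by a fixed 1.15x multiplier) can be sketched as follows. `estimateClaudeTokens` is an illustrative name, not an export of this package, and the sketch assumes the GPT token count has already been computed:

```javascript
// Hypothetical sketch of the documented fallback: scale a GPT o200k_base
// token count by the fixed 1.15x multiplier and round up. This is an
// approximation, which is exactly why the proxy prefers Anthropic's real
// count_tokens endpoint when an API key is configured.
function estimateClaudeTokens(gptTokenCount, multiplier = 1.15) {
  return Math.ceil(gptTokenCount * multiplier);
}
```

Because this is a uniform multiplier rather than a real Claude tokenization, it can drift from actual usage on prompts whose token densities differ between the two tokenizers.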
@@ -276,7 +309,8 @@ The following command line options are available for the `start` command:
      "gpt-5.4": "xhigh"
    },
    "useFunctionApplyPatch": true,
-   "useMessagesApi": true
+   "useMessagesApi": true,
+   "anthropicApiKey": ""
  }
  ```
  - **auth.apiKeys:** API keys used for request authentication. Supports multiple keys for rotation. Requests can authenticate with either `x-api-key: <key>` or `Authorization: Bearer <key>`. If empty or omitted, authentication is disabled.
@@ -291,10 +325,11 @@ The following command line options are available for the `start` command:
    - `topP` (optional): Default top_p value used when the request does not specify one.
    - `topK` (optional): Default top_k value used when the request does not specify one.
  - **smallModel:** Fallback model used for tool-less warmup messages (e.g., Claude Code probe requests) to avoid spending premium requests; defaults to gpt-5-mini.
- - **responsesApiContextManagementModels:** List of model IDs that should receive Responses API `context_management` compaction instructions. Use this when a model supports server-side context management and you want the proxy to keep only the latest compaction carrier on follow-up turns.
+ - **responsesApiContextManagementModels:** List of GPT model IDs that should receive Responses API `context_management` compaction instructions. This defaults to `[]`, so you need to opt in explicitly. A good starting point is `["gpt-5-mini", "gpt-5.3-codex", "gpt-5.4-mini", "gpt-5.4"]`. When enabled, the request includes `context_management` in the body and keeps only the latest compaction carrier on follow-up turns. The actual compaction is handled server-side and appears to begin when usage approaches roughly 90% of the model's `maxPromptTokens`, which makes it especially useful for long-running tasks without consuming additional premium requests. In practice, the effective `compact_threshold` also appears to be fixed on the server side, so changing it in this project does not currently alter compaction behavior. At the moment, this optimization is intended for GPT-family models only.
  - **modelReasoningEfforts:** Per-model `reasoning.effort` sent to the Copilot Responses API. Allowed values are `none`, `minimal`, `low`, `medium`, `high`, and `xhigh`. If a model isn’t listed, `high` is used by default.
  - **useFunctionApplyPatch:** When `true`, the server will convert any custom tool named `apply_patch` in Responses payloads into an OpenAI-style function tool (`type: "function"`) with a parameter schema so assistants can call it using function-calling semantics to edit files. Set to `false` to leave tools unchanged. Defaults to `true`.
  - **useMessagesApi:** When `true`, Claude-family models that support Copilot's native `/v1/messages` endpoint will use the Messages API; otherwise they fall back to `/chat/completions`. Set to `false` to disable Messages API routing and always use `/chat/completions`. Defaults to `true`.
+ - **anthropicApiKey:** Anthropic API key used for accurate Claude token counting (see [Accurate Claude Token Counting](#accurate-claude-token-counting) above). Can also be set via the `ANTHROPIC_API_KEY` environment variable. If not set, token counting falls back to GPT tokenizer estimation.
 
  Edit this file to customize prompts or swap in your own fast model. Restart the server (or rerun the command) after changes so the cached config is refreshed.
 
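Putting the new options together, an opted-in `config.json` might look like the following illustrative excerpt (only keys documented above are shown; the model list and placeholder key value are examples, not requirements):

```json
{
  "smallModel": "gpt-5-mini",
  "responsesApiContextManagementModels": ["gpt-5-mini", "gpt-5.3-codex", "gpt-5.4-mini", "gpt-5.4"],
  "useMessagesApi": true,
  "anthropicApiKey": "sk-ant-..."
}
```

Remember to restart the server after editing so the cached config is refreshed.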
@@ -577,7 +612,6 @@ Here is an example `.claude/settings.json` file:
      "ANTHROPIC_DEFAULT_HAIKU_MODEL": "gpt-5-mini",
      "DISABLE_NON_ESSENTIAL_MODEL_CALLS": "1",
      "CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC": "1",
-     "BASH_MAX_TIMEOUT_MS": "600000",
      "CLAUDE_CODE_ATTRIBUTION_HEADER": "0",
      "CLAUDE_CODE_ENABLE_PROMPT_SUGGESTION": "false"
    },
@@ -593,13 +627,13 @@ You can find more options here: [Claude Code settings](https://docs.anthropic.co
 
  You can also read more about IDE integration here: [Add Claude Code to your IDE](https://docs.anthropic.com/en/docs/claude-code/ide-integrations)
 
- ## Subagent Marker Integration (Optional)
+ ## Plugin Integrations
 
- This project supports `x-initiator: agent` for subagent-originated requests and can preserve the root session identity with `x-session-id` when a subagent marker is present.
+ Plugin integrations are available for Claude Code and opencode.
 
- #### Claude Code plugin producer (marketplace-based)
+ #### Claude Code plugin integration (marketplace-based)
 
- The marker producer is packaged as a Claude Code plugin named `claude-plugin`.
+ The Claude Code integration is packaged as a plugin named `claude-plugin`.
 
  - Marketplace catalog in this repository: `.claude-plugin/marketplace.json`
  - Plugin source in this repository: `claude-plugin`
@@ -618,9 +652,14 @@ Install the plugin from the marketplace:
 
  After installation, the plugin injects `__SUBAGENT_MARKER__...` on `SubagentStart`, and this proxy uses it to infer `x-initiator: agent`.
 
- #### Opencode plugin producer
+ The plugin also registers a `UserPromptSubmit` hook that returns `{"continue": true}`, and it can inject `SessionStart` reminder rules through environment variables:
+
+ - `CLAUDE_PLUGIN_ENABLE_QUESTION_RULES=1` enables the two `question`-tool reminders.
+ - `CLAUDE_PLUGIN_ENABLE_NO_BACKGROUND_AGENTS_RULE=1` enables the reminder that tells agent hooks to avoid `run_in_background: true`.
+
+ #### Opencode plugin
 
- The marker producer is packaged as an opencode plugin located at `.opencode/plugins/subagent-marker.js`.
+ The subagent marker producer is packaged as an opencode plugin located at `.opencode/plugins/subagent-marker.js`.
 
  **Installation:**
 
@@ -1,6 +1,6 @@
  import { PATHS, ensurePaths } from "./paths-Cla6y5eD.js";
- import { state } from "./utils-BzmtBkw4.js";
- import { setupGitHubToken } from "./token-Ct55tQaP.js";
+ import { state } from "./utils-Dm_whFBz.js";
+ import { setupGitHubToken } from "./token-vhvlcrDZ.js";
  import { defineCommand } from "citty";
  import consola from "consola";
 
@@ -43,4 +43,4 @@ const auth = defineCommand({
 
  //#endregion
  export { auth };
- //# sourceMappingURL=auth-BCT7hxQA.js.map
+ //# sourceMappingURL=auth-CsOZjVQp.js.map
@@ -1 +1 @@
- {"version":3,"file":"auth-BCT7hxQA.js","names":[],"sources":["../src/auth.ts"],"sourcesContent":["#!/usr/bin/env node\n\nimport { defineCommand } from \"citty\"\nimport consola from \"consola\"\n\nimport { PATHS, ensurePaths } from \"./lib/paths\"\nimport { state } from \"./lib/state\"\nimport { setupGitHubToken } from \"./lib/token\"\n\ninterface RunAuthOptions {\n verbose: boolean\n showToken: boolean\n}\n\nexport async function runAuth(options: RunAuthOptions): Promise<void> {\n if (options.verbose) {\n consola.level = 5\n consola.info(\"Verbose logging enabled\")\n }\n\n state.showToken = options.showToken\n\n await ensurePaths()\n await setupGitHubToken({ force: true })\n consola.success(\"GitHub token written to\", PATHS.GITHUB_TOKEN_PATH)\n}\n\nexport const auth = defineCommand({\n meta: {\n name: \"auth\",\n description: \"Run GitHub auth flow without running the server\",\n },\n args: {\n verbose: {\n alias: \"v\",\n type: \"boolean\",\n default: false,\n description: \"Enable verbose logging\",\n },\n \"show-token\": {\n type: \"boolean\",\n default: false,\n description: \"Show GitHub token on auth\",\n },\n },\n run({ args }) {\n return runAuth({\n verbose: args.verbose,\n showToken: args[\"show-token\"],\n })\n },\n})\n"],"mappings":";;;;;;;AAcA,eAAsB,QAAQ,SAAwC;AACpE,KAAI,QAAQ,SAAS;AACnB,UAAQ,QAAQ;AAChB,UAAQ,KAAK,0BAA0B;;AAGzC,OAAM,YAAY,QAAQ;AAE1B,OAAM,aAAa;AACnB,OAAM,iBAAiB,EAAE,OAAO,MAAM,CAAC;AACvC,SAAQ,QAAQ,2BAA2B,MAAM,kBAAkB;;AAGrE,MAAa,OAAO,cAAc;CAChC,MAAM;EACJ,MAAM;EACN,aAAa;EACd;CACD,MAAM;EACJ,SAAS;GACP,OAAO;GACP,MAAM;GACN,SAAS;GACT,aAAa;GACd;EACD,cAAc;GACZ,MAAM;GACN,SAAS;GACT,aAAa;GACd;EACF;CACD,IAAI,EAAE,QAAQ;AACZ,SAAO,QAAQ;GACb,SAAS,KAAK;GACd,WAAW,KAAK;GACjB,CAAC;;CAEL,CAAC"}
+ {"version":3,"file":"auth-CsOZjVQp.js","names":[],"sources":["../src/auth.ts"],"sourcesContent":["#!/usr/bin/env node\n\nimport { defineCommand } from \"citty\"\nimport consola from \"consola\"\n\nimport { PATHS, ensurePaths } from \"./lib/paths\"\nimport { state } from \"./lib/state\"\nimport { setupGitHubToken } from \"./lib/token\"\n\ninterface RunAuthOptions {\n verbose: boolean\n showToken: boolean\n}\n\nexport async function runAuth(options: RunAuthOptions): Promise<void> {\n if (options.verbose) {\n consola.level = 5\n consola.info(\"Verbose logging enabled\")\n }\n\n state.showToken = options.showToken\n\n await ensurePaths()\n await setupGitHubToken({ force: true })\n consola.success(\"GitHub token written to\", PATHS.GITHUB_TOKEN_PATH)\n}\n\nexport const auth = defineCommand({\n meta: {\n name: \"auth\",\n description: \"Run GitHub auth flow without running the server\",\n },\n args: {\n verbose: {\n alias: \"v\",\n type: \"boolean\",\n default: false,\n description: \"Enable verbose logging\",\n },\n \"show-token\": {\n type: \"boolean\",\n default: false,\n description: \"Show GitHub token on auth\",\n },\n },\n run({ args }) {\n return runAuth({\n verbose: args.verbose,\n showToken: args[\"show-token\"],\n })\n },\n})\n"],"mappings":";;;;;;;AAcA,eAAsB,QAAQ,SAAwC;AACpE,KAAI,QAAQ,SAAS;AACnB,UAAQ,QAAQ;AAChB,UAAQ,KAAK,0BAA0B;;AAGzC,OAAM,YAAY,QAAQ;AAE1B,OAAM,aAAa;AACnB,OAAM,iBAAiB,EAAE,OAAO,MAAM,CAAC;AACvC,SAAQ,QAAQ,2BAA2B,MAAM,kBAAkB;;AAGrE,MAAa,OAAO,cAAc;CAChC,MAAM;EACJ,MAAM;EACN,aAAa;EACd;CACD,MAAM;EACJ,SAAS;GACP,OAAO;GACP,MAAM;GACN,SAAS;GACT,aAAa;GACd;EACD,cAAc;GACZ,MAAM;GACN,SAAS;GACT,aAAa;GACd;EACF;CACD,IAAI,EAAE,QAAQ;AACZ,SAAO,QAAQ;GACb,SAAS,KAAK;GACd,WAAW,KAAK;GACjB,CAAC;;CAEL,CAAC"}
@@ -1,7 +1,7 @@
  import { ensurePaths } from "./paths-Cla6y5eD.js";
- import "./utils-BzmtBkw4.js";
- import { setupGitHubToken } from "./token-Ct55tQaP.js";
- import { getCopilotUsage } from "./get-copilot-usage-CnL_6H-N.js";
+ import "./utils-Dm_whFBz.js";
+ import { setupGitHubToken } from "./token-vhvlcrDZ.js";
+ import { getCopilotUsage } from "./get-copilot-usage-DbzBiP2c.js";
  import { defineCommand } from "citty";
  import consola from "consola";
 
@@ -42,4 +42,4 @@ const checkUsage = defineCommand({
 
  //#endregion
  export { checkUsage };
- //# sourceMappingURL=check-usage-YCS0L_nI.js.map
+ //# sourceMappingURL=check-usage-DBchI-i1.js.map
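The `check-usage` source embedded in the sourcemap below derives "used" quota from the snapshot's entitlement and remaining values. The same arithmetic can be restated as a standalone sketch (the `QuotaDetail` shape is taken from that embedded source; this function mirrors, but is not, the bundled helper):

```javascript
// Standalone restatement of the quota-summary arithmetic from the embedded
// check-usage source: used = entitlement - remaining, with a guard against a
// zero entitlement, plus the snapshot's own percent_remaining field.
function summarizeQuota(name, snap) {
  if (!snap) return `${name}: N/A`;
  const total = snap.entitlement;
  const used = total - snap.remaining;
  const percentUsed = total > 0 ? (used / total) * 100 : 0;
  return `${name}: ${used}/${total} used (${percentUsed.toFixed(1)}% used, ${snap.percent_remaining.toFixed(1)}% remaining)`;
}
```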
@@ -1 +1 @@
- {"version":3,"file":"check-usage-YCS0L_nI.js","names":[],"sources":["../src/check-usage.ts"],"sourcesContent":["import { defineCommand } from \"citty\"\nimport consola from \"consola\"\n\nimport { ensurePaths } from \"./lib/paths\"\nimport { setupGitHubToken } from \"./lib/token\"\nimport {\n getCopilotUsage,\n type QuotaDetail,\n} from \"./services/github/get-copilot-usage\"\n\nexport const checkUsage = defineCommand({\n meta: {\n name: \"check-usage\",\n description: \"Show current GitHub Copilot usage/quota information\",\n },\n async run() {\n await ensurePaths()\n await setupGitHubToken()\n try {\n const usage = await getCopilotUsage()\n const premium = usage.quota_snapshots.premium_interactions\n const premiumTotal = premium.entitlement\n const premiumUsed = premiumTotal - premium.remaining\n const premiumPercentUsed =\n premiumTotal > 0 ? (premiumUsed / premiumTotal) * 100 : 0\n const premiumPercentRemaining = premium.percent_remaining\n\n // Helper to summarize a quota snapshot\n function summarizeQuota(name: string, snap: QuotaDetail | undefined) {\n if (!snap) return `${name}: N/A`\n const total = snap.entitlement\n const used = total - snap.remaining\n const percentUsed = total > 0 ? 
(used / total) * 100 : 0\n const percentRemaining = snap.percent_remaining\n return `${name}: ${used}/${total} used (${percentUsed.toFixed(1)}% used, ${percentRemaining.toFixed(1)}% remaining)`\n }\n\n const premiumLine = `Premium: ${premiumUsed}/${premiumTotal} used (${premiumPercentUsed.toFixed(1)}% used, ${premiumPercentRemaining.toFixed(1)}% remaining)`\n const chatLine = summarizeQuota(\"Chat\", usage.quota_snapshots.chat)\n const completionsLine = summarizeQuota(\n \"Completions\",\n usage.quota_snapshots.completions,\n )\n\n consola.box(\n `Copilot Usage (plan: ${usage.copilot_plan})\\n`\n + `Quota resets: ${usage.quota_reset_date}\\n`\n + `\\nQuotas:\\n`\n + ` ${premiumLine}\\n`\n + ` ${chatLine}\\n`\n + ` ${completionsLine}`,\n )\n } catch (err) {\n consola.error(\"Failed to fetch Copilot usage:\", err)\n process.exit(1)\n }\n },\n})\n"],"mappings":";;;;;;;;AAUA,MAAa,aAAa,cAAc;CACtC,MAAM;EACJ,MAAM;EACN,aAAa;EACd;CACD,MAAM,MAAM;AACV,QAAM,aAAa;AACnB,QAAM,kBAAkB;AACxB,MAAI;GACF,MAAM,QAAQ,MAAM,iBAAiB;GACrC,MAAM,UAAU,MAAM,gBAAgB;GACtC,MAAM,eAAe,QAAQ;GAC7B,MAAM,cAAc,eAAe,QAAQ;GAC3C,MAAM,qBACJ,eAAe,IAAK,cAAc,eAAgB,MAAM;GAC1D,MAAM,0BAA0B,QAAQ;GAGxC,SAAS,eAAe,MAAc,MAA+B;AACnE,QAAI,CAAC,KAAM,QAAO,GAAG,KAAK;IAC1B,MAAM,QAAQ,KAAK;IACnB,MAAM,OAAO,QAAQ,KAAK;IAC1B,MAAM,cAAc,QAAQ,IAAK,OAAO,QAAS,MAAM;IACvD,MAAM,mBAAmB,KAAK;AAC9B,WAAO,GAAG,KAAK,IAAI,KAAK,GAAG,MAAM,SAAS,YAAY,QAAQ,EAAE,CAAC,UAAU,iBAAiB,QAAQ,EAAE,CAAC;;GAGzG,MAAM,cAAc,YAAY,YAAY,GAAG,aAAa,SAAS,mBAAmB,QAAQ,EAAE,CAAC,UAAU,wBAAwB,QAAQ,EAAE,CAAC;GAChJ,MAAM,WAAW,eAAe,QAAQ,MAAM,gBAAgB,KAAK;GACnE,MAAM,kBAAkB,eACtB,eACA,MAAM,gBAAgB,YACvB;AAED,WAAQ,IACN,wBAAwB,MAAM,aAAa,mBACtB,MAAM,iBAAiB,iBAEnC,YAAY,MACZ,SAAS,MACT,kBACV;WACM,KAAK;AACZ,WAAQ,MAAM,kCAAkC,IAAI;AACpD,WAAQ,KAAK,EAAE;;;CAGpB,CAAC"}
+ {"version":3,"file":"check-usage-DBchI-i1.js","names":[],"sources":["../src/check-usage.ts"],"sourcesContent":["import { defineCommand } from \"citty\"\nimport consola from \"consola\"\n\nimport { ensurePaths } from \"./lib/paths\"\nimport { setupGitHubToken } from \"./lib/token\"\nimport {\n getCopilotUsage,\n type QuotaDetail,\n} from \"./services/github/get-copilot-usage\"\n\nexport const checkUsage = defineCommand({\n meta: {\n name: \"check-usage\",\n description: \"Show current GitHub Copilot usage/quota information\",\n },\n async run() {\n await ensurePaths()\n await setupGitHubToken()\n try {\n const usage = await getCopilotUsage()\n const premium = usage.quota_snapshots.premium_interactions\n const premiumTotal = premium.entitlement\n const premiumUsed = premiumTotal - premium.remaining\n const premiumPercentUsed =\n premiumTotal > 0 ? (premiumUsed / premiumTotal) * 100 : 0\n const premiumPercentRemaining = premium.percent_remaining\n\n // Helper to summarize a quota snapshot\n function summarizeQuota(name: string, snap: QuotaDetail | undefined) {\n if (!snap) return `${name}: N/A`\n const total = snap.entitlement\n const used = total - snap.remaining\n const percentUsed = total > 0 ? 
(used / total) * 100 : 0\n const percentRemaining = snap.percent_remaining\n return `${name}: ${used}/${total} used (${percentUsed.toFixed(1)}% used, ${percentRemaining.toFixed(1)}% remaining)`\n }\n\n const premiumLine = `Premium: ${premiumUsed}/${premiumTotal} used (${premiumPercentUsed.toFixed(1)}% used, ${premiumPercentRemaining.toFixed(1)}% remaining)`\n const chatLine = summarizeQuota(\"Chat\", usage.quota_snapshots.chat)\n const completionsLine = summarizeQuota(\n \"Completions\",\n usage.quota_snapshots.completions,\n )\n\n consola.box(\n `Copilot Usage (plan: ${usage.copilot_plan})\\n`\n + `Quota resets: ${usage.quota_reset_date}\\n`\n + `\\nQuotas:\\n`\n + ` ${premiumLine}\\n`\n + ` ${chatLine}\\n`\n + ` ${completionsLine}`,\n )\n } catch (err) {\n consola.error(\"Failed to fetch Copilot usage:\", err)\n process.exit(1)\n }\n },\n})\n"],"mappings":";;;;;;;;AAUA,MAAa,aAAa,cAAc;CACtC,MAAM;EACJ,MAAM;EACN,aAAa;EACd;CACD,MAAM,MAAM;AACV,QAAM,aAAa;AACnB,QAAM,kBAAkB;AACxB,MAAI;GACF,MAAM,QAAQ,MAAM,iBAAiB;GACrC,MAAM,UAAU,MAAM,gBAAgB;GACtC,MAAM,eAAe,QAAQ;GAC7B,MAAM,cAAc,eAAe,QAAQ;GAC3C,MAAM,qBACJ,eAAe,IAAK,cAAc,eAAgB,MAAM;GAC1D,MAAM,0BAA0B,QAAQ;GAGxC,SAAS,eAAe,MAAc,MAA+B;AACnE,QAAI,CAAC,KAAM,QAAO,GAAG,KAAK;IAC1B,MAAM,QAAQ,KAAK;IACnB,MAAM,OAAO,QAAQ,KAAK;IAC1B,MAAM,cAAc,QAAQ,IAAK,OAAO,QAAS,MAAM;IACvD,MAAM,mBAAmB,KAAK;AAC9B,WAAO,GAAG,KAAK,IAAI,KAAK,GAAG,MAAM,SAAS,YAAY,QAAQ,EAAE,CAAC,UAAU,iBAAiB,QAAQ,EAAE,CAAC;;GAGzG,MAAM,cAAc,YAAY,YAAY,GAAG,aAAa,SAAS,mBAAmB,QAAQ,EAAE,CAAC,UAAU,wBAAwB,QAAQ,EAAE,CAAC;GAChJ,MAAM,WAAW,eAAe,QAAQ,MAAM,gBAAgB,KAAK;GACnE,MAAM,kBAAkB,eACtB,eACA,MAAM,gBAAgB,YACvB;AAED,WAAQ,IACN,wBAAwB,MAAM,aAAa,mBACtB,MAAM,iBAAiB,iBAEnC,YAAY,MACZ,SAAS,MACT,kBACV;WACM,KAAK;AACZ,WAAQ,MAAM,kCAAkC,IAAI;AACpD,WAAQ,KAAK,EAAE;;;CAGpB,CAAC"}
@@ -166,7 +166,10 @@ function getProviderConfig(name) {
  function isMessagesApiEnabled() {
  	return getConfig().useMessagesApi ?? true;
  }
+ function getAnthropicApiKey() {
+ 	return getConfig().anthropicApiKey ?? process.env.ANTHROPIC_API_KEY ?? void 0;
+ }
 
  //#endregion
- export { getConfig, getExtraPromptForModel, getProviderConfig, getReasoningEffortForModel, getSmallModel, isMessagesApiEnabled, isResponsesApiContextManagementModel, mergeConfigWithDefaults };
- //# sourceMappingURL=config-DIqcOsnZ.js.map
+ export { getAnthropicApiKey, getConfig, getExtraPromptForModel, getProviderConfig, getReasoningEffortForModel, getSmallModel, isMessagesApiEnabled, isResponsesApiContextManagementModel, mergeConfigWithDefaults };
+ //# sourceMappingURL=config-bVq-BhC7.js.map
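The new `getAnthropicApiKey` above resolves the key with nullish coalescing: the `config.json` value takes precedence over the `ANTHROPIC_API_KEY` environment variable, falling back to `undefined`. That ordering can be demonstrated in isolation (`resolveAnthropicApiKey` is an illustrative name, not the bundle's export):

```javascript
// Sketch of the resolution order in the diff above: config.json's
// anthropicApiKey wins, then the ANTHROPIC_API_KEY environment variable,
// then undefined. Config and env are passed in explicitly so the precedence
// is easy to see and test.
function resolveAnthropicApiKey(config, env) {
  return config.anthropicApiKey ?? env.ANTHROPIC_API_KEY ?? undefined;
}
```

Note that `??` only falls through on `null`/`undefined`, so a config value that is a non-null string is used as-is.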
@@ -0,0 +1 @@
+ {"version":3,"file":"config-bVq-BhC7.js","names":["defaultConfig: AppConfig","cachedConfig: AppConfig | null"],"sources":["../src/lib/config.ts"],"sourcesContent":["import consola from \"consola\"\nimport fs from \"node:fs\"\n\nimport { PATHS } from \"./paths\"\n\nexport interface AppConfig {\n auth?: {\n apiKeys?: Array<string>\n }\n providers?: Record<string, ProviderConfig>\n extraPrompts?: Record<string, string>\n smallModel?: string\n responsesApiContextManagementModels?: Array<string>\n modelReasoningEfforts?: Record<\n string,\n \"none\" | \"minimal\" | \"low\" | \"medium\" | \"high\" | \"xhigh\"\n >\n useFunctionApplyPatch?: boolean\n useMessagesApi?: boolean\n anthropicApiKey?: string\n}\n\nexport interface ModelConfig {\n temperature?: number\n topP?: number\n topK?: number\n}\n\nexport interface ProviderConfig {\n type?: string\n enabled?: boolean\n baseUrl?: string\n apiKey?: string\n models?: Record<string, ModelConfig>\n adjustInputTokens?: boolean\n}\n\nexport interface ResolvedProviderConfig {\n name: string\n type: \"anthropic\"\n baseUrl: string\n apiKey: string\n models?: Record<string, ModelConfig>\n adjustInputTokens?: boolean\n}\n\nconst gpt5ExplorationPrompt = `## Exploration and reading files\n- **Think first.** Before any tool call, decide ALL files/resources you will need.\n- **Batch everything.** If you need multiple files (even from different places), read them together.\n- **multi_tool_use.parallel** Use multi_tool_use.parallel to parallelize tool calls and only this.\n- **Only make sequential calls if you truly cannot know the next file without seeing a result first.**\n- **Workflow:** (a) plan all needed reads → (b) issue one parallel batch → (c) analyze results → (d) repeat if new, unpredictable reads arise.`\n\nconst gpt5CommentaryPrompt = `# Working with the user\n\nYou interact with the user through a terminal. You have 2 ways of communicating with the users: \n- Share intermediary updates in \\`commentary\\` channel. 
\n- After you have completed all your work, send a message to the \\`final\\` channel. \n\n## Intermediary updates\n\n- Intermediary updates go to the \\`commentary\\` channel.\n- User updates are short updates while you are working, they are NOT final answers.\n- You use 1-2 sentence user updates to communicate progress and new information to the user as you are doing work.\n- Do not begin responses with conversational interjections or meta commentary. Avoid openers such as acknowledgements (“Done —”, “Got it”, “Great question, ”) or framing phrases.\n- You provide user updates frequently, every 20s.\n- Before exploring or doing substantial work, you start with a user update acknowledging the request and explaining your first step. You should include your understanding of the user request and explain what you will do. Avoid commenting on the request or using starters such as \"Got it -\" or \"Understood -\" etc.\n- When exploring, e.g. searching, reading files, you provide user updates as you go, every 20s, explaining what context you are gathering and what you've learned. Vary your sentence structure when providing these updates to avoid sounding repetitive - in particular, don't start each sentence the same way.\n- After you have sufficient context, and the work is substantial, you provide a longer plan (this is the only user update that may be longer than 2 sentences and can contain formatting).\n- Before performing file edits of any kind, you provide updates explaining what edits you are making.\n- As you are thinking, you very frequently provide updates even if not taking any actions, informing the user of your progress. 
You interrupt your thinking and send multiple updates in a row if thinking for more than 100 words.\n- Tone of your updates MUST match your personality.`\n\nconst defaultConfig: AppConfig = {\n auth: {\n apiKeys: [],\n },\n providers: {},\n extraPrompts: {\n \"gpt-5-mini\": gpt5ExplorationPrompt,\n \"gpt-5.3-codex\": gpt5CommentaryPrompt,\n \"gpt-5.4-mini\": gpt5CommentaryPrompt,\n \"gpt-5.4\": gpt5CommentaryPrompt,\n },\n smallModel: \"gpt-5-mini\",\n responsesApiContextManagementModels: [],\n modelReasoningEfforts: {\n \"gpt-5-mini\": \"low\",\n \"gpt-5.3-codex\": \"xhigh\",\n \"gpt-5.4-mini\": \"xhigh\",\n \"gpt-5.4\": \"xhigh\",\n },\n useFunctionApplyPatch: true,\n useMessagesApi: true,\n}\n\nlet cachedConfig: AppConfig | null = null\n\nfunction ensureConfigFile(): void {\n try {\n fs.accessSync(PATHS.CONFIG_PATH, fs.constants.R_OK | fs.constants.W_OK)\n } catch {\n fs.mkdirSync(PATHS.APP_DIR, { recursive: true })\n fs.writeFileSync(\n PATHS.CONFIG_PATH,\n `${JSON.stringify(defaultConfig, null, 2)}\\n`,\n \"utf8\",\n )\n try {\n fs.chmodSync(PATHS.CONFIG_PATH, 0o600)\n } catch {\n return\n }\n }\n}\n\nfunction readConfigFromDisk(): AppConfig {\n ensureConfigFile()\n try {\n const raw = fs.readFileSync(PATHS.CONFIG_PATH, \"utf8\")\n if (!raw.trim()) {\n fs.writeFileSync(\n PATHS.CONFIG_PATH,\n `${JSON.stringify(defaultConfig, null, 2)}\\n`,\n \"utf8\",\n )\n return defaultConfig\n }\n return JSON.parse(raw) as AppConfig\n } catch (error) {\n consola.error(\"Failed to read config file, using default config\", error)\n return defaultConfig\n }\n}\n\nfunction mergeDefaultConfig(config: AppConfig): {\n mergedConfig: AppConfig\n changed: boolean\n} {\n const extraPrompts = config.extraPrompts ?? {}\n const defaultExtraPrompts = defaultConfig.extraPrompts ?? {}\n const modelReasoningEfforts = config.modelReasoningEfforts ?? {}\n const defaultModelReasoningEfforts = defaultConfig.modelReasoningEfforts ?? 
{}\n\n const missingExtraPromptModels = Object.keys(defaultExtraPrompts).filter(\n (model) => !Object.hasOwn(extraPrompts, model),\n )\n\n const missingReasoningEffortModels = Object.keys(\n defaultModelReasoningEfforts,\n ).filter((model) => !Object.hasOwn(modelReasoningEfforts, model))\n\n const hasExtraPromptChanges = missingExtraPromptModels.length > 0\n const hasReasoningEffortChanges = missingReasoningEffortModels.length > 0\n\n if (!hasExtraPromptChanges && !hasReasoningEffortChanges) {\n return { mergedConfig: config, changed: false }\n }\n\n return {\n mergedConfig: {\n ...config,\n extraPrompts: {\n ...defaultExtraPrompts,\n ...extraPrompts,\n },\n modelReasoningEfforts: {\n ...defaultModelReasoningEfforts,\n ...modelReasoningEfforts,\n },\n },\n changed: true,\n }\n}\n\nexport function mergeConfigWithDefaults(): AppConfig {\n const config = readConfigFromDisk()\n const { mergedConfig, changed } = mergeDefaultConfig(config)\n\n if (changed) {\n try {\n fs.writeFileSync(\n PATHS.CONFIG_PATH,\n `${JSON.stringify(mergedConfig, null, 2)}\\n`,\n \"utf8\",\n )\n } catch (writeError) {\n consola.warn(\n \"Failed to write merged extraPrompts to config file\",\n writeError,\n )\n }\n }\n\n cachedConfig = mergedConfig\n return mergedConfig\n}\n\nexport function getConfig(): AppConfig {\n cachedConfig ??= readConfigFromDisk()\n return cachedConfig\n}\n\nexport function getExtraPromptForModel(model: string): string {\n const config = getConfig()\n return config.extraPrompts?.[model] ?? \"\"\n}\n\nexport function getSmallModel(): string {\n const config = getConfig()\n return config.smallModel ?? \"gpt-5-mini\"\n}\n\nexport function getResponsesApiContextManagementModels(): Array<string> {\n const config = getConfig()\n return (\n config.responsesApiContextManagementModels\n ?? defaultConfig.responsesApiContextManagementModels\n ?? 
[]\n )\n}\n\nexport function isResponsesApiContextManagementModel(model: string): boolean {\n return getResponsesApiContextManagementModels().includes(model)\n}\n\nexport function getReasoningEffortForModel(\n model: string,\n): \"none\" | \"minimal\" | \"low\" | \"medium\" | \"high\" | \"xhigh\" {\n const config = getConfig()\n return config.modelReasoningEfforts?.[model] ?? \"high\"\n}\n\nexport function normalizeProviderBaseUrl(url: string): string {\n return url.trim().replace(/\\/+$/u, \"\")\n}\n\nexport function getProviderConfig(name: string): ResolvedProviderConfig | null {\n const providerName = name.trim()\n if (!providerName) {\n return null\n }\n\n const config = getConfig()\n const provider = config.providers?.[providerName]\n if (!provider) {\n return null\n }\n\n if (provider.enabled === false) {\n return null\n }\n\n const type = provider.type ?? \"anthropic\"\n if (type !== \"anthropic\") {\n consola.warn(\n `Provider ${providerName} is ignored because only anthropic type is supported`,\n )\n return null\n }\n\n const baseUrl = normalizeProviderBaseUrl(provider.baseUrl ?? \"\")\n const apiKey = (provider.apiKey ?? \"\").trim()\n if (!baseUrl || !apiKey) {\n consola.warn(\n `Provider ${providerName} is enabled but missing baseUrl or apiKey`,\n )\n return null\n }\n\n return {\n name: providerName,\n type,\n baseUrl,\n apiKey,\n models: provider.models,\n adjustInputTokens: provider.adjustInputTokens,\n }\n}\n\nexport function listEnabledProviders(): Array<string> {\n const config = getConfig()\n const providerNames = Object.keys(config.providers ?? {})\n return providerNames.filter((name) => getProviderConfig(name) !== null)\n}\n\nexport function isMessagesApiEnabled(): boolean {\n const config = getConfig()\n return config.useMessagesApi ?? true\n}\n\nexport function getAnthropicApiKey(): string | undefined {\n const config = getConfig()\n return config.anthropicApiKey ?? process.env.ANTHROPIC_API_KEY ?? 
undefined\n}\n"],"mappings":";;;;;AA8CA,MAAM,wBAAwB;;;;;;AAO9B,MAAM,uBAAuB;;;;;;;;;;;;;;;;;;;AAoB7B,MAAMA,gBAA2B;CAC/B,MAAM,EACJ,SAAS,EAAE,EACZ;CACD,WAAW,EAAE;CACb,cAAc;EACZ,cAAc;EACd,iBAAiB;EACjB,gBAAgB;EAChB,WAAW;EACZ;CACD,YAAY;CACZ,qCAAqC,EAAE;CACvC,uBAAuB;EACrB,cAAc;EACd,iBAAiB;EACjB,gBAAgB;EAChB,WAAW;EACZ;CACD,uBAAuB;CACvB,gBAAgB;CACjB;AAED,IAAIC,eAAiC;AAErC,SAAS,mBAAyB;AAChC,KAAI;AACF,KAAG,WAAW,MAAM,aAAa,GAAG,UAAU,OAAO,GAAG,UAAU,KAAK;SACjE;AACN,KAAG,UAAU,MAAM,SAAS,EAAE,WAAW,MAAM,CAAC;AAChD,KAAG,cACD,MAAM,aACN,GAAG,KAAK,UAAU,eAAe,MAAM,EAAE,CAAC,KAC1C,OACD;AACD,MAAI;AACF,MAAG,UAAU,MAAM,aAAa,IAAM;UAChC;AACN;;;;AAKN,SAAS,qBAAgC;AACvC,mBAAkB;AAClB,KAAI;EACF,MAAM,MAAM,GAAG,aAAa,MAAM,aAAa,OAAO;AACtD,MAAI,CAAC,IAAI,MAAM,EAAE;AACf,MAAG,cACD,MAAM,aACN,GAAG,KAAK,UAAU,eAAe,MAAM,EAAE,CAAC,KAC1C,OACD;AACD,UAAO;;AAET,SAAO,KAAK,MAAM,IAAI;UACf,OAAO;AACd,UAAQ,MAAM,oDAAoD,MAAM;AACxE,SAAO;;;AAIX,SAAS,mBAAmB,QAG1B;CACA,MAAM,eAAe,OAAO,gBAAgB,EAAE;CAC9C,MAAM,sBAAsB,cAAc,gBAAgB,EAAE;CAC5D,MAAM,wBAAwB,OAAO,yBAAyB,EAAE;CAChE,MAAM,+BAA+B,cAAc,yBAAyB,EAAE;CAE9E,MAAM,2BAA2B,OAAO,KAAK,oBAAoB,CAAC,QAC/D,UAAU,CAAC,OAAO,OAAO,cAAc,MAAM,CAC/C;CAED,MAAM,+BAA+B,OAAO,KAC1C,6BACD,CAAC,QAAQ,UAAU,CAAC,OAAO,OAAO,uBAAuB,MAAM,CAAC;CAEjE,MAAM,wBAAwB,yBAAyB,SAAS;CAChE,MAAM,4BAA4B,6BAA6B,SAAS;AAExE,KAAI,CAAC,yBAAyB,CAAC,0BAC7B,QAAO;EAAE,cAAc;EAAQ,SAAS;EAAO;AAGjD,QAAO;EACL,cAAc;GACZ,GAAG;GACH,cAAc;IACZ,GAAG;IACH,GAAG;IACJ;GACD,uBAAuB;IACrB,GAAG;IACH,GAAG;IACJ;GACF;EACD,SAAS;EACV;;AAGH,SAAgB,0BAAqC;CACnD,MAAM,SAAS,oBAAoB;CACnC,MAAM,EAAE,cAAc,YAAY,mBAAmB,OAAO;AAE5D,KAAI,QACF,KAAI;AACF,KAAG,cACD,MAAM,aACN,GAAG,KAAK,UAAU,cAAc,MAAM,EAAE,CAAC,KACzC,OACD;UACM,YAAY;AACnB,UAAQ,KACN,sDACA,WACD;;AAIL,gBAAe;AACf,QAAO;;AAGT,SAAgB,YAAuB;AACrC,kBAAiB,oBAAoB;AACrC,QAAO;;AAGT,SAAgB,uBAAuB,OAAuB;AAE5D,QADe,WAAW,CACZ,eAAe,UAAU;;AAGzC,SAAgB,gBAAwB;AAEtC,QADe,WAAW,CACZ,cAAc;;AAG9B,SAAgB,yCAAwD;AAEtE,QADe,WAAW,CAEjB,uCACJ,cAAc,uCACd,EAAE;;AAIT,SAAgB,qCAAqC,OAAwB;AAC3E,QAAO,wCAAwC,CAAC,SAAS,MAAM;;AAGjE,SAAgB,2BACd,OAC0D;AAE1D,QAD
e,WAAW,CACZ,wBAAwB,UAAU;;AAGlD,SAAgB,yBAAyB,KAAqB;AAC5D,QAAO,IAAI,MAAM,CAAC,QAAQ,SAAS,GAAG;;AAGxC,SAAgB,kBAAkB,MAA6C;CAC7E,MAAM,eAAe,KAAK,MAAM;AAChC,KAAI,CAAC,aACH,QAAO;CAIT,MAAM,WADS,WAAW,CACF,YAAY;AACpC,KAAI,CAAC,SACH,QAAO;AAGT,KAAI,SAAS,YAAY,MACvB,QAAO;CAGT,MAAM,OAAO,SAAS,QAAQ;AAC9B,KAAI,SAAS,aAAa;AACxB,UAAQ,KACN,YAAY,aAAa,sDAC1B;AACD,SAAO;;CAGT,MAAM,UAAU,yBAAyB,SAAS,WAAW,GAAG;CAChE,MAAM,UAAU,SAAS,UAAU,IAAI,MAAM;AAC7C,KAAI,CAAC,WAAW,CAAC,QAAQ;AACvB,UAAQ,KACN,YAAY,aAAa,2CAC1B;AACD,SAAO;;AAGT,QAAO;EACL,MAAM;EACN;EACA;EACA;EACA,QAAQ,SAAS;EACjB,mBAAmB,SAAS;EAC7B;;AASH,SAAgB,uBAAgC;AAE9C,QADe,WAAW,CACZ,kBAAkB;;AAGlC,SAAgB,qBAAyC;AAEvD,QADe,WAAW,CACZ,mBAAmB,QAAQ,IAAI,qBAAqB"}
@@ -1,4 +1,4 @@
1
- import { HTTPError, getGitHubApiBaseUrl, githubHeaders, state } from "./utils-BzmtBkw4.js";
1
+ import { HTTPError, getGitHubApiBaseUrl, githubHeaders, state } from "./utils-Dm_whFBz.js";
2
2
 
3
3
  //#region src/services/github/get-copilot-usage.ts
4
4
  const getCopilotUsage = async () => {
@@ -9,4 +9,4 @@ const getCopilotUsage = async () => {
9
9
 
10
10
  //#endregion
11
11
  export { getCopilotUsage };
12
- //# sourceMappingURL=get-copilot-usage-CnL_6H-N.js.map
12
+ //# sourceMappingURL=get-copilot-usage-DbzBiP2c.js.map
@@ -1 +1 @@
1
- {"version":3,"file":"get-copilot-usage-CnL_6H-N.js","names":[],"sources":["../src/services/github/get-copilot-usage.ts"],"sourcesContent":["import { getGitHubApiBaseUrl, githubHeaders } from \"~/lib/api-config\"\nimport { HTTPError } from \"~/lib/error\"\nimport { state } from \"~/lib/state\"\n\nexport const getCopilotUsage = async (): Promise<CopilotUsageResponse> => {\n const response = await fetch(\n `${getGitHubApiBaseUrl()}/copilot_internal/user`,\n {\n headers: githubHeaders(state),\n },\n )\n\n if (!response.ok) {\n throw new HTTPError(\"Failed to get Copilot usage\", response)\n }\n\n return (await response.json()) as CopilotUsageResponse\n}\n\nexport interface QuotaDetail {\n entitlement: number\n overage_count: number\n overage_permitted: boolean\n percent_remaining: number\n quota_id: string\n quota_remaining: number\n remaining: number\n unlimited: boolean\n}\n\ninterface QuotaSnapshots {\n chat: QuotaDetail\n completions: QuotaDetail\n premium_interactions: QuotaDetail\n}\n\ninterface CopilotUsageResponse {\n access_type_sku: string\n analytics_tracking_id: string\n assigned_date: string\n can_signup_for_limited: boolean\n chat_enabled: boolean\n copilot_plan: string\n organization_login_list: Array<unknown>\n organization_list: Array<unknown>\n quota_reset_date: string\n quota_snapshots: QuotaSnapshots\n}\n"],"mappings":";;;AAIA,MAAa,kBAAkB,YAA2C;CACxE,MAAM,WAAW,MAAM,MACrB,GAAG,qBAAqB,CAAC,yBACzB,EACE,SAAS,cAAc,MAAM,EAC9B,CACF;AAED,KAAI,CAAC,SAAS,GACZ,OAAM,IAAI,UAAU,+BAA+B,SAAS;AAG9D,QAAQ,MAAM,SAAS,MAAM"}
1
+ {"version":3,"file":"get-copilot-usage-DbzBiP2c.js","names":[],"sources":["../src/services/github/get-copilot-usage.ts"],"sourcesContent":["import { getGitHubApiBaseUrl, githubHeaders } from \"~/lib/api-config\"\nimport { HTTPError } from \"~/lib/error\"\nimport { state } from \"~/lib/state\"\n\nexport const getCopilotUsage = async (): Promise<CopilotUsageResponse> => {\n const response = await fetch(\n `${getGitHubApiBaseUrl()}/copilot_internal/user`,\n {\n headers: githubHeaders(state),\n },\n )\n\n if (!response.ok) {\n throw new HTTPError(\"Failed to get Copilot usage\", response)\n }\n\n return (await response.json()) as CopilotUsageResponse\n}\n\nexport interface QuotaDetail {\n entitlement: number\n overage_count: number\n overage_permitted: boolean\n percent_remaining: number\n quota_id: string\n quota_remaining: number\n remaining: number\n unlimited: boolean\n}\n\ninterface QuotaSnapshots {\n chat: QuotaDetail\n completions: QuotaDetail\n premium_interactions: QuotaDetail\n}\n\ninterface CopilotUsageResponse {\n access_type_sku: string\n analytics_tracking_id: string\n assigned_date: string\n can_signup_for_limited: boolean\n chat_enabled: boolean\n copilot_plan: string\n organization_login_list: Array<unknown>\n organization_list: Array<unknown>\n quota_reset_date: string\n quota_snapshots: QuotaSnapshots\n}\n"],"mappings":";;;AAIA,MAAa,kBAAkB,YAA2C;CACxE,MAAM,WAAW,MAAM,MACrB,GAAG,qBAAqB,CAAC,yBACzB,EACE,SAAS,cAAc,MAAM,EAC9B,CACF;AAED,KAAI,CAAC,SAAS,GACZ,OAAM,IAAI,UAAU,+BAA+B,SAAS;AAG9D,QAAQ,MAAM,SAAS,MAAM"}
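The `sourcesContent` above embeds the `CopilotUsageResponse` shape returned by `/copilot_internal/user`, including the `premium_interactions` quota. As a hedged sketch (the `describePremiumQuota` helper and the sample values are invented for illustration; the package itself only fetches and returns the response), the quota fields can be summarized like this:

```typescript
// Minimal sketch of the QuotaDetail shape from the source map above.
// Only the fields used here are declared; the real interface has more.
interface QuotaDetail {
  entitlement: number;
  remaining: number;
  percent_remaining: number;
  unlimited: boolean;
}

// Hypothetical helper: formats a quota snapshot such as
// quota_snapshots.premium_interactions into a one-line summary.
function describePremiumQuota(q: QuotaDetail): string {
  if (q.unlimited) return "premium requests: unlimited";
  return `premium requests: ${q.remaining}/${q.entitlement} left (${q.percent_remaining}%)`;
}

// Sample values are invented, not real Copilot quota data.
console.log(
  describePremiumQuota({
    entitlement: 300,
    remaining: 120,
    percent_remaining: 40,
    unlimited: false,
  }),
);
// "premium requests: 120/300 left (40%)"
```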
package/dist/main.js CHANGED
@@ -20,10 +20,10 @@ const args = parseArgs(process.argv, cliArgs);
20
20
  if (typeof args["api-home"] === "string") process.env.COPILOT_API_HOME = args["api-home"];
21
21
  if (typeof args["oauth-app"] === "string") process.env.COPILOT_API_OAUTH_APP = args["oauth-app"];
22
22
  if (typeof args["enterprise-url"] === "string") process.env.COPILOT_API_ENTERPRISE_URL = args["enterprise-url"];
23
- const { auth } = await import("./auth-BCT7hxQA.js");
24
- const { checkUsage } = await import("./check-usage-YCS0L_nI.js");
23
+ const { auth } = await import("./auth-CsOZjVQp.js");
24
+ const { checkUsage } = await import("./check-usage-DBchI-i1.js");
25
25
  const { debug } = await import("./debug-Dx1S6uWG.js");
26
- const { start } = await import("./start-BECWyGAL.js");
26
+ const { start } = await import("./start-BbSn36bX.js");
27
27
  const main = defineCommand({
28
28
  meta: {
29
29
  name: "copilot-api",
@@ -1,7 +1,7 @@
1
1
  import { PATHS } from "./paths-Cla6y5eD.js";
2
- import { HTTPError, cacheModels, copilotBaseUrl, copilotHeaders, forwardError, generateRequestIdFromPayload, getRootSessionId, getUUID, isNullish, parseUserIdMetadata, prepareForCompact, prepareInteractionHeaders, sleep, state } from "./utils-BzmtBkw4.js";
3
- import { getCopilotUsage } from "./get-copilot-usage-CnL_6H-N.js";
4
- import { getConfig, getExtraPromptForModel, getProviderConfig, getReasoningEffortForModel, getSmallModel, isMessagesApiEnabled, isResponsesApiContextManagementModel } from "./config-DIqcOsnZ.js";
2
+ import { HTTPError, cacheModels, copilotBaseUrl, copilotHeaders, forwardError, generateRequestIdFromPayload, getRootSessionId, getUUID, isNullish, parseUserIdMetadata, prepareForCompact, prepareInteractionHeaders, sleep, state } from "./utils-Dm_whFBz.js";
3
+ import { getCopilotUsage } from "./get-copilot-usage-DbzBiP2c.js";
4
+ import { getAnthropicApiKey, getConfig, getExtraPromptForModel, getProviderConfig, getReasoningEffortForModel, getSmallModel, isMessagesApiEnabled, isResponsesApiContextManagementModel } from "./config-bVq-BhC7.js";
5
5
  import consola from "consola";
6
6
  import path from "node:path";
7
7
  import { Hono } from "hono";
@@ -851,12 +851,48 @@ function getAnthropicToolUseBlocks(toolCalls) {
851
851
  //#endregion
852
852
  //#region src/routes/messages/count-tokens-handler.ts
853
853
  /**
854
- * Handles token counting for Anthropic messages
854
+ * Forwards token counting to Anthropic's real /v1/messages/count_tokens endpoint.
855
+ * Returns the result on success, or null to fall through to estimation.
856
+ */
857
+ async function countTokensViaAnthropic(c, payload) {
858
+ if (!payload.model.startsWith("claude")) return null;
859
+ const apiKey = getAnthropicApiKey();
860
+ if (!apiKey) return null;
861
+ const model = payload.model.replaceAll(".", "-");
862
+ const res = await fetch("https://api.anthropic.com/v1/messages/count_tokens", {
863
+ method: "POST",
864
+ headers: {
865
+ "content-type": "application/json",
866
+ "x-api-key": apiKey,
867
+ "anthropic-version": "2023-06-01",
868
+ "anthropic-beta": "token-counting-2024-11-01"
869
+ },
870
+ body: JSON.stringify({
871
+ ...payload,
872
+ model
873
+ })
874
+ });
875
+ if (!res.ok) {
876
+ consola.warn("Anthropic count_tokens failed:", res.status, await res.text().catch(() => ""), "- falling back to estimation");
877
+ return null;
878
+ }
879
+ const result = await res.json();
880
+ consola.info("Token count (Anthropic API):", result.input_tokens);
881
+ return c.json(result);
882
+ }
883
+ /**
884
+ * Handles token counting for Anthropic messages.
885
+ *
886
+ * When an Anthropic API key is available (via config or ANTHROPIC_API_KEY env var)
887
+ * and the model is a Claude model, forwards to Anthropic's free /v1/messages/count_tokens
888
+ * endpoint for accurate counts. Otherwise falls back to GPT tokenizer estimation.
855
889
  */
856
890
  async function handleCountTokens(c) {
857
891
  try {
858
- const anthropicBeta = c.req.header("anthropic-beta");
859
892
  const anthropicPayload = await c.req.json();
893
+ const anthropicResult = await countTokensViaAnthropic(c, anthropicPayload);
894
+ if (anthropicResult) return anthropicResult;
895
+ const anthropicBeta = c.req.header("anthropic-beta");
860
896
  const openAIPayload = translateToOpenAI(anthropicPayload);
861
897
  const selectedModel = findEndpointModel(anthropicPayload.model);
862
898
  anthropicPayload.model = selectedModel?.id ?? anthropicPayload.model;
@@ -2605,6 +2641,7 @@ async function handleProviderMessages(c) {
2605
2641
  }
2606
2642
  let data = chunk.data;
2607
2643
  if (!data) continue;
2644
+ if (chunk.data === "[DONE]") break;
2608
2645
  try {
2609
2646
  const parsed = JSON.parse(data);
2610
2647
  if (parsed.type === "message_start") adjustInputTokens(providerConfig, parsed.message.usage);
@@ -2877,4 +2914,4 @@ server.route("/:provider/v1/models", providerModelRoutes);
2877
2914
 
2878
2915
  //#endregion
2879
2916
  export { server };
2880
- //# sourceMappingURL=server-CSVu0O6G.js.map
2917
+ //# sourceMappingURL=server-D86GQv-x.js.map