@jeffreycao/copilot-api 1.9.10 → 1.9.11
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +4 -4
- package/README.zh-CN.md +4 -4
- package/dist/main.js +1 -1
- package/dist/{server-BG69Fgym.js → server-D4pg54e1.js} +2228 -2191
- package/dist/server-D4pg54e1.js.map +1 -0
- package/dist/{start-CpqH2Ekm.js → start-D2K2jpHF.js} +2 -2
- package/dist/{start-CpqH2Ekm.js.map → start-D2K2jpHF.js.map} +1 -1
- package/package.json +1 -1
- package/dist/server-BG69Fgym.js.map +0 -1
package/README.md
CHANGED

@@ -62,7 +62,7 @@ Compared with routing everything through plain Chat Completions compatibility, t
 - **Opencode OAuth Support**: Use opencode GitHub Copilot authentication by setting `COPILOT_API_OAUTH_APP=opencode` environment variable or using `--oauth-app=opencode` command line option.
 - **GitHub Enterprise Support**: Connect to GHE.com by setting `COPILOT_API_ENTERPRISE_URL` environment variable (e.g., `company.ghe.com`) or using `--enterprise-url=company.ghe.com` command line option.
 - **Custom Data Directory**: Change the default data directory (where tokens and config are stored) by setting `COPILOT_API_HOME` environment variable or using `--api-home=/path/to/dir` command line option.
-- **Multi-Provider Messages Proxy Routes**: Add global provider configs and call external Anthropic-compatible or OpenAI-compatible APIs via `/:provider/v1/messages` and `/:provider/v1/models
+- **Multi-Provider Messages Proxy Routes**: Add global provider configs and call external Anthropic-compatible or OpenAI-compatible APIs via `/:provider/v1/messages` and `/:provider/v1/models`, or send `model: "provider/model"` to the top-level `/v1/messages` API.
 - **Accurate Claude Token Counting**: Optionally forward `/v1/messages/count_tokens` requests for Claude models to Anthropic's free token counting endpoint for exact counts instead of GPT tokenizer estimation.
 - **GPT Context Management**: Configurable context compaction for long-running GPT conversations via `responsesApiContextManagementModels`, reducing unnecessary premium requests when approaching token limits. See [Configuration](#configuration-configjson) for details.
@@ -357,7 +357,7 @@ The following command line options are available for the `start` command:
 ```
 - **auth.apiKeys:** API keys used for request authentication. Supports multiple keys for rotation. Requests can authenticate with either `x-api-key: <key>` or `Authorization: Bearer <key>`. If empty or omitted, authentication is disabled.
 - **extraPrompts:** Map of `model -> prompt` appended to the first system prompt when translating Anthropic-style requests to Copilot. Use this to inject guardrails or guidance per model. Missing default entries are auto-added without overwriting your custom prompts. The built-in prompts for `gpt-5.3-codex` and `gpt-5.4` enable phase-aware commentary, which lets the model emit a short user-facing progress update before tools or deeper reasoning.
-- **providers:** Global upstream provider map. Each provider key (for example `custom`) becomes a route prefix (`/custom/v1/messages`). Supports `type: "anthropic"` and `type: "openai-compatible"`.
+- **providers:** Global upstream provider map. Each provider key (for example `custom`) becomes a route prefix (`/custom/v1/messages`). Supports `type: "anthropic"` and `type: "openai-compatible"`. Top-level Anthropic clients can also use `model: "custom/model-id"` with `/v1/messages` and `/v1/messages/count_tokens`; the proxy strips the `custom/` prefix before forwarding upstream. `GET /v1/models` does not aggregate provider models; use `GET /custom/v1/models` for provider model lists.
 - `enabled` defaults to `true` if omitted.
 - `baseUrl` should be provider API base URL without the final endpoint. For Anthropic providers, omit `/v1/messages`; for OpenAI-compatible providers, omit `/v1/chat/completions`.
 - `apiKey` is used as the upstream credential value.
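The `providers` bullets above describe a fragment of `config.json`. A minimal sketch of what such an entry could look like (hedged: the provider name `custom` comes from the diff's own example, the base URL and key value are placeholders, and the real schema may accept additional fields):

```jsonc
{
  "providers": {
    // The key becomes the route prefix: /custom/v1/messages, /custom/v1/models
    "custom": {
      "enabled": true,                      // optional; defaults to true when omitted
      "type": "anthropic",                  // or "openai-compatible"
      "baseUrl": "https://api.example.com", // base URL only: no /v1/messages or /v1/chat/completions suffix
      "apiKey": "<upstream-api-key>"        // sent as the upstream credential
    }
  }
}
```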
@@ -419,8 +419,8 @@ These endpoints are designed to be compatible with the Anthropic Messages API.

 | Endpoint | Method | Description |
 | -------------------------------- | ------ | ------------------------------------------------------------ |
-| `POST /v1/messages` | `POST` | Creates a model response for a given conversation. |
-| `POST /v1/messages/count_tokens` | `POST` | Calculates the number of tokens for a given set of messages. |
+| `POST /v1/messages` | `POST` | Creates a model response for a given conversation. Supports `provider/model` aliases for configured providers. |
+| `POST /v1/messages/count_tokens` | `POST` | Calculates the number of tokens for a given set of messages. Supports `provider/model` aliases for configured providers. |
 | `POST /:provider/v1/messages` | `POST` | Proxies Anthropic Messages requests to the configured Anthropic or OpenAI-compatible provider. |
 | `GET /:provider/v1/models` | `GET` | Proxies model listing requests to the configured provider. |
 | `POST /:provider/v1/messages/count_tokens` | `POST` | Calculates tokens locally for provider route requests. |
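To illustrate the `provider/model` alias on the top-level route, a request body for `POST /v1/messages` might look like the following (hedged: `custom` and `model-id` are the placeholder names used in the diff itself, and `max_tokens`/`messages` follow the standard Anthropic Messages request shape):

```jsonc
{
  "model": "custom/model-id", // "custom/" selects the configured provider; "model-id" is forwarded upstream
  "max_tokens": 256,
  "messages": [
    { "role": "user", "content": "Hello" }
  ]
}
```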
package/README.zh-CN.md
CHANGED

@@ -62,7 +62,7 @@
 - **opencode OAuth support**: Enable opencode GitHub Copilot authentication by setting the `COPILOT_API_OAUTH_APP=opencode` environment variable or using the `--oauth-app=opencode` command line option.
 - **GitHub Enterprise support**: Connect to GHE.com by setting the `COPILOT_API_ENTERPRISE_URL` environment variable (e.g., `company.ghe.com`) or the `--enterprise-url=company.ghe.com` command line option.
 - **Custom data directory**: Change the default data directory (where tokens and config are stored) via the `COPILOT_API_HOME` environment variable or the `--api-home=/path/to/dir` command line option.
-- **Multi-provider Messages proxy routes**: Add global provider configs and call external Anthropic- or OpenAI-compatible APIs via `/:provider/v1/messages` and `/:provider/v1/models`
+- **Multi-provider Messages proxy routes**: Add global provider configs and call external Anthropic- or OpenAI-compatible APIs via `/:provider/v1/messages` and `/:provider/v1/models`, or send `model: "provider/model"` directly to the top-level `/v1/messages`.
 - **Accurate Claude token counting**: Optionally forward `/v1/messages/count_tokens` requests for Claude models to Anthropic's free token counting endpoint for exact counts instead of relying on GPT tokenizer estimation.
 - **GPT context management**: Configurable context compaction for long GPT conversations via `responsesApiContextManagementModels`, reducing unnecessary premium requests when approaching token limits. See [Configuration](#configuration-configjson).

@@ -361,7 +361,7 @@ Copilot API now uses a subcommand structure; the main commands include:
 ```
 - **auth.apiKeys:** API keys used for request authentication. Multiple keys are supported for rotation. Requests can authenticate with either `x-api-key: <key>` or `Authorization: Bearer <key>`. If empty or omitted, authentication is disabled.
 - **extraPrompts:** A `model -> prompt` map appended to the first system prompt when translating Anthropic-style requests for Copilot. Use it to inject guardrails or guidance per model. Missing default entries are auto-added without overwriting your custom prompts. The built-in prompts for `gpt-5.3-codex` and `gpt-5.4` enable phase-aware commentary, letting the model emit a short user-visible progress update before tool calls or deeper reasoning.
-- **providers:** Global upstream provider map. Each provider key (e.g. `custom`) becomes a route prefix (`/custom/v1/messages`). Supports `type: "anthropic"` and `type: "openai-compatible"`.
+- **providers:** Global upstream provider map. Each provider key (e.g. `custom`) becomes a route prefix (`/custom/v1/messages`). Supports `type: "anthropic"` and `type: "openai-compatible"`. Top-level Anthropic clients can also use `model: "custom/model-id"` with `/v1/messages` and `/v1/messages/count_tokens`; the proxy strips the `custom/` prefix before forwarding upstream. `GET /v1/models` does not aggregate provider models; use `GET /custom/v1/models` for provider model lists.
 - `enabled`: optional; defaults to `true` if omitted.
 - `baseUrl`: the provider API base URL without the trailing endpoint. Omit `/v1/messages` for Anthropic providers and `/v1/chat/completions` for OpenAI-compatible providers.
 - `apiKey`: used as the upstream credential value.

@@ -423,8 +423,8 @@ curl http://localhost:4141/v1/models \

 | Endpoint | Method | Description |
 | --- | --- | --- |
-| `POST /v1/messages` | `POST` | Creates a model response for a given conversation. |
-| `POST /v1/messages/count_tokens` | `POST` | Calculates the number of tokens for a given set of messages. |
+| `POST /v1/messages` | `POST` | Creates a model response for a given conversation. Supports `provider/model` aliases for configured providers. |
+| `POST /v1/messages/count_tokens` | `POST` | Calculates the number of tokens for a given set of messages. Supports `provider/model` aliases for configured providers. |
 | `POST /:provider/v1/messages` | `POST` | Proxies Anthropic Messages requests to the configured Anthropic or OpenAI-compatible provider. |
 | `GET /:provider/v1/models` | `GET` | Proxies model listing requests to the configured provider. |
 | `POST /:provider/v1/messages/count_tokens` | `POST` | Counts tokens locally for provider route requests. |
package/dist/main.js
CHANGED

@@ -44,7 +44,7 @@ bindElectronFetch();
 const { auth } = await import("./auth-CWEhhJYn.js");
 const { checkUsage } = await import("./check-usage-B5yr4fpk.js");
 const { debug } = await import("./debug-DcC7ZPH0.js");
-const { start } = await import("./start-CpqH2Ekm.js");
+const { start } = await import("./start-D2K2jpHF.js");
 const main = defineCommand({
 meta: {
 name: "copilot-api",