copilot-api-plus 1.2.7 → 1.2.9
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.en.md +5 -4
- package/README.md +5 -4
- package/dist/main.js +17 -4
- package/dist/main.js.map +1 -1
- package/package.json +1 -1
package/README.en.md
CHANGED
```diff
@@ -46,7 +46,7 @@ English | [简体中文](README.md)
 | 👥 **Multi-Account** | Multiple GitHub accounts with automatic failover on quota exhaustion/rate limiting/bans |
 | 🔀 **Model Routing** | Flexible model name mapping and per-model concurrency control |
 | 📱 **Visual Management** | Web dashboard for account management, model config, and runtime stats |
-| 🛡️ **Network Resilience** | 120s connection timeout +
+| 🛡️ **Network Resilience** | 120s connection timeout + smart retry (pool reset + fast-fail) |
 | ✂️ **Context Passthrough** | Full context passthrough to upstream API; clients (e.g. Claude Code) manage compression |
 | 🔍 **Smart Model Matching** | Handles model name format differences (date suffixes, dash/dot versions, etc.) |
 | 🧠 **Thinking Chain** | Automatically enables deep thinking (thinking/reasoning) for supported models, improving code quality |
```
```diff
@@ -580,10 +580,11 @@ Each API request outputs a log line with model name, status code, and duration:
 
 ### Network Resilience
 
-Built-in connection timeout and
+Built-in connection timeout and smart retry for upstream API requests, minimizing Copilot request credit consumption:
 
-- **Connection timeout**: 120 seconds (
-- **Retry strategy**: Up to
+- **Connection timeout**: 120 seconds for the first attempt, 20 seconds for retries (fail fast)
+- **Retry strategy**: Up to 1 retry (2 total attempts), 2-second delay
+- **Connection pool reset**: Automatically destroys all pooled connections on the first network error and creates fresh instances, preventing retries from hitting stale sockets
 - Only retries network-layer errors (timeout, TLS disconnect, connection reset, etc.); HTTP error codes (e.g. 400/500) are not retried
 - SSE stream interruptions gracefully send error events to the client
 
```
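Taken together, the bullets above describe one small retry loop. The following is a minimal standalone sketch of that policy, not the package's actual code: the constant values (`120s`/`20s`/`2s`, one retry) come from the README, while the dependency-injected `doFetch` and `resetConnections` parameters are our own testability shim (the real `fetchWithRetry` in `dist/main.js` takes a URL and an init builder):

```javascript
// Minimal model of the documented retry policy (a sketch, not the package's code).
const FETCH_TIMEOUT_MS = 120_000; // first attempt: long timeout for slow models
const RETRY_TIMEOUT_MS = 20_000;  // retry: fail fast on a fresh connection
const RETRY_DELAYS = [2_000];     // one retry, 2 s after the first failure

async function fetchWithRetry(doFetch, resetConnections) {
  let lastError;
  const maxAttempts = RETRY_DELAYS.length + 1; // 2 total attempts
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      // First attempt gets the generous timeout; the retry gets the short one.
      const timeout = attempt === 0 ? FETCH_TIMEOUT_MS : RETRY_TIMEOUT_MS;
      return await doFetch(timeout);
    } catch (error) {
      lastError = error;
      // Destroy pooled sockets once so the retry gets a fresh connection.
      if (attempt === 0) resetConnections();
      if (attempt < RETRY_DELAYS.length) {
        await new Promise((resolve) => setTimeout(resolve, RETRY_DELAYS[attempt]));
      }
    }
  }
  throw lastError;
}
```

Note that only the transport layer decides whether an error is retryable; in the sketch any thrown error retries, whereas the package (per the README) retries network-layer failures only and passes HTTP error statuses straight through.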
package/README.md
CHANGED
```diff
@@ -47,7 +47,7 @@
 | 👥 **Multi-Account Management** | Add multiple GitHub accounts; automatically switches to the next on quota exhaustion/rate limiting/bans |
 | 🔀 **Model Routing** | Flexible model name mapping and per-model concurrency control |
 | 📱 **Visual Management** | Web dashboard for account management, model management, and runtime stats |
-| 🛡️ **Network Resilience** | 120s connection timeout +
+| 🛡️ **Network Resilience** | 120s connection timeout + smart retry (connection pool reset + short-timeout fast failure) |
 | ✂️ **Context Passthrough** | Full context passthrough to the upstream API; clients (e.g. Claude Code) manage compression themselves |
 | 🔍 **Smart Model Matching** | Automatically handles model name format differences (date suffixes, dash/dot version numbers, etc.) |
 | 🧠 **Thinking Chain** | Automatically enables deep thinking (thinking/reasoning) for supported models, improving code quality |
```
```diff
@@ -743,10 +743,11 @@ Anthropic-format model names (e.g. `claude-opus-4-6`) and Copilot's model list
 
 ### Network Resilience
 
-For the upstream API
+Requests to the upstream API have a built-in connection timeout and smart retry to minimize Copilot request credit consumption:
 
--
-- **Retry strategy**: Up to
+- **Connection timeout**: 120 seconds for the first request, 20 seconds for retries (fail fast instead of waiting in vain)
+- **Retry strategy**: Up to 1 retry (2 total attempts), 2-second interval
+- **Connection pool reset**: After the first network error, automatically destroys all connections and creates fresh instances, so later requests don't reuse a bad connection
 - Only retries network-layer errors (timeout, TLS disconnect, connection reset, etc.); HTTP error codes (e.g. 400/500) are not retried
 - When the SSE stream is interrupted, gracefully sends an error event to the client
 
```
package/dist/main.js
CHANGED
```diff
@@ -121,9 +121,13 @@ async function applyProxyConfig() {
 //#endregion
 //#region src/lib/proxy.ts
 const agentOptions = {
-	keepAliveTimeout:
-	keepAliveMaxTimeout:
-	connect: {
+	keepAliveTimeout: 6e4,
+	keepAliveMaxTimeout: 3e5,
+	connect: {
+		timeout: 15e3,
+		keepAlive: true,
+		keepAliveInitialDelay: 15e3
+	}
 };
 let direct;
 let proxies = /* @__PURE__ */ new Map();
```
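For reference, the minified exponent literals in the new `agentOptions` decode as follows (all in milliseconds). The field names match the option shape of undici's `Agent` (with `connect` options forwarded to the underlying socket), but the excerpt doesn't show which class consumes them, so treat that mapping as an assumption:

```javascript
// Decoded values of the agentOptions object from the diff (all milliseconds).
const agentOptions = {
  keepAliveTimeout: 6e4,        // 60_000 ms: drop idle pooled sockets after 60 s
  keepAliveMaxTimeout: 3e5,     // 300_000 ms: never keep a socket past 5 min
  connect: {
    timeout: 15e3,              // 15 s to establish the TCP/TLS connection
    keepAlive: true,            // enable TCP keep-alive probes on the socket
    keepAliveInitialDelay: 15e3 // first keep-alive probe after 15 s idle
  }
};
console.log(agentOptions.keepAliveTimeout / 1000, "s"); // prints "60 s"
```

The combination makes sense for the resilience feature above: stale pooled sockets are expired after 60 s of idleness, and live ones are probed every 15 s so dead connections are detected before a request is wasted on them.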
```diff
@@ -1715,6 +1719,14 @@ const FETCH_TIMEOUT_MS = 12e4;
  */
 const RETRY_DELAYS = [2e3];
 /**
+ * Shorter timeout for retry attempts. The first request uses the full
+ * FETCH_TIMEOUT_MS (120 s) to accommodate slow models. Retries happen
+ * after a connection-pool reset, so a fresh socket should connect quickly —
+ * if it doesn't respond within 20 s, the upstream is genuinely down and
+ * waiting longer just burns time (and possibly credits).
+ */
+const RETRY_TIMEOUT_MS = 2e4;
+/**
  * Wrapper around `fetch()` that aborts if the server doesn't respond within
  * `timeoutMs`. The timeout only covers the period until the response headers
  * arrive – once the body starts streaming, the timeout is cleared so that
```
```diff
@@ -1745,7 +1757,8 @@ async function fetchWithRetry(url, buildInit) {
 	let lastError;
 	const maxAttempts = RETRY_DELAYS.length + 1;
 	for (let attempt = 0; attempt < maxAttempts; attempt++) try {
-
+		const timeout = attempt === 0 ? FETCH_TIMEOUT_MS : RETRY_TIMEOUT_MS;
+		return await fetchWithTimeout(url, buildInit(), timeout);
 	} catch (error) {
 		lastError = error;
 		if (attempt === 0) resetConnections();
```
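The `fetchWithTimeout` helper itself is not shown in the diff, only its doc comment. Its described behavior (abort if response headers don't arrive within `timeoutMs`, but clear the timer once they do so a long streaming body is never cut off) can be sketched like this; a reconstruction from the comment, not the package's actual implementation:

```javascript
// Sketch of the behavior described by the bundle's fetchWithTimeout comment.
async function fetchWithTimeout(url, init, timeoutMs) {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    // fetch() resolves as soon as response headers arrive; the body may
    // still be streaming after this returns.
    return await fetch(url, { ...init, signal: controller.signal });
  } finally {
    // Headers arrived (or the request failed), so stop the abort timer:
    // the timeout must never interrupt an in-flight response body.
    clearTimeout(timer);
  }
}
```

Because the timer is cleared the moment `fetch()` settles, a slow model that takes minutes to stream its answer is unaffected; only a server that never starts responding trips the abort.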
|