copilot-api-plus 1.2.10 → 1.2.11

package/README.en.md CHANGED
@@ -46,7 +46,7 @@ English | [简体中文](README.md)
  | 👥 **Multi-Account** | Multiple GitHub accounts with automatic failover on quota exhaustion/rate limiting/bans |
  | 🔀 **Model Routing** | Flexible model name mapping and per-model concurrency control |
  | 📱 **Visual Management** | Web dashboard for account management, model config, and runtime stats |
- | 🛡️ **Network Resilience** | 120s connection timeout + smart retry (pool reset + HTTP/2 keepalive + proxy optimization) |
+ | 🛡️ **Network Resilience** | 120s timeout + smart retry + proxy tunnel keepalive (45s heartbeat) |
  | ✂️ **Context Passthrough** | Full context passthrough to upstream API; clients (e.g. Claude Code) manage compression |
  | 🔍 **Smart Model Matching** | Handles model name format differences (date suffixes, dash/dot versions, etc.) |
  | 🧠 **Thinking Chain** | Automatically enables deep thinking (thinking/reasoning) for supported models, improving code quality |
@@ -582,11 +582,11 @@ Each API request outputs a log line with model name, status code, and duration:
 
  Built-in connection timeout and smart retry for upstream API requests, minimizing Copilot request credit consumption:
 
- - **Connection timeout**: 120 seconds for the first attempt, 90 seconds for retries (enough time for model thinking)
+ - **Connection timeout**: 120 seconds for the first attempt, 30 seconds for retries (headers typically arrive in 3–5s)
  - **Retry strategy**: Up to 2 retries (3 total attempts), 2-3 second delays
  - **Connection pool reset**: Automatically destroys all pooled connections on the first network error and creates fresh instances, preventing retries from hitting stale sockets
- - **HTTP/2 keepalive**: Enables HTTP/2 protocol; PING frames traverse proxy tunnels to prevent idle disconnections
- - **TCP keepalive**: Sends TCP probes every 15s to prevent proxies/firewalls from dropping idle connections
+ - **Proxy tunnel keepalive**: Sends lightweight heartbeat requests every 45s while SSE streams are active, preventing proxy nodes from killing CONNECT tunnels due to inactivity
+ - **HTTP/2 support**: Enables HTTP/2 protocol for better multiplexing performance
  - Only retries network-layer errors (timeout, TLS disconnect, connection reset, etc.); HTTP error codes (e.g. 400/500) are not retried
  - SSE stream interruptions gracefully send error events to the client
 
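The timeout/retry behavior this hunk documents can be sketched as follows. This is an illustrative reconstruction, not the package's actual code (the real implementation is in `package/dist/main.js`); all function names here are hypothetical:

```javascript
// Sketch of the documented retry behavior (illustrative only): the
// timeout covers only the wait for response headers; once headers
// arrive, the SSE body may stream for as long as it needs.
const FIRST_TIMEOUT_MS = 120000; // first attempt: 120 s
const RETRY_TIMEOUT_MS = 30000;  // retries: headers usually arrive in 3-5 s
const RETRY_DELAYS_MS = [2000, 3000]; // up to 2 retries (3 attempts total)

async function fetchWithTimeout(url, init, timeoutMs) {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    // fetch() resolves as soon as headers arrive; the timer is then
    // cleared and no longer constrains the streaming body.
    return await fetch(url, { ...init, signal: controller.signal });
  } finally {
    clearTimeout(timer);
  }
}

async function fetchWithRetry(url, init) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fetchWithTimeout(
        url, init, attempt === 0 ? FIRST_TIMEOUT_MS : RETRY_TIMEOUT_MS);
    } catch (err) {
      // Only network-layer failures throw; HTTP 4xx/5xx responses
      // resolve normally and are therefore never retried.
      if (attempt >= RETRY_DELAYS_MS.length) throw err;
      await new Promise((r) => setTimeout(r, RETRY_DELAYS_MS[attempt]));
    }
  }
}
```

Note the asymmetry: network errors reject the `fetch()` promise and are retried, while a 500 response resolves successfully and flows straight through to the caller.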
package/README.md CHANGED
@@ -47,7 +47,7 @@
  | 👥 **Multi-Account Management** | Add multiple GitHub accounts; automatically switches to the next on quota exhaustion/rate limiting/bans |
  | 🔀 **Model Routing** | Flexible model name mapping and per-model concurrency control |
  | 📱 **Visual Management** | Web dashboard for account management, model management, and runtime stats |
- | 🛡️ **Network Resilience** | 120s connection timeout + smart retry (pool reset + HTTP/2 keepalive + proxy traversal optimization) |
+ | 🛡️ **Network Resilience** | 120s connection timeout + smart retry + proxy tunnel keepalive (45s heartbeat against disconnects) |
  | ✂️ **Context Passthrough** | Full context passthrough to upstream API; clients (e.g. Claude Code) manage compression themselves |
  | 🔍 **Smart Model Matching** | Automatically handles model name format differences (date suffixes, dash/dot versions, etc.) |
  | 🧠 **Thinking Chain** | Automatically enables deep thinking (thinking/reasoning) for supported models, improving code quality |
@@ -745,11 +745,11 @@ Anthropic-format model names (e.g. `claude-opus-4-6`) and Copilot's model list
 
  Requests to the upstream API have built-in connection timeout and smart retry, minimizing Copilot request credit consumption:
 
- - **Connection timeout**: 120 seconds for the first request, 90 seconds for retries (giving the model enough thinking time)
+ - **Connection timeout**: 120 seconds for the first request, 30 seconds for retries (response headers typically arrive in 3–5 seconds)
  - **Retry strategy**: Up to 2 retries (3 attempts total), 2-3 second delays
  - **Connection pool reset**: Automatically destroys all connections after the first network error and creates fresh instances, preventing subsequent requests from reusing bad connections
- - **HTTP/2 keepalive**: Enables HTTP/2 protocol; PING frames traverse proxy tunnels to prevent idle disconnections
- - **TCP keepalive**: Sends TCP probes every 15 seconds to prevent proxies/firewalls from dropping idle connections
+ - **Proxy tunnel keepalive**: Sends a lightweight heartbeat request every 45 seconds while SSE streams are active, preventing proxy nodes from killing the CONNECT tunnel due to inactivity
+ - **HTTP/2 support**: Enables HTTP/2 protocol for better multiplexing performance
  - Only retries network-layer errors (timeout, TLS disconnect, connection reset, etc.); HTTP error codes (e.g. 400/500) are not retried
  - When an SSE stream is interrupted, gracefully sends an error event to the client
 
package/dist/main.js CHANGED
@@ -132,6 +132,59 @@ const agentOptions = {
  };
  let direct;
  let proxies = /* @__PURE__ */ new Map();
+ /** Whether a proxy is actually configured and in use. */
+ let proxyActive = false;
+ /**
+  * Many proxy nodes (especially third-party VPN/airport services) kill
+  * CONNECT tunnels that are idle for ~60 s. During long model thinking
+  * phases the SSE stream carries no data, which looks "idle" to the proxy.
+  *
+  * This keepalive sends a tiny HEAD request to the Copilot API every 45 s
+  * through the same proxy. The encrypted packets flowing through the
+  * CONNECT tunnel reset the proxy's idle timer, keeping the tunnel alive.
+  *
+  * The keepalive is active ONLY while there are SSE streams in flight
+  * (tracked via `streamCount`). When no streams are active it stops to
+  * avoid unnecessary traffic.
+  */
+ let keepaliveTimer;
+ let streamCount = 0;
+ const KEEPALIVE_INTERVAL_MS = 45e3;
+ const KEEPALIVE_URL = "https://api.individual.githubcopilot.com/";
+ function startKeepalive() {
+   if (keepaliveTimer) return;
+   keepaliveTimer = setInterval(() => {
+     fetch(KEEPALIVE_URL, { method: "HEAD" }).catch(() => {});
+     consola.debug("Proxy keepalive ping sent");
+   }, KEEPALIVE_INTERVAL_MS);
+   keepaliveTimer.unref();
+   consola.debug("Proxy keepalive started (45 s interval)");
+ }
+ function stopKeepalive() {
+   if (keepaliveTimer) {
+     clearInterval(keepaliveTimer);
+     keepaliveTimer = void 0;
+     consola.debug("Proxy keepalive stopped (no active streams)");
+   }
+ }
+ /**
+  * Call when an SSE stream starts. Activates the proxy-tunnel keepalive
+  * if this is the first active stream and a proxy is configured.
+  */
+ function notifyStreamStart() {
+   if (!proxyActive) return;
+   streamCount++;
+   if (streamCount === 1) startKeepalive();
+ }
+ /**
+  * Call when an SSE stream ends (success or error). Stops the keepalive
+  * once no streams are active.
+  */
+ function notifyStreamEnd() {
+   if (!proxyActive) return;
+   streamCount = Math.max(0, streamCount - 1);
+   if (streamCount === 0) stopKeepalive();
+ }
  function initProxyFromEnv() {
    if (typeof Bun !== "undefined") return;
    try {
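The keepalive added in this hunk is reference-counted per stream: the first stream in starts the timer, the last stream out stops it. A minimal standalone sketch of that lifecycle (simplified for illustration: no `proxyActive` gate, heartbeat body stubbed out):

```javascript
// Refcounted keepalive lifecycle, mirroring the hunk above in
// simplified form. Starting is idempotent; stopping happens only
// when the active-stream count returns to zero.
let keepaliveTimer;
let streamCount = 0;

function startKeepalive() {
  if (keepaliveTimer) return; // already running
  keepaliveTimer = setInterval(() => {
    /* heartbeat request would go here */
  }, 45000);
  // Don't let the heartbeat alone keep the process alive.
  if (keepaliveTimer.unref) keepaliveTimer.unref();
}

function stopKeepalive() {
  if (!keepaliveTimer) return;
  clearInterval(keepaliveTimer);
  keepaliveTimer = undefined;
}

// First stream in starts the timer; last stream out stops it.
function notifyStreamStart() {
  if (++streamCount === 1) startKeepalive();
}

function notifyStreamEnd() {
  streamCount = Math.max(0, streamCount - 1);
  if (streamCount === 0) stopKeepalive();
}
```

The `Math.max(0, …)` guard makes an unbalanced `notifyStreamEnd()` harmless rather than letting the count go negative and desynchronize the timer.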
@@ -175,6 +228,7 @@ function initProxyFromEnv() {
        return direct.destroy();
      }
    });
+   proxyActive = true;
    consola.debug("HTTP proxy configured from environment (per-URL)");
  } catch (err) {
    consola.debug("Proxy setup skipped:", err);
@@ -1717,19 +1771,20 @@ const FETCH_TIMEOUT_MS = 12e4;
   * (see `resetConnections`), so retries use fresh sockets. We allow up to
   * 2 retries because SSE streams through HTTP proxies are frequently
   * interrupted during long model thinking phases (~60 s idle timeout on
- * many proxy nodes), and each retry may also be cut short by the same
- * timeout. Keeping the delay short avoids wasting wall-clock time.
+ * many proxy nodes). Keeping the delay short avoids wasting wall-clock time.
   */
  const RETRY_DELAYS = [2e3, 3e3];
  /**
- * Timeout for retry attempts. The first request uses the full
- * FETCH_TIMEOUT_MS (120 s) to accommodate slow models. Retries also
- * need a generous timeout because the model restarts its thinking from
- * scratch; 20 s was too short and caused immediate failures. 90 s
- * gives the model enough time to produce a response while still failing
- * faster than the initial attempt if the network is truly down.
+ * Timeout for retry attempts (waiting for response headers only).
+ * Response headers typically arrive within 3–5 s, even on slow models.
+ * 30 s is generous enough for a fresh socket to connect and receive
+ * headers, while still failing fast when the upstream is truly down.
+ *
+ * NOTE: This does NOT affect the SSE streaming phase — once headers
+ * arrive, the timeout is cleared and the stream runs until completion
+ * or interruption.
   */
- const RETRY_TIMEOUT_MS = 9e4;
+ const RETRY_TIMEOUT_MS = 3e4;
  /**
   * Wrapper around `fetch()` that aborts if the server doesn't respond within
   * `timeoutMs`. The timeout only covers the period until the response headers
@@ -1777,11 +1832,14 @@ async function fetchWithRetry(url, buildInit) {
  /**
   * Wraps an AsyncGenerator so that `releaseSlot` is called when the generator
   * finishes (return or throw), not when the outer function returns.
+  * Also tracks active streams for the proxy-tunnel keepalive mechanism.
   */
  async function* wrapGeneratorWithRelease(gen, releaseSlot) {
+   notifyStreamStart();
    try {
      yield* gen;
    } finally {
+     notifyStreamEnd();
      releaseSlot();
    }
  }
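The `finally` in `wrapGeneratorWithRelease` fires even when the consumer abandons the stream early: breaking out of a `for await` loop calls the generator's `return()`, which runs its `finally` clause. A small illustration of that guarantee (hypothetical names, not the package's code):

```javascript
// Illustrates why a finally block inside an async generator is a
// reliable cleanup point: early consumer exit still triggers it.
async function* withCleanup(gen, onDone) {
  try {
    yield* gen;
  } finally {
    onDone(); // runs on normal completion, throw, or early break
  }
}

async function demo() {
  let released = false;
  async function* source() {
    yield 1;
    yield 2;
    yield 3;
  }
  for await (const v of withCleanup(source(), () => { released = true; })) {
    if (v === 2) break; // abandon the stream early
  }
  return released; // true: cleanup ran despite the break
}
```

This is why the release of the concurrency slot and the keepalive stream count stay balanced even when a client disconnects mid-stream.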