@jeffreycao/copilot-api 1.0.0 → 1.0.1

Files changed (2)
  1. package/README.md +32 -25
  2. package/package.json +1 -1
package/README.md CHANGED
@@ -121,19 +121,19 @@ The Docker image includes:
  You can run the project directly using npx:

  ```sh
- npx copilot-api@latest start
+ npx @jeffreycao/copilot-api@latest start
  ```

  With options:

  ```sh
- npx copilot-api@latest start --port 8080
+ npx @jeffreycao/copilot-api@latest start --port 8080
  ```

  For authentication only:

  ```sh
- npx copilot-api@latest auth
+ npx @jeffreycao/copilot-api@latest auth
  ```

  ## Command Structure
@@ -190,12 +190,16 @@ The following command line options are available for the `start` command:
    "smallModel": "gpt-5-mini",
    "modelReasoningEfforts": {
      "gpt-5-mini": "low"
-   }
+   },
+   "useFunctionApplyPatch": true,
+   "compactUseSmallModel": true
  }
  ```
  - **extraPrompts:** Map of `model -> prompt` appended to the first system prompt when translating Anthropic-style requests to Copilot. Use this to inject guardrails or guidance per model. Missing default entries are auto-added without overwriting your custom prompts.
  - **smallModel:** Fallback model used for tool-less warmup messages (e.g., Claude Code probe requests) to avoid spending premium requests; defaults to `gpt-5-mini`.
  - **modelReasoningEfforts:** Per-model `reasoning.effort` sent to the Copilot Responses API. Allowed values are `none`, `minimal`, `low`, `medium`, `high`, and `xhigh`. If a model isn’t listed, `high` is used by default.
+ - **useFunctionApplyPatch:** When `true`, the server will convert any custom tool named `apply_patch` in Responses payloads into an OpenAI-style function tool (`type: "function"`) with a parameter schema so assistants can call it using function-calling semantics to edit files. Set to `false` to leave tools unchanged. Defaults to `true`.
+ - **compactUseSmallModel:** When `true`, detected "compact" requests (e.g., from Claude Code or Opencode compact mode) will automatically use the configured `smallModel` to avoid consuming premium model usage for short/background tasks. Defaults to `true`.

  Edit this file to customize prompts or swap in your own fast model. Restart the server (or rerun the command) after changes so the cached config is refreshed.
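The behavior of the two new options can be sketched roughly as follows. This is an illustrative Python sketch based only on the option descriptions, not the package's actual implementation; the function names and the parameter schema are hypothetical.

```python
# Hypothetical sketch of the two new config options; names and the
# apply_patch parameter schema are illustrative, not the real code.

def convert_apply_patch_tools(tools):
    """useFunctionApplyPatch: rewrite a custom `apply_patch` tool in a
    Responses payload as an OpenAI-style function tool."""
    out = []
    for tool in tools:
        if tool.get("name") == "apply_patch" and tool.get("type") != "function":
            out.append({
                "type": "function",
                "name": "apply_patch",
                "description": "Edit files by applying a patch.",
                "parameters": {  # hypothetical schema
                    "type": "object",
                    "properties": {"input": {"type": "string"}},
                    "required": ["input"],
                },
            })
        else:
            out.append(tool)
    return out


def pick_model(requested, config, is_compact):
    """compactUseSmallModel: route detected compact requests to the
    configured small model instead of the premium model."""
    if is_compact and config.get("compactUseSmallModel", True):
        return config.get("smallModel", "gpt-5-mini")
    return requested
```

Both helpers default to the documented behavior (`true`) when the key is absent, mirroring the stated defaults.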
 
@@ -238,46 +242,46 @@ Using with npx:

  ```sh
  # Basic usage with start command
- npx copilot-api@latest start
+ npx @jeffreycao/copilot-api@latest start

  # Run on custom port with verbose logging
- npx copilot-api@latest start --port 8080 --verbose
+ npx @jeffreycao/copilot-api@latest start --port 8080 --verbose

  # Use with a business plan GitHub account
- npx copilot-api@latest start --account-type business
+ npx @jeffreycao/copilot-api@latest start --account-type business

  # Use with an enterprise plan GitHub account
- npx copilot-api@latest start --account-type enterprise
+ npx @jeffreycao/copilot-api@latest start --account-type enterprise

  # Enable manual approval for each request
- npx copilot-api@latest start --manual
+ npx @jeffreycao/copilot-api@latest start --manual

  # Set rate limit to 30 seconds between requests
- npx copilot-api@latest start --rate-limit 30
+ npx @jeffreycao/copilot-api@latest start --rate-limit 30

  # Wait instead of error when rate limit is hit
- npx copilot-api@latest start --rate-limit 30 --wait
+ npx @jeffreycao/copilot-api@latest start --rate-limit 30 --wait

  # Provide GitHub token directly
- npx copilot-api@latest start --github-token ghp_YOUR_TOKEN_HERE
+ npx @jeffreycao/copilot-api@latest start --github-token ghp_YOUR_TOKEN_HERE

  # Run only the auth flow
- npx copilot-api@latest auth
+ npx @jeffreycao/copilot-api@latest auth

  # Run auth flow with verbose logging
- npx copilot-api@latest auth --verbose
+ npx @jeffreycao/copilot-api@latest auth --verbose

  # Show your Copilot usage/quota in the terminal (no server needed)
- npx copilot-api@latest check-usage
+ npx @jeffreycao/copilot-api@latest check-usage

  # Display debug information for troubleshooting
- npx copilot-api@latest debug
+ npx @jeffreycao/copilot-api@latest debug

  # Display debug information in JSON format
- npx copilot-api@latest debug --json
+ npx @jeffreycao/copilot-api@latest debug --json

  # Initialize proxy from environment variables (HTTP_PROXY, HTTPS_PROXY, etc.)
- npx copilot-api@latest start --proxy-env
+ npx @jeffreycao/copilot-api@latest start --proxy-env
  ```

  ## Using the Usage Viewer
@@ -286,7 +290,7 @@ After starting the server, a URL to the Copilot Usage Dashboard will be displayed

  1. Start the server. For example, using npx:
  ```sh
- npx copilot-api@latest start
+ npx @jeffreycao/copilot-api@latest start
  ```
  2. The server will output a URL to the usage viewer. Copy and paste this URL into your browser. It will look something like this:
  `https://ericc-ch.github.io/copilot-api?endpoint=http://localhost:4141/usage`
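As the example URL shows, the viewer link is simply the hosted dashboard with the local server's `/usage` endpoint passed as a query parameter. A trivial sketch (the `usage_viewer_url` helper is hypothetical, not part of the package):

```python
# Hypothetical helper: build the usage-viewer link from a local port.
# The dashboard URL and /usage endpoint are taken from the README example.
DASHBOARD_URL = "https://ericc-ch.github.io/copilot-api"


def usage_viewer_url(port: int = 4141) -> str:
    """Return the dashboard URL pointing at a local server's /usage endpoint."""
    return f"{DASHBOARD_URL}?endpoint=http://localhost:{port}/usage"
```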
@@ -312,7 +316,7 @@ There are two ways to configure Claude Code to use this proxy:
  To get started, run the `start` command with the `--claude-code` flag:

  ```sh
- npx copilot-api@latest start --claude-code
+ npx @jeffreycao/copilot-api@latest start --claude-code
  ```

  You will be prompted to select a primary model and a "small, fast" model for background tasks. After selecting the models, a command will be copied to your clipboard. This command sets the necessary environment variables for Claude Code to use the proxy.
@@ -330,12 +334,15 @@ Here is an example `.claude/settings.json` file:
    "env": {
      "ANTHROPIC_BASE_URL": "http://localhost:4141",
      "ANTHROPIC_AUTH_TOKEN": "dummy",
-     "ANTHROPIC_MODEL": "gpt-4.1",
-     "ANTHROPIC_DEFAULT_SONNET_MODEL": "gpt-4.1",
-     "ANTHROPIC_SMALL_FAST_MODEL": "gpt-4.1",
-     "ANTHROPIC_DEFAULT_HAIKU_MODEL": "gpt-4.1",
+     "ANTHROPIC_MODEL": "gpt-5.2",
+     "ANTHROPIC_DEFAULT_SONNET_MODEL": "gpt-5.2",
+     "ANTHROPIC_DEFAULT_HAIKU_MODEL": "gpt-5-mini",
+     "CLAUDE_CODE_SUBAGENT_MODEL": "gpt-5-mini",
      "DISABLE_NON_ESSENTIAL_MODEL_CALLS": "1",
-     "CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC": "1"
+     "CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC": "1",
+     "BASH_MAX_TIMEOUT_MS": "600000",
+     "CLAUDE_CODE_ATTRIBUTION_HEADER": "0",
+     "CLAUDE_CODE_ENABLE_PROMPT_SUGGESTION": "false"
    },
    "permissions": {
      "deny": [
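The diff shows only the changed fragments of the `env` block. Assembled from those fragments, a complete `.claude/settings.json` after this release might look like the following sketch (the `permissions` section is truncated in the diff, so it is omitted here):

```json
{
  "env": {
    "ANTHROPIC_BASE_URL": "http://localhost:4141",
    "ANTHROPIC_AUTH_TOKEN": "dummy",
    "ANTHROPIC_MODEL": "gpt-5.2",
    "ANTHROPIC_DEFAULT_SONNET_MODEL": "gpt-5.2",
    "ANTHROPIC_DEFAULT_HAIKU_MODEL": "gpt-5-mini",
    "CLAUDE_CODE_SUBAGENT_MODEL": "gpt-5-mini",
    "DISABLE_NON_ESSENTIAL_MODEL_CALLS": "1",
    "CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC": "1",
    "BASH_MAX_TIMEOUT_MS": "600000",
    "CLAUDE_CODE_ATTRIBUTION_HEADER": "0",
    "CLAUDE_CODE_ENABLE_PROMPT_SUGGESTION": "false"
  }
}
```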
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "@jeffreycao/copilot-api",
-   "version": "1.0.0",
+   "version": "1.0.1",
    "description": "Turn GitHub Copilot into OpenAI/Anthropic API compatible server. Usable with Claude Code Or Codex Or Opencode!",
    "keywords": [
      "proxy",