ghc-proxy 0.1.3 → 0.3.0
- package/README.md +281 -78
- package/dist/main.mjs +5077 -0
- package/dist/main.mjs.map +1 -0
- package/package.json +21 -21
- package/dist/main.js +0 -1879
- package/dist/main.js.map +0 -1
package/README.md
CHANGED
|
@@ -4,70 +4,64 @@
|
|
|
4
4
|
[](https://github.com/wxxb789/ghc-proxy/actions/workflows/ci.yml)
|
|
5
5
|
[](https://github.com/wxxb789/ghc-proxy/blob/master/LICENSE)
|
|
6
6
|
|
|
7
|
-
A proxy that turns your GitHub Copilot subscription into an OpenAI and Anthropic compatible API. Use it to power [Claude Code](https://docs.anthropic.com/en/docs/claude-code/overview), [Cursor](https://www.cursor.com/), or any tool that speaks the OpenAI Chat Completions or Anthropic Messages protocol.
|
|
7
|
+
A proxy that turns your GitHub Copilot subscription into an OpenAI and Anthropic compatible API. Use it to power [Claude Code](https://docs.anthropic.com/en/docs/claude-code/overview), [Cursor](https://www.cursor.com/), or any tool that speaks the OpenAI Chat Completions, OpenAI Responses, or Anthropic Messages protocol.
|
|
8
8
|
|
|
9
9
|
> [!WARNING]
|
|
10
|
-
> Reverse-engineered, unofficial, may break. Excessive use can trigger GitHub abuse detection. Use at your own risk
|
|
10
|
+
> Reverse-engineered, unofficial, may break at any time. Excessive use can trigger GitHub abuse detection. **Use at your own risk.**
|
|
11
11
|
|
|
12
|
-
**
|
|
12
|
+
**TL;DR** — Install [Bun](https://bun.com/docs/installation), then run:
|
|
13
13
|
|
|
14
|
-
|
|
15
|
-
|
|
16
|
-
|
|
17
|
-
|
|
18
|
-
npx ghc-proxy@latest start
|
|
19
|
-
|
|
20
|
-
This starts the proxy on `http://localhost:4141`. It will walk you through GitHub authentication on first run.
|
|
14
|
+
```bash
|
|
15
|
+
bunx ghc-proxy@latest start --wait
|
|
16
|
+
```
|
|
21
17
|
|
|
22
|
-
|
|
18
|
+
## Prerequisites
|
|
23
19
|
|
|
24
|
-
|
|
20
|
+
Before you start, make sure you have:
|
|
25
21
|
|
|
26
|
-
|
|
22
|
+
1. **Bun** (>= 1.2) -- a fast JavaScript runtime used to run the proxy
|
|
23
|
+
- **Windows:** `winget install --id Oven-sh.Bun`
|
|
24
|
+
- **Other platforms:** see the [official installation guide](https://bun.com/docs/installation)
|
|
25
|
+
2. **A GitHub Copilot subscription** -- individual, business, or enterprise
|
|
27
26
|
|
|
28
|
-
|
|
29
|
-
cd ghc-proxy
|
|
30
|
-
bun install
|
|
31
|
-
bun run dev
|
|
27
|
+
## Quick Start
|
|
32
28
|
|
|
33
|
-
|
|
29
|
+
1. Start the proxy:
|
|
34
30
|
|
|
35
|
-
ghc-proxy
|
|
31
|
+
bunx ghc-proxy@latest start --wait
|
|
36
32
|
|
|
37
|
-
**
|
|
33
|
+
> **Recommended:** The `--wait` flag queues requests instead of rejecting them with a 429 error when you hit Copilot rate limits. This is the simplest way to run the proxy for daily use.
|
|
38
34
|
|
|
39
|
-
-
|
|
40
|
-
- `GET /v1/models` -- list available models
|
|
41
|
-
- `POST /v1/embeddings` -- generate embeddings
|
|
35
|
+
2. On the first run, you will be guided through GitHub's device-code authentication flow. Follow the prompts to authorize the proxy.
|
|
42
36
|
|
|
43
|
-
|
|
37
|
+
3. Once authenticated, the proxy starts on **`http://localhost:4141`** and is ready to accept requests.
|
|
44
38
|
|
|
45
|
-
|
|
46
|
-
- `POST /v1/messages/count_tokens` -- token counting
|
|
39
|
+
That's it. Any tool that supports the OpenAI or Anthropic API can now point to `http://localhost:4141`.
|
|
47
40
|
|
|
48
|
-
|
|
41
|
+
## Using with Claude Code
|
|
49
42
|
|
|
50
|
-
|
|
43
|
+
This is the most common use case. There are two ways to set it up:
|
|
51
44
|
|
|
52
|
-
|
|
45
|
+
### Option A: One-command launch
|
|
53
46
|
|
|
54
|
-
|
|
47
|
+
```bash
|
|
48
|
+
bunx ghc-proxy@latest start --claude-code
|
|
49
|
+
```
|
|
55
50
|
|
|
56
|
-
|
|
51
|
+
This starts the proxy, opens an interactive model picker, and copies a ready-to-paste environment command to your clipboard. Run that command in another terminal to launch Claude Code with the correct configuration.
|
|
57
52
|
|
|
58
|
-
|
|
53
|
+
### Option B: Permanent config (Recommended)
|
|
59
54
|
|
|
60
|
-
|
|
55
|
+
Create or edit `~/.claude/settings.json` (this applies globally to all projects):
|
|
61
56
|
|
|
62
57
|
```json
|
|
63
58
|
{
|
|
64
59
|
"env": {
|
|
65
60
|
"ANTHROPIC_BASE_URL": "http://localhost:4141",
|
|
66
|
-
"ANTHROPIC_AUTH_TOKEN": "dummy",
|
|
67
|
-
"ANTHROPIC_MODEL": "
|
|
68
|
-
"ANTHROPIC_DEFAULT_SONNET_MODEL": "
|
|
69
|
-
"
|
|
70
|
-
"ANTHROPIC_DEFAULT_HAIKU_MODEL": "gpt-4.1",
|
|
61
|
+
"ANTHROPIC_AUTH_TOKEN": "dummy-token",
|
|
62
|
+
"ANTHROPIC_MODEL": "claude-opus-4.6",
|
|
63
|
+
"ANTHROPIC_DEFAULT_SONNET_MODEL": "claude-sonnet-4.6",
|
|
64
|
+
"ANTHROPIC_DEFAULT_HAIKU_MODEL": "claude-haiku-4.5",
|
|
71
65
|
"DISABLE_NON_ESSENTIAL_MODEL_CALLS": "1",
|
|
72
66
|
"CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC": "1"
|
|
73
67
|
},
|
|
@@ -77,18 +71,109 @@ If you prefer a permanent setup, create `.claude/settings.json` in your project:
|
|
|
77
71
|
}
|
|
78
72
|
```
|
|
79
73
|
|
|
74
|
+
Then simply start the proxy and use Claude Code as usual:
|
|
75
|
+
|
|
76
|
+
```bash
|
|
77
|
+
bunx ghc-proxy@latest start --wait
|
|
78
|
+
```
|
|
79
|
+
|
|
80
|
+
**What each environment variable does:**
|
|
81
|
+
|
|
82
|
+
| Variable | Purpose |
|
|
83
|
+
|----------|---------|
|
|
84
|
+
| `ANTHROPIC_BASE_URL` | Points Claude Code to the proxy instead of Anthropic's servers |
|
|
85
|
+
| `ANTHROPIC_AUTH_TOKEN` | Any non-empty string; the proxy handles real authentication |
|
|
86
|
+
| `ANTHROPIC_MODEL` | The model Claude Code uses for primary/Opus tasks |
|
|
87
|
+
| `ANTHROPIC_DEFAULT_SONNET_MODEL` | The model used for Sonnet-tier tasks |
|
|
88
|
+
| `ANTHROPIC_DEFAULT_HAIKU_MODEL` | The model used for Haiku-tier (fast/cheap) tasks |
|
|
89
|
+
| `DISABLE_NON_ESSENTIAL_MODEL_CALLS` | Prevents Claude Code from making extra API calls |
|
|
90
|
+
| `CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC` | Disables telemetry and non-essential network traffic |
|
|
91
|
+
|
|
92
|
+
> **Tip:** The model names above (e.g. `claude-opus-4.6`) are mapped to actual Copilot models by the proxy. See [Model Mapping](#model-mapping) below for details.
|
|
93
|
+
|
|
80
94
|
See the [Claude Code settings docs](https://docs.anthropic.com/en/docs/claude-code/settings#environment-variables) for more options.
|
|
81
95
|
|
|
82
|
-
##
|
|
96
|
+
## What It Does
|
|
97
|
+
|
|
98
|
+
ghc-proxy sits between your tools and the GitHub Copilot API:
|
|
99
|
+
|
|
100
|
+
```text
|
|
101
|
+
┌─────────────┐ ┌───────────┐ ┌──────────────────────┐
|
|
102
|
+
│ Claude Code │──────│ ghc-proxy │──────│ api.githubcopilot.com│
|
|
103
|
+
│ Cursor │ │ :4141 │ │ │
|
|
104
|
+
│ Any client │ │ │ │ │
|
|
105
|
+
└─────────────┘ └───────────┘ └──────────────────────┘
|
|
106
|
+
OpenAI or Translates GitHub Copilot
|
|
107
|
+
Anthropic between API
|
|
108
|
+
format formats
|
|
109
|
+
```
|
|
110
|
+
|
|
111
|
+
The proxy authenticates with GitHub using the [device code OAuth flow](https://docs.github.com/en/apps/oauth-apps/building-oauth-apps/authorizing-oauth-apps#device-flow) (the same flow VS Code uses), then exchanges the GitHub token for a short-lived Copilot token that auto-refreshes.
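The auto-refresh half of this can be sketched in a few lines. This is illustrative only: the `CopilotToken` shape and `refreshDelayMs` helper are hypothetical names, not ghc-proxy's actual internals.

```typescript
// Hypothetical token shape; real field names may differ upstream.
interface CopilotToken {
  token: string;
  expiresAt: number; // unix seconds, as returned with the short-lived token
}

// Schedule the next refresh a safety margin before the token expires.
function refreshDelayMs(tok: CopilotToken, nowSec: number, marginSec = 60): number {
  const remaining = tok.expiresAt - nowSec - marginSec;
  return Math.max(0, remaining) * 1000;
}
```

An expired or nearly-expired token yields a delay of `0`, i.e. refresh immediately.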
|
|
112
|
+
|
|
113
|
+
When the Copilot token response includes `endpoints.api`, `ghc-proxy` now prefers that runtime API base automatically instead of relying only on the configured account type. This keeps enterprise/business routing aligned with the endpoint GitHub actually returned for the current token.
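That preference order reduces to a one-line fallback. A minimal sketch, assuming a token payload with an optional `endpoints.api` field (the function name and URL below are illustrative):

```typescript
// Prefer the runtime API base from the token response; otherwise fall back
// to the base derived from the configured account type.
function resolveApiBase(
  tokenEndpoints: { api?: string } | undefined,
  accountTypeDefault: string,
): string {
  return tokenEndpoints?.api ?? accountTypeDefault;
}
```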
|
|
114
|
+
|
|
115
|
+
Incoming requests hit a [Hono](https://hono.dev/) server. `chat/completions` requests are validated, normalized into the shared planning pipeline, and then forwarded to Copilot. `responses` requests use a native Responses path with explicit compatibility policies. `messages` requests are routed per-model and can use native Anthropic passthrough, the Responses translation path, or the existing chat-completions fallback. The translator tracks exact vs lossy vs unsupported behavior explicitly; see the [Messages Routing and Translation Guide](./docs/messages-routing-and-translation.md) and the [Anthropic Translation Matrix](./docs/anthropic-translation-matrix.md) for the current support surface.
|
|
116
|
+
|
|
117
|
+
### Request Routing
|
|
118
|
+
|
|
119
|
+
`ghc-proxy` does not force every request through one protocol. The current routing rules are:
|
|
120
|
+
|
|
121
|
+
- `POST /v1/chat/completions`: OpenAI Chat Completions -> shared planning pipeline -> Copilot `/chat/completions`
|
|
122
|
+
- `POST /v1/responses`: OpenAI Responses create -> native Responses handler -> Copilot `/responses`
|
|
123
|
+
- `POST /v1/responses/input_tokens`: Responses input-token counting passthrough when the upstream supports it
|
|
124
|
+
- `GET /v1/responses/:responseId`: Responses retrieve passthrough when the upstream supports it
|
|
125
|
+
- `GET /v1/responses/:responseId/input_items`: Responses input-items passthrough when the upstream supports it
|
|
126
|
+
- `DELETE /v1/responses/:responseId`: Responses delete passthrough when the upstream supports it
|
|
127
|
+
- `POST /v1/messages`: Anthropic Messages -> choose the best available upstream path for the selected model:
|
|
128
|
+
- native Copilot `/v1/messages` when supported
|
|
129
|
+
- Anthropic -> Responses -> Anthropic translation when the model only supports `/responses`
|
|
130
|
+
- Anthropic -> Chat Completions -> Anthropic fallback otherwise
|
|
131
|
+
|
|
132
|
+
This keeps the existing chat pipeline stable while allowing newer Copilot models to use the endpoint they actually expose.
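The `/v1/messages` decision above can be sketched as a simple cascade. The `ModelMeta` shape is hypothetical, not ghc-proxy's real model schema:

```typescript
// Illustrative per-model routing decision for Anthropic Messages requests.
interface ModelMeta {
  supportsMessages: boolean;  // model exposes native Copilot /v1/messages
  supportsResponses: boolean; // model exposes Copilot /responses
}

type MessagesPath =
  | "native-messages"
  | "responses-translation"
  | "chat-completions-fallback";

function pickMessagesPath(meta: ModelMeta): MessagesPath {
  if (meta.supportsMessages) return "native-messages";
  if (meta.supportsResponses) return "responses-translation";
  return "chat-completions-fallback";
}
```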
|
|
133
|
+
|
|
134
|
+
### Endpoints
|
|
135
|
+
|
|
136
|
+
**OpenAI compatible:**
|
|
137
|
+
|
|
138
|
+
| Method | Path | Description |
|
|
139
|
+
|--------|------|-------------|
|
|
140
|
+
| `POST` | `/v1/chat/completions` | Chat completions (streaming and non-streaming) |
|
|
141
|
+
| `POST` | `/v1/responses` | Create a Responses API response |
|
|
142
|
+
| `POST` | `/v1/responses/input_tokens` | Count Responses input tokens when supported by Copilot upstream |
|
|
143
|
+
| `GET` | `/v1/responses/:responseId` | Retrieve one response when supported by Copilot upstream |
|
|
144
|
+
| `GET` | `/v1/responses/:responseId/input_items` | Retrieve response input items when supported by Copilot upstream |
|
|
145
|
+
| `DELETE` | `/v1/responses/:responseId` | Delete one response when supported by Copilot upstream |
|
|
146
|
+
| `GET` | `/v1/models` | List available models |
|
|
147
|
+
| `POST` | `/v1/embeddings` | Generate embeddings |
|
|
148
|
+
|
|
149
|
+
**Anthropic compatible:**
|
|
150
|
+
|
|
151
|
+
| Method | Path | Description |
|
|
152
|
+
|--------|------|-------------|
|
|
153
|
+
| `POST` | `/v1/messages` | Messages API with per-model routing across native Messages, Responses translation, or chat-completions fallback |
|
|
154
|
+
| `POST` | `/v1/messages/count_tokens` | Token counting |
|
|
155
|
+
|
|
156
|
+
**Utility:**
|
|
157
|
+
|
|
158
|
+
| Method | Path | Description |
|
|
159
|
+
|--------|------|-------------|
|
|
160
|
+
| `GET` | `/usage` | Copilot quota / usage monitoring |
|
|
161
|
+
| `GET` | `/token` | Inspect the current Copilot token |
|
|
162
|
+
|
|
163
|
+
> **Note:** The `/v1/` prefix is optional. `/chat/completions`, `/responses`, `/models`, and `/embeddings` also work.
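The optional-prefix behavior amounts to normalizing known paths before routing. A sketch, assuming the four prefix-optional paths listed in the note (the function is illustrative, not the proxy's actual router):

```typescript
// Paths that accept requests with or without the /v1/ prefix.
const PREFIX_OPTIONAL = ["/chat/completions", "/responses", "/models", "/embeddings"];

function normalizePath(path: string): string {
  if (path.startsWith("/v1/")) return path;
  return PREFIX_OPTIONAL.some((p) => path === p || path.startsWith(p + "/"))
    ? "/v1" + path
    : path;
}
```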
|
|
164
|
+
|
|
165
|
+
## CLI Reference
|
|
83
166
|
|
|
84
167
|
ghc-proxy uses a subcommand structure:
|
|
85
168
|
|
|
86
|
-
|
|
87
|
-
|
|
88
|
-
|
|
89
|
-
|
|
169
|
+
```bash
|
|
170
|
+
bunx ghc-proxy@latest start # Start the proxy server
|
|
171
|
+
bunx ghc-proxy@latest auth # Run GitHub auth flow without starting the server
|
|
172
|
+
bunx ghc-proxy@latest check-usage # Show your Copilot usage/quota in the terminal
|
|
173
|
+
bunx ghc-proxy@latest debug # Print diagnostic info (version, paths, token status)
|
|
174
|
+
```
|
|
90
175
|
|
|
91
|
-
###
|
|
176
|
+
### `start` Options
|
|
92
177
|
|
|
93
178
|
| Option | Alias | Default | Description |
|
|
94
179
|
|--------|-------|---------|-------------|
|
|
@@ -103,33 +188,145 @@ ghc-proxy uses a subcommand structure:
|
|
|
103
188
|
| `--show-token` | -- | `false` | Display tokens on auth and refresh |
|
|
104
189
|
| `--proxy-env` | -- | `false` | Use `HTTP_PROXY`/`HTTPS_PROXY` from env |
|
|
105
190
|
| `--idle-timeout` | -- | `120` | Bun server idle timeout in seconds |
|
|
191
|
+
| `--upstream-timeout` | -- | `300` | Upstream request timeout in seconds (0 to disable) |
|
|
192
|
+
|
|
193
|
+
## Rate Limiting
|
|
194
|
+
|
|
195
|
+
If you are worried about hitting Copilot rate limits:
|
|
196
|
+
|
|
197
|
+
```bash
|
|
198
|
+
# Enforce a 30-second cooldown between requests
|
|
199
|
+
bunx ghc-proxy@latest start --rate-limit 30
|
|
200
|
+
|
|
201
|
+
# Same, but queue requests instead of returning 429
|
|
202
|
+
bunx ghc-proxy@latest start --rate-limit 30 --wait
|
|
106
203
|
|
|
107
|
-
|
|
204
|
+
# Manually approve every request (useful for debugging)
|
|
205
|
+
bunx ghc-proxy@latest start --manual
|
|
206
|
+
```
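The interaction between `--rate-limit` and `--wait` reduces to a three-way decision per request. This is illustrative pseudologic under the documented behavior, not the proxy's actual limiter:

```typescript
type Decision = "forward" | "queue" | "reject-429";

// lastSec is the timestamp of the previous forwarded request, or null if none.
function decide(
  nowSec: number,
  lastSec: number | null,
  cooldownSec: number,
  wait: boolean,
): Decision {
  if (lastSec === null || nowSec - lastSec >= cooldownSec) return "forward";
  return wait ? "queue" : "reject-429";
}
```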
|
|
207
|
+
|
|
208
|
+
## Account Types
|
|
209
|
+
|
|
210
|
+
If you have a GitHub Business or Enterprise Copilot plan, pass `--account-type`:
|
|
211
|
+
|
|
212
|
+
```bash
|
|
213
|
+
bunx ghc-proxy@latest start --account-type business
|
|
214
|
+
bunx ghc-proxy@latest start --account-type enterprise
|
|
215
|
+
```
|
|
108
216
|
|
|
109
|
-
|
|
217
|
+
This routes requests to the correct Copilot API endpoint for your plan. See the [GitHub docs on network routing](https://docs.github.com/en/enterprise-cloud@latest/copilot/managing-copilot/managing-github-copilot-in-your-organization/managing-access-to-github-copilot-in-your-organization/managing-github-copilot-access-to-your-organizations-network#configuring-copilot-subscription-based-network-routing-for-your-enterprise-or-organization) for details.
|
|
110
218
|
|
|
111
|
-
|
|
112
|
-
npx ghc-proxy@latest start --rate-limit 30
|
|
219
|
+
## Model Mapping
|
|
113
220
|
|
|
114
|
-
|
|
115
|
-
npx ghc-proxy@latest start --rate-limit 30 --wait
|
|
221
|
+
When Claude Code sends a request for a model like `claude-sonnet-4.6`, the proxy maps it to an actual model available on Copilot. The mapping logic works as follows:
|
|
116
222
|
|
|
117
|
-
|
|
118
|
-
|
|
223
|
+
1. If the requested model ID is known to Copilot (e.g. `gpt-4.1`, `claude-sonnet-4.5`), it is used as-is.
|
|
224
|
+
2. If the model starts with `claude-opus-`, `claude-sonnet-`, or `claude-haiku-`, it falls back to a configured model.
|
|
225
|
+
|
|
226
|
+
### Default Fallbacks
|
|
227
|
+
|
|
228
|
+
| Prefix | Default Fallback |
|
|
229
|
+
|--------|-----------------|
|
|
230
|
+
| `claude-opus-*` | `claude-opus-4.6` |
|
|
231
|
+
| `claude-sonnet-*` | `claude-sonnet-4.5` |
|
|
232
|
+
| `claude-haiku-*` | `claude-haiku-4.5` |
|
|
233
|
+
|
|
234
|
+
### Customizing Fallbacks
|
|
235
|
+
|
|
236
|
+
You can override the defaults with **environment variables**:
|
|
237
|
+
|
|
238
|
+
```bash
|
|
239
|
+
MODEL_FALLBACK_CLAUDE_OPUS=claude-opus-4.6
|
|
240
|
+
MODEL_FALLBACK_CLAUDE_SONNET=claude-sonnet-4.5
|
|
241
|
+
MODEL_FALLBACK_CLAUDE_HAIKU=claude-haiku-4.5
|
|
242
|
+
```
|
|
243
|
+
|
|
244
|
+
Or in the proxy's **config file** (`~/.local/share/ghc-proxy/config.json`):
|
|
245
|
+
|
|
246
|
+
```json
|
|
247
|
+
{
|
|
248
|
+
"modelFallback": {
|
|
249
|
+
"claudeOpus": "claude-opus-4.6",
|
|
250
|
+
"claudeSonnet": "claude-sonnet-4.5",
|
|
251
|
+
"claudeHaiku": "claude-haiku-4.5"
|
|
252
|
+
}
|
|
253
|
+
}
|
|
254
|
+
```
|
|
255
|
+
|
|
256
|
+
**Priority order:** environment variable > config.json > built-in default.
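Putting the prefix rules and the priority order together, the fallback resolution can be sketched as below. The env var and config key names mirror the ones documented above; the function itself is illustrative and assumes the requested ID is not already a known Copilot model (step 1 of the mapping):

```typescript
const BUILTIN = {
  "claude-opus-": "claude-opus-4.6",
  "claude-sonnet-": "claude-sonnet-4.5",
  "claude-haiku-": "claude-haiku-4.5",
} as const;

function resolveFallback(
  model: string,
  env: Record<string, string | undefined>,
  config: Record<string, string | undefined>,
): string {
  for (const [prefix, builtin] of Object.entries(BUILTIN)) {
    if (!model.startsWith(prefix)) continue;
    const tier = prefix.slice("claude-".length, -1); // "opus" | "sonnet" | "haiku"
    const envKey = "MODEL_FALLBACK_CLAUDE_" + tier.toUpperCase();
    const cfgKey = "claude" + tier[0].toUpperCase() + tier.slice(1);
    // Priority: environment variable > config.json > built-in default.
    return env[envKey] ?? config[cfgKey] ?? builtin;
  }
  return model; // non-Claude-prefixed IDs pass through unchanged
}
```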
|
|
257
|
+
|
|
258
|
+
### Small-Model Routing
|
|
259
|
+
|
|
260
|
+
`/v1/messages` can optionally reroute specific low-value requests to a cheaper model:
|
|
261
|
+
|
|
262
|
+
- `smallModel`: the model to reroute to
|
|
263
|
+
- `compactUseSmallModel`: reroute recognized compact/summarization requests
|
|
264
|
+
- `warmupUseSmallModel`: reroute explicitly marked warmup/probe requests
|
|
265
|
+
|
|
266
|
+
Both switches default to `false`. Routing is conservative:
|
|
267
|
+
|
|
268
|
+
- the target `smallModel` must exist in Copilot's model list
|
|
269
|
+
- it must preserve the original model's declared endpoint support
|
|
270
|
+
- tool, thinking, and vision requests are not rerouted to a model that lacks the required capabilities
|
|
271
|
+
|
|
272
|
+
Warmup routing is intentionally narrow. Requests must look like explicit warmup/probe traffic; ordinary tool-free chat requests are not rerouted just because they include `anthropic-beta`.
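The conservative checks above can be sketched as a single guard. The `Caps` shape is hypothetical, not ghc-proxy's real model metadata:

```typescript
interface Caps {
  exists: boolean;     // present in Copilot's model list
  endpoints: string[]; // endpoints the model declares, e.g. "/responses"
  tools: boolean;
  thinking: boolean;
  vision: boolean;
}

// Only reroute when the small model exists, preserves the original model's
// declared endpoint support, and covers every capability the request needs.
function canReroute(
  original: Caps,
  small: Caps,
  needs: { tools: boolean; thinking: boolean; vision: boolean },
): boolean {
  if (!small.exists) return false;
  if (!original.endpoints.every((e) => small.endpoints.includes(e))) return false;
  if (needs.tools && !small.tools) return false;
  if (needs.thinking && !small.thinking) return false;
  if (needs.vision && !small.vision) return false;
  return true;
}
```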
|
|
273
|
+
|
|
274
|
+
### Responses Compatibility
|
|
275
|
+
|
|
276
|
+
`/v1/responses` is designed to stay close to the OpenAI wire format while making Copilot limitations explicit:
|
|
277
|
+
|
|
278
|
+
- requests are validated before any mutation
|
|
279
|
+
- common official request fields such as `conversation`, `previous_response_id`, `max_tool_calls`, `truncation`, `user`, `prompt`, and `text` are now modeled explicitly instead of relying on loose passthrough alone
|
|
280
|
+
- official `text.format` options are modeled explicitly, including `text`, `json_object`, and `json_schema`
|
|
281
|
+
- `custom` `apply_patch` can be rewritten as a function tool when `useFunctionApplyPatch` is enabled
|
|
282
|
+
- per-model Responses context compaction can be enabled with `responsesApiContextManagementModels`
|
|
283
|
+
- reasoning defaults for Anthropic -> Responses translation can be tuned with `modelReasoningEfforts`
|
|
284
|
+
- known unsupported builtin tools, such as `web_search`, fail explicitly with `400` instead of being silently removed
|
|
285
|
+
- external image URLs on the Responses path fail explicitly with `400`; use `file_id` or data URL image input instead
|
|
286
|
+
- official `input_file` and `item_reference` input items are modeled explicitly and validated before forwarding
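The "fail explicitly with `400`" policy for the last few bullets can be sketched as a pre-forward check. This is illustrative, not the proxy's actual validator, and the item shape is simplified:

```typescript
// Returns a rejection reason for request items the Copilot upstream is known
// not to support, or null if the item can be forwarded.
function rejectReason(item: { type: string; image_url?: string }): string | null {
  if (item.type === "web_search") {
    return "builtin tool web_search is not supported on the Copilot /responses path";
  }
  if (item.type === "input_image" && item.image_url?.startsWith("http")) {
    return "external image URLs are not supported; use file_id or a data: URL instead";
  }
  return null;
}
```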
|
|
287
|
+
|
|
288
|
+
Live upstream verification matters here. On March 11, 2026, a full local scan across every Copilot model that advertised `/responses` support still showed two stable vision gaps:
|
|
289
|
+
|
|
290
|
+
- external image URLs were rejected uniformly enough that the proxy now rejects them locally with a clearer capability error
|
|
291
|
+
- the current 1x1 PNG data URL probe was rejected upstream as invalid image data even though the fixture itself decodes as a valid PNG locally
|
|
292
|
+
|
|
293
|
+
The proxy does not currently disable Responses vision wholesale because the same models still advertise vision capability in Copilot model metadata. Treat Responses vision as upstream-contract-sensitive and verify it with `matrix:live` before relying on it.
|
|
294
|
+
|
|
295
|
+
Additional real-upstream note: on March 11, 2026, `POST /responses` succeeded against the current enterprise Copilot endpoint, but `POST /responses/input_tokens`, `GET /responses/{id}`, `GET /responses/{id}/input_items`, and `DELETE /responses/{id}` all returned upstream `404`. The proxy exposes those routes because they are part of the official Responses surface, but current Copilot upstream support is not there yet. The same live matrix also showed `previous_response_id` returning upstream `400 previous_response_id is not supported` on the tested model.
|
|
296
|
+
|
|
297
|
+
Example `config.json`:
|
|
298
|
+
|
|
299
|
+
```json
|
|
300
|
+
{
|
|
301
|
+
"smallModel": "gpt-4.1-mini",
|
|
302
|
+
"compactUseSmallModel": true,
|
|
303
|
+
"warmupUseSmallModel": false,
|
|
304
|
+
"useFunctionApplyPatch": true,
|
|
305
|
+
"responsesApiContextManagementModels": ["gpt-5", "gpt-5-mini"],
|
|
306
|
+
"modelReasoningEfforts": {
|
|
307
|
+
"gpt-5": "high",
|
|
308
|
+
"gpt-5-mini": "medium"
|
|
309
|
+
}
|
|
310
|
+
}
|
|
311
|
+
```
|
|
119
312
|
|
|
120
313
|
## Docker
|
|
121
314
|
|
|
122
315
|
Build and run:
|
|
123
316
|
|
|
124
|
-
|
|
125
|
-
|
|
126
|
-
|
|
317
|
+
```bash
|
|
318
|
+
docker build -t ghc-proxy .
|
|
319
|
+
mkdir -p ./copilot-data
|
|
320
|
+
docker run -p 4141:4141 -v $(pwd)/copilot-data:/root/.local/share/ghc-proxy ghc-proxy
|
|
321
|
+
```
|
|
127
322
|
|
|
128
|
-
|
|
323
|
+
Authentication and settings are persisted in `copilot-data/config.json` so they survive container restarts.
|
|
129
324
|
|
|
130
325
|
You can also pass a GitHub token via environment variable:
|
|
131
326
|
|
|
132
|
-
|
|
327
|
+
```bash
|
|
328
|
+
docker run -p 4141:4141 -e GH_TOKEN=your_token ghc-proxy
|
|
329
|
+
```
|
|
133
330
|
|
|
134
331
|
Docker Compose:
|
|
135
332
|
|
|
@@ -144,28 +341,34 @@ services:
|
|
|
144
341
|
restart: unless-stopped
|
|
145
342
|
```
|
|
146
343
|
|
|
147
|
-
##
|
|
148
|
-
|
|
149
|
-
If you have a GitHub business or enterprise Copilot plan, pass the `--account-type` flag:
|
|
150
|
-
|
|
151
|
-
npx ghc-proxy@latest start --account-type business
|
|
152
|
-
npx ghc-proxy@latest start --account-type enterprise
|
|
344
|
+
## Running from Source
|
|
153
345
|
|
|
154
|
-
|
|
155
|
-
|
|
156
|
-
|
|
157
|
-
|
|
158
|
-
|
|
159
|
-
|
|
160
|
-
Incoming requests hit a [Hono](https://hono.dev/) server. OpenAI-format requests are forwarded directly to `api.githubcopilot.com`. Anthropic-format requests pass through a translation layer (`src/translator/`) that converts the message format, tool schemas, and streaming events between the two protocols -- including full support for tool use, thinking blocks, and image content.
|
|
161
|
-
|
|
162
|
-
The server spoofs VS Code headers so the Copilot API treats it like a normal editor session.
|
|
346
|
+
```bash
|
|
347
|
+
git clone https://github.com/wxxb789/ghc-proxy.git
|
|
348
|
+
cd ghc-proxy
|
|
349
|
+
bun install
|
|
350
|
+
bun run dev
|
|
351
|
+
```
|
|
163
352
|
|
|
164
353
|
## Development
|
|
165
354
|
|
|
166
|
-
|
|
167
|
-
|
|
168
|
-
|
|
169
|
-
|
|
170
|
-
|
|
171
|
-
|
|
355
|
+
```bash
|
|
356
|
+
bun install # Install dependencies
|
|
357
|
+
bun run dev # Start with --watch
|
|
358
|
+
bun run build # Build with tsdown
|
|
359
|
+
bun run lint # ESLint
|
|
360
|
+
bun run typecheck # tsc --noEmit
|
|
361
|
+
bun test # Run tests
|
|
362
|
+
bun run matrix:live # Real Copilot upstream compatibility matrix
|
|
363
|
+
bun run matrix:live --vision-only --all-responses-models --json
|
|
364
|
+
bun run matrix:live --stateful-only --json --model=gpt-5.2-codex
|
|
365
|
+
```
|
|
366
|
+
|
|
367
|
+
> **Note:** `bun run matrix:live` uses your configured GitHub/Copilot credentials and spends real upstream requests. Use it when you want end-to-end verification against the current Copilot service, not for every local edit.
|
|
368
|
+
>
|
|
369
|
+
> Useful flags:
|
|
370
|
+
> - `--json`: emit machine-readable JSON only
|
|
371
|
+
> - `--vision-only`: run just the Responses image probes
|
|
372
|
+
> - `--stateful-only`: run follow-up/resource probes such as `previous_response_id`, `input_tokens`, and `input_items`
|
|
373
|
+
> - `--all-responses-models`: scan every model that advertises `/responses`
|
|
374
|
+
> - `--model=<id>`: pin the Responses scan to one specific model
|