ghc-proxy 0.1.1 → 0.2.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +189 -77
- package/dist/{main.js → main.mjs} +280 -198
- package/dist/main.mjs.map +1 -0
- package/package.json +22 -20
- package/dist/main.js.map +0 -1
package/README.md
CHANGED
@@ -7,67 +7,61 @@
 A proxy that turns your GitHub Copilot subscription into an OpenAI and Anthropic compatible API. Use it to power [Claude Code](https://docs.anthropic.com/en/docs/claude-code/overview), [Cursor](https://www.cursor.com/), or any tool that speaks the OpenAI Chat Completions or Anthropic Messages protocol.

 > [!WARNING]
-> Reverse-engineered, unofficial, may break. Excessive use can trigger GitHub abuse detection. Use at your own risk
+> Reverse-engineered, unofficial, may break at any time. Excessive use can trigger GitHub abuse detection. **Use at your own risk.**

-**
+**TL;DR** — Install [Bun](https://bun.com/docs/installation), then run:

+```bash
+bunx ghc-proxy@latest start --wait
+```

+## Prerequisites

+Before you start, make sure you have:

+1. **Bun** (>= 1.2) -- a fast JavaScript runtime used to run the proxy
+   - **Windows:** `winget install --id Oven-sh.Bun`
+   - **Other platforms:** see the [official installation guide](https://bun.com/docs/installation)
+2. **A GitHub Copilot subscription** -- individual, business, or enterprise

+## Quick Start

+1. Start the proxy:

-cd ghc-proxy
-bun install
-bun run dev
+   bunx ghc-proxy@latest start --wait

+> **Recommended:** The `--wait` flag queues requests instead of rejecting them with a 429 error when you hit Copilot rate limits. This is the simplest way to run the proxy for daily use.

+2. On the first run, you will be guided through GitHub's device-code authentication flow. Follow the prompts to authorize the proxy.

+3. Once authenticated, the proxy starts on **`http://localhost:4141`** and is ready to accept requests.

-- `GET /v1/models` -- list available models
-- `POST /v1/embeddings` -- generate embeddings
-
-**Anthropic compatible:**
-
-- `POST /v1/messages/count_tokens` -- token counting
-
-Anthropic requests are translated to OpenAI format on the fly, sent to Copilot, and the responses are translated back. This means Claude Code thinks it's talking to Anthropic, but it's actually talking to Copilot.
+That's it. Any tool that supports the OpenAI or Anthropic API can now point to `http://localhost:4141`.

+## Using with Claude Code

+This is the most common use case. There are two ways to set it up:

+### Option A: One-command launch

+```bash
+bunx ghc-proxy@latest start --claude-code
+```

+This starts the proxy, opens an interactive model picker, and copies a ready-to-paste environment command to your clipboard. Run that command in another terminal to launch Claude Code with the correct configuration.

+### Option B: Permanent config (Recommended)

+Create or edit `~/.claude/settings.json` (this applies globally to all projects):

 ```json
 {
   "env": {
     "ANTHROPIC_BASE_URL": "http://localhost:4141",
-    "ANTHROPIC_AUTH_TOKEN": "dummy",
-    "ANTHROPIC_MODEL": "
-    "ANTHROPIC_DEFAULT_SONNET_MODEL": "
-    "
-    "ANTHROPIC_DEFAULT_HAIKU_MODEL": "gpt-4.1",
+    "ANTHROPIC_AUTH_TOKEN": "dummy-token",
+    "ANTHROPIC_MODEL": "claude-opus-4.6",
+    "ANTHROPIC_DEFAULT_SONNET_MODEL": "claude-sonnet-4.6",
+    "ANTHROPIC_DEFAULT_HAIKU_MODEL": "claude-haiku-4.5",
     "DISABLE_NON_ESSENTIAL_MODEL_CALLS": "1",
     "CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC": "1"
   },
@@ -77,18 +71,85 @@ If you prefer a permanent setup, create `.claude/settings.json` in your project:
   }
 }
 ```

+Then simply start the proxy and use Claude Code as usual:

+```bash
+bunx ghc-proxy@latest start --wait
+```

+**What each environment variable does:**

+| Variable | Purpose |
+|----------|---------|
+| `ANTHROPIC_BASE_URL` | Points Claude Code to the proxy instead of Anthropic's servers |
+| `ANTHROPIC_AUTH_TOKEN` | Any non-empty string; the proxy handles real authentication |
+| `ANTHROPIC_MODEL` | The model Claude Code uses for primary/Opus tasks |
+| `ANTHROPIC_DEFAULT_SONNET_MODEL` | The model used for Sonnet-tier tasks |
+| `ANTHROPIC_DEFAULT_HAIKU_MODEL` | The model used for Haiku-tier (fast/cheap) tasks |
+| `DISABLE_NON_ESSENTIAL_MODEL_CALLS` | Prevents Claude Code from making extra API calls |
+| `CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC` | Disables telemetry and non-essential network traffic |

+> **Tip:** The model names above (e.g. `claude-opus-4.6`) are mapped to actual Copilot models by the proxy. See [Model Mapping](#model-mapping) below for details.

 See the [Claude Code settings docs](https://docs.anthropic.com/en/docs/claude-code/settings#environment-variables) for more options.

-##
+## What it Does

+ghc-proxy sits between your tools and the GitHub Copilot API:

+```text
+┌─────────────┐      ┌───────────┐      ┌──────────────────────┐
+│ Claude Code │──────│ ghc-proxy │──────│ api.githubcopilot.com│
+│ Cursor      │      │   :4141   │      │                      │
+│ Any client  │      │           │      │                      │
+└─────────────┘      └───────────┘      └──────────────────────┘
+  OpenAI or           Translates         GitHub Copilot
+  Anthropic           between            API
+  format              formats
+```

+The proxy authenticates with GitHub using the [device code OAuth flow](https://docs.github.com/en/apps/oauth-apps/building-oauth-apps/authorizing-oauth-apps#device-flow) (the same flow VS Code uses), then exchanges the GitHub token for a short-lived Copilot token that auto-refreshes.
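The device-code flow just described can be sketched in a few lines. This is an illustrative TypeScript sketch using GitHub's documented device-flow endpoints, not ghc-proxy's actual code; the client ID, scope, and the injected `post` transport are assumptions made so the flow can be shown (and exercised) without the network.

```typescript
// Minimal sketch of GitHub's device-code OAuth flow (illustrative, not ghc-proxy's code).
// The HTTP transport is injected so the polling logic can be tested offline.
type Transport = (url: string, body: Record<string, string>) => Promise<Record<string, unknown>>;

interface DeviceCodeResponse {
  device_code: string;
  user_code: string;
  verification_uri: string;
  interval: number; // minimum seconds between polls
}

async function deviceFlow(clientId: string, post: Transport): Promise<string> {
  // Step 1: request a device code + user code pair.
  const init = (await post("https://github.com/login/device/code", {
    client_id: clientId,
    scope: "read:user",
  })) as unknown as DeviceCodeResponse;

  console.log(`Open ${init.verification_uri} and enter code ${init.user_code}`);

  // Step 2: poll the token endpoint until the user authorizes the app.
  for (;;) {
    const res = await post("https://github.com/login/oauth/access_token", {
      client_id: clientId,
      device_code: init.device_code,
      grant_type: "urn:ietf:params:oauth:grant-type:device_code",
    });
    if (typeof res.access_token === "string") return res.access_token;
    if (res.error !== "authorization_pending") throw new Error(String(res.error));
    await new Promise((r) => setTimeout(r, init.interval * 1000));
  }
}
```

The returned GitHub token is what then gets exchanged for the short-lived Copilot token.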
+
+Incoming requests hit a [Hono](https://hono.dev/) server. OpenAI-format requests are forwarded directly to Copilot. Anthropic-format requests pass through a translation layer that converts message formats, tool schemas, and streaming events between the two protocols -- including full support for tool use, thinking blocks, and image content.
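In spirit, the Anthropic-to-OpenAI direction of that translation looks like the following. This is a deliberately simplified sketch (text only); the real layer also handles tool calls, thinking blocks, images, and streaming events.

```typescript
// Simplified sketch of translating an Anthropic Messages request into an
// OpenAI Chat Completions request (illustrative; the real translator covers far more).
interface AnthropicRequest {
  model: string;
  system?: string;
  max_tokens: number;
  messages: { role: "user" | "assistant"; content: string | { type: "text"; text: string }[] }[];
}

interface OpenAIRequest {
  model: string;
  max_tokens: number;
  messages: { role: "system" | "user" | "assistant"; content: string }[];
}

function anthropicToOpenAI(req: AnthropicRequest): OpenAIRequest {
  const messages: OpenAIRequest["messages"] = [];
  // Anthropic carries the system prompt as a top-level field; OpenAI as a leading message.
  if (req.system) messages.push({ role: "system", content: req.system });
  for (const m of req.messages) {
    // Anthropic content may be an array of typed blocks; flatten the text blocks.
    const text = typeof m.content === "string"
      ? m.content
      : m.content.filter((b) => b.type === "text").map((b) => b.text).join("\n");
    messages.push({ role: m.role, content: text });
  }
  return { model: req.model, max_tokens: req.max_tokens, messages };
}
```

The response path runs the same mapping in reverse, reassembling Anthropic-style content blocks and streaming events.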
+
+### Endpoints
+
+**OpenAI compatible:**

+| Method | Path | Description |
+|--------|------|-------------|
+| `POST` | `/v1/chat/completions` | Chat completions (streaming and non-streaming) |
+| `GET` | `/v1/models` | List available models |
+| `POST` | `/v1/embeddings` | Generate embeddings |

+**Anthropic compatible:**

+| Method | Path | Description |
+|--------|------|-------------|
+| `POST` | `/v1/messages` | Messages API with full tool use and streaming support |
+| `POST` | `/v1/messages/count_tokens` | Token counting |

+**Utility:**

+| Method | Path | Description |
+|--------|------|-------------|
+| `GET` | `/usage` | Copilot quota / usage monitoring |
+| `GET` | `/token` | Inspect the current Copilot token |

+> **Note:** The `/v1/` prefix is optional. `/chat/completions`, `/models`, and `/embeddings` also work.
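A raw client call against the OpenAI-compatible endpoint might be shaped like this. This is a hypothetical client snippet, not part of ghc-proxy; it assumes that, as on the Anthropic side, any placeholder bearer token is accepted, and that the model ID is one reported by `GET /v1/models`.

```typescript
// Hypothetical client snippet: build a Chat Completions request against the proxy.
// Assumption: the proxy handles real auth itself, so a dummy bearer token suffices.
const body = {
  model: "gpt-4.1",
  messages: [{ role: "user", content: "Say hello" }],
  stream: false,
};

const request = new Request("http://localhost:4141/v1/chat/completions", {
  method: "POST",
  headers: { "Content-Type": "application/json", Authorization: "Bearer dummy" },
  body: JSON.stringify(body),
});
// Send it with: const res = await fetch(request); const completion = await res.json();
```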
+
+## CLI Reference

 ghc-proxy uses a subcommand structure:

+```bash
+bunx ghc-proxy@latest start        # Start the proxy server
+bunx ghc-proxy@latest auth         # Run GitHub auth flow without starting the server
+bunx ghc-proxy@latest check-usage  # Show your Copilot usage/quota in the terminal
+bunx ghc-proxy@latest debug        # Print diagnostic info (version, paths, token status)
+```

-###
+### `start` Options

 | Option | Alias | Default | Description |
 |--------|-------|---------|-------------|
@@ -103,33 +164,90 @@ ghc-proxy uses a subcommand structure:
 | `--show-token` | -- | `false` | Display tokens on auth and refresh |
 | `--proxy-env` | -- | `false` | Use `HTTP_PROXY`/`HTTPS_PROXY` from env |
 | `--idle-timeout` | -- | `120` | Bun server idle timeout in seconds |
+| `--upstream-timeout` | -- | `300` | Upstream request timeout in seconds (0 to disable) |

-## Rate
+## Rate Limiting

-If you
+If you are worried about hitting Copilot rate limits:

+```bash
+# Enforce a 30-second cooldown between requests
+bunx ghc-proxy@latest start --rate-limit 30

+# Same, but queue requests instead of returning 429
+bunx ghc-proxy@latest start --rate-limit 30 --wait

+# Manually approve every request (useful for debugging)
+bunx ghc-proxy@latest start --manual
+```
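The `--rate-limit`/`--wait` semantics described above can be modelled as a small decision function. This is a sketch of the documented behaviour, not ghc-proxy's actual implementation; the function name and signature are hypothetical.

```typescript
// Sketch of the --rate-limit / --wait semantics (illustrative, not ghc-proxy's code).
// Given when the last request was forwarded, decide what to do with a new one.
type Decision =
  | { action: "forward" }
  | { action: "reject"; status: 429 }     // --rate-limit without --wait
  | { action: "queue"; delayMs: number }; // --rate-limit with --wait

function decide(
  nowMs: number,
  lastForwardedMs: number | null,
  intervalSec: number,
  wait: boolean,
): Decision {
  // No limit configured, or nothing forwarded yet: pass straight through.
  if (intervalSec <= 0 || lastForwardedMs === null) return { action: "forward" };
  const remaining = intervalSec * 1000 - (nowMs - lastForwardedMs);
  if (remaining <= 0) return { action: "forward" };
  // Inside the cooldown window: queue if --wait, otherwise return 429.
  return wait ? { action: "queue", delayMs: remaining } : { action: "reject", status: 429 };
}
```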
+
+## Account Types

+If you have a GitHub Business or Enterprise Copilot plan, pass `--account-type`:

+```bash
+bunx ghc-proxy@latest start --account-type business
+bunx ghc-proxy@latest start --account-type enterprise
+```

+This routes requests to the correct Copilot API endpoint for your plan. See the [GitHub docs on network routing](https://docs.github.com/en/enterprise-cloud@latest/copilot/managing-copilot/managing-github-copilot-in-your-organization/managing-access-to-github-copilot-in-your-organization/managing-github-copilot-access-to-your-organizations-network#configuring-copilot-subscription-based-network-routing-for-your-enterprise-or-organization) for details.

+## Model Mapping

+When Claude Code sends a request for a model like `claude-sonnet-4.6`, the proxy maps it to an actual model available on Copilot. The mapping logic works as follows:

+1. If the requested model ID is known to Copilot (e.g. `gpt-4.1`, `claude-sonnet-4.5`), it is used as-is.
+2. If the model starts with `claude-opus-`, `claude-sonnet-`, or `claude-haiku-`, it falls back to a configured model.

+### Default Fallbacks

+| Prefix | Default Fallback |
+|--------|-----------------|
+| `claude-opus-*` | `claude-opus-4.6` |
+| `claude-sonnet-*` | `claude-sonnet-4.5` |
+| `claude-haiku-*` | `claude-haiku-4.5` |

+### Customizing Fallbacks

+You can override the defaults with **environment variables**:

+```bash
+MODEL_FALLBACK_CLAUDE_OPUS=claude-opus-4.6
+MODEL_FALLBACK_CLAUDE_SONNET=claude-sonnet-4.5
+MODEL_FALLBACK_CLAUDE_HAIKU=claude-haiku-4.5
+```

+Or in the proxy's **config file** (`~/.local/share/ghc-proxy/config.json`):

+```json
+{
+  "modelFallback": {
+    "claudeOpus": "claude-opus-4.6",
+    "claudeSonnet": "claude-sonnet-4.5",
+    "claudeHaiku": "claude-haiku-4.5"
+  }
+}
+```

+**Priority order:** environment variable > config.json > built-in default.
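Putting the two mapping rules and the priority order together, the resolution logic is roughly as follows. This is a sketch with a hypothetical helper name, simplified to take a single pre-selected override per lookup rather than reading the environment and config file itself.

```typescript
// Sketch of the model-mapping rules above (hypothetical helper, not ghc-proxy's code).
const BUILTIN_DEFAULTS: Record<string, string> = {
  "claude-opus-": "claude-opus-4.6",
  "claude-sonnet-": "claude-sonnet-4.5",
  "claude-haiku-": "claude-haiku-4.5",
};

function resolveModel(
  requested: string,
  copilotModels: Set<string>, // model IDs reported by GET /v1/models
  envOverride?: string,       // matching MODEL_FALLBACK_* environment variable, if set
  configOverride?: string,    // matching modelFallback entry in config.json, if set
): string {
  // Rule 1: known model IDs pass through unchanged.
  if (copilotModels.has(requested)) return requested;
  // Rule 2: claude-{opus,sonnet,haiku}-* falls back, with env > config > default.
  for (const prefix of Object.keys(BUILTIN_DEFAULTS)) {
    if (requested.startsWith(prefix)) {
      return envOverride ?? configOverride ?? BUILTIN_DEFAULTS[prefix];
    }
  }
  return requested; // anything else is forwarded as-is
}
```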

 ## Docker

 Build and run:

+```bash
+docker build -t ghc-proxy .
+mkdir -p ./copilot-data
+docker run -p 4141:4141 -v $(pwd)/copilot-data:/root/.local/share/ghc-proxy ghc-proxy
+```

+Authentication and settings are persisted in `copilot-data/config.json` so they survive container restarts.

 You can also pass a GitHub token via environment variable:

+```bash
+docker run -p 4141:4141 -e GH_TOKEN=your_token ghc-proxy
+```

 Docker Compose:

@@ -144,28 +262,22 @@ services:
     restart: unless-stopped
 ```

-##
-
-If you have a GitHub business or enterprise Copilot plan, pass the `--account-type` flag:
-
-The proxy authenticates with GitHub using the [device code OAuth flow](https://docs.github.com/en/apps/oauth-apps/building-oauth-apps/authorizing-oauth-apps#device-flow) (the same flow VS Code uses), then exchanges the GitHub token for a short-lived Copilot token that auto-refreshes.
-
-Incoming requests hit a [Hono](https://hono.dev/) server. OpenAI-format requests are forwarded directly to `api.githubcopilot.com`. Anthropic-format requests pass through a translation layer (`src/translator/`) that converts the message format, tool schemas, and streaming events between the two protocols -- including full support for tool use, thinking blocks, and image content.
-
-The server spoofs VS Code headers so the Copilot API treats it like a normal editor session.
+## Running from Source

+```bash
+git clone https://github.com/wxxb789/ghc-proxy.git
+cd ghc-proxy
+bun install
+bun run dev
+```

 ## Development

+```bash
+bun install        # Install dependencies
+bun run dev        # Start with --watch
+bun run build      # Build with tsdown
+bun run lint       # ESLint
+bun run typecheck  # tsc --noEmit
+bun test           # Run tests
+```