ghc-proxy 0.1.3 → 0.2.0

package/README.md CHANGED
@@ -7,67 +7,61 @@
  A proxy that turns your GitHub Copilot subscription into an OpenAI and Anthropic compatible API. Use it to power [Claude Code](https://docs.anthropic.com/en/docs/claude-code/overview), [Cursor](https://www.cursor.com/), or any tool that speaks the OpenAI Chat Completions or Anthropic Messages protocol.
 
  > [!WARNING]
- > Reverse-engineered, unofficial, may break. Excessive use can trigger GitHub abuse detection. Use at your own risk.
+ > Reverse-engineered, unofficial, may break at any time. Excessive use can trigger GitHub abuse detection. **Use at your own risk.**
 
- **Note:** If you're using [opencode](https://github.com/sst/opencode), you don't need this -- opencode supports GitHub Copilot natively.
+ **TL;DR** Install [Bun](https://bun.com/docs/installation), then run:
 
- ## Installation
-
- The quickest way to get started is with `npx`:
+ ```bash
+ bunx ghc-proxy@latest start --wait
+ ```
 
- npx ghc-proxy@latest start
+ ## Prerequisites
 
- This starts the proxy on `http://localhost:4141`. It will walk you through GitHub authentication on first run.
+ Before you start, make sure you have:
 
- You can also install it globally:
+ 1. **Bun** (>= 1.2) -- a fast JavaScript runtime used to run the proxy
+    - **Windows:** `winget install --id Oven-sh.Bun`
+    - **Other platforms:** see the [official installation guide](https://bun.com/docs/installation)
+ 2. **A GitHub Copilot subscription** -- individual, business, or enterprise
 
- npm install -g ghc-proxy
+ ## Quick Start
 
- Or run it from source with [Bun](https://bun.sh/) (>= 1.2):
+ 1. Start the proxy:
 
- git clone https://github.com/wxxb789/ghc-proxy.git
- cd ghc-proxy
- bun install
- bun run dev
+    bunx ghc-proxy@latest start --wait
 
- ## What it does
+    > **Recommended:** The `--wait` flag queues requests instead of rejecting them with a 429 error when you hit Copilot rate limits. This is the simplest way to run the proxy for daily use.
 
- ghc-proxy sits between your tools and the GitHub Copilot API. It authenticates with GitHub using the device code flow, obtains a Copilot token, and exposes the following endpoints:
+ 2. On the first run, you will be guided through GitHub's device-code authentication flow. Follow the prompts to authorize the proxy.
 
- **OpenAI compatible:**
+ 3. Once authenticated, the proxy starts on **`http://localhost:4141`** and is ready to accept requests.
 
- - `POST /v1/chat/completions` -- chat completions (streaming and non-streaming)
- - `GET /v1/models` -- list available models
- - `POST /v1/embeddings` -- generate embeddings
-
- **Anthropic compatible:**
+ That's it. Any tool that supports the OpenAI or Anthropic API can now point to `http://localhost:4141`.
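
Once the proxy is listening, any OpenAI-style client can talk to it. A minimal sketch of such a request (the model name and prompt are illustrative; the commented-out lines require the proxy to actually be running on the default port):

```python
import json

# Minimal Chat Completions payload for the proxy (model name is
# illustrative; list the real ones via GET /v1/models).
payload = {
    "model": "gpt-4.1",
    "messages": [{"role": "user", "content": "Say hello"}],
    "stream": False,
}
body = json.dumps(payload).encode()

# Sending it requires the proxy to be running on localhost:4141:
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:4141/v1/chat/completions",
#     data=body,
#     headers={"Content-Type": "application/json"},
# )
# print(urllib.request.urlopen(req).read().decode())
print(body.decode())
```
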
 
- - `POST /v1/messages` -- the Anthropic Messages API, with full tool use and streaming support
- - `POST /v1/messages/count_tokens` -- token counting
-
- Anthropic requests are translated to OpenAI format on the fly, sent to Copilot, and the responses are translated back. This means Claude Code thinks it's talking to Anthropic, but it's actually talking to Copilot.
+ ## Using with Claude Code
 
- There are also utility endpoints: `GET /usage` for quota monitoring and `GET /token` to inspect the current Copilot token.
+ This is the most common use case. There are two ways to set it up:
 
- ## Using with Claude Code
+ ### Option A: One-command launch
 
- The fastest way to get Claude Code running on Copilot:
+ ```bash
+ bunx ghc-proxy@latest start --claude-code
+ ```
 
- npx ghc-proxy@latest start --claude-code
+ This starts the proxy, opens an interactive model picker, and copies a ready-to-paste environment command to your clipboard. Run that command in another terminal to launch Claude Code with the correct configuration.
 
- This starts the proxy, prompts you to pick a model, and copies a ready-to-paste command to your clipboard. Run that command in another terminal to launch Claude Code.
+ ### Option B: Permanent config (Recommended)
 
- If you prefer a permanent setup, create `.claude/settings.json` in your project:
+ Create or edit `~/.claude/settings.json` (this applies globally to all projects):
 
  ```json
  {
    "env": {
      "ANTHROPIC_BASE_URL": "http://localhost:4141",
-     "ANTHROPIC_AUTH_TOKEN": "dummy",
-     "ANTHROPIC_MODEL": "gpt-4.1",
-     "ANTHROPIC_DEFAULT_SONNET_MODEL": "gpt-4.1",
-     "ANTHROPIC_SMALL_FAST_MODEL": "gpt-4.1",
-     "ANTHROPIC_DEFAULT_HAIKU_MODEL": "gpt-4.1",
+     "ANTHROPIC_AUTH_TOKEN": "dummy-token",
+     "ANTHROPIC_MODEL": "claude-opus-4.6",
+     "ANTHROPIC_DEFAULT_SONNET_MODEL": "claude-sonnet-4.6",
+     "ANTHROPIC_DEFAULT_HAIKU_MODEL": "claude-haiku-4.5",
      "DISABLE_NON_ESSENTIAL_MODEL_CALLS": "1",
      "CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC": "1"
    },
@@ -77,18 +71,85 @@ If you prefer a permanent setup, create `.claude/settings.json` in your project:
  }
  ```
 
+ Then simply start the proxy and use Claude Code as usual:
+
+ ```bash
+ bunx ghc-proxy@latest start --wait
+ ```
+
+ **What each environment variable does:**
+
+ | Variable | Purpose |
+ |----------|---------|
+ | `ANTHROPIC_BASE_URL` | Points Claude Code to the proxy instead of Anthropic's servers |
+ | `ANTHROPIC_AUTH_TOKEN` | Any non-empty string; the proxy handles real authentication |
+ | `ANTHROPIC_MODEL` | The model Claude Code uses for primary/Opus tasks |
+ | `ANTHROPIC_DEFAULT_SONNET_MODEL` | The model used for Sonnet-tier tasks |
+ | `ANTHROPIC_DEFAULT_HAIKU_MODEL` | The model used for Haiku-tier (fast/cheap) tasks |
+ | `DISABLE_NON_ESSENTIAL_MODEL_CALLS` | Prevents Claude Code from making extra API calls |
+ | `CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC` | Disables telemetry and non-essential network traffic |
+
+ > **Tip:** The model names above (e.g. `claude-opus-4.6`) are mapped to actual Copilot models by the proxy. See [Model Mapping](#model-mapping) below for details.
+
  See the [Claude Code settings docs](https://docs.anthropic.com/en/docs/claude-code/settings#environment-variables) for more options.
 
- ## CLI commands
+ ## What it Does
+
+ ghc-proxy sits between your tools and the GitHub Copilot API:
+
+ ```text
+ ┌─────────────┐      ┌───────────┐      ┌──────────────────────┐
+ │ Claude Code │──────│ ghc-proxy │──────│ api.githubcopilot.com│
+ │ Cursor      │      │   :4141   │      │                      │
+ │ Any client  │      │           │      │                      │
+ └─────────────┘      └───────────┘      └──────────────────────┘
+   OpenAI or           Translates          GitHub Copilot
+   Anthropic           between             API
+   format              formats
+ ```
+
+ The proxy authenticates with GitHub using the [device code OAuth flow](https://docs.github.com/en/apps/oauth-apps/building-oauth-apps/authorizing-oauth-apps#device-flow) (the same flow VS Code uses), then exchanges the GitHub token for a short-lived Copilot token that auto-refreshes.
+
+ Incoming requests hit a [Hono](https://hono.dev/) server. OpenAI-format requests are forwarded directly to Copilot. Anthropic-format requests pass through a translation layer that converts message formats, tool schemas, and streaming events between the two protocols -- including full support for tool use, thinking blocks, and image content.
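
The translation step can be sketched roughly as follows (a simplified, hypothetical illustration that only covers the system prompt and plain-text content blocks, not the proxy's actual translator):

```python
def anthropic_to_openai(req: dict) -> dict:
    """Sketch: convert a minimal Anthropic Messages request into
    OpenAI Chat Completions shape (text-only; the real translator
    also handles tool use, thinking blocks, and images)."""
    messages = []
    if "system" in req:
        messages.append({"role": "system", "content": req["system"]})
    for m in req["messages"]:
        content = m["content"]
        if isinstance(content, list):  # Anthropic content blocks
            content = "".join(
                b["text"] for b in content if b.get("type") == "text"
            )
        messages.append({"role": m["role"], "content": content})
    return {
        "model": req["model"],
        "messages": messages,
        "max_tokens": req.get("max_tokens"),
    }

converted = anthropic_to_openai({
    "model": "claude-sonnet-4.5",
    "max_tokens": 256,
    "system": "Be brief.",
    "messages": [
        {"role": "user", "content": [{"type": "text", "text": "Hi"}]}
    ],
})
print(converted["messages"])
```

The response travels the reverse path: the OpenAI-format completion from Copilot is repackaged as Anthropic content blocks and streaming events before it reaches the client.
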
+
+ ### Endpoints
+
+ **OpenAI compatible:**
+
+ | Method | Path | Description |
+ |--------|------|-------------|
+ | `POST` | `/v1/chat/completions` | Chat completions (streaming and non-streaming) |
+ | `GET` | `/v1/models` | List available models |
+ | `POST` | `/v1/embeddings` | Generate embeddings |
+
+ **Anthropic compatible:**
+
+ | Method | Path | Description |
+ |--------|------|-------------|
+ | `POST` | `/v1/messages` | Messages API with full tool use and streaming support |
+ | `POST` | `/v1/messages/count_tokens` | Token counting |
+
+ **Utility:**
+
+ | Method | Path | Description |
+ |--------|------|-------------|
+ | `GET` | `/usage` | Copilot quota / usage monitoring |
+ | `GET` | `/token` | Inspect the current Copilot token |
+
+ > **Note:** The `/v1/` prefix is optional. `/chat/completions`, `/models`, and `/embeddings` also work.
+
+ ## CLI Reference
 
  ghc-proxy uses a subcommand structure:
 
- ghc-proxy start        # start the proxy server
- ghc-proxy auth         # run the GitHub auth flow without starting the server
- ghc-proxy check-usage  # show your Copilot usage/quota in the terminal
- ghc-proxy debug        # print diagnostic info (version, paths, token status)
+ ```bash
+ bunx ghc-proxy@latest start        # Start the proxy server
+ bunx ghc-proxy@latest auth         # Run GitHub auth flow without starting the server
+ bunx ghc-proxy@latest check-usage  # Show your Copilot usage/quota in the terminal
+ bunx ghc-proxy@latest debug        # Print diagnostic info (version, paths, token status)
+ ```
 
- ### Start options
+ ### `start` Options
 
  | Option | Alias | Default | Description |
  |--------|-------|---------|-------------|
@@ -103,33 +164,90 @@ ghc-proxy uses a subcommand structure:
  | `--show-token` | -- | `false` | Display tokens on auth and refresh |
  | `--proxy-env` | -- | `false` | Use `HTTP_PROXY`/`HTTPS_PROXY` from env |
  | `--idle-timeout` | -- | `120` | Bun server idle timeout in seconds |
+ | `--upstream-timeout` | -- | `300` | Upstream request timeout in seconds (0 to disable) |
 
- ## Rate limiting
+ ## Rate Limiting
 
- If you're worried about hitting Copilot rate limits, you have a few options:
+ If you are worried about hitting Copilot rate limits:
 
- # Enforce a 30-second cooldown between requests
- npx ghc-proxy@latest start --rate-limit 30
+ ```bash
+ # Enforce a 30-second cooldown between requests
+ bunx ghc-proxy@latest start --rate-limit 30
 
- # Same, but wait instead of returning a 429 error
- npx ghc-proxy@latest start --rate-limit 30 --wait
+ # Same, but queue requests instead of returning 429
+ bunx ghc-proxy@latest start --rate-limit 30 --wait
 
- # Manually approve every request (useful for debugging)
- npx ghc-proxy@latest start --manual
+ # Manually approve every request (useful for debugging)
+ bunx ghc-proxy@latest start --manual
+ ```
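
The difference between plain `--rate-limit` and `--rate-limit --wait` can be sketched like this (illustrative logic only, not the proxy's actual implementation):

```python
def admit(last_request_at: float, now: float,
          rate_limit: float, wait: bool) -> tuple[bool, float]:
    """Sketch of the two modes: without --wait, a request inside the
    cooldown window is rejected (429); with --wait, it is delayed
    until the window has passed. Returns (accepted, delay_seconds)."""
    elapsed = now - last_request_at
    if elapsed >= rate_limit:
        return True, 0.0                   # outside the cooldown: pass through
    if wait:
        return True, rate_limit - elapsed  # queue: wait out the remainder
    return False, 0.0                      # inside the cooldown: reject

print(admit(100.0, 110.0, 30, wait=False))  # rejected: 10s into a 30s window
print(admit(100.0, 110.0, 30, wait=True))   # accepted after a 20.0s delay
print(admit(100.0, 140.0, 30, wait=False))  # accepted immediately
```
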
+
+ ## Account Types
+
+ If you have a GitHub Business or Enterprise Copilot plan, pass `--account-type`:
+
+ ```bash
+ bunx ghc-proxy@latest start --account-type business
+ bunx ghc-proxy@latest start --account-type enterprise
+ ```
+
+ This routes requests to the correct Copilot API endpoint for your plan. See the [GitHub docs on network routing](https://docs.github.com/en/enterprise-cloud@latest/copilot/managing-copilot/managing-github-copilot-in-your-organization/managing-access-to-github-copilot-in-your-organization/managing-github-copilot-access-to-your-organizations-network#configuring-copilot-subscription-based-network-routing-for-your-enterprise-or-organization) for details.
+
+ ## Model Mapping
+
+ When Claude Code sends a request for a model like `claude-sonnet-4.6`, the proxy maps it to an actual model available on Copilot. The mapping logic works as follows:
+
+ 1. If the requested model ID is known to Copilot (e.g. `gpt-4.1`, `claude-sonnet-4.5`), it is used as-is.
+ 2. If the model starts with `claude-opus-`, `claude-sonnet-`, or `claude-haiku-`, it falls back to a configured model.
+
+ ### Default Fallbacks
+
+ | Prefix | Default Fallback |
+ |--------|-----------------|
+ | `claude-opus-*` | `claude-opus-4.6` |
+ | `claude-sonnet-*` | `claude-sonnet-4.5` |
+ | `claude-haiku-*` | `claude-haiku-4.5` |
+
+ ### Customizing Fallbacks
+
+ You can override the defaults with **environment variables**:
+
+ ```bash
+ MODEL_FALLBACK_CLAUDE_OPUS=claude-opus-4.6
+ MODEL_FALLBACK_CLAUDE_SONNET=claude-sonnet-4.5
+ MODEL_FALLBACK_CLAUDE_HAIKU=claude-haiku-4.5
+ ```
+
+ Or in the proxy's **config file** (`~/.local/share/ghc-proxy/config.json`):
+
+ ```json
+ {
+   "modelFallback": {
+     "claudeOpus": "claude-opus-4.6",
+     "claudeSonnet": "claude-sonnet-4.5",
+     "claudeHaiku": "claude-haiku-4.5"
+   }
+ }
+ ```
+
+ **Priority order:** environment variable > config.json > built-in default.
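
Putting the rules together, resolution can be sketched like this (a hypothetical helper; the pass-through behavior for unknown non-Claude IDs is an assumption, not documented behavior):

```python
import os

# Prefix -> (env var, config.json key, built-in default), per the tables above.
FALLBACKS = {
    "claude-opus-": ("MODEL_FALLBACK_CLAUDE_OPUS", "claudeOpus", "claude-opus-4.6"),
    "claude-sonnet-": ("MODEL_FALLBACK_CLAUDE_SONNET", "claudeSonnet", "claude-sonnet-4.5"),
    "claude-haiku-": ("MODEL_FALLBACK_CLAUDE_HAIKU", "claudeHaiku", "claude-haiku-4.5"),
}

def resolve_model(requested: str, known_models: set, config: dict) -> str:
    """Sketch of the mapping rules: known IDs pass through; otherwise
    claude-{opus,sonnet,haiku}-* prefixes fall back, with priority
    env var > config.json > built-in default."""
    if requested in known_models:
        return requested
    for prefix, (env_key, cfg_key, default) in FALLBACKS.items():
        if requested.startswith(prefix):
            return (os.environ.get(env_key)
                    or config.get("modelFallback", {}).get(cfg_key)
                    or default)
    return requested  # assumed: other unknown IDs pass through unchanged

# Falls back via the sonnet rule (claude-sonnet-4.5 unless overridden):
print(resolve_model("claude-sonnet-4.6", {"gpt-4.1", "claude-sonnet-4.5"}, {}))
```
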
 
  ## Docker
 
  Build and run:
 
- docker build -t ghc-proxy .
- mkdir -p ./copilot-data
- docker run -p 4141:4141 -v $(pwd)/copilot-data:/root/.local/share/ghc-proxy ghc-proxy
+ ```bash
+ docker build -t ghc-proxy .
+ mkdir -p ./copilot-data
+ docker run -p 4141:4141 -v $(pwd)/copilot-data:/root/.local/share/ghc-proxy ghc-proxy
+ ```
 
- The authentication and settings are persisted in `copilot-data/config.json` so authentication survives container restarts.
+ Authentication and settings are persisted in `copilot-data/config.json` so they survive container restarts.
 
  You can also pass a GitHub token via environment variable:
 
- docker run -p 4141:4141 -e GH_TOKEN=your_token ghc-proxy
+ ```bash
+ docker run -p 4141:4141 -e GH_TOKEN=your_token ghc-proxy
+ ```
 
  Docker Compose:
 
@@ -144,28 +262,22 @@ services:
      restart: unless-stopped
  ```
 
- ## Account types
-
- If you have a GitHub business or enterprise Copilot plan, pass the `--account-type` flag:
+ ## Running from Source
 
- npx ghc-proxy@latest start --account-type business
- npx ghc-proxy@latest start --account-type enterprise
-
- This routes requests to the correct Copilot API endpoint for your plan. See the [GitHub docs on network routing](https://docs.github.com/en/enterprise-cloud@latest/copilot/managing-copilot/managing-github-copilot-in-your-organization/managing-access-to-github-copilot-in-your-organization/managing-github-copilot-access-to-your-organizations-network#configuring-copilot-subscription-based-network-routing-for-your-enterprise-or-organization) for more details.
-
- ## How it works
-
- The proxy authenticates with GitHub using the [device code OAuth flow](https://docs.github.com/en/apps/oauth-apps/building-oauth-apps/authorizing-oauth-apps#device-flow) (the same flow VS Code uses), then exchanges the GitHub token for a short-lived Copilot token that auto-refreshes.
-
- Incoming requests hit a [Hono](https://hono.dev/) server. OpenAI-format requests are forwarded directly to `api.githubcopilot.com`. Anthropic-format requests pass through a translation layer (`src/translator/`) that converts the message format, tool schemas, and streaming events between the two protocols -- including full support for tool use, thinking blocks, and image content.
-
- The server spoofs VS Code headers so the Copilot API treats it like a normal editor session.
+ ```bash
+ git clone https://github.com/wxxb789/ghc-proxy.git
+ cd ghc-proxy
+ bun install
+ bun run dev
+ ```
 
  ## Development
 
- bun install
- bun run dev        # start with --watch
- bun run build      # build with tsdown
- bun run lint       # eslint
- bun run typecheck  # tsc --noEmit
- bun test           # run tests
+ ```bash
+ bun install        # Install dependencies
+ bun run dev        # Start with --watch
+ bun run build      # Build with tsdown
+ bun run lint       # ESLint
+ bun run typecheck  # tsc --noEmit
+ bun test           # Run tests
+ ```