@shunirr/cc-glm 0.1.0
- package/README.md +199 -0
- package/dist/bin/cli.js +722 -0
- package/dist/bin/cli.js.map +1 -0
- package/dist/proxy/server.js +912 -0
- package/dist/proxy/server.js.map +1 -0
- package/package.json +55 -0
package/README.md
ADDED
@@ -0,0 +1,199 @@

# cc-glm

Claude Code proxy for routing requests between the Anthropic API and z.ai GLM.

## Features

- **Configurable model routing**: Route requests to different upstreams based on glob-matched model name patterns
- **Model name rewriting**: Transparently rewrite model names (e.g., `claude-sonnet-*` → `GLM-4.7`)
- **Thinking block transformation**: Convert z.ai thinking blocks to an Anthropic-compatible format
- **Singleton proxy**: One proxy instance shared across multiple Claude Code sessions
- **Lifecycle management**: The proxy starts and stops automatically with Claude Code
- **YAML configuration**: Config file with `${VAR:-default}` environment variable expansion

## Installation

```bash
npm install -g cc-glm
```

Or use with npx:

```bash
npx cc-glm
```

## Usage

Use `cc-glm` as a drop-in replacement for `claude`:

```bash
# Start Claude Code through the proxy
cc-glm

# Pass arguments through to Claude Code
cc-glm -c
cc-glm -p "PROMPT"
```

The proxy automatically:

1. Starts if not already running (a single shared instance; see the sketch below)
2. Sets `ANTHROPIC_BASE_URL` so requests are routed through the proxy
3. Routes each request based on the model name matching rules
4. Stops once all Claude Code sessions have exited (after a grace period)
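
The "starts if not already running" behavior relies on an atomic lock directory, as described under "How It Works" below. A minimal sketch of that pattern, assuming Node's `fs` API; the lock path handling and function names are illustrative, not cc-glm's actual code:

```ts
import { mkdirSync, rmSync } from "node:fs";

// mkdir is atomic: it throws EEXIST if the directory already exists,
// so only the first cc-glm invocation gets to start the proxy.
function tryAcquireProxyLock(lockDir: string): boolean {
  try {
    mkdirSync(lockDir, { recursive: false });
    return true;
  } catch (err) {
    if ((err as NodeJS.ErrnoException).code === "EEXIST") return false;
    throw err;
  }
}

// Released once the last session exits and the grace period has elapsed.
function releaseProxyLock(lockDir: string): void {
  rmSync(lockDir, { recursive: true, force: true });
}
```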

## Configuration

Create `~/.config/cc-glm/config.yml`:

```yaml
# Claude Code CLI command path (empty = auto-detect from PATH)
claude:
  path: ""

proxy:
  port: 8787
  host: "127.0.0.1"

upstream:
  # Anthropic API (OAuth, forwards authorization header as-is)
  anthropic:
    url: "https://api.anthropic.com"

  # z.ai GLM API
  zai:
    url: "https://api.z.ai/api/anthropic"
    apiKey: "YOUR_API_KEY" # Or falls back to ZAI_API_KEY env var

lifecycle:
  stopGraceSeconds: 8
  startWaitSeconds: 8
  stateDir: "${TMPDIR}/claude-code-proxy"

logging:
  level: "info" # debug, info, warn, error

# Rules are evaluated top-to-bottom, first match wins
routing:
  rules:
    - match: "claude-sonnet-*"
      upstream: zai
      model: "GLM-4.7"

    - match: "claude-haiku-*"
      upstream: zai
      model: "GLM-4.7"

    - match: "glm-*"
      upstream: zai

  default: anthropic
```
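
Values such as `stateDir: "${TMPDIR}/claude-code-proxy"` use shell-style defaults. A minimal sketch of how `${VAR:-default}` expansion can be implemented; `expandEnvVars` is an illustrative helper name, not cc-glm's actual API:

```ts
// Expand ${VAR} and ${VAR:-default} references against process.env.
function expandEnvVars(value: string, env: NodeJS.ProcessEnv = process.env): string {
  return value.replace(
    /\$\{([A-Za-z_][A-Za-z0-9_]*)(?::-([^}]*))?\}/g,
    (_, name: string, fallback: string | undefined) => {
      const resolved = env[name];
      return resolved !== undefined && resolved !== "" ? resolved : fallback ?? "";
    }
  );
}

// expandEnvVars("${TMPDIR}/claude-code-proxy") -> "/tmp/claude-code-proxy" when TMPDIR=/tmp
// expandEnvVars("${PORT:-8787}")               -> "8787" when PORT is unset
```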

### Configuration Options

#### `claude.path`

Path to the Claude Code CLI executable. If empty or not specified, `cc-glm` auto-detects the command from your `PATH` using `which` (Unix/macOS) or `where` (Windows).

```yaml
claude:
  path: "/usr/local/bin/claude" # Custom path
  # or
  path: "" # Auto-detect (default)
```

Without a config file, all requests are routed to the Anthropic API (OAuth).
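
The auto-detection amounts to shelling out to the platform's lookup command. A sketch under that assumption (the helper name is illustrative):

```ts
import { execFileSync } from "node:child_process";

// Resolve the Claude Code CLI: prefer an explicitly configured path,
// otherwise look the command up on PATH via `which` / `where`.
function resolveClaudePath(configuredPath?: string): string {
  if (configuredPath) return configuredPath;
  const lookup = process.platform === "win32" ? "where" : "which";
  const output = execFileSync(lookup, ["claude"], { encoding: "utf8" });
  return output.split(/\r?\n/)[0].trim();
}
```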

### Environment Variables

- `ZAI_API_KEY` — z.ai API key (used when config `apiKey` is empty)
- `ANTHROPIC_BASE_URL` — Automatically set by cc-glm to point to the proxy

## Model Routing

Routing rules use glob patterns (`*` wildcard) and are evaluated top-to-bottom. The first matching rule wins. Each rule can optionally rewrite the model name sent to the upstream.

| Rule Pattern | Upstream | Model Sent |
|---|---|---|
| `claude-sonnet-*` | z.ai | `GLM-4.7` |
| `claude-haiku-*` | z.ai | `GLM-4.7` |
| `glm-*` | z.ai | (original) |
| (no match) | Anthropic | (original) |
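
In code, first-match-wins glob routing with an optional model rewrite looks roughly like this. The names (`RoutingRule`, `resolveRoute`) are illustrative, not cc-glm's exported API:

```ts
interface RoutingRule {
  match: string;    // glob pattern, "*" is the only wildcard
  upstream: string; // e.g. "zai" or "anthropic"
  model?: string;   // optional model name rewrite
}

function escapeRegExp(s: string): string {
  return s.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
}

// Evaluate rules top-to-bottom; the first glob match decides the upstream
// and (optionally) rewrites the model name. Falls back to the default upstream.
function resolveRoute(
  model: string,
  rules: RoutingRule[],
  defaultUpstream: string
): { upstream: string; model: string } {
  for (const rule of rules) {
    const pattern = new RegExp(
      "^" + rule.match.split("*").map(escapeRegExp).join(".*") + "$"
    );
    if (pattern.test(model)) {
      return { upstream: rule.upstream, model: rule.model ?? model };
    }
  }
  return { upstream: defaultUpstream, model };
}

// resolveRoute("claude-sonnet-4-20250514", rules, "anthropic")
//   -> { upstream: "zai", model: "GLM-4.7" } with the config above
```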

## How It Works

1. `cc-glm` starts a local HTTP proxy at `127.0.0.1:8787` (a singleton, guarded by an atomic lock directory)
2. It sets `ANTHROPIC_BASE_URL` so Claude Code sends API requests through the proxy
3. The proxy extracts the model name from each request body
4. Routing rules determine the upstream (Anthropic or z.ai) and an optional model rewrite
5. Auth headers are adjusted per upstream (sketched below):
   - **Anthropic**: forwards the original OAuth `authorization` header
   - **z.ai**: replaces `authorization` with `x-api-key`
6. z.ai responses are transformed to ensure an Anthropic-compatible thinking block format
7. After Claude Code exits, the proxy waits a grace period (default 8s) and stops if no other sessions remain
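
Step 5 boils down to swapping one header for another. A sketch, assuming headers arrive as a plain record; the function name is illustrative:

```ts
// Adjust auth headers per upstream before forwarding the request.
function adjustAuthHeaders(
  headers: Record<string, string>,
  upstream: "anthropic" | "zai",
  zaiApiKey: string
): Record<string, string> {
  const out = { ...headers };
  if (upstream === "zai") {
    // z.ai authenticates with an API key header rather than the OAuth bearer token.
    delete out["authorization"];
    out["x-api-key"] = zaiApiKey;
  }
  // For Anthropic, the original OAuth `authorization` header passes through untouched.
  return out;
}
```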

## Thinking Block Transformation

### Why Transformation Is Needed

When a conversation's message history includes thinking blocks, the proxy must decide whether to pass them through or convert them to a different format. This is necessary because:

- **Anthropic validates thinking block signatures**: A thinking block sent to the Anthropic API carries a cryptographic signature proving the content was generated by Anthropic. An invalid signature causes the API to reject the request.
- **z.ai thinking blocks lack valid signatures**: Thinking blocks generated by z.ai have either no signature or one that is invalid from Anthropic's perspective.

### Transformation Rules

The proxy uses signature-based detection to decide how to handle each thinking block:

| Origin | Signature | Sent to Anthropic | Sent to z.ai |
|--------|-----------|-------------------|--------------|
| Anthropic-generated | Valid signature | ✅ Passed through as-is | N/A (usually) |
| z.ai-generated | None/invalid | ⚠️ Converted to text block | N/A (usually) |
| In conversation history | Detected via signature store | ✅ Preserved if valid, ⚠️ converted if from z.ai | N/A |

**Request Flow (to Anthropic)**:

1. Check whether the thinking block has a signature
2. If the signature exists and is in the signature store (previously recorded from an Anthropic response) → preserve it as a thinking block
3. If the signature is absent from the store (z.ai origin) → convert it to a text block

**Response Flow (from z.ai)**:

- Extract the thinking content and convert it to an Anthropic-compatible format (removing invalid signature fields)

### Conversion Method

When a z.ai-origin thinking block is detected in a request to Anthropic, it is converted to a text block wrapped in XML tags:

**Before (z.ai thinking block)**:

```json
{
  "type": "thinking",
  "thinking": "This is my reasoning process...",
  "signature": "invalid_signature_xyz"
}
```

**After (converted to text)**:

```json
{
  "type": "text",
  "text": "<previous-glm-reasoning>\nThis is my reasoning process...\n</previous-glm-reasoning>"
}
```

This preserves the reasoning content while avoiding Anthropic's signature validation. The `<previous-glm-reasoning>` tags clearly mark the content as historical reasoning from z.ai.
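
Putting the request-flow rules and this conversion together, the transformation can be sketched as follows. The `knownSignatures` set stands in for the signature store described above; all names are illustrative:

```ts
type ContentBlock =
  | { type: "thinking"; thinking: string; signature?: string }
  | { type: "text"; text: string };

// Before forwarding a request to Anthropic, keep thinking blocks whose
// signature was previously recorded from an Anthropic response, and convert
// everything else (z.ai origin) into tagged text blocks.
function sanitizeThinkingForAnthropic(
  blocks: ContentBlock[],
  knownSignatures: Set<string>
): ContentBlock[] {
  return blocks.map((block) => {
    if (block.type !== "thinking") return block;
    if (block.signature && knownSignatures.has(block.signature)) return block;
    return {
      type: "text",
      text: `<previous-glm-reasoning>\n${block.thinking}\n</previous-glm-reasoning>`,
    };
  });
}
```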

## Development

```bash
npm install
npm run build     # Build with tsup
npm run dev       # Build in watch mode
npm run lint      # Type check (tsc --noEmit)
npm test          # Run tests (watch mode)
npm run test:run  # Run tests once
```

## License

MIT