@mmmbuto/zai-codex-bridge 0.4.0 → 0.4.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md ADDED
@@ -0,0 +1,39 @@
+ # Changelog
+
+ All notable changes to this project will be documented in this file.
+
+ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
+ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
+
+ ## [0.4.2] - 2026-01-16
+
+ ### Changed
+ - Replaced the README with expanded setup, usage, and troubleshooting guidance
+ - Clarified Codex provider configuration and proxy endpoint usage
+
+ ## [0.4.1] - 2026-01-16
+
+ ### Added
+ - Tool calling support (MCP/function calls) when `ALLOW_TOOLS=1`
+ - Bridging for `function_call_output` items to Chat `role: tool` messages
+ - Streaming support for `delta.tool_calls` with proper Responses API events
+ - Non-streaming support for `msg.tool_calls` in the final response
+ - Tool call events: `response.output_item.added` (function_call), `response.function_call_arguments.delta`, `response.function_call_arguments.done`
+ - Automated tool call test in the test suite
+
+ ### Changed
+ - `translateResponsesToChat()` now handles `type: function_call_output` items
+ - `streamChatToResponses()` now detects and emits tool call events
+ - `translateChatToResponses()` now includes `function_call` items in the output array
+
+ ### Fixed
+ - Tool responses (from MCP/function calls) are now correctly forwarded upstream as `role: tool` messages
+ - Function call items are now properly included in the `response.completed` output array
+
+ ## [0.4.0] - Previous
+
+ ### Added
+ - Initial release with Responses API to Chat Completions translation
+ - Streaming support with SSE
+ - Health check endpoint
+ - Zero-dependency implementation
package/README.md CHANGED
@@ -1,6 +1,6 @@
- # Z.AI Codex Bridge
+ # ZAI Codex Bridge
 
- > Local proxy that translates OpenAI Responses API format to Z.AI Chat Completions format
+ > Local proxy that translates the OpenAI **Responses API** to Z.AI **Chat Completions** for Codex CLI
 
  [![npm](https://img.shields.io/npm/v/@mmmbuto/zai-codex-bridge?style=flat-square&logo=npm)](https://www.npmjs.org/package/@mmmbuto/zai-codex-bridge)
  [![node](https://img.shields.io/node/v/@mmmbuto/zai-codex-bridge?style=flat-square&logo=node.js)](https://github.com/DioNanos/zai-codex-bridge)
@@ -10,36 +10,39 @@
 
  ## What It Solves
 
- Codex uses the OpenAI **Responses API** format (with `instructions` and `input` fields), but Z.AI only supports the legacy **Chat Completions** format (with `messages` array).
+ Newer **Codex CLI** versions speak the OpenAI **Responses API** (e.g. `/v1/responses`, with `instructions` + `input` + event-stream semantics).
+ Some gateways/providers (including Z.AI endpoints) only expose the legacy **Chat Completions** API (`messages[]`).
 
  This proxy:
- 1. Accepts Codex requests in **Responses format**
- 2. Translates them to **Chat format**
+ 1. Accepts Codex requests in **Responses** format
+ 2. Translates them to **Chat Completions**
  3. Forwards to Z.AI
- 4. Translates the response back to **Responses format**
+ 4. Translates back to **Responses** format (stream + non-stream)
  5. Returns to Codex
 
- **Without this proxy**, Codex fails with error from Z.AI:
+ **Without this proxy**, Codex may fail with an upstream error such as:
  ```json
  {"error":{"code":"1214","message":"Incorrect role information"}}
  ```
 
+ > If you’re using **codex-termux** with a gateway that doesn’t fully match the Responses API, this proxy is the recommended compatibility layer.
+
  ---
 
  ## Features
 
- - Transparent translation between Responses and Chat formats
+ - Responses API ↔ Chat Completions translation (request + response)
  - Streaming support with SSE (Server-Sent Events)
- - Zero dependencies - uses Node.js built-ins only
- - Health checks at `/health` endpoint
- - Configurable via CLI flags and environment variables
+ - Health check endpoint (`/health`)
+ - Works on Linux/macOS/Windows (WSL) and Termux (ARM64)
+ - **Optional tool/MCP bridging** (see “Tools / MCP” below)
+ - Zero runtime dependencies (Node.js built-ins only)
 
  ---
 
  ## Requirements
 
- - **Node.js**: 18.0.0 or higher (for native `fetch`)
- - **Platform**: Linux, macOS, Windows (WSL), Termux (ARM64)
+ - **Node.js**: 18+ (native `fetch`)
  - **Port**: 31415 (default, configurable)
 
  ---
@@ -54,28 +57,34 @@ npm install -g @mmmbuto/zai-codex-bridge
 
  ## Quick Start
 
- ### 1. Start the Proxy
+ ### 1) Start the Proxy
 
  ```bash
  zai-codex-bridge
  ```
 
- The proxy will listen on `http://127.0.0.1:31415`
+ Default listen address:
+
+ - `http://127.0.0.1:31415`
 
- ### 2. Configure Codex
+ ### 2) Configure Codex
 
- Add to `~/.codex/config.toml`:
+ Add this provider to `~/.codex/config.toml`:
 
  ```toml
  [model_providers.zai_proxy]
  name = "ZAI via local proxy"
- base_url = "http://127.0.0.1:31415/v1"
+ base_url = "http://127.0.0.1:31415"
  env_key = "OPENAI_API_KEY"
  wire_api = "responses"
  stream_idle_timeout_ms = 3000000
  ```
 
- ### 3. Use with Codex
+ > Notes:
+ > - `base_url` is the server root. Codex will call `/v1/responses`; this proxy supports that path.
+ > - We keep `env_key = "OPENAI_API_KEY"` because Codex expects that key name. You can store your Z.AI key there.
+
+ ### 3) Run Codex via the Proxy
 
  ```bash
  export OPENAI_API_KEY="your-zai-api-key"
@@ -84,6 +93,29 @@ codex -m "GLM-4.7" -c model_provider="zai_proxy"
 
  ---
 
+ ## Tools / MCP (optional)
+
+ Codex tool calling (MCP/function calls) requires an additional compatibility layer:
+
+ - Codex uses **Responses API tool events** (function_call items with arguments delta/done, plus `function_call_output` inputs)
+ - Some upstream models/providers may not emit tool calls (or may emit them in a different shape)
+
+ This proxy can **attempt** to bridge tools when enabled:
+
+ ```bash
+ export ALLOW_TOOLS=1
+ ```
+
+ Important:
+
+ - Tool support is **provider/model dependent**. If upstream never emits tool calls, the proxy can’t invent them.
+ - When tools are enabled, the proxy translates:
+   - Responses `tools` + `tool_choice` → Chat `tools` + `tool_choice`
+   - Chat `tool_calls` (stream and non-stream) → Responses function-call events
+   - Responses `function_call_output` → Chat `role: tool` messages
+
+ (See `CHANGELOG.md` and `src/server.js` for the exact implemented behavior.)
+
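+ As a rough illustration, the two mappings look like this. This is a simplified sketch only (helper names here are illustrative); the authoritative logic lives in `src/server.js`:
+
+ ```js
+ // Sketch: Responses `function_call_output` item -> Chat `role: tool` message.
+ function toolOutputToChatMessage(item) {
+   return {
+     role: 'tool',
+     tool_call_id: item.call_id || '',
+     content: typeof item.output === 'string' ? item.output : JSON.stringify(item.output),
+   };
+ }
+
+ // Sketch: completed Chat tool call -> Responses `function_call` output item.
+ function toolCallToResponsesItem(tc) {
+   return {
+     id: tc.id,
+     type: 'function_call',
+     status: 'completed',
+     call_id: tc.id,
+     name: tc.function?.name || '',
+     arguments: tc.function?.arguments || '',
+   };
+ }
+ ```
+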
+ ---
+
  ## CLI Usage
 
  ```bash
@@ -97,7 +129,7 @@ zai-codex-bridge --port 8080
  zai-codex-bridge --log-level debug
 
  # Custom Z.AI endpoint
- zai-codex-bridge --zai-base-url https://custom.z.ai/v1
+ zai-codex-bridge --zai-base-url https://api.z.ai/api/coding/paas/v4
 
  # Show help
  zai-codex-bridge --help
@@ -106,17 +138,20 @@ zai-codex-bridge --help
  ### Environment Variables
 
  ```bash
- export PORT=31415
  export HOST=127.0.0.1
+ export PORT=31415
  export ZAI_BASE_URL=https://api.z.ai/api/coding/paas/v4
  export LOG_LEVEL=info
+
+ # Optional
+ export ALLOW_TOOLS=1
  ```
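+
+ A sketch of how these settings resolve (illustrative defaults taken from the docs above; the `config` shape is hypothetical, not the exact implementation):
+
+ ```js
+ // Sketch: environment resolution with the documented defaults.
+ const config = {
+   host: process.env.HOST || '127.0.0.1',
+   port: Number(process.env.PORT) || 31415,
+   zaiBaseUrl: process.env.ZAI_BASE_URL || 'https://api.z.ai/api/coding/paas/v4',
+   logLevel: process.env.LOG_LEVEL || 'info',
+   allowTools: process.env.ALLOW_TOOLS === '1',
+ };
+ ```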
 
  ---
 
- ## Auto-Starting Proxy with Codex
+ ## Auto-start the Proxy with Codex (recommended)
 
- You can create a shell function that starts the proxy automatically when needed:
+ Use a shell function that starts the proxy only if needed:
 
  ```bash
  codex-with-zai() {
@@ -125,114 +160,107 @@ codex-with-zai() {
    local HEALTH="http://${HOST}:${PORT}/health"
    local PROXY_PID=""
 
-   # Start proxy only if not responding
    if ! curl -fsS "$HEALTH" >/dev/null 2>&1; then
      zai-codex-bridge --host "$HOST" --port "$PORT" >/dev/null 2>&1 &
      PROXY_PID=$!
      trap 'kill $PROXY_PID 2>/dev/null' EXIT INT TERM
-     sleep 2
+     sleep 1
    fi
 
-   # Run codex
-   codex -m "GLM-4.7" -c model_provider="zai_proxy" "$@"
+   codex -c model_provider="zai_proxy" "$@"
  }
  ```
 
  Usage:
+
  ```bash
- codex-with-zai
- # Proxy auto-starts, Codex runs
- # Ctrl+D exits both
+ export OPENAI_API_KEY="your-zai-api-key"
+ codex-with-zai -m "GLM-4.7"
  ```
 
  ---
 
  ## API Endpoints
 
- ### `POST /responses`
- Accepts OpenAI Responses API format, translates to Chat, returns Responses format.
-
- ### `POST /v1/responses`
- Same as `/responses` (for compatibility with Codex's path structure).
-
- ### `GET /health`
- Health check endpoint.
+ - `POST /responses` — accepts Responses API requests
+ - `POST /v1/responses` — same as above (Codex default path)
+ - `GET /health` — health check
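+
+ For a quick manual check (this assumes the default address and a Z.AI key in `OPENAI_API_KEY`, as configured above):
+
+ ```js
+ // Sketch: minimal non-streaming request against the proxy (Node 18+ built-in fetch).
+ const res = await fetch('http://127.0.0.1:31415/v1/responses', {
+   method: 'POST',
+   headers: {
+     'Content-Type': 'application/json',
+     'Authorization': `Bearer ${process.env.OPENAI_API_KEY}`,
+   },
+   body: JSON.stringify({
+     model: 'GLM-4.7',
+     instructions: 'Be helpful',
+     input: [{ role: 'user', content: 'Hello' }],
+   }),
+ });
+ console.log(await res.json());
+ ```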
 
  ---
 
- ## Translation Details
+ ## Translation Overview
 
  ### Request: Responses → Chat
 
- ```javascript
- // Input (Responses format)
+ ```js
+ // Input (Responses)
  {
-   model: "GLM-4.7",
-   instructions: "Be helpful",
-   input: [
-     { role: "user", content: "Hello" }
-   ],
-   max_output_tokens: 1000
+   "model": "GLM-4.7",
+   "instructions": "Be helpful",
+   "input": [{ "role": "user", "content": "Hello" }],
+   "max_output_tokens": 1000
  }
 
- // Output (Chat format)
+ // Output (Chat)
  {
-   model: "GLM-4.7",
-   messages: [
-     { role: "system", content: "Be helpful" },
-     { role: "user", content: "Hello" }
+   "model": "GLM-4.7",
+   "messages": [
+     { "role": "system", "content": "Be helpful" },
+     { "role": "user", "content": "Hello" }
    ],
-   max_tokens: 1000
+   "max_tokens": 1000
  }
  ```
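+
+ In code, the essential request mapping is roughly the following. This is a sketch only (`toChatRequest` is an illustrative name); see `translateResponsesToChat()` in `src/server.js` for the real item-type and role handling:
+
+ ```js
+ // Sketch: minimal Responses -> Chat request mapping.
+ function toChatRequest(req) {
+   const messages = [];
+   // `instructions` becomes the system message.
+   if (req.instructions) messages.push({ role: 'system', content: req.instructions });
+   // Only input items that carry a role become chat messages.
+   for (const item of req.input || []) {
+     if (item.role) messages.push({ role: item.role, content: item.content });
+   }
+   return { model: req.model, messages, max_tokens: req.max_output_tokens };
+ }
+ ```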
 
- ### Response: Chat → Responses
+ ### Response: Chat → Responses (simplified)
 
- ```javascript
- // Input (Chat format)
+ ```js
+ // Input (Chat)
  {
-   choices: [{
-     message: { content: "Hi there!" }
-   }],
-   usage: {
-     prompt_tokens: 10,
-     completion_tokens: 5
-   }
+   "choices": [{ "message": { "content": "Hi there!" } }],
+   "usage": { "prompt_tokens": 10, "completion_tokens": 5 }
  }
 
- // Output (Responses format)
+ // Output (Responses - simplified)
  {
-   output: [{ value: "Hi there!", content_type: "text" }],
-   status: "completed",
-   usage: {
-     input_tokens: 10,
-     output_tokens: 5
-   }
+   "status": "completed",
+   "output": [{ "type": "message", "content": [{ "type": "output_text", "text": "Hi there!" }] }],
+   "usage": { "input_tokens": 10, "output_tokens": 5 }
  }
  ```
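+
+ Streaming uses the same mapping, emitted as SSE events. The rough order for a plain text reply is sketched below (simplified and assumed; the exact events are emitted by `streamChatToResponses()` in `src/server.js`):
+
+ ```js
+ // Sketch: simplified event order for a streamed text reply.
+ const eventOrder = [
+   'response.output_item.added',  // the message item opens
+   'response.output_text.delta',  // repeated once per upstream chunk
+   'response.output_item.done',   // the message item closes
+   'response.completed',          // carries the final response object
+ ];
+ ```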
 
  ---
 
- ## Testing
+ ## Troubleshooting
 
- ```bash
- # Set your Z.AI API key
- export ZAI_API_KEY="sk-your-key"
+ ### 401 / “token expired or incorrect”
+ - Verify the key is exported as `OPENAI_API_KEY` (or matches `env_key` in `config.toml`).
+ - Make sure the proxy is not overwriting `Authorization` headers.
 
- # Run test suite
- npm run test:curl
- ```
+ ### 404 on `/v1/responses`
+ - Ensure `base_url` points to the proxy root (example: `http://127.0.0.1:31415`).
+ - Confirm the proxy is running and `/health` returns `ok`.
+
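+ A quick reachability check (Node 18+ built-in `fetch`; default address assumed):
+
+ ```js
+ // Sketch: ping the proxy's health endpoint.
+ const r = await fetch('http://127.0.0.1:31415/health');
+ console.log(r.status, await r.text());
+ ```
+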
+ ### 502 Bad Gateway
+ - Proxy reached upstream but upstream failed. Enable debug logging:
+   ```bash
+   LOG_LEVEL=debug zai-codex-bridge
+   ```
 
  ---
 
- ## Documentation
+ ## Versioning Policy
 
- Complete usage guide: [docs/guide.md](docs/guide.md)
+ This repo follows **small, safe patch increments** while stabilizing provider compatibility:
+
+ - Keep patch bumps only: `0.4.0 → 0.4.1 → 0.4.2 → ...`
+ - No big jumps unless strictly necessary.
+
+ (See `CHANGELOG.md` for details.)
 
  ---
 
  ## License
 
- MIT License - Copyright (c) 2026 Davide A. Guglielmi
-
+ MIT License - Copyright (c) 2026 Davide A. Guglielmi
  See [LICENSE](LICENSE) for details.
package/RELEASING.md ADDED
@@ -0,0 +1,80 @@
+ # Releasing
+
+ This document describes the release process for zai-codex-bridge.
+
+ ## Version Policy
+
+ - **Patch releases only** (0.4.0 → 0.4.1 → 0.4.2, etc.)
+ - No minor or major bumps without explicit discussion
+ - Always increment by +0.0.1 from the current version
+
+ ## Release Steps
+
+ ### 1. Run Tests
+
+ ```bash
+ # Set your API key
+ export ZAI_API_KEY="sk-your-key"
+
+ # Run test suite
+ npm run test:curl
+ # or
+ npm test
+ ```
+
+ ### 2. Bump Version
+
+ ```bash
+ # Use the release script (recommended)
+ npm run release:patch
+
+ # Or manually edit package.json and change:
+ # "version": "0.4.0" -> "version": "0.4.1"
+ ```
+
+ ### 3. Update CHANGELOG.md
+
+ Add an entry for the new version following the [Keep a Changelog](https://keepachangelog.com/en/1.0.0/) format.
+
+ ### 4. Commit
+
+ ```bash
+ git add package.json CHANGELOG.md
+ git commit -m "chore: release v0.4.1"
+ ```
+
+ ### 5. Tag
+
+ ```bash
+ git tag v0.4.1
+ ```
+
+ ### 6. Push (Optional)
+
+ ```bash
+ git push
+ git push --tags
+ ```
+
+ ### 7. Publish to npm
+
+ ```bash
+ npm publish
+ ```
+
+ ## release:patch Script
+
+ The `npm run release:patch` script:
+
+ 1. Verifies the current version is 0.4.x
+ 2. Bumps the patch version by +0.0.1
+ 3. Refuses to bump minor/major versions
+ 4. Updates package.json in-place
+
+ Example:
+ ```bash
+ $ npm run release:patch
+ Current version: 0.4.0
+ Bumping to: 0.4.1
+ Updated package.json
+ ```
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "@mmmbuto/zai-codex-bridge",
-   "version": "0.4.0",
+   "version": "0.4.2",
    "description": "Local proxy that translates OpenAI Responses API format to Z.AI Chat Completions format for Codex",
    "main": "src/server.js",
    "bin": {
@@ -8,7 +8,9 @@
    },
    "scripts": {
      "start": "node src/server.js",
-     "test:curl": "node scripts/test-curl.js"
+     "test": "node scripts/test-curl.js",
+     "test:curl": "node scripts/test-curl.js",
+     "release:patch": "node scripts/release-patch.js"
    },
    "keywords": [
      "codex",
package/scripts/release-patch.js ADDED
@@ -0,0 +1,60 @@
+ #!/usr/bin/env node
+
+ /**
+  * Safe patch version bumper
+  * Only allows patch releases (0.4.0 -> 0.4.1)
+  * Refuses minor/major bumps
+  */
+
+ const fs = require('fs');
+ const path = require('path');
+
+ const PACKAGE_PATH = path.join(__dirname, '..', 'package.json');
+
+ function bumpPatch(version) {
+   const parts = version.split('.').map(Number);
+
+   if (parts.length !== 3) {
+     throw new Error(`Invalid version format: ${version}`);
+   }
+
+   const [major, minor, patch] = parts;
+
+   // Only allow 0.4.x versions
+   if (major !== 0 || minor !== 4) {
+     console.error(`ERROR: Current version is ${version}`);
+     console.error('This script only supports patch releases for 0.4.x versions.');
+     console.error('For other version changes, edit package.json manually.');
+     process.exit(1);
+   }
+
+   const newVersion = `0.4.${patch + 1}`;
+   return newVersion;
+ }
+
+ function main() {
+   // Read package.json
+   const pkg = JSON.parse(fs.readFileSync(PACKAGE_PATH, 'utf8'));
+   const currentVersion = pkg.version;
+
+   console.log(`Current version: ${currentVersion}`);
+
+   // Bump patch
+   const newVersion = bumpPatch(currentVersion);
+   console.log(`Bumping to: ${newVersion}`);
+
+   // Update package.json
+   pkg.version = newVersion;
+
+   // Write back
+   fs.writeFileSync(PACKAGE_PATH, JSON.stringify(pkg, null, 2) + '\n');
+
+   console.log('Updated package.json');
+   console.log('\nNext steps:');
+   console.log('  1. Update CHANGELOG.md');
+   console.log('  2. Commit: git add package.json CHANGELOG.md && git commit -m "chore: release v' + newVersion + '"');
+   console.log('  3. Tag: git tag v' + newVersion);
+   console.log('  4. Publish: npm publish');
+ }
+
+ main();
package/scripts/test-curl.js CHANGED
@@ -135,6 +135,155 @@ async function testStreamingFormat() {
    });
  }
 
+ async function testToolCall() {
+   console.log('\n=== Testing POST /v1/responses (Tool Call) ===\n');
+   console.log('Note: This test requires ALLOW_TOOLS=1 and upstream model support for tools.\n');
+
+   const payload = {
+     model: 'GLM-4.7',
+     instructions: 'You are a helpful assistant.',
+     input: [
+       {
+         role: 'user',
+         content: 'What is the weather in Tokyo? Use the get_weather tool.'
+       }
+     ],
+     tools: [
+       {
+         type: 'function',
+         function: {
+           name: 'get_weather',
+           description: 'Get the current weather for a location',
+           parameters: {
+             type: 'object',
+             properties: {
+               location: {
+                 type: 'string',
+                 description: 'The city and state, e.g. San Francisco, CA'
+               }
+             },
+             required: ['location']
+           }
+         }
+       }
+     ],
+     tool_choice: 'auto',
+     stream: true
+   };
+
+   return new Promise((resolve, reject) => {
+     const options = {
+       hostname: PROXY_HOST,
+       port: PROXY_PORT,
+       path: '/v1/responses',
+       method: 'POST',
+       headers: {
+         'Content-Type': 'application/json',
+         'Authorization': `Bearer ${ZAI_API_KEY}`
+       }
+     };
+
+     const req = http.request(options, (res) => {
+       console.log('Status:', res.statusCode);
+
+       if (res.statusCode !== 200) {
+         let body = '';
+         res.on('data', (chunk) => body += chunk);
+         res.on('end', () => {
+           console.log('Error response:', body);
+           resolve({ status: 'error', message: body });
+         });
+         return;
+       }
+
+       console.log('\nStreaming response:');
+       let buffer = '';
+       let foundOutputItemAdded = false;
+       let foundFunctionCallDelta = false;
+       let foundOutputItemDone = false;
+       let foundResponseCompleted = false;
+
+       res.on('data', (chunk) => {
+         buffer += chunk.toString();
+         const events = buffer.split('\n\n');
+         buffer = events.pop() || '';
+
+         for (const evt of events) {
+           const lines = evt.split('\n');
+           for (const line of lines) {
+             if (!line.startsWith('data:')) continue;
+             const payload = line.slice(5).trim();
+             if (!payload || payload === '[DONE]') continue;
+
+             try {
+               const data = JSON.parse(payload);
+               const type = data.type;
+
+               // Look for tool call events
+               if (type === 'response.output_item.added') {
+                 if (data.item?.type === 'function_call') {
+                   foundOutputItemAdded = true;
+                   console.log('[EVENT] output_item.added (function_call):', data.item?.name);
+                 }
+               }
+
+               if (type === 'response.function_call_arguments.delta') {
+                 foundFunctionCallDelta = true;
+                 process.stdout.write('.');
+               }
+
+               if (type === 'response.output_item.done') {
+                 if (data.item?.type === 'function_call') {
+                   foundOutputItemDone = true;
+                 }
+               }
+
+               if (type === 'response.completed') {
+                 foundResponseCompleted = true;
+               }
+             } catch (e) {
+               // Skip parse errors
+             }
+           }
+         }
+       });
+
+       res.on('end', () => {
+         console.log();
+         console.log('\n=== Tool Call Test Results ===');
+
+         if (!foundOutputItemAdded) {
+           console.log('SKIP: upstream did not return tool_calls');
+           console.log('This may mean:');
+           console.log(' - ALLOW_TOOLS is not enabled on the proxy');
+           console.log(' - The model does not support tool calls');
+           console.log(' - The prompt did not trigger a tool call');
+           resolve({ status: 'skipped', reason: 'no tool calls from upstream' });
+           return;
+         }
+
+         const passed = foundOutputItemAdded && foundFunctionCallDelta && foundOutputItemDone && foundResponseCompleted;
+         console.log('output_item.added (function_call):', foundOutputItemAdded ? 'PASS' : 'FAIL');
+         console.log('function_call_arguments.delta:', foundFunctionCallDelta ? 'PASS' : 'FAIL');
+         console.log('output_item.done (function_call):', foundOutputItemDone ? 'PASS' : 'FAIL');
+         console.log('response.completed:', foundResponseCompleted ? 'PASS' : 'FAIL');
+         console.log('\nOverall:', passed ? 'PASS' : 'FAIL');
+
+         resolve({ status: passed ? 'pass' : 'fail', results: { foundOutputItemAdded, foundFunctionCallDelta, foundOutputItemDone, foundResponseCompleted } });
+       });
+     });
+
+     req.on('error', (err) => {
+       console.error('Request error:', err.message);
+       reject(err);
+     });
+
+     req.write(JSON.stringify(payload, null, 2));
+     req.end();
+   });
+ }
+
  async function main() {
    console.log('zai-codex-bridge Manual Test');
    console.log('================================');
@@ -151,7 +300,22 @@ async function main() {
    await testResponsesFormat();
    await testStreamingFormat();
 
+   // Tool call test (optional - depends on upstream support)
+   console.log('\n\n=== Tool Support Tests ===');
+   const toolResult = await testToolCall();
+
    console.log('\n=== All Tests Complete ===\n');
+   console.log('Summary:');
+   console.log(' Health: PASS');
+   console.log(' Non-streaming: PASS');
+   console.log(' Streaming: PASS');
+   if (toolResult.status === 'pass') {
+     console.log(' Tool calls: PASS');
+   } else if (toolResult.status === 'skipped') {
+     console.log(' Tool calls: SKIPPED (upstream does not support or did not return tool_calls)');
+   } else {
+     console.log(' Tool calls: FAIL or ERROR');
+   }
  } catch (error) {
    console.error('\nError:', error.message);
    process.exit(1);
package/src/server.js CHANGED
@@ -143,18 +143,41 @@ function translateResponsesToChat(request) {
        content: request.input
      });
    } else if (Array.isArray(request.input)) {
-     // Array of ResponseItem objects - filter only Message items with role
+     // Array of ResponseItem objects
      for (const item of request.input) {
+       // Handle function_call_output items (tool responses) - only if ALLOW_TOOLS
+       if (ALLOW_TOOLS && item.type === 'function_call_output') {
+         const toolMsg = {
+           role: 'tool',
+           tool_call_id: item.call_id || item.tool_call_id || '',
+           content: ''
+         };
+
+         // Extract content from output or content field
+         if (item.output !== undefined) {
+           toolMsg.content = typeof item.output === 'string'
+             ? item.output
+             : JSON.stringify(item.output);
+         } else if (item.content !== undefined) {
+           toolMsg.content = typeof item.content === 'string'
+             ? item.content
+             : JSON.stringify(item.content);
+         }
+
+         messages.push(toolMsg);
+         continue;
+       }
+
        // Only process items with a 'role' field (Message items)
        // Skip Reasoning, FunctionCall, LocalShellCall, etc.
        if (!item.role) continue;
 
        // Map non-standard roles to Z.AI-compatible roles
-       // Z.AI accepts: system, user, assistant
+       // Z.AI accepts: system, user, assistant, tool
        let role = item.role;
        if (role === 'developer') {
          role = 'user'; // Map developer to user
-       } else if (role !== 'system' && role !== 'user' && role !== 'assistant') {
+       } else if (role !== 'system' && role !== 'user' && role !== 'assistant' && role !== 'tool') {
          // Skip any other non-standard roles
          continue;
        }
@@ -238,6 +261,7 @@ function translateResponsesToChat(request) {
  /**
   * Translate Chat Completions response to Responses format
   * Handles both output_text and reasoning_text content
+  * Handles tool_calls if present (only if ALLOW_TOOLS)
   */
  function translateChatToResponses(chatResponse, responsesRequest, ids) {
    const msg = chatResponse.choices?.[0]?.message ?? {};
@@ -262,6 +286,27 @@ function translateChatToResponses(chatResponse, responsesRequest, ids) {
      content,
    };
 
+   // Build output array: message item + any function_call items
+   const finalOutput = [msgItem];
+
+   // Handle tool_calls (only if ALLOW_TOOLS)
+   if (ALLOW_TOOLS && msg.tool_calls && Array.isArray(msg.tool_calls)) {
+     for (const tc of msg.tool_calls) {
+       const callId = tc.id || `call_${randomUUID().replace(/-/g, '')}`;
+       const name = tc.function?.name || '';
+       const args = tc.function?.arguments || '';
+
+       finalOutput.push({
+         id: callId,
+         type: 'function_call',
+         status: 'completed',
+         call_id: callId,
+         name: name,
+         arguments: typeof args === 'string' ? args : JSON.stringify(args),
+       });
+     }
+   }
+
    return buildResponseObject({
      id: responseId,
      model: responsesRequest?.model || chatResponse.model || DEFAULT_MODEL,
@@ -269,7 +314,7 @@ function translateChatToResponses(chatResponse, responsesRequest, ids) {
      created_at: createdAt,
      completed_at: nowSec(),
      input: responsesRequest?.input || [],
-     output: [msgItem],
+     output: finalOutput,
      tools: responsesRequest?.tools || [],
    });
  }
@@ -400,6 +445,10 @@ async function streamChatToResponses(upstreamBody, res, responsesRequest, ids) {
    let out = '';
    let reasoning = '';
 
+   // Tool call tracking (only if ALLOW_TOOLS)
+   const toolCallsMap = new Map(); // index -> { callId, name, arguments, partialArgs, done }
+   const toolOutputBase = 1; // tool-call items are appended after the message item (output index 0)
+
    while (true) {
      const { done, value } = await reader.read();
      if (done) break;
@@ -428,6 +477,106 @@ async function streamChatToResponses(upstreamBody, res, responsesRequest, ids) {
 
    const delta = chunk.choices?.[0]?.delta || {};
 
+   // Handle tool_calls (only if ALLOW_TOOLS)
+   if (ALLOW_TOOLS && delta.tool_calls && Array.isArray(delta.tool_calls)) {
+     const finishReason = chunk.choices?.[0]?.finish_reason;
+
+     for (const tc of delta.tool_calls) {
+       const index = tc.index;
+       if (index == null) continue;
+
+       if (!toolCallsMap.has(index)) {
+         // New tool call - send output_item.added
+         const callId = tc.id || `call_${randomUUID().replace(/-/g, '')}`;
+         const name = tc.function?.name || '';
+
+         toolCallsMap.set(index, {
+           callId,
+           name,
+           arguments: '',
+           partialArgs: '',
+           done: false
+         });
+
+         const fnItemInProgress = {
+           id: callId,
+           type: 'function_call',
+           status: 'in_progress',
+           call_id: callId,
+           name: name,
+           arguments: '',
+         };
+
+         sse({
+           type: 'response.output_item.added',
+           output_index: toolOutputBase + index,
+           item: fnItemInProgress,
+         });
+
+         if (name) {
+           sse({
+             type: 'response.function_call_name.done',
+             item_id: callId,
+             output_index: toolOutputBase + index,
+             name: name,
+           });
+         }
+       }
+
+       const tcData = toolCallsMap.get(index);
+
+       // Handle name update if it comes later
+       if (tc.function?.name && !tcData.name) {
+         tcData.name = tc.function.name;
+         sse({
+           type: 'response.function_call_name.done',
+           item_id: tcData.callId,
+           output_index: toolOutputBase + index,
+           name: tcData.name,
+         });
+       }
+
+       // Handle arguments delta
+       if (tc.function?.arguments && typeof tc.function.arguments === 'string') {
+         tcData.partialArgs += tc.function.arguments;
+
+         sse({
+           type: 'response.function_call_arguments.delta',
+           item_id: tcData.callId,
+           output_index: toolOutputBase + index,
+           delta: tc.function.arguments,
+         });
+       }
+
+       // Close the tool call once upstream signals completion; emit "done" events only once
+       if (finishReason === 'tool_calls' && !tcData.done) {
+         tcData.done = true;
+         tcData.arguments = tcData.partialArgs;
+
+         sse({
+           type: 'response.function_call_arguments.done',
+           item_id: tcData.callId,
+           output_index: toolOutputBase + index,
+           arguments: tcData.arguments,
+         });
+
+         const fnItemDone = {
+           id: tcData.callId,
+           type: 'function_call',
+           status: 'completed',
+           call_id: tcData.callId,
+           name: tcData.name,
+           arguments: tcData.arguments,
+         };
+
+         sse({
+           type: 'response.output_item.done',
+           output_index: toolOutputBase + index,
+           item: fnItemDone,
+         });
+       }
+     }
+     // Skip to next iteration after handling tool_calls
+     continue;
+   }
+
    // Do NOT mix reasoning into output_text
    if (typeof delta.reasoning_content === 'string' && delta.reasoning_content.length) {
      reasoning += delta.reasoning_content;
@@ -495,6 +644,21 @@ async function streamChatToResponses(upstreamBody, res, responsesRequest, ids) {
      item: msgItemDone,
    });
 
+   // Build final output array: message item + any function_call items
+   const finalOutput = [msgItemDone];
+   if (ALLOW_TOOLS && toolCallsMap.size > 0) {
+     for (const tcData of toolCallsMap.values()) {
+       finalOutput.push({
+         id: tcData.callId,
+         type: 'function_call',
+         status: 'completed',
+         call_id: tcData.callId,
+         name: tcData.name,
+         // Fall back to accumulated args if upstream never sent finish_reason "tool_calls"
+         arguments: tcData.arguments || tcData.partialArgs,
+       });
+     }
+   }
+
    const completed = buildResponseObject({
      id: responseId,
      model: responsesRequest?.model || DEFAULT_MODEL,
@@ -502,14 +666,14 @@ async function streamChatToResponses(upstreamBody, res, responsesRequest, ids) {
      created_at: createdAt,
      completed_at: nowSec(),
      input: responsesRequest?.input || [],
-     output: [msgItemDone],
+     output: finalOutput,
      tools: responsesRequest?.tools || [],
    });
 
    sse({ type: 'response.completed', response: completed });
    res.end();
 
-   log('info', `Stream completed - ${out.length} output, ${reasoning.length} reasoning`);
+   log('info', `Stream completed - ${out.length} output, ${reasoning.length} reasoning, ${toolCallsMap.size} tool_calls`);
  }
 
  /**