@inductiv/node-red-openai-api 6.22.0 → 6.27.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,102 @@
+ [
+ {
+ "id": "2fd54c5f0db04e4b",
+ "type": "tab",
+ "label": "Responses Phase Example",
+ "disabled": false,
+ "info": "",
+ "env": []
+ },
+ {
+ "id": "0c8ab1e1e23314fd",
+ "type": "inject",
+ "z": "2fd54c5f0db04e4b",
+ "name": "Create Phased Response",
+ "props": [
+ {
+ "p": "ai.model",
+ "v": "gpt-5.4",
+ "vt": "str"
+ },
+ {
+ "p": "ai.prompt_cache_key",
+ "v": "responses-phase-example-v1",
+ "vt": "str"
+ },
+ {
+ "p": "ai.input[0]",
+ "v": "{\"type\":\"message\",\"role\":\"assistant\",\"phase\":\"commentary\",\"content\":[{\"type\":\"output_text\",\"text\":\"I will think through the request and then deliver the answer clearly.\"}]}",
+ "vt": "json"
+ },
+ {
+ "p": "ai.input[1]",
+ "v": "{\"type\":\"message\",\"role\":\"user\",\"content\":[{\"type\":\"input_text\",\"text\":\"Summarize the purpose of NOA-9 in one concise paragraph.\"}]}",
+ "vt": "json"
+ }
+ ],
+ "repeat": "",
+ "crontab": "",
+ "once": false,
+ "onceDelay": 0.1,
+ "topic": "",
+ "x": 330,
+ "y": 240,
+ "wires": [
+ [
+ "4e34ed1e9c6c8f31"
+ ]
+ ]
+ },
+ {
+ "id": "f2dc3af908db35ee",
+ "type": "comment",
+ "z": "2fd54c5f0db04e4b",
+ "name": "Set your API key in Service Host, then send the inject node.",
+ "info": "This example shows how to preserve assistant `phase` on a Responses request.\n\nBefore running:\n- open the `OpenAI Auth` config node and set a valid API key\n- keep `gpt-5.4` or replace it with another Responses-compatible model if needed\n\nWhat this flow sends:\n- `prompt_cache_key`\n- an assistant message labeled `phase: commentary`\n- a follow-up user message\n\nExpected result:\n- the response appears in the debug sidebar as the normal `createModelResponse` output",
+ "x": 440,
+ "y": 180,
+ "wires": []
+ },
+ {
+ "id": "4e34ed1e9c6c8f31",
+ "type": "OpenAI API",
+ "z": "2fd54c5f0db04e4b",
+ "name": "Create Model Response",
+ "property": "ai",
+ "propertyType": "msg",
+ "service": "c9f1ae09c3b1c2cf",
+ "method": "createModelResponse",
+ "x": 560,
+ "y": 240,
+ "wires": [
+ [
+ "0b5d64c45d8709c8"
+ ]
+ ]
+ },
+ {
+ "id": "0b5d64c45d8709c8",
+ "type": "debug",
+ "z": "2fd54c5f0db04e4b",
+ "name": "Model Response",
+ "active": true,
+ "tosidebar": true,
+ "console": false,
+ "tostatus": false,
+ "complete": "true",
+ "targetType": "full",
+ "statusVal": "",
+ "statusType": "auto",
+ "x": 770,
+ "y": 240,
+ "wires": []
+ },
+ {
+ "id": "c9f1ae09c3b1c2cf",
+ "type": "Service Host",
+ "apiBase": "https://api.openai.com/v1",
+ "secureApiKeyHeaderOrQueryName": "Authorization",
+ "organizationId": "",
+ "name": "OpenAI Auth"
+ }
+ ]
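
The inject node in the flow above is simply assembling a Responses request body on `msg.ai`. For comparison, here is a minimal sketch of the same payload built directly in JavaScript; the `client.responses.create(...)` call is shown only as a comment, since sending it assumes a configured SDK client and API key.

```javascript
// Sketch of the request body that the "Create Phased Response" inject node
// assembles under msg.ai. Built as a plain object so the shape is easy to
// inspect; nothing here contacts the API.
const body = {
  model: "gpt-5.4",
  prompt_cache_key: "responses-phase-example-v1",
  input: [
    {
      type: "message",
      role: "assistant",
      phase: "commentary", // intermediate assistant message, not the final answer
      content: [
        {
          type: "output_text",
          text: "I will think through the request and then deliver the answer clearly.",
        },
      ],
    },
    {
      type: "message",
      role: "user",
      content: [
        {
          type: "input_text",
          text: "Summarize the purpose of NOA-9 in one concise paragraph.",
        },
      ],
    },
  ],
};

// e.g. const resp = await client.responses.create(body);
console.log(body.input.map((m) => m.role).join(" -> ")); // prints "assistant -> user"
```

The field values match the flow's `ai.*` inject properties byte for byte; only the transport (a Node-RED message versus a direct SDK call) differs.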
@@ -0,0 +1,107 @@
+ [
+ {
+ "id": "af5f0e330a6a497e",
+ "type": "tab",
+ "label": "Tool Search Example",
+ "disabled": false,
+ "info": "Minimal Responses Tool Search example.\n\nThis flow keeps the request contract small on purpose:\n- `tool_search` is enabled\n- the remote MCP tool is marked with `defer_loading: true`\n- the model can decide whether to load and use that deferred tool\n\nTool Search is most useful when many deferred tools are available. This example uses one deferred MCP tool so the payload shape is easy to inspect in Node-RED.",
+ "env": []
+ },
+ {
+ "id": "5b8e323b9e16bdb9",
+ "type": "inject",
+ "z": "af5f0e330a6a497e",
+ "name": "Create Tool Search Request",
+ "props": [
+ {
+ "p": "ai.model",
+ "v": "gpt-5.4",
+ "vt": "str"
+ },
+ {
+ "p": "ai.prompt_cache_key",
+ "v": "responses-tool-search-example-v1",
+ "vt": "str"
+ },
+ {
+ "p": "ai.tools[0]",
+ "v": "{\"type\":\"tool_search\"}",
+ "vt": "json"
+ },
+ {
+ "p": "ai.tools[1]",
+ "v": "{\"type\":\"mcp\",\"server_label\":\"deepwiki\",\"server_url\":\"https://mcp.deepwiki.com/mcp\",\"require_approval\":\"never\",\"defer_loading\":true}",
+ "vt": "json"
+ },
+ {
+ "p": "ai.input",
+ "v": "Use the available documentation tools to summarize the transport protocols supported by the MCP specification.",
+ "vt": "str"
+ }
+ ],
+ "repeat": "",
+ "crontab": "",
+ "once": false,
+ "onceDelay": 0.1,
+ "topic": "",
+ "x": 340,
+ "y": 240,
+ "wires": [
+ [
+ "8abca470f7d99d0b"
+ ]
+ ]
+ },
+ {
+ "id": "48f13ed89ac6c252",
+ "type": "comment",
+ "z": "af5f0e330a6a497e",
+ "name": "Set your API key, then send the inject node to test Tool Search.",
+ "info": "This example demonstrates the minimal Responses Tool Search contract.\n\nBefore running:\n- open the `OpenAI Auth` config node and set a valid API key\n- confirm the remote MCP server in `ai.tools[1]` is still reachable for your environment\n- keep or replace `gpt-5.4` as needed\n\nWhat this flow sends:\n- `tools[0] = { type: \"tool_search\" }`\n- a deferred MCP tool with `defer_loading: true`\n- a user prompt that should encourage the model to inspect and load the deferred tool only if useful\n\nExpected result:\n- the debug sidebar shows the Responses output, including any tool-search-driven behavior chosen by the model",
+ "x": 470,
+ "y": 180,
+ "wires": []
+ },
+ {
+ "id": "8abca470f7d99d0b",
+ "type": "OpenAI API",
+ "z": "af5f0e330a6a497e",
+ "name": "Create Model Response",
+ "property": "ai",
+ "propertyType": "msg",
+ "service": "51cb919695c46177",
+ "method": "createModelResponse",
+ "x": 590,
+ "y": 240,
+ "wires": [
+ [
+ "5c79d8c7482b4f46"
+ ]
+ ]
+ },
+ {
+ "id": "5c79d8c7482b4f46",
+ "type": "debug",
+ "z": "af5f0e330a6a497e",
+ "name": "Tool Search Response",
+ "active": true,
+ "tosidebar": true,
+ "console": false,
+ "tostatus": false,
+ "complete": "true",
+ "targetType": "full",
+ "statusVal": "",
+ "statusType": "auto",
+ "x": 820,
+ "y": 240,
+ "wires": []
+ },
+ {
+ "id": "51cb919695c46177",
+ "type": "Service Host",
+ "apiBase": "https://api.openai.com/v1",
+ "secureApiKeyHeaderOrQueryName": "Authorization",
+ "organizationId": "",
+ "name": "OpenAI Auth"
+ }
+ ]
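
Decoded from the `ai.tools[...]` inject properties above, the minimal Tool Search contract is a two-element tools array. A sketch in plain JavaScript, with values copied from the flow; nothing here contacts the MCP server or the API:

```javascript
// The two tool definitions the "Create Tool Search Request" inject node sets:
// tools[0] enables Tool Search itself, tools[1] is a remote MCP tool whose
// definition stays out of the context window until the model loads it.
const tools = [
  { type: "tool_search" },
  {
    type: "mcp",
    server_label: "deepwiki",
    server_url: "https://mcp.deepwiki.com/mcp",
    require_approval: "never",
    defer_loading: true, // deferred: loaded only if tool search surfaces it
  },
];

// Tool Search pays off as the number of deferred tools grows; this flow keeps
// it at one so the request stays easy to read in the Node-RED debug sidebar.
const deferred = tools.filter((t) => t.defer_loading === true);
console.log(deferred.length); // prints 1
```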
@@ -0,0 +1,172 @@
+ [
+ {
+ "id": "94f237d7e5864fc4",
+ "type": "tab",
+ "label": "Responses WebSocket Example",
+ "disabled": false,
+ "info": "Responses websocket example.\n\nThis flow demonstrates the websocket lifecycle through one `OpenAI API` node instance:\n- connect the websocket\n- send a `response.create` client event\n- inspect streamed server events from the same node\n- close the connection cleanly",
+ "env": []
+ },
+ {
+ "id": "7d097a8ea8f89f3b",
+ "type": "comment",
+ "z": "94f237d7e5864fc4",
+ "name": "Set your API key, then use the same OpenAI API node for connect, send, and close.",
+ "info": "Before running:\n- open the `OpenAI Auth` config node and set a valid API key\n- this example stores websocket commands in `msg.ai` because the `OpenAI API` node property is configured to `ai`\n- if your node uses the default property, move the same fields under `msg.payload`\n\nImportant:\n- the websocket connection lives inside a single `OpenAI API` node instance\n- for that reason, connect, send, and close all target the same node\n- click `Connect Responses WebSocket` first, then `Send response.create Event`, then `Close Responses WebSocket`",
+ "x": 480,
+ "y": 140,
+ "wires": []
+ },
+ {
+ "id": "9c6d8525af6b0e2a",
+ "type": "comment",
+ "z": "94f237d7e5864fc4",
+ "name": "What comes out of the node?",
+ "info": "This node produces two kinds of output:\n- immediate local acknowledgements for `connect`, `send`, and `close`\n- asynchronous Responses server events such as `response.created`, `response.in_progress`, `response.output_text.delta`, and `response.completed`\n\nServer events arrive in `msg.payload` and include connection metadata under `msg.openai`.",
+ "x": 410,
+ "y": 220,
+ "wires": []
+ },
+ {
+ "id": "b4a7fef11decb00a",
+ "type": "inject",
+ "z": "94f237d7e5864fc4",
+ "name": "Connect Responses WebSocket",
+ "props": [
+ {
+ "p": "ai.action",
+ "v": "connect",
+ "vt": "str"
+ },
+ {
+ "p": "ai.connection_id",
+ "v": "demo-connection-1",
+ "vt": "str"
+ }
+ ],
+ "repeat": "",
+ "crontab": "",
+ "once": false,
+ "onceDelay": 0.1,
+ "topic": "",
+ "x": 250,
+ "y": 320,
+ "wires": [
+ [
+ "2f33fb2b774e1f8f"
+ ]
+ ]
+ },
+ {
+ "id": "db2a19cffba6f2d1",
+ "type": "inject",
+ "z": "94f237d7e5864fc4",
+ "name": "Send response.create Event",
+ "props": [
+ {
+ "p": "ai.action",
+ "v": "send",
+ "vt": "str"
+ },
+ {
+ "p": "ai.connection_id",
+ "v": "demo-connection-1",
+ "vt": "str"
+ },
+ {
+ "p": "ai.event",
+ "v": "{\"type\":\"response.create\",\"model\":\"gpt-5.4\",\"input\":\"Say hello from Responses websocket mode in one sentence.\"}",
+ "vt": "json"
+ }
+ ],
+ "repeat": "",
+ "crontab": "",
+ "once": false,
+ "onceDelay": 0.1,
+ "topic": "",
+ "x": 230,
+ "y": 400,
+ "wires": [
+ [
+ "2f33fb2b774e1f8f"
+ ]
+ ]
+ },
+ {
+ "id": "22f0c821e335fcbc",
+ "type": "inject",
+ "z": "94f237d7e5864fc4",
+ "name": "Close Responses WebSocket",
+ "props": [
+ {
+ "p": "ai.action",
+ "v": "close",
+ "vt": "str"
+ },
+ {
+ "p": "ai.connection_id",
+ "v": "demo-connection-1",
+ "vt": "str"
+ },
+ {
+ "p": "ai.reason",
+ "v": "Example flow complete",
+ "vt": "str"
+ }
+ ],
+ "repeat": "",
+ "crontab": "",
+ "once": false,
+ "onceDelay": 0.1,
+ "topic": "",
+ "x": 240,
+ "y": 480,
+ "wires": [
+ [
+ "2f33fb2b774e1f8f"
+ ]
+ ]
+ },
+ {
+ "id": "2f33fb2b774e1f8f",
+ "type": "OpenAI API",
+ "z": "94f237d7e5864fc4",
+ "name": "Manage Model Response WebSocket",
+ "property": "ai",
+ "propertyType": "msg",
+ "service": "c2b37d3bb51e2796",
+ "method": "manageModelResponseWebSocket",
+ "x": 600,
+ "y": 400,
+ "wires": [
+ [
+ "5260c60b85f63c44"
+ ]
+ ]
+ },
+ {
+ "id": "5260c60b85f63c44",
+ "type": "debug",
+ "z": "94f237d7e5864fc4",
+ "name": "Responses WebSocket",
+ "active": true,
+ "tosidebar": true,
+ "console": false,
+ "tostatus": false,
+ "complete": "true",
+ "targetType": "full",
+ "statusVal": "",
+ "statusType": "auto",
+ "x": 860,
+ "y": 400,
+ "wires": []
+ },
+ {
+ "id": "c2b37d3bb51e2796",
+ "type": "Service Host",
+ "apiBase": "https://api.openai.com/v1",
+ "secureApiKeyHeaderOrQueryName": "Authorization",
+ "organizationId": "",
+ "name": "OpenAI Auth"
+ }
+ ]
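
The three inject nodes above drive one node instance through a connect → send → close lifecycle. A sketch of the three `msg.ai` command objects in firing order; the field names mirror the flow, while the node's exact contract is defined by this package and is not re-verified here:

```javascript
// The msg.ai commands the three inject nodes emit, in the order they should
// fire. All three target the same "Manage Model Response WebSocket" node,
// because the websocket connection lives inside that single node instance.
const commands = [
  { action: "connect", connection_id: "demo-connection-1" },
  {
    action: "send",
    connection_id: "demo-connection-1",
    event: {
      type: "response.create",
      model: "gpt-5.4",
      input: "Say hello from Responses websocket mode in one sentence.",
    },
  },
  { action: "close", connection_id: "demo-connection-1", reason: "Example flow complete" },
];

// Every command must reference the same connection_id to reach the same socket.
const ids = new Set(commands.map((c) => c.connection_id));
console.log(ids.size); // prints 1
```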
@@ -0,0 +1,96 @@
+ # Deep research report on new features in the OpenAI TypeScript/JavaScript SDK from v6.23.0 to v6.27.0
+
+ ## Executive summary
+
+ Between **February 23, 2026** and **March 5, 2026**, the official OpenAI TypeScript/JavaScript SDK (the `openai` package, from the `openai/openai-node` repository) shipped a tightly linked cluster of features that collectively push the SDK toward **long-running, tool-heavy, agentic workflows**.
+
+ Across releases **v6.23.0 → v6.27.0**, the most consequential additions were:
+
+ - **Responses API WebSocket mode support in the SDK** (v6.23.0), aligning with the platform’s WebSocket Mode guidance for persistent `/v1/responses` sessions and incremental continuation via `previous_response_id`.
+ - **Realtime model ID expansions** (v6.24.0), adding `gpt-realtime-1.5` and `gpt-audio-1.5` to the SDK’s Realtime session model unions.
+ - A new **assistant message `phase` field** (v6.25.0), designed to label “commentary” vs “final_answer” messages and to improve reliability in longer, tool-rich runs (especially with Codex-family workflows).
+ - A major March 5 platform/SDK step: typed support for **`gpt-5.4`**, plus first-class **Tool Search** and the **GA `computer` tool**, with an SDK follow-up cleanup of the computer tool type naming in v6.27.0.
+ - A smaller but practically important schema addition surfaced under “bug fixes”: the **`prompt_cache_key` parameter in Responses** (v6.26.0), which matters for prompt caching strategies in production.
+
+ Overall, these increments form a coherent story: **persistent sessions + explicit message staging (`phase`) + selective tool loading (tool search) + UI automation loops (computer tool) + new flagship models (gpt‑5.4, realtime 1.5)**.
+
+ ## Scope and methodology
+
+ This report covers **only** SDK changes introduced in **v6.23.0, v6.24.0, v6.25.0, v6.26.0, and v6.27.0** (inclusive). Release dates and change descriptions were sourced primarily from the official release notes on GitHub.
+
+ To verify what “feature added” means at the SDK surface (types, method availability, tool schemas), I cross-checked:
+
+ - The **GitHub releases** for each version in scope.
+ - The corresponding **tagged source** in the repository (e.g., `v6.24.0`, `v6.26.0`, `v6.27.0`) for the relevant TypeScript type additions and changes.
+ - Official OpenAI platform documentation (guides and API reference), including WebSocket Mode, Tool Search, Computer Use, and the model pages.
+
+ The SDK is generated from an OpenAPI specification using Stainless, which is relevant context for why many changes appear as schema/type updates tied closely to platform docs.
+
+ All code samples below are **illustrative** and were **not executed** in a runtime environment for this report; each table row includes an explicit verification note.
+
+ ## Release-by-release analysis
+
+ The five releases in scope line up closely with OpenAI platform feature launches in late February and early March 2026.
+
+ In **v6.23.0 (2026-02-23)**, the SDK added **“websockets for responses api”**. This aligns with the platform’s WebSocket Mode guidance for the Responses API: a persistent WebSocket connection to `/v1/responses`, with continuation by sending only incremental input plus `previous_response_id`. In the tagged SDK code, this appears as a dedicated WebSocket helper (`ResponsesWS`) built atop a shared emitter base and a `wss://…/v1/responses` URL constructor. The public API reference also describes a TypeScript entrypoint, `client.responses.connect()`, for connecting to a persistent Responses API WebSocket. (The reference page shows the return type as `void`; details about the returned handle are therefore treated as **unspecified** in this report.)
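
The incremental-continuation idea behind WebSocket mode can be sketched as two `response.create` client events, where the second turn carries only new input plus `previous_response_id`. The event field names follow the WebSocket Mode guidance summarized above; the response id is a hypothetical placeholder, and nothing here opens a real socket:

```javascript
// Sketch of incremental continuation over a persistent Responses websocket:
// the second turn sends only the new input plus previous_response_id instead
// of replaying the whole conversation. Illustrative payloads only.
const firstTurn = {
  type: "response.create",
  model: "gpt-5.4",
  input: "List three MCP transport protocols.",
};

const secondTurn = {
  type: "response.create",
  model: "gpt-5.4",
  previous_response_id: "resp_123", // hypothetical id returned by the first turn
  input: "Now summarize them in one sentence.",
};

// The continuation turn carries strictly less context than a full replay would.
console.log("previous_response_id" in firstTurn, "previous_response_id" in secondTurn);
```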
+
+ In **v6.24.0 (2026-02-24)**, the SDK added **`gpt-realtime-1.5`** and **`gpt-audio-1.5`** to Realtime. This is concretely reflected in the Realtime session type union in `src/resources/realtime/realtime.ts`, where both new model IDs appear in v6.24.0 and are absent in v6.23.0. The OpenAI API changelog explicitly notes `gpt-realtime-1.5` for the Realtime API on Feb 23, 2026, and `gpt-audio-1.5` for the Chat Completions API on the same date. The Realtime model page describes `gpt-realtime-1.5` as an audio model for low-latency voice agents; `gpt-audio-1.5` is likewise described as an audio model release. Since the SDK includes `gpt-audio-1.5` in the Realtime session model list, this report treats Realtime compatibility as **SDK-supported** (type-level), while the platform changelog’s framing of Realtime vs Chat Completions availability remains **partially specified**.
+
+ In **v6.25.0 (2026-02-24)**, the SDK introduced **`phase`** as a first-class field in Responses message types. In the SDK’s `EasyInputMessage` interface, `phase` is defined as `'commentary' | 'final_answer' | null`. The OpenAI API changelog describes `phase` as a new Responses API feature that labels assistant messages as commentary vs final answer. Platform guidance further elaborates why it exists (especially in Codex-like flows) and enumerates the valid values.
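
That union can be mirrored with a small runtime check. The validator below is purely illustrative, not an SDK API; in the SDK itself the constraint is enforced only at the TypeScript type level:

```javascript
// Hypothetical runtime mirror of the v6.25.0 type:
// phase?: 'commentary' | 'final_answer' | null
const PHASES = new Set(["commentary", "final_answer", null]);

function hasValidPhase(message) {
  // Messages may omit phase entirely; when present it must be in the union.
  return !("phase" in message) || PHASES.has(message.phase);
}

console.log(hasValidPhase({ role: "assistant", phase: "commentary", content: "Working..." })); // true
console.log(hasValidPhase({ role: "assistant", phase: "answer", content: "Done." })); // false: not in the union
```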
+
+ In **v6.26.0 (2026-03-05)**, the SDK aligned with the platform’s **March 5, 2026** launches: adding **`gpt-5.4`**, introducing **Tool Search** schema support, and introducing the **GA `computer` tool**. Concretely:
+
+ - `gpt-5.4` appears in the SDK’s `ChatModel` union in v6.26.0 and is absent in v6.25.0.
+ - The SDK adds a `ToolSearchTool` (`type: "tool_search"`) and makes `defer_loading?: boolean` available across tool definitions, matching platform guidance that Tool Search requires (1) adding `"tool_search"` and (2) marking tools as deferred.
+ - Tool search call/output items appear as `tool_search_call` and `tool_search_output` shapes, including the recorded items `ResponseToolSearchCall` and `ResponseToolSearchOutputItem`.
+ - The `computer` tool path is supported in the platform docs as `{ type: "computer" }` with `gpt-5.4`, and the SDK includes a `computer` tool interface plus the associated action schema used in `computer_call` loops.
+
+ Also in v6.26.0, the release notes explicitly call out a parameter addition under bug fixes: **“prompt_cache_key param in responses”**. The v6.27.0 Responses type comments also reference `prompt_cache_key` as a successor to a deprecated `user` field for caching optimizations, reinforcing that it is part of the SDK’s supported surface in this timeframe.
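
As a sketch of how `prompt_cache_key` might be used for cache bucketing: the key format below (account plus system-prompt version) is an arbitrary illustration, not a platform-mandated convention, and the request is built but never sent:

```javascript
// Build a stable bucketing key so that requests sharing the same account and
// system-prompt version land in the same prompt-cache bucket. The key shape
// is an assumed convention for illustration only.
function cacheKeyFor(accountId, systemPromptVersion) {
  return `acct:${accountId}:sys:${systemPromptVersion}`;
}

const params = {
  model: "gpt-5.4",
  prompt_cache_key: cacheKeyFor(42, "v1"), // same inputs => same cache bucket
  input: "Do the task.",
};

console.log(params.prompt_cache_key); // prints "acct:42:sys:v1"
```

Keeping the key deterministic across retries and similar requests is the point; a key that changes per request would defeat the cache bucketing.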
+
+ In **v6.27.0 (2026-03-05)**, the SDK made a **developer-experience clean-up** around computer tool naming: “The GA ComputerTool now uses the ComputerTool class. The `computer_use_preview` tool is moved to ComputerUsePreview.” This corresponds to a swap/rename visible in the types across tags: in v6.26.0, the GA tool (`type: "computer"`) was represented as `ComputerUseTool` while the preview tool used `ComputerTool` (with display settings). In v6.27.0, the GA tool becomes `ComputerTool { type: 'computer' }` and the preview tool becomes `ComputerUsePreviewTool { type: 'computer_use_preview', display_width, display_height, environment }`.
+
+ ## Comprehensive feature table
+
+ The table below lists each feature introduced within the scope window, with the required columns plus documentation status and verification notes.
+
+ | Feature name | Release version | Concise feature description | Primary use-case(s) | Short JS/TS use-case example | Official doc / release note link | Documentation status | Verification note |
+ |---|---|---|---|---|---|---|---|
+ | Responses WebSocket support in SDK | v6.23.0 | Adds SDK support for Responses API WebSocket mode (“websockets for responses api”), enabling persistent `wss://…/v1/responses` sessions and event-driven streaming. | Long-running agent loops with many tool round trips; lower per-turn overhead via incremental input + `previous_response_id`. | `import OpenAI from "openai";`<br>`import { ResponsesWS } from "openai/resources/responses/ws";`<br>`const client = new OpenAI();`<br>`const ws = new ResponsesWS(client);`<br>`ws.on("response.output_text.delta", e => process.stdout.write(e.delta));`<br>`ws.send({ type:"response.create", model:"gpt-5.4", input:"Hello" });` | WebSocket Mode guide: <https://developers.openai.com/api/docs/guides/websocket-mode/><br>SDK release: <https://github.com/openai/openai-node/releases/tag/v6.23.0> | Documented in OpenAI guides + SDK release notes. | Example structure validated against the SDK `ResponsesWS` implementation and WebSocket Mode guide; not executed. |
+ | Realtime model ID additions (`gpt-realtime-1.5`) | v6.24.0 | Adds `gpt-realtime-1.5` to the SDK’s Realtime session `model` union types. | Voice agents and low-latency multimodal conversations (Realtime API). | `import OpenAI from "openai";`<br>`const client = new OpenAI();`<br>`const secret = await client.realtime.clientSecrets.create({ session: { model: "gpt-realtime-1.5" } });`<br>`console.log(secret.value);` | Model doc: <https://developers.openai.com/api/docs/models/gpt-realtime-1.5/><br>SDK release: <https://github.com/openai/openai-node/releases/tag/v6.24.0> | Documented in model docs + SDK release notes; also reflected in SDK types. | Model ID inclusion validated by diffing SDK type unions (v6.23.0 vs v6.24.0); not executed. |
+ | Realtime model ID additions (`gpt-audio-1.5`) | v6.24.0 | Adds `gpt-audio-1.5` to the SDK’s Realtime session `model` union types. | Audio generation/transcription workflows that rely on Realtime session schemas in the SDK; use-case framing in the OpenAI changelog is split across APIs. | `import OpenAI from "openai";`<br>`const client = new OpenAI();`<br>`await client.realtime.clientSecrets.create({ session: { model: "gpt-audio-1.5" } });` | Model doc: <https://developers.openai.com/api/docs/models/gpt-audio-1.5/><br>SDK release: <https://github.com/openai/openai-node/releases/tag/v6.24.0> | Model is documented; Realtime-vs-Chat-Completions availability is partially specified in the changelog text, while SDK types include it for Realtime sessions. | Type-level availability validated via tagged SDK types; not executed. |
+ | `phase` field for assistant messages in Responses | v6.25.0 | Adds `phase?: "commentary" \| "final_answer" \| null` to assistant message inputs/outputs for the Responses API. | Long-running tasks where you want to label intermediate assistant “preambles” vs the final answer; helps avoid early stopping and supports Codex patterns. | `import OpenAI from "openai";`<br>`const client = new OpenAI();`<br>`await client.responses.create({`<br>`model:"gpt-5.4",`<br>`input:[{ type:"message", role:"assistant", phase:"commentary", content:"Working..." },{ role:"user", content:"Continue." }]`<br>`});` | `phase` overview (Codex guide): <https://developers.openai.com/cookbook/examples/gpt-5/codex_prompting_guide/><br>SDK release: <https://github.com/openai/openai-node/releases/tag/v6.25.0> | Documented in the OpenAI changelog + guides; present in SDK message types. | Field presence validated in v6.25.0 types and OpenAI docs; example not executed. |
+ | `gpt-5.4` model ID typed in SDK | v6.26.0 | Adds `gpt-5.4` to the SDK’s `ChatModel` type union (usable across Chat Completions and Responses). | High-capability text+vision reasoning and agent workflows; default recommendation in OpenAI model guidance. | `import OpenAI from "openai";`<br>`const client = new OpenAI();`<br>`const r = await client.responses.create({ model:"gpt-5.4", input:"Summarize this." });`<br>`console.log(r.output_text);` | Model doc: <https://developers.openai.com/api/docs/models/gpt-5.4><br>SDK release: <https://github.com/openai/openai-node/releases/tag/v6.26.0> | Documented in OpenAI model docs + API changelog; reflected in SDK type unions. | Model ID union validation performed by comparing tags; request example not executed. |
+ | Tool Search tool (`type:"tool_search"`) + deferred tool schemas (`defer_loading`) | v6.26.0 | Adds `ToolSearchTool` (`type:"tool_search"`) and `defer_loading?: boolean` across tools; includes tool-search call/output item shapes (`tool_search_call`, `tool_search_output`). | Cutting token overhead and preserving cache behavior when exposing large tool surfaces; loading only relevant tool definitions at runtime. | `import OpenAI from "openai";`<br>`const client = new OpenAI();`<br>`const resp = await client.responses.create({`<br>`model:"gpt-5.4",`<br>`tools:[{ type:"tool_search" },{ type:"function", name:"crm.lookup", defer_loading:true, parameters:{ type:"object", properties:{ id:{type:"string"} } }, strict:true }],`<br>`input:"Look up customer 123 and summarize."`<br>`});` | Tool Search guide: <https://developers.openai.com/api/docs/guides/tools-tool-search/><br>SDK release: <https://github.com/openai/openai-node/releases/tag/v6.26.0> | Documented in OpenAI guides + changelog; SDK type surfaces match (`tool_search`, `defer_loading`, call/output items). | Example validated against Tool Search guide concepts and SDK type definitions; not executed. |
+ | GA Computer tool (`type:"computer"`) for Responses + upgrade path from preview | v6.26.0 | Introduces GA `computer` tool support (tool type `computer`, computer-call loops with batched `actions[]`), aligning with `gpt-5.4` computer use workflows. | UI automation loops (agent controls a browser/VM via screenshots and returned actions); migration away from `computer-use-preview`. | `import OpenAI from "openai";`<br>`const client = new OpenAI();`<br>`const resp = await client.responses.create({ model:"gpt-5.4", tools:[{ type:"computer" }], input:"Open the site and search penguin." });`<br>`// then handle resp.output computer_call + send computer_call_output screenshots` | Computer use guide: <https://developers.openai.com/api/docs/guides/tools-computer-use/><br>SDK release: <https://github.com/openai/openai-node/releases/tag/v6.26.0> | Documented in OpenAI guide + changelog; SDK includes the `computer` tool type. | Example pattern validated against the official Computer use guide; loop scaffolding not executed. |
+ | `prompt_cache_key` parameter support in Responses | v6.26.0 | Adds/repairs support for `prompt_cache_key` in Responses request parameters (called out explicitly in release notes). | Prompt caching strategies where you want stable cache bucketing keys across similar requests/users or shared system prompts. | `import OpenAI from "openai";`<br>`const client = new OpenAI();`<br>`await client.responses.create({ model:"gpt-5.4", prompt_cache_key:"acct:42:sys:v1", input:"Do the task." });` | SDK release (mentions `prompt_cache_key`): <https://github.com/openai/openai-node/releases/tag/v6.26.0> | Mentioned in SDK release notes under bug fixes; also referenced in Responses type docs. | Validated via release note text and presence in SDK type documentation; not executed; exact caching semantics beyond docs are unspecified. |
+ | Computer tool type/class naming stabilization (GA vs preview) | v6.27.0 | Renames the GA vs preview tool interfaces so the GA `computer` tool maps to `ComputerTool`, while `computer_use_preview` maps to `ComputerUsePreviewTool`. | Reduces confusion when migrating from preview: developers can map tool objects to clearer TypeScript shapes in GA codebases. | `// GA`<br>`tools:[{ type:"computer" }]`<br>`// Preview`<br>`tools:[{ type:"computer_use_preview", display_width:1024, display_height:768, environment:"browser" }]` | SDK release: <https://github.com/openai/openai-node/releases/tag/v6.27.0><br>Computer migration guide: <https://developers.openai.com/api/docs/guides/tools-computer-use/> | Type rename is documented in SDK release notes; preview-to-GA migration is documented in the platform guide. | Type mapping validated by comparing v6.26.0 vs v6.27.0 tagged SDK types; not executed. |
+
+ ## Timeline and feature trajectory
+
+ ```mermaid
+ timeline
+ title openai-node v6.23.0–v6.27.0 releases and major feature additions
+ 2026-02-23 : v6.23.0 — Responses WebSocket support in SDK
+ 2026-02-24 : v6.24.0 — Realtime models: gpt-realtime-1.5, gpt-audio-1.5 (type-level)
+ 2026-02-24 : v6.25.0 — phase field for assistant messages in Responses
+ 2026-03-05 : v6.26.0 — gpt-5.4 typed support; tool_search tool + defer_loading; GA computer tool; prompt_cache_key param
+ 2026-03-05 : v6.27.0 — Computer tool naming cleanup (GA vs preview types/classes)
+ ```
+
+ Read as a progression, the SDK is evolving toward “agent infrastructure” primitives:
+
+ - **Transport**: WebSocket Mode for the Responses API supports persistent links for multi-step tool loops.
+ - **Conversation discipline**: `phase` formalizes what is intermediate vs final in assistant message history, which matters when you carry state forward (especially in stateless or tool-heavy flows).
+ - **Tool scaling**: Tool Search provides a mechanism to keep large tool surfaces out of the context window until needed, preserving cache behavior and controlling token usage.
+ - **Embodied interaction**: the GA `computer` tool pushes the core agent loop beyond API calls into UI actions driven by screenshots and returned action batches.
+ - **Model alignment**: the releases introduce or type-enable the models that anchor these workflows (`gpt-realtime-1.5`, `gpt-5.4`).
+
+ ## Verification and caveats
+
+ All example snippets in this report were validated by **reading official docs and tagged SDK source/types**, and by cross-checking against the official release notes for the relevant versions. No runtime execution was performed.
+
+ A few details remain explicitly **unspecified** based on what was visible in primary sources:
+
+ - The TypeScript API reference page for `client.responses.connect()` shows the signature as `client.responses.connect(RequestOptions options?): void`, while the code snippet uses `await client.responses.connect();`. The return value/handle type is therefore treated as unspecified here.
+ - The platform changelog text explicitly associates `gpt-audio-1.5` with Chat Completions on Feb 23, 2026, while the SDK includes it in the Realtime session model union in v6.24.0. This report treats that as “SDK type-level support”; confirm the precise availability across endpoints against up-to-date model availability for your account.
+ - Prompt caching semantics for `prompt_cache_key` are referenced by the SDK change note and type docs; any deeper operational guidance (cache hit-rate strategies, key design patterns) should follow the platform’s prompt caching documentation and testing under your workload.