@inductiv/node-red-openai-api 1.103.0 → 6.27.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +213 -86
- package/examples/realtime/client-secrets.json +182 -0
- package/examples/responses/computer-use.json +142 -0
- package/examples/responses/mcp.json +1 -1
- package/examples/responses/phase.json +102 -0
- package/examples/responses/tool-search.json +107 -0
- package/examples/responses/websocket.json +172 -0
- package/internals/openai-api-features-v6.23.0-v6.27.0.md +96 -0
- package/lib.js +12696 -15003
- package/locales/en-US/node.json +50 -1
- package/node.html +1723 -1012
- package/node.js +204 -54
- package/package.json +9 -7
- package/src/assistants/help.html +1 -77
- package/src/audio/help.html +1 -37
- package/src/batch/help.html +3 -17
- package/src/chat/help.html +11 -89
- package/src/container-files/help.html +1 -27
- package/src/containers/help.html +8 -18
- package/src/conversations/help.html +135 -0
- package/src/conversations/methods.js +73 -0
- package/src/conversations/template.html +10 -0
- package/src/embeddings/help.html +1 -11
- package/src/evals/help.html +249 -0
- package/src/evals/methods.js +114 -0
- package/src/evals/template.html +14 -0
- package/src/files/help.html +4 -17
- package/src/fine-tuning/help.html +1 -35
- package/src/images/help.html +1 -45
- package/src/lib.js +53 -1
- package/src/messages/help.html +19 -39
- package/src/messages/methods.js +13 -0
- package/src/messages/template.html +7 -18
- package/src/models/help.html +1 -5
- package/src/moderations/help.html +1 -5
- package/src/node.html +126 -37
- package/src/realtime/help.html +209 -0
- package/src/realtime/methods.js +45 -0
- package/src/realtime/template.html +7 -0
- package/src/responses/help.html +286 -63
- package/src/responses/methods.js +234 -16
- package/src/responses/template.html +21 -1
- package/src/responses/websocket.js +150 -0
- package/src/runs/help.html +1 -123
- package/src/skills/help.html +183 -0
- package/src/skills/methods.js +99 -0
- package/src/skills/template.html +13 -0
- package/src/threads/help.html +1 -15
- package/src/uploads/help.html +1 -21
- package/src/vector-store-file-batches/help.html +1 -27
- package/src/vector-store-file-batches/methods.js +5 -5
- package/src/vector-store-files/help.html +1 -25
- package/src/vector-store-files/methods.js +4 -7
- package/src/vector-stores/help.html +2 -31
- package/src/vector-stores/methods.js +5 -11
- package/src/vector-stores/template.html +7 -22
- package/src/videos/help.html +113 -0
- package/src/videos/methods.js +50 -0
- package/src/videos/template.html +8 -0
- package/src/webhooks/help.html +61 -0
- package/src/webhooks/methods.js +40 -0
- package/src/webhooks/template.html +4 -0
- package/test/openai-methods-mapping.test.js +1559 -0
- package/test/openai-node-auth-routing.test.js +206 -0
- package/test/openai-responses-websocket.test.js +472 -0
- package/test/service-host-editor-template.test.js +56 -0
- package/test/service-host-node.test.js +185 -0
- package/test/services.test.js +150 -0
- package/test/utils.test.js +78 -0
package/README.md
CHANGED

@@ -1,125 +1,252 @@
 # @inductiv/node-red-openai-api
 
-This
+This project brings the OpenAI API, and OpenAI-compatible APIs, into Node-RED as workflow-native building blocks.
 
-<img width="265" alt="node-red-openai-api-node" src="https://github.com/allanbunch/node-red-openai-api/assets/4503640/ee954c8e-fbf4-4812-a38a-f047cecd1982">
-</a>
-<br>
+It is not just a thin wrapper around text generation. The node exposes modern AI capabilities inside a runtime people can inspect, route, test, and operate: request and response workflows, tools, conversations, streaming, realtime interactions, webhooks, and related API families that matter in real systems.
 
+That makes this repository relevant beyond Node-RED alone. It is a practical implementation of how contemporary AI capabilities can live inside an open workflow environment instead of being locked inside a single vendor surface or hidden behind a one-purpose abstraction.
 
+This package currently targets the `openai` Node SDK `^6.27.0`.
 
+## Why This Exists
 
-@inductiv/node-red-openai-api
-```
+Modern AI work is no longer just "send a prompt, get a string back."
 
+Real systems now involve:
 
+- tool use
+- multi-step workflows
+- structured payloads
+- streaming responses
+- realtime sessions
+- webhook verification
+- provider compatibility and auth routing
 
+Node-RED is already good at orchestration, automation, event handling, integration, and operational clarity. This project connects those strengths to the OpenAI API surface so teams can build AI workflows in an environment that stays visible and composable.
 
+## Core Model
 
-2. Send [OpenAI documented](https://platform.openai.com/docs/api-reference/) API service configuration paramaters to the node using the default `msg.payload` property, or confiure your desired incoming object property reference on the node itself.
-3. Explore the [examples](./examples/) directory for sample implementations.
+The node model in this repository is intentionally simple:
 
+- one `OpenAI API` node handles the runtime method call
+- one `Service Host` config node handles API base URL, auth, and organization settings
+- the selected method determines which OpenAI API context is being called
+- request data is passed in through a configurable message property, `msg.payload` by default
+- method-specific details live in the editor help, example flows, and the underlying SDK contract
 
-- **Configurable and Flexible**: Adapt to a wide range of project requirements, making it easy to integrate AI into your IoT solutions.
-- **Powerful Combinations**: Utilize Node-RED's diverse nodes to build complex, AI-driven IoT workflows with ease.
+In practice, that means one node can cover a wide API surface without turning the flow itself into a maze of special-purpose nodes.
 
-##
+## What It Enables
 
-###
+### Request and Response Workflows
 
+Use the node for direct generation, structured Responses API work, chat-style interactions, moderation, embeddings, image work, audio tasks, and other request/response patterns.
 
-###
+### Tool-Enabled and Multi-Step AI Flows
 
-- Added support for the new `container` endpoint.
-- Added support for the new `containerFiles` endpoint.
-- Added a simple MCP tool use example flow to the `examples` directory. See: [MCP Example](./examples/responses/mcp.json).
-- Refactored code to greatly improve maintainability and stability.
+Use Responses tools, conversations, runs, messages, vector stores, files, skills, and related resources as part of larger control loops and operational workflows.
 
-###
+### Streaming and Realtime Work
 
-- The API call structure and parameters have been refined to align with the latest OpenAI specifications.
-- Some functions and settings from previous versions may no longer be compatible with this update.
-- List responses now exist at the top level of the `msg.payload` object; previously `msg.payload.data`.
+Use streamed Responses output, Realtime client-secret creation, SIP call operations, and persistent Responses websocket connections where a flow needs more than one-shot request handling.
 
+### Event-Driven Integrations
 
+Use webhook signature verification and payload unwrapping in Node-RED flows that react to upstream platform events.
 
+### OpenAI-Compatible Provider Support
 
-- [Cloudflare Workers AI](https://developers.cloudflare.com/workers-ai/)
-- [gpt4all](https://github.com/nomic-ai/gpt4all)
-- [Google AI Studio](https://ai.google.dev/gemini-api/docs/openai#node.js)
-- [Groq](https://groq.com/)
-- [Hugging Face Inference API](https://huggingface.co/docs/api-inference/tasks/chat-completion)
-- [Jan](https://jan.ai/)
-- [Lightning AI](https://lightning.ai/)
-- [LiteLLM](https://www.litellm.ai/)
-- [llama.cpp](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file)
-- [llamafile](https://github.com/Mozilla-Ocho/llamafile)
-- [LlamaIndex](https://www.llamaindex.ai/)
-- [LM Studio](https://lmstudio.ai/)
-- [LMDeploy](https://github.com/InternLM/lmdeploy)
-- [LocalAI](https://localai.io/)
-- [Mistral AI](https://mistral.ai/)
-- [Ollama](https://ollama.com/)
-- [OpenRouter](https://openrouter.ai/)
-- [Titan ML](https://www.titanml.co/)
-- [Vllm](https://docs.vllm.ai/en/v0.6.0/index.html)
-- and many more...
+Use the `Service Host` config to target compatible API providers with custom base URLs, custom auth header names, query-string auth routing, and typed configuration values.
 
-##
+## Requirements
 
+- Node.js `>=18.0.0`
+- Node-RED `>=3.0.0`
 
+## Install
 
-2. **Clone Your Fork**: Clone your fork to your local machine for development.
-3. **Create a Feature Branch**: Create a branch in your forked repository where you can make your changes.
-4. **Commit Your Changes**: Make your changes in your feature branch and commit them with clear, descriptive messages.
-5. **Push to Your Fork**: Push your changes to your fork on GitHub.
-6. **Submit a Pull Request**: Go to the original repository and submit a pull request from your feature branch. Please provide a clear description of the changes and reference any related issues.
+### Node-RED Palette Manager
 
+```text
+@inductiv/node-red-openai-api
+```
 
-- Include unit tests for new features to confirm they work as expected.
-- Update documentation to reflect any changes or additions made.
+### npm
 
+```bash
+cd $HOME/.node-red
+npm i @inductiv/node-red-openai-api
+```
 
+## Quick Start
 
+1. Drop an `OpenAI API` node onto your flow.
+2. Create or select a `Service Host` config node.
+3. Set `API Base` to your provider endpoint. The default OpenAI value is `https://api.openai.com/v1`.
+4. Set `API Key` using either:
+   - `cred` for a stored credential value, or
+   - `env`, `msg`, `flow`, or `global` for a runtime reference
+5. Pick a method on the `OpenAI API` node, such as `create model response`.
+6. Send the request payload through `msg.payload`, or change the node's input property if your flow uses a different message shape.
 
+Example `msg.payload` for `create model response`:
 
+```json
+{
+  "model": "gpt-5-nano",
+  "input": "Write a one-line status summary."
+}
+```
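The Quick Start payload above can also be assembled upstream of the `OpenAI API` node, for example in a function node. A minimal sketch, assuming the example values shown above; the helper name is illustrative and not part of this package:

```javascript
// Sketch: build the `create model response` request body that the
// `OpenAI API` node reads from msg.payload (or a configured property).
// `buildCreateModelResponsePayload` is an illustrative helper name.
function buildCreateModelResponsePayload(model, input) {
  if (typeof model !== "string" || model.length === 0) {
    throw new Error("model must be a non-empty string");
  }
  // `input` may be a plain string or an array of input items,
  // matching the Responses API request shape.
  return { model, input };
}

// In a Node-RED function node this would look like:
//   msg.payload = buildCreateModelResponsePayload("gpt-5-nano", "Write a one-line status summary.");
//   return msg;
const payload = buildCreateModelResponsePayload(
  "gpt-5-nano",
  "Write a one-line status summary."
);
console.log(JSON.stringify(payload));
```

Building the payload in a separate node keeps the `OpenAI API` node itself free of flow-specific logic.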
+
+The node writes its output back to `msg.payload`.
+
+## Start Here
+
+If you want to understand the shape of this node quickly, these example flows are the best entry points:
+
+- [`examples/chat.json`](examples/chat.json)
+  A straightforward API-call flow for getting oriented.
+- [`examples/responses/phase.json`](examples/responses/phase.json)
+  A clean Responses example using newer payload features.
+- [`examples/responses/tool-search.json`](examples/responses/tool-search.json)
+  Shows tool-enabled Responses work in a practical flow.
+- [`examples/responses/computer-use.json`](examples/responses/computer-use.json)
+  Shows the request and follow-up contract for computer-use style workflows.
+- [`examples/responses/websocket.json`](examples/responses/websocket.json)
+  Shows explicit websocket lifecycle handling in one node instance.
+- [`examples/realtime/client-secrets.json`](examples/realtime/client-secrets.json)
+  Shows the Realtime client-secret contract for browser or mobile handoff.
+
+## Current Alignment Highlights
+
+This repository currently includes:
+
+- Responses API support, including `phase`, `prompt_cache_key`, `tool_search`, GA computer-use payloads, cancellation, compaction, input-token counting, and websocket mode
+- Realtime API support, including client-secret creation, SIP call operations, and current SDK-typed model ids such as `gpt-realtime-1.5` and `gpt-audio-1.5`
+- Conversations, Containers, Container Files, Evals, Skills, Videos, and Webhooks support
+- OpenAI-compatible auth routing through the `Service Host` config node
+
+See the in-editor node help for exact method payloads and links to official API documentation.
+
+## API Surface
+
+The method picker covers a wide range of OpenAI API families:
+
+- Assistants
+- Audio
+- Batch
+- Chat Completions
+- Container Files
+- Containers
+- Conversations
+- Embeddings
+- Evals
+- Files
+- Fine-tuning
+- Images
+- Messages
+- Models
+- Moderations
+- Realtime
+- Responses
+- Runs
+- Skills
+- Threads
+- Uploads
+- Vector Store File Batches
+- Vector Store Files
+- Vector Stores
+- Videos
+- Webhooks
+
+`Graders` are supported through Evals payloads via `testing_criteria`, in the same way the official SDK models them.
+
+## Example Index
+
+Import-ready example flows live under `examples/`:
+
+- [`examples/assistants.json`](examples/assistants.json)
+- [`examples/audio.json`](examples/audio.json)
+- [`examples/chat.json`](examples/chat.json)
+- [`examples/embeddings.json`](examples/embeddings.json)
+- [`examples/files.json`](examples/files.json)
+- [`examples/fine-tuning.json`](examples/fine-tuning.json)
+- [`examples/images.json`](examples/images.json)
+- [`examples/messages.json`](examples/messages.json)
+- [`examples/models.json`](examples/models.json)
+- [`examples/moderations.json`](examples/moderations.json)
+- [`examples/realtime/client-secrets.json`](examples/realtime/client-secrets.json)
+- [`examples/responses/computer-use.json`](examples/responses/computer-use.json)
+- [`examples/responses/mcp.json`](examples/responses/mcp.json)
+- [`examples/responses/phase.json`](examples/responses/phase.json)
+- [`examples/responses/tool-search.json`](examples/responses/tool-search.json)
+- [`examples/responses/websocket.json`](examples/responses/websocket.json)
+- [`examples/runs.json`](examples/runs.json)
+- [`examples/threads.json`](examples/threads.json)
+
+## Service Host Notes
+
+The `Service Host` config node handles the provider-specific runtime boundary.
+
+- `API Key` supports `cred`, `env`, `msg`, `flow`, and `global`
+- `API Base` can point at OpenAI or a compatible provider
+- `Auth Header` defaults to `Authorization`, but can be changed for provider-specific auth conventions
+- auth can be sent either as a header or as a query-string parameter
+- `Organization ID` is optional and supports typed values like the other service fields
+
+This is the piece that lets one runtime model work cleanly across both OpenAI and compatible API surfaces.
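The header-versus-query auth routing described in the Service Host notes can be sketched as a small helper. This is a sketch of the concept only; the function and option names are illustrative, not the package's internal API:

```javascript
// Sketch of header-vs-query auth routing. A Service Host can send the
// key either as a Bearer header (default name `Authorization`) or as a
// query-string parameter under a provider-specific name.
// `applyAuth` and its options are illustrative names.
function applyAuth(requestUrl, headers, { name, key, inQuery = false }) {
  if (inQuery) {
    // Query-string routing: append the key under the configured name.
    const url = new URL(requestUrl);
    url.searchParams.set(name, key);
    return { url: url.toString(), headers };
  }
  // Header routing: OpenAI-style providers expect a Bearer token in
  // the configured header.
  return {
    url: requestUrl,
    headers: { ...headers, [name]: `Bearer ${key}` },
  };
}

const viaHeader = applyAuth("https://api.openai.com/v1/responses", {}, {
  name: "Authorization",
  key: "sk-example",
});
const viaQuery = applyAuth("https://provider.example/v1/responses", {}, {
  name: "api-key",
  key: "sk-example",
  inQuery: true,
});
console.log(viaHeader.headers.Authorization);
console.log(viaQuery.url);
```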
+
+## Repository Shape
+
+This repository is structured so the runtime, editor, examples, and generated artifacts stay understandable:
+
+- [`node.js`](node.js)
+  Node-RED runtime entry point and `Service Host` config-node logic.
+- [`src/`](src)
+  Source modules for method implementations, editor templates, and help content.
+- [`src/lib.js`](src/lib.js)
+  Source entry for the bundled runtime method surface.
+- [`lib.js`](lib.js)
+  Generated runtime bundle built from `src/lib.js`.
+- [`src/node.html`](src/node.html)
+  Source editor template that includes the per-family fragments.
+- [`node.html`](node.html)
+  Generated editor asset built from `src/node.html`.
+- [`examples/`](examples)
+  Import-ready Node-RED flows.
+- [`test/`](test)
+  Node test coverage for editor behavior, auth routing, method mapping, and websocket lifecycle behavior.
+
+## Development
+
+```bash
+npm install
+npm run build
+npm test
+```
+
+Generated files are part of the project:
+
+- `node.html` is built from `src/node.html`
+- `lib.js` is built from `src/lib.js`
+
+If you change source templates or runtime source files, rebuild before review or release.
+
+## Contributing
+
+Contributions are welcome. Keep changes clear, intentional, and proven.
+
+Please include:
+
+- a clear scope and rationale
+- tests for behavior changes
+- a short plain-language comment block at the top of each test file you add or touch
+- doc updates when user-facing behavior changes
+
+## License
+
+[MIT](./LICENSE)
package/examples/realtime/client-secrets.json
ADDED

@@ -0,0 +1,182 @@

[
  {
    "id": "7c28d6f5b81f4cf3",
    "type": "tab",
    "label": "Realtime Client Secret Example",
    "disabled": false,
    "info": "Realtime client-secret example.\n\nThis flow demonstrates the correct nested request contract for `createRealtimeClientSecret`:\n- client-secret options such as `expires_after` stay at the top level\n- Realtime session configuration lives under `session`\n- two inject nodes show the newer SDK-typed model ids `gpt-realtime-1.5` and `gpt-audio-1.5`",
    "env": []
  },
  {
    "id": "2b6f81da44f1b506",
    "type": "comment",
    "z": "7c28d6f5b81f4cf3",
    "name": "Set your API key, then run either inject node.",
    "info": "Before running:\n- open the `OpenAI Auth` config node and set a valid API key\n- this example stores request data in `msg.ai` because the `OpenAI API` node property is configured to `ai`\n- if your node uses the default property, the same shape belongs under `msg.payload`\n\nWhat this flow teaches:\n- `expires_after` remains top-level request metadata\n- Realtime session fields are nested under `session`\n- both `gpt-realtime-1.5` and `gpt-audio-1.5` pass through the existing adapter unchanged",
    "x": 430,
    "y": 140,
    "wires": []
  },
  {
    "id": "d48aabf237113ec2",
    "type": "comment",
    "z": "7c28d6f5b81f4cf3",
    "name": "What is a client secret?",
    "info": "A Realtime client secret is not your long-lived OpenAI API key.\n\nIt is a short-lived ephemeral token created server-side so a browser or mobile client can connect to the Realtime API without exposing the main API key.\n\nTypical flow:\n- your server or Node-RED flow creates the client secret\n- the returned `value` is passed to a trusted client application\n- that client uses the secret to open a Realtime session before it expires",
    "x": 430,
    "y": 200,
    "wires": []
  },
  {
    "id": "6e1099b8233f4f6f",
    "type": "inject",
    "z": "7c28d6f5b81f4cf3",
    "name": "Create Realtime 1.5 Client Secret",
    "props": [
      { "p": "ai.expires_after.anchor", "v": "created_at", "vt": "str" },
      { "p": "ai.expires_after.seconds", "v": "600", "vt": "num" },
      { "p": "ai.session.type", "v": "realtime", "vt": "str" },
      { "p": "ai.session.model", "v": "gpt-realtime-1.5", "vt": "str" },
      { "p": "ai.session.output_modalities[0]", "v": "audio", "vt": "str" },
      { "p": "ai.session.instructions", "v": "Speak clearly and keep responses concise.", "vt": "str" }
    ],
    "repeat": "",
    "crontab": "",
    "once": false,
    "onceDelay": 0.1,
    "topic": "",
    "x": 300,
    "y": 260,
    "wires": [["9df67f8a39c32558"]]
  },
  {
    "id": "b7fce6b60f312e8a",
    "type": "inject",
    "z": "7c28d6f5b81f4cf3",
    "name": "Create Audio 1.5 Client Secret",
    "props": [
      { "p": "ai.expires_after.anchor", "v": "created_at", "vt": "str" },
      { "p": "ai.expires_after.seconds", "v": "600", "vt": "num" },
      { "p": "ai.session.type", "v": "realtime", "vt": "str" },
      { "p": "ai.session.model", "v": "gpt-audio-1.5", "vt": "str" },
      { "p": "ai.session.output_modalities[0]", "v": "audio", "vt": "str" },
      { "p": "ai.session.instructions", "v": "Generate short audio responses.", "vt": "str" }
    ],
    "repeat": "",
    "crontab": "",
    "once": false,
    "onceDelay": 0.1,
    "topic": "",
    "x": 290,
    "y": 340,
    "wires": [["9df67f8a39c32558"]]
  },
  {
    "id": "65c6ad57d8ef6d54",
    "type": "comment",
    "z": "7c28d6f5b81f4cf3",
    "name": "Expected result: debug sidebar shows value, expires_at, and session.",
    "info": "Successful responses from `createRealtimeClientSecret` return top-level `value`, `expires_at`, and `session` fields.\n\nWhat to inspect in the debug sidebar:\n- `session.type` should be `realtime`\n- `session.model` should match the inject node you sent\n- `value` is the ephemeral client secret to hand to a browser or mobile client",
    "x": 520,
    "y": 420,
    "wires": []
  },
  {
    "id": "9df67f8a39c32558",
    "type": "OpenAI API",
    "z": "7c28d6f5b81f4cf3",
    "name": "Create Realtime Client Secret",
    "property": "ai",
    "propertyType": "msg",
    "service": "40e8a97d10d65e1e",
    "method": "createRealtimeClientSecret",
    "x": 600,
    "y": 300,
    "wires": [["86453bdf02676d9f"]]
  },
  {
    "id": "86453bdf02676d9f",
    "type": "debug",
    "z": "7c28d6f5b81f4cf3",
    "name": "Realtime Client Secret",
    "active": true,
    "tosidebar": true,
    "console": false,
    "tostatus": false,
    "complete": "true",
    "targetType": "full",
    "statusVal": "",
    "statusType": "auto",
    "x": 860,
    "y": 300,
    "wires": []
  },
  {
    "id": "40e8a97d10d65e1e",
    "type": "Service Host",
    "apiBase": "https://api.openai.com/v1",
    "secureApiKeyHeaderOrQueryName": "Authorization",
    "organizationId": "",
    "name": "OpenAI Auth"
  }
]
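The nested request contract the client-secrets flow above demonstrates can be expressed as a plain builder. A sketch only; the helper is not part of the package, and the field layout mirrors the inject nodes in the example:

```javascript
// Sketch: assemble a `createRealtimeClientSecret` request body with the
// nesting this example flow demonstrates: `expires_after` stays at the
// top level, Realtime session configuration lives under `session`.
// `buildClientSecretRequest` is an illustrative helper name.
function buildClientSecretRequest({ model, instructions, seconds = 600 }) {
  return {
    expires_after: { anchor: "created_at", seconds },
    session: {
      type: "realtime",
      model,
      output_modalities: ["audio"],
      instructions,
    },
  };
}

const req = buildClientSecretRequest({
  model: "gpt-realtime-1.5",
  instructions: "Speak clearly and keep responses concise.",
});
console.log(JSON.stringify(req, null, 2));
```

In the example flow this object is delivered under `msg.ai` because the node's input property is configured to `ai`; with the default configuration the same shape belongs under `msg.payload`.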
package/examples/responses/computer-use.json
ADDED

@@ -0,0 +1,142 @@

[
  {
    "id": "45ed920ef7184d5e",
    "type": "tab",
    "label": "Computer Use Example",
    "disabled": false,
    "info": "Responses computer-use example.\n\nThis flow teaches the request contract rather than claiming to automate the whole browser loop inside Node-RED.\n\nUse it in two stages:\n1. Send the initial request with `tools: [{ type: \"computer\" }]`.\n2. After the model returns a `computer_call`, edit the placeholder `previous_response_id`, `call_id`, and screenshot location in the second inject node, then send the follow-up request with a `computer_call_output` item.\n\nThe actual screenshot capture and action execution still need external orchestration.",
    "env": []
  },
  {
    "id": "af8473d616d4c73f",
    "type": "inject",
    "z": "45ed920ef7184d5e",
    "name": "Create Computer Request",
    "props": [
      { "p": "ai.model", "v": "gpt-5.4", "vt": "str" },
      { "p": "ai.tools[0]", "v": "{\"type\":\"computer\"}", "vt": "json" },
      { "p": "ai.input", "v": "Open the target site, inspect the main page, and tell me what action to take next.", "vt": "str" }
    ],
    "repeat": "",
    "crontab": "",
    "once": false,
    "onceDelay": 0.1,
    "topic": "",
    "x": 320,
    "y": 220,
    "wires": [["1c6d500f04f70757"]]
  },
  {
    "id": "f8116f42c7c6774d",
    "type": "comment",
    "z": "45ed920ef7184d5e",
    "name": "Stage 1: set your API key, then run the initial computer request.",
    "info": "This example teaches the Responses computer-use payload contract.\n\nBefore running:\n- open the `OpenAI Auth` config node and set a valid API key\n- keep or replace `gpt-5.4` as needed\n- understand that this flow does not capture screenshots or execute actions for you\n\nStage 1:\n- send `Create Computer Request`\n- inspect the debug output for a `computer_call` item and note its `call_id`\n- note the response id you need for `previous_response_id` in stage 2",
    "x": 480,
    "y": 160,
    "wires": []
  },
  {
    "id": "66fb2697d4d37b47",
    "type": "inject",
    "z": "45ed920ef7184d5e",
    "name": "Submit Computer Screenshot (edit placeholders)",
    "props": [
      { "p": "ai.model", "v": "gpt-5.4", "vt": "str" },
      { "p": "ai.previous_response_id", "v": "resp_replace_me", "vt": "str" },
      { "p": "ai.input[0]", "v": "{\"type\":\"computer_call_output\",\"call_id\":\"call_replace_me\",\"output\":{\"type\":\"computer_screenshot\",\"image_url\":\"https://example.com/replace-with-real-screenshot.png\"}}", "vt": "json" }
    ],
    "repeat": "",
    "crontab": "",
    "once": false,
    "onceDelay": 0.1,
    "topic": "",
    "x": 390,
    "y": 300,
    "wires": [["1c6d500f04f70757"]]
  },
  {
    "id": "57a7d580372bcfd8",
    "type": "comment",
    "z": "45ed920ef7184d5e",
    "name": "Stage 2: replace the placeholders before submitting the screenshot payload.",
    "info": "Before running `Submit Computer Screenshot (edit placeholders)`:\n- replace `resp_replace_me` with the response id from stage 1\n- replace `call_replace_me` with the `computer_call` id returned by the model\n- replace the placeholder screenshot URL with a real screenshot location the model can read\n\nWhat this flow sends back:\n- `previous_response_id`\n- one `computer_call_output` item\n- one `computer_screenshot` payload inside `output`\n\nExpected result:\n- the model continues the computer-use loop using the screenshot you supplied",
    "x": 500,
    "y": 360,
    "wires": []
  },
  {
    "id": "1c6d500f04f70757",
    "type": "OpenAI API",
    "z": "45ed920ef7184d5e",
    "name": "Create Model Response",
    "property": "ai",
    "propertyType": "msg",
    "service": "c23e0df9b74eae30",
    "method": "createModelResponse",
    "x": 660,
    "y": 260,
    "wires": [["1b8c5a26f1517ac6"]]
  },
  {
    "id": "1b8c5a26f1517ac6",
    "type": "debug",
    "z": "45ed920ef7184d5e",
    "name": "Computer Use Response",
    "active": true,
    "tosidebar": true,
    "console": false,
    "tostatus": false,
    "complete": "true",
    "targetType": "full",
    "statusVal": "",
    "statusType": "auto",
    "x": 900,
    "y": 260,
    "wires": []
  },
  {
    "id": "c23e0df9b74eae30",
    "type": "Service Host",
    "apiBase": "https://api.openai.com/v1",
    "secureApiKeyHeaderOrQueryName": "Authorization",
    "organizationId": "",
    "name": "OpenAI Auth"
  }
]
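The stage-2 follow-up request this flow teaches can be sketched as a plain builder that assembles `previous_response_id` plus one `computer_call_output` item. The helper name is illustrative, and the placeholder ids mirror the inject node above; replace them with real values before sending:

```javascript
// Sketch: assemble the stage-2 computer-use follow-up request:
// `previous_response_id` at the top level, and a single
// `computer_call_output` input item carrying a `computer_screenshot`.
// `buildScreenshotFollowUp` is an illustrative helper name.
function buildScreenshotFollowUp({ model, previousResponseId, callId, imageUrl }) {
  return {
    model,
    previous_response_id: previousResponseId,
    input: [
      {
        type: "computer_call_output",
        call_id: callId,
        output: { type: "computer_screenshot", image_url: imageUrl },
      },
    ],
  };
}

// Placeholder values, matching the example flow; swap in the real
// response id, call id, and screenshot URL from stage 1.
const followUp = buildScreenshotFollowUp({
  model: "gpt-5.4",
  previousResponseId: "resp_replace_me",
  callId: "call_replace_me",
  imageUrl: "https://example.com/replace-with-real-screenshot.png",
});
console.log(JSON.stringify(followUp));
```

As the flow's comments note, capturing the screenshot and executing the model's requested action still happen outside Node-RED; this builder only shapes the request that reports the result back.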