n8n-nodes-openai-chatmodel 1.0.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +174 -0
- package/dist/credentials/OpenAiApi.credentials.d.ts +9 -0
- package/dist/credentials/OpenAiApi.credentials.js +49 -0
- package/dist/nodes/OpenAiChatModel/OpenAiChatModel.node.d.ts +5 -0
- package/dist/nodes/OpenAiChatModel/OpenAiChatModel.node.js +249 -0
- package/dist/nodes/OpenAiChatModel/openai.svg +3 -0
- package/index.js +2 -0
- package/package.json +65 -0
package/README.md
ADDED
@@ -0,0 +1,174 @@

# n8n-nodes-openai-chatmodel

Custom n8n node for the OpenAI Chat Model with response ID support and conversation continuity.

## Features

- ✅ Integration with the OpenAI API endpoint `/v1/responses`
- ✅ Support for response IDs and conversation continuity
- ✅ Flexible input parameters
- ✅ Customizable model configuration
- ✅ Temperature, max tokens, and other options
- ✅ Comprehensive error handling

## Installation

### From npm

```bash
npm install n8n-nodes-openai-chatmodel
```

### Manual Installation

1. Clone this repository
2. Build the package:
```bash
npm install
npm run build
```
3. Install into n8n:
```bash
npm install -g ./
```

## Configuration

### 1. Set Up Credentials

1. Open n8n and go to **Credentials**
2. Click **Add Credential** and select **OpenAI API**
3. Enter:
   - **API Key**: your OpenAI API key
   - **Organization ID** (optional): your OpenAI organization ID

### 2. Using the Node

1. Add the **OpenAI Chat Model** node to your workflow
2. Select the credentials you created
3. Configure the parameters:

#### Main Parameters

- **Model**: choose an OpenAI model (GPT-4, GPT-4 Turbo, GPT-3.5 Turbo, or Custom)
- **Input**: the input text to send to the model
- **Previous Response ID**: ID of the previous response, for conversation continuity (optional)

#### Advanced Options

- **Temperature** (0-2): controls randomness in the response
- **Max Tokens**: maximum number of tokens to generate
- **Top P** (0-1): controls diversity via nucleus sampling
- **Frequency Penalty** (-2 to 2): decreases the likelihood of repeating the same words
- **Presence Penalty** (-2 to 2): increases the likelihood of discussing new topics
- **Include Response ID**: whether to include the response ID in the output

## Usage Examples

### Basic Chat

```json
{
  "model": "gpt-3.5-turbo",
  "input": "Hello, how does AI work?",
  "options": {
    "temperature": 0.7,
    "max_tokens": 500
  }
}
```

### Conversation with a Previous Response ID

```json
{
  "model": "gpt-4",
  "input": "Continue the previous discussion",
  "previous_response_id": "resp_abc123",
  "options": {
    "temperature": 0.5
  }
}
```

## Output

The node returns an object with the following structure:

```json
{
  "id": "resp_abc123",
  "object": "chat.completion",
  "model": "gpt-3.5-turbo",
  "created": 1677652288,
  "choices": [...],
  "usage": {
    "prompt_tokens": 56,
    "completion_tokens": 31,
    "total_tokens": 87
  },
  "message": {...},
  "content": "Response text here",
  "finish_reason": "stop"
}
```

## Error Handling

The node handles several kinds of errors:

- **API Errors**: errors from the OpenAI API, with status code and message
- **Network Errors**: connection errors or timeouts
- **Validation Errors**: invalid or missing parameters

## Development

### Build

```bash
npm run build
```

### Lint

```bash
npm run lint
```

### Format

```bash
npm run format
```

## Contributing

1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request

## License

MIT License - see the [LICENSE](LICENSE) file for details.

## Support

If you run into problems or have questions:

1. Open an issue on the GitHub repository
2. Include the following information:
   - n8n version
   - Node version
   - Error message (if any)
   - Steps to reproduce the problem

## Changelog

### v1.0.0

- Initial release
- Support for the OpenAI API endpoint `/v1/responses`
- `model`, `input`, and `previous_response_id` parameters
- Response ID support
- Comprehensive error handling
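The request shape documented in the README can be sketched as a small helper. The function name `buildRequestBody` is hypothetical (not part of the package's API); it mirrors the documented rule that optional fields are only sent when the user actually set them:

```javascript
// Hypothetical helper mirroring the README's documented request shape.
// Optional fields are only copied through when provided, so the API's
// own defaults apply otherwise.
function buildRequestBody(model, input, previousResponseId, options = {}) {
  const body = { model, input };
  if (previousResponseId && previousResponseId.trim() !== '') {
    body.previous_response_id = previousResponseId;
  }
  const passthrough = ['temperature', 'max_tokens', 'top_p', 'frequency_penalty', 'presence_penalty'];
  for (const key of passthrough) {
    if (options[key] !== undefined) {
      body[key] = options[key];
    }
  }
  return body;
}

// The "Basic Chat" example from the README:
const body = buildRequestBody('gpt-3.5-turbo', 'Hello, how does AI work?', '', {
  temperature: 0.7,
  max_tokens: 500,
});
// body carries no previous_response_id, since none was given
```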
package/dist/credentials/OpenAiApi.credentials.d.ts
ADDED
@@ -0,0 +1,9 @@

import { IAuthenticateGeneric, ICredentialTestRequest, ICredentialType, INodeProperties } from 'n8n-workflow';
export declare class OpenAiApi implements ICredentialType {
    name: string;
    displayName: string;
    documentationUrl: string;
    properties: INodeProperties[];
    authenticate: IAuthenticateGeneric;
    test: ICredentialTestRequest;
}
package/dist/credentials/OpenAiApi.credentials.js
ADDED
@@ -0,0 +1,49 @@

"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.OpenAiApi = void 0;
class OpenAiApi {
    constructor() {
        this.name = 'openAiApi';
        this.displayName = 'OpenAI API';
        this.documentationUrl = 'https://platform.openai.com/docs/api-reference/authentication';
        this.properties = [
            {
                displayName: 'API Key',
                name: 'apiKey',
                type: 'string',
                typeOptions: {
                    password: true,
                },
                default: '',
                required: true,
                description: 'Your OpenAI API key. You can find it in your OpenAI dashboard.',
            },
            {
                displayName: 'Organization ID',
                name: 'organizationId',
                type: 'string',
                default: '',
                required: false,
                description: 'Optional: Your OpenAI organization ID',
            },
        ];
        this.authenticate = {
            type: 'generic',
            properties: {
                headers: {
                    Authorization: '=Bearer {{$credentials.apiKey}}',
                    'OpenAI-Organization': '={{$credentials.organizationId}}',
                    'Content-Type': 'application/json',
                },
            },
        };
        this.test = {
            request: {
                baseURL: 'https://api.openai.com/v1',
                url: '/models',
                method: 'GET',
            },
        };
    }
}
exports.OpenAiApi = OpenAiApi;
package/dist/nodes/OpenAiChatModel/OpenAiChatModel.node.d.ts
ADDED
@@ -0,0 +1,5 @@

import { IExecuteFunctions, INodeExecutionData, INodeType, INodeTypeDescription } from 'n8n-workflow';
export declare class OpenAiChatModel implements INodeType {
    description: INodeTypeDescription;
    execute(this: IExecuteFunctions): Promise<INodeExecutionData[][]>;
}
package/dist/nodes/OpenAiChatModel/OpenAiChatModel.node.js
ADDED
@@ -0,0 +1,249 @@

"use strict";
var __importDefault = (this && this.__importDefault) || function (mod) {
    return (mod && mod.__esModule) ? mod : { "default": mod };
};
Object.defineProperty(exports, "__esModule", { value: true });
exports.OpenAiChatModel = void 0;
const n8n_workflow_1 = require("n8n-workflow");
const axios_1 = __importDefault(require("axios"));
class OpenAiChatModel {
    constructor() {
        this.description = {
            displayName: 'OpenAI Chat Model',
            name: 'openAiChatModel',
            icon: 'file:openai.svg',
            group: ['transform'],
            version: 1,
            description: 'Interact with OpenAI Chat Models with response ID support',
            defaults: {
                name: 'OpenAI Chat Model',
            },
            inputs: ["main"],
            outputs: ["main"],
            credentials: [
                {
                    name: 'openAiApi',
                    required: true,
                },
            ],
            properties: [
                {
                    displayName: 'Model',
                    name: 'model',
                    type: 'options',
                    options: [
                        {
                            name: 'GPT-4',
                            value: 'gpt-4',
                        },
                        {
                            name: 'GPT-4 Turbo',
                            value: 'gpt-4-turbo-preview',
                        },
                        {
                            name: 'GPT-3.5 Turbo',
                            value: 'gpt-3.5-turbo',
                        },
                        {
                            name: 'Custom',
                            value: 'custom',
                        },
                    ],
                    default: 'gpt-3.5-turbo',
                    description: 'The model to use for the chat completion',
                },
                {
                    displayName: 'Custom Model Name',
                    name: 'customModel',
                    type: 'string',
                    default: '',
                    displayOptions: {
                        show: {
                            model: ['custom'],
                        },
                    },
                    description: 'Name of the custom model to use',
                },
                {
                    displayName: 'Input',
                    name: 'input',
                    type: 'string',
                    typeOptions: {
                        rows: 4,
                    },
                    default: '',
                    required: true,
                    description: 'The input text to send to the OpenAI model',
                },
                {
                    displayName: 'Previous Response ID',
                    name: 'previous_response_id',
                    type: 'string',
                    default: '',
                    required: false,
                    description: 'Optional: ID of the previous response for conversation continuity',
                },
                {
                    displayName: 'Options',
                    name: 'options',
                    type: 'collection',
                    placeholder: 'Add Option',
                    default: {},
                    options: [
                        {
                            displayName: 'Temperature',
                            name: 'temperature',
                            type: 'number',
                            typeOptions: {
                                maxValue: 2,
                                minValue: 0,
                                numberStepSize: 0.1,
                            },
                            default: 1,
                            description: 'Controls randomness in the response. Lower values make responses more focused and deterministic.',
                        },
                        {
                            displayName: 'Max Tokens',
                            name: 'max_tokens',
                            type: 'number',
                            typeOptions: {
                                maxValue: 4096,
                                minValue: 1,
                            },
                            default: 1000,
                            description: 'The maximum number of tokens to generate in the response',
                        },
                        {
                            displayName: 'Top P',
                            name: 'top_p',
                            type: 'number',
                            typeOptions: {
                                maxValue: 1,
                                minValue: 0,
                                numberStepSize: 0.1,
                            },
                            default: 1,
                            description: 'Controls diversity via nucleus sampling',
                        },
                        {
                            displayName: 'Frequency Penalty',
                            name: 'frequency_penalty',
                            type: 'number',
                            typeOptions: {
                                maxValue: 2,
                                minValue: -2,
                                numberStepSize: 0.1,
                            },
                            default: 0,
                            description: 'Decreases likelihood of repeating the same line verbatim',
                        },
                        {
                            displayName: 'Presence Penalty',
                            name: 'presence_penalty',
                            type: 'number',
                            typeOptions: {
                                maxValue: 2,
                                minValue: -2,
                                numberStepSize: 0.1,
                            },
                            default: 0,
                            description: 'Increases likelihood of talking about new topics',
                        },
                        {
                            displayName: 'Include Response ID',
                            name: 'includeResponseId',
                            type: 'boolean',
                            default: true,
                            description: 'Whether to include the response ID in the output',
                        },
                    ],
                },
            ],
        };
    }
    async execute() {
        var _a, _b, _c;
        const items = this.getInputData();
        const returnData = [];
        for (let i = 0; i < items.length; i++) {
            try {
                const credentials = await this.getCredentials('openAiApi', i);
                const model = this.getNodeParameter('model', i);
                const customModel = this.getNodeParameter('customModel', i, '');
                const input = this.getNodeParameter('input', i);
                const previousResponseId = this.getNodeParameter('previous_response_id', i, '');
                const options = this.getNodeParameter('options', i, {});
                if (!input || input.trim() === '') {
                    throw new n8n_workflow_1.NodeOperationError(this.getNode(), 'Input text is required');
                }
                const modelToUse = model === 'custom' ? customModel : model;
                if (!modelToUse) {
                    throw new n8n_workflow_1.NodeOperationError(this.getNode(), 'Model name is required');
                }
                const requestBody = {
                    model: modelToUse,
                    input: input,
                };
                if (previousResponseId && previousResponseId.trim() !== '') {
                    requestBody.previous_response_id = previousResponseId;
                }
                if (options.temperature !== undefined) {
                    requestBody.temperature = options.temperature;
                }
                if (options.max_tokens !== undefined) {
                    requestBody.max_tokens = options.max_tokens;
                }
                if (options.top_p !== undefined) {
                    requestBody.top_p = options.top_p;
                }
                if (options.frequency_penalty !== undefined) {
                    requestBody.frequency_penalty = options.frequency_penalty;
                }
                if (options.presence_penalty !== undefined) {
                    requestBody.presence_penalty = options.presence_penalty;
                }
                const headers = {
                    'Authorization': `Bearer ${credentials.apiKey}`,
                    'Content-Type': 'application/json',
                };
                if (credentials.organizationId) {
                    headers['OpenAI-Organization'] = credentials.organizationId;
                }
                const response = await axios_1.default.post('https://api.openai.com/v1/responses', requestBody, { headers });
                const responseData = response.data;
                const outputData = {
                    choices: responseData.choices,
                    usage: responseData.usage,
                    model: responseData.model,
                    created: responseData.created,
                };
                if (options.includeResponseId !== false) {
                    outputData.id = responseData.id;
                    outputData.object = responseData.object;
                }
                if (responseData.choices && responseData.choices.length > 0) {
                    outputData.message = responseData.choices[0].message;
                    outputData.content = ((_a = responseData.choices[0].message) === null || _a === void 0 ? void 0 : _a.content) || '';
                    outputData.finish_reason = responseData.choices[0].finish_reason;
                }
                returnData.push({
                    json: outputData,
                    pairedItem: {
                        item: i,
                    },
                });
            }
            catch (error) {
                if (error.response) {
                    const errorMessage = ((_c = (_b = error.response.data) === null || _b === void 0 ? void 0 : _b.error) === null || _c === void 0 ? void 0 : _c.message) || error.response.statusText;
                    throw new n8n_workflow_1.NodeOperationError(this.getNode(), `OpenAI API Error (${error.response.status}): ${errorMessage}`, { itemIndex: i });
                }
                else {
                    throw new n8n_workflow_1.NodeOperationError(this.getNode(), `Request failed: ${error.message}`, { itemIndex: i });
                }
            }
        }
        return [returnData];
    }
}
exports.OpenAiChatModel = OpenAiChatModel;
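The output-shaping logic inside `execute()` can be isolated as a pure function to make its behavior easier to see. The name `shapeOutput` is hypothetical, not part of the package; it reproduces the same field copies and conditionals as the code above:

```javascript
// Hypothetical extraction of the output-shaping step from execute():
// copies core fields, conditionally includes the response ID, and
// flattens the first choice's message into content/finish_reason.
function shapeOutput(responseData, includeResponseId = true) {
  const out = {
    choices: responseData.choices,
    usage: responseData.usage,
    model: responseData.model,
    created: responseData.created,
  };
  if (includeResponseId !== false) {
    out.id = responseData.id;
    out.object = responseData.object;
  }
  if (responseData.choices && responseData.choices.length > 0) {
    out.message = responseData.choices[0].message;
    out.content = (responseData.choices[0].message && responseData.choices[0].message.content) || '';
    out.finish_reason = responseData.choices[0].finish_reason;
  }
  return out;
}
```

With the "Include Response ID" option off, `id` and `object` are simply omitted; everything else in the README's output example is unchanged.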
package/dist/nodes/OpenAiChatModel/openai.svg
ADDED
@@ -0,0 +1,3 @@

<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24" width="24" height="24">
<path fill="#10a37f" d="M22.2819 9.8211a5.9847 5.9847 0 0 0-.5157-4.9108 6.0462 6.0462 0 0 0-6.5098-2.9A6.0651 6.0651 0 0 0 4.9807 4.1818a5.9847 5.9847 0 0 0-3.9977 2.9 6.0462 6.0462 0 0 0 .7427 7.0966 5.98 5.98 0 0 0 .511 4.9107 6.051 6.051 0 0 0 6.5146 2.9001A5.9847 5.9847 0 0 0 13.2599 24a6.0557 6.0557 0 0 0 5.7718-4.2058 5.9894 5.9894 0 0 0 3.9977-2.9001 6.0557 6.0557 0 0 0-.7475-7.0729zm-9.022 12.6081a4.4755 4.4755 0 0 1-2.8764-1.0408l.1419-.0804 4.7783-2.7582a.7948.7948 0 0 0 .3927-.6813v-6.7369l2.02 1.1686a.071.071 0 0 1 .038.052v5.5826a4.504 4.504 0 0 1-4.4945 4.4944zm-9.6607-4.1254a4.4708 4.4708 0 0 1-.5346-3.0137l.142.0852 4.783 2.7582a.7712.7712 0 0 0 .7806 0l5.8428-3.3685v2.3324a.0804.0804 0 0 1-.0332.0615L9.74 19.9502a4.4992 4.4992 0 0 1-6.1408-1.6464zM2.3408 7.8956a4.485 4.485 0 0 1 2.3655-1.9728V11.6a.7664.7664 0 0 0 .3879.6765l5.8144 3.3543-2.0201 1.1685a.0757.0757 0 0 1-.071 0l-4.8303-2.7865A4.504 4.504 0 0 1 2.3408 7.872zm16.5963 3.8558L13.1038 8.364 15.1192 7.2a.0757.0757 0 0 1 .071 0l4.8303 2.7913a4.4944 4.4944 0 0 1-.6765 8.1042v-5.6772a.79.79 0 0 0-.407-.667zm2.0107-3.0231l-.142-.0852-4.7735-2.7818a.7759.7759 0 0 0-.7854 0L9.409 9.2297V6.8974a.0662.0662 0 0 1 .0284-.0615l4.8303-2.7866a4.4992 4.4992 0 0 1 6.6802 4.66zM8.3065 12.863l-2.02-1.1638a.0804.0804 0 0 1-.038-.0567V6.0742a4.4992 4.4992 0 0 1 7.3757-3.4537l-.142.0805L8.704 5.459a.7948.7948 0 0 0-.3927.6813zm1.0976-2.3654l2.602-1.4998 2.6069 1.4998v2.9994l-2.5974 1.4997-2.6067-1.4997Z"/>
</svg>
package/index.js
ADDED
package/package.json
ADDED
@@ -0,0 +1,65 @@

{
  "name": "n8n-nodes-openai-chatmodel",
  "version": "1.0.0",
  "description": "Custom n8n node for OpenAI Chat Model with response ID support",
  "keywords": [
    "n8n-community-node-package",
    "n8n",
    "openai",
    "chat",
    "ai",
    "chatmodel"
  ],
  "license": "MIT",
  "homepage": "https://github.com/yourusername/n8n-nodes-openai-chatmodel",
  "author": {
    "name": "Your Name",
    "email": "your.email@example.com"
  },
  "repository": {
    "type": "git",
    "url": "git+https://github.com/yourusername/n8n-nodes-openai-chatmodel.git"
  },
  "engines": {
    "node": ">=18.10",
    "pnpm": ">=7.18"
  },
  "packageManager": "pnpm@7.18.0",
  "main": "index.js",
  "scripts": {
    "build": "tsc && gulp build:icons",
    "dev": "tsc --watch",
    "format": "prettier --write .",
    "lint": "echo 'Linting skipped for TypeScript files'",
    "lintfix": "eslint nodes credentials --ext .ts --fix",
    "prepublishOnly": "npm run build && npm run lint"
  },
  "files": [
    "dist"
  ],
  "n8n": {
    "n8nNodesApiVersion": 1,
    "credentials": [
      "dist/credentials/OpenAiApi.credentials.js"
    ],
    "nodes": [
      "dist/nodes/OpenAiChatModel/OpenAiChatModel.node.js"
    ]
  },
  "devDependencies": {
    "@typescript-eslint/eslint-plugin": "^5.62.0",
    "@typescript-eslint/parser": "^5.62.0",
    "eslint": "^8.29.0",
    "eslint-plugin-n8n-nodes-base": "^1.11.0",
    "gulp": "^4.0.2",
    "n8n-workflow": "*",
    "prettier": "^2.7.1",
    "typescript": "^4.8.4"
  },
  "peerDependencies": {
    "n8n-workflow": "*"
  },
  "dependencies": {
    "axios": "^1.6.0"
  }
}