n8n-nodes-rooyai-chat 0.1.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +202 -0
- package/dist/credentials/RooyaiApi.credentials.d.ts +9 -0
- package/dist/credentials/RooyaiApi.credentials.js +60 -0
- package/dist/nodes/Rooyai/N8nLlmTracing.d.ts +11 -0
- package/dist/nodes/Rooyai/N8nLlmTracing.js +38 -0
- package/dist/nodes/Rooyai/Rooyai.node.d.ts +5 -0
- package/dist/nodes/Rooyai/Rooyai.node.js +181 -0
- package/dist/nodes/Rooyai/RooyaiLangChainWrapper.d.ts +32 -0
- package/dist/nodes/Rooyai/RooyaiLangChainWrapper.js +130 -0
- package/dist/nodes/Rooyai/rooyai.svg +10 -0
- package/package.json +56 -0
package/README.md
ADDED
@@ -0,0 +1,202 @@

# n8n-nodes-rooyai-chat

Custom n8n node for the Rooyai Chat API: a supply node that provides Rooyai language models for use with the Basic LLM Chain, AI Agent, and other AI nodes in n8n.

## Description

This n8n custom node provides a **Rooyai Chat Model** supply node that can be connected to the **Basic LLM Chain** node, **AI Agent**, **Better AI Agent**, and other AI processing nodes in n8n.

## Features

- ✅ **Supply Node**: Provides Rooyai language models as a supply node (compatible with Basic LLM Chain and AI Agent)
- ✅ **16 Available Models**: Support for Gemini, Llama, DeepSeek, GPT-OSS, and more
- ✅ **Cost Tracking**: Tracks API cost per request (unique to Rooyai)
- ✅ **Configurable Parameters**: Temperature, max tokens, and other model options
- ✅ **Custom Headers**: Support for additional API headers
- ✅ **Real-time API**: Direct HTTP integration with the Rooyai API

## Installation

### Via npm (Recommended)

```bash
npm install n8n-nodes-rooyai-chat
```

Then restart your n8n instance. The node will appear in the node palette under **AI → Language Models**.

### Manual Installation

1. Clone or download this repository
2. Install dependencies:
   ```bash
   npm install
   ```
3. Build the node:
   ```bash
   npm run build
   ```
4. Copy the build output to your n8n custom nodes location:
   ```bash
   # Windows (PowerShell)
   Copy-Item -Recurse dist\* "$env:USERPROFILE\.n8n\custom\"

   # Linux/macOS
   cp -r dist/* ~/.n8n/custom/
   ```

## Configuration

### 1. Create Rooyai API Credentials

1. Go to **Credentials** in n8n
2. Click **Add Credential**
3. Search for **"Rooyai API"** and select it
4. Enter your credentials:
   - **API Key**: Your Rooyai API key (required)
   - **Base URL**: `https://rooyai.com/api/v1` (default)
   - **Optional Headers**: Additional headers as a JSON object (optional; see the sketch after this list)
5. Click **Save**
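
The Optional Headers value is stored as a plain string and parsed with `JSON.parse` when the node builds the model (see `Rooyai.node.js` later in this diff), so it must be a single valid JSON object. A minimal TypeScript sketch of what the node does with the field:

```ts
// Sketch mirroring the JSON.parse call in dist/nodes/Rooyai/Rooyai.node.js.
const optionalHeadersField = '{"X-Custom-Header": "value"}';

// Invalid JSON here surfaces in n8n as "Invalid JSON in optional headers: ...".
const parsedHeaders: Record<string, string> = JSON.parse(optionalHeadersField);

// The parsed object is merged into every request's headers via Object.assign
// in RooyaiLangChainWrapper.callRooyaiAPI().
console.log(parsedHeaders); // { 'X-Custom-Header': 'value' }
```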

### 2. Add Rooyai Chat Model Node

1. In your workflow, click **+** to add a node
2. Search for **"Rooyai Chat Model"**
3. Select the node to add it to your workflow

### 3. Configure the Node

1. **Select Credentials**: Choose your Rooyai API credentials
2. **Choose Model**: Select a model from the dropdown
3. **Configure Options** (optional):
   - **Temperature**: Controls randomness (0-1, default: 0.7)
   - **Max Tokens**: Maximum number of tokens to generate (default: 4096)

### 4. Connect to AI Nodes

Connect the **Model** output from **Rooyai Chat Model** to:

- **Basic LLM Chain**
- **AI Agent**
- **Better AI Agent**
- Any other n8n AI node that accepts a Language Model

## Available Models

- **Gemini 2.0 Flash** (gemini-2.0-flash) - Default
- **Llama 3.3 70B** (llama-3.3-70b)
- **DeepSeek R1** (deepseek-r1)
- **DeepSeek V3.1 NeX** (deepseek-v3.1-nex)
- **Qwen3 Coder** (qwen3-coder)
- **GPT-OSS 120B** (gpt-oss-120b)
- **GPT-OSS 20B** (gpt-oss-20b)
- **TNG R1T Chimera** (tng-r1t-chimera)
- **TNG DeepSeek Chimera** (tng-deepseek-chimera)
- **Kimi K2** (kimi-k2)
- **GLM 4.5 Air** (glm-4.5-air)
- **Devstral** (devstral)
- **Mimo V2 Flash** (mimo-v2-flash)
- **Gemma 3 27B** (gemma-3-27b)
- **Gemma 3 12B** (gemma-3-12b)
- **Gemma 3 4B** (gemma-3-4b)

## Usage Example

```
┌──────────────────┐
│ Rooyai Chat Model│
│  (Supply Node)   │
└─────────┬────────┘
          │ Model
          ▼
┌──────────────────┐
│ Basic LLM Chain  │
│    (AI Node)     │
└──────────────────┘
```

The Rooyai Chat Model node provides the language model instance that the Basic LLM Chain or AI Agent uses to process prompts and generate responses.

## API Endpoint Configuration

### Current Endpoint

The node uses the following endpoint:

- **Base URL**: `https://rooyai.com/api/v1`
- **Chat Endpoint**: `/chat`
- **Full URL**: `https://rooyai.com/api/v1/chat`
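
The README gives the URL but not the wire format; the request and response field names below are read off `RooyaiLangChainWrapper.js` (shown later in this diff), not off official API documentation, so treat this as a sketch. A minimal direct call in TypeScript (Node 18+ for the global `fetch`; the `ROOYAI_API_KEY` environment variable is this sketch's convention):

```ts
// Minimal sketch of a direct POST to the Rooyai chat endpoint.
const baseUrl = 'https://rooyai.com/api/v1';
const apiKey = process.env.ROOYAI_API_KEY!; // env var name assumed for this sketch

async function chat(prompt: string): Promise<string> {
  const response = await fetch(`${baseUrl}/chat`, {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${apiKey}`,
      'Content-Type': 'application/json',
    },
    // Body shape taken from the wrapper's _generate(): model, messages,
    // temperature, and an optional max_tokens.
    body: JSON.stringify({
      model: 'gemini-2.0-flash',
      messages: [{ role: 'user', content: prompt }],
      temperature: 0.7,
      max_tokens: 256,
    }),
  });
  if (!response.ok) {
    throw new Error(`Rooyai API error (${response.status}): ${await response.text()}`);
  }
  const data = await response.json();
  // The wrapper accepts either an OpenAI-style choices array or a flat reply field.
  return data.choices?.[0]?.message?.content ?? data.reply;
}

chat('Hi').then(console.log);
```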

### How to Change the Endpoint

If you need to update the API endpoint:

1. **Change Base URL**:
   - Edit: `credentials/RooyaiApi.credentials.ts`
   - Find the `baseUrl` property's default value
   - Update: `default: 'https://your-new-url.com/api/v1'`

2. **Change Chat Path**:
   - Edit: `nodes/Rooyai/RooyaiLangChainWrapper.ts`
   - Find the `callRooyaiAPI()` method
   - Update the line ``const url = `${this.baseUrl}/chat`;``, changing `/chat` to your new endpoint path (see the sketch after this list)

3. **Rebuild**:
   ```bash
   npm run build
   ```
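
For example, targeting a hypothetical `/v1/messages` path (an illustrative placeholder, not a documented Rooyai route) is a one-line change inside `callRooyaiAPI()`, sketched here as a standalone helper so the edit is visible in isolation:

```ts
// Sketch of the URL construction performed in callRooyaiAPI().
function chatUrl(baseUrl: string): string {
  return `${baseUrl}/chat`; // published behaviour
  // return `${baseUrl}/v1/messages`; // hypothetical replacement path
}

console.log(chatUrl('https://rooyai.com/api/v1')); // https://rooyai.com/api/v1/chat
```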

## Development

### Scripts

- `npm install` - Install dependencies
- `npm run build` - Build the node (TypeScript + copy icons)
- `npm run dev` - Watch mode for development
- `npm run lint` - Run the linter
- `npm run format` - Format the code

### Project Structure

```
n8n-nodes-rooyai-chat/
├── credentials/
│   └── RooyaiApi.credentials.ts      # API credentials definition
├── nodes/
│   └── Rooyai/
│       ├── Rooyai.node.ts            # Main node definition
│       ├── RooyaiLangChainWrapper.ts # LangChain integration
│       ├── N8nLlmTracing.ts          # Event tracing
│       └── rooyai.svg                # Node icon
├── package.json
├── tsconfig.json
└── gulpfile.js
```

## Troubleshooting

### "Rooyai API key not found"

- Verify your API key is entered correctly in the credentials
- Make sure you've selected the correct credential in the node

### "Failed to call Rooyai API"

- Check your internet connection
- Verify the base URL is correct
- Check whether the Rooyai API is reachable

### Node doesn't appear in n8n

- Make sure you've restarted n8n after installation
- Verify the dist folder was copied to `~/.n8n/custom/`
- Check the n8n logs for errors

## License

MIT

## Author

Rooyai

## Resources

- [n8n Documentation](https://docs.n8n.io/)
- [n8n AI Nodes](https://docs.n8n.io/integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain/)
package/dist/credentials/RooyaiApi.credentials.d.ts
ADDED
@@ -0,0 +1,9 @@
import { IAuthenticateGeneric, ICredentialTestRequest, ICredentialType, INodeProperties } from 'n8n-workflow';
export declare class RooyaiApi implements ICredentialType {
    name: string;
    displayName: string;
    documentationUrl: string;
    properties: INodeProperties[];
    authenticate: IAuthenticateGeneric;
    test: ICredentialTestRequest;
}
package/dist/credentials/RooyaiApi.credentials.js
ADDED
@@ -0,0 +1,60 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.RooyaiApi = void 0;
class RooyaiApi {
    constructor() {
        this.name = 'rooyaiApi';
        this.displayName = 'Rooyai API';
        this.documentationUrl = 'https://rooyai.com/dashboard/llm/docs';
        this.properties = [
            {
                displayName: 'API Key',
                name: 'apiKey',
                type: 'string',
                typeOptions: {
                    password: true,
                },
                default: '',
                required: true,
                description: 'Your Rooyai API key',
            },
            {
                displayName: 'Base URL',
                name: 'baseUrl',
                type: 'string',
                default: 'https://rooyai.com/api/v1',
                required: true,
                description: 'Rooyai API base URL',
            },
            {
                displayName: 'Optional Headers',
                name: 'optionalHeaders',
                type: 'string',
                default: '',
                placeholder: '{"X-Custom-Header": "value"}',
                description: 'Additional headers as JSON object (optional)',
            },
        ];
        this.authenticate = {
            type: 'generic',
            properties: {
                headers: {
                    Authorization: '=Bearer {{$credentials.apiKey}}',
                },
            },
        };
        this.test = {
            request: {
                baseURL: '={{$credentials.baseUrl}}',
                url: '/chat',
                method: 'POST',
                body: {
                    model: 'gemini-2.0-flash',
                    messages: [{ role: 'user', content: 'Hi' }],
                    max_tokens: 10,
                },
            },
        };
    }
}
exports.RooyaiApi = RooyaiApi;
package/dist/nodes/Rooyai/N8nLlmTracing.d.ts
ADDED
@@ -0,0 +1,11 @@
import { BaseCallbackHandler } from '@langchain/core/callbacks/base';
import type { Serialized } from '@langchain/core/load/serializable';
import { ISupplyDataFunctions } from 'n8n-workflow';
export declare class N8nLlmTracing extends BaseCallbackHandler {
    name: string;
    supplyDataFunctions: ISupplyDataFunctions;
    constructor(supplyDataFunctions: ISupplyDataFunctions);
    handleLLMStart(llm: Serialized, prompts: string[], runId: string, parentRunId?: string, extraParams?: Record<string, unknown>, tags?: string[], metadata?: Record<string, unknown>): Promise<void>;
    handleLLMEnd(output: any, runId: string, parentRunId?: string, tags?: string[]): Promise<void>;
    handleLLMError(error: Error, runId: string, parentRunId?: string, tags?: string[]): Promise<void>;
}
package/dist/nodes/Rooyai/N8nLlmTracing.js
ADDED
@@ -0,0 +1,38 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.N8nLlmTracing = void 0;
const base_1 = require("@langchain/core/callbacks/base");
class N8nLlmTracing extends base_1.BaseCallbackHandler {
    constructor(supplyDataFunctions) {
        super();
        this.name = 'n8n_llm_tracing';
        this.supplyDataFunctions = supplyDataFunctions;
    }
    async handleLLMStart(llm, prompts, runId, parentRunId, extraParams, tags, metadata) {
        if (this.supplyDataFunctions.logger) {
            this.supplyDataFunctions.logger.debug('Rooyai LLM started', {
                runId,
                parentRunId,
                prompts,
            });
        }
    }
    async handleLLMEnd(output, runId, parentRunId, tags) {
        if (this.supplyDataFunctions.logger) {
            this.supplyDataFunctions.logger.debug('Rooyai LLM finished', {
                runId,
                parentRunId,
            });
        }
    }
    async handleLLMError(error, runId, parentRunId, tags) {
        if (this.supplyDataFunctions.logger) {
            this.supplyDataFunctions.logger.error('Rooyai LLM error', {
                runId,
                parentRunId,
                error: error.message,
            });
        }
    }
}
exports.N8nLlmTracing = N8nLlmTracing;
package/dist/nodes/Rooyai/Rooyai.node.d.ts
ADDED
@@ -0,0 +1,5 @@
import { INodeType, INodeTypeDescription, ISupplyDataFunctions, SupplyData } from 'n8n-workflow';
export declare class Rooyai implements INodeType {
    description: INodeTypeDescription;
    supplyData(this: ISupplyDataFunctions, itemIndex: number): Promise<SupplyData>;
}
package/dist/nodes/Rooyai/Rooyai.node.js
ADDED
@@ -0,0 +1,181 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.Rooyai = void 0;
const n8n_workflow_1 = require("n8n-workflow");
const RooyaiLangChainWrapper_1 = require("./RooyaiLangChainWrapper");
const N8nLlmTracing_1 = require("./N8nLlmTracing");
class Rooyai {
    constructor() {
        this.description = {
            displayName: 'Rooyai Chat Model',
            name: 'rooyaiChatModel',
            icon: 'file:rooyai.svg',
            group: ['transform'],
            version: 1,
            description: 'Language Model Rooyai',
            defaults: {
                name: 'Rooyai Chat Model',
            },
            codex: {
                categories: ['AI'],
                subcategories: {
                    AI: ['Language Models', 'Root Nodes'],
                    'Language Models': ['Chat Models (Recommended)'],
                },
            },
            inputs: [],
            outputs: [n8n_workflow_1.NodeConnectionTypes.AiLanguageModel],
            outputNames: ['Model'],
            credentials: [
                {
                    name: 'rooyaiApi',
                    required: true,
                },
            ],
            properties: [
                {
                    displayName: 'Model',
                    name: 'model',
                    type: 'options',
                    options: [
                        {
                            name: 'Gemini 2.0 Flash',
                            value: 'gemini-2.0-flash',
                        },
                        {
                            name: 'Llama 3.3 70B',
                            value: 'llama-3.3-70b',
                        },
                        {
                            name: 'DeepSeek R1',
                            value: 'deepseek-r1',
                        },
                        {
                            name: 'DeepSeek V3.1 NeX',
                            value: 'deepseek-v3.1-nex',
                        },
                        {
                            name: 'Qwen3 Coder',
                            value: 'qwen3-coder',
                        },
                        {
                            name: 'GPT-OSS 120B',
                            value: 'gpt-oss-120b',
                        },
                        {
                            name: 'GPT-OSS 20B',
                            value: 'gpt-oss-20b',
                        },
                        {
                            name: 'TNG R1T Chimera',
                            value: 'tng-r1t-chimera',
                        },
                        {
                            name: 'TNG DeepSeek Chimera',
                            value: 'tng-deepseek-chimera',
                        },
                        {
                            name: 'Kimi K2',
                            value: 'kimi-k2',
                        },
                        {
                            name: 'GLM 4.5 Air',
                            value: 'glm-4.5-air',
                        },
                        {
                            name: 'Devstral',
                            value: 'devstral',
                        },
                        {
                            name: 'Mimo V2 Flash',
                            value: 'mimo-v2-flash',
                        },
                        {
                            name: 'Gemma 3 27B',
                            value: 'gemma-3-27b',
                        },
                        {
                            name: 'Gemma 3 12B',
                            value: 'gemma-3-12b',
                        },
                        {
                            name: 'Gemma 3 4B',
                            value: 'gemma-3-4b',
                        },
                    ],
                    default: 'gemini-2.0-flash',
                    description: 'The model which will generate the completion',
                },
                {
                    displayName: 'Options',
                    name: 'options',
                    placeholder: 'Add Option',
                    description: 'Additional options to add',
                    type: 'collection',
                    default: {},
                    options: [
                        {
                            displayName: 'Maximum Number of Tokens',
                            name: 'maxTokensToSample',
                            default: 4096,
                            description: 'The maximum number of tokens to generate in the completion',
                            type: 'number',
                        },
                        {
                            displayName: 'Sampling Temperature',
                            name: 'temperature',
                            default: 0.7,
                            typeOptions: { maxValue: 1, minValue: 0, numberPrecision: 1 },
                            description: 'Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.',
                            type: 'number',
                        },
                    ],
                },
            ],
        };
    }
    async supplyData(itemIndex) {
        const credentials = await this.getCredentials('rooyaiApi');
        const apiKey = credentials.apiKey;
        const baseUrl = credentials.baseUrl;
        const optionalHeaders = credentials.optionalHeaders;
        if (!apiKey) {
            throw new Error('Rooyai API key not found in credentials.');
        }
        if (!baseUrl) {
            throw new Error('Rooyai base URL not found in credentials.');
        }
        const modelName = this.getNodeParameter('model', itemIndex);
        const options = this.getNodeParameter('options', itemIndex, {});
        let parsedHeaders;
        if (optionalHeaders) {
            try {
                parsedHeaders = JSON.parse(optionalHeaders);
            }
            catch (error) {
                throw new Error(`Invalid JSON in optional headers: ${error instanceof Error ? error.message : String(error)}`);
            }
        }
        if (this.logger) {
            this.logger.info('Rooyai Chat Model initialized', {
                model: modelName,
                temperature: options.temperature ?? 0.7,
                maxTokens: options.maxTokensToSample,
            });
        }
        const model = new RooyaiLangChainWrapper_1.RooyaiLangChainWrapper({
            apiKey,
            baseUrl,
            optionalHeaders: parsedHeaders,
            model: modelName,
            maxTokens: options.maxTokensToSample,
            temperature: options.temperature ?? 0.7,
            supplyDataFunctions: this,
            callbacks: [new N8nLlmTracing_1.N8nLlmTracing(this)],
        });
        return {
            response: model,
        };
    }
}
exports.Rooyai = Rooyai;
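For readers unfamiliar with n8n supply nodes: when a connected AI node executes, n8n calls `supplyData` above, and the returned object carries the model under a `response` key. A reduced TypeScript sketch of that contract (the real `SupplyData` type lives in `n8n-workflow`; this interface is illustrative only):

```ts
import type { BaseChatModel } from '@langchain/core/language_models/chat_models';

// Illustrative reduction of the SupplyData contract implemented above.
interface SupplyDataSketch {
  // n8n hands `response` to whichever AI node the Model output is wired into.
  response: BaseChatModel; // here: a configured RooyaiLangChainWrapper
}
```

The `N8nLlmTracing` callback attached in `supplyData` then logs start, end, and error events through n8n's logger for every call the downstream node makes.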
package/dist/nodes/Rooyai/RooyaiLangChainWrapper.d.ts
ADDED
@@ -0,0 +1,32 @@
import { BaseChatModel } from '@langchain/core/language_models/chat_models';
import { BaseMessage } from '@langchain/core/messages';
import { ChatResult, ChatGenerationChunk } from '@langchain/core/outputs';
import { CallbackManagerForLLMRun } from '@langchain/core/callbacks/manager';
import { ISupplyDataFunctions } from 'n8n-workflow';
interface RooyaiLangChainWrapperParams {
    apiKey: string;
    baseUrl: string;
    optionalHeaders?: Record<string, string>;
    model: string;
    temperature?: number;
    maxTokens?: number;
    supplyDataFunctions?: ISupplyDataFunctions;
    callbacks?: any[];
}
export declare class RooyaiLangChainWrapper extends BaseChatModel {
    lc_namespace: string[];
    private apiKey;
    private baseUrl;
    private optionalHeaders?;
    private model;
    private temperature;
    private maxTokens?;
    private supplyDataFunctions?;
    constructor(params: RooyaiLangChainWrapperParams);
    _llmType(): string;
    _modelType(): string;
    private callRooyaiAPI;
    _generate(messages: BaseMessage[], options: this['ParsedCallOptions'], runManager?: CallbackManagerForLLMRun): Promise<ChatResult>;
    _streamResponseChunks(messages: BaseMessage[], options: this['ParsedCallOptions'], runManager?: CallbackManagerForLLMRun): AsyncGenerator<ChatGenerationChunk>;
}
export {};
package/dist/nodes/Rooyai/RooyaiLangChainWrapper.js
ADDED
@@ -0,0 +1,130 @@
"use strict";
var __importDefault = (this && this.__importDefault) || function (mod) {
    return (mod && mod.__esModule) ? mod : { "default": mod };
};
Object.defineProperty(exports, "__esModule", { value: true });
exports.RooyaiLangChainWrapper = void 0;
const chat_models_1 = require("@langchain/core/language_models/chat_models");
const messages_1 = require("@langchain/core/messages");
const axios_1 = __importDefault(require("axios"));
class RooyaiLangChainWrapper extends chat_models_1.BaseChatModel {
    constructor(params) {
        const callbacks = params.callbacks ?? [];
        super({
            ...params,
            callbacks,
        });
        this.lc_namespace = ['n8n', 'rooyai', 'chat'];
        this.apiKey = params.apiKey;
        this.baseUrl = params.baseUrl;
        this.optionalHeaders = params.optionalHeaders;
        this.model = params.model;
        this.temperature = params.temperature ?? 0.7;
        this.maxTokens = params.maxTokens;
        this.supplyDataFunctions = params.supplyDataFunctions;
    }
    _llmType() {
        return 'rooyai';
    }
    _modelType() {
        return 'rooyai';
    }
    async callRooyaiAPI(body) {
        const headers = {
            'Authorization': `Bearer ${this.apiKey}`,
            'Content-Type': 'application/json',
        };
        if (this.optionalHeaders) {
            Object.assign(headers, this.optionalHeaders);
        }
        const url = `${this.baseUrl}/chat`;
        try {
            const response = await axios_1.default.post(url, body, {
                headers,
                validateStatus: () => true,
            });
            if (response.status !== 200) {
                const errorText = typeof response.data === 'string'
                    ? response.data
                    : JSON.stringify(response.data);
                throw new Error(`Rooyai API error (${response.status}): ${errorText}`);
            }
            return response.data;
        }
        catch (error) {
            if (error instanceof Error) {
                throw new Error(`Failed to call Rooyai API: ${error.message}`);
            }
            throw new Error(`Failed to call Rooyai API: ${String(error)}`);
        }
    }
    async _generate(messages, options, runManager) {
        const rooyaiMessages = messages.map((msg) => {
            const msgType = msg._getType();
            if (msgType === 'human') {
                return { role: 'user', content: String(msg.content) };
            }
            if (msgType === 'ai') {
                return { role: 'assistant', content: String(msg.content) };
            }
            if (msgType === 'system') {
                return { role: 'system', content: String(msg.content) };
            }
            return { role: 'user', content: String(msg.content) };
        });
        const requestBody = {
            model: this.model,
            messages: rooyaiMessages,
            temperature: this.temperature,
        };
        if (this.maxTokens) {
            requestBody.max_tokens = this.maxTokens;
        }
        const response = await this.callRooyaiAPI(requestBody);
        let assistantContent;
        if (response.choices && response.choices.length > 0 && response.choices[0].message?.content) {
            assistantContent = response.choices[0].message.content;
        }
        else if (response.reply) {
            assistantContent = response.reply;
        }
        else {
            throw new Error('No content found in Rooyai API response');
        }
        const aiMessage = new messages_1.AIMessage(assistantContent);
        const tokenUsage = {};
        if (response.usage?.prompt_tokens !== undefined) {
            tokenUsage.promptTokens = response.usage.prompt_tokens;
        }
        if (response.usage?.completion_tokens !== undefined) {
            tokenUsage.completionTokens = response.usage.completion_tokens;
        }
        if (response.usage?.total_tokens !== undefined) {
            tokenUsage.totalTokens = response.usage.total_tokens;
        }
        const llmOutput = {
            tokenUsage: Object.keys(tokenUsage).length > 0 ? tokenUsage : undefined,
            modelName: this.model,
        };
        if (response.usage?.cost_usd !== undefined) {
            llmOutput.costUsd = response.usage.cost_usd;
        }
        const generation = {
            message: aiMessage,
            text: assistantContent,
        };
        if (response.usage?.cost_usd !== undefined) {
            generation.generationInfo = {
                costUsd: response.usage.cost_usd,
            };
        }
        return {
            generations: [generation],
            llmOutput,
        };
    }
    async *_streamResponseChunks(messages, options, runManager) {
        throw new Error('Streaming is not yet supported for Rooyai Chat Model');
    }
}
exports.RooyaiLangChainWrapper = RooyaiLangChainWrapper;
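Because the wrapper extends LangChain's `BaseChatModel`, it inherits `.invoke()` and can be smoke-tested outside n8n. A minimal sketch, assuming the package is installed and its dist path is importable as published (the `ROOYAI_API_KEY` variable name is this sketch's convention, not the package's):

```ts
import { HumanMessage } from '@langchain/core/messages';
// Path into the package's published dist output (assumed importable as-is).
import { RooyaiLangChainWrapper } from 'n8n-nodes-rooyai-chat/dist/nodes/Rooyai/RooyaiLangChainWrapper';

const model = new RooyaiLangChainWrapper({
  apiKey: process.env.ROOYAI_API_KEY!, // assumed env var for this sketch
  baseUrl: 'https://rooyai.com/api/v1',
  model: 'gemini-2.0-flash',
  temperature: 0.7,
  maxTokens: 256,
});

// invoke() is inherited from BaseChatModel; _generate() above does the HTTP call.
const result = await model.invoke([new HumanMessage('Hi')]);
console.log(result.content);
```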
package/dist/nodes/Rooyai/rooyai.svg
ADDED
@@ -0,0 +1,10 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 100 100">
  <defs>
    <linearGradient id="rooyaiGradient" x1="0%" y1="0%" x2="100%" y2="100%">
      <stop offset="0%" style="stop-color:#667eea;stop-opacity:1" />
      <stop offset="100%" style="stop-color:#764ba2;stop-opacity:1" />
    </linearGradient>
  </defs>
  <circle cx="50" cy="50" r="45" fill="url(#rooyaiGradient)" />
  <text x="50" y="70" font-family="Arial, sans-serif" font-size="50" font-weight="bold" fill="white" text-anchor="middle">R</text>
</svg>
package/package.json
ADDED
@@ -0,0 +1,56 @@
{
  "name": "n8n-nodes-rooyai-chat",
  "version": "0.1.0",
  "description": "n8n supply node for Rooyai Chat API - Provides Rooyai language models for use with Basic LLM Chain, AI Agent, and other AI nodes.",
  "keywords": [
    "n8n-community-node-package",
    "n8n",
    "rooyai",
    "ai",
    "llm",
    "language-model",
    "chat-completion"
  ],
  "license": "MIT",
  "author": {
    "name": "Rooyai"
  },
  "main": "index.js",
  "scripts": {
    "build": "tsc && gulp build:icons",
    "dev": "tsc --watch",
    "format": "prettier nodes credentials --write",
    "lint": "eslint \"nodes/**/*.ts\" \"credentials/**/*.ts\" package.json",
    "lintfix": "eslint \"nodes/**/*.ts\" \"credentials/**/*.ts\" package.json --fix",
    "prepublishOnly": "npm run build"
  },
  "files": [
    "dist"
  ],
  "n8n": {
    "n8nNodesApiVersion": 1,
    "nodes": [
      "dist/nodes/Rooyai/Rooyai.node.js"
    ],
    "credentials": [
      "dist/credentials/RooyaiApi.credentials.js"
    ]
  },
  "devDependencies": {
    "@types/node": "^25.0.3",
    "@typescript-eslint/parser": "^5.45.0",
    "eslint-plugin-n8n-nodes-base": "~1.11.0",
    "gulp": "^4.0.2",
    "n8n-workflow": "*",
    "prettier": "^2.7.1",
    "typescript": "~5.2.0"
  },
  "dependencies": {
    "@langchain/core": "^0.1.0",
    "axios": "^1.13.2",
    "n8n-workflow": "*"
  },
  "peerDependencies": {
    "n8n-workflow": "*"
  }
}