plugin-custom-llm 1.2.2 → 1.3.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,104 +1,104 @@
- # Plugin Custom LLM (OpenAI Compatible)
-
- NocoBase plugin for integrating external LLM providers that support the OpenAI-compatible `/chat/completions` API, with built-in response format normalization and response mapping for non-standard APIs.
-
- ## Features
-
- - **OpenAI-compatible**: Works with any LLM provider exposing a `/chat/completions` endpoint
- - **Auto content detection**: Handles both string and array content blocks (`[{type: 'text', text: '...'}]`)
- - **Response mapping**: Transform non-standard API responses to OpenAI format via JSON config (supports streaming SSE and JSON)
- - **Reasoning content**: Display thinking/reasoning from DeepSeek-compatible providers (multi-path detection)
- - **Stream keepalive**: Prevent proxy/gateway timeouts during long model thinking phases
- - **Tool calling support**: Gemini-compatible tool schema fixing (Zod + JSON Schema)
- - **Configurable**: JSON config editors for request and response customization
- - **Locale support**: English, Vietnamese, Chinese
-
- ## Installation
-
- Upload `plugin-custom-llm-x.x.x.tgz` via NocoBase Plugin Manager UI, then enable.
-
- ## Configuration
-
- ### Provider Settings
-
- | Field | Description |
- |---|---|
- | **Base URL** | LLM endpoint URL, e.g. `https://your-llm-server.com/v1` |
- | **API Key** | Authentication key |
- | **Disable Streaming** | Disable streaming for models that return empty stream values |
- | **Stream Keep Alive** | Enable keepalive to prevent timeouts during long thinking phases |
- | **Keep Alive Interval** | Interval in ms between keepalive signals (default: 5000) |
- | **Keep Alive Content** | Visual indicator text during keepalive (default: `...`) |
- | **Timeout** | Custom timeout in ms for slow-responding models |
- | **Request config (JSON)** | Optional. Extra request configuration |
- | **Response config (JSON)** | Optional. Response parsing and mapping configuration |
-
- ### Request Config
-
- ```json
- {
-   "extraHeaders": { "X-Custom-Header": "value" },
-   "extraBody": { "custom_field": "value" },
-   "modelKwargs": { "stop": ["\n"] }
- }
- ```
-
- - `extraHeaders` — Custom HTTP headers sent with every request
- - `extraBody` — Additional fields merged into the request body
- - `modelKwargs` — Extra LangChain model parameters (stop sequences, etc.)
-
- ### Response Config
-
- ```json
- {
-   "contentPath": "auto",
-   "reasoningKey": "reasoning_content",
-   "responseMapping": {
-     "content": "message.response"
-   }
- }
- ```
-
- - `contentPath` — How to extract text from LangChain chunks. `"auto"` (default) detects string, array, and object formats. Or use a dot-path like `"0.text"`
- - `reasoningKey` — Key name for reasoning/thinking content in `additional_kwargs` (default: `"reasoning_content"`)
- - `responseMapping` — Maps non-standard LLM responses to OpenAI format before LangChain processes them:
-   - `content` — Dot-path to the content field in the raw response (e.g. `"message.response"`, `"data.text"`)
-   - `role` — Dot-path to role field (optional, defaults to `"assistant"`)
-   - `id` — Dot-path to response ID (optional)
-
- ### Response Mapping Examples
-
- | Raw LLM Response | `responseMapping.content` |
- |---|---|
- | `{"message": {"response": "..."}}` | `message.response` |
- | `{"data": {"text": "..."}}` | `data.text` |
- | `{"result": "..."}` | `result` |
- | `{"output": {"content": {"text": "..."}}}` | `output.content.text` |
-
- ### Model Settings
-
- Standard OpenAI-compatible parameters: temperature, max tokens, top P, frequency/presence penalty, response format, timeout, max retries.
-
- ## Changelog
-
- ### v1.2.0
-
- - **Fix**: Keepalive no longer interferes with tool call sequences (prevents tool call corruption)
- - **Fix**: Gemini-compatible tool schema fixing — handles Zod schemas via dual-phase approach (pre/post conversion)
- - **Fix**: Keepalive content no longer contaminates saved messages in DB
- - **Fix**: Response metadata extraction with long ID sanitization (>128 chars truncated)
- - **Fix**: Multi-path reasoning content detection (`additional_kwargs` + `kwargs.additional_kwargs`)
- - **Fix**: Improved error recovery in keepalive consumer (immediate error propagation)
-
- ### v1.1.1
-
- - Stream keepalive proxy for long thinking phases
- - Response mapping for non-standard LLM APIs
-
- ### v1.0.0
-
- - Initial release with OpenAI-compatible LLM provider support
-
- ## License
-
- Apache-2.0
+ # Plugin Custom LLM (OpenAI Compatible)
+
+ NocoBase plugin for integrating external LLM providers that support the OpenAI-compatible `/chat/completions` API, with built-in response format normalization and response mapping for non-standard APIs.
+
+ ## Features
+
+ - **OpenAI-compatible**: Works with any LLM provider exposing a `/chat/completions` endpoint
+ - **Auto content detection**: Handles both string and array content blocks (`[{type: 'text', text: '...'}]`)
+ - **Response mapping**: Transform non-standard API responses to OpenAI format via JSON config (supports streaming SSE and JSON)
+ - **Reasoning content**: Display thinking/reasoning from DeepSeek-compatible providers (multi-path detection)
+ - **Stream keepalive**: Prevent proxy/gateway timeouts during long model thinking phases
+ - **Tool calling support**: Gemini-compatible tool schema fixing (Zod + JSON Schema)
+ - **Configurable**: JSON config editors for request and response customization
+ - **Locale support**: English, Vietnamese, Chinese
+
+ ## Installation
+
+ Upload `plugin-custom-llm-x.x.x.tgz` via NocoBase Plugin Manager UI, then enable.
+
+ ## Configuration
+
+ ### Provider Settings
+
+ | Field | Description |
+ |---|---|
+ | **Base URL** | LLM endpoint URL, e.g. `https://your-llm-server.com/v1` |
+ | **API Key** | Authentication key |
+ | **Disable Streaming** | Disable streaming for models that return empty stream values |
+ | **Stream Keep Alive** | Enable keepalive to prevent timeouts during long thinking phases |
+ | **Keep Alive Interval** | Interval in ms between keepalive signals (default: 5000) |
+ | **Keep Alive Content** | Visual indicator text during keepalive (default: `...`) |
+ | **Timeout** | Custom timeout in ms for slow-responding models |
+ | **Request config (JSON)** | Optional. Extra request configuration |
+ | **Response config (JSON)** | Optional. Response parsing and mapping configuration |
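The **Stream Keep Alive** options above amount to a thin wrapper around the model's token stream: when no chunk arrives within the interval, a placeholder chunk is emitted so proxies and gateways see traffic. A minimal sketch of that behavior in TypeScript — `withKeepAlive` is an illustrative helper, not the plugin's actual internals:

```typescript
type Tick = { keepalive: true };

// Wrap a token stream; emit `content` whenever no real chunk arrives
// within `intervalMs`. The pending next() promise is reused across
// keepalive ticks so no real chunk is ever dropped.
async function* withKeepAlive(
  source: AsyncIterable<string>,
  intervalMs = 5000,
  content = "...",
): AsyncGenerator<string> {
  const it = source[Symbol.asyncIterator]();
  let pending: Promise<IteratorResult<string>> | null = null;
  while (true) {
    if (!pending) pending = it.next();
    let timer: ReturnType<typeof setTimeout> | undefined;
    const tick = new Promise<Tick>((resolve) => {
      timer = setTimeout(() => resolve({ keepalive: true }), intervalMs);
    });
    const winner = await Promise.race([pending, tick]);
    clearTimeout(timer);
    if ("keepalive" in winner) {
      // No data within the interval: emit the placeholder, keep waiting
      // for the same pending chunk.
      yield content;
      continue;
    }
    pending = null;
    if (winner.done) return;
    yield winner.value;
  }
}
```

As the v1.2.0 changelog notes, the placeholder must also be filtered out before messages are persisted, otherwise it contaminates saved content.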
+
+ ### Request Config
+
+ ```json
+ {
+   "extraHeaders": { "X-Custom-Header": "value" },
+   "extraBody": { "custom_field": "value" },
+   "modelKwargs": { "stop": ["\n"] }
+ }
+ ```
+
+ - `extraHeaders` — Custom HTTP headers sent with every request
+ - `extraBody` — Additional fields merged into the request body
+ - `modelKwargs` — Extra LangChain model parameters (stop sequences, etc.)
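To make the merge semantics concrete, here is a sketch of how such a config could be folded into an OpenAI-style request. The field names follow the README; `buildRequest` and its precedence rules are an assumption for illustration, not the plugin's real merge logic:

```typescript
interface RequestConfig {
  extraHeaders?: Record<string, string>;
  extraBody?: Record<string, unknown>;
  modelKwargs?: Record<string, unknown>;
}

// Merge a request config into a /chat/completions call.
// Custom entries are spread last, so they win on key collisions.
function buildRequest(
  body: Record<string, unknown>,
  cfg: RequestConfig,
  apiKey: string,
): { headers: Record<string, string>; body: Record<string, unknown> } {
  return {
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
      ...cfg.extraHeaders,
    },
    body: { ...body, ...cfg.modelKwargs, ...cfg.extraBody },
  };
}
```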
+
+ ### Response Config
+
+ ```json
+ {
+   "contentPath": "auto",
+   "reasoningKey": "reasoning_content",
+   "responseMapping": {
+     "content": "message.response"
+   }
+ }
+ ```
+
+ - `contentPath` — How to extract text from LangChain chunks. `"auto"` (default) detects string, array, and object formats. Or use a dot-path like `"0.text"`
+ - `reasoningKey` — Key name for reasoning/thinking content in `additional_kwargs` (default: `"reasoning_content"`)
+ - `responseMapping` — Maps non-standard LLM responses to OpenAI format before LangChain processes them:
+   - `content` — Dot-path to the content field in the raw response (e.g. `"message.response"`, `"data.text"`)
+   - `role` — Dot-path to role field (optional, defaults to `"assistant"`)
+   - `id` — Dot-path to response ID (optional)
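The `"auto"` content detection described above can be approximated by the following simplified sketch (the `extractText` helper is illustrative; the plugin's actual detection may handle more cases):

```typescript
type ContentBlock = { type?: string; text?: string };

// "auto" mode: accept plain strings, OpenAI-style content-block
// arrays, and single { text } objects; anything else yields "".
function extractText(content: unknown): string {
  if (typeof content === "string") return content;
  if (Array.isArray(content)) {
    return (content as (string | ContentBlock)[])
      .map((b) => (typeof b === "string" ? b : b?.text ?? ""))
      .join("");
  }
  if (content && typeof content === "object" && "text" in content) {
    return String((content as ContentBlock).text ?? "");
  }
  return "";
}
```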
+
+ ### Response Mapping Examples
+
+ | Raw LLM Response | `responseMapping.content` |
+ |---|---|
+ | `{"message": {"response": "..."}}` | `message.response` |
+ | `{"data": {"text": "..."}}` | `data.text` |
+ | `{"result": "..."}` | `result` |
+ | `{"output": {"content": {"text": "..."}}}` | `output.content.text` |
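The mappings in this table reduce to a dot-path lookup plus a reshape into an OpenAI-style message. A minimal sketch of that step, mirroring the documented behavior (helper names are illustrative, not the plugin's API):

```typescript
// Resolve a dot-path like "message.response" or "0.text" against a
// raw response object; missing segments resolve to undefined.
function getPath(obj: unknown, path: string): unknown {
  return path.split(".").reduce<unknown>(
    (cur, key) =>
      cur != null && typeof cur === "object"
        ? (cur as Record<string, unknown>)[key]
        : undefined,
    obj,
  );
}

// Apply a responseMapping to a non-standard raw response, producing
// an OpenAI-shaped message before LangChain processes it.
function mapResponse(
  raw: unknown,
  mapping: { content: string; role?: string; id?: string },
) {
  return {
    id: mapping.id ? getPath(raw, mapping.id) : undefined,
    role: (mapping.role && getPath(raw, mapping.role)) ?? "assistant",
    content: getPath(raw, mapping.content),
  };
}
```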
+
+ ### Model Settings
+
+ Standard OpenAI-compatible parameters: temperature, max tokens, top P, frequency/presence penalty, response format, timeout, max retries.
+
+ ## Changelog
+
+ ### v1.2.0
+
+ - **Fix**: Keepalive no longer interferes with tool call sequences (prevents tool call corruption)
+ - **Fix**: Gemini-compatible tool schema fixing — handles Zod schemas via dual-phase approach (pre/post conversion)
+ - **Fix**: Keepalive content no longer contaminates saved messages in DB
+ - **Fix**: Response metadata extraction with long ID sanitization (>128 chars truncated)
+ - **Fix**: Multi-path reasoning content detection (`additional_kwargs` + `kwargs.additional_kwargs`)
+ - **Fix**: Improved error recovery in keepalive consumer (immediate error propagation)
+
+ ### v1.1.1
+
+ - Stream keepalive proxy for long thinking phases
+ - Response mapping for non-standard LLM APIs
+
+ ### v1.0.0
+
+ - Initial release with OpenAI-compatible LLM provider support
+
+ ## License
+
+ Apache-2.0
@@ -7,4 +7,4 @@
  * For more information, please refer to: https://www.nocobase.com/agreement.
  */
 
- !function(e,t){"object"==typeof exports&&"object"==typeof module?module.exports=t(require("react"),require("@nocobase/plugin-ai/client"),require("@nocobase/client"),require("@nocobase/utils/client"),require("antd"),require("react-i18next")):"function"==typeof define&&define.amd?define("plugin-custom-llm",["react","@nocobase/plugin-ai/client","@nocobase/client","@nocobase/utils/client","antd","react-i18next"],t):"object"==typeof exports?exports["plugin-custom-llm"]=t(require("react"),require("@nocobase/plugin-ai/client"),require("@nocobase/client"),require("@nocobase/utils/client"),require("antd"),require("react-i18next")):e["plugin-custom-llm"]=t(e.react,e["@nocobase/plugin-ai/client"],e["@nocobase/client"],e["@nocobase/utils/client"],e.antd,e["react-i18next"])}(self,function(e,t,n,o,r,i){return function(){"use strict";var a={772:function(e){e.exports=n},645:function(e){e.exports=t},584:function(e){e.exports=o},721:function(e){e.exports=r},156:function(t){t.exports=e},238:function(e){e.exports=i}},c={};function u(e){var t=c[e];if(void 0!==t)return t.exports;var n=c[e]={exports:{}};return a[e](n,n.exports,u),n.exports}u.n=function(e){var t=e&&e.__esModule?function(){return e.default}:function(){return e};return u.d(t,{a:t}),t},u.d=function(e,t){for(var n in t)u.o(t,n)&&!u.o(e,n)&&Object.defineProperty(e,n,{enumerable:!0,get:t[n]})},u.o=function(e,t){return Object.prototype.hasOwnProperty.call(e,t)},u.r=function(e){"undefined"!=typeof Symbol&&Symbol.toStringTag&&Object.defineProperty(e,Symbol.toStringTag,{value:"Module"}),Object.defineProperty(e,"__esModule",{value:!0})};var l={};return!function(){u.r(l),u.d(l,{PluginCustomLLMClient:function(){return g},default:function(){return S}});var e=u(772),t=u(156),n=u.n(t),o=u(584),r=u(238),i="@nocobase/plugin-custom-llm",a=u(721),c=u(645),p=function(){var t=(0,r.useTranslation)(i,{nsMode:"fallback"}).t;return 
n().createElement("div",{style:{marginBottom:24}},n().createElement(a.Collapse,{bordered:!1,size:"small",items:[{key:"options",label:t("Options"),forceRender:!0,children:n().createElement(e.SchemaComponent,{schema:{type:"void",name:"custom-llm",properties:{temperature:{title:(0,o.tval)("Temperature",{ns:i}),type:"number","x-decorator":"FormItem","x-component":"InputNumber",default:.7,"x-component-props":{step:.1,min:0,max:2}},maxCompletionTokens:{title:(0,o.tval)("Max completion tokens",{ns:i}),type:"number","x-decorator":"FormItem","x-component":"InputNumber",default:-1},topP:{title:(0,o.tval)("Top P",{ns:i}),type:"number","x-decorator":"FormItem","x-component":"InputNumber",default:1,"x-component-props":{step:.1,min:0,max:1}},frequencyPenalty:{title:(0,o.tval)("Frequency penalty",{ns:i}),type:"number","x-decorator":"FormItem","x-component":"InputNumber",default:0,"x-component-props":{step:.1,min:-2,max:2}},presencePenalty:{title:(0,o.tval)("Presence penalty",{ns:i}),type:"number","x-decorator":"FormItem","x-component":"InputNumber",default:0,"x-component-props":{step:.1,min:-2,max:2}},responseFormat:{title:(0,o.tval)("Response format",{ns:i}),type:"string","x-decorator":"FormItem","x-component":"Select",enum:[{label:t("Text"),value:"text"},{label:t("JSON"),value:"json_object"}],default:"text"},timeout:{title:(0,o.tval)("Timeout (ms)",{ns:i}),type:"number","x-decorator":"FormItem","x-component":"InputNumber",default:6e4},maxRetries:{title:(0,o.tval)("Max retries",{ns:i}),type:"number","x-decorator":"FormItem","x-component":"InputNumber",default:1}}}})}]}))},s={components:{ProviderSettingsForm:function(){return n().createElement(e.SchemaComponent,{schema:{type:"void",properties:{apiKey:{title:(0,o.tval)("API Key",{ns:i}),type:"string",required:!0,"x-decorator":"FormItem","x-component":"TextAreaWithGlobalScope"},disableStream:{title:(0,o.tval)("Disable streaming",{ns:i}),type:"boolean","x-decorator":"FormItem","x-component":"Checkbox","x-content":(0,o.tval)("Disable 
streaming description",{ns:i})},streamKeepAlive:{title:(0,o.tval)("Stream keepalive",{ns:i}),type:"boolean","x-decorator":"FormItem","x-component":"Checkbox","x-content":(0,o.tval)("Stream keepalive description",{ns:i})},keepAliveIntervalMs:{title:(0,o.tval)("Keepalive interval (ms)",{ns:i}),type:"number","x-decorator":"FormItem","x-component":"InputNumber","x-component-props":{placeholder:"5000",min:1e3,step:1e3,style:{width:"100%"}},description:(0,o.tval)("Keepalive interval description",{ns:i})},keepAliveContent:{title:(0,o.tval)("Keepalive content",{ns:i}),type:"string","x-decorator":"FormItem","x-component":"Input","x-component-props":{placeholder:"..."},description:(0,o.tval)("Keepalive content description",{ns:i})},timeout:{title:(0,o.tval)("Timeout (ms)",{ns:i}),type:"number","x-decorator":"FormItem","x-component":"InputNumber","x-component-props":{placeholder:"120000",min:0,step:1e3,style:{width:"100%"}},description:(0,o.tval)("Timeout description",{ns:i})},requestConfig:{title:(0,o.tval)("Request config (JSON)",{ns:i}),type:"string","x-decorator":"FormItem","x-component":"Input.TextArea","x-component-props":{placeholder:JSON.stringify({extraHeaders:{},extraBody:{},modelKwargs:{}},null,2),rows:6,style:{fontFamily:"monospace",fontSize:12}},description:(0,o.tval)("Request config description",{ns:i})},responseConfig:{title:(0,o.tval)("Response config (JSON)",{ns:i}),type:"string","x-decorator":"FormItem","x-component":"Input.TextArea","x-component-props":{placeholder:JSON.stringify({contentPath:"auto",reasoningKey:"reasoning_content",responseMapping:{content:"message.response"}},null,2),rows:8,style:{fontFamily:"monospace",fontSize:12}},description:(0,o.tval)("Response config description",{ns:i})}}}})},ModelSettingsForm:function(){return 
n().createElement(e.SchemaComponent,{components:{Options:p,ModelSelect:c.ModelSelect},schema:{type:"void",properties:{model:{title:(0,o.tval)("Model",{ns:i}),type:"string",required:!0,"x-decorator":"FormItem","x-component":"ModelSelect"},options:{type:"void","x-component":"Options"}}}})}}};function m(e,t,n,o,r,i,a){try{var c=e[i](a),u=c.value}catch(e){n(e);return}c.done?t(u):Promise.resolve(u).then(o,r)}function f(e){return function(){var t=this,n=arguments;return new Promise(function(o,r){var i=e.apply(t,n);function a(e){m(i,o,r,a,c,"next",e)}function c(e){m(i,o,r,a,c,"throw",e)}a(void 0)})}}function d(e,t,n){return(d=x()?Reflect.construct:function(e,t,n){var o=[null];o.push.apply(o,t);var r=new(Function.bind.apply(e,o));return n&&b(r,n.prototype),r}).apply(null,arguments)}function y(e){return(y=Object.setPrototypeOf?Object.getPrototypeOf:function(e){return e.__proto__||Object.getPrototypeOf(e)})(e)}function b(e,t){return(b=Object.setPrototypeOf||function(e,t){return e.__proto__=t,e})(e,t)}function v(e){var t="function"==typeof Map?new Map:void 0;return(v=function(e){if(null===e||-1===Function.toString.call(e).indexOf("[native code]"))return e;if("function"!=typeof e)throw TypeError("Super expression must either be null or a function");if(void 0!==t){if(t.has(e))return t.get(e);t.set(e,n)}function n(){return d(e,arguments,y(this).constructor)}return n.prototype=Object.create(e.prototype,{constructor:{value:n,enumerable:!1,writable:!0,configurable:!0}}),b(n,e)})(e)}function x(){try{var e=!Boolean.prototype.valueOf.call(Reflect.construct(Boolean,[],function(){}))}catch(e){}return(x=function(){return!!e})()}function h(e,t){var n,o,r,i,a={label:0,sent:function(){if(1&r[0])throw r[1];return r[1]},trys:[],ops:[]};return i={next:c(0),throw:c(1),return:c(2)},"function"==typeof Symbol&&(i[Symbol.iterator]=function(){return this}),i;function c(i){return function(c){var u=[i,c];if(n)throw TypeError("Generator is already 
executing.");for(;a;)try{if(n=1,o&&(r=2&u[0]?o.return:u[0]?o.throw||((r=o.return)&&r.call(o),0):o.next)&&!(r=r.call(o,u[1])).done)return r;switch(o=0,r&&(u=[2&u[0],r.value]),u[0]){case 0:case 1:r=u;break;case 4:return a.label++,{value:u[1],done:!1};case 5:a.label++,o=u[1],u=[0];continue;case 7:u=a.ops.pop(),a.trys.pop();continue;default:if(!(r=(r=a.trys).length>0&&r[r.length-1])&&(6===u[0]||2===u[0])){a=0;continue}if(3===u[0]&&(!r||u[1]>r[0]&&u[1]<r[3])){a.label=u[1];break}if(6===u[0]&&a.label<r[1]){a.label=r[1],r=u;break}if(r&&a.label<r[2]){a.label=r[2],a.ops.push(u);break}r[2]&&a.ops.pop(),a.trys.pop();continue}u=t.call(e,a)}catch(e){u=[6,e],o=0}finally{n=r=0}if(5&u[0])throw u[1];return{value:u[0]?u[1]:void 0,done:!0}}}}var g=function(e){var t;if("function"!=typeof e&&null!==e)throw TypeError("Super expression must either be null or a function");function n(){var e,t;if(!(this instanceof n))throw TypeError("Cannot call a class as a function");return e=n,t=arguments,e=y(e),function(e,t){var n;if(t&&("object"==((n=t)&&"undefined"!=typeof Symbol&&n.constructor===Symbol?"symbol":typeof n)||"function"==typeof t))return t;if(void 0===e)throw ReferenceError("this hasn't been initialised - super() hasn't been called");return e}(this,x()?Reflect.construct(e,t||[],y(this).constructor):e.apply(this,t))}return n.prototype=Object.create(e&&e.prototype,{constructor:{value:n,writable:!0,configurable:!0}}),e&&b(n,e),t=[{key:"afterAdd",value:function(){return f(function(){return h(this,function(e){return[2]})})()}},{key:"beforeLoad",value:function(){return f(function(){return h(this,function(e){return[2]})})()}},{key:"load",value:function(){var e=this;return f(function(){return h(this,function(t){return e.aiPlugin.aiManager.registerLLMProvider("custom-llm",s),[2]})})()}},{key:"aiPlugin",get:function(){return this.app.pm.get("ai")}}],function(e,t){for(var n=0;n<t.length;n++){var o=t[n];o.enumerable=o.enumerable||!1,o.configurable=!0,"value"in 
o&&(o.writable=!0),Object.defineProperty(e,o.key,o)}}(n.prototype,t),n}(v(e.Plugin)),S=g}(),l}()});
+ !function(e,t){"object"==typeof exports&&"object"==typeof module?module.exports=t(require("react"),require("@nocobase/plugin-ai/client"),require("@nocobase/client"),require("@nocobase/utils/client"),require("antd"),require("react-i18next")):"function"==typeof define&&define.amd?define("plugin-custom-llm",["react","@nocobase/plugin-ai/client","@nocobase/client","@nocobase/utils/client","antd","react-i18next"],t):"object"==typeof exports?exports["plugin-custom-llm"]=t(require("react"),require("@nocobase/plugin-ai/client"),require("@nocobase/client"),require("@nocobase/utils/client"),require("antd"),require("react-i18next")):e["plugin-custom-llm"]=t(e.react,e["@nocobase/plugin-ai/client"],e["@nocobase/client"],e["@nocobase/utils/client"],e.antd,e["react-i18next"])}(self,function(e,t,n,o,r,i){return function(){"use strict";var a={772:function(e){e.exports=n},645:function(e){e.exports=t},584:function(e){e.exports=o},721:function(e){e.exports=r},156:function(t){t.exports=e},238:function(e){e.exports=i}},c={};function l(e){var t=c[e];if(void 0!==t)return t.exports;var n=c[e]={exports:{}};return a[e](n,n.exports,l),n.exports}l.n=function(e){var t=e&&e.__esModule?function(){return e.default}:function(){return e};return l.d(t,{a:t}),t},l.d=function(e,t){for(var n in t)l.o(t,n)&&!l.o(e,n)&&Object.defineProperty(e,n,{enumerable:!0,get:t[n]})},l.o=function(e,t){return Object.prototype.hasOwnProperty.call(e,t)},l.r=function(e){"undefined"!=typeof Symbol&&Symbol.toStringTag&&Object.defineProperty(e,Symbol.toStringTag,{value:"Module"}),Object.defineProperty(e,"__esModule",{value:!0})};var u={};return!function(){l.r(u),l.d(u,{PluginCustomLLMClient:function(){return h},default:function(){return I}});var e=l(772),t=l(156),n=l.n(t),o=l(584),r=l(238),i="@nocobase/plugin-custom-llm",a=l(721),c=l(645),p=function(){var t=(0,r.useTranslation)(i,{nsMode:"fallback"}).t;return 
n().createElement("div",{style:{marginBottom:24}},n().createElement(a.Collapse,{bordered:!1,size:"small",items:[{key:"options",label:t("Options"),forceRender:!0,children:n().createElement(e.SchemaComponent,{schema:{type:"void",name:"custom-llm",properties:{temperature:{title:(0,o.tval)("Temperature",{ns:i}),type:"number","x-decorator":"FormItem","x-component":"InputNumber",default:.7,"x-component-props":{step:.1,min:0,max:2}},maxCompletionTokens:{title:(0,o.tval)("Max completion tokens",{ns:i}),type:"number","x-decorator":"FormItem","x-component":"InputNumber",default:-1},topP:{title:(0,o.tval)("Top P",{ns:i}),type:"number","x-decorator":"FormItem","x-component":"InputNumber",default:1,"x-component-props":{step:.1,min:0,max:1}},frequencyPenalty:{title:(0,o.tval)("Frequency penalty",{ns:i}),type:"number","x-decorator":"FormItem","x-component":"InputNumber",default:0,"x-component-props":{step:.1,min:-2,max:2}},presencePenalty:{title:(0,o.tval)("Presence penalty",{ns:i}),type:"number","x-decorator":"FormItem","x-component":"InputNumber",default:0,"x-component-props":{step:.1,min:-2,max:2}},responseFormat:{title:(0,o.tval)("Response format",{ns:i}),type:"string","x-decorator":"FormItem","x-component":"Select",enum:[{label:t("Text"),value:"text"},{label:t("JSON"),value:"json_object"}],default:"text"},timeout:{title:(0,o.tval)("Timeout (ms)",{ns:i}),type:"number","x-decorator":"FormItem","x-component":"InputNumber",default:6e4},maxRetries:{title:(0,o.tval)("Max retries",{ns:i}),type:"number","x-decorator":"FormItem","x-component":"InputNumber",default:1}}}})}]}))},s={components:{ProviderSettingsForm:function(){return n().createElement(e.SchemaComponent,{schema:{type:"void",properties:{apiKey:{title:(0,o.tval)("API Key",{ns:i}),type:"string",required:!0,"x-decorator":"FormItem","x-component":"TextAreaWithGlobalScope"},disableStream:{title:(0,o.tval)("Disable streaming",{ns:i}),type:"boolean","x-decorator":"FormItem","x-component":"Checkbox","x-content":(0,o.tval)("Disable 
streaming description",{ns:i})},enableReasoning:{title:(0,o.tval)("Enable reasoning",{ns:i}),type:"boolean","x-decorator":"FormItem","x-component":"Checkbox","x-content":(0,o.tval)("Enable reasoning description",{ns:i})},streamKeepAlive:{title:(0,o.tval)("Stream keepalive",{ns:i}),type:"boolean","x-decorator":"FormItem","x-component":"Checkbox","x-content":(0,o.tval)("Stream keepalive description",{ns:i})},keepAliveIntervalMs:{title:(0,o.tval)("Keepalive interval (ms)",{ns:i}),type:"number","x-decorator":"FormItem","x-component":"InputNumber","x-component-props":{placeholder:"5000",min:1e3,step:1e3,style:{width:"100%"}},description:(0,o.tval)("Keepalive interval description",{ns:i})},keepAliveContent:{title:(0,o.tval)("Keepalive content",{ns:i}),type:"string","x-decorator":"FormItem","x-component":"Input","x-component-props":{placeholder:"..."},description:(0,o.tval)("Keepalive content description",{ns:i})},timeout:{title:(0,o.tval)("Timeout (ms)",{ns:i}),type:"number","x-decorator":"FormItem","x-component":"InputNumber","x-component-props":{placeholder:"120000",min:0,step:1e3,style:{width:"100%"}},description:(0,o.tval)("Timeout description",{ns:i})},requestConfig:{title:(0,o.tval)("Request config (JSON)",{ns:i}),type:"string","x-decorator":"FormItem","x-component":"Input.TextArea","x-component-props":{placeholder:JSON.stringify({extraHeaders:{},extraBody:{},modelKwargs:{}},null,2),rows:6,style:{fontFamily:"monospace",fontSize:12}},description:(0,o.tval)("Request config description",{ns:i})},responseConfig:{title:(0,o.tval)("Response config (JSON)",{ns:i}),type:"string","x-decorator":"FormItem","x-component":"Input.TextArea","x-component-props":{placeholder:JSON.stringify({contentPath:"auto",reasoningKey:"reasoning_content",responseMapping:{content:"message.response",tool_calls:"message.tool_calls",finish_reason:"finish_reason"}},null,2),rows:8,style:{fontFamily:"monospace",fontSize:12}},description:(0,o.tval)("Response config 
description",{ns:i})}}}})},ModelSettingsForm:function(){return n().createElement(e.SchemaComponent,{components:{Options:p,ModelSelect:c.ModelSelect},schema:{type:"void",properties:{model:{title:(0,o.tval)("Model",{ns:i}),type:"string",required:!0,"x-decorator":"FormItem","x-component":"ModelSelect"},options:{type:"void","x-component":"Options"}}}})}}};function m(e,t,n,o,r,i,a){try{var c=e[i](a),l=c.value}catch(e){n(e);return}c.done?t(l):Promise.resolve(l).then(o,r)}function f(e){return function(){var t=this,n=arguments;return new Promise(function(o,r){var i=e.apply(t,n);function a(e){m(i,o,r,a,c,"next",e)}function c(e){m(i,o,r,a,c,"throw",e)}a(void 0)})}}function d(e,t,n){return(d=v()?Reflect.construct:function(e,t,n){var o=[null];o.push.apply(o,t);var r=new(Function.bind.apply(e,o));return n&&b(r,n.prototype),r}).apply(null,arguments)}function y(e){return(y=Object.setPrototypeOf?Object.getPrototypeOf:function(e){return e.__proto__||Object.getPrototypeOf(e)})(e)}function b(e,t){return(b=Object.setPrototypeOf||function(e,t){return e.__proto__=t,e})(e,t)}function x(e){var t="function"==typeof Map?new Map:void 0;return(x=function(e){if(null===e||-1===Function.toString.call(e).indexOf("[native code]"))return e;if("function"!=typeof e)throw TypeError("Super expression must either be null or a function");if(void 0!==t){if(t.has(e))return t.get(e);t.set(e,n)}function n(){return d(e,arguments,y(this).constructor)}return n.prototype=Object.create(e.prototype,{constructor:{value:n,enumerable:!1,writable:!0,configurable:!0}}),b(n,e)})(e)}function v(){try{var e=!Boolean.prototype.valueOf.call(Reflect.construct(Boolean,[],function(){}))}catch(e){}return(v=function(){return!!e})()}function g(e,t){var n,o,r,i,a={label:0,sent:function(){if(1&r[0])throw r[1];return r[1]},trys:[],ops:[]};return i={next:c(0),throw:c(1),return:c(2)},"function"==typeof Symbol&&(i[Symbol.iterator]=function(){return this}),i;function c(i){return function(c){var l=[i,c];if(n)throw TypeError("Generator is 
already executing.");for(;a;)try{if(n=1,o&&(r=2&l[0]?o.return:l[0]?o.throw||((r=o.return)&&r.call(o),0):o.next)&&!(r=r.call(o,l[1])).done)return r;switch(o=0,r&&(l=[2&l[0],r.value]),l[0]){case 0:case 1:r=l;break;case 4:return a.label++,{value:l[1],done:!1};case 5:a.label++,o=l[1],l=[0];continue;case 7:l=a.ops.pop(),a.trys.pop();continue;default:if(!(r=(r=a.trys).length>0&&r[r.length-1])&&(6===l[0]||2===l[0])){a=0;continue}if(3===l[0]&&(!r||l[1]>r[0]&&l[1]<r[3])){a.label=l[1];break}if(6===l[0]&&a.label<r[1]){a.label=r[1],r=l;break}if(r&&a.label<r[2]){a.label=r[2],a.ops.push(l);break}r[2]&&a.ops.pop(),a.trys.pop();continue}l=t.call(e,a)}catch(e){l=[6,e],o=0}finally{n=r=0}if(5&l[0])throw l[1];return{value:l[0]?l[1]:void 0,done:!0}}}}var h=function(e){var t;if("function"!=typeof e&&null!==e)throw TypeError("Super expression must either be null or a function");function n(){var e,t;if(!(this instanceof n))throw TypeError("Cannot call a class as a function");return e=n,t=arguments,e=y(e),function(e,t){var n;if(t&&("object"==((n=t)&&"undefined"!=typeof Symbol&&n.constructor===Symbol?"symbol":typeof n)||"function"==typeof t))return t;if(void 0===e)throw ReferenceError("this hasn't been initialised - super() hasn't been called");return e}(this,v()?Reflect.construct(e,t||[],y(this).constructor):e.apply(this,t))}return n.prototype=Object.create(e&&e.prototype,{constructor:{value:n,writable:!0,configurable:!0}}),e&&b(n,e),t=[{key:"afterAdd",value:function(){return f(function(){return g(this,function(e){return[2]})})()}},{key:"beforeLoad",value:function(){return f(function(){return g(this,function(e){return[2]})})()}},{key:"load",value:function(){var e=this;return f(function(){return g(this,function(t){return e.aiPlugin.aiManager.registerLLMProvider("custom-llm",s),[2]})})()}},{key:"aiPlugin",get:function(){return this.app.pm.get("ai")}}],function(e,t){for(var n=0;n<t.length;n++){var o=t[n];o.enumerable=o.enumerable||!1,o.configurable=!0,"value"in 
o&&(o.writable=!0),Object.defineProperty(e,o.key,o)}}(n.prototype,t),n}(x(e.Plugin)),I=h}(),u}()});
@@ -14,9 +14,9 @@ module.exports = {
  "@nocobase/server": "2.0.32",
  "@nocobase/flow-engine": "2.0.32",
  "@nocobase/database": "2.0.32",
- "axios": "1.14.0",
+ "axios": "1.7.7",
  "@nocobase/actions": "2.0.32",
- "react": "18.3.1",
+ "react": "18.2.0",
  "@nocobase/utils": "2.0.32",
  "antd": "5.24.2"
 };
@@ -1,29 +1,31 @@
- {
-   "Base URL": "Base URL",
-   "API Key": "API Key",
-   "Model": "Model",
-   "Options": "Options",
-   "Temperature": "Temperature",
-   "Max completion tokens": "Max completion tokens",
-   "Top P": "Top P",
-   "Frequency penalty": "Frequency penalty",
-   "Presence penalty": "Presence penalty",
-   "Response format": "Response format",
-   "Text": "Text",
-   "JSON": "JSON",
-   "Timeout (ms)": "Timeout (ms)",
-   "Timeout description": "Request timeout in milliseconds. Increase this for models with long thinking/processing phases. Default: 120000 (2 minutes).",
-   "Max retries": "Max retries",
-   "Disable streaming": "Disable streaming",
-   "Disable streaming description": "Use non-streaming mode. Enable this for models that have a long \"thinking\" phase before responding, which can cause empty stream values and processing to terminate early.",
-   "Stream keepalive": "Stream keepalive",
-   "Stream keepalive description": "Keep stream alive during model thinking. Injects placeholder content when no data arrives within the keepalive interval. Works only when streaming is enabled.",
-   "Keepalive interval (ms)": "Keepalive interval (ms)",
-   "Keepalive interval description": "Interval in milliseconds between keepalive signals. Default: 5000 (5 seconds).",
-   "Keepalive content": "Keepalive content",
-   "Keepalive content description": "Placeholder text used as keepalive signal (invisible to the user). Default: '...'",
-   "Request config (JSON)": "Request config (JSON)",
-   "Request config description": "Extra configuration for LLM requests. Supported keys: extraHeaders (custom HTTP headers), extraBody (extra request body fields), modelKwargs (LangChain model kwargs).",
-   "Response config (JSON)": "Response config (JSON)",
-   "Response config description": "Configure response parsing. contentPath: 'auto' or dot-path. reasoningKey: key for reasoning content. responseMapping: { content: 'dot.path' } maps non-standard LLM response to OpenAI format (e.g., 'message.response')."
- }
+ {
+ "Base URL": "Base URL",
+ "API Key": "API Key",
+ "Model": "Model",
+ "Options": "Options",
+ "Temperature": "Temperature",
+ "Max completion tokens": "Max completion tokens",
+ "Top P": "Top P",
+ "Frequency penalty": "Frequency penalty",
+ "Presence penalty": "Presence penalty",
+ "Response format": "Response format",
+ "Text": "Text",
+ "JSON": "JSON",
+ "Timeout (ms)": "Timeout (ms)",
+ "Timeout description": "Request timeout in milliseconds. Increase this for models with long thinking/processing phases. Default: 120000 (2 minutes).",
+ "Max retries": "Max retries",
+ "Disable streaming": "Disable streaming",
+ "Disable streaming description": "Use non-streaming mode. Enable this for models that have a long \"thinking\" phase before responding, which can cause empty stream values and processing to terminate early.",
+ "Enable reasoning": "Enable reasoning",
+ "Enable reasoning description": "Enable reasoning_content support for models like DeepSeek-R1. When enabled, reasoning content is preserved during tool call round-trips. Required for models that include reasoning in their responses.",
+ "Stream keepalive": "Stream keepalive",
+ "Stream keepalive description": "Keep stream alive during model thinking. Injects placeholder content when no data arrives within the keepalive interval. Works only when streaming is enabled.",
+ "Keepalive interval (ms)": "Keepalive interval (ms)",
+ "Keepalive interval description": "Interval in milliseconds between keepalive signals. Default: 5000 (5 seconds).",
+ "Keepalive content": "Keepalive content",
+ "Keepalive content description": "Placeholder text used as keepalive signal (invisible to the user). Default: '...'",
+ "Request config (JSON)": "Request config (JSON)",
+ "Request config description": "Extra configuration for LLM requests. Supported keys: extraHeaders (custom HTTP headers), extraBody (extra request body fields), modelKwargs (LangChain model kwargs).",
+ "Response config (JSON)": "Response config (JSON)",
+ "Response config description": "Configure response parsing. contentPath: 'auto' or dot-path. reasoningKey: key for reasoning content. responseMapping: { content: 'dot.path', tool_calls: 'dot.path', finish_reason: 'dot.path' } — maps non-standard LLM response to OpenAI format. tool_calls/finish_reason are auto-detected if not specified."
+ }
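To make the "Response config (JSON)" description above concrete, here is a hypothetical config for a provider that nests its reply text under `message.response`; the `reasoning_content` key value is illustrative, not a documented default:

```json
{
  "contentPath": "auto",
  "reasoningKey": "reasoning_content",
  "responseMapping": {
    "content": "message.response"
  }
}
```

With this mapping, the plugin reads the reply text from `message.response` in the provider's payload and normalizes it to OpenAI format; per the description, `tool_calls` and `finish_reason` are auto-detected when not specified.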
@@ -1,29 +1,31 @@
- {
- "Base URL": "URL cơ sở",
- "API Key": "API Key",
- "Model": "Mô hình",
- "Options": "Tùy chọn",
- "Temperature": "Nhiệt độ",
- "Max completion tokens": "Số token tối đa",
- "Top P": "Top P",
- "Frequency penalty": "Hình phạt tần suất",
- "Presence penalty": "Hình phạt sự hiện diện",
- "Response format": "Định dạng phản hồi",
- "Text": "Văn bản",
- "JSON": "JSON",
- "Timeout (ms)": "Thời gian chờ (ms)",
- "Timeout description": "Thời gian chờ request tính bằng mili giây. Tăng giá trị này cho các model có giai đoạn thinking/xử lý dài. Mặc định: 120000 (2 phút).",
- "Max retries": "Số lần thử lại tối đa",
- "Disable streaming": "Tắt streaming",
- "Disable streaming description": "Sử dụng chế độ non-streaming. Bật tính năng này cho các model có giai đoạn \"thinking\" dài trước khi trả lời, gây ra stream rỗng và xử lý bị ngắt sớm.",
- "Stream keepalive": "Giữ kết nối stream",
- "Stream keepalive description": "Giữ stream hoạt động khi model đang thinking. Gửi nội dung giữ kết nối khi không dữ liệu trong khoảng thời gian đã cấu hình. Chỉ hoạt động khi streaming được bật.",
- "Keepalive interval (ms)": "Khoảng thời gian keepalive (ms)",
- "Keepalive interval description": "Khoảng thời gian giữa các tín hiệu keepalive, tính bằng mili giây. Mặc định: 5000 (5 giây).",
- "Keepalive content": "Nội dung keepalive",
- "Keepalive content description": "Nội dung giữ kết nối (không hiển thị cho người dùng). Mặc định: '...'",
- "Request config (JSON)": "Cấu hình request (JSON)",
- "Request config description": "Cấu hình bổ sung cho request LLM. Các key hỗ trợ: extraHeaders (HTTP headers tùy chỉnh), extraBody (thêm trường vào request body), modelKwargs (tham số model LangChain).",
- "Response config (JSON)": "Cấu hình response (JSON)",
- "Response config description": "Cấu hình parse response. contentPath: 'auto' hoặc dot-path. reasoningKey: key reasoning. responseMapping: { content: 'dot.path' } mapping response không chuẩn OpenAI (ví dụ: 'message.response')."
- }
+ {
+ "Base URL": "URL cơ sở",
+ "API Key": "API Key",
+ "Model": "Mô hình",
+ "Options": "Tùy chọn",
+ "Temperature": "Nhiệt độ",
+ "Max completion tokens": "Số token tối đa",
+ "Top P": "Top P",
+ "Frequency penalty": "Hình phạt tần suất",
+ "Presence penalty": "Hình phạt sự hiện diện",
+ "Response format": "Định dạng phản hồi",
+ "Text": "Văn bản",
+ "JSON": "JSON",
+ "Timeout (ms)": "Thời gian chờ (ms)",
+ "Timeout description": "Thời gian chờ request tính bằng mili giây. Tăng giá trị này cho các model có giai đoạn thinking/xử lý dài. Mặc định: 120000 (2 phút).",
+ "Max retries": "Số lần thử lại tối đa",
+ "Disable streaming": "Tắt streaming",
+ "Disable streaming description": "Sử dụng chế độ non-streaming. Bật tính năng này cho các model có giai đoạn \"thinking\" dài trước khi trả lời, gây ra stream rỗng và xử lý bị ngắt sớm.",
+ "Enable reasoning": "Bật reasoning",
+ "Enable reasoning description": "Hỗ trợ reasoning_content cho các model như DeepSeek-R1. Khi bật, reasoning content được bảo toàn trong các vòng gọi tool. Cần thiết cho các model trả về reasoning trong phản hồi.",
+ "Stream keepalive": "Giữ kết nối stream",
+ "Stream keepalive description": "Giữ stream hoạt động khi model đang thinking. Gửi nội dung giữ kết nối khi không có dữ liệu trong khoảng thời gian đã cấu hình. Chỉ hoạt động khi streaming được bật.",
+ "Keepalive interval (ms)": "Khoảng thời gian keepalive (ms)",
+ "Keepalive interval description": "Khoảng thời gian giữa các tín hiệu keepalive, tính bằng mili giây. Mặc định: 5000 (5 giây).",
+ "Keepalive content": "Nội dung keepalive",
+ "Keepalive content description": "Nội dung giữ kết nối (không hiển thị cho người dùng). Mặc định: '...'",
+ "Request config (JSON)": "Cấu hình request (JSON)",
+ "Request config description": "Cấu hình bổ sung cho request LLM. Các key hỗ trợ: extraHeaders (HTTP headers tùy chỉnh), extraBody (thêm trường vào request body), modelKwargs (tham số model LangChain).",
+ "Response config (JSON)": "Cấu hình response (JSON)",
+ "Response config description": "Cấu hình parse response. contentPath: 'auto' hoặc dot-path. reasoningKey: key reasoning. responseMapping: { content: 'dot.path', tool_calls: 'dot.path', finish_reason: 'dot.path' } — mapping response không chuẩn OpenAI. tool_calls/finish_reason tự động phát hiện nếu không chỉ định."
+ }
@@ -1,16 +1,16 @@
- {
- "Base URL": "基础 URL",
- "API Key": "API 密钥",
- "Model": "模型",
- "Options": "选项",
- "Temperature": "温度",
- "Max completion tokens": "最大完成令牌数",
- "Top P": "Top P",
- "Frequency penalty": "频率惩罚",
- "Presence penalty": "存在惩罚",
- "Response format": "响应格式",
- "Text": "文本",
- "JSON": "JSON",
- "Timeout (ms)": "超时 (毫秒)",
- "Max retries": "最大重试次数"
- }
+ {
+ "Base URL": "基础 URL",
+ "API Key": "API 密钥",
+ "Model": "模型",
+ "Options": "选项",
+ "Temperature": "温度",
+ "Max completion tokens": "最大完成令牌数",
+ "Top P": "Top P",
+ "Frequency penalty": "频率惩罚",
+ "Presence penalty": "存在惩罚",
+ "Response format": "响应格式",
+ "Text": "文本",
+ "JSON": "JSON",
+ "Timeout (ms)": "超时 (毫秒)",
+ "Max retries": "最大重试次数"
+ }
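The "Request config (JSON)" description lists three supported keys: extraHeaders, extraBody, and modelKwargs. A hypothetical config might look like the following; the header name and body fields are illustrative placeholders, not documented values:

```json
{
  "extraHeaders": { "X-Custom-Header": "example" },
  "extraBody": { "repetition_penalty": 1.1 },
  "modelKwargs": { "top_k": 40 }
}
```

Under this sketch, extraHeaders would be merged into the outgoing HTTP request, extraBody into the request body sent to `/chat/completions`, and modelKwargs passed through to the LangChain model.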