plugin-sensitive-filter-xr 0.0.9 → 0.1.7

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,134 +1,72 @@
1
- # Xpert Plugin: Sensitive Filter Middleware
1
+ # Sensitive Filter Middleware
2
2
 
3
- `@xpert-ai/plugin-sensitive-filter` provides input/output sensitive-content filtering in two mutually exclusive modes:
3
+ `@xpert-ai/plugin-sensitive-filter` filters sensitive content for both input and output in two mutually exclusive modes:
4
4
 
5
- - `rule`: rule-based matching (keyword / regex)
6
- - `llm`: natural-language rule filtering (fixed rewrite policy)
5
+ - `rule`: deterministic rules (`keyword` / `regex`)
6
+ - `llm`: natural-language policy evaluation with rewrite-only enforcement
7
7
 
8
- ## Quick Start
8
+ ## Lifecycle Hooks
9
9
 
10
- 1. Add the "Sensitive Content Filter Middleware" to your workflow.
11
- 2. Choose a `mode`: `rule` or `llm` (pick one).
12
- 3. In `rule` mode, fill in the rule table; in `llm` mode, only the model, scope, and moderation rule description are needed.
13
- 4. Run the workflow to verify behavior in the input and output phases.
14
- 5. Review the audit records in the logs.
10
+ - `beforeAgent`: evaluates and optionally rewrites/blocks input
11
+ - `wrapModelCall`: evaluates and optionally rewrites/blocks output
12
+ - `afterAgent`: writes audit snapshot
15
13
 
16
- ## Execution Flow
14
+ ## Configuration
17
15
 
18
- - `beforeAgent`: processes input
19
- - `wrapModelCall`: processes output
20
- - `afterAgent`: records the audit
16
+ ### Top-level
21
17
 
22
- ## Top-level Parameters
18
+ | Field | Type | Required | Default | Description |
19
+ | --- | --- | --- | --- | --- |
20
+ | `mode` | `'rule' \| 'llm'` | Yes | `rule` | Select one mode. |
21
+ | `rules` | `Array<Rule>` | Runtime-required in `rule` mode | `[]` | Business rules for `rule` mode. |
22
+ | `caseSensitive` | `boolean` | No | `false` | Case-sensitive matching in `rule` mode. |
23
+ | `normalize` | `boolean` | No | `true` | Whitespace normalization in `rule` mode. |
24
+ | `llm` | `object` | Runtime-required in `llm` mode | - | LLM mode configuration. |
23
25
 
24
- | Parameter | Type | Required | Default | Description |
25
- |---|---|---|---|---|
26
- | `mode` | `'rule' \| 'llm'` | Yes | `rule` | Filter mode (mutually exclusive). |
27
- | `rules` | `Array<Rule>` | Recommended in `rule` mode | `[]` | Business rules; effective only in `rule` mode. |
28
- | `generalPack` | `object` | No | See sub-fields | General rule pack (local lexicon fallback); effective only in `rule` mode. |
29
- | `caseSensitive` | `boolean` | No | `false` | Case-sensitive matching; effective only in `rule` mode. |
30
- | `normalize` | `boolean` | No | `true` | Text normalization; effective only in `rule` mode. |
31
- | `llm` | `object` | Runtime-required in `llm` mode | - | LLM filter configuration; effective only in `llm` mode. |
26
+ ### Rule Mode (`mode=rule`)
32
27
 
33
- ## Rule Mode Parameters
28
+ `rules[]` fields:
34
29
 
35
- ### rules[]
30
+ | Field | Type | Required | Description |
31
+ | --- | --- | --- | --- |
32
+ | `id` | `string` | No | Auto-generated when empty (`rule-{index+1}`). |
33
+ | `pattern` | `string` | Yes | Match pattern. |
34
+ | `type` | `'keyword' \| 'regex'` | Yes | Match type. |
35
+ | `scope` | `'input' \| 'output' \| 'both'` | Yes | Match phase. |
36
+ | `severity` | `'high' \| 'medium'` | Yes | Conflict priority (`high` > `medium`). |
37
+ | `action` | `'block' \| 'rewrite'` | Yes | Hit action. |
38
+ | `replacementText` | `string` | No | Optional replacement/block message. |
36
39
 
37
- | Field | Type | Required | Default | Description |
38
- |---|---|---|---|---|
39
- | `id` | `string` | No | Auto-generated as `rule-{index+1}` | Rule identifier. |
40
- | `pattern` | `string` | Yes | - | Content to match. |
41
- | `type` | `'keyword' \| 'regex'` | Yes | - | Match method. |
42
- | `scope` | `'input' \| 'output' \| 'both'` | Yes | - | Effective phase. |
43
- | `severity` | `'high' \| 'medium'` | Yes | - | Conflict priority (`high > medium`). |
44
- | `action` | `'block' \| 'rewrite'` | Yes | - | Action on hit. |
45
- | `replacementText` | `string` | No | `[已过滤]` (when rewriting) | Custom block/rewrite text. |
40
+ Runtime validation requires at least one valid rule with:
41
+ `pattern/type/action/scope/severity`.
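As an illustration of the rule schema above, a minimal valid `rule` mode configuration might look like this (pattern and replacement values are illustrative only):

```json
{
  "mode": "rule",
  "rules": [
    {
      "pattern": "confidential",
      "type": "keyword",
      "scope": "both",
      "severity": "high",
      "action": "block"
    },
    {
      "pattern": "(passport|credit card)",
      "type": "regex",
      "scope": "output",
      "severity": "medium",
      "action": "rewrite",
      "replacementText": "[已过滤]"
    }
  ],
  "normalize": true,
  "caseSensitive": false
}
```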
46
42
 
47
- ### generalPack
43
+ ### LLM Mode (`mode=llm`)
48
44
 
49
- | Field | Type | Required | Default | Description |
50
- |---|---|---|---|---|
51
- | `enabled` | `boolean` | | `false` | Whether to enable the general rule pack. |
52
- | `profile` | `'strict' \| 'balanced'` | | `balanced` | `strict` is stricter; `balanced` is more conservative. |
45
+ | Field | Type | Required (runtime) | Default | Description |
46
+ | --- | --- | --- | --- | --- |
47
+ | `model` | `ICopilotModel` | Yes | - | Policy evaluation model. |
48
+ | `scope` | `'input' \| 'output' \| 'both'` | Yes | - | Evaluation phase scope. |
49
+ | `rulePrompt` | `string` | Yes | - | Natural-language policy description. |
50
+ | `rewriteFallbackText` | `string` | No | `[已过滤]` | Fallback rewrite text. |
51
+ | `timeoutMs` | `number` | No | unlimited | Per-evaluation timeout in milliseconds (max `120000`). |
53
52
 
54
- ## LLM Mode Parameters (Simplified)
53
+ Notes:
55
54
 
56
- ### llm
55
+ - The middleware internally enforces rewrite-only behavior for LLM hits.
56
+ - Structured output method is internally adaptive; the UI does not expose method selection.
57
+ - Internal decision traces are muted from chat output.
57
58
 
58
- | Field | Type | Required (runtime) | Default | Description |
59
- |---|---|---|---|---|
60
- | `model` | `ICopilotModel` | Yes | - | Filter model used for evaluation. |
61
- | `scope` | `'input' \| 'output' \| 'both'` | Yes | - | Effective scope. |
62
- | `rulePrompt` | `string` | Yes | - | Moderation rule description (natural language; no JSON required). |
63
- | `rewriteFallbackText` | `string` | No | `[已过滤]` | Fallback text when a hit returns no rewrite text. |
64
- | `timeoutMs` | `number` | No | unlimited | Evaluation timeout in milliseconds (max `120000`). |
59
+ ## Backward Compatibility
65
60
 
66
- ### LLM Mode Behavior
61
+ Historical configurations may still include `generalPack`.
67
62
 
68
- - Users only write natural-language rules; no JSON protocol is required.
69
- - `outputMethod` is adapted automatically inside the middleware (legacy configs are still read) and is hidden from the configuration panel by default.
70
- - On a hit, `rewrite` is always applied.
71
- - On an LLM invocation error, `rewrite` is also applied.
72
- - If `timeoutMs` is set and the evaluation times out, `rewriteFallbackText` is used directly as the rewrite result.
73
- - If the model does not support the current `outputMethod`'s `response_format`, other structured methods are tried automatically, falling back to plain-text JSON parsing.
74
- - Legacy configs that still carry `onLlmError/systemPrompt/errorRewriteText` are handled for compatibility with a deprecation warning.
75
- - Internal evaluation and audit tracing run silently by default; internal JSON such as `{"matched":false}` is never passed through to chat replies.
63
+ Current behavior:
76
64
 
77
- ## Filling In the UI (Minimal)
65
+ - The field is ignored.
66
+ - Execution continues.
67
+ - Rule/LLM behavior is driven only by current supported fields.
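For example, a legacy config that still carries `generalPack` (sub-fields per the old docs; values shown for illustration) is accepted as-is, and the `generalPack` block is simply ignored:

```json
{
  "mode": "rule",
  "rules": [
    {
      "pattern": "secret",
      "type": "keyword",
      "scope": "both",
      "severity": "medium",
      "action": "rewrite"
    }
  ],
  "generalPack": { "enabled": true, "profile": "strict" }
}
```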
78
68
 
79
- With `mode=llm`, at minimum only these 3 fields are needed:
80
-
81
- 1. `llm.model`: choose the filter model (a stable, low-latency model is recommended).
82
- 2. `llm.scope`: start with `both`.
83
- 3. `llm.rulePrompt`: write the moderation rule in natural language; no JSON needed.
84
-
85
- Example (personal information scenario):
86
-
87
- - `llm.rulePrompt`: If the text contains personal sensitive information such as phone numbers, ID card numbers, bank card numbers, or home addresses, treat it as a hit and return a redacted rewrite.
88
- - `llm.rewriteFallbackText`: `[已过滤]`
89
-
90
- Note:
91
-
92
- - `llm` mode is rewrite-only, never block. For hard blocking, use `mode=rule` with `action=block`.
93
-
94
- ## Timeouts and Latency
95
-
96
- - `timeoutMs` is the maximum wait for a single LLM evaluation call (in milliseconds), not a total workflow timeout.
97
- - If set too low (e.g. `3000`), slow models time out more easily, in which case `rewriteFallbackText` is used directly.
98
- - Without `timeoutMs`, evaluation waits for the model indefinitely: more reliable, but worst-case latency grows.
99
- - When output filtering hits, responses usually feel slower: the full output must be collected before it is rewritten and returned.
100
- - Seeing multiple `success` cards is normal: input evaluation, output evaluation, and the final audit are separate execution steps.
101
-
102
- ## Configuration Examples
103
-
104
- ### Example 1: rule mode
105
-
106
- ```json
107
- {
108
- "mode": "rule",
109
- "rules": [
110
- {
111
- "pattern": "炸弹",
112
- "type": "keyword",
113
- "scope": "both",
114
- "severity": "high",
115
- "action": "block"
116
- },
117
- {
118
- "pattern": "(身份证|手机号)",
119
- "type": "regex",
120
- "scope": "output",
121
- "severity": "medium",
122
- "action": "rewrite",
123
- "replacementText": "该回答包含敏感信息,已处理。"
124
- }
125
- ],
126
- "normalize": true,
127
- "caseSensitive": false
128
- }
129
- ```
130
-
131
- ### Example 2: llm mode (recommended)
69
+ ## Minimal LLM Example
132
70
 
133
71
  ```json
134
72
  {
@@ -136,37 +74,24 @@
136
74
  "llm": {
137
75
  "model": { "provider": "openai", "model": "gpt-4o-mini" },
138
76
  "scope": "both",
139
- "rulePrompt": "若内容包含违法、暴力或隐私泄露,请改写为安全且中性的表达。",
77
+ "rulePrompt": "If content contains ID cards, phone numbers, bank cards, or home addresses, rewrite it into a privacy-safe response.",
140
78
  "rewriteFallbackText": "[已过滤]",
141
79
  "timeoutMs": 3000
142
80
  }
143
81
  }
144
82
  ```
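The `timeoutMs` fallback documented above can be sketched as follows. This is an illustrative reimplementation, not the middleware's actual internals, and `evaluateWithTimeout` is a hypothetical helper name:

```javascript
// Illustrative sketch (assumed behavior): race a single LLM evaluation call
// against a timer; on timeout, return a rewrite decision that uses
// `rewriteFallbackText`, mirroring the documented fallback.
async function evaluateWithTimeout(evaluate, timeoutMs, rewriteFallbackText) {
  let timer;
  const onTimeout = new Promise((resolve) => {
    timer = setTimeout(
      () =>
        resolve({
          matched: true,
          action: 'rewrite',
          replacementText: rewriteFallbackText,
          reason: 'llm-timeout',
        }),
      timeoutMs
    );
  });
  try {
    // Whichever settles first wins: the real decision or the timeout fallback.
    return await Promise.race([evaluate(), onTimeout]);
  } finally {
    clearTimeout(timer);
  }
}
```

Note that this bounds a single evaluation call, not the whole workflow run.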
145
83
 
146
- ## FAQ
84
+ ## Troubleshooting
147
85
 
148
- ### 1) Why isn't it taking effect?
86
+ 1. No effect in `rule` mode:
87
+ - Ensure at least one valid rule contains `pattern/type/action/scope/severity`.
149
88
 
150
- Check in order:
89
+ 2. No effect in `llm` mode:
90
+ - Ensure `model/scope/rulePrompt` are all present.
151
91
 
152
- 1. Is `mode` correct?
153
- 2. In `rule` mode, is there at least 1 valid rule (`pattern/type/action/scope/severity`)?
154
- 3. In `llm` mode, are `model/scope/rulePrompt` filled in?
155
- 4. Does `scope` cover the current phase (input or output)?
92
+ 3. Unexpected rewrites in LLM mode:
93
+ - Check audit records for `source=error-policy` and `reason` starting with `llm-error:`.
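Using the field names from the middleware's `pushAudit` calls, such an error-policy hit would surface roughly as follows (values are illustrative; the exact audit snapshot shape may differ):

```json
{
  "phase": "output",
  "matched": true,
  "source": "error-policy",
  "action": "rewrite",
  "reason": "llm-error:request timed out",
  "errorPolicyTriggered": true
}
```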
156
94
 
157
- ### 2) Why are validation errors shown in Chinese?
95
+ ## Validation Commands
158
96
 
159
- The middleware is lenient at edit time and validates at run time: drafts can be saved while editing, and missing fields are only reported at execution.
160
97
 
161
- ### 3) Does `systemPrompt` from old configs still work?
162
-
163
- Yes. If `rulePrompt` is empty, `systemPrompt` is read for compatibility. Migrating to `rulePrompt` is recommended.
164
-
165
- ## Development Validation
166
-
167
- ```bash
168
- pnpm -C xpertai exec nx build @xpert-ai/plugin-sensitive-filter
169
- pnpm -C xpertai exec nx test @xpert-ai/plugin-sensitive-filter
170
- pnpm -C plugin-dev-harness build
171
- node plugin-dev-harness/dist/index.js --workspace ./xpertai --plugin ./middlewares/sensitive-filter
172
- ```
@@ -1 +1 @@
1
- {"version":3,"file":"sensitiveFilter.d.ts","sourceRoot":"","sources":["../../src/lib/sensitiveFilter.ts"],"names":[],"mappings":"AAGA,OAAO,KAAK,EAAa,oBAAoB,EAA8B,MAAM,kBAAkB,CAAA;AAGnG,OAAO,EACL,eAAe,EAGf,uBAAuB,EACvB,wBAAwB,EAEzB,MAAM,sBAAsB,CAAA;AAC7B,OAAO,EAOL,qBAAqB,EAMtB,MAAM,YAAY,CAAA;AAqgBnB,qBAEa,yBAA0B,YAAW,wBAAwB,CAAC,qBAAqB,CAAC;IAE/F,OAAO,CAAC,QAAQ,CAAC,UAAU,CAAY;IAEvC,QAAQ,CAAC,IAAI,EAAE,oBAAoB,CAwQlC;IAEK,gBAAgB,CACpB,OAAO,EAAE,qBAAqB,EAC9B,OAAO,EAAE,uBAAuB,GAC/B,OAAO,CAAC,eAAe,CAAC;IAa3B,OAAO,CAAC,wBAAwB;IA8OhC,OAAO,CAAC,uBAAuB;CA4ahC;AAED,YAAY,EAAE,qBAAqB,EAAE,CAAA"}
1
+ {"version":3,"file":"sensitiveFilter.d.ts","sourceRoot":"","sources":["../../src/lib/sensitiveFilter.ts"],"names":[],"mappings":"AAMA,OAAO,KAAK,EAAa,oBAAoB,EAA8B,MAAM,kBAAkB,CAAA;AAGnG,OAAO,EACL,eAAe,EAGf,uBAAuB,EACvB,wBAAwB,EAEzB,MAAM,sBAAsB,CAAA;AAC7B,OAAO,EAML,qBAAqB,EAKtB,MAAM,YAAY,CAAA;AAixBnB,qBAEa,yBAA0B,YAAW,wBAAwB,CAAC,qBAAqB,CAAC;IAE/F,OAAO,CAAC,QAAQ,CAAC,UAAU,CAAY;IAEvC,QAAQ,CAAC,IAAI,EAAE,oBAAoB,CA6NlC;IAEK,gBAAgB,CACpB,OAAO,EAAE,qBAAqB,EAC9B,OAAO,EAAE,uBAAuB,GAC/B,OAAO,CAAC,eAAe,CAAC;IAa3B,OAAO,CAAC,wBAAwB;IAgThC,OAAO,CAAC,uBAAuB;CA2fhC;AAED,YAAY,EAAE,qBAAqB,EAAE,CAAA"}
@@ -1,20 +1,26 @@
1
1
  import { __decorate, __metadata } from "tslib";
2
2
  import { z as z4 } from 'zod/v4';
3
- import { AIMessage, HumanMessage } from '@langchain/core/messages';
3
+ import { AIMessage, AIMessageChunk, HumanMessage } from '@langchain/core/messages';
4
+ import { BaseChatModel } from '@langchain/core/language_models/chat_models';
5
+ import { ChatGenerationChunk } from '@langchain/core/outputs';
4
6
  import { Inject, Injectable } from '@nestjs/common';
5
7
  import { CommandBus } from '@nestjs/cqrs';
6
8
  import { AgentMiddlewareStrategy, CreateModelClientCommand, WrapWorkflowNodeExecutionCommand, } from '@xpert-ai/plugin-sdk';
7
- import { SensitiveFilterIcon, llmDecisionSchema, resolveGeneralPackRules, sensitiveFilterConfigSchema, } from './types.js';
9
+ import { SensitiveFilterIcon, llmDecisionSchema, sensitiveFilterConfigSchema, } from './types.js';
8
10
  const SENSITIVE_FILTER_MIDDLEWARE_NAME = 'SensitiveFilterMiddleware';
9
11
  const DEFAULT_INPUT_BLOCK_MESSAGE = '输入内容触发敏感策略,已拦截。';
10
12
  const DEFAULT_OUTPUT_BLOCK_MESSAGE = '输出内容触发敏感策略,已拦截。';
11
13
  const DEFAULT_REWRITE_TEXT = '[已过滤]';
12
14
  const CONFIG_PARSE_ERROR = '敏感词过滤配置格式不正确,请检查填写内容。';
13
- const BUSINESS_RULES_VALIDATION_ERROR = '请至少配置 1 条有效业务规则(pattern/type/action/scope/severity),或启用通用规则包。';
15
+ const BUSINESS_RULES_VALIDATION_ERROR = '请至少配置 1 条有效业务规则(pattern/type/action/scope/severity)。';
14
16
  const LLM_MODE_VALIDATION_ERROR = '请完善 LLM 过滤配置:需填写过滤模型、生效范围、审核规则说明。';
15
17
  const INTERNAL_LLM_INVOKE_TAG = 'sensitive-filter/internal-eval';
18
+ const INTERNAL_SOURCE_STREAM_TAG = 'sensitive-filter/internal-source-stream';
16
19
  const INTERNAL_LLM_INVOKE_OPTIONS = {
17
20
  tags: [INTERNAL_LLM_INVOKE_TAG],
21
+ metadata: {
22
+ internal: true,
23
+ },
18
24
  };
19
25
  function isRecord(value) {
20
26
  return typeof value === 'object' && value !== null;
@@ -33,6 +39,7 @@ function buildInternalModelConfig(model) {
33
39
  options: {
34
40
  ...options,
35
41
  streaming: false,
42
+ temperature: typeof options['temperature'] === 'number' ? options['temperature'] : 0,
36
43
  },
37
44
  };
38
45
  }
@@ -103,7 +110,7 @@ function pickWinningRule(matches) {
103
110
  }
104
111
  function rewriteTextByRule(_source, rule, _caseSensitive) {
105
112
  const replacement = rule.replacementText?.trim() || DEFAULT_REWRITE_TEXT;
106
- // rewrite 按整句替换,避免局部替换造成语义残留
113
+ // Rewrite replaces the full sentence to avoid semantic leftovers.
107
114
  return replacement;
108
115
  }
109
116
  function findMatches(text, phase, rules, normalize, caseSensitive) {
@@ -137,6 +144,144 @@ function replaceModelResponseText(response, text) {
137
144
  }
138
145
  return new AIMessage(text);
139
146
  }
147
+ function cloneAiMessage(source) {
148
+ return new AIMessage({
149
+ content: source.content,
150
+ additional_kwargs: source.additional_kwargs,
151
+ response_metadata: source.response_metadata,
152
+ tool_calls: source.tool_calls,
153
+ invalid_tool_calls: source.invalid_tool_calls,
154
+ usage_metadata: source.usage_metadata,
155
+ id: source.id,
156
+ name: source.name,
157
+ });
158
+ }
159
+ function cloneAiMessageWithText(source, text) {
160
+ const cloned = cloneAiMessage(source);
161
+ cloned.content = text;
162
+ return cloned;
163
+ }
164
+ function toAiMessageChunk(value) {
165
+ if (value instanceof AIMessageChunk) {
166
+ return value;
167
+ }
168
+ if (!isRecord(value) || !('content' in value)) {
169
+ return null;
170
+ }
171
+ return new AIMessageChunk({
172
+ content: value['content'],
173
+ additional_kwargs: isRecord(value['additional_kwargs']) ? value['additional_kwargs'] : {},
174
+ response_metadata: isRecord(value['response_metadata']) ? value['response_metadata'] : {},
175
+ tool_call_chunks: Array.isArray(value['tool_call_chunks']) ? value['tool_call_chunks'] : [],
176
+ tool_calls: Array.isArray(value['tool_calls']) ? value['tool_calls'] : [],
177
+ invalid_tool_calls: Array.isArray(value['invalid_tool_calls']) ? value['invalid_tool_calls'] : [],
178
+ usage_metadata: isRecord(value['usage_metadata']) ? value['usage_metadata'] : undefined,
179
+ id: typeof value['id'] === 'string' ? value['id'] : undefined,
180
+ });
181
+ }
182
+ function toAiMessage(value) {
183
+ if (value instanceof AIMessage) {
184
+ return value;
185
+ }
186
+ if (value instanceof AIMessageChunk) {
187
+ return new AIMessage({
188
+ content: value.content,
189
+ additional_kwargs: value.additional_kwargs,
190
+ response_metadata: value.response_metadata,
191
+ tool_calls: value.tool_calls,
192
+ invalid_tool_calls: value.invalid_tool_calls,
193
+ usage_metadata: value.usage_metadata,
194
+ id: value.id,
195
+ name: value.name,
196
+ });
197
+ }
198
+ if (isRecord(value) && 'content' in value) {
199
+ return new AIMessage({
200
+ content: value['content'],
201
+ additional_kwargs: isRecord(value['additional_kwargs']) ? value['additional_kwargs'] : {},
202
+ response_metadata: isRecord(value['response_metadata']) ? value['response_metadata'] : {},
203
+ tool_calls: Array.isArray(value['tool_calls']) ? value['tool_calls'] : [],
204
+ invalid_tool_calls: Array.isArray(value['invalid_tool_calls']) ? value['invalid_tool_calls'] : [],
205
+ usage_metadata: isRecord(value['usage_metadata']) ? value['usage_metadata'] : undefined,
206
+ id: typeof value['id'] === 'string' ? value['id'] : undefined,
207
+ name: typeof value['name'] === 'string' ? value['name'] : undefined,
208
+ });
209
+ }
210
+ return new AIMessage(extractPrimitiveText(value));
211
+ }
212
+ function buildInternalSourceOptions(options) {
213
+ const tags = Array.isArray(options?.tags) ? options.tags : [];
214
+ const metadata = isRecord(options?.metadata) ? options.metadata : {};
215
+ return {
216
+ ...(options ?? {}),
217
+ tags: [...tags, INTERNAL_SOURCE_STREAM_TAG],
218
+ metadata: {
219
+ ...metadata,
220
+ internal: true,
221
+ },
222
+ };
223
+ }
224
+ class BufferedOutputProxyChatModel extends BaseChatModel {
225
+ constructor(innerModel, resolveOutput) {
226
+ super({});
227
+ this.innerModel = innerModel;
228
+ this.resolveOutput = resolveOutput;
229
+ }
230
+ _llmType() {
231
+ return 'sensitive-filter-output-proxy';
232
+ }
233
+ async collectInnerMessage(messages, options) {
234
+ const internalOptions = buildInternalSourceOptions(options);
235
+ const streamFn = this.innerModel?.stream;
236
+ if (typeof streamFn === 'function') {
237
+ let mergedChunk = null;
238
+ for await (const rawChunk of streamFn.call(this.innerModel, messages, internalOptions)) {
239
+ const chunk = toAiMessageChunk(rawChunk);
240
+ if (!chunk) {
241
+ continue;
242
+ }
243
+ mergedChunk = mergedChunk ? mergedChunk.concat(chunk) : chunk;
244
+ }
245
+ if (mergedChunk) {
246
+ return toAiMessage(mergedChunk);
247
+ }
248
+ }
249
+ return toAiMessage(await this.innerModel.invoke(messages, internalOptions));
250
+ }
251
+ async finalizeMessage(messages, options) {
252
+ const sourceMessage = await this.collectInnerMessage(messages, options);
253
+ return this.resolveOutput(sourceMessage, extractPrimitiveText(sourceMessage.content));
254
+ }
255
+ async _generate(messages, options, _runManager) {
256
+ const resolved = await this.finalizeMessage(messages, options);
257
+ return {
258
+ generations: [
259
+ {
260
+ text: extractPrimitiveText(resolved.finalMessage.content),
261
+ message: resolved.finalMessage,
262
+ },
263
+ ],
264
+ };
265
+ }
266
+ async *_streamResponseChunks(messages, options, runManager) {
267
+ const resolved = await this.finalizeMessage(messages, options);
268
+ const finalText = extractPrimitiveText(resolved.finalMessage.content);
269
+ if (!finalText) {
270
+ return;
271
+ }
272
+ const generationChunk = new ChatGenerationChunk({
273
+ message: new AIMessageChunk({
274
+ content: finalText,
275
+ id: resolved.finalMessage.id,
276
+ }),
277
+ text: finalText,
278
+ });
279
+ yield generationChunk;
280
+ await runManager?.handleLLMNewToken(finalText, undefined, undefined, undefined, undefined, {
281
+ chunk: generationChunk,
282
+ });
283
+ }
284
+ }
140
285
  function rewriteModelRequestInput(request, rewrittenText) {
141
286
  if (!Array.isArray(request?.messages) || request.messages.length === 0) {
142
287
  return request;
@@ -227,21 +372,74 @@ function modeIncludesScope(scope, phase) {
227
372
  return scope === 'both' || scope === phase;
228
373
  }
229
374
  function buildInternalLlmPrompt(rulePrompt) {
230
- return ('你是敏感内容审核器。请根据用户给出的审核规则判断文本是否命中敏感内容。' +
375
+ return ('你不是聊天助手,你是严格的敏感内容分类器。' +
376
+ '\n你的唯一任务是根据审核规则判断文本是否命中,不要提供安全建议、解释性回复或扩展内容。' +
377
+ '\n这是硬规则匹配任务:当审核规则写明“出现/包含/只要命中/必须命中”时,必须严格按字面执行,不允许自由裁量。' +
231
378
  '\n你只能返回 JSON,不要输出额外说明。' +
232
379
  '\nJSON字段: matched(boolean), action("block"|"rewrite"), replacementText(string), reason(string), categories(string[]).' +
233
380
  '\n命中时请尽量给出 replacementText;未命中时返回 {"matched": false}。' +
234
381
  '\n说明:系统会统一执行改写策略,即便你返回 action=block 也会按 rewrite 处理。' +
235
382
  `\n\n用户审核规则:\n${rulePrompt}`);
236
383
  }
384
+ function extractFirstJsonObject(text) {
385
+ const start = text.indexOf('{');
386
+ if (start < 0) {
387
+ return null;
388
+ }
389
+ let depth = 0;
390
+ let inString = false;
391
+ let escape = false;
392
+ for (let i = start; i < text.length; i++) {
393
+ const ch = text[i];
394
+ if (inString) {
395
+ if (escape) {
396
+ escape = false;
397
+ continue;
398
+ }
399
+ if (ch === '\\') {
400
+ escape = true;
401
+ continue;
402
+ }
403
+ if (ch === '"') {
404
+ inString = false;
405
+ }
406
+ continue;
407
+ }
408
+ if (ch === '"') {
409
+ inString = true;
410
+ continue;
411
+ }
412
+ if (ch === '{') {
413
+ depth++;
414
+ continue;
415
+ }
416
+ if (ch === '}') {
417
+ depth--;
418
+ if (depth === 0) {
419
+ return text.slice(start, i + 1);
420
+ }
421
+ }
422
+ }
423
+ return null;
424
+ }
237
425
  function parseLlmDecision(raw, rewriteFallbackText) {
238
426
  let payload = raw;
239
427
  if (typeof payload === 'string') {
428
+ const rawText = payload;
240
429
  try {
241
430
  payload = JSON.parse(payload);
242
431
  }
243
432
  catch {
244
- throw new Error('LLM decision is not valid JSON string');
433
+ const extracted = extractFirstJsonObject(rawText);
434
+ if (!extracted) {
435
+ throw new Error('LLM decision is not valid JSON string');
436
+ }
437
+ try {
438
+ payload = JSON.parse(extracted);
439
+ }
440
+ catch {
441
+ throw new Error('LLM decision is not valid JSON string');
442
+ }
245
443
  }
246
444
  }
247
445
  if (isRecord(payload) && !('matched' in payload) && 'content' in payload) {
@@ -253,7 +451,16 @@ function parseLlmDecision(raw, rewriteFallbackText) {
253
451
  payload = JSON.parse(content);
254
452
  }
255
453
  catch {
256
- throw new Error('LLM decision content is not valid JSON');
454
+ const extracted = extractFirstJsonObject(content);
455
+ if (!extracted) {
456
+ throw new Error('LLM decision content is not valid JSON');
457
+ }
458
+ try {
459
+ payload = JSON.parse(extracted);
460
+ }
461
+ catch {
462
+ throw new Error('LLM decision content is not valid JSON');
463
+ }
257
464
  }
258
465
  }
259
466
  const parsed = llmDecisionSchema.safeParse(payload);
@@ -365,6 +572,14 @@ function isMissingWrapWorkflowHandlerError(error) {
365
572
  const message = getErrorText(error).toLowerCase();
366
573
  return message.includes('no handler found') && message.includes('wrapworkflownodeexecutioncommand');
367
574
  }
575
+ function isMissingCreateModelHandlerError(error) {
576
+ const message = getErrorText(error).toLowerCase();
577
+ return (message.includes('no handler found') &&
578
+ (message.includes('createmodelclientcommand') || message.includes('create model client')));
579
+ }
580
+ function shouldFailOpenOnLlmError(error) {
581
+ return isMissingWrapWorkflowHandlerError(error) || isMissingCreateModelHandlerError(error);
582
+ }
368
583
  async function runWithWrapWorkflowFallback(runTracked, runFallback) {
369
584
  try {
370
585
  return await runTracked();
@@ -495,47 +710,6 @@ let SensitiveFilterMiddleware = class SensitiveFilterMiddleware {
495
710
  required: ['pattern', 'type', 'action', 'scope', 'severity'],
496
711
  },
497
712
  },
498
- generalPack: {
499
- type: 'object',
500
- 'x-ui': {
501
- span: 2,
502
- },
503
- title: {
504
- en_US: 'General Pack',
505
- zh_Hans: '通用规则包(本地开源词库)',
506
- },
507
- description: {
508
- en_US: 'For local models without built-in moderation. Uses local bilingual (ZH/EN) open-source lexicon.',
509
- zh_Hans: '用于没有内置安全过滤的本地模型兜底;采用本地中英双语开源词库。',
510
- },
511
- properties: {
512
- enabled: {
513
- type: 'boolean',
514
- default: false,
515
- title: { en_US: 'Enable', zh_Hans: '启用通用规则包' },
516
- },
517
- profile: {
518
- type: 'string',
519
- title: { en_US: 'Profile', zh_Hans: '策略档位' },
520
- description: {
521
- en_US: 'Strict blocks on hit with broader lexicon; Balanced rewrites on hit with smaller lexicon.',
522
- zh_Hans: '严格:词库范围更大且命中后拦截;平衡:词库范围较小且命中后改写。',
523
- },
524
- enum: ['strict', 'balanced'],
525
- 'x-ui': {
526
- enumLabels: {
527
- strict: { en_US: 'Strict', zh_Hans: '严格' },
528
- balanced: { en_US: 'Balanced', zh_Hans: '平衡' },
529
- },
530
- tooltip: {
531
- en_US: 'Strict: broader lexicon + block on hit. Balanced: base lexicon + sentence rewrite on hit.',
532
- zh_Hans: '严格:词库范围更大,命中后直接拦截。平衡:词库范围较小,命中后整句替换再继续。',
533
- },
534
- },
535
- default: 'balanced',
536
- },
537
- },
538
- },
539
713
  caseSensitive: {
540
714
  type: 'boolean',
541
715
  default: false,
@@ -653,8 +827,7 @@ let SensitiveFilterMiddleware = class SensitiveFilterMiddleware {
653
827
  const caseSensitive = config.caseSensitive ?? false;
654
828
  const normalize = config.normalize ?? true;
655
829
  const customRules = normalizeRuleDrafts(config.rules ?? []);
656
- const generalRules = resolveGeneralPackRules(config.generalPack);
657
- const allRules = [...customRules, ...generalRules];
830
+ const allRules = [...customRules];
658
831
  const hasEffectiveRules = allRules.length > 0;
659
832
  let compiledRulesCache = null;
660
833
  const getCompiledRules = () => {
@@ -688,12 +861,14 @@ let SensitiveFilterMiddleware = class SensitiveFilterMiddleware {
688
861
  };
689
862
  let inputBlockedMessage = null;
690
863
  let pendingInputRewrite = null;
864
+ let bufferedOutputResolution = null;
691
865
  let finalAction = 'pass';
692
866
  let auditEntries = [];
693
867
  let runtimeConfigurable = null;
694
868
  const resetRunState = () => {
695
869
  inputBlockedMessage = null;
696
870
  pendingInputRewrite = null;
871
+ bufferedOutputResolution = null;
697
872
  finalAction = 'pass';
698
873
  auditEntries = [];
699
874
  };
@@ -815,7 +990,63 @@ let SensitiveFilterMiddleware = class SensitiveFilterMiddleware {
815
990
  }
816
991
  const modelRequest = pendingInputRewrite ? rewriteModelRequestInput(request, pendingInputRewrite) : request;
817
992
  pendingInputRewrite = null;
818
- const response = await handler(modelRequest);
993
+ bufferedOutputResolution = null;
994
+ const shouldBufferOutput = compiledRules.some((rule) => rule.scope === 'output' || rule.scope === 'both');
995
+ const effectiveRequest = shouldBufferOutput
996
+ ? {
997
+ ...modelRequest,
998
+ model: new BufferedOutputProxyChatModel(modelRequest.model, async (message, outputText) => {
999
+ if (message.tool_calls?.length || message.invalid_tool_calls?.length) {
1000
+ bufferedOutputResolution = {
1001
+ finalMessage: cloneAiMessage(message),
1002
+ matched: false,
1003
+ source: 'rule',
1004
+ reason: 'tool-call-skip',
1005
+ errorPolicyTriggered: false,
1006
+ };
1007
+ return bufferedOutputResolution;
1008
+ }
1009
+ const outputMatches = findMatches(outputText, 'output', compiledRules, normalize, caseSensitive);
1010
+ const winner = pickWinningRule(outputMatches);
1011
+ if (!winner) {
1012
+ bufferedOutputResolution = {
1013
+ finalMessage: cloneAiMessage(message),
1014
+ matched: false,
1015
+ source: 'rule',
1016
+ errorPolicyTriggered: false,
1017
+ };
1018
+ return bufferedOutputResolution;
1019
+ }
1020
+ const finalText = winner.action === 'block'
1021
+ ? winner.replacementText?.trim() || DEFAULT_OUTPUT_BLOCK_MESSAGE
1022
+ : rewriteTextByRule(outputText, winner, caseSensitive);
1023
+ bufferedOutputResolution = {
1024
+ finalMessage: cloneAiMessageWithText(message, finalText),
1025
+ matched: true,
1026
+ source: 'rule',
1027
+ action: winner.action,
1028
+ reason: `rule:${winner.id}`,
1029
+ errorPolicyTriggered: false,
1030
+ };
1031
+ return bufferedOutputResolution;
1032
+ }),
1033
+ }
1034
+ : modelRequest;
1035
+ const response = await handler(effectiveRequest);
1036
+ if (bufferedOutputResolution) {
1037
+ pushAudit({
1038
+ phase: 'output',
1039
+ matched: bufferedOutputResolution.matched,
1040
+ source: bufferedOutputResolution.source,
1041
+ action: bufferedOutputResolution.action,
1042
+ reason: bufferedOutputResolution.reason,
1043
+ errorPolicyTriggered: bufferedOutputResolution.errorPolicyTriggered,
1044
+ });
1045
+ if (bufferedOutputResolution.matched && bufferedOutputResolution.action) {
1046
+ finalAction = bufferedOutputResolution.action === 'block' ? 'block' : 'rewrite';
1047
+ }
1048
+ return response;
1049
+ }
819
1050
  const outputText = extractModelResponseText(response);
820
1051
  const outputMatches = findMatches(outputText, 'output', compiledRules, normalize, caseSensitive);
821
1052
  const winner = pickWinningRule(outputMatches);
@@ -883,6 +1114,7 @@ let SensitiveFilterMiddleware = class SensitiveFilterMiddleware {
883
1114
  return structuredModelPromises.get(method);
884
1115
  };
885
1116
  let pendingInputRewrite = null;
1117
+ let bufferedOutputResolution = null;
886
1118
  let finalAction = 'pass';
887
1119
  let auditEntries = [];
888
1120
  let runtimeConfigurable = null;
@@ -891,6 +1123,7 @@ let SensitiveFilterMiddleware = class SensitiveFilterMiddleware {
891
1123
  let methodAttempts = [];
892
1124
  const resetRunState = () => {
893
1125
  pendingInputRewrite = null;
1126
+ bufferedOutputResolution = null;
894
1127
  finalAction = 'pass';
895
1128
  auditEntries = [];
896
1129
  resolvedOutputMethod = undefined;
@@ -1082,6 +1315,12 @@ let SensitiveFilterMiddleware = class SensitiveFilterMiddleware {
1082
1315
  };
1083
1316
  const resolveOnErrorDecision = (llmConfig, error) => {
1084
1317
  const reason = `llm-error:${error instanceof Error ? error.message : String(error)}`;
1318
+ if (shouldFailOpenOnLlmError(error)) {
1319
+ return {
1320
+ matched: false,
1321
+ reason: `llm-fail-open:${reason}`,
1322
+ };
1323
+ }
1085
1324
  return {
1086
1325
  matched: true,
1087
1326
  action: 'rewrite',
@@ -1145,7 +1384,70 @@ let SensitiveFilterMiddleware = class SensitiveFilterMiddleware {
1145
1384
  const llmConfig = getLlmConfig();
1146
1385
  const modelRequest = pendingInputRewrite ? rewriteModelRequestInput(request, pendingInputRewrite) : request;
1147
1386
  pendingInputRewrite = null;
1148
- const response = await handler(modelRequest);
1387
+ bufferedOutputResolution = null;
1388
+ const effectiveRequest = modeIncludesScope(llmConfig.scope, 'output')
1389
+ ? {
1390
+ ...modelRequest,
1391
+ model: new BufferedOutputProxyChatModel(modelRequest.model, async (message, outputText) => {
1392
+ if (message.tool_calls?.length || message.invalid_tool_calls?.length) {
1393
+ bufferedOutputResolution = {
1394
+ finalMessage: cloneAiMessage(message),
1395
+ matched: false,
1396
+ source: 'llm',
1397
+ reason: 'tool-call-skip',
1398
+ errorPolicyTriggered: false,
1399
+ };
1400
+ return bufferedOutputResolution;
1401
+ }
1402
+ if (!outputText) {
1403
+ bufferedOutputResolution = {
1404
+ finalMessage: cloneAiMessage(message),
1405
+ matched: false,
1406
+ source: 'llm',
1407
+ reason: 'empty-output',
1408
+ errorPolicyTriggered: false,
1409
+ };
1410
+ return bufferedOutputResolution;
1411
+ }
1412
+ let decision;
1413
+ let fromErrorPolicy = false;
1414
+ try {
1415
+ decision = await invokeAndTrack('output', outputText, request?.runtime, llmConfig);
1416
+ }
1417
+ catch (error) {
1418
+ decision = resolveOnErrorDecision(llmConfig, error);
1419
+ fromErrorPolicy = true;
1420
+ }
1421
+ const finalText = decision.matched && decision.action
1422
+ ? toNonEmptyString(decision.replacementText) ?? llmConfig.rewriteFallbackText
1423
+ : outputText;
1424
+ bufferedOutputResolution = {
1425
+ finalMessage: cloneAiMessageWithText(message, finalText),
1426
+ matched: decision.matched,
1427
+ source: fromErrorPolicy ? 'error-policy' : 'llm',
1428
+ action: decision.action,
1429
+ reason: decision.reason,
1430
+ errorPolicyTriggered: fromErrorPolicy,
1431
+ };
1432
+ return bufferedOutputResolution;
1433
+ }),
1434
+ }
1435
+ : modelRequest;
1436
+ const response = await handler(effectiveRequest);
1437
+ if (bufferedOutputResolution) {
1438
+ pushAudit({
1439
+ phase: 'output',
1440
+ matched: bufferedOutputResolution.matched,
1441
+ source: bufferedOutputResolution.source,
1442
+ action: bufferedOutputResolution.action,
1443
+ reason: bufferedOutputResolution.reason,
1444
+ errorPolicyTriggered: bufferedOutputResolution.errorPolicyTriggered,
1445
+ });
1446
+ if (bufferedOutputResolution.matched && bufferedOutputResolution.action) {
1447
+ finalAction = 'rewrite';
1448
+ }
1449
+ return response;
1450
+ }
1149
1451
  if (!modeIncludesScope(llmConfig.scope, 'output')) {
1150
1452
  pushAudit({
1151
1453
  phase: 'output',
@@ -9,16 +9,12 @@ export type SensitiveRule = {
  action: 'block' | 'rewrite';
  replacementText?: string;
  };
- export type GeneralPackConfig = {
- enabled?: boolean;
- profile?: 'strict' | 'balanced';
- };
  export type RuleModeConfig = {
  mode?: 'rule';
  rules?: Array<Partial<SensitiveRule> | null>;
- generalPack?: GeneralPackConfig;
  caseSensitive?: boolean;
  normalize?: boolean;
+ generalPack?: unknown;
  };
  export type LlmScope = 'input' | 'output' | 'both';
  export type LlmOutputMethod = 'functionCalling' | 'jsonMode' | 'jsonSchema';
@@ -38,6 +34,7 @@ export type LlmFilterConfig = {
  export type LlmModeConfig = {
  mode: 'llm';
  llm?: LlmFilterConfig | null;
+ generalPack?: unknown;
  };
  export type SensitiveFilterConfig = RuleModeConfig | LlmModeConfig;
  export type LlmDecision = {
@@ -81,21 +78,11 @@ export declare const sensitiveFilterConfigSchema: z.ZodUnion<[z.ZodObject<{
  action?: "block" | "rewrite";
  replacementText?: string;
  }>>, "many">>>;
- generalPack: z.ZodOptional<z.ZodObject<{
- enabled: z.ZodDefault<z.ZodBoolean>;
- profile: z.ZodOptional<z.ZodDefault<z.ZodEnum<["strict", "balanced"]>>>;
- }, "strip", z.ZodTypeAny, {
- enabled?: boolean;
- profile?: "strict" | "balanced";
- }, {
- enabled?: boolean;
- profile?: "strict" | "balanced";
- }>>;
  caseSensitive: z.ZodDefault<z.ZodOptional<z.ZodBoolean>>;
  normalize: z.ZodDefault<z.ZodOptional<z.ZodBoolean>>;
  llm: z.ZodOptional<z.ZodUnknown>;
+ generalPack: z.ZodOptional<z.ZodUnknown>;
  }, "strip", z.ZodTypeAny, {
- llm?: unknown;
  mode?: "rule";
  rules?: {
  id?: string;
@@ -106,14 +93,11 @@ export declare const sensitiveFilterConfigSchema: z.ZodUnion<[z.ZodObject<{
  action?: "block" | "rewrite";
  replacementText?: string;
  }[];
- generalPack?: {
- enabled?: boolean;
- profile?: "strict" | "balanced";
- };
  caseSensitive?: boolean;
  normalize?: boolean;
- }, {
  llm?: unknown;
+ generalPack?: unknown;
+ }, {
  mode?: "rule";
  rules?: {
  id?: string;
@@ -124,12 +108,10 @@ export declare const sensitiveFilterConfigSchema: z.ZodUnion<[z.ZodObject<{
  action?: "block" | "rewrite";
  replacementText?: string;
  }[];
- generalPack?: {
- enabled?: boolean;
- profile?: "strict" | "balanced";
- };
  caseSensitive?: boolean;
  normalize?: boolean;
+ llm?: unknown;
+ generalPack?: unknown;
  }>, z.ZodObject<{
  mode: z.ZodLiteral<"llm">;
  llm: z.ZodDefault<z.ZodNullable<z.ZodOptional<z.ZodObject<{
@@ -167,10 +149,14 @@ export declare const sensitiveFilterConfigSchema: z.ZodUnion<[z.ZodObject<{
  timeoutMs?: number;
  }>>>>;
  rules: z.ZodOptional<z.ZodUnknown>;
- generalPack: z.ZodOptional<z.ZodUnknown>;
  caseSensitive: z.ZodOptional<z.ZodUnknown>;
  normalize: z.ZodOptional<z.ZodUnknown>;
+ generalPack: z.ZodOptional<z.ZodUnknown>;
  }, "strip", z.ZodTypeAny, {
+ mode?: "llm";
+ rules?: unknown;
+ caseSensitive?: unknown;
+ normalize?: unknown;
  llm?: {
  scope?: "input" | "output" | "both";
  model?: ICopilotModel;
@@ -183,12 +169,12 @@ export declare const sensitiveFilterConfigSchema: z.ZodUnion<[z.ZodObject<{
  rewriteFallbackText?: string;
  timeoutMs?: number;
  };
+ generalPack?: unknown;
+ }, {
  mode?: "llm";
  rules?: unknown;
- generalPack?: unknown;
  caseSensitive?: unknown;
  normalize?: unknown;
- }, {
  llm?: {
  scope?: "input" | "output" | "both";
  model?: ICopilotModel;
@@ -201,11 +187,7 @@ export declare const sensitiveFilterConfigSchema: z.ZodUnion<[z.ZodObject<{
  rewriteFallbackText?: string;
  timeoutMs?: number;
  };
- mode?: "llm";
- rules?: unknown;
  generalPack?: unknown;
- caseSensitive?: unknown;
- normalize?: unknown;
  }>]>;
  export declare const llmDecisionSchema: z.ZodObject<{
  matched: z.ZodBoolean;
@@ -226,5 +208,4 @@ export declare const llmDecisionSchema: z.ZodObject<{
  reason?: string;
  categories?: string[];
  }>;
- export declare function resolveGeneralPackRules(config?: GeneralPackConfig): SensitiveRule[];
  //# sourceMappingURL=types.d.ts.map
@@ -1 +1 @@
- {"version":3,"file":"types.d.ts","sourceRoot":"","sources":["../../src/lib/types.ts"],"names":[],"mappings":"AAAA,OAAO,KAAK,EAAE,aAAa,EAAE,MAAM,kBAAkB,CAAA;AACrD,OAAO,EAAE,CAAC,EAAE,MAAM,QAAQ,CAAA;AAE1B,MAAM,MAAM,aAAa,GAAG;IAC1B,EAAE,EAAE,MAAM,CAAA;IACV,OAAO,EAAE,MAAM,CAAA;IACf,IAAI,EAAE,SAAS,GAAG,OAAO,CAAA;IACzB,KAAK,EAAE,OAAO,GAAG,QAAQ,GAAG,MAAM,CAAA;IAClC,QAAQ,EAAE,MAAM,GAAG,QAAQ,CAAA;IAC3B,MAAM,EAAE,OAAO,GAAG,SAAS,CAAA;IAC3B,eAAe,CAAC,EAAE,MAAM,CAAA;CACzB,CAAA;AAED,MAAM,MAAM,iBAAiB,GAAG;IAC9B,OAAO,CAAC,EAAE,OAAO,CAAA;IACjB,OAAO,CAAC,EAAE,QAAQ,GAAG,UAAU,CAAA;CAChC,CAAA;AAED,MAAM,MAAM,cAAc,GAAG;IAC3B,IAAI,CAAC,EAAE,MAAM,CAAA;IACb,KAAK,CAAC,EAAE,KAAK,CAAC,OAAO,CAAC,aAAa,CAAC,GAAG,IAAI,CAAC,CAAA;IAC5C,WAAW,CAAC,EAAE,iBAAiB,CAAA;IAC/B,aAAa,CAAC,EAAE,OAAO,CAAA;IACvB,SAAS,CAAC,EAAE,OAAO,CAAA;CACpB,CAAA;AAED,MAAM,MAAM,QAAQ,GAAG,OAAO,GAAG,QAAQ,GAAG,MAAM,CAAA;AAClD,MAAM,MAAM,eAAe,GAAG,iBAAiB,GAAG,UAAU,GAAG,YAAY,CAAA;AAC3E,MAAM,MAAM,cAAc,GAAG,OAAO,GAAG,SAAS,CAAA;AAEhD,MAAM,MAAM,eAAe,GAAG;IAC5B,KAAK,CAAC,EAAE,aAAa,CAAA;IACrB,KAAK,CAAC,EAAE,QAAQ,CAAA;IAChB,UAAU,CAAC,EAAE,MAAM,CAAA;IAEnB,YAAY,CAAC,EAAE,MAAM,CAAA;IACrB,YAAY,CAAC,EAAE,eAAe,CAAA;IAE9B,UAAU,CAAC,EAAE,cAAc,CAAA;IAE3B,gBAAgB,CAAC,EAAE,MAAM,CAAA;IAEzB,YAAY,CAAC,EAAE,MAAM,CAAA;IACrB,mBAAmB,CAAC,EAAE,MAAM,CAAA;IAC5B,SAAS,CAAC,EAAE,MAAM,CAAA;CACnB,CAAA;AAED,MAAM,MAAM,aAAa,GAAG;IAC1B,IAAI,EAAE,KAAK,CAAA;IACX,GAAG,CAAC,EAAE,eAAe,GAAG,IAAI,CAAA;CAC7B,CAAA;AAED,MAAM,MAAM,qBAAqB,GAAG,cAAc,GAAG,aAAa,CAAA;AAElE,MAAM,MAAM,WAAW,GAAG;IACxB,OAAO,EAAE,OAAO,CAAA;IAChB,MAAM,CAAC,EAAE,OAAO,GAAG,SAAS,CAAA;IAC5B,eAAe,CAAC,EAAE,MAAM,GAAG,IAAI,CAAA;IAC/B,MAAM,CAAC,EAAE,MAAM,GAAG,IAAI,CAAA;IACtB,UAAU,CAAC,EAAE,MAAM,EAAE,GAAG,IAAI,CAAA;CAC7B,CAAA;AAED,MAAM,MAAM,qBAAqB,GAAG,aAAa,GAAG;IAClD,KAAK,EAAE,MAAM,CAAA;IACb,iBAAiB,EAAE,MAAM,CAAA;IACzB,UAAU,CAAC,EAAE,MAAM,CAAA;IACnB,YAAY,CAAC,EAAE,MAAM,CAAA;CACtB,CAAA;AAED,eAAO,MAAM,mBAAmB,wSAA8R,CAAA;AAmE9T,eAAO,MAAM,2BAA2B;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;IAAuD,CAAA;AAE/F,eAAO,MAAM,iBAAiB;;;;;;;;;;;;;;;;;;EAO1B,CAAA;AAYJ,wBAAgB,uBAAuB,CAAC,MAAM,CAAC,EAAE,iBAAiB,GAAG,aAAa,EAAE,CA6CnF"}
+ {"version":3,"file":"types.d.ts","sourceRoot":"","sources":["../../src/lib/types.ts"],"names":[],"mappings":"AAAA,OAAO,KAAK,EAAE,aAAa,EAAE,MAAM,kBAAkB,CAAA;AACrD,OAAO,EAAE,CAAC,EAAE,MAAM,QAAQ,CAAA;AAE1B,MAAM,MAAM,aAAa,GAAG;IAC1B,EAAE,EAAE,MAAM,CAAA;IACV,OAAO,EAAE,MAAM,CAAA;IACf,IAAI,EAAE,SAAS,GAAG,OAAO,CAAA;IACzB,KAAK,EAAE,OAAO,GAAG,QAAQ,GAAG,MAAM,CAAA;IAClC,QAAQ,EAAE,MAAM,GAAG,QAAQ,CAAA;IAC3B,MAAM,EAAE,OAAO,GAAG,SAAS,CAAA;IAC3B,eAAe,CAAC,EAAE,MAAM,CAAA;CACzB,CAAA;AAED,MAAM,MAAM,cAAc,GAAG;IAC3B,IAAI,CAAC,EAAE,MAAM,CAAA;IACb,KAAK,CAAC,EAAE,KAAK,CAAC,OAAO,CAAC,aAAa,CAAC,GAAG,IAAI,CAAC,CAAA;IAC5C,aAAa,CAAC,EAAE,OAAO,CAAA;IACvB,SAAS,CAAC,EAAE,OAAO,CAAA;IAEnB,WAAW,CAAC,EAAE,OAAO,CAAA;CACtB,CAAA;AAED,MAAM,MAAM,QAAQ,GAAG,OAAO,GAAG,QAAQ,GAAG,MAAM,CAAA;AAClD,MAAM,MAAM,eAAe,GAAG,iBAAiB,GAAG,UAAU,GAAG,YAAY,CAAA;AAC3E,MAAM,MAAM,cAAc,GAAG,OAAO,GAAG,SAAS,CAAA;AAEhD,MAAM,MAAM,eAAe,GAAG;IAC5B,KAAK,CAAC,EAAE,aAAa,CAAA;IACrB,KAAK,CAAC,EAAE,QAAQ,CAAA;IAChB,UAAU,CAAC,EAAE,MAAM,CAAA;IAEnB,YAAY,CAAC,EAAE,MAAM,CAAA;IACrB,YAAY,CAAC,EAAE,eAAe,CAAA;IAE9B,UAAU,CAAC,EAAE,cAAc,CAAA;IAE3B,gBAAgB,CAAC,EAAE,MAAM,CAAA;IAEzB,YAAY,CAAC,EAAE,MAAM,CAAA;IACrB,mBAAmB,CAAC,EAAE,MAAM,CAAA;IAC5B,SAAS,CAAC,EAAE,MAAM,CAAA;CACnB,CAAA;AAED,MAAM,MAAM,aAAa,GAAG;IAC1B,IAAI,EAAE,KAAK,CAAA;IACX,GAAG,CAAC,EAAE,eAAe,GAAG,IAAI,CAAA;IAE5B,WAAW,CAAC,EAAE,OAAO,CAAA;CACtB,CAAA;AAED,MAAM,MAAM,qBAAqB,GAAG,cAAc,GAAG,aAAa,CAAA;AAElE,MAAM,MAAM,WAAW,GAAG;IACxB,OAAO,EAAE,OAAO,CAAA;IAChB,MAAM,CAAC,EAAE,OAAO,GAAG,SAAS,CAAA;IAC5B,eAAe,CAAC,EAAE,MAAM,GAAG,IAAI,CAAA;IAC/B,MAAM,CAAC,EAAE,MAAM,GAAG,IAAI,CAAA;IACtB,UAAU,CAAC,EAAE,MAAM,EAAE,GAAG,IAAI,CAAA;CAC7B,CAAA;AAED,MAAM,MAAM,qBAAqB,GAAG,aAAa,GAAG;IAClD,KAAK,EAAE,MAAM,CAAA;IACb,iBAAiB,EAAE,MAAM,CAAA;IACzB,UAAU,CAAC,EAAE,MAAM,CAAA;IACnB,YAAY,CAAC,EAAE,MAAM,CAAA;CACtB,CAAA;AAED,eAAO,MAAM,mBAAmB,wSAA8R,CAAA;AAgD9T,eAAO,MAAM,2BAA2B;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;IAAuD,CAAA;AAE/F,eAA
O,MAAM,iBAAiB;;;;;;;;;;;;;;;;;;EAM5B,CAAA"}
package/dist/lib/types.js CHANGED
@@ -1,20 +1,5 @@
  import { z } from 'zod/v3';
  export const SensitiveFilterIcon = `<svg width="800px" height="800px" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg"><path d="M12 2l7 3v6c0 5.2-3.3 9.9-7 11-3.7-1.1-7-5.8-7-11V5l7-3zm0 2.1L7 6v5c0 3.9 2.3 7.8 5 8.9 2.7-1.1 5-5 5-8.9V6l-5-1.9zM8.8 12.6l1.4-1.4 1.8 1.8 3.8-3.8 1.4 1.4-5.2 5.2-3.2-3.2z"/></svg>`;
- /**
- * Local general-purpose lexicon sources (bilingual zh/en, V1 subset):
- * - English: LDNOOBW
- * - Chinese: ToolGood.Words community word list
- */
- const OPEN_SOURCE_LEXICON = {
- en: {
- balanced: ['fuck', 'shit', 'bitch', 'asshole', 'bastard', 'motherfucker', 'dumbass', 'cunt'],
- strictExtra: ['slut', 'whore', 'retard', 'nigger', 'faggot'],
- },
- zh: {
- balanced: ['傻逼', '操你妈', '他妈的', '王八蛋', '滚蛋', '去死', '脑残', '妈的'],
- strictExtra: ['强奸', '炸弹', '杀人', '自杀', '毒品'],
- },
- };
  const sensitiveRuleDraftSchema = z
  .object({
  id: z.string().optional().nullable(),
@@ -26,10 +11,6 @@ const sensitiveRuleDraftSchema = z
  replacementText: z.string().optional().nullable(),
  })
  .nullable();
- const generalPackSchema = z.object({
- enabled: z.boolean().default(false),
- profile: z.enum(['strict', 'balanced']).default('balanced').optional(),
- });
  const llmConfigSchema = z
  .object({
  model: z.custom().optional().nullable(),
@@ -46,73 +27,26 @@ const llmConfigSchema = z
  const ruleModeConfigSchema = z.object({
  mode: z.literal('rule').optional(),
  rules: z.array(sensitiveRuleDraftSchema).optional().default([]),
- generalPack: generalPackSchema.optional(),
  caseSensitive: z.boolean().optional().default(false),
  normalize: z.boolean().optional().default(true),
  llm: z.unknown().optional(),
+ // Backward compatibility only.
+ generalPack: z.unknown().optional(),
  });
  const llmModeConfigSchema = z.object({
  mode: z.literal('llm'),
  llm: llmConfigSchema.optional().nullable().default({}),
  rules: z.unknown().optional(),
- generalPack: z.unknown().optional(),
  caseSensitive: z.unknown().optional(),
  normalize: z.unknown().optional(),
+ // Backward compatibility only.
+ generalPack: z.unknown().optional(),
  });
  export const sensitiveFilterConfigSchema = z.union([ruleModeConfigSchema, llmModeConfigSchema]);
- export const llmDecisionSchema = z
- .object({
+ export const llmDecisionSchema = z.object({
  matched: z.boolean(),
  action: z.enum(['block', 'rewrite']).optional().nullable(),
  replacementText: z.string().optional().nullable(),
  reason: z.string().optional().nullable(),
  categories: z.array(z.string()).optional().nullable(),
  });
- function escapeRegExp(value) {
- return value.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
- }
- function buildLexiconRegex(words) {
- const unique = Array.from(new Set(words.map((w) => w.trim()).filter(Boolean)));
- const escaped = unique.map((w) => escapeRegExp(w));
- return escaped.length ? `(?:${escaped.join('|')})` : '';
- }
- export function resolveGeneralPackRules(config) {
- if (!config?.enabled) {
- return [];
- }
- const profile = config.profile ?? 'balanced';
- const enWords = [
- ...OPEN_SOURCE_LEXICON.en.balanced,
- ...(profile === 'strict' ? OPEN_SOURCE_LEXICON.en.strictExtra : []),
- ];
- const zhWords = [
- ...OPEN_SOURCE_LEXICON.zh.balanced,
- ...(profile === 'strict' ? OPEN_SOURCE_LEXICON.zh.strictExtra : []),
- ];
- const enPattern = buildLexiconRegex(enWords);
- const zhPattern = buildLexiconRegex(zhWords);
- const action = profile === 'strict' ? 'block' : 'rewrite';
- const severity = profile === 'strict' ? 'high' : 'medium';
- const replacementText = profile === 'strict' ? '内容触发通用敏感词策略,已拦截。' : '[通用敏感词已过滤]';
- const rules = [
- {
- id: 'general-open-lexicon-en',
- pattern: enPattern,
- type: 'regex',
- scope: 'both',
- severity,
- action,
- replacementText,
- },
- {
- id: 'general-open-lexicon-zh',
- pattern: zhPattern,
- type: 'regex',
- scope: 'both',
- severity,
- action,
- replacementText,
- },
- ];
- return rules.filter((rule) => Boolean(rule.pattern));
- }
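The `resolveGeneralPackRules` helper removed in this version built each built-in rule's `pattern` by regex-escaping the lexicon words and joining them into a single non-capturing alternation. That construction can be sketched standalone (the sample words below are illustrative, not taken from the shipped lexicon):

```javascript
// Same construction as the removed helper: escape each word for literal use
// in a regular expression, trim and de-duplicate, then join the survivors
// into one non-capturing alternation group.
function escapeRegExp(value) {
  return value.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
}

function buildLexiconRegex(words) {
  const unique = Array.from(new Set(words.map((w) => w.trim()).filter(Boolean)));
  const escaped = unique.map((w) => escapeRegExp(w));
  return escaped.length ? `(?:${escaped.join('|')})` : '';
}

const pattern = buildLexiconRegex(['foo.bar', 'foo.bar', ' baz ']);
console.log(pattern); // "(?:foo\.bar|baz)"
console.log(new RegExp(pattern).test('call foo.bar now')); // true
```

Because every word is escaped, metacharacters such as `.` match literally; an empty lexicon yields an empty pattern string, which the helper filtered out before returning rules.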
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "plugin-sensitive-filter-xr",
- "version": "0.0.9",
+ "version": "0.1.7",
  "author": {
  "name": "XpertAI",
  "url": "https://xpertai.cn"