@josephyan/qingflow-app-user-mcp 0.2.0-beta.22 → 0.2.0-beta.24

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -3,13 +3,13 @@
  Install:

  ```bash
- npm install @josephyan/qingflow-app-user-mcp@0.2.0-beta.22
+ npm install @josephyan/qingflow-app-user-mcp@0.2.0-beta.24
  ```

  Run:

  ```bash
- npx -y -p @josephyan/qingflow-app-user-mcp@0.2.0-beta.22 qingflow-app-user-mcp
+ npx -y -p @josephyan/qingflow-app-user-mcp@0.2.0-beta.24 qingflow-app-user-mcp
  ```

  Environment:
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "@josephyan/qingflow-app-user-mcp",
-   "version": "0.2.0-beta.22",
+   "version": "0.2.0-beta.24",
    "description": "Operational end-user MCP for Qingflow records, tasks, comments, and directory workflows.",
    "license": "MIT",
    "type": "module",
package/pyproject.toml CHANGED
@@ -4,7 +4,7 @@ build-backend = "setuptools.build_meta"
 
  [project]
  name = "qingflow-mcp"
- version = "0.2.0b22"
+ version = "0.2.0b24"
  description = "User-authenticated MCP server for Qingflow"
  readme = "README.md"
  license = "MIT"
@@ -7,185 +7,81 @@ metadata:
 
  # Qingflow Record Analysis
 
- ## Overview
-
- This skill is for record analysis inside existing Qingflow apps. Use it when the task is about `分析 / 洞察 / 分布 / 占比 / 平均 / 排名 / 趋势 / 所有 / 全部 / 全国 / 高价值` or any final statistical conclusion.
-
- This skill assumes the MCP is already connected and authenticated. If not, switch to `$qingflow-mcp-setup` first. If the task is about creating, updating, or deleting records rather than analyzing them, switch to `$qingflow-record-crud`. If it is about task-center actions, comments, approvals, rollback, transfer, or directory-driven workflow work, switch to `$qingflow-task-ops`.
-
- Before running analysis in `prod`, confirm the intended environment. If browser parity or live route debugging matters, call `record_analyze` with `output_profile="verbose"` and compare `debug.request_route` with the browser route.
-
- ## Tool Scope
-
- Use these tools as the core analysis surface:
-
- - `record_schema_get`
- - `record_analyze`
-
- Use `record_list` or `record_get` only when you need sample rows or a specific supporting example after the main analysis path.
-
- ## Hard Rules
-
- - Analysis tasks must start with `record_schema_get`
- - Build one or more small DSLs, then run `record_analyze` separately for each question
- - DSL field references must use `field_id` only
- - Normalize relative time phrases into explicit legal date ranges before building the DSL
- - If the user asks for `最近一个完整自然月 / 上个月 / 最近30天 / 本季度 / 去年同期`, first convert that phrase into concrete dates, then verify the dates are legal before calling MCP
- - Never send impossible dates such as `2026-02-29`; if the intended month is February 2026, the legal upper bound is `2026-02-28`
- - If the schema still leaves multiple plausible fields, stop and ask the user to confirm from a short candidate list instead of guessing
- - Do not keep retrying different guessed field names in a loop
- - `record_list` is never the basis for a final statistical conclusion
- - If `record_list` is capped or paged, treat it as sample-only evidence
- - Do not mix full totals from `record_analyze` with sample-only list observations as one combined `全量结论`
- - Do not manually tune paging or scan-budget parameters for analysis; `record_analyze` hides them
- - For final conclusions, prefer `strict_full=true`
- - Before choosing a DSL shape, first decide whether the question needs `count`, `sum`, `avg`, `distinct_count`, `ratio`, or `ranking`
- - Do not guess a metric just because the user said `数量`, `单量`, `人数`, or `金额`
- - If one business question depends on multiple metrics, split it into smaller structured questions and build multiple focused DSLs
- - Penetration-rate, conversion-rate, and share-of-total conclusions must define the numerator and denominator first
- - Do not claim a metric you did not query
- - Derived ratios must be computed outside the DSL after trusted numerator and denominator queries complete; do not invent `div`, `formula`, or expression metrics inside `record_analyze`
- - If the requested business question requires unsupported derived math, split it into multiple DSLs and compute the final ratio only in the reasoning layer after the source metrics are confirmed
- - If the user asks for multiple conclusions and only part of them is completed reliably, explicitly disclose which parts are complete and which parts remain unresolved
-
- ## Standard Operating Order
-
- For analysis:
-
- 1. Confirm target app and environment
- 2. Run `record_schema_get`
- 3. Inspect fields, aliases, suggested dimensions, suggested metrics, and suggested time fields
- 4. Generate one or more field_id-based DSLs
- 5. Run `record_analyze` once per DSL
- 6. Run `record_list` only if you still need sample rows, examples, or manual inspection
- 7. Before answering, separate:
-    - `全量可信结论`
-    - `样本观察`
-    - `待验证假设`
-
- ## Semantic Guardrails
-
- - If the user asks for penetration, conversion, share-of-total, win rate, non-standard ratio, or any `%` metric, first write down:
-   - numerator definition
-   - denominator definition
-   - whether each side needs its own DSL
- - If you cannot name the denominator from real schema fields and filters, do not use words like `渗透率`, `转化率`, `占比`, `比例`, or `%`
- - If a field is still ambiguous after `record_schema_get`, do not guess; either select one unique `field_id` from the schema or ask the user to confirm from a short candidate list
- - If a statement depends on `count`, query `count`
- - If a statement depends on total amount, query `sum`
- - If a statement depends on average level, query `avg` or derive it from trusted `sum + count`
- - If a statement depends on trend, query a time dimension with `bucket`
- - If a statement depends on a ratio that the DSL cannot express directly, run the numerator and denominator separately, then compute the ratio outside MCP only after both sides are complete and compatible
- - Rankings must come from structured sorted results, not from loose natural-language restatement
- - When grouped rows are truncated, describe them as `已返回分组中` or `主要分组`
- - If `completeness.rows_truncated=true` or `completeness.statement_scope=returned_groups_only`, do not use words like `各部门`, `所有分组`, `完整名单`, `全部渠道`
- - If grouped rows are truncated, explicitly downgrade the wording to `前 N 个分组` or `主要分组`, never `全部`
- - Complex answers should default to `先结构、后解读`: present the table / metrics / ordering first, then add concise interpretation
- - Final wording should stay as close as possible to schema titles, dimension aliases, and metric aliases; do not rename the business object or field title unless the user asked for a rewrite
-
- ## DSL Contract
-
- Use `record_schema_get` as the source of truth for every DSL field reference:
-
- - Use `fields[].field_id` in `dimensions[].field_id`, `metrics[].field_id`, and `filters[].field_id`
- - Treat `suggested_dimensions`, `suggested_metrics`, and `suggested_time_fields` as hints, not as executable DSL by themselves
- - Do not pass field titles, aliases, or guessed ids where `field_id` is required
-
- The `record_analyze` call should be built from this argument shape:
+ ## Step 1: `record_schema_get` → Step 2: build DSL → Step 3: `record_analyze`
 
- ```json
- {
-     "app_key": "APP_1",
-     "dimensions": [],
-     "metrics": [],
-     "filters": [],
-     "sort": [],
-     "limit": 50,
-     "strict_full": true,
-     "view_key": null,
-     "view_name": null,
-     "output_profile": "normal"
- }
+ This is the ONLY execution order. Never skip step 1. Never call `record_analyze` without a schema.
+
+ Tools: `record_schema_get`, `record_analyze`. Use `record_list`/`record_get` only for sample rows AFTER analysis.
+
+ ---
+
+ ## DSL FORMAT (CRITICAL — read this FIRST)
+
+ ### ✅ Correct vs ❌ Wrong — learn from these before building ANY DSL
+
+ **dimension item:**
+ ```
+ ✅ CORRECT: { "field_id": 9500572, "alias": "报价类型" }
+ ❌ WRONG: 9500572 ← bare integer, not a dict
+ ❌ WRONG: "报价类型" ← string, not a dict
+ ❌ WRONG: { "field_id": 9500572, "title": "报价类型" } ← "title" is forbidden
  ```
 
- Top-level argument rules:
-
- - `app_key`: required. The target Qingflow app.
- - `dimensions`: required list. Use `[]` for whole-table summary. Use one item per grouping dimension for grouped analysis.
- - `metrics`: optional list. If omitted or empty, `record_analyze` defaults to a single `count` metric.
- - `filters`: optional list. Filters restrict the analyzed dataset before results are interpreted.
- - `sort`: optional list. Sorting applies to result rows, not raw source rows.
- - `limit`: positive integer. It only limits returned result rows; it does not reduce the internal scan scope.
- - `strict_full`: boolean. Prefer `true` for final conclusions. If `true`, incomplete scans return an error; if `false`, incomplete scans return partial results.
- - `view_key` / `view_name`: optional. Use a view to narrow scope before analysis. Prefer `view_key` when both are available.
- - `output_profile`: `normal` or `verbose`. Prefer `normal` unless you are debugging completeness or route issues.
-
- Item contracts:
-
- - `dimensions` item:
-   - shape: `{ "field_id": 2, "alias": "状态", "bucket": null }`
-   - `field_id`: required integer from `record_schema_get`
-   - `alias`: optional but recommended; if omitted, the field title becomes the alias
-   - `bucket`: optional; allowed values are `day`, `week`, `month`, `quarter`, `year`, or omitted / `null`
-   - `bucket` may only be used on fields from `suggested_time_fields`
- - `metrics` item:
-   - shape: `{ "op": "sum", "field_id": 7, "alias": "总金额" }`
-   - `op`: one of `count`, `sum`, `avg`, `min`, `max`, `distinct_count`
-   - `field_id`: required for `sum`, `avg`, `min`, `max`, `distinct_count`; do not pass it for `count`
-   - `alias`: optional but strongly recommended because `sort.by` must reference aliases
- - `filters` item:
-   - shape: `{ "field_id": 2, "op": "eq", "value": "进行中" }`
-   - `field_id`: required integer from `record_schema_get`
-   - `op`: optional; defaults to `eq`
-   - supported ops: `eq`, `neq`, `in`, `not_in`, `gt`, `gte`, `lt`, `lte`, `between`, `contains`, `is_null`, `not_null`
-   - value rules:
-     - `eq`, `neq`, `gt`, `gte`, `lt`, `lte`, `contains`: pass a single scalar value
-     - `in`, `not_in`: pass an array
-     - `between`: pass a two-item array like `[min, max]`
-     - `is_null`, `not_null`: omit `value`
- - `sort` item:
-   - shape: `{ "by": "记录数", "order": "desc" }`
-   - `by`: required and must reference an alias already defined in `dimensions` or `metrics`
-   - `order`: optional; use `asc` or `desc`; default is `asc`
-   - do not sort by raw field title or `field_id`
-
- Practical rules:
-
- - Keep one DSL focused on one question. Prefer multiple small DSLs over one overloaded request.
- - Always set explicit aliases for metrics you may sort by, compare, or quote in the final answer.
- - For trend analysis, use one time dimension with `bucket`, then sort by that time alias ascending.
- - For cross analysis, use multiple `dimensions` and a small set of metrics.
- - Do not attempt formulas, joins, having clauses, cohort analysis, or manual paging controls in this DSL.
- - Do not pass unsupported keys such as `formula`, `expr`, `numerator`, `denominator`, `left`, `right`, or `operator` inside metric items.
-
- ## Minimal DSL Templates
-
- Summary:
+ **metric item — the key is `op`, NOT `type`/`agg`/`aggregation`:**
+ ```
+ ✅ CORRECT: { "op": "count", "alias": "记录数" }
+ ✅ CORRECT: { "op": "sum", "field_id": 7, "alias": "总金额" }
+ ❌ WRONG: { "type": "count" } ← "type" is NOT a valid key
+ ❌ WRONG: { "agg": "count" } ← "agg" is NOT a valid key
+ ❌ WRONG: { "aggregation": "count" } ← "aggregation" is NOT a valid key
+ ```
 
- ```json
- {
-     "dimensions": [],
-     "metrics": [
-         { "op": "count", "alias": "记录数" }
-     ],
-     "filters": [],
-     "sort": [],
-     "limit": 1,
-     "strict_full": true
- }
+ **filter item — the key is `op`, NOT `operator`:**
+ ```
+ ✅ CORRECT: { "field_id": 2, "op": "between", "value": ["2024-03-01", "2024-03-31"] }
+ ✅ CORRECT: { "field_id": 5, "op": "eq", "value": "已完成" }
+ ❌ WRONG: { "field_id": 2, "operator": "between", "value": [...] } ← "operator" is forbidden
+ ❌ WRONG: { "field_id": 2, "op": ">=", "value": "2024-03-01" } ← ">=" is not valid, use "gte"
  ```
 
- Single-dimension distribution:
+ **sort item:**
+ ```
+ ✅ CORRECT: { "by": "记录数", "order": "desc" } ← "by" references an alias
+ ❌ WRONG: { "by": 9500572, "order": "desc" } ← field_id not allowed in sort
+ ```
+
+ ### Allowed keys per item (ANY other key = error)
+
+ | Item | Allowed keys only |
+ |------|-------------------|
+ | dimension | `field_id`, `alias`, `bucket` |
+ | metric | `op`, `field_id`, `alias` |
+ | filter | `field_id`, `op`, `value` |
+ | sort | `by`, `order` |
+
+ ### `op` values
+
+ - metrics: `count`, `sum`, `avg`, `min`, `max`, `distinct_count`
+ - filters: `eq`, `neq`, `in`, `not_in`, `gt`, `gte`, `lt`, `lte`, `between`, `contains`, `is_null`, `not_null`
+ - For the `count` metric: do NOT pass `field_id`. For all others: `field_id` is required.
+ - If `metrics` is omitted or `[]`, it defaults to `[{"op":"count","alias":"记录数"}]`.
+
+ ---
+
+ ## COMPLETE DSL TEMPLATE — copy, replace field_id, done
 
  ```json
  {
+     "app_key": "YOUR_APP_KEY",
      "dimensions": [
-         { "field_id": 2, "alias": "状态" }
+         { "field_id": FIELD_ID_FROM_SCHEMA, "alias": "维度名" }
      ],
      "metrics": [
          { "op": "count", "alias": "记录数" }
      ],
-     "filters": [],
+     "filters": [
+         { "field_id": TIME_FIELD_ID, "op": "between", "value": ["2024-03-01", "2024-03-31"] }
+     ],
      "sort": [
          { "by": "记录数", "order": "desc" }
      ],
@@ -194,67 +90,73 @@ Single-dimension distribution:
  }
  ```
 
- Time trend:
+ More templates:
 
+ **Whole-table count (no grouping):**
+ ```json
+ { "dimensions": [], "metrics": [{"op": "count", "alias": "记录数"}], "strict_full": true }
+ ```
+
+ **Monthly trend:**
  ```json
  {
-     "dimensions": [
-         { "field_id": 3, "alias": "月份", "bucket": "month" }
-     ],
-     "metrics": [
-         { "op": "count", "alias": "记录数" }
-     ],
-     "filters": [],
-     "sort": [
-         { "by": "月份", "order": "asc" }
-     ],
-     "limit": 24,
-     "strict_full": true
+     "dimensions": [{"field_id": 3, "alias": "月份", "bucket": "month"}],
+     "metrics": [{"op": "count", "alias": "记录数"}],
+     "sort": [{"by": "月份", "order": "asc"}],
+     "limit": 24, "strict_full": true
  }
  ```
 
- Two-dimensional cross analysis:
-
+ **Cross analysis with sum:**
  ```json
  {
-     "dimensions": [
-         { "field_id": 2, "alias": "状态" },
-         { "field_id": 5, "alias": "负责人" }
-     ],
-     "metrics": [
-         { "op": "count", "alias": "记录数" },
-         { "op": "sum", "field_id": 7, "alias": "总金额" }
-     ],
-     "filters": [],
-     "sort": [
-         { "by": "记录数", "order": "desc" }
-     ],
-     "limit": 100,
-     "strict_full": true
+     "dimensions": [{"field_id": 2, "alias": "状态"}, {"field_id": 5, "alias": "负责人"}],
+     "metrics": [{"op": "count", "alias": "记录数"}, {"op": "sum", "field_id": 7, "alias": "总金额"}],
+     "sort": [{"by": "记录数", "order": "desc"}],
+     "limit": 100, "strict_full": true
  }
  ```
 
- ## Output Gate
-
- - Read aggregate rows from `result.rows`
- - Read overall totals from `result.totals.metric_totals`
- - Read sort intent from `query.sort`
- - Read ranked output from `ranking` when it is not `null`
- - Read ratio output from `ratios` when it is not `null`; `ratios=null` is normal when MCP did not produce a native ratio block
- - Read warning codes from `completeness.warnings`
-
- - Only write `全量可信结论` when the supporting `record_analyze` calls report `completeness.status=complete` and `safe_for_final_conclusion=true`
- - If any key analysis call is incomplete, downgrade the answer to `初步观察` or `部分结果`
- - Treat `safe_for_final_conclusion=true` as necessary but not sufficient when the metric definition is incomplete or grouped rows are truncated
- - If `completeness.statement_scope=returned_groups_only`, you may still give full-population conclusions about totals or ratios, but not a full grouped enumeration claim
- - If aggregate-style output is full but list evidence is sample-only, split the answer into:
-   - `全量可信结论`
-   - `样本观察(不作为最终结论)`
-   - optional `待验证假设`
-
- ## Resources
-
- - Analysis patterns: [references/analysis-patterns.md](references/analysis-patterns.md)
- - Confidence reporting: [references/confidence-reporting.md](references/confidence-reporting.md)
- - Analysis gotchas: [references/analysis-gotchas.md](references/analysis-gotchas.md)
- - Shared environment guidance: [/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-app-user/references/environments.md](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-app-user/references/environments.md)
+ ### Top-level arguments
+
+ - `app_key`: required.
+ - `dimensions`: `[]` = whole-table summary; `[{...}]` = grouped.
+ - `strict_full`: `true` for final conclusions. `false` allows partial results.
+ - `limit`: limits returned rows only, not scan scope.
+ - `view_key`/`view_name`: optional scope narrowing.
+ - `bucket` in dimensions: only for `suggested_time_fields`. Values: `day`/`week`/`month`/`quarter`/`year`/`null`.
+
+ ---
+
+ ## RULES
+
+ - All `field_id` values MUST come from `record_schema_get`. Never guess or use field titles.
+ - One DSL per question. Multiple small DSLs beat one overloaded request.
+ - Normalize relative dates to concrete ranges BEFORE building the DSL. Never send impossible dates (e.g. `2026-02-29`).
+ - If the schema has ambiguous fields, ask the user to pick from a short list. Do not guess.
+ - `record_list` is NEVER the basis for final statistics.
+ - Derived ratios: run the numerator and denominator as separate DSLs, then compute the ratio in your reasoning.
+ - Set an `alias` for any metric you will sort by, compare, or quote.
+
+ ---
+
+ ## OUTPUT (CRITICAL: the final answer must show concrete numbers)
+
+ ### List the data row by row (hard requirement)
+
+ final_answer MUST include a table with every row from `result.rows`:
+
+ | {dimension alias} | {metric alias} | Share |
+ |-------------------|----------------|-------|
+ | {row.dimensions.X} | {row.metrics.Y} | {Y / total * 100}% |
+
+ - Share = row metric value / the total from `result.totals.metric_totals`, kept to one decimal place
+ - If `metric_totals` is absent, use the sum of all rows as the denominator
+ - If there are more than 20 rows, show the Top 20 and say so
+ - Never write only "N types in total" while omitting the row-level detail
+
+ ### Conclusion grading
+
+ - `safe_for_final_conclusion=true` → `全量可信结论`
+ - Incomplete → `初步观察`
+ - `rows_truncated=true` → say `前 N 个分组`, not `全部`/`所有`
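The allowed-keys table and `op` lists above are mechanical enough to check before a DSL ever reaches the MCP. A minimal client-side sketch follows; `check_item` is a hypothetical helper, not part of the package, and `record_analyze` itself remains the authority on validity:

```python
# Hypothetical pre-flight check mirroring the documented DSL item contract.
ALLOWED_KEYS = {
    "dimension": {"field_id", "alias", "bucket"},
    "metric": {"op", "field_id", "alias"},
    "filter": {"field_id", "op", "value"},
    "sort": {"by", "order"},
}
METRIC_OPS = {"count", "sum", "avg", "min", "max", "distinct_count"}
FILTER_OPS = {"eq", "neq", "in", "not_in", "gt", "gte", "lt", "lte",
              "between", "contains", "is_null", "not_null"}


def check_item(kind: str, item: object) -> list[str]:
    """Return a list of violations of the documented contract (empty = OK)."""
    if not isinstance(item, dict):
        # Catches the "bare integer" / "bare string" mistakes shown above.
        return [f"{kind} item must be a dict, got {type(item).__name__}"]
    errors = []
    extra = set(item) - ALLOWED_KEYS[kind]
    if extra:
        errors.append(f"{kind} has forbidden keys: {sorted(extra)}")
    if kind == "metric":
        op = item.get("op")
        if op not in METRIC_OPS:
            errors.append(f"invalid metric op: {op!r}")
        elif op == "count" and "field_id" in item:
            errors.append("count must not carry field_id")
        elif op != "count" and "field_id" not in item:
            errors.append(f"{op} requires field_id")
    if kind == "filter" and item.get("op", "eq") not in FILTER_OPS:
        errors.append(f"invalid filter op: {item.get('op')!r}")
    return errors
```

Running it against the ✅/❌ examples above flags exactly the wrong shapes (`type` instead of `op`, `>=` instead of `gte`, bare integers).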
@@ -11,6 +11,8 @@ Correct recovery:
  3. build one or more small DSLs
  4. run `record_analyze`
 
+ The schema here is applicant-node visible-only. If a field is absent, treat it as not available to the current user rather than switching to guessed ids or builder-side memory.
+
  ## Normalize relative time phrases before building the DSL.
 
  Examples:
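The normalization rule above (turn a relative phrase like `上个月` into a concrete, legal range before building the DSL) can be sketched with the standard library; `last_full_month` is a hypothetical helper name, and `calendar.monthrange` guarantees the upper bound is a real date, so February 2026 ends on `2026-02-28`, never `2026-02-29`:

```python
import calendar
from datetime import date


def last_full_month(today: date) -> tuple[str, str]:
    """Resolve 'last full calendar month' to a legal [start, end] date pair."""
    if today.month > 1:
        year, month = today.year, today.month - 1
    else:
        year, month = today.year - 1, 12
    # monthrange returns (weekday_of_first_day, number_of_days_in_month).
    last_day = calendar.monthrange(year, month)[1]
    return f"{year:04d}-{month:02d}-01", f"{year:04d}-{month:02d}-{last_day:02d}"
```

The resulting pair plugs straight into a `between` filter value.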
@@ -77,6 +79,8 @@ Correct recovery:
  2. if several plausible candidates remain, ask the user to confirm from a short list
  3. build the DSL only after the field is clear
 
+ If the intended field is absent from the schema altogether, stop and explain that it is not visible in the current applicant-node permission scope.
+
  Examples of the right recovery question:
 
  - "I found two possible fields: `线索来源` and `来源渠道`. Which field should the statistics group by?"
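The candidate-shortlist step can be sketched as a substring match over schema field titles. The schema shape (`{"fields": [{"field_id": ..., "title": ...}]}`) follows the `record_schema_get` output described in this document; `candidate_fields` and the matching rule itself are only illustrative:

```python
def candidate_fields(schema: dict, term: str) -> list[dict]:
    """Collect fields whose titles overlap the user's term, for confirmation."""
    return [
        f for f in schema.get("fields", [])
        if term in f.get("title", "") or f.get("title", "") in term
    ]


# Two hits means: ask the user, do not guess a field_id.
schema = {"fields": [
    {"field_id": 101, "title": "线索来源"},
    {"field_id": 102, "title": "来源渠道"},
    {"field_id": 103, "title": "金额"},
]}
```

With the term `来源` this yields two candidates (101 and 102), which is exactly the situation where the recovery question above should be asked instead of building a DSL.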
@@ -30,6 +30,8 @@ Result reading order:
  5. `completeness`
  6. `presentation`
 
+ Treat `record_schema_get` as applicant-node visible-only schema. Missing fields are permission boundaries, not invitations to guess hidden ids.
+
  ## Distribution / ratio pattern
 
  1. Run `record_schema_get`
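The ratio rule in this pattern (numerator and denominator from separate `record_analyze` calls, division only in the reasoning layer) reduces to a small helper. The two totals are assumed to come from completed, `strict_full=true` queries; `derived_ratio` is a hypothetical name:

```python
def derived_ratio(numerator_total: float, denominator_total: float) -> str:
    """Compute a share/ratio OUTSIDE the DSL, after both source metrics are trusted."""
    if denominator_total == 0:
        # No denominator, no percentage claim.
        return "分母为 0,无法计算占比"
    return f"{numerator_total / denominator_total * 100:.1f}%"
```

Only after both sides report `completeness.status=complete` should the result be quoted as a full-population percentage.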
@@ -55,6 +55,12 @@ Use exactly one of these default paths:
  - `record_get`
  - `record_write`
 
+ `record_schema_get` now returns the **current user's applicant-node schema only**:
+
+ - only fields visible to the current user at the applicant node are returned
+ - hidden fields are omitted entirely
+ - missing fields should be treated as `当前用户在申请人节点下不可见/不可用` (not visible/usable for the current user at the applicant node), not as a reason to guess a different field
+
  ## Supporting Tools
 
  - `directory_search`
@@ -80,12 +86,14 @@ Use exactly one of these default paths:
 
  - Use `record_list` for browse/export/sample inspection only
  - Use `record_get` when `record_id` is known
+ - `record_get` without explicit `columns` still returns only applicant-node visible fields; do not assume it exposes the full builder-side record
  - `record_list` accepts:
    - `columns`
    - `where`
    - `order_by`
    - `limit`
    - `page`
+ - `record_list` and `record_get` may reject hidden-field `field_id`s because record tools now validate against the applicant-node visible schema only
  - `record_list` is **not** an analysis tool
  - If a request turns into grouped distributions, ratios, rankings, trends, or final statistical conclusions, switch to [$qingflow-record-analysis](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-record-analysis/SKILL.md)
 
@@ -153,6 +161,7 @@ The DSL is clause-shaped like SQL, but it is **not raw SQL text**.
  - Do not use free-form `WHERE` updates or deletes
  - Do not auto-fill missing fields
  - Do not auto-resolve relation targets without first querying them
+ - Do not assume `record_schema_get` is a builder/full-field schema. It is the current user's applicant-node visible schema only.
 
  ## Response Interpretation
 
@@ -170,4 +179,3 @@ The DSL is clause-shaped like SQL, but it is **not raw SQL text**.
  - Environment switching: [references/environments.md](references/environments.md)
  - Record operation patterns: [references/record-patterns.md](references/record-patterns.md)
  - Data gotchas: [references/data-gotchas.md](references/data-gotchas.md)
-
@@ -6,6 +6,8 @@ For final statistics, grouped distributions, rankings, trends, or insight-style
 
  - `record_list` is for browsing, export, and sample inspection only
  - `record_get` is for one exact record
+ - `record_schema_get` is applicant-node visible-only schema, not a builder/full-field schema
+ - if a field is absent from `record_schema_get`, treat it as not visible or not usable for the current user at the applicant node
  - Do not present paged browse output as if it were a grouped or full-population conclusion
  - If the browser and MCP disagree, compare `request_route.base_url` and `request_route.qf_version` first
 
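The route-comparison rule above can be sketched as a simple field diff. `route_mismatch` is a hypothetical helper; the key names follow the `request_route` fields named in this document:

```python
def route_mismatch(mcp_route: dict, browser_route: dict) -> list[str]:
    """Return the request_route keys where MCP and the browser disagree."""
    keys = ("base_url", "qf_version")
    return [k for k in keys if mcp_route.get(k) != browser_route.get(k)]
```

An empty result means the two clients are hitting the same backend, so a data discrepancy must come from something else (permissions, views, filters).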
@@ -40,4 +42,3 @@ For final statistics, grouped distributions, rankings, trends, or insight-style
  - Use the current form schema's subfield titles; do not guess nested ids
  - When updating existing subtable rows, preserve row ids if the source record returns them
  - Nested subtable writes are still unsupported
-
@@ -11,6 +11,8 @@ Use `record_schema_get -> record_list` when:
  - a delete or update target still needs confirmation
  - the user needs sample rows or a small export
 
+ Remember that `record_schema_get` only exposes the current user's applicant-node visible fields. If a field is missing from that schema, treat it as unavailable in the current permission scope instead of trying to guess another `field_id`.
+
  Keep the browse DSL simple:
 
  - `columns`: field ids only
@@ -29,6 +31,7 @@ Use `record_schema_get -> record_get` when:
  - a write target needs verification before action
 
  Prefer passing explicit `columns` when the user only needs a subset of fields.
+ Without `columns`, `record_get` still returns only applicant-node visible fields, not the full builder-side record payload.
 
  ## Write Pattern
 
@@ -88,6 +91,7 @@ Do not do this:
  - do not invent formulas or expressions
  - do not auto-fill missing required fields
  - do not guess relation targets without first resolving them
+ - do not guess hidden or missing fields from prior builder knowledge; if the field is absent from applicant-node schema, stop and explain the permission boundary
  - do not claim a blocked `record_write` was executed
 
  ## Unsupported Direct Writes
@@ -106,4 +110,3 @@ If the payload includes them, stop after the blocked `record_write` response and
  - Relation fields are record-id based. Resolve the referenced target first, then write the relation field with the real `record_id`.
  - Attachment fields are two-step: upload first with `file_upload_local`, then reuse the returned attachment payload in `record_write`.
  - Subtable writes require the current schema shape; when updating existing subtable rows, preserve row ids if the current record exposes them.
-
@@ -2,4 +2,4 @@ from __future__ import annotations
 
  __all__ = ["__version__"]
 
- __version__ = "0.2.0b22"
+ __version__ = "0.2.0b23"
@@ -30,8 +30,9 @@ def build_server() -> FastMCP:
      "All resource tools operate with the logged-in user's Qingflow permissions.\n\n"
      "For analytics, use record_schema_get first, let the model build field_id-based DSL, "
      "then call record_analyze. record_analyze returns compact business-first output as query/result/ranking/ratios/completeness/presentation; use verbose only for route/debug details. "
+     "record_schema_get returns the current user's applicant-node visible schema only; hidden fields are omitted and missing fields should be treated as not visible in the current permission scope. "
      "For operational record reads, use record_schema_get first, then record_list or record_get. "
-     "For writes, use record_schema_get and then call record_write once; it performs internal preflight before any apply.\n\n"
+     "For writes, use record_schema_get and then call record_write once; it performs internal preflight before any apply and refuses fields outside the applicant-node writable schema.\n\n"
      "Task Center (待办/已办) handling:\n"
      "- Use task_summary to get headline counts.\n"
      "- Use task_list for flat task browsing with task_box and flow_status.\n"
@@ -20,6 +20,7 @@ def build_user_server() -> FastMCP:
      instructions=(
          "Use this server for Qingflow operational workflows with a schema-first path. "
          "For records, start with record_schema_get, then choose record_list, record_get, or record_write. "
+         "record_schema_get returns the current user's applicant-node visible schema only; hidden fields are omitted and missing fields should be treated as not visible in the current permission scope. "
          "For analytics, switch to record_schema_get and record_analyze; its default output is compact query/result/ranking/ratios/completeness/presentation, with route/debug only in verbose mode. "
          "For task center, use task_summary, task_list, and task_facets before any explicit task action. "
          "Avoid builder-side app or schema changes here."
@@ -38,6 +38,7 @@ ATTACHMENT_QUE_TYPES = {13}
  RELATION_QUE_TYPES = {25}
  SUBTABLE_QUE_TYPES = {18}
  VERIFY_UNSUPPORTED_WRITE_QUE_TYPES = {14, 34, 35, 36}
+ LAYOUT_ONLY_QUE_TYPES = {24}
  DEPARTMENT_MEMBER_JUDGE_PREFIX = "deptId_"
  JUDGE_EQUAL = 0
  JUDGE_UNEQUAL = 1
@@ -90,6 +91,14 @@ class ViewSelection:
      conditions: list[list[ViewFilterCondition]]
 
 
+ @dataclass(slots=True)
+ class WorkflowNodeRef:
+     workflow_node_id: int
+     name: str
+     type: str
+     raw: JSONObject
+
+
  @dataclass(slots=True)
  class RecordInputError(Exception):
      message: str
@@ -134,7 +143,8 @@ FIELD_LOOKUP_STRIP_RE = re.compile(r"[\s_()()\[\]【】{}<>·/\\::-]+")
  class RecordTools(ToolBase):
      def __init__(self, sessions, backend) -> None:  # type: ignore[no-untyped-def]
          super().__init__(sessions, backend)
-         self._form_cache: dict[tuple[str, str], JSONObject] = {}
+         self._form_cache: dict[tuple[str, str, str, int], JSONObject] = {}
+         self._applicant_node_cache: dict[tuple[str, str], WorkflowNodeRef] = {}
          self._view_list_cache: dict[tuple[str, str], list[JSONObject]] = {}
          self._view_config_cache: dict[tuple[str, str], JSONObject] = {}
@@ -286,9 +296,10 @@ class RecordTools(ToolBase):
          raise_tool_error(QingflowApiError.config_error("app_key is required"))
 
      def runner(session_profile, context):
+         applicant_node = self._resolve_applicant_node(profile, context, app_key, force_refresh=False)
          index = self._get_field_index(profile, context, app_key, force_refresh=False)
          view_selection = self._resolve_view_selection(profile, context, app_key, view_key=view_key, view_name=view_name)
-         fields = [self._schema_field_payload(field) for field in index.by_id.values()]
+         fields = [self._schema_field_payload(field, workflow_node_id=applicant_node.workflow_node_id) for field in index.by_id.values()]
          suggested_dimensions = [
              {"field_id": item["field_id"], "title": item["title"]}
              for item in fields
@@ -312,6 +323,12 @@ class RecordTools(ToolBase):
      "request_route": self._request_route_payload(context),
      "data": {
          "app_key": app_key,
+         "schema_scope": "applicant_node",
+         "workflow_node": {
+             "workflow_node_id": applicant_node.workflow_node_id,
+             "name": applicant_node.name,
+             "type": applicant_node.type,
+         },
          "view_resolution": _view_selection_payload(view_selection),
          "fields": fields,
          "suggested_dimensions": suggested_dimensions,
@@ -531,31 +548,39 @@ class RecordTools(ToolBase):
      }
      return response
 
- raw = self.record_get(
-     profile=profile,
-     app_key=app_key,
-     apply_id=record_id,
-     role=1,
-     list_type=None,
-     audit_node_id=workflow_node_id,
- )
- return {
-     "profile": profile,
-     "ws_id": raw.get("ws_id"),
-     "ok": bool(raw.get("ok", True)),
-     "request_route": raw.get("request_route"),
-     "warnings": [],
-     "output_profile": normalized_output_profile,
-     "data": {
-         "app_key": app_key,
-         "record_id": record_id,
-         "record": raw.get("result"),
-         "selection": {
-             "columns": columns,
-             "workflow_node_id": workflow_node_id,
+ def runner(session_profile, context):
+     index = self._get_field_index(profile, context, app_key, force_refresh=False)
+     selected_fields = list(index.by_id.values())
+     result = self.backend.request(
+         "GET",
+         context,
+         f"/app/{app_key}/apply/{record_id}",
+         params={"role": 1},
+     )
+     answer_list = result.get("answers") if isinstance(result, dict) and isinstance(result.get("answers"), list) else []
+     row = _build_flat_row(cast(list[JSONValue], answer_list), selected_fields, apply_id=record_id)
+     response: JSONObject = {
+         "profile": profile,
+         "ws_id": session_profile.selected_ws_id,
+         "ok": True,
+         "request_route": self._request_route_payload(context),
+         "warnings": [],
+         "output_profile": normalized_output_profile,
+         "data": {
+             "app_key": app_key,
+             "record_id": record_id,
+             "record": row,
+             "selection": {
+                 "columns": columns,
+                 "workflow_node_id": workflow_node_id,
+             },
          },
-     },
- }
+     }
+     if normalized_output_profile == "verbose":
+         response["data"]["debug"] = {"raw_record": result}
+     return response
+
+ return self._run_record_tool(profile, runner)
 
  def record_write(
      self,
@@ -712,7 +737,7 @@ class RecordTools(ToolBase):
             preflight=None,
         )
 
-    def _schema_field_payload(self, field: FormField) -> JSONObject:
+    def _schema_field_payload(self, field: FormField, *, workflow_node_id: int) -> JSONObject:
         write_hints = self._schema_write_hints(field)
         return {
             "field_id": field.que_id,
@@ -725,6 +750,8 @@ class RecordTools(ToolBase):
             "role_hints": self._schema_role_hints(field),
             "readable": True,
             "writable": write_hints["writable"],
+            "permission_scope": "applicant_node",
+            "workflow_node_id": workflow_node_id,
             "write_kind": write_hints["write_kind"],
             "supported_read_ops": write_hints["supported_read_ops"],
             "supported_write_ops": write_hints["supported_write_ops"],
@@ -1105,10 +1132,12 @@ class RecordTools(ToolBase):
             self._ensure_allowed_analyze_keys(
                 item,
                 location=f"metrics[{idx}]",
-                allowed_keys={"op", "field_id", "fieldId", "alias"},
+                allowed_keys={"op", "type", "agg", "aggregation", "field_id", "fieldId", "alias"},
                 example="{'op': 'sum', 'field_id': 7, 'alias': '总金额'}",
             )
-            op = _normalize_optional_text(item.get("op"))
+            op = _normalize_optional_text(
+                item.get("op") or item.get("type") or item.get("agg") or item.get("aggregation")
+            )
             if op not in supported_ops:
                 raise RecordInputError(
                     message=f"metrics[{idx}] uses unsupported op '{op}'",
@@ -1127,12 +1156,8 @@ class RecordTools(ToolBase):
                     details={"location": f"metrics[{idx}]", "field": _field_ref_payload(field), "op": op},
                 )
             elif item.get("field_id", item.get("fieldId")) is not None:
-                raise RecordInputError(
-                    message=f"metrics[{idx}] with op 'count' must not include field_id",
-                    error_code="INVALID_ANALYZE_METRIC",
-                    fix_hint="Remove field_id from count metrics.",
-                    details={"location": f"metrics[{idx}]", "op": op},
-                )
+                # LLM callers often pass field_id with 'count' metrics; ignore it silently instead of raising.
+                pass
             alias = _normalize_optional_text(item.get("alias"))
             if alias is None:
                 if op == "count":
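The metric-normalization change in the hunks above can be illustrated with a standalone sketch. The helper `_text_or_none` and the `SUPPORTED_OPS` set below are simplified stand-ins for illustration only, not the package's actual implementation:

```python
# Hypothetical, simplified sketch of the behavior in the diff above:
# "op" may also arrive under the aliases "type", "agg", or "aggregation",
# and a field_id supplied alongside a "count" metric is dropped instead
# of rejected.
SUPPORTED_OPS = {"count", "sum", "avg", "min", "max"}  # assumed set


def _text_or_none(value):
    # Simplified stand-in for the package's _normalize_optional_text.
    if value is None:
        return None
    text = str(value).strip()
    return text or None


def normalize_metric(item: dict) -> dict:
    op = _text_or_none(
        item.get("op") or item.get("type") or item.get("agg") or item.get("aggregation")
    )
    if op not in SUPPORTED_OPS:
        raise ValueError(f"unsupported op {op!r}")
    field_id = item.get("field_id", item.get("fieldId"))
    if op == "count":
        field_id = None  # tolerate field_id on count metrics by ignoring it
    return {"op": op, "field_id": field_id, "alias": _text_or_none(item.get("alias"))}


print(normalize_metric({"type": "sum", "fieldId": 7, "alias": "总金额"}))
```

Accepting the aliases and silently dropping `field_id` on `count` trades strict validation for robustness against LLM-generated tool calls, which frequently use near-miss key names.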
@@ -2390,10 +2415,16 @@ class RecordTools(ToolBase):
         return self._run_record_tool(profile, runner)
 
     def _get_form_schema(self, profile: str, context, app_key: str, *, force_refresh: bool) -> JSONObject:  # type: ignore[no-untyped-def]
-        cache_key = (profile, app_key)
+        applicant_node = self._resolve_applicant_node(profile, context, app_key, force_refresh=force_refresh)
+        cache_key = (profile, app_key, "applicant_node", applicant_node.workflow_node_id)
         if not force_refresh and cache_key in self._form_cache:
             return self._form_cache[cache_key]
-        schema = self.backend.request("GET", context, f"/app/{app_key}/form", params={"type": 1})
+        schema = self.backend.request(
+            "GET",
+            context,
+            f"/app/{app_key}/form",
+            params={"type": 1, "beingApply": True, "auditNodeId": applicant_node.workflow_node_id},
+        )
         normalized = _normalize_form_schema(schema)
         self._form_cache[cache_key] = normalized
         return normalized
@@ -2401,6 +2432,26 @@ class RecordTools(ToolBase):
     def _get_field_index(self, profile: str, context, app_key: str, *, force_refresh: bool) -> FieldIndex:  # type: ignore[no-untyped-def]
         return _build_field_index(self._get_form_schema(profile, context, app_key, force_refresh=force_refresh))
 
+    def _resolve_applicant_node(self, profile: str, context, app_key: str, *, force_refresh: bool) -> WorkflowNodeRef:  # type: ignore[no-untyped-def]
+        cache_key = (profile, app_key)
+        if not force_refresh and cache_key in self._applicant_node_cache:
+            return self._applicant_node_cache[cache_key]
+        payload = self.backend.request("GET", context, f"/app/{app_key}/auditNodes")
+        applicant_node = _extract_applicant_node(payload)
+        if applicant_node is None:
+            raise_tool_error(
+                QingflowApiError(
+                    category="config",
+                    message=f"cannot resolve applicant node for app {app_key}",
+                    details={
+                        "error_code": "APPLICANT_NODE_NOT_FOUND",
+                        "fix_hint": "Ensure the app has a workflow applicant node before using user-side record tools.",
+                    },
+                )
+            )
+        self._applicant_node_cache[cache_key] = applicant_node
+        return applicant_node
+
     def _get_view_list(self, profile: str, context, app_key: str) -> list[JSONObject]:  # type: ignore[no-untyped-def]
         cache_key = (profile, app_key)
         if cache_key in self._view_list_cache:
@@ -3883,6 +3934,30 @@ def _normalize_view_list(payload: JSONValue) -> list[JSONObject]:
     return flattened
 
 
+def _normalize_audit_nodes(payload: JSONValue) -> list[JSONObject]:
+    if isinstance(payload, list):
+        return [item for item in payload if isinstance(item, dict)]
+    if isinstance(payload, dict):
+        return [item for item in payload.values() if isinstance(item, dict)]
+    return []
+
+
+def _extract_applicant_node(payload: JSONValue) -> WorkflowNodeRef | None:
+    for item in _normalize_audit_nodes(payload):
+        node_type = _coerce_count(item.get("type"))
+        deal_type = _coerce_count(item.get("dealType"))
+        workflow_node_id = _coerce_count(item.get("auditNodeId"))
+        if workflow_node_id is None or node_type != 0 or deal_type != 3:
+            continue
+        return WorkflowNodeRef(
+            workflow_node_id=workflow_node_id,
+            name=_normalize_optional_text(item.get("auditNodeName")) or str(workflow_node_id),
+            type="applicant",
+            raw=item,
+        )
+    return None
+
+
 def _compile_view_conditions(config: JSONObject) -> list[list[ViewFilterCondition]]:
     raw_limit = config.get("viewgraphLimit")
     if not isinstance(raw_limit, list):
@@ -3919,16 +3994,19 @@ def _build_field_index(schema: JSONObject) -> FieldIndex:
         *[(question, False) for question in _flatten_questions(schema.get("formQues"))],
     ]
     for question, is_base_question in all_questions:
+        if not _should_index_question(question):
+            continue
        que_id = _coerce_count(question.get("queId"))
        title = _stringify_json(question.get("queTitle")).strip()
        if que_id is None or que_id < 0 or not title:
            continue
+        can_edit = question.get("canEdit")
        field = FormField(
            que_id=que_id,
            que_title=title,
            que_type=_coerce_count(question.get("queType")),
            required=bool(question.get("required") or question.get("beingRequired")),
-            readonly=bool(question.get("readonly") or question.get("beingReadonly") or is_base_question),
+            readonly=bool(question.get("readonly") or question.get("beingReadonly") or is_base_question or can_edit is False),
            system=bool(question.get("system") or question.get("beingSystem") or is_base_question),
            options=_extract_question_options(question),
            aliases=[],
@@ -3947,16 +4025,35 @@ def _build_field_index(schema: JSONObject) -> FieldIndex:
 def _flatten_questions(payload: JSONValue) -> list[JSONObject]:
     flattened: list[JSONObject] = []
     if isinstance(payload, dict):
-        if "queId" in payload or "queTitle" in payload:
+        is_question = "queId" in payload or "queTitle" in payload
+        if is_question:
             flattened.append(payload)
-        for value in payload.values():
-            flattened.extend(_flatten_questions(value))
+        for key in ("subQuestions", "innerQuestions", "subQues"):
+            value = payload.get(key)
+            if isinstance(value, list):
+                flattened.extend(_flatten_questions(value))
+        if not is_question:
+            for key in ("baseQues", "formQues"):
+                value = payload.get(key)
+                if isinstance(value, list):
+                    flattened.extend(_flatten_questions(value))
     elif isinstance(payload, list):
         for item in payload:
             flattened.extend(_flatten_questions(item))
     return flattened
 
 
+def _should_index_question(question: JSONObject) -> bool:
+    if bool(question.get("beingHide") or question.get("hidden")):
+        return False
+    if _coerce_count(question.get("quoteId")) is not None:
+        return False
+    que_type = _coerce_count(question.get("queType"))
+    if que_type in LAYOUT_ONLY_QUE_TYPES:
+        return False
+    return True
+
+
 def _extract_question_options(question: JSONObject) -> list[str]:
     options = question.get("options")
     if not isinstance(options, list):
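A minimal standalone sketch of the revised traversal in the final hunk: recursion now follows only known container keys (`subQuestions` / `innerQuestions` / `subQues`, plus `baseQues` / `formQues` on a non-question root) instead of every dict value, so unrelated config objects that happen to contain a `queId` are no longer picked up. This is an illustrative version, not the package code:

```python
# Hypothetical standalone version of the key-scoped question traversal.
def flatten_questions(payload):
    flattened = []
    if isinstance(payload, dict):
        is_question = "queId" in payload or "queTitle" in payload
        if is_question:
            flattened.append(payload)
        # Descend only into known sub-question containers.
        for key in ("subQuestions", "innerQuestions", "subQues"):
            value = payload.get(key)
            if isinstance(value, list):
                flattened.extend(flatten_questions(value))
        # Only a non-question root carries top-level question lists.
        if not is_question:
            for key in ("baseQues", "formQues"):
                value = payload.get(key)
                if isinstance(value, list):
                    flattened.extend(flatten_questions(value))
    elif isinstance(payload, list):
        for item in payload:
            flattened.extend(flatten_questions(item))
    return flattened


schema = {
    "formQues": [
        # "someConfig" holds a queId but is NOT a known container, so the
        # revised traversal no longer descends into it.
        {"queId": 1, "queTitle": "Name", "someConfig": {"queId": 99}},
        {"queId": 2, "queTitle": "Items", "subQuestions": [{"queId": 3, "queTitle": "Qty"}]},
    ]
}
print([q["queId"] for q in flatten_questions(schema)])
```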