@josephyan/qingflow-app-user-mcp 0.2.0-beta.23 → 0.2.0-beta.25

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -3,13 +3,13 @@
 Install:
 
 ```bash
- npm install @josephyan/qingflow-app-user-mcp@0.2.0-beta.23
+ npm install @josephyan/qingflow-app-user-mcp@0.2.0-beta.25
 ```
 
 Run:
 
 ```bash
- npx -y -p @josephyan/qingflow-app-user-mcp@0.2.0-beta.23 qingflow-app-user-mcp
+ npx -y -p @josephyan/qingflow-app-user-mcp@0.2.0-beta.25 qingflow-app-user-mcp
 ```
 
 Environment:
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "@josephyan/qingflow-app-user-mcp",
-   "version": "0.2.0-beta.23",
+   "version": "0.2.0-beta.25",
   "description": "Operational end-user MCP for Qingflow records, tasks, comments, and directory workflows.",
   "license": "MIT",
   "type": "module",
package/pyproject.toml CHANGED
@@ -4,7 +4,7 @@ build-backend = "setuptools.build_meta"
 
 [project]
 name = "qingflow-mcp"
- version = "0.2.0b23"
+ version = "0.2.0b25"
 description = "User-authenticated MCP server for Qingflow"
 readme = "README.md"
 license = "MIT"
@@ -41,6 +41,7 @@ Route to exactly one of these specialized paths:
 
 ## Routing Rules
 
+ - If the user does not know the target `app_key`, discover apps first with `app_list` or `app_search`, then route to the specialized skill
 - If the task is about browsing, reading, creating, updating, deleting, attachments, relations, or subtable writes, switch to `$qingflow-record-crud`
 - If the task is about inbox, todo, cc, task-center workload, comments, approval, reject, rollback, transfer, urge, or directory lookup, switch to `$qingflow-task-ops`
 - If the task is about grouped distributions, ratios, rankings, trends, insights, or any final statistical conclusion, switch to `$qingflow-record-analysis`
@@ -7,192 +7,87 @@ metadata:
 
 # Qingflow Record Analysis
 
- ## Overview
+ Analysis tasks must start with `record_schema_get`.
+ Use field_id-based DSLs only.
 
- This skill is for record analysis inside existing Qingflow apps. Use it when the task is about `分析 / 洞察 / 分布 / 占比 / 平均 / 排名 / 趋势 / 所有 / 全部 / 全国 / 高价值` or any final statistical conclusion.
+ ## Step 1: `record_schema_get` → Step 2: build DSL → Step 3: `record_analyze`
 
- This skill assumes the MCP is already connected and authenticated. If not, switch to `$qingflow-mcp-setup` first. If the task is about creating, updating, or deleting records rather than analyzing them, switch to `$qingflow-record-crud`. If it is about task-center actions, comments, approvals, rollback, transfer, or directory-driven workflow work, switch to `$qingflow-task-ops`.
+ This is the ONLY execution order. Never skip step 1. Never call `record_analyze` without a schema.
 
- Before running analysis in `prod`, confirm the intended environment. If browser parity or live route debugging matters, call `record_analyze` with `output_profile="verbose"` and compare `debug.request_route` with the browser route.
+ Tools: `record_schema_get`, `record_analyze`. Use `record_list`/`record_get` only for sample rows AFTER analysis, and treat those read paths as belonging to [$qingflow-record-crud](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-record-crud/SKILL.md).
+ Comments, approvals, rollback, transfer, urge, and directory lookup stay in [$qingflow-task-ops](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-task-ops/SKILL.md), not in this analysis skill.
 
- ## Tool Scope
-
- Use these tools as the core analysis surface:
-
- - `record_schema_get`
- - `record_analyze`
+ ---
 
- Use `record_list` or `record_get` only when you need sample rows or a specific supporting example after the main analysis path.
+ ## DSL Contract
 
- `record_schema_get` now returns the **current user's applicant-node visible schema only**:
+ ### DSL FORMAT (CRITICAL: read this FIRST)
 
- - hidden fields are omitted entirely
- - absent fields should be interpreted as `当前用户在申请人节点下不可见/不可用`
- - do not treat the schema as a builder/full-field metadata dump
+ **Correct vs Wrong — learn from these before building ANY DSL.**
 
- ## Hard Rules
+ **dimension item:**
+ ```
+ ✅ CORRECT: { "field_id": 9500572, "alias": "报价类型" }
+ ❌ WRONG: 9500572 ← bare integer, not a dict
+ ❌ WRONG: "报价类型" ← string, not a dict
+ ❌ WRONG: { "field_id": 9500572, "title": "报价类型" } ← "title" is forbidden
+ ```
 
- - Analysis tasks must start with `record_schema_get`
- - Build one or more small DSLs, then run `record_analyze` separately for each question
- - DSL field references must use `field_id` only
- - Normalize relative time phrases into explicit legal date ranges before building the DSL
- - If the user asks for `最近一个完整自然月 / 上个月 / 最近30天 / 本季度 / 去年同期`, first convert that phrase into concrete dates, then verify the dates are legal before calling MCP
- - Never send impossible dates such as `2026-02-29`; if the intended month is February 2026, the legal upper bound is `2026-02-28`
- - If the schema still leaves multiple plausible fields, stop and ask the user to confirm from a short candidate list instead of guessing
- - Do not keep retrying different guessed field names in a loop
- - `record_list` is never the basis for a final statistical conclusion
- - If `record_list` is capped or paged, treat it as sample-only evidence
- - Do not mix full totals from `record_analyze` with sample-only list observations as one combined `全量结论`
- - Do not manually tune paging or scan-budget parameters for analysis; `record_analyze` hides them
- - For final conclusions, prefer `strict_full=true`
- - Before choosing a DSL shape, first decide whether the question needs `count`, `sum`, `avg`, `distinct_count`, `ratio`, or `ranking`
- - Do not guess a metric just because the user said `数量`, `单量`, `人数`, or `金额`
- - If one business question depends on multiple metrics, split it into smaller structured questions and build multiple focused DSLs
- - `渗透率 / 转化率 / 占比类结论必须先定义分子和分母`
- - Do not claim a metric you did not query.
- - Derived ratios must be computed outside the DSL after trusted numerator and denominator queries complete; do not invent `div`, `formula`, or expression metrics inside `record_analyze`
- - If the requested business question requires unsupported derived math, split it into multiple DSLs and compute the final ratio only in the reasoning layer after the source metrics are confirmed
- - If the user asks for multiple conclusions and only part of them is completed reliably, explicitly disclose which parts are complete and which parts remain unresolved
-
- ## Standard Operating Order
-
- For analysis:
-
- 1. Confirm target app and environment
- 2. Run `record_schema_get`
- 3. Inspect fields, aliases, suggested dimensions, suggested metrics, and suggested time fields
- 4. Generate one or more field_id-based DSLs
- 5. Run `record_analyze` once per DSL
- 6. Run `record_list` only if you still need sample rows, examples, or manual inspection
- 7. Before answering, separate:
-    - `全量可信结论`
-    - `样本观察`
-    - `待验证假设`
-
- ## Semantic Guardrails
-
- - If the user asks for penetration, conversion, share-of-total, win rate, non-standard ratio, or any `%` metric, first write down:
-   - numerator definition
-   - denominator definition
-   - whether each side needs its own DSL
- - If you cannot name the denominator from real schema fields and filters, do not use words like `渗透率`, `转化率`, `占比`, `比例`, or `%`
- - If a field is still ambiguous after `record_schema_get`, do not guess; either select one unique `field_id` from the schema or ask the user to confirm from a short candidate list
- - If a business field is absent from `record_schema_get`, do not infer or guess a hidden `field_id`; explain that the field is not visible in the current applicant-node permission scope
- - If a statement depends on `count`, query `count`
- - If a statement depends on total amount, query `sum`
- - If a statement depends on average level, query `avg` or derive it from trusted `sum + count`
- - If a statement depends on trend, query a time dimension with `bucket`
- - If a statement depends on a ratio that the DSL cannot express directly, run the numerator and denominator separately, then compute the ratio outside MCP only after both sides are complete and compatible
- - Rankings must come from structured sorted results, not from loose natural-language restatement
- - When grouped rows are truncated, describe them as `已返回分组中` or `主要分组`
- - If `completeness.rows_truncated=true` or `completeness.statement_scope=returned_groups_only`, do not use words like `各部门`、`所有分组`、`完整名单`、`全部渠道`
- - If grouped rows are truncated, explicitly downgrade the wording to `前 N 个分组` or `主要分组`, never `全部`
- - Complex answers should default to `先结构、后解读`: present the table / metrics / ordering first, then add concise interpretation
- - Final wording should stay as close as possible to schema titles, dimension aliases, and metric aliases; do not rename the business object or field title unless the user asked for a rewrite
+ **metric item — the key is `op`, NOT `type`/`agg`/`aggregation`:**
+ ```
+ ✅ CORRECT: { "op": "count", "alias": "记录数" }
+ ✅ CORRECT: { "op": "sum", "field_id": 7, "alias": "总金额" }
+ ❌ WRONG: { "type": "count" } ← "type" is NOT a valid key
+ ❌ WRONG: { "agg": "count" } ← "agg" is NOT a valid key
+ ❌ WRONG: { "aggregation": "count" } ← "aggregation" is NOT a valid key
+ ```
 
- ## DSL Contract
+ **filter item — the key is `op`, NOT `operator`:**
+ ```
+ ✅ CORRECT: { "field_id": 2, "op": "between", "value": ["2024-03-01", "2024-03-31"] }
+ ✅ CORRECT: { "field_id": 5, "op": "eq", "value": "已完成" }
+ ❌ WRONG: { "field_id": 2, "operator": "between", "value": [...] } ← "operator" is forbidden
+ ❌ WRONG: { "field_id": 2, "op": ">=", "value": "2024-03-01" } ← ">=" is not valid, use "gte"
+ ```
 
- Use `record_schema_get` as the source of truth for every DSL field reference:
+ **sort item:**
+ ```
+ ✅ CORRECT: { "by": "记录数", "order": "desc" } ← "by" references an alias
+ ❌ WRONG: { "by": 9500572, "order": "desc" } ← field_id not allowed in sort
+ ```
 
- - Use `fields[].field_id` in `dimensions[].field_id`, `metrics[].field_id`, and `filters[].field_id`
- - Treat `suggested_dimensions`, `suggested_metrics`, and `suggested_time_fields` as hints, not as executable DSL by themselves
- - Do not pass field titles, aliases, or guessed ids where `field_id` is required
+ ### Allowed keys per item (ANY other key = error)
 
- The `record_analyze` call should be built from this argument shape:
+ | Item | Allowed keys only |
+ |------|-------------------|
+ | dimension | `field_id`, `alias`, `bucket` |
+ | metric | `op`, `field_id`, `alias` |
+ | filter | `field_id`, `op`, `value` |
+ | sort | `by`, `order` |
 
- ```json
- {
-   "app_key": "APP_1",
-   "dimensions": [],
-   "metrics": [],
-   "filters": [],
-   "sort": [],
-   "limit": 50,
-   "strict_full": true,
-   "view_key": null,
-   "view_name": null,
-   "output_profile": "normal"
- }
- ```
+ ### `op` values
 
- Top-level argument rules:
-
- - `app_key`: required. The target Qingflow app.
- - `dimensions`: required list. Use `[]` for whole-table summary. Use one item per grouping dimension for grouped analysis.
- - `metrics`: optional list. If omitted or empty, `record_analyze` defaults to a single `count` metric.
- - `filters`: optional list. Filters restrict the analyzed dataset before results are interpreted.
- - `sort`: optional list. Sorting applies to result rows, not raw source rows.
- - `limit`: positive integer. It only limits returned result rows; it does not reduce the internal scan scope.
- - `strict_full`: boolean. Prefer `true` for final conclusions. If `true`, incomplete scans return an error; if `false`, incomplete scans return partial results.
- - `view_key` / `view_name`: optional. Use a view to narrow scope before analysis. Prefer `view_key` when both are available.
- - `output_profile`: `normal` or `verbose`. Prefer `normal` unless you are debugging completeness or route issues.
-
- Item contracts:
-
- - `dimensions` item:
-   - shape: `{ "field_id": 2, "alias": "状态", "bucket": null }`
-   - `field_id`: required integer from `record_schema_get`
-   - `alias`: optional but recommended; if omitted, the field title becomes the alias
-   - `bucket`: optional; allowed values are `day`, `week`, `month`, `quarter`, `year`, or omitted / `null`
-   - `bucket` may only be used on fields from `suggested_time_fields`
- - `metrics` item:
-   - shape: `{ "op": "sum", "field_id": 7, "alias": "总金额" }`
-   - `op`: one of `count`, `sum`, `avg`, `min`, `max`, `distinct_count`
-   - `field_id`: required for `sum`, `avg`, `min`, `max`, `distinct_count`; do not pass it for `count`
-   - `alias`: optional but strongly recommended because `sort.by` must reference aliases
- - `filters` item:
-   - shape: `{ "field_id": 2, "op": "eq", "value": "进行中" }`
-   - `field_id`: required integer from `record_schema_get`
-   - `op`: optional; defaults to `eq`
-   - supported ops: `eq`, `neq`, `in`, `not_in`, `gt`, `gte`, `lt`, `lte`, `between`, `contains`, `is_null`, `not_null`
-   - value rules:
-     - `eq`, `neq`, `gt`, `gte`, `lt`, `lte`, `contains`: pass a single scalar value
-     - `in`, `not_in`: pass an array
-     - `between`: pass a two-item array like `[min, max]`
-     - `is_null`, `not_null`: omit `value`
- - `sort` item:
-   - shape: `{ "by": "记录数", "order": "desc" }`
-   - `by`: required and must reference an alias already defined in `dimensions` or `metrics`
-   - `order`: optional; use `asc` or `desc`; default is `asc`
-   - do not sort by raw field title or `field_id`
-
- Practical rules:
-
- - Keep one DSL focused on one question. Prefer multiple small DSLs over one overloaded request.
- - Always set explicit aliases for metrics you may sort by, compare, or quote in the final answer.
- - For trend analysis, use one time dimension with `bucket`, then sort by that time alias ascending.
- - For cross analysis, use multiple `dimensions` and a small set of metrics.
- - Do not attempt formulas, joins, having clauses, cohort analysis, or manual paging controls in this DSL.
- - Do not pass unsupported keys such as `formula`, `expr`, `numerator`, `denominator`, `left`, `right`, or `operator` inside metric items.
-
- ## Minimal DSL Templates
-
- Summary:
+ - metrics: `count`, `sum`, `avg`, `min`, `max`, `distinct_count`
+ - filters: `eq`, `neq`, `in`, `not_in`, `gt`, `gte`, `lt`, `lte`, `between`, `contains`, `is_null`, `not_null`
+ - For the `count` metric: do NOT pass `field_id`. For all others: `field_id` is required.
+ - If `metrics` is omitted or `[]`, it defaults to `[{"op":"count","alias":"记录数"}]`.
 
- ```json
- {
-   "dimensions": [],
-   "metrics": [
-     { "op": "count", "alias": "记录数" }
-   ],
-   "filters": [],
-   "sort": [],
-   "limit": 1,
-   "strict_full": true
- }
- ```
+ ---
 
- Single-dimension distribution:
+ ## COMPLETE DSL TEMPLATE — copy, replace field_id, done
 
 ```json
 {
+   "app_key": "YOUR_APP_KEY",
   "dimensions": [
-     { "field_id": 2, "alias": "状态" }
+     { "field_id": FIELD_ID_FROM_SCHEMA, "alias": "维度名" }
   ],
   "metrics": [
     { "op": "count", "alias": "记录数" }
   ],
-   "filters": [],
+   "filters": [
+     { "field_id": TIME_FIELD_ID, "op": "between", "value": ["2024-03-01", "2024-03-31"] }
+   ],
   "sort": [
     { "by": "记录数", "order": "desc" }
   ],
@@ -201,67 +96,87 @@ Single-dimension distribution:
 }
 ```
 
- Time trend:
+ More templates:
+
+ **Whole-table count (no grouping):**
+ ```json
+ { "dimensions": [], "metrics": [{"op":"count","alias":"记录数"}], "strict_full": true }
+ ```
 
+ **Monthly trend:**
 ```json
 {
-   "dimensions": [
-     { "field_id": 3, "alias": "月份", "bucket": "month" }
-   ],
-   "metrics": [
-     { "op": "count", "alias": "记录数" }
-   ],
-   "filters": [],
-   "sort": [
-     { "by": "月份", "order": "asc" }
-   ],
-   "limit": 24,
-   "strict_full": true
+   "dimensions": [{"field_id": 3, "alias": "月份", "bucket":"month"}],
+   "metrics": [{"op":"count","alias":"记录数"}],
+   "sort": [{"by":"月份","order":"asc"}],
+   "limit": 24, "strict_full": true
 }
 ```
 
- Two-dimensional cross analysis:
-
+ **Cross analysis with sum:**
 ```json
 {
-   "dimensions": [
-     { "field_id": 2, "alias": "状态" },
-     { "field_id": 5, "alias": "负责人" }
-   ],
-   "metrics": [
-     { "op": "count", "alias": "记录数" },
-     { "op": "sum", "field_id": 7, "alias": "总金额" }
-   ],
-   "filters": [],
-   "sort": [
-     { "by": "记录数", "order": "desc" }
-   ],
-   "limit": 100,
-   "strict_full": true
+   "dimensions": [{"field_id": 2, "alias": "状态"}, {"field_id": 5, "alias": "负责人"}],
+   "metrics": [{"op":"count","alias":"记录数"}, {"op":"sum","field_id": 7, "alias":"总金额"}],
+   "sort": [{"by":"记录数","order":"desc"}],
+   "limit": 100, "strict_full": true
 }
 ```
 
- ## Output Gate
-
- - Read aggregate rows from `result.rows`
- - Read overall totals from `result.totals.metric_totals`
- - Read sort intent from `query.sort`
- - Read ranked output from `ranking` when it is not `null`
- - Read ratio output from `ratios` when it is not `null`; `ratios=null` is normal when MCP did not produce a native ratio block
- - Read warning codes from `completeness.warnings`
-
- - Only write `全量可信结论` when the supporting `record_analyze` calls report `completeness.status=complete` and `safe_for_final_conclusion=true`
- - If any key analysis call is incomplete, downgrade the answer to `初步观察` or `部分结果`
- - Treat `safe_for_final_conclusion=true` as necessary but not sufficient when the metric definition is incomplete or grouped rows are truncated
- - If `completeness.statement_scope=returned_groups_only`, you may still give full-population conclusions about totals or ratios, but not a full grouped enumeration claim
- - If aggregate-style output is full but list evidence is sample-only, split the answer into:
-   - `全量可信结论`
-   - `样本观察(不作为最终结论)`
-   - optional `待验证假设`
-
- ## Resources
-
- - Analysis patterns: [references/analysis-patterns.md](references/analysis-patterns.md)
- - Confidence reporting: [references/confidence-reporting.md](references/confidence-reporting.md)
- - Analysis gotchas: [references/analysis-gotchas.md](references/analysis-gotchas.md)
- - Shared environment guidance: [/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-app-user/references/environments.md](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-app-user/references/environments.md)
+ ### Top-level arguments
+
+ - `app_key`: required.
+ - `dimensions`: `[]` = whole-table summary; `[{...}]` = grouped.
+ - `strict_full`: `true` for final conclusions. `false` allows partial results.
+ - `limit`: limits returned rows only, not scan scope.
+ - `view_key`/`view_name`: optional scope narrowing.
+ - `bucket` in dimensions: only for `suggested_time_fields`. Values: `day`/`week`/`month`/`quarter`/`year`/`null`.
+
+ ---
+
+ ## RULES
+
+ - Normalize relative time phrases into explicit, legal date ranges BEFORE building the DSL. Never send impossible dates (e.g. `2026-02-29`).
+ - Penetration, conversion, and share-of-total (`渗透率 / 转化率 / 占比`) conclusions must define the numerator and the denominator first.
+ - Do not claim a metric you did not query.
+ - Derived ratios must be computed outside the DSL: run the numerator and the denominator as separate DSLs, then compute the ratio in your reasoning (a sketch follows this diff section).
+ - Before choosing a DSL shape, first decide whether the question needs `count`, `sum`, `avg`, `distinct_count`, `ratio`, or `ranking`.
+ - If a field is still ambiguous after `record_schema_get`, do not guess; ask the user to confirm from a short candidate list.
+ - Rankings must come from structured sorted results.
+ - For partial answers, explicitly disclose which parts are complete and which parts remain unresolved.
+ - Complex answers should default to `先结构、后解读`: structure first, then interpretation.
+ - `between`: pass a two-item array.
+ - Sort entries must reference an alias already defined in `dimensions` or `metrics`.
+ - Final wording should stay as close as possible to schema titles.
+ - All `field_id` values MUST come from `record_schema_get`; never pass field titles, aliases, or guessed ids where `field_id` is required.
+ - If `completeness.statement_scope=returned_groups_only` or `completeness.rows_truncated=true`, downgrade wording to returned groups only.
+ - One DSL per question. Multiple small DSLs > one overloaded request.
+ - `record_list` is NEVER the basis for final statistics.
+ - Set `alias` for any metric you will sort by, compare, or quote.
+
+ ---
+
+ ## OUTPUT (CRITICAL — final answer must show concrete numbers)
+
+ ### List the data row by row (hard requirement)
+
+ final_answer MUST include a table with every row from `result.rows`:
+
+ | {dimension alias} | {metric alias} | share |
+ |-------------------|----------------|-------|
+ | {row.dimensions.X} | {row.metrics.Y} | {Y / total * 100}% |
+
+ - share = row metric value / the `result.totals.metric_totals` total, rounded to one decimal place
+ - if `metric_totals` is absent, use the sum of the returned rows as the denominator
+ - if there are more than 20 rows, show the Top 20 and say so
+ - never write only "N types in total" while omitting the detail rows
+
+ ### Conclusion grading
+
+ - `safe_for_final_conclusion=true` → `全量可信结论` (fully trusted conclusion)
+ - incomplete → `初步观察` (preliminary observation)
+ - `rows_truncated=true` → say `前 N 个分组` (top N groups), never `全部`/`所有` (all)
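To make the derived-ratio rule above concrete, here is a minimal sketch of the two-DSL pattern. It assumes a hypothetical `record_analyze(dsl)` client-side helper wrapping the MCP tool of the same name, a placeholder `field_id`, and that `result.totals.metric_totals` is keyed by metric alias; only the DSL keys and the `result.totals.metric_totals` path are documented by this package.

```python
# Hedged sketch of "numerator and denominator as separate DSLs".
# `record_analyze` is a hypothetical client wrapper, and field_id 9500572 is
# a placeholder that must come from record_schema_get in practice.
def high_value_share(record_analyze, app_key: str) -> float | None:
    numerator = record_analyze({
        "app_key": app_key,
        "dimensions": [],
        "metrics": [{"op": "count", "alias": "高价值单量"}],
        "filters": [{"field_id": 9500572, "op": "eq", "value": "高价值"}],
        "strict_full": True,
    })
    denominator = record_analyze({
        "app_key": app_key,
        "dimensions": [],
        "metrics": [{"op": "count", "alias": "总单量"}],
        "strict_full": True,
    })
    # Assumes metric_totals is keyed by metric alias, per the output rules above.
    num = numerator["result"]["totals"]["metric_totals"]["高价值单量"]
    den = denominator["result"]["totals"]["metric_totals"]["总单量"]
    # The ratio itself is computed outside the DSL, as the rules require.
    return round(num / den * 100, 1) if den else None
```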
@@ -63,6 +63,8 @@ Use exactly one of these default paths:
 
 ## Supporting Tools
 
+ - `app_list`
+ - `app_search`
 - `directory_search`
 - `directory_list_internal_users`
 - `directory_list_internal_departments`
@@ -75,12 +77,13 @@ Use exactly one of these default paths:
 1. Ensure auth exists
 2. Ensure workspace is selected
 3. Confirm target app and whether the task is browse / detail / write / analysis
- 4. Run `record_schema_get` before any non-trivial record read or write
- 5. If the request is analysis-like, switch to [$qingflow-record-analysis](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-record-analysis/SKILL.md)
- 6. If the request is write-like, decide `insert / update / delete` before building any payload
- 7. If fields are still ambiguous after `record_schema_get`, ask the user to confirm from a short candidate list instead of guessing
- 8. For high-risk writes or production changes, read the current state first whenever practical
- 9. After actions, report the affected `record_id`, counts, or returned item count
+ 4. If `app_key` is unknown, use `app_list` or `app_search` first
+ 5. Run `record_schema_get` before any non-trivial record read or write
+ 6. If the request is analysis-like, switch to [$qingflow-record-analysis](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-record-analysis/SKILL.md)
+ 7. If the request is write-like, decide `insert / update / delete` before building any payload
+ 8. If fields are still ambiguous after `record_schema_get`, ask the user to confirm from a short candidate list instead of guessing
+ 9. For high-risk writes or production changes, read the current state first whenever practical
+ 10. After actions, report the affected `record_id`, counts, or returned item count
 
 ## Record Read Rules
 
@@ -2,4 +2,4 @@ from __future__ import annotations
 
 __all__ = ["__version__"]
 
- __version__ = "0.2.0b23"
+ __version__ = "0.2.0b25"
@@ -28,6 +28,7 @@ def build_server() -> FastMCP:
         instructions=(
             "Use auth_login first, then workspace_list and workspace_select. "
             "All resource tools operate with the logged-in user's Qingflow permissions.\n\n"
+             "If app_key is unknown, use app_list or app_search first to discover current-user visible apps in the selected workspace. "
             "For analytics, use record_schema_get first, let the model build field_id-based DSL, "
             "then call record_analyze. record_analyze returns compact business-first output as query/result/ranking/ratios/completeness/presentation; use verbose only for route/debug details. "
             "record_schema_get returns the current user's applicant-node visible schema only; hidden fields are omitted and missing fields should be treated as not visible in the current permission scope. "
@@ -6,6 +6,7 @@ from .backend_client import BackendClient
 from .config import DEFAULT_PROFILE
 from .session_store import SessionStore
 from .tools.approval_tools import ApprovalTools
+ from .tools.app_tools import AppTools
 from .tools.auth_tools import AuthTools
 from .tools.directory_tools import DirectoryTools
 from .tools.file_tools import FileTools
@@ -19,6 +20,7 @@ def build_user_server() -> FastMCP:
         "Qingflow App User MCP",
         instructions=(
             "Use this server for Qingflow operational workflows with a schema-first path. "
+             "If app_key is unknown, use app_list or app_search first to discover current-user visible apps in the selected workspace. "
             "For records, start with record_schema_get, then choose record_list, record_get, or record_write. "
             "record_schema_get returns the current user's applicant-node visible schema only; hidden fields are omitted and missing fields should be treated as not visible in the current permission scope. "
             "For analytics, switch to record_schema_get and record_analyze; its default output is compact query/result/ranking/ratios/completeness/presentation, with route/debug only in verbose mode. "
@@ -29,6 +31,7 @@ def build_user_server() -> FastMCP:
     sessions = SessionStore()
     backend = BackendClient()
     auth = AuthTools(sessions, backend)
+     apps = AppTools(sessions, backend)
     workspace = WorkspaceTools(sessions, backend)
     files = FileTools(sessions, backend)
     approvals = ApprovalTools(sessions, backend)
@@ -96,6 +99,14 @@ def build_user_server() -> FastMCP:
     def workspace_select(profile: str = DEFAULT_PROFILE, ws_id: int = 0) -> dict:
         return workspace.workspace_select(profile=profile, ws_id=ws_id)
 
+     @server.tool()
+     def app_list(profile: str = DEFAULT_PROFILE) -> dict:
+         return apps.app_list(profile=profile)
+
+     @server.tool()
+     def app_search(profile: str = DEFAULT_PROFILE, keyword: str = "", page_num: int = 1, page_size: int = 50) -> dict:
+         return apps.app_search(profile=profile, keyword=keyword, page_num=page_num, page_size=page_size)
+
     @server.tool()
     def file_get_upload_info(
         profile: str = DEFAULT_PROFILE,
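The two registrations above enable a discovery-first flow when `app_key` is unknown. A hedged sketch of how a client might drive it; `call_tool(name, args) -> dict` is an assumed generic MCP client helper and not part of this package, while the tool names, parameters, and the `items`/`app_key`/`title` response fields come from this diff.

```python
# Hypothetical client-side helper; `call_tool` is an assumption, not part of
# this package. Tool names and parameters match the registrations above.
def discover_app_key(call_tool, keyword: str) -> str | None:
    result = call_tool("app_search", {"keyword": keyword, "page_num": 1, "page_size": 50})
    items = result.get("items") or []
    if not items:
        # Fall back to the full visible-app listing when search finds nothing.
        items = call_tool("app_list", {}).get("items") or []
    for app in items:
        if keyword in str(app.get("title", "")):
            return app.get("app_key")
    return None
```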
@@ -76,14 +76,20 @@ class AppTools(ToolBase):
         return self.app_publish(profile=profile, app_key=app_key, payload=payload or {})
 
     def app_list(self, *, profile: str, ship_auth: bool = False) -> JSONObject:
-         """Get all apps with full hierarchy from tag/apps endpoint."""
+         """List current-user visible apps in the selected workspace."""
         def runner(session_profile, context):
             result = self.backend.request("GET", context, "/tag/apps")
-             return {
+             items, source_shape = self._extract_visible_apps(result)
+             response = {
                 "profile": profile,
                 "ws_id": session_profile.selected_ws_id,
-                 "items": result,
+                 "items": items,
+                 "count": len(items),
+                 "source_shape": source_shape,
             }
+             if ship_auth:
+                 response["raw"] = result
+             return response
 
         return self._run(profile, runner)
 
@@ -98,19 +104,20 @@ class AppTools(ToolBase):
 
             result = self.backend.request("GET", context, "/app/item", params=params)
 
-             # Extract app list from the response
             apps = []
             if isinstance(result, dict):
                 items = result.get("list", [])
                 for item in items:
                     if isinstance(item, dict):
-                         apps.append({
-                             "app_key": item.get("appKey"),
-                             "title": item.get("title") or item.get("formTitle"),
-                             "form_id": item.get("formId"),
-                             "tag_id": item.get("tagId"),
-                             "group_id": item.get("groupId"),
-                         })
+                         normalized = self._normalize_visible_app(
+                             item,
+                             package_tag_id=_coerce_positive_int(item.get("tagId")),
+                             package_name=str(item.get("tagName") or "").strip() or None,
+                             group_id=_coerce_positive_int(item.get("groupId")),
+                             group_name=str(item.get("groupName") or "").strip() or None,
+                         )
+                         if normalized is not None:
+                             apps.append(normalized)
 
             return {
                 "profile": profile,
@@ -119,6 +126,7 @@ class AppTools(ToolBase):
                 "page_num": page_num,
                 "page_size": page_size,
                 "total": result.get("total") if isinstance(result, dict) else len(apps),
+                 "items": apps,
                 "apps": apps,
             }
 
@@ -424,6 +432,88 @@ class AppTools(ToolBase):
         }
         return {key: value for key, value in compact.items() if value is not None}
 
+     def _extract_visible_apps(self, result: Any) -> tuple[list[JSONObject], str]:
+         apps: list[JSONObject] = []
+         seen: set[str] = set()
+
+         def walk(
+             node: Any,
+             *,
+             package_tag_id: int | None = None,
+             package_name: str | None = None,
+             group_id: int | None = None,
+             group_name: str | None = None,
+         ) -> None:
+             if isinstance(node, list):
+                 for item in node:
+                     walk(
+                         item,
+                         package_tag_id=package_tag_id,
+                         package_name=package_name,
+                         group_id=group_id,
+                         group_name=group_name,
+                     )
+                 return
+             if not isinstance(node, dict):
+                 return
+
+             next_package_tag_id = _coerce_positive_int(node.get("tagId")) or package_tag_id
+             next_package_name = str(node.get("tagName") or "").strip() or package_name
+             next_group_id = _coerce_positive_int(node.get("groupId")) or group_id
+             next_group_name = str(node.get("groupName") or node.get("groupTitle") or "").strip() or group_name
+
+             normalized = self._normalize_visible_app(
+                 node,
+                 package_tag_id=next_package_tag_id,
+                 package_name=next_package_name,
+                 group_id=next_group_id,
+                 group_name=next_group_name,
+             )
+             if normalized is not None:
+                 app_key = str(normalized.get("app_key") or "").strip()
+                 if app_key and app_key not in seen:
+                     seen.add(app_key)
+                     apps.append(normalized)
+
+             for value in node.values():
+                 if isinstance(value, (list, dict)):
+                     walk(
+                         value,
+                         package_tag_id=next_package_tag_id,
+                         package_name=next_package_name,
+                         group_id=next_group_id,
+                         group_name=next_group_name,
+                     )
+
+         walk(result)
+         return apps, type(result).__name__
+
+     def _normalize_visible_app(
+         self,
+         item: dict[str, Any],
+         *,
+         package_tag_id: int | None,
+         package_name: str | None,
+         group_id: int | None,
+         group_name: str | None,
+     ) -> JSONObject | None:
+         app_key = str(item.get("appKey") or item.get("app_key") or "").strip()
+         if not app_key:
+             return None
+         title = str(item.get("title") or item.get("formTitle") or item.get("appName") or item.get("name") or app_key).strip() or app_key
+         tag_ids = item.get("tagIds") if isinstance(item.get("tagIds"), list) else []
+         compact = {
+             "app_key": app_key,
+             "title": title,
+             "form_id": item.get("formId"),
+             "tag_id": package_tag_id,
+             "package_name": package_name,
+             "group_id": group_id,
+             "group_name": group_name,
+             "tag_ids": [value for value in (_coerce_positive_int(tag_id) for tag_id in tag_ids) if value is not None],
+         }
+         return {key: value for key, value in compact.items() if value not in (None, [], "", {})}
+
     def _count_auth_members(self, auth_payload: Any, member_key: str) -> int:
         if not isinstance(auth_payload, dict):
             return 0
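For orientation, a simplified standalone restatement of the `_extract_visible_apps` traversal added above: a depth-first walk that inherits the nearest `tagName` and collects every dict carrying an `appKey`. Deduplication and group metadata are omitted here, and the sample payload shape is hypothetical; only the key names come from the diff.

```python
from typing import Any

def extract_visible_apps(node: Any, package_name: str | None = None) -> list[dict]:
    # Walk lists and dicts; inherit the nearest tagName as package_name and
    # collect every dict that carries an appKey. The real method also tracks
    # group ids/names and deduplicates by app_key.
    apps: list[dict] = []
    if isinstance(node, list):
        for item in node:
            apps.extend(extract_visible_apps(item, package_name))
        return apps
    if not isinstance(node, dict):
        return apps
    name = str(node.get("tagName") or "").strip() or package_name
    app_key = str(node.get("appKey") or "").strip()
    if app_key:
        apps.append({"app_key": app_key, "title": node.get("title") or app_key, "package_name": name})
    for value in node.values():
        if isinstance(value, (list, dict)):
            apps.extend(extract_visible_apps(value, name))
    return apps

# Hypothetical /tag/apps payload shape:
payload = {"tagName": "CRM", "apps": [{"appKey": "APP_1", "title": "订单"}]}
print(extract_visible_apps(payload))
# [{'app_key': 'APP_1', 'title': '订单', 'package_name': 'CRM'}]
```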
@@ -470,3 +560,11 @@ def _normalize_form_type(value: int | str) -> int:
     if text in FORM_TYPE_ALIASES:
         return FORM_TYPE_ALIASES[text]
     raise_tool_error(QingflowApiError.config_error("form_type must be a positive integer or one of: default, form, schema, new, draft, edit"))
+
+
+ def _coerce_positive_int(value: Any) -> int | None:
+     try:
+         number = int(value)
+     except (TypeError, ValueError):
+         return None
+     return number if number > 0 else None
@@ -38,6 +38,7 @@ ATTACHMENT_QUE_TYPES = {13}
 RELATION_QUE_TYPES = {25}
 SUBTABLE_QUE_TYPES = {18}
 VERIFY_UNSUPPORTED_WRITE_QUE_TYPES = {14, 34, 35, 36}
+ LAYOUT_ONLY_QUE_TYPES = {24}
 DEPARTMENT_MEMBER_JUDGE_PREFIX = "deptId_"
 JUDGE_EQUAL = 0
 JUDGE_UNEQUAL = 1
@@ -1131,10 +1132,12 @@ class RecordTools(ToolBase):
             self._ensure_allowed_analyze_keys(
                 item,
                 location=f"metrics[{idx}]",
-                 allowed_keys={"op", "field_id", "fieldId", "alias"},
+                 allowed_keys={"op", "type", "agg", "aggregation", "field_id", "fieldId", "alias"},
                 example="{'op': 'sum', 'field_id': 7, 'alias': '总金额'}",
             )
-             op = _normalize_optional_text(item.get("op"))
+             op = _normalize_optional_text(
+                 item.get("op") or item.get("type") or item.get("agg") or item.get("aggregation")
+             )
             if op not in supported_ops:
                 raise RecordInputError(
                     message=f"metrics[{idx}] uses unsupported op '{op}'",
@@ -1156,7 +1159,7 @@ class RecordTools(ToolBase):
                 raise RecordInputError(
                     message=f"metrics[{idx}] with op 'count' must not include field_id",
                     error_code="INVALID_ANALYZE_METRIC",
-                     fix_hint="Remove field_id from count metrics.",
+                     fix_hint="For count, omit field_id and use only {'op': 'count', 'alias': '记录数'}.",
                     details={"location": f"metrics[{idx}]", "op": op},
                 )
             alias = _normalize_optional_text(item.get("alias"))
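The loosened validation above now accepts the legacy metric key spellings the skill docs warn against and maps them onto `op`. A standalone restatement of that fallback chain, for illustration only; the real `_normalize_optional_text` helper may normalize differently.

```python
# Illustrative only: mirrors the new fallback chain where "op" wins, then
# "type", "agg", and "aggregation" are accepted as aliases.
SUPPORTED_OPS = {"count", "sum", "avg", "min", "max", "distinct_count"}

def normalize_metric_op(item: dict) -> str:
    raw = item.get("op") or item.get("type") or item.get("agg") or item.get("aggregation")
    op = str(raw).strip() if raw is not None else ""
    if op not in SUPPORTED_OPS:
        raise ValueError(f"metric uses unsupported op {op!r}")
    return op

assert normalize_metric_op({"type": "count"}) == "count"  # rejected before this change
assert normalize_metric_op({"op": "sum", "field_id": 7}) == "sum"
```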
@@ -3995,16 +3998,19 @@ def _build_field_index(schema: JSONObject) -> FieldIndex:
         *[(question, False) for question in _flatten_questions(schema.get("formQues"))],
     ]
     for question, is_base_question in all_questions:
+         if not _should_index_question(question):
+             continue
         que_id = _coerce_count(question.get("queId"))
         title = _stringify_json(question.get("queTitle")).strip()
         if que_id is None or que_id < 0 or not title:
             continue
+         can_edit = question.get("canEdit")
         field = FormField(
             que_id=que_id,
             que_title=title,
             que_type=_coerce_count(question.get("queType")),
             required=bool(question.get("required") or question.get("beingRequired")),
-             readonly=bool(question.get("readonly") or question.get("beingReadonly") or is_base_question),
+             readonly=bool(question.get("readonly") or question.get("beingReadonly") or is_base_question or can_edit is False),
             system=bool(question.get("system") or question.get("beingSystem") or is_base_question),
             options=_extract_question_options(question),
             aliases=[],
@@ -4023,16 +4029,35 @@ def _build_field_index(schema: JSONObject) -> FieldIndex:
 def _flatten_questions(payload: JSONValue) -> list[JSONObject]:
     flattened: list[JSONObject] = []
     if isinstance(payload, dict):
-         if "queId" in payload or "queTitle" in payload:
+         is_question = "queId" in payload or "queTitle" in payload
+         if is_question:
             flattened.append(payload)
-         for value in payload.values():
-             flattened.extend(_flatten_questions(value))
+         for key in ("subQuestions", "innerQuestions", "subQues"):
+             value = payload.get(key)
+             if isinstance(value, list):
+                 flattened.extend(_flatten_questions(value))
+         if not is_question:
+             for key in ("baseQues", "formQues"):
+                 value = payload.get(key)
+                 if isinstance(value, list):
+                     flattened.extend(_flatten_questions(value))
     elif isinstance(payload, list):
         for item in payload:
             flattened.extend(_flatten_questions(item))
     return flattened
 
 
+ def _should_index_question(question: JSONObject) -> bool:
+     if bool(question.get("beingHide") or question.get("hidden")):
+         return False
+     if _coerce_count(question.get("quoteId")) is not None:
+         return False
+     que_type = _coerce_count(question.get("queType"))
+     if que_type in LAYOUT_ONLY_QUE_TYPES:
+         return False
+     return True
+
+
 def _extract_question_options(question: JSONObject) -> list[str]:
     options = question.get("options")
     if not isinstance(options, list):
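A simplified restatement of the new indexing filter added above; it skips the package's `_coerce_count` coercion and tests `quoteId` presence directly, so it is illustrative only. The key names and the que_type 24 layout sentinel come from this diff; the sample question dicts are hypothetical.

```python
LAYOUT_ONLY_QUE_TYPES = {24}

def should_index_question(question: dict) -> bool:
    # Drop hidden, quoted, and layout-only questions before indexing fields.
    if question.get("beingHide") or question.get("hidden"):
        return False
    if question.get("quoteId") is not None:
        return False
    if question.get("queType") in LAYOUT_ONLY_QUE_TYPES:
        return False
    return True

assert not should_index_question({"queId": 1, "queType": 24})        # layout divider
assert not should_index_question({"queId": 2, "beingHide": True})    # hidden field
assert not should_index_question({"queId": 3, "quoteId": 99})        # quoted field
assert should_index_question({"queId": 4, "queTitle": "状态", "queType": 2})
```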