@josephyan/qingflow-app-user-mcp 0.2.0-beta.20 → 0.2.0-beta.22

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
  Files changed (30)
  1. package/README.md +4 -2
  2. package/package.json +1 -1
  3. package/pyproject.toml +1 -1
  4. package/skills/qingflow-app-user/SKILL.md +51 -148
  5. package/skills/qingflow-app-user/agents/openai.yaml +2 -2
  6. package/skills/qingflow-app-user/references/data-gotchas.md +21 -30
  7. package/skills/qingflow-app-user/references/environments.md +1 -1
  8. package/skills/qingflow-app-user/references/record-patterns.md +78 -66
  9. package/skills/qingflow-app-user/references/workflow-usage.md +10 -8
  10. package/skills/qingflow-record-analysis/SKILL.md +15 -8
  11. package/skills/qingflow-record-analysis/agents/openai.yaml +1 -1
  12. package/skills/qingflow-record-analysis/references/analysis-gotchas.md +3 -3
  13. package/skills/qingflow-record-analysis/references/analysis-patterns.md +17 -7
  14. package/skills/qingflow-record-analysis/references/confidence-reporting.md +5 -5
  15. package/skills/qingflow-record-crud/SKILL.md +173 -0
  16. package/skills/qingflow-record-crud/agents/openai.yaml +5 -0
  17. package/skills/qingflow-record-crud/references/data-gotchas.md +43 -0
  18. package/skills/qingflow-record-crud/references/environments.md +58 -0
  19. package/skills/qingflow-record-crud/references/record-patterns.md +109 -0
  20. package/skills/qingflow-task-ops/SKILL.md +148 -0
  21. package/skills/qingflow-task-ops/agents/openai.yaml +4 -0
  22. package/skills/qingflow-task-ops/references/environments.md +44 -0
  23. package/skills/qingflow-task-ops/references/workflow-usage.md +27 -0
  24. package/src/qingflow_mcp/__init__.py +1 -1
  25. package/src/qingflow_mcp/server.py +7 -6
  26. package/src/qingflow_mcp/server_app_user.py +8 -183
  27. package/src/qingflow_mcp/tools/approval_tools.py +357 -75
  28. package/src/qingflow_mcp/tools/directory_tools.py +158 -28
  29. package/src/qingflow_mcp/tools/record_tools.py +964 -306
  30. package/src/qingflow_mcp/tools/task_tools.py +376 -225
package/README.md CHANGED
@@ -3,13 +3,13 @@
  Install:

  ```bash
- npm install @josephyan/qingflow-app-user-mcp@0.2.0-beta.20
+ npm install @josephyan/qingflow-app-user-mcp@0.2.0-beta.22
  ```

  Run:

  ```bash
- npx -y -p @josephyan/qingflow-app-user-mcp@0.2.0-beta.20 qingflow-app-user-mcp
+ npx -y -p @josephyan/qingflow-app-user-mcp@0.2.0-beta.22 qingflow-app-user-mcp
  ```

  Environment:
@@ -23,6 +23,8 @@ This package bootstraps a local Python runtime on first install and then starts
  Bundled skills:

  - `skills/qingflow-app-user`
+ - `skills/qingflow-record-crud`
+ - `skills/qingflow-task-ops`
  - `skills/qingflow-record-analysis`

  Note:
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@josephyan/qingflow-app-user-mcp",
- "version": "0.2.0-beta.20",
+ "version": "0.2.0-beta.22",
  "description": "Operational end-user MCP for Qingflow records, tasks, comments, and directory workflows.",
  "license": "MIT",
  "type": "module",
package/pyproject.toml CHANGED
@@ -4,7 +4,7 @@ build-backend = "setuptools.build_meta"

  [project]
  name = "qingflow-mcp"
- version = "0.2.0b20"
+ version = "0.2.0b22"
  description = "User-authenticated MCP server for Qingflow"
  readme = "README.md"
  license = "MIT"
package/skills/qingflow-app-user/SKILL.md CHANGED
@@ -1,160 +1,63 @@
  ---
  name: qingflow-app-user
- description: Use Qingflow apps as an operational end user after the MCP is already connected and authenticated. Use when the user wants to create, search, read, update, or delete business records, inspect or manage task-center work, add comments, or perform workflow usage actions inside an existing app. Do not use this skill to design apps, modify schemas, or build a brand new SolutionSpec.
+ description: Route Qingflow end-user requests to the right specialized operational skill after the MCP is already connected and authenticated. Use when the task is operational but it is not yet clear whether it is record CRUD, task-center workflow work, or final analysis.
  metadata:
- short-description: Use Qingflow apps for business data and task operations
+ short-description: Router for Qingflow operational skills
  ---

  # Qingflow App User

  ## Overview

- This skill is for business-user operations inside existing Qingflow apps. It focuses on records, task-center usage, comments, and usage-side workflow actions, not app design or system configuration. If the task is about building or changing app structure, switch to `$qingflow-app-builder`.
-
- If the user is asking for analysis, grouped distributions, ranking, trend, averages, business insights, or any final statistical conclusion, switch to `$qingflow-record-analysis` instead of keeping that logic inside this skill.
-
- Before operating on data, identify whether the task targets `test` or `prod` and read [references/environments.md](references/environments.md). If the user did not specify one, default to `prod`.
- When the task is in `prod`, browser parity matters, or the user says "the page has data but MCP does not", restate the expected `base_url` and `qf_version`, then prefer tools that expose `request_route` so you can confirm the live route before concluding.
-
- ## Tool Scope
-
- Primary record and data tools:
-
- - `record_query`
- - `record_schema_get`
- - `record_write_plan`
- - `record_create`
- - `record_get`
- - `record_update`
- - `record_delete`
-
- Directory and organization lookup tools when the user is asking about internal members, departments, org structure, ownership, approver candidates, or wants full contact exports:
-
- - `directory_search`
- - `directory_list_internal_users`
- - `directory_list_all_internal_users`
- - `directory_list_internal_departments`
- - `directory_list_all_departments`
- - `directory_list_sub_departments`
- - `directory_list_external_members`
-
- Usage-side collaboration and flow tools when needed:
-
- - `record_comment_*`
- - `task_approve`
- - `task_reject`
- - `task_rollback*`
- - `task_transfer*`
-
- Task-center and inbox tools when the user is asking about pending work, processed work, cc, or workflow workload:
-
- - `task_list`
- - `task_list_grouped`
- - `task_statistics`
- - `task_urge`
-
- Do not use builder-side tools here:
-
- - `app_*`
- - `view_*`
- - `workflow_*`
- - `portal_*`
- - `navigation_*`
- - `package_*`
- - `solution_*`
-
- ## Standard Operating Order
-
- 1. Ensure auth exists
- 2. Ensure workspace is selected
- 3. Confirm target app, task scope, and operation type
- 4. For org, member, department, approver, or ownership questions, start with `directory_*`
- 5. For inbox, pending, processed, cc, or workload questions, start with `task_statistics`, `task_list`, or `task_list_grouped`
- 6. When a task query identifies the target record, switch to `record_get` or `record_query` for business data details
- 7. For non-trivial record reads, start with `record_query`
- 8. For non-trivial writes, start with `record_write_plan`, especially when using `fields`
- 9. Prefer read-first when changing existing records
- 10. Report the affected task ids, record ids, member ids, department ids, or counts after actions
- 11. For `prod`, complex forms, attachments, or any unfamiliar schema, prefer `record_create(..., verify_write=true)` or read back immediately after create/update
-
- ## Data Rules
-
- - Prefer `record_query` as the default read entry
- - Treat `record_query(list)` as the default wide-table browse and export endpoint; pass explicit `select_columns`, do not expect raw answer arrays there, and let the tool auto-batch columns when the backend per-request field cap is hit
- - For analysis, grouped distributions, trends, or final statistical conclusions, switch to `$qingflow-record-analysis`
- - Use `request_route` from tool responses to verify the active `base_url` and `qf_version` whenever route mismatches are plausible
- - Use `directory_search` for fuzzy internal lookup across both members and departments
- - Use `directory_list_all_internal_users` when the user explicitly wants a complete internal member list within the current workspace or within a specific department or role
- - Use `directory_list_all_departments` when the user explicitly wants the full department tree or all departments under a root
- - Use `directory_list_internal_departments` for keyword-based department search, not full exports
- - Use `task_statistics` before `task_list` when the user only needs counts
- - Use `task_list_grouped` when worksheet or group buckets matter
- - Use `task_urge` only when the user clearly wants a reminder sent for a pending task
- - Use `record_schema_get` when field selectors are ambiguous; if the task then turns into analysis, switch to `$qingflow-record-analysis`
- - For precise record lookup, use `record_get` when `apply_id` is known
- - Use `record_schema_get` when the user gives field titles and you are not fully sure about the exact schema; do not guess ambiguous fields silently
- - If the task has already shifted into analysis and `record_schema_get` still leaves multiple plausible fields, stop and ask the user to confirm the intended field instead of continuing to try read tools in a loop
- - Treat field selectors as schema-first and platform-generic. Prefer exact field titles, then neutral aliases such as `创建时间`, `新增时间`, `负责人`, `部门`, `时间`, or `阶段` only when the tool resolves them clearly. Do not assume CRM shorthand like `销售`, `商机阶段`, `客户全称`, or similar domain shortcuts apply across arbitrary Qingflow apps
- - For updates, inspect current data first unless the user already provided the exact target and patch
- - For deletes, confirm the exact record scope and report the deleted ids
- - When validating business data volume, use `effective_count` over raw backend totals
- - In `prod`, prefer read-first even more strictly and avoid deletes unless the record scope is explicit in the conversation
- - For attachments, first run `file_upload_local`, then pass the returned `attachment_value` into `record_create` or `record_update`; do not try to write local file paths directly into attachment fields
- - For relation fields, first query the target app and resolve the referenced record `apply_id`; do not assume titles, numbers, or business keys can be written directly into a relation field
- - For subtable fields, write a list of row objects keyed by the subfield titles. When updating existing rows, include `rowId` / `row_id` / `__row_id__` only if the source record already exposes it
- - Treat `14/34/35/36` as unsupported direct-write field types in app-user flows:
-   - `14`: time range
-   - `34`: image recognition
-   - `35`: image generation
-   - `36`: document parsing
- - For those unsupported types, stop and explain the limitation instead of inventing payloads
- - Use `record_write_plan` to inspect `write_format.support_level` before non-trivial writes:
-   - `full`: generic scalar/select/date writes are directly supported
-   - `restricted`: member/department/attachment/relation/subtable writes need the documented presteps
-   - `unsupported`: stop and explain the limitation
- - For relation-heavy, attachment, subtable, or production writes, default to `verify_write=true` so field drops are surfaced immediately instead of being reported as success
-
- ## Mock and Demo Data
-
- When the user asks for demo data, seed, smoke data, or mock data:
-
- - default to at least `5` records for the relevant entity unless the user asks for fewer
- - keep titles realistic and business-like
- - vary statuses, dates, and categories enough to make views and charts useful
- - if the task is `prod`, do not create mock or smoke data unless the user explicitly asks for it
-
- ## Response Interpretation
-
- - `record_query(query_mode="list")` is browse/sample output, not a final analysis result
- - If `record_query(query_mode="list")` reports `row_cap_hit`, `sample_only`, or capped rows, do not present it as full data
- - For grouped distributions, trends, or final statistical conclusions, switch to `$qingflow-record-analysis` and use `record_schema_get -> record_analyze`
- - `record_write_plan` is static preflight, not a guarantee that submit will pass runtime linkage or visibility checks
- - `record_create` now returns integer `apply_id`; you can pass that id directly into `record_get`, `record_update`, or `record_delete`
- - `verify_write=true` means the tool read the record back and compared the written fields; if it returns `status=verification_failed` or `ok=false`, do not report the create or update as successful
- - Relation writes are `apply_id`-based; if the user only gives a title, number, or business key, query the target app first and resolve the real record id before writing
- - Task counts and record counts are not interchangeable; a task query reflects task-center workload, not the underlying record total
- - When reporting task results, include the task dimension that was used, such as pending, processed, cc, node, or worksheet
- - Prefer summarizing titles and counts instead of dumping raw answer arrays
- - When records reference other entities, verify references are coherent before reporting success
- - `file_upload_local` may transparently change `effective_upload_kind` and `upload_protocol`; surface those fields when debugging production upload behavior instead of assuming all uploads are direct `PUT`
-
- ## Practical Patterns
-
- - Bulk mock data creation: query current data first, run `record_write_plan`, then create missing records
- - Data correction: query, inspect, preflight, update, and re-read
- - Inbox triage: use `task_statistics` first, then `task_list` or `task_list_grouped`, then switch to `record_*` for the underlying record when needed
- - Bottleneck analysis: start with `task_statistics` and `task_list_grouped` before drilling into specific records
- - Workflow collaboration: comment, transfer, or reassign only after identifying the exact record
- - Approval actions: identify the exact record and current node first, then use `task_approve` or `task_reject`; do not guess `nodeId`
- - Demo validation: create at least `5` rows and confirm they are queryable
- - Org export: use `directory_list_all_internal_users` for full member exports and `directory_list_all_departments` for full org-tree exports before mapping owners or departments into record operations
- - Attachment write: upload first, write the returned URL object second, and prefer `verify_write=true`
- - Relation write: query the target app first, capture the referenced record `apply_id`, then write the relation field and verify the readback
- - Production discrepancy triage: compare the response `request_route` with the browser environment before assuming the data query is wrong
+ This skill is a **router skill** for operational usage inside existing Qingflow apps.
+
+ Use it when the request is operational, but you first need to decide which specialized skill should own it.
+
+ This skill does **not** try to teach every detailed workflow itself.
+ It routes to:
+
+ - [$qingflow-record-crud](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-record-crud/SKILL.md) for record browsing, detail lookup, create, update, and delete
+ - [$qingflow-task-ops](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-task-ops/SKILL.md) for task-center usage, comments, approvals, rollback, transfer, urge, and directory lookup
+ - [$qingflow-record-analysis](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-record-analysis/SKILL.md) for grouped analysis, ratios, rankings, trends, and final statistical conclusions
+
+ Before operating on data, identify whether the task targets `test` or `prod` and read [references/environments.md](references/environments.md).
+ If the user did not specify one, default to `prod`.
+
+ ## Default Paths
+
+ Route to exactly one of these specialized paths:
+
+ 1. Record CRUD
+    Switch to [$qingflow-record-crud](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-record-crud/SKILL.md)
+
+ 2. Task center / comments / workflow usage / directory
+    Switch to [$qingflow-task-ops](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-task-ops/SKILL.md)
+
+ 3. Analysis
+    Switch to [$qingflow-record-analysis](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-record-analysis/SKILL.md)
+
+ 4. MCP connection / auth / workspace selection
+    Switch to [$qingflow-mcp-setup](/Users/yanqidong/.codex/skills/qingflow-mcp-setup/SKILL.md)
+
+ ## Routing Rules
+
+ - If the task is about browsing, reading, creating, updating, deleting, attachments, relations, or subtable writes, switch to `$qingflow-record-crud`
+ - If the task is about inbox, todo, cc, task-center workload, comments, approval, reject, rollback, transfer, urge, or directory lookup, switch to `$qingflow-task-ops`
+ - If the task is about grouped distributions, ratios, rankings, trends, insights, or any final statistical conclusion, switch to `$qingflow-record-analysis`
+ - If the MCP is not connected, authenticated, or bound to the right workspace, switch to `$qingflow-mcp-setup`
+
+ ## Shared Preconditions
+
+ - confirm environment first
+ - ensure auth exists
+ - ensure workspace is selected
+ - prefer canonical app ids, record ids, task ids, and workflow node ids over guessed names
+ - if a field or target is still ambiguous after schema/task lookup, ask the user to confirm from a short candidate list instead of guessing
+ - if the task can stay read-only, do not write or act
+
  ## Resources

  - Environment switching: [references/environments.md](references/environments.md)
- - Record operation patterns: [references/record-patterns.md](references/record-patterns.md)
- - Workflow usage actions: [references/workflow-usage.md](references/workflow-usage.md)
- - Data gotchas: [references/data-gotchas.md](references/data-gotchas.md)
- - Dedicated analysis workflow: [qingflow-record-analysis](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-record-analysis/SKILL.md)
+ - Record CRUD: [$qingflow-record-crud](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-record-crud/SKILL.md)
+ - Task center and workflow usage: [$qingflow-task-ops](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-task-ops/SKILL.md)
+ - Dedicated analysis workflow: [$qingflow-record-analysis](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-record-analysis/SKILL.md)
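The new Routing Rules above are keyword-style heuristics. As a minimal, illustrative sketch of that dispatch priority (the `route` helper, its keyword lists, and the `ask-user` fallback are inventions for illustration, not part of the package):

```python
def route(task_text: str) -> str:
    """Toy dispatcher mirroring the SKILL.md routing rules; keyword lists are illustrative only."""
    t = task_text.lower()
    # Analysis owns grouped distributions, ratios, rankings, trends, and final conclusions.
    if any(k in t for k in ("distribution", "ratio", "ranking", "trend", "insight")):
        return "$qingflow-record-analysis"
    # Task-center, approval, comment, and directory language goes to task ops.
    if any(k in t for k in ("inbox", "todo", "cc", "approve", "reject",
                            "rollback", "transfer", "urge", "comment", "directory")):
        return "$qingflow-task-ops"
    # Record browsing and write verbs go to record CRUD.
    if any(k in t for k in ("browse", "create", "update", "delete",
                            "attachment", "relation", "subtable")):
        return "$qingflow-record-crud"
    # Ambiguous requests: confirm with the user instead of guessing.
    return "ask-user"
```

A real router would also check connection and workspace state and divert to `$qingflow-mcp-setup` first; naive substring matching like this only sketches the priority order, not a production classifier.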
package/skills/qingflow-app-user/agents/openai.yaml CHANGED
@@ -1,4 +1,4 @@
  interface:
  display_name: "Qingflow App User"
- short_description: "Use Qingflow apps for business data and task operations"
- default_prompt: "Use $qingflow-app-user for ordinary Qingflow record, task, comment, and directory operations. If the task shifts into grouped analysis, insight generation, ranking, trend, or final statistical conclusions, switch to $qingflow-record-analysis instead of keeping the logic here."
+ short_description: "Route Qingflow operational tasks to CRUD, task ops, or analysis"
+ default_prompt: "Use $qingflow-app-user as a router: switch to $qingflow-record-crud for record browse/read/write, switch to $qingflow-task-ops for task-center, comments, directory, and workflow usage actions, and switch to $qingflow-record-analysis for grouped analysis or final statistical conclusions."
package/skills/qingflow-app-user/references/data-gotchas.md CHANGED
@@ -1,40 +1,31 @@
  # Data Gotchas

- For final statistics, grouped distributions, or insight-style analysis, use [$qingflow-record-analysis](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-record-analysis/SKILL.md) instead of keeping that reasoning inside `$qingflow-app-user`.
-
- ## Counts
-
- - Prefer `effective_count`
- - For final analysis, inspect `record_analyze.data.completeness` and `safe_for_final_conclusion` before concluding
- - If `record_analyze.status!=success`, treat the result as exploratory unless the user explicitly asked for a partial sample
- - `record_query(list)` is for browsing and sample inspection. If it reports `row_cap_hit`, `sample_only`, or capped `returned_items`, do not present it as full data
- - When coverage matters, surface:
-   - `scanned_count`
-   - `presentation.statement_scope`
- - Use narrower views, filters, or smaller analysis questions instead of inventing manual scan settings by hand
- - If the browser and MCP disagree, compare `request_route.base_url` and `request_route.qf_version` first
- - Do not mix a full aggregate total with sample-only list detail in one sentence like “基于全部数据分析”; split the answer into `全量结论` and `样本观察`
+ For final statistics, grouped distributions, rankings, trends, or insight-style conclusions, use [$qingflow-record-analysis](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-record-analysis/SKILL.md) instead of keeping that reasoning inside `$qingflow-app-user`.

- ## Record titles
+ ## Record Reads

- - Do not dump raw answer arrays to the user unless needed
- - Prefer concise business titles and counts
+ - `record_list` is for browsing, export, and sample inspection only
+ - `record_get` is for one exact record
+ - Do not present paged browse output as if it were a grouped or full-population conclusion
+ - If the browser and MCP disagree, compare `request_route.base_url` and `request_route.qf_version` first

- ## Preflight
+ ## Write Preflight

- - `record_write_plan` is static preflight only; linked visibility and runtime required rules can still reject writes
- - `record_write_plan` now exposes `write_format.support_level`; check `full / restricted / unsupported` before attempting non-trivial writes
+ - `record_write` always performs internal static preflight before any apply
+ - If `record_write` returns `ok=false`, the write was blocked and not executed
  - Use `record_schema_get` when field titles are uncertain instead of guessing ids
- - For analysis tasks, use the fixed path `record_schema_get -> record_analyze`; do not switch tools blindly after `FIELD_NOT_FOUND` or ambiguity
- - Prefer `strict_full=true` for final statistics or business conclusions
- - `record_create` and `record_update` can do post-write verification with `verify_write=true`; use that for complex, subtable, or production writes
- - `apply_id` is normalized to an integer; pass it directly into later record tools
+ - Prefer `verify_write=true` for complex, relation-heavy, subtable, or production writes
+ - Even when `record_write` returns `ok=true`, it may still surface verification failures; do not report success before checking them

- ## Mock data
+ ## Write Semantics

- - Default to at least `5` rows per relevant entity unless the user asked for fewer
- - Avoid identical titles and identical statuses across all rows
- - Keep relation references valid
+ - `insert` uses `values`
+ - `update` uses `set`
+ - `delete` uses `record_id` or `record_ids`
+ - Do not send raw SQL strings
+ - Do not fake formula or expression fields
+ - Do not perform free-form bulk updates or deletes
+ - Do not guess relation targets from display text; resolve the real `record_id` first

  ## Attachments

@@ -46,10 +37,10 @@ For final statistics, grouped distributions, or insight-style analysis, use [$qi

  - Subtable fields accept row objects keyed by subfield title, or native `tableValues`
  - Use the current form schema's subfield titles; do not guess nested ids
- - When updating existing subtable rows, preserve `rowId` if the source record returns it
+ - When updating existing subtable rows, preserve row ids if the source record returns them
  - Nested subtable writes are still unsupported

- ## Unsupported direct-write fields
+ ## Unsupported Direct-Write Fields

  - `14` time range
  - `34` image recognition
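The attachment rule in the gotchas above is two-step: upload first, then reuse the returned payload in the write. A hedged sketch of the second step as a `record_write` insert; the `field_id` values are hypothetical and the placeholder string stands in for the payload object that `file_upload_local` actually returns, whose exact shape is not shown in this diff:

```json
{
  "operation": "insert",
  "values": [
    { "field_id": 12, "value": "合同归档" },
    { "field_id": 25, "value": "<attachment payload returned by file_upload_local>" }
  ],
  "verify_write": true
}
```

Keeping `verify_write=true` here means a silently dropped attachment field surfaces as a verification failure instead of a false success.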
package/skills/qingflow-app-user/references/environments.md CHANGED
@@ -50,7 +50,7 @@ Production behavior:
  Production guardrails:

  - never assume a record id, app id, or workspace id
- - treat `record_delete` as high risk
+ - treat `record_write(operation="delete")` as high risk
  - if the task can be answered read-only, do not write

  ## Reporting Rule
@@ -1,81 +1,96 @@
1
1
  # Record Patterns
2
2
 
3
- If the task shifts into grouped analysis, ratio, ranking, trend, or final statistical conclusions, switch to [$qingflow-record-analysis](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-record-analysis/SKILL.md).
3
+ If the task shifts into grouped analysis, ratio, ranking, trend, or any final statistical conclusion, switch to [$qingflow-record-analysis](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-record-analysis/SKILL.md).
4
4
 
5
- ## Query first
5
+ ## Browse Pattern
6
6
 
7
- Use `record_query` first when:
7
+ Use `record_schema_get -> record_list` when:
8
8
 
9
- - the user only gives a title or business key
10
- - the target record id is unknown
11
- - updates or deletes need confirmation
12
- - ordinary list browsing or spot checks are needed
9
+ - the user wants to browse records
10
+ - the target `record_id` is unknown
11
+ - a delete or update target still needs confirmation
12
+ - the user needs sample rows or a small export
13
13
 
14
- Use [$qingflow-record-analysis](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-record-analysis/SKILL.md) when:
14
+ Keep the browse DSL simple:
15
15
 
16
- - field titles may be ambiguous
17
- - filters are still in natural-language shape
18
- - the result may be used as a final conclusion
19
- - scan scope or completeness is unclear
20
- - the user asks for a distribution, ratio, ranking, top-N, or any grouped aggregate
21
- - the user asks for `分析 / 洞察 / 分布 / 占比 / 平均 / 排名 / 趋势 / 所有 / 全部 / 全国 / 高价值`
16
+ - `columns`: field ids only
17
+ - `where`: flat AND filters only
18
+ - `order_by`: field sorting only
19
+ - `limit` and `page`: browsing intent only
22
20
 
23
- ## Final analysis pattern
21
+ Do not use `record_list` for grouped conclusions, ratios, rankings, trends, or any final statistical claim.
24
22
 
25
- 1. Run `record_schema_get`
26
- 2. Generate one or more field_id-based DSLs
27
- 3. Run `record_analyze(strict_full=true)` for summary/distribution/trend/cross analysis
28
- 4. Run `record_query(query_mode="list")` only if you still need sample rows or examples
29
- 5. Report `scanned_count`, `presentation.statement_scope`, and whether the result is safe for a final conclusion
30
- 6. If `status=error` or `safe_for_final_conclusion=false`, stop at “partial result” instead of presenting a final business conclusion
31
- 7. If list rows are sample-only, separate the answer into:
32
- - `全量可信结论`
33
- - `样本观察(不作为最终结论)`
34
- - optional `待验证假设`
23
+ ## Detail Pattern
35
24
 
36
- ## Analysis anti-pattern
25
+ Use `record_schema_get -> record_get` when:
37
26
 
38
- Do not do this:
27
+ - the exact `record_id` is known
28
+ - the user needs one record in detail
29
+ - a write target needs verification before action
30
+
31
+ Prefer passing explicit `columns` when the user only needs a subset of fields.
32
+
33
+ ## Write Pattern
34
+
35
+ Use `record_schema_get -> record_write`.
39
36
 
40
- 1. Run only `record_query(query_mode="list")`
41
- 2. Get `200` rows back
42
- 3. Report平均值、占比、地域分布 as if they were based on all records
37
+ 1. Confirm the target app
38
+ 2. Resolve fields with `record_schema_get`
39
+ 3. Decide whether the task is `insert`, `update`, or `delete`
40
+ 4. Build SQL-like JSON clauses
41
+ 5. Run `record_write`
42
+ 6. If `ok=false`, explain `field_errors` first, then summarize blockers; stop because the write was not executed
43
+ 7. If `ok=true`, report the affected resource and any verification outcome
44
+ 8. For important writes, keep `verify_write=true`
43
45
 
44
- This is not acceptable because the list endpoint can be capped. Use `record_schema_get -> record_analyze` first, then treat list rows as sample-only evidence.
46
+ ### Insert
45
47
 
46
- ## Create pattern
48
+ ```json
49
+ {
50
+ "operation": "insert",
51
+ "values": [
52
+ { "field_id": 12, "value": "测试客户" },
53
+ { "field_id": 18, "value": 1000 }
54
+ ],
55
+ "submit_type": "submit",
56
+ "verify_write": true
57
+ }
58
+ ```
47
59
 
48
- 1. Confirm target app
49
- 2. Resolve fields with `record_schema_get` if needed. Prefer exact schema titles first; only rely on platform-neutral aliases such as `创建时间`, `负责人`, or `部门` when they resolve cleanly, and do not assume business-domain shorthand like `销售` is portable across apps
50
- 3. Run `record_write_plan` for non-trivial payloads or any `fields`-based write
51
- 4. For relation fields, query the target app first and resolve the referenced record `apply_id`
52
- 5. For attachments, call `file_upload_local` first and reuse the returned `attachment_value`
53
- 6. For subtable fields, pass a list of row objects keyed by subfield title. When updating existing rows, include `rowId` / `row_id` / `__row_id__` only if the current record already exposes it
54
- 7. Inspect `record_write_plan.data.support_matrix` or each field's `write_format.support_level` before submit:
55
- - `full`: direct write is supported
56
- - `restricted`: follow the documented presteps first
57
- - `unsupported`: stop and explain the limitation
58
- 8. For complex forms, production writes, attachments, relation-heavy payloads, or subtables, create with `verify_write=true`
59
- 9. If verification fails, treat the write as not yet successful and inspect the missing or empty fields before reporting back
60
- 10. Re-query or fetch the record when validation matters
60
+ ### Update
61
61
 
62
- ## Update pattern
62
+ ```json
63
+ {
64
+ "operation": "update",
65
+ "record_id": 123,
66
+ "set": [
67
+ { "field_id": 18, "value": 2000 }
68
+ ],
69
+ "verify_write": true
70
+ }
71
+ ```
63
72
 
64
- 1. Query the target records
65
- 2. Resolve exact `apply_id`
66
- 3. Run `record_write_plan`
67
- 4. Update only the intended fields
68
- 5. Prefer `verify_write=true` for attachment, relation, subtable, or production updates
69
- 6. Re-read the record if the change is important, attachment-related, subtable-related, or the form has linkage
73
+ ### Delete
70
74
 
71
- ## Delete pattern
75
+ ```json
76
+ {
77
+ "operation": "delete",
78
+ "record_ids": [123, 124]
79
+ }
80
+ ```
72
81
 
73
- 1. Query or fetch the exact record first
74
- 2. Confirm the target ids
75
- 3. Delete
76
- 4. Report affected ids and remaining count when relevant
82
+ ## Write Anti-Patterns
 
- ## Unsupported direct writes
+ Do not do this:
+
+ - do not send raw SQL text
+ - do not build free-form `WHERE` updates or deletes
+ - do not invent formulas or expressions
+ - do not auto-fill missing required fields
+ - do not guess relation targets without first resolving them
+ - do not claim a blocked `record_write` was executed
+
+ ## Unsupported Direct Writes
 
  Do not attempt direct app-user writes for these field types:
 
@@ -84,13 +99,10 @@ Do not attempt direct app-user writes for these field types:
 - `35` image generation
 - `36` document parsing
 
- If the payload includes them, stop at `record_write_plan` and explain that the tool does not build a reliable native payload for those fields yet.
-
- ## Relation fields
+ If the payload includes them, stop after the blocked `record_write` response and explain that the tool does not support a reliable direct write for those fields yet.
 
- Relation fields are record-id based.
+ ## Relation, Attachment, and Subtable Rules
 
- - Query the referenced app first
- - Resolve the target record `apply_id`
- - Write the relation field with that id
- - Do not write relation fields with display titles, business keys, or guessed identifiers unless they have already been resolved to the real record id
+ - Relation fields are record-id based. Resolve the referenced target first, then write the relation field with the real `record_id`.
+ - Attachment fields are two-step: upload first with `file_upload_local`, then reuse the returned attachment payload in `record_write`.
+ - Subtable writes require the current schema shape; when updating existing subtable rows, preserve row ids if the current record exposes them.
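+
+ The two-step attachment rule above can be sketched as an update payload that reuses an upload result. This is a hedged sketch: the field id and the payload placeholder are illustrative, not the exact `file_upload_local` return shape.
+
+ ```json
+ {
+   "operation": "update",
+   "record_id": 123,
+   "set": [
+     { "field_id": 25, "value": "<attachment payload returned by file_upload_local>" }
+   ],
+   "verify_write": true
+ }
+ ```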
@@ -7,18 +7,20 @@ Examples:
 - add a comment to a record
 - approve or reject a workflow task
 - transfer a task
- - roll back a record
- - list pending, processed, or cc tasks
+ - roll back a task
+ - list todo, initiated, done, or cc tasks
+ - inspect workload by worksheet or workflow node
 - urge a pending task
 
 Rules:
 
 - if the user starts from inbox, todo, workload, cc, or bottleneck language, use `task_*` first
- - use `task_statistics` for counts and `task_list` or `task_list_grouped` for browsing
- - use `task_list_grouped` when grouped workload browsing matters
+ - use `task_summary` for headline counts
+ - use `task_list` for flat browsing
+ - use `task_facets` when worksheet or workflow-node buckets matter
 - treat task counts as task-center counts, not record counts
- - switch to `record_*` after locating the exact business record behind a task
- - identify the exact record first
- - for approve or reject, identify the exact `nodeId` first; prefer task-center results or audit info, then use `task_approve` or `task_reject`
- - avoid usage-side flow actions on ambiguous records
+ - switch to `record_*` only after locating the exact business record behind a task
+ - identify the exact target first
+ - for approve or reject, identify the exact `workflow_node_id` first; prefer task-center results or current audit info, then use `task_approve` or `task_reject`
+ - avoid usage-side workflow actions on ambiguous records
 - summarize the final action and target task ids or record ids
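+
+ A minimal sketch of an approve call once the target is resolved. Parameter names other than `workflow_node_id` are assumptions, not the exact `task_approve` schema:
+
+ ```json
+ {
+   "record_id": 123,
+   "workflow_node_id": 456,
+   "comment": "approved after review"
+ }
+ ```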
@@ -11,9 +11,9 @@ metadata:
 
 This skill is for record analysis inside existing Qingflow apps. Use it when the task is about `分析 / 洞察 / 分布 / 占比 / 平均 / 排名 / 趋势 / 所有 / 全部 / 全国 / 高价值` or any final statistical conclusion.
 
- This skill assumes the MCP is already connected and authenticated. If not, switch to `$qingflow-mcp-setup` first. If the task is about creating, updating, deleting, or approving records rather than analyzing them, switch back to `$qingflow-app-user`.
+ This skill assumes the MCP is already connected and authenticated. If not, switch to `$qingflow-mcp-setup` first. If the task is about creating, updating, or deleting records rather than analyzing them, switch to `$qingflow-record-crud`. If it is about task-center actions, comments, approvals, rollback, transfer, or directory-driven workflow work, switch to `$qingflow-task-ops`.
 
- Before running analysis in `prod`, confirm the intended environment and compare `request_route` with the browser route if browser parity matters.
+ Before running analysis in `prod`, confirm the intended environment. If browser parity or live route debugging matters, call `record_analyze` with `output_profile="verbose"` and compare `debug.request_route` with the browser route.
 
 ## Tool Scope
 
@@ -22,7 +22,7 @@ Use these tools as the core analysis surface:
 - `record_schema_get`
 - `record_analyze`
 
- Use `record_query(query_mode="list")` or `record_get` only when you need sample rows or a specific supporting example after the main analysis path.
+ Use `record_list` or `record_get` only when you need sample rows or a specific supporting example after the main analysis path.
 
 ## Hard Rules
 
@@ -34,8 +34,8 @@ Use `record_query(query_mode="list")` or `record_get` only when you need sample
 - Never send impossible dates such as `2026-02-29`; if the intended month is February 2026, the legal upper bound is `2026-02-28`
 - If the schema still leaves multiple plausible fields, stop and ask the user to confirm from a short candidate list instead of guessing
 - Do not keep retrying different guessed field names in a loop
- - `record_query(list)` is never the basis for a final statistical conclusion
- - If `record_query(list)` reports `row_cap_hit`, `sample_only`, capped `returned_items`, or compact output, treat it as sample-only evidence
+ - `record_list` is never the basis for a final statistical conclusion
+ - If `record_list` is capped or paged, treat it as sample-only evidence
 - Do not mix full totals from `record_analyze` with sample-only list observations as one combined `全量结论`
 - Do not manually tune paging or scan-budget parameters for analysis; `record_analyze` hides them
 - For final conclusions, prefer `strict_full=true`
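+
+ A hedged sketch of a field_id-based DSL with `strict_full=true`. The `dimensions` / `metrics` shape, aggregation name, and ids here are assumptions for illustration, not the exact `record_analyze` schema:
+
+ ```json
+ {
+   "dimensions": [{ "field_id": 7 }],
+   "metrics": [{ "field_id": 18, "agg": "sum" }],
+   "strict_full": true
+ }
+ ```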
@@ -57,7 +57,7 @@ For analysis:
 3. Inspect fields, aliases, suggested dimensions, suggested metrics, and suggested time fields
 4. Generate one or more field_id-based DSLs
 5. Run `record_analyze` once per DSL
- 6. Run `record_query(query_mode="list")` only if you still need sample rows, examples, or manual inspection
+ 6. Run `record_list` only if you still need sample rows, examples, or manual inspection
 7. Before answering, separate:
 - `全量可信结论`
 - `样本观察`
@@ -78,7 +78,7 @@ For analysis:
 - If a statement depends on a ratio that the DSL cannot express directly, run the numerator and denominator separately, then compute the ratio outside MCP only after both sides are complete and compatible
 - Rankings must come from structured sorted results, not from loose natural-language restatement
 - When grouped rows are truncated, describe them as `已返回分组中` or `主要分组`
- - If `presentation.rows_truncated=true` or `presentation.statement_scope=returned_groups_only`, do not use words like `各部门`、`所有分组`、`完整名单`、`全部渠道`
+ - If `completeness.rows_truncated=true` or `completeness.statement_scope=returned_groups_only`, do not use words like `各部门`、`所有分组`、`完整名单`、`全部渠道`
 - If grouped rows are truncated, explicitly downgrade the wording to `前 N 个分组` or `主要分组`, never `全部`
 - Complex answers should default to `先结构、后解读`: present the table / metrics / ordering first, then add concise interpretation
 - Final wording should stay as close as possible to schema titles, dimension aliases, and metric aliases; do not rename the business object or field title unless the user asked for a rewrite
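+
+ The numerator/denominator rule above can be sketched as two complete aggregate results whose ratio is computed outside MCP. The wrapper shape and totals are illustrative; only the `completeness.status` gate comes from this guide:
+
+ ```json
+ {
+   "numerator": { "metric_total": 120, "completeness": { "status": "complete" } },
+   "denominator": { "metric_total": 480, "completeness": { "status": "complete" } },
+   "ratio_computed_outside_mcp": 0.25
+ }
+ ```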
@@ -236,10 +236,17 @@ Two-dimensional cross analysis:
 
 ## Output Gate
 
+ - Read aggregate rows from `result.rows`
+ - Read overall totals from `result.totals.metric_totals`
+ - Read sort intent from `query.sort`
+ - Read ranked output from `ranking` when it is not `null`
+ - Read ratio output from `ratios` when it is not `null`; `ratios=null` is normal when MCP did not produce a native ratio block
+ - Read warning codes from `completeness.warnings`
+
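+ A hedged skeleton of where these fields sit in a `record_analyze` response. The inner row and sort shapes are illustrative; only the paths named above are taken from this guide:
+
+ ```json
+ {
+   "query": { "sort": [{ "field_id": 18, "order": "desc" }] },
+   "result": {
+     "rows": [{ "dimension": "华东", "metric": 120 }],
+     "totals": { "metric_totals": { "metric": 480 } }
+   },
+   "ranking": null,
+   "ratios": null,
+   "completeness": { "status": "complete", "warnings": [], "rows_truncated": false }
+ }
+ ```
+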
 - Only write `全量可信结论` when the supporting `record_analyze` calls report `completeness.status=complete` and `safe_for_final_conclusion=true`
 - If any key analysis call is incomplete, downgrade the answer to `初步观察` or `部分结果`
 - Treat `safe_for_final_conclusion=true` as necessary but not sufficient when the metric definition is incomplete or grouped rows are truncated
- - If `presentation.statement_scope=returned_groups_only`, you may still give full-population conclusions about totals or ratios, but not a full grouped enumeration claim
+ - If `completeness.statement_scope=returned_groups_only`, you may still give full-population conclusions about totals or ratios, but not a full grouped enumeration claim
 - If aggregate-style output is full but list evidence is sample-only, split the answer into:
 - `全量可信结论`
 - `样本观察(不作为最终结论)`