@josephyan/qingflow-app-user-mcp 0.2.0-beta.54 → 0.2.0-beta.55

Files changed (48)
  1. package/README.md +6 -3
  2. package/docs/local-agent-install.md +12 -5
  3. package/package.json +1 -1
  4. package/pyproject.toml +2 -1
  5. package/skills/qingflow-app-user/SKILL.md +27 -11
  6. package/skills/qingflow-app-user/agents/openai.yaml +2 -2
  7. package/skills/qingflow-app-user/references/data-gotchas.md +12 -31
  8. package/skills/qingflow-app-user/references/record-patterns.md +27 -88
  9. package/skills/qingflow-record-analysis/SKILL.md +4 -4
  10. package/skills/qingflow-record-analysis/agents/openai.yaml +1 -1
  11. package/skills/qingflow-record-analysis/references/analysis-gotchas.md +2 -2
  12. package/skills/qingflow-record-analysis/references/analysis-patterns.md +7 -7
  13. package/skills/qingflow-record-analysis/references/confidence-reporting.md +2 -2
  14. package/skills/qingflow-record-analysis/references/dsl-templates.md +1 -1
  15. package/skills/qingflow-record-delete/SKILL.md +29 -0
  16. package/skills/qingflow-record-import/SKILL.md +31 -0
  17. package/skills/qingflow-record-insert/SKILL.md +58 -0
  18. package/skills/qingflow-record-update/SKILL.md +42 -0
  19. package/src/qingflow_mcp/builder_facade/service.py +435 -37
  20. package/src/qingflow_mcp/cli/__init__.py +1 -0
  21. package/src/qingflow_mcp/cli/commands/__init__.py +15 -0
  22. package/src/qingflow_mcp/cli/commands/app.py +40 -0
  23. package/src/qingflow_mcp/cli/commands/auth.py +78 -0
  24. package/src/qingflow_mcp/cli/commands/builder.py +184 -0
  25. package/src/qingflow_mcp/cli/commands/common.py +47 -0
  26. package/src/qingflow_mcp/cli/commands/imports.py +86 -0
  27. package/src/qingflow_mcp/cli/commands/record.py +202 -0
  28. package/src/qingflow_mcp/cli/commands/task.py +87 -0
  29. package/src/qingflow_mcp/cli/commands/workspace.py +33 -0
  30. package/src/qingflow_mcp/cli/context.py +48 -0
  31. package/src/qingflow_mcp/cli/formatters.py +269 -0
  32. package/src/qingflow_mcp/cli/json_io.py +50 -0
  33. package/src/qingflow_mcp/cli/main.py +147 -0
  34. package/src/qingflow_mcp/server.py +29 -17
  35. package/src/qingflow_mcp/server_app_user.py +30 -17
  36. package/src/qingflow_mcp/session_store.py +41 -1
  37. package/src/qingflow_mcp/tools/app_tools.py +34 -5
  38. package/src/qingflow_mcp/tools/auth_tools.py +139 -1
  39. package/src/qingflow_mcp/tools/code_block_tools.py +81 -1
  40. package/src/qingflow_mcp/tools/import_tools.py +74 -0
  41. package/src/qingflow_mcp/tools/package_tools.py +17 -4
  42. package/src/qingflow_mcp/tools/record_tools.py +2150 -240
  43. package/src/qingflow_mcp/tools/workspace_tools.py +39 -12
  44. package/skills/qingflow-record-crud/SKILL.md +0 -231
  45. package/skills/qingflow-record-crud/agents/openai.yaml +0 -4
  46. package/skills/qingflow-record-crud/references/data-gotchas.md +0 -42
  47. package/skills/qingflow-record-crud/references/environments.md +0 -57
  48. package/skills/qingflow-record-crud/references/record-patterns.md +0 -147
package/README.md CHANGED
@@ -3,13 +3,13 @@
 Install:
 
 ```bash
-npm install @josephyan/qingflow-app-user-mcp@0.2.0-beta.54
+npm install @josephyan/qingflow-app-user-mcp@0.2.0-beta.55
 ```
 
 Run:
 
 ```bash
-npx -y -p @josephyan/qingflow-app-user-mcp@0.2.0-beta.54 qingflow-app-user-mcp
+npx -y -p @josephyan/qingflow-app-user-mcp@0.2.0-beta.55 qingflow-app-user-mcp
 ```
 
 Environment:
@@ -23,7 +23,10 @@ This package bootstraps a local Python runtime on first install and then starts
 Bundled skills:
 
 - `skills/qingflow-app-user`
-- `skills/qingflow-record-crud`
+- `skills/qingflow-record-insert`
+- `skills/qingflow-record-update`
+- `skills/qingflow-record-delete`
+- `skills/qingflow-record-import`
 - `skills/qingflow-task-ops`
 - `skills/qingflow-record-analysis`
 
package/docs/local-agent-install.md CHANGED
@@ -1,9 +1,10 @@
 # Local Agent Installation
 
-This directory now keeps only two local command entry points:
+This directory now has three local command entry points:
 
-1. The record/todo-first `qingflow-app-user-mcp`
-2. The trimmed-down builder `qingflow-app-builder-mcp`
+1. The unified CLI: `qingflow`
+2. The record/todo-first `qingflow-app-user-mcp`
+3. The trimmed-down builder `qingflow-app-builder-mcp`
 
 ## When the npm installer applies
 
@@ -62,6 +63,7 @@ npm run pack:npm
 This generates:
 
 ```bash
+dist/npm/josephyan-qingflow-cli-<version>.tgz
 dist/npm/josephyan-qingflow-app-user-mcp-<version>.tgz
 dist/npm/josephyan-qingflow-app-builder-mcp-<version>.tgz
 ```
@@ -69,6 +71,7 @@ dist/npm/josephyan-qingflow-app-builder-mcp-<version>.tgz
 Then install on the target machine:
 
 ```bash
+npm install /absolute/path/to/dist/npm/josephyan-qingflow-cli-<version>.tgz
 npm install /absolute/path/to/dist/npm/josephyan-qingflow-app-user-mcp-<version>.tgz
 npm install /absolute/path/to/dist/npm/josephyan-qingflow-app-builder-mcp-<version>.tgz
 ```
@@ -78,7 +81,7 @@ npm install /absolute/path/to/dist/npm/josephyan-qingflow-app-builder-mcp-<versi
 1. Create `.npm-python/`
 2. Create a Python virtual environment inside it
 3. Run `pip install .`
-4. Expose the `qingflow-app-user-mcp` and `qingflow-app-builder-mcp` commands at the install location
+4. Expose the `qingflow`, `qingflow-app-user-mcp`, and `qingflow-app-builder-mcp` commands at the install location
 
 ## Local verification
 
@@ -86,6 +89,7 @@ npm install /absolute/path/to/dist/npm/josephyan-qingflow-app-builder-mcp-<versi
 
 ```bash
 cd qingflow-support/mcp-server
+node ./npm/bin/qingflow.mjs --help
 node ./npm/bin/qingflow-app-user-mcp.mjs
 node ./npm/bin/qingflow-app-builder-mcp.mjs
 ```
@@ -93,6 +97,7 @@ node ./npm/bin/qingflow-app-builder-mcp.mjs
 If you installed globally:
 
 ```bash
+qingflow --help
 qingflow-app-user-mcp
 qingflow-app-builder-mcp
 ```
@@ -100,6 +105,7 @@ qingflow-app-builder-mcp
 If you installed the package into a local agent workspace, the commands are usually at:
 
 ```bash
+/absolute/path/to/agent-workspace/node_modules/.bin/qingflow
 /absolute/path/to/agent-workspace/node_modules/.bin/qingflow-app-user-mcp
 /absolute/path/to/agent-workspace/node_modules/.bin/qingflow-app-builder-mcp
 ```
@@ -107,6 +113,7 @@ qingflow-app-builder-mcp
 If you installed from a tgz into an empty directory, the commands are usually at:
 
 ```bash
+/absolute/path/to/install-dir/node_modules/.bin/qingflow
 /absolute/path/to/install-dir/node_modules/.bin/qingflow-app-user-mcp
 /absolute/path/to/install-dir/node_modules/.bin/qingflow-app-builder-mcp
 ```
@@ -202,7 +209,7 @@ qingflow-app-builder-mcp
 ```
 
 Notes:
-- An `npm install` in the source directory does not add the commands to the global PATH; in that mode use `node ./npm/bin/qingflow-app-user-mcp.mjs` or `node ./npm/bin/qingflow-app-builder-mcp.mjs`
+- An `npm install` in the source directory does not add the commands to the global PATH; in that mode use `node ./npm/bin/qingflow.mjs`, `node ./npm/bin/qingflow-app-user-mcp.mjs`, or `node ./npm/bin/qingflow-app-builder-mcp.mjs`
 - The `npx` approach suits temporary installs or containerized local agents
 - Global installation suits a long-term, fixed local development environment
 
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "@josephyan/qingflow-app-user-mcp",
-  "version": "0.2.0-beta.54",
+  "version": "0.2.0-beta.55",
   "description": "Operational end-user MCP for Qingflow records, tasks, comments, and directory workflows.",
   "license": "MIT",
   "type": "module",
package/pyproject.toml CHANGED
@@ -4,7 +4,7 @@ build-backend = "setuptools.build_meta"
 
 [project]
 name = "qingflow-mcp"
-version = "0.2.0b54"
+version = "0.2.0b55"
 description = "User-authenticated MCP server for Qingflow"
 readme = "README.md"
 license = "MIT"
@@ -46,6 +46,7 @@ build = [
 [project.scripts]
 qingflow-app-user-mcp = "qingflow_mcp.server_app_user:main"
 qingflow-app-builder-mcp = "qingflow_mcp.server_app_builder:main"
+qingflow = "qingflow_mcp.cli.main:main"
 
 [project.urls]
 Homepage = "https://github.com/qingflow/qingflow-mcp"
package/skills/qingflow-app-user/SKILL.md CHANGED
@@ -16,28 +16,40 @@ Assumes MCP is connected, authenticated, and on the correct workspace.
 
 Route to exactly one of these specialized paths:
 
-1. Record CRUD and import
-   Switch to [$qingflow-record-crud](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-record-crud/SKILL.md)
+1. Record insert
+   Switch to [$qingflow-record-insert](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-record-insert/SKILL.md)
 
-2. Task workflow operations
+2. Record update
+   Switch to [$qingflow-record-update](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-record-update/SKILL.md)
+
+3. Record delete
+   Switch to [$qingflow-record-delete](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-record-delete/SKILL.md)
+
+4. Record import
+   Switch to [$qingflow-record-import](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-record-import/SKILL.md)
+
+5. Task workflow operations
    Switch to [$qingflow-task-ops](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-task-ops/SKILL.md)
 
-3. Analysis
+6. Analysis
   Switch to [$qingflow-record-analysis](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-record-analysis/SKILL.md)
 
-4. MCP connection / auth / workspace selection
+7. MCP connection / auth / workspace selection
   Switch to [$qingflow-mcp-setup](/Users/yanqidong/.codex/skills/qingflow-mcp-setup/SKILL.md)
 
 ## Routing Rules
 
 - If the user does not know the target `app_key`, discover apps first with `app_list` or `app_search`, then route to the specialized skill
 - If the app is known but the available data range is unclear, call `app_get` first and inspect `accessible_views`
-- If the task is about browsing, reading, creating, updating, deleting, attachments, relations, subtable writes, member/department-field candidate lookup, code-block field execution, import templates, import capability discovery, import-file verification, authorized local file repair, import execution, or import status, switch to `$qingflow-record-crud`
+- If the task is about creating or new record entry, switch to `$qingflow-record-insert`
+- If the task is about editing an existing record directly, switch to `$qingflow-record-update`
+- If the task is about deleting records directly, switch to `$qingflow-record-delete`
+- If the task is about import templates, import capability discovery, import-file verification, authorized local file repair, import execution, or import status, switch to `$qingflow-record-import`
 - If the task is about todo discovery, task context, approval actions, rollback or transfer, associated report review, or workflow log review, switch to `$qingflow-task-ops`
-- If the task is about creating new records or importing data, prefer `$qingflow-record-crud` under applicant-node create semantics
-- If the task is about updating an existing record directly, route to `$qingflow-record-crud`, which uses `record_update` and defaults to `system:all` before requiring an explicit accessible `view_id`
-- If the task involves member, department, or relation fields and the user only has natural names/titles, still route to `$qingflow-record-crud`; direct write now supports backend-native auto resolution and may return `needs_confirmation` with candidates instead of failing blind
-- If the task is about subtable writes, still route to `$qingflow-record-crud`, but shape the payload through the parent subtable field `rows/tableValues`; do not route users toward top-level leaf selectors
+- If the task involves member, department, or relation fields and the user only has natural names/titles, keep the same route; direct write now supports backend-native auto resolution and may return `needs_confirmation` with candidates instead of failing blind
+- If the task involves linked visibility, upstream/downstream field dependencies, reference-driven auto fill, or formula-driven defaulting, keep the same insert/update route and read field-level `linkage` from the schema before composing payloads
+- If the task is about subtable writes, still route to the matching insert/update skill, but shape the payload as parent subtable field -> row array; do not route users toward top-level leaf selectors
+- If the task is insert-focused and readback consistency matters, keep the same route and prefer `record_get / record_list` with `output_profile="normalized"` after the write
 - If the user sounds like an ordinary workflow assignee rather than a system operator, prefer `$qingflow-task-ops` over direct record mutation whenever both paths could fit
 - If the task is about grouped distributions, ratios, rankings, trends, insights, or any final statistical conclusion, switch to `$qingflow-record-analysis`
 - If the MCP is not connected, authenticated, or bound to the right workspace, switch to `$qingflow-mcp-setup`
@@ -46,6 +58,7 @@ Route to exactly one of these specialized paths:
 
 - prefer canonical app ids, record ids, task ids, and workflow node ids over guessed names
 - if a field or target is still ambiguous after schema/task lookup, ask the user to confirm from a short candidate list instead of guessing
+- if schema fields include `linkage.sources` or `linkage.affects_fields`, treat those as the preferred high-level explanation of field dependencies instead of trying to infer hidden front-end logic
 - if the task can stay read-only, do not write or act
 - if the task involves a user-uploaded import file, do not modify the file unless the user explicitly authorizes repair or normalization
 - if the task involves record import, call `app_get` first and inspect `data.import_capability` before template download, file repair, or import start
@@ -59,5 +72,8 @@ Route to exactly one of these specialized paths:
 
 ## Resources
 
-- Record CRUD: [$qingflow-record-crud](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-record-crud/SKILL.md)
+- Record insert: [$qingflow-record-insert](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-record-insert/SKILL.md)
+- Record update: [$qingflow-record-update](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-record-update/SKILL.md)
+- Record delete: [$qingflow-record-delete](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-record-delete/SKILL.md)
+- Record import: [$qingflow-record-import](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-record-import/SKILL.md)
 - Dedicated analysis workflow: [$qingflow-record-analysis](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-record-analysis/SKILL.md)
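The routing rules in the hunk above can be sketched as a small intent-to-skill lookup. This is an illustrative sketch only: the intent labels and the `route` helper are invented here, while the `$qingflow-*` skill names come from the diff itself.

```python
# Hypothetical sketch of the router described above: map a coarse task
# intent to exactly one specialized skill, falling back to the setup skill
# when the MCP connection/auth/workspace state is the real problem.
ROUTES = {
    "insert": "$qingflow-record-insert",
    "update": "$qingflow-record-update",
    "delete": "$qingflow-record-delete",
    "import": "$qingflow-record-import",
    "task": "$qingflow-task-ops",
    "analysis": "$qingflow-record-analysis",
}

def route(intent: str) -> str:
    """Return the single target skill for a classified intent."""
    # Unknown or unclassifiable intent usually means connection/auth trouble.
    return ROUTES.get(intent, "$qingflow-mcp-setup")
```

In practice the hard part is the classification itself (the bulleted rules above), not the dispatch; the dict only makes the "route to exactly one path" contract explicit.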
package/skills/qingflow-app-user/agents/openai.yaml CHANGED
@@ -1,4 +1,4 @@
 interface:
   display_name: "Qingflow App User"
-  short_description: "Route Qingflow operational tasks to CRUD, task ops, or analysis"
-  default_prompt: "Use $qingflow-app-user as a router: switch to $qingflow-record-crud for record browse/read/write, switch to $qingflow-task-ops for task-center, comments, directory, and workflow usage actions, and switch to $qingflow-record-analysis for grouped analysis or final statistical conclusions."
+  short_description: "Route Qingflow operational tasks to insert, update, delete, import, task ops, or analysis"
+  default_prompt: "Use $qingflow-app-user as a router: switch to $qingflow-record-insert for new record entry, $qingflow-record-update for direct edits, $qingflow-record-delete for deletes, $qingflow-record-import for bulk import, $qingflow-task-ops for task-center, comments, directory, and workflow usage actions, and $qingflow-record-analysis for grouped analysis or final statistical conclusions."
package/skills/qingflow-app-user/references/data-gotchas.md CHANGED
@@ -6,43 +6,24 @@ For final statistics, grouped distributions, rankings, trends, or insight-style
 
 - `record_list` is for browsing, export, and sample inspection only
 - `record_get` is for one exact record
+- Use `record_browse_schema_get` when field titles are uncertain instead of guessing ids
 - Do not present paged browse output as if it were a grouped or full-population conclusion
-- If the browser and MCP disagree, compare `request_route.base_url` and `request_route.qf_version` first
 
-## Write Preflight
+## Direct Writes
 
-- `record_insert`, `record_update`, and `record_delete` always perform internal static preflight before any apply
+- `record_insert` is schema-first through `record_insert_schema_get`
+- `record_update` is schema-first through `record_update_schema_get`
+- `record_delete` does not need a schema-get step
 - If a direct-write tool returns `ok=false`, the write was blocked and not executed
-- Use `record_schema_get` when field titles are uncertain instead of guessing ids
 - Prefer `verify_write=true` for complex, relation-heavy, subtable, or production writes
-- Even when a direct-write tool returns `ok=true`, it may still surface verification failures; do not report success before checking them
 
-## Write Semantics
+## Lookup Fields
 
-- `record_insert` uses an applicant-node `fields` map
-- `record_update` uses a view-scoped `fields` map
-- `record_delete` uses `record_id` or `record_ids`
-- Do not fake formula or expression fields
-- Do not guess relation targets from display text; resolve the real `record_id` first
+- Member / department / relation fields may accept natural text, but MCP may return `needs_confirmation`
+- Do not guess ids when the response returns candidate options
+- Retry only after the user confirms the explicit candidate
 
-## Attachments
+## Subtables and Attachments
 
-- Attachment fields are two-step: upload first, then write the returned URL object into the record
-- `file_upload_local` may report `effective_upload_kind=login` even when the requested kind was `attachment`; this is an implementation fallback, not necessarily an error
-- When debugging uploads, surface both `effective_upload_kind` and `upload_protocol`
-
-## Subtables
-
-- Subtable fields accept row objects keyed by subfield title, or native `tableValues`
-- Use the current form schema's subfield titles; do not guess nested ids
-- When updating existing subtable rows, preserve row ids if the source record returns them
-- Nested subtable writes are still unsupported
-
-## Unsupported Direct-Write Fields
-
-- `14` time range
-- `34` image recognition
-- `35` image generation
-- `36` document parsing
-
-Do not fake values for these fields in app-user writes. Stop and explain the limitation.
+- Subtable payloads stay under the parent table field as a row array
+- Attachment fields are two-step: upload first, then write the returned upload payload
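The subtable and attachment rules in the hunk above amount to payload shaping. The sketch below is a hypothetical illustration only: the field titles, the `upload_result` shape, and the helper name are invented here; only the two rules themselves (subtable rows live under the parent field, attachment payloads are written back verbatim) come from the diff.

```python
# Hypothetical sketch of the two rules above:
#  1. attachment fields are two-step: reuse the returned upload payload as-is
#  2. subtable payloads stay under the parent table field as a row array
def build_record_fields(upload_result: dict, subtable_rows: list[dict]) -> dict:
    """Shape a field-title keyed `fields` map for a direct write."""
    return {
        "合同附件": [upload_result],   # write back the upload payload verbatim
        "明细": subtable_rows,         # parent subtable field -> row array
    }

fields = build_record_fields(
    {"fileName": "contract.pdf", "fileUrl": "https://example.invalid/f/1"},
    [{"产品": "A", "数量": 2}],
)
```

Note that neither rule lets the caller address subtable cells through top-level leaf selectors; everything nests under the parent field.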
package/skills/qingflow-app-user/references/record-patterns.md CHANGED
@@ -4,106 +4,45 @@ If the task shifts into grouped analysis, ratio, ranking, trend, or any final st
 
 ## Browse Pattern
 
-Use `record_schema_get -> record_list` when:
+Use `record_browse_schema_get -> record_list` when:
 
 - the user wants to browse records
 - the target `record_id` is unknown
-- a delete or update target still needs confirmation
+- a delete target still needs confirmation
 - the user needs sample rows or a small export
 
-Keep the browse DSL simple:
-
-- `columns`: field ids only
-- `where`: flat AND filters only
-- `order_by`: field sorting only
-- `limit` and `page`: browsing intent only
-
-Do not use `record_list` for grouped conclusions, ratios, rankings, trends, or any final statistical claim.
-
 ## Detail Pattern
 
-Use `record_schema_get -> record_get` when:
+Use `record_browse_schema_get -> record_get` when:
 
 - the exact `record_id` is known
 - the user needs one record in detail
 - a write target needs verification before action
 
-Prefer passing explicit `columns` when the user only needs a subset of fields.
+## Insert Pattern
 
-## Write Pattern
-
-Use `record_schema_get -> record_insert / record_update / record_delete`.
+Use `record_insert_schema_get -> record_insert`.
 
 1. Confirm the target app
-2. Resolve fields with `record_schema_get`
-3. Decide whether the task is `insert`, `update`, or `delete`
-4. Build a field-title keyed `fields` map for insert/update
-5. Run `record_insert`, `record_update`, or `record_delete`
-6. If `ok=false`, explain `field_errors` first, then summarize blockers; stop because the write was not executed
-7. If `ok=true`, report the affected resource and any verification outcome
-8. For important writes, keep `verify_write=true`
-
-### Insert
-
-```json
-{
-  "app_key": "APP_1",
-  "fields": {
-    "客户名称": "测试客户",
-    "合同金额": 1000
-  },
-  "submit_type": "submit",
-  "verify_write": true
-}
-```
-
-### Update
-
-```json
-{
-  "app_key": "APP_1",
-  "record_id": 123,
-  "fields": {
-    "合同金额": 2000
-  },
-  "view_id": "system:all",
-  "verify_write": true
-}
-```
-
-### Delete
-
-```json
-{
-  "app_key": "APP_1",
-  "record_ids": [123, 124]
-}
-```
-
-## Write Anti-Patterns
-
-Do not do this:
-
-- do not send raw SQL text
-- do not build free-form `WHERE` updates or deletes
-- do not invent formulas or expressions
-- do not auto-fill missing required fields
-- do not guess relation targets without first resolving them
-- do not claim a blocked direct write was executed
-
-## Unsupported Direct Writes
-
-Do not attempt direct app-user writes for these field types:
-
-- `14` time range
-- `34` image recognition
-- `35` image generation
-- `36` document parsing
-
-If the payload includes them, stop after the blocked response and explain that the tool does not support a reliable direct write for those fields yet.
-
-## Relation, Attachment, and Subtable Rules
-
-- Relation fields are record-id based. Resolve the referenced target first, then write the relation field with the real `record_id`.
-- Attachment fields are two-step: upload first with `file_upload_local`, then reuse the returned attachment payload in `record_insert` or `record_update`.
-- Subtable writes require the current schema shape; when updating existing subtable rows, preserve row ids if the current record exposes them.
+2. Read `required_fields`, `optional_fields`, `runtime_linked_required_fields`, and `payload_template`
+3. Build a field-title keyed `fields` map
+4. If lookup fields are ambiguous, stop and ask for confirmation
+5. Run `record_insert`
+
+## Update Pattern
+
+Use `record_update_schema_get -> record_update`.
+
+1. Confirm the target app and `record_id`
+2. Read `writable_fields` and `payload_template`
+3. Update only fields present in `writable_fields`
+4. Let MCP auto-select the first matched accessible view that can execute the payload
+5. Run `record_update`
+
+## Delete Pattern
+
+Use `record_list / record_get -> record_delete`.
+
+1. Confirm the exact `record_id`
+2. Run `record_delete`
+3. Do not invent range deletes from guessed browse results
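The insert pattern added above is schema-first: read the schema response before composing the `fields` map. A minimal sketch of the validation step, assuming a `record_insert_schema_get`-style response containing `required_fields` entries with a `title` key (the exact response shape is an assumption, not the package's documented schema):

```python
# Hypothetical sketch of step 2-3 of the insert pattern above: check a
# field-title keyed `fields` map against the required fields reported by
# a record_insert_schema_get-style response before calling record_insert.
def missing_required(schema: dict, fields: dict) -> list[str]:
    """Return titles of required fields absent from the payload."""
    required = [f["title"] for f in schema.get("required_fields", [])]
    return [title for title in required if title not in fields]

schema = {
    "required_fields": [{"title": "客户名称"}, {"title": "合同金额"}],
    "optional_fields": [{"title": "备注"}],
}
fields = {"客户名称": "测试客户"}
blockers = missing_required(schema, fields)  # -> ["合同金额"]
```

If `blockers` is non-empty, the pattern says to stop and ask the user rather than auto-fill required fields.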
package/skills/qingflow-record-analysis/SKILL.md CHANGED
@@ -9,15 +9,15 @@ metadata:
 
 This skill is for final statistical conclusions only.
 Assumes MCP is connected, authenticated, and on the correct workspace.
-Analysis tasks must start with `app_get`, then `record_schema_get(schema_mode="browse", view_id=...)`. Read top-level `fields` and `suggested_*`, then build field_id-based DSLs only.
-Analysis tasks must start with `record_schema_get`.
+Analysis tasks must start with `app_get`, then `record_browse_schema_get(view_id=...)`. Read top-level `fields` and `suggested_*`, then build field_id-based DSLs only.
+Analysis tasks must start with `record_browse_schema_get`.
 If `app_get.accessible_views` marks a view with `analysis_supported=false`, do not use that view for `record_list` or `record_analyze`. `boardView` and `ganttView` are special UI views, not list/analyze targets.
 
-## Step 1: `app_get` → Step 2: `record_schema_get(schema_mode="browse", view_id=...)` → Step 3: build DSL → Step 4: `record_analyze`
+## Step 1: `app_get` → Step 2: `record_browse_schema_get(view_id=...)` → Step 3: build DSL → Step 4: `record_analyze`
 
 This is the ONLY execution order. Never skip `app_get` when the browse range is unclear. Never call `record_analyze` without a browse schema.
 
-Core tools: `app_get`, `record_schema_get`, `record_analyze`. Use `record_list`/`record_get` only for post-analysis samples; task/comment work stays in [$qingflow-task-ops](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-task-ops/SKILL.md).
+Core tools: `app_get`, `record_browse_schema_get`, `record_analyze`. Use `record_list`/`record_get` only for post-analysis samples; task/comment work stays in [$qingflow-task-ops](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-task-ops/SKILL.md).
 
 ---
 
package/skills/qingflow-record-analysis/agents/openai.yaml CHANGED
@@ -1,4 +1,4 @@
 interface:
   display_name: "Qingflow Record Analysis"
   short_description: "Analyze Qingflow record data with schema-first DSL execution"
-  default_prompt: "Use $qingflow-record-analysis for grouped distributions, ratios, rankings, trends, and final statistical conclusions in Qingflow apps. Start with record_schema_get, build one or more field_id-based DSLs, then run record_analyze. Treat record_list as sample-only when capped or paged, and separate full conclusions from sample observations."
+  default_prompt: "Use $qingflow-record-analysis for grouped distributions, ratios, rankings, trends, and final statistical conclusions in Qingflow apps. Start with record_browse_schema_get, build one or more field_id-based DSLs, then run record_analyze. Treat record_list as sample-only when capped or paged, and separate full conclusions from sample observations."
package/skills/qingflow-record-analysis/references/analysis-gotchas.md CHANGED
@@ -6,7 +6,7 @@ If the task is analysis-style and you jump straight to `record_list` or `record_
 
 Correct recovery:
 
-1. `record_schema_get`
+1. `record_browse_schema_get`
 2. inspect the schema and choose fields
 3. build one or more small DSLs
 4. run `record_analyze`
@@ -75,7 +75,7 @@ If the field is uncertain:
 
 Correct recovery:
 
-1. `record_schema_get`
+1. `record_browse_schema_get`
 2. if several plausible candidates remain, ask the user to confirm from a short list
 3. build the DSL only after the field is clear
 
package/skills/qingflow-record-analysis/references/analysis-patterns.md CHANGED
@@ -15,7 +15,7 @@ Use this skill when the user asks for:
 
 ## Canonical analysis sequence
 
-1. `record_schema_get`
+1. `record_browse_schema_get`
 2. decide whether the question needs `count`, `sum`, `avg`, `distinct_count`, `ratio`, or `ranking`
 3. build one or more field_id-based DSLs
 4. `record_analyze`
@@ -30,11 +30,11 @@ Result reading order:
 5. `completeness`
 6. `presentation`
 
-Treat `record_schema_get` as applicant-node visible-only schema. Missing fields are permission boundaries, not invitations to guess hidden ids.
+Treat `record_browse_schema_get` as the browse-schema source of truth. Missing fields are permission boundaries, not invitations to guess hidden ids.
 
 ## Distribution / ratio pattern
 
-1. Run `record_schema_get`
+1. Run `record_browse_schema_get`
 2. Inspect candidate fields and aliases
 3. If several plausible candidates remain, stop and ask the user to confirm the field from a short list
 4. Build a DSL with:
@@ -51,7 +51,7 @@ Treat `record_schema_get` as applicant-node visible-only schema. Missing fields
 
 ## penetration / conversion / share-of-total pattern
 
-1. Run `record_schema_get`
+1. Run `record_browse_schema_get`
 2. Write down the business definition in plain language:
   - numerator
   - denominator
@@ -64,7 +64,7 @@ Treat `record_schema_get` as applicant-node visible-only schema. Missing fields
 
 ## Average / ranking pattern
 
-1. Run `record_schema_get`
+1. Run `record_browse_schema_get`
 2. Choose one dimension field and one numeric metric field
 3. Build a DSL with:
   - `dimensions=[...]`
@@ -76,7 +76,7 @@ Treat `record_schema_get` as applicant-node visible-only schema. Missing fields
 
 ## Trend pattern
 
-1. Run `record_schema_get`
+1. Run `record_browse_schema_get`
 2. Choose a date/time field from `suggested_time_fields`
 3. Build a DSL with `bucket=day|week|month|quarter|year`
 4. Run `record_analyze`
@@ -109,7 +109,7 @@ Never use list mode alone to justify final averages, shares, rankings, or region
 
 If the user asks for something like “来源分布” (source distribution) or “类型占比” (type share) and the exact field is unclear:
 
-1. run `record_schema_get`
+1. run `record_browse_schema_get`
 2. inspect titles, aliases, and suggested fields
 3. if one candidate is clearly dominant, proceed
 4. if multiple candidates are still plausible, ask the user to confirm which field they want
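The distribution pattern in the hunks above builds a field_id-based DSL after the schema step. The sketch below is an assumption-heavy illustration: the `dimensions`/`metrics` key names mirror the doc's wording, but the exact DSL schema accepted by `record_analyze` is not specified in this diff, and the field id is invented.

```python
# Hypothetical sketch of the distribution/ratio DSL built after
# record_browse_schema_get: one dimension field, one count metric,
# ordered descending so the largest groups come first.
def distribution_dsl(dimension_field_id: str) -> dict:
    return {
        "dimensions": [dimension_field_id],          # field id, never a title
        "metrics": [{"agg": "count"}],               # grouped row counts
        "order_by": [{"metric": 0, "dir": "desc"}],  # largest group first
    }

dsl = distribution_dsl("field_1024")
```

The important invariant from the doc is that the DSL references field ids resolved from the browse schema, never guessed titles.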
package/skills/qingflow-record-analysis/references/confidence-reporting.md CHANGED
@@ -12,7 +12,7 @@ When analysis is intended as a final answer, use this order:
 
 Only write `全量可信结论` when:
 
-- `record_schema_get` was used
+- `record_browse_schema_get` was used
 - the analysis path used one or more `record_analyze` calls
 - every key analysis result has `safe_for_final_conclusion=true`
 - `safe_for_final_conclusion=true` is necessary but not sufficient
@@ -30,7 +30,7 @@ Put evidence into `样本观察` when:
 
 ## Downgrade rule
 
-If `record_schema_get` was not used for an analysis task, downgrade the overall framing to `初步观察` instead of `洞察` or `结论`.
+If `record_browse_schema_get` was not used for an analysis task, downgrade the overall framing to `初步观察` instead of `洞察` or `结论`.
 
 ## Anti-mixing rule
 
package/skills/qingflow-record-analysis/references/dsl-templates.md CHANGED
@@ -1,6 +1,6 @@
 # DSL Templates
 
-Use these copy-paste templates after `record_schema_get`.
+Use these copy-paste templates after `record_browse_schema_get`.
 
 ## Whole-table summary
 
package/skills/qingflow-record-delete/SKILL.md ADDED
@@ -0,0 +1,29 @@
+---
+name: qingflow-record-delete
+description: Delete Qingflow records safely after the MCP is already connected and authenticated.
+metadata:
+  short-description: Qingflow record delete
+---
+
+# Qingflow Record Delete
+
+## Default Path
+
+`record_list / record_get -> record_delete`
+
+## Core Tools
+
+- `record_list`
+- `record_get`
+- `record_delete`
+
+## Working Rules
+
+1. Resolve the exact target `record_id` first
+2. Prefer reading the current state before delete when the request is high risk
+3. Call `record_delete` with `record_id` or `record_ids`
+
+## Do Not
+
+- Do not pass any `view_*` selector
+- Do not infer the target record id from a vague title if `record_list` can disambiguate it
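The delete rules in the new skill above can be sketched as a guard around the tool call. The helper name and argument shape are hypothetical; only the rules themselves (exact ids first, `record_ids` payload, no `view_*` selectors, no range deletes) come from the skill text.

```python
# Hypothetical sketch of the delete working rules above: build record_delete
# arguments only once exact ids are known, and refuse invented range deletes.
def build_delete_args(record_ids: list[int]) -> dict:
    if not record_ids:
        # Rule 1 / Do-Not 2: resolve exact ids via record_list first,
        # never fabricate a "delete everything matching X" range.
        raise ValueError("resolve exact record ids before calling record_delete")
    args = {"record_ids": record_ids}
    # Do-Not 1: a delete payload carries no view_* selector at all.
    assert not any(key.startswith("view_") for key in args)
    return args

args = build_delete_args([123, 124])
```

The guard makes the skill's "disambiguate first, then delete" ordering a hard precondition instead of a convention.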
package/skills/qingflow-record-import/SKILL.md ADDED
@@ -0,0 +1,31 @@
+---
+name: qingflow-record-import
+description: Explain and operate Qingflow file-based bulk import after the MCP is already connected and authenticated.
+metadata:
+  short-description: Qingflow bulk import workflow and troubleshooting
+---
+
+# Qingflow Record Import
+
+## Default Path
+
+`app_get -> record_import_schema_get -> record_import_template_get -> record_import_verify -> (optional authorized repair) -> record_import_start -> record_import_status_get`
+
+## Core Tools
+
+- `app_get`
+- `record_import_schema_get`
+- `record_import_template_get`
+- `record_import_verify`
+- `record_import_repair_local`
+- `record_import_start`
+- `record_import_status_get`
+
+## Working Rules
+
+1. Inspect `app_get.data.import_capability` first
+2. Read `record_import_schema_get` before touching the file when column meaning is unclear
+3. Keep official headers unchanged
+4. Verify before start
+5. Only repair a file after explicit user authorization
+6. Read back one imported sample after success
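The import default path above is a strict pipeline: verify must precede start, and repair only runs with explicit authorization. A minimal sketch of that ordering, with the tool names taken from the skill and the `next_step` wrapper invented here for illustration:

```python
# Hypothetical sketch of the import pipeline above. The canonical order is
# fixed; record_import_repair_local is deliberately absent because it only
# runs after explicit user authorization, not as an automatic step.
IMPORT_STEPS = [
    "app_get",                    # 1. inspect data.import_capability first
    "record_import_schema_get",   # 2. understand columns before touching the file
    "record_import_template_get",
    "record_import_verify",       # 4. verify before start
    "record_import_start",
    "record_import_status_get",
]

def next_step(done: list[str]) -> str:
    """Return the next tool in the canonical order."""
    for step in IMPORT_STEPS:
        if step not in done:
            return step
    return "read back one imported sample"  # rule 6: post-success readback
```

Encoding the order as data makes the "verify before start" rule checkable: `record_import_start` can never be returned before `record_import_verify` has been recorded as done.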