@josephyan/qingflow-app-user-mcp 0.2.0-beta.20 → 0.2.0-beta.21

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -3,13 +3,13 @@
  Install:
 
  ```bash
- npm install @josephyan/qingflow-app-user-mcp@0.2.0-beta.20
+ npm install @josephyan/qingflow-app-user-mcp@0.2.0-beta.21
  ```
 
  Run:
 
  ```bash
- npx -y -p @josephyan/qingflow-app-user-mcp@0.2.0-beta.20 qingflow-app-user-mcp
+ npx -y -p @josephyan/qingflow-app-user-mcp@0.2.0-beta.21 qingflow-app-user-mcp
  ```
 
  Environment:
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "@josephyan/qingflow-app-user-mcp",
-   "version": "0.2.0-beta.20",
+   "version": "0.2.0-beta.21",
    "description": "Operational end-user MCP for Qingflow records, tasks, comments, and directory workflows.",
    "license": "MIT",
    "type": "module",
package/pyproject.toml CHANGED
@@ -4,7 +4,7 @@ build-backend = "setuptools.build_meta"
 
  [project]
  name = "qingflow-mcp"
- version = "0.2.0b20"
+ version = "0.2.0b21"
  description = "User-authenticated MCP server for Qingflow"
  readme = "README.md"
  license = "MIT"
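Note the two spellings of the same release: npm uses `0.2.0-beta.21`, while `pyproject.toml` uses the PEP 440 form `0.2.0b21` (beta becomes `b` with no separator). A hedged sketch of that mapping, assuming only simple `-alpha.N` / `-beta.N` / `-rc.N` suffixes:

```python
import re

# Convert an npm-style prerelease version to its PEP 440 spelling.
# Only handles "-alpha.N", "-beta.N", and "-rc.N"; anything else is
# returned unchanged. Not a general-purpose version converter.
_PRE = {"alpha": "a", "beta": "b", "rc": "rc"}

def npm_to_pep440(version: str) -> str:
    m = re.fullmatch(r"(\d+\.\d+\.\d+)-(alpha|beta|rc)\.(\d+)", version)
    if not m:
        return version
    base, tag, num = m.groups()
    return f"{base}{_PRE[tag]}{num}"

assert npm_to_pep440("0.2.0-beta.21") == "0.2.0b21"
assert npm_to_pep440("0.2.0-beta.20") == "0.2.0b20"
assert npm_to_pep440("1.0.0") == "1.0.0"  # no prerelease suffix: unchanged
```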
@@ -1,34 +1,61 @@
  ---
  name: qingflow-app-user
- description: Use Qingflow apps as an operational end user after the MCP is already connected and authenticated. Use when the user wants to create, search, read, update, or delete business records, inspect or manage task-center work, add comments, or perform workflow usage actions inside an existing app. Do not use this skill to design apps, modify schemas, or build a brand new SolutionSpec.
+ description: Use Qingflow apps as an operational end user after the MCP is already connected and authenticated. Use when the user wants to browse, read, write, comment on, or act on existing business records and task-center work. Do not use this skill for schema design or final statistical analysis.
  metadata:
-   short-description: Use Qingflow apps for business data and task operations
+   short-description: Schema-first operational use of Qingflow apps
  ---
 
  # Qingflow App User
 
  ## Overview
 
- This skill is for business-user operations inside existing Qingflow apps. It focuses on records, task-center usage, comments, and usage-side workflow actions, not app design or system configuration. If the task is about building or changing app structure, switch to `$qingflow-app-builder`.
+ This skill is for **operational usage** inside existing Qingflow apps.
 
- If the user is asking for analysis, grouped distributions, ranking, trend, averages, business insights, or any final statistical conclusion, switch to `$qingflow-record-analysis` instead of keeping that logic inside this skill.
+ Use it for:
 
- Before operating on data, identify whether the task targets `test` or `prod` and read [references/environments.md](references/environments.md). If the user did not specify one, default to `prod`.
- When the task is in `prod`, browser parity matters, or the user says "the page has data but MCP does not", restate the expected `base_url` and `qf_version`, then prefer tools that expose `request_route` so you can confirm the live route before concluding.
+ - record browsing
+ - record detail lookup
+ - record create / update / delete
+ - task-center usage
+ - record comments
+ - approval / reject / rollback / transfer
+ - directory lookup
+
+ Do **not** keep grouped analysis, ratios, rankings, trends, or final statistical conclusions inside this skill.
+ For those, switch to [$qingflow-record-analysis](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-record-analysis/SKILL.md).
+
+ Before operating on data, identify whether the task targets `test` or `prod` and read [references/environments.md](references/environments.md).
+ If the user did not specify one, default to `prod`.
+
+ ## Default Paths
+
+ Use exactly one of these default paths:
+
+ 1. Browse records
+    `record_schema_get -> record_list`
+
+ 2. Read one record
+    `record_schema_get -> record_get`
+
+ 3. Write records
+    `record_schema_get -> record_write(mode="plan") -> record_write(mode="apply")`
+
+ 4. Work task center
+    `task_summary / task_list / task_facets`
+
+ 5. Analysis
+    Switch to [$qingflow-record-analysis](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-record-analysis/SKILL.md)
 
  ## Tool Scope
 
- Primary record and data tools:
+ Primary record tools:
 
- - `record_query`
  - `record_schema_get`
- - `record_write_plan`
- - `record_create`
+ - `record_list`
  - `record_get`
- - `record_update`
- - `record_delete`
+ - `record_write`
 
- Directory and organization lookup tools when the user is asking about internal members, departments, org structure, ownership, approver candidates, or wants full contact exports:
+ Directory tools:
 
  - `directory_search`
  - `directory_list_internal_users`
@@ -38,119 +65,162 @@ Directory and organization lookup tools when the user is asking about internal m
  - `directory_list_sub_departments`
  - `directory_list_external_members`
 
- Usage-side collaboration and flow tools when needed:
-
- - `record_comment_*`
- - `task_approve`
- - `task_reject`
- - `task_rollback*`
- - `task_transfer*`
-
- Task-center and inbox tools when the user is asking about pending work, processed work, cc, or workflow workload:
+ Task-center tools:
 
+ - `task_summary`
  - `task_list`
- - `task_list_grouped`
- - `task_statistics`
+ - `task_facets`
+ - `task_mark_read`
+ - `task_mark_all_cc_read`
  - `task_urge`
 
- Do not use builder-side tools here:
+ Comments and workflow usage actions:
 
- - `app_*`
- - `view_*`
- - `workflow_*`
- - `portal_*`
- - `navigation_*`
- - `package_*`
- - `solution_*`
+ - `record_comment_write`
+ - `record_comment_list`
+ - `record_comment_mentions`
+ - `record_comment_mark_read`
+ - `task_approve`
+ - `task_reject`
+ - `task_rollback_candidates`
+ - `task_rollback`
+ - `task_transfer_candidates`
+ - `task_transfer`
+
+ Do not use builder-side tools here.
 
  ## Standard Operating Order
 
  1. Ensure auth exists
  2. Ensure workspace is selected
- 3. Confirm target app, task scope, and operation type
- 4. For org, member, department, approver, or ownership questions, start with `directory_*`
- 5. For inbox, pending, processed, cc, or workload questions, start with `task_statistics`, `task_list`, or `task_list_grouped`
- 6. When a task query identifies the target record, switch to `record_get` or `record_query` for business data details
- 7. For non-trivial record reads, start with `record_query`
- 8. For non-trivial writes, start with `record_write_plan`, especially when using `fields`
- 9. Prefer read-first when changing existing records
- 10. Report the affected task ids, record ids, member ids, department ids, or counts after actions
- 11. For `prod`, complex forms, attachments, or any unfamiliar schema, prefer `record_create(..., verify_write=true)` or read back immediately after create/update
-
- ## Data Rules
-
- - Prefer `record_query` as the default read entry
- - Treat `record_query(list)` as the default wide-table browse and export endpoint; pass explicit `select_columns`, do not expect raw answer arrays there, and let the tool auto-batch columns when the backend per-request field cap is hit
- - For analysis, grouped distributions, trends, or final statistical conclusions, switch to `$qingflow-record-analysis`
- - Use `request_route` from tool responses to verify the active `base_url` and `qf_version` whenever route mismatches are plausible
- - Use `directory_search` for fuzzy internal lookup across both members and departments
- - Use `directory_list_all_internal_users` when the user explicitly wants a complete internal member list within the current workspace or within a specific department or role
- - Use `directory_list_all_departments` when the user explicitly wants the full department tree or all departments under a root
- - Use `directory_list_internal_departments` for keyword-based department search, not full exports
- - Use `task_statistics` before `task_list` when the user only needs counts
- - Use `task_list_grouped` when worksheet or group buckets matter
- - Use `task_urge` only when the user clearly wants a reminder sent for a pending task
- - Use `record_schema_get` when field selectors are ambiguous; if the task then turns into analysis, switch to `$qingflow-record-analysis`
- - For precise record lookup, use `record_get` when `apply_id` is known
- - Use `record_schema_get` when the user gives field titles and you are not fully sure about the exact schema; do not guess ambiguous fields silently
- - If the task has already shifted into analysis and `record_schema_get` still leaves multiple plausible fields, stop and ask the user to confirm the intended field instead of continuing to try read tools in a loop
- - Treat field selectors as schema-first and platform-generic. Prefer exact field titles, then neutral aliases such as `创建时间`, `新增时间`, `负责人`, `部门`, `时间`, or `阶段` only when the tool resolves them clearly. Do not assume CRM shorthand like `销售`, `商机阶段`, `客户全称`, or similar domain shortcuts apply across arbitrary Qingflow apps
- - For updates, inspect current data first unless the user already provided the exact target and patch
- - For deletes, confirm the exact record scope and report the deleted ids
- - When validating business data volume, use `effective_count` over raw backend totals
- - In `prod`, prefer read-first even more strictly and avoid deletes unless the record scope is explicit in the conversation
- - For attachments, first run `file_upload_local`, then pass the returned `attachment_value` into `record_create` or `record_update`; do not try to write local file paths directly into attachment fields
- - For relation fields, first query the target app and resolve the referenced record `apply_id`; do not assume titles, numbers, or business keys can be written directly into a relation field
- - For subtable fields, write a list of row objects keyed by the subfield titles. When updating existing rows, include `rowId` / `row_id` / `__row_id__` only if the source record already exposes it
- - Treat `14/34/35/36` as unsupported direct-write field types in app-user flows:
-   - `14`: time range
-   - `34`: image recognition
-   - `35`: image generation
-   - `36`: document parsing
- - For those unsupported types, stop and explain the limitation instead of inventing payloads
- - Use `record_write_plan` to inspect `write_format.support_level` before non-trivial writes:
-   - `full`: generic scalar/select/date writes are directly supported
-   - `restricted`: member/department/attachment/relation/subtable writes need the documented presteps
-   - `unsupported`: stop and explain the limitation
- - For relation-heavy, attachment, subtable, or production writes, default to `verify_write=true` so field drops are surfaced immediately instead of being reported as success
-
- ## Mock and Demo Data
-
- When the user asks for demo data, seed, smoke data, or mock data:
-
- - default to at least `5` records for the relevant entity unless the user asks for fewer
- - keep titles realistic and business-like
- - vary statuses, dates, and categories enough to make views and charts useful
- - if the task is `prod`, do not create mock or smoke data unless the user explicitly asks for it
+ 3. Confirm target app and whether the task is browse / detail / write / task / analysis
+ 4. Run `record_schema_get` before any non-trivial record read or write
+ 5. If the request is analysis-like, switch to [$qingflow-record-analysis](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-record-analysis/SKILL.md)
+ 6. If the request is write-like, decide `insert / update / delete` before building any payload
+ 7. If fields are still ambiguous after `record_schema_get`, ask the user to confirm from a short candidate list instead of guessing
+ 8. For high-risk writes, task actions, or production changes, read the current state first whenever practical
+ 9. After actions, report the affected `record_id`, `task_id`, counts, or returned item count
+
+ ## Record Read Rules
+
+ - Use `record_list` for browse/export/sample inspection only
+ - Use `record_get` when `record_id` is known
+ - `record_list` accepts:
+   - `columns`
+   - `where`
+   - `order_by`
+   - `limit`
+   - `page`
+ - `record_list` is **not** an analysis tool
+ - If a request turns into grouped distributions, ratios, rankings, trends, or final statistical conclusions, switch to [$qingflow-record-analysis](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-record-analysis/SKILL.md)
+
+ ## Record Write Rules
+
+ Use `record_write` as the only default write tool.
+
+ ### Write workflow
+
+ 1. Run `record_schema_get`
+ 2. Decide whether the task is `insert`, `update`, or `delete`
+ 3. Build SQL-like JSON clauses
+ 4. Run `record_write(mode="plan")`
+ 5. If blockers are empty, run `record_write(mode="apply")`
+ 6. For important writes, keep `verify_write=true`
+
+ ### SQL-like JSON DSL
+
+ The DSL is clause-shaped like SQL, but it is **not raw SQL text**.
+
+ #### Insert
+
+ ```json
+ {
+   "operation": "insert",
+   "mode": "plan",
+   "values": [
+     { "field_id": 12, "value": "测试客户" },
+     { "field_id": 18, "value": 1000 }
+   ],
+   "submit_type": "submit",
+   "verify_write": true
+ }
+ ```
+
+ #### Update
+
+ ```json
+ {
+   "operation": "update",
+   "mode": "plan",
+   "record_id": 123,
+   "set": [
+     { "field_id": 18, "value": 2000 }
+   ],
+   "verify_write": true
+ }
+ ```
+
+ #### Delete
+
+ ```json
+ {
+   "operation": "delete",
+   "mode": "plan",
+   "record_ids": [123, 124]
+ }
+ ```
+
+ ### Write discipline
+
+ - `insert` uses `values`
+ - `update` uses `set`
+ - `delete` uses `record_id` or `record_ids`
+ - Do not send raw SQL text
+ - Do not invent formulas or expressions
+ - Do not use free-form `WHERE` updates or deletes
+ - Do not auto-fill missing fields
+ - Do not auto-resolve relation targets without first querying them
+
+ ## Task-Center Rules
+
+ - Use `task_summary` for headline counts
+ - Use `task_list` for flat browsing
+ - Use `task_facets` for grouped worksheet or workflow-node buckets
+ - `task_box` must be one of:
+   - `todo`
+   - `initiated`
+   - `cc`
+   - `done`
+ - `flow_status` must be one of:
+   - `all`
+   - `in_progress`
+   - `approved`
+   - `rejected`
+   - `pending_fix`
+   - `urged`
+   - `overdue`
+   - `due_soon`
+   - `unread`
+   - `ended`
+ - Find the exact task or record first, then use `task_approve`, `task_reject`, `task_rollback`, or `task_transfer`
+ - Do not guess `workflow_node_id`
+
+ ## Directory and Comments
+
+ - Use `directory_search` for fuzzy member/department lookup
+ - Use `directory_list_all_internal_users` and `directory_list_all_departments` only when the user explicitly wants a complete export
+ - Use `record_comment_write` after the exact `record_id` is known
+ - Use `record_comment_mentions` to resolve mention candidates before building complex comment payloads
 
  ## Response Interpretation
 
- - `record_query(query_mode="list")` is browse/sample output, not a final analysis result
- - If `record_query(query_mode="list")` reports `row_cap_hit`, `sample_only`, or capped rows, do not present it as full data
- - For grouped distributions, trends, or final statistical conclusions, switch to `$qingflow-record-analysis` and use `record_schema_get -> record_analyze`
- - `record_write_plan` is static preflight, not a guarantee that submit will pass runtime linkage or visibility checks
- - `record_create` now returns integer `apply_id`; you can pass that id directly into `record_get`, `record_update`, or `record_delete`
- - `verify_write=true` means the tool read the record back and compared the written fields; if it returns `status=verification_failed` or `ok=false`, do not report the create or update as successful
- - Relation writes are `apply_id`-based; if the user only gives a title, number, or business key, query the target app first and resolve the real record id before writing
- - Task counts and record counts are not interchangeable; a task query reflects task-center workload, not the underlying record total
- - When reporting task results, include the task dimension that was used, such as pending, processed, cc, node, or worksheet
- - Prefer summarizing titles and counts instead of dumping raw answer arrays
- - When records reference other entities, verify references are coherent before reporting success
- - `file_upload_local` may transparently change `effective_upload_kind` and `upload_protocol`; surface those fields when debugging production upload behavior instead of assuming all uploads are direct `PUT`
-
- ## Practical Patterns
-
- - Bulk mock data creation: query current data first, run `record_write_plan`, then create missing records
- - Data correction: query, inspect, preflight, update, and re-read
- - Inbox triage: use `task_statistics` first, then `task_list` or `task_list_grouped`, then switch to `record_*` for the underlying record when needed
- - Bottleneck analysis: start with `task_statistics` and `task_list_grouped` before drilling into specific records
- - Workflow collaboration: comment, transfer, or reassign only after identifying the exact record
- - Approval actions: identify the exact record and current node first, then use `task_approve` or `task_reject`; do not guess `nodeId`
- - Demo validation: create at least `5` rows and confirm they are queryable
- - Org export: use `directory_list_all_internal_users` for full member exports and `directory_list_all_departments` for full org-tree exports before mapping owners or departments into record operations
- - Attachment write: upload first, write the returned URL object second, and prefer `verify_write=true`
- - Relation write: query the target app first, capture the referenced record `apply_id`, then write the relation field and verify the readback
- - Production discrepancy triage: compare the response `request_route` with the browser environment before assuming the data query is wrong
+ - `record_list` returns browse/sample data, not final analysis conclusions
+ - `record_write(mode="plan")` is static preflight, not runtime execution
+ - `record_write(mode="apply")` may still surface verification failures
+ - Treat `request_route` as the source of truth for live route debugging
+ - Prefer canonical schema titles and aliases in your final wording
+ - If only part of the requested work is completed, explicitly disclose which parts are done and which are not
+
  ## Resources
 
  - Environment switching: [references/environments.md](references/environments.md)
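The plan-then-apply write path added in this hunk (`record_write(mode="plan")`, stop on blockers, then `record_write(mode="apply")` with `verify_write=true`) can be sketched as client-side logic. `call_tool` below is a hypothetical stand-in for an MCP client invocation, stubbed so the sketch is self-contained; real payload and response shapes may differ:

```python
# Hedged sketch of the plan -> apply write discipline from the skill text.
# `call_tool` is a hypothetical MCP client function, stubbed here; the
# "blockers" field follows the skill's wording, everything else is assumed.

def call_tool(name: str, payload: dict) -> dict:
    # Stub: a real client would send this over MCP. The fake plan response
    # has no blockers, so the apply step proceeds.
    if name == "record_write" and payload.get("mode") == "plan":
        return {"blockers": []}
    if name == "record_write" and payload.get("mode") == "apply":
        return {"status": "success", "record_id": 123}
    return {}

def write_with_preflight(clauses: dict) -> dict:
    plan = call_tool("record_write", {**clauses, "mode": "plan"})
    if plan.get("blockers"):
        # Static preflight found problems: do not apply, report instead.
        raise RuntimeError(f"plan blocked: {plan['blockers']}")
    return call_tool("record_write", {**clauses, "mode": "apply", "verify_write": True})

result = write_with_preflight({
    "operation": "insert",
    "values": [{"field_id": 12, "value": "测试客户"}],
    "submit_type": "submit",
})
assert result["status"] == "success"
```

The design point the skill encodes: planning and applying are the same tool with a `mode` switch, so the apply payload is guaranteed to be the payload that was preflighted.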
@@ -1,40 +1,30 @@
  # Data Gotchas
 
- For final statistics, grouped distributions, or insight-style analysis, use [$qingflow-record-analysis](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-record-analysis/SKILL.md) instead of keeping that reasoning inside `$qingflow-app-user`.
-
- ## Counts
-
- - Prefer `effective_count`
- - For final analysis, inspect `record_analyze.data.completeness` and `safe_for_final_conclusion` before concluding
- - If `record_analyze.status!=success`, treat the result as exploratory unless the user explicitly asked for a partial sample
- - `record_query(list)` is for browsing and sample inspection. If it reports `row_cap_hit`, `sample_only`, or capped `returned_items`, do not present it as full data
- - When coverage matters, surface:
-   - `scanned_count`
-   - `presentation.statement_scope`
- - Use narrower views, filters, or smaller analysis questions instead of inventing manual scan settings by hand
- - If the browser and MCP disagree, compare `request_route.base_url` and `request_route.qf_version` first
- - Do not mix a full aggregate total with sample-only list detail in one sentence like “基于全部数据分析”; split the answer into `全量结论` and `样本观察`
+ For final statistics, grouped distributions, rankings, trends, or insight-style conclusions, use [$qingflow-record-analysis](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-record-analysis/SKILL.md) instead of keeping that reasoning inside `$qingflow-app-user`.
 
- ## Record titles
+ ## Record Reads
 
- - Do not dump raw answer arrays to the user unless needed
- - Prefer concise business titles and counts
+ - `record_list` is for browsing, export, and sample inspection only
+ - `record_get` is for one exact record
+ - Do not present paged browse output as if it were a grouped or full-population conclusion
+ - If the browser and MCP disagree, compare `request_route.base_url` and `request_route.qf_version` first
 
- ## Preflight
+ ## Write Preflight
 
- - `record_write_plan` is static preflight only; linked visibility and runtime required rules can still reject writes
- - `record_write_plan` now exposes `write_format.support_level`; check `full / restricted / unsupported` before attempting non-trivial writes
+ - `record_write(mode="plan")` is static preflight only; linked visibility and runtime required rules can still reject writes
  - Use `record_schema_get` when field titles are uncertain instead of guessing ids
- - For analysis tasks, use the fixed path `record_schema_get -> record_analyze`; do not switch tools blindly after `FIELD_NOT_FOUND` or ambiguity
- - Prefer `strict_full=true` for final statistics or business conclusions
- - `record_create` and `record_update` can do post-write verification with `verify_write=true`; use that for complex, subtable, or production writes
- - `apply_id` is normalized to an integer; pass it directly into later record tools
+ - Prefer `verify_write=true` for complex, relation-heavy, subtable, or production writes
+ - `record_write(mode="apply")` may still surface verification failures; do not report success before checking them
 
- ## Mock data
+ ## Write Semantics
 
- - Default to at least `5` rows per relevant entity unless the user asked for fewer
- - Avoid identical titles and identical statuses across all rows
- - Keep relation references valid
+ - `insert` uses `values`
+ - `update` uses `set`
+ - `delete` uses `record_id` or `record_ids`
+ - Do not send raw SQL strings
+ - Do not fake formula or expression fields
+ - Do not perform free-form bulk updates or deletes
+ - Do not guess relation targets from display text; resolve the real `record_id` first
 
  ## Attachments
@@ -46,10 +36,10 @@ For final statistics, grouped distributions, or insight-style analysis, use [$qi
 
  - Subtable fields accept row objects keyed by subfield title, or native `tableValues`
  - Use the current form schema's subfield titles; do not guess nested ids
- - When updating existing subtable rows, preserve `rowId` if the source record returns it
+ - When updating existing subtable rows, preserve row ids if the source record returns them
  - Nested subtable writes are still unsupported
 
- ## Unsupported direct-write fields
+ ## Unsupported Direct-Write Fields
 
  - `14` time range
  - `34` image recognition
@@ -50,7 +50,7 @@ Production behavior:
  Production guardrails:
 
  - never assume a record id, app id, or workspace id
- - treat `record_delete` as high risk
+ - treat `record_write(operation="delete")` as high risk
  - if the task can be answered read-only, do not write
 
  ## Reporting Rule
@@ -1,81 +1,98 @@
  # Record Patterns
 
- If the task shifts into grouped analysis, ratio, ranking, trend, or final statistical conclusions, switch to [$qingflow-record-analysis](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-record-analysis/SKILL.md).
+ If the task shifts into grouped analysis, ratio, ranking, trend, or any final statistical conclusion, switch to [$qingflow-record-analysis](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-record-analysis/SKILL.md).
 
- ## Query first
+ ## Browse Pattern
 
- Use `record_query` first when:
+ Use `record_schema_get -> record_list` when:
 
- - the user only gives a title or business key
- - the target record id is unknown
- - updates or deletes need confirmation
- - ordinary list browsing or spot checks are needed
+ - the user wants to browse records
+ - the target `record_id` is unknown
+ - a delete or update target still needs confirmation
+ - the user needs sample rows or a small export
 
- Use [$qingflow-record-analysis](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-record-analysis/SKILL.md) when:
+ Keep the browse DSL simple:
 
- - field titles may be ambiguous
- - filters are still in natural-language shape
- - the result may be used as a final conclusion
- - scan scope or completeness is unclear
- - the user asks for a distribution, ratio, ranking, top-N, or any grouped aggregate
- - the user asks for `分析 / 洞察 / 分布 / 占比 / 平均 / 排名 / 趋势 / 所有 / 全部 / 全国 / 高价值`
+ - `columns`: field ids only
+ - `where`: flat AND filters only
+ - `order_by`: field sorting only
+ - `limit` and `page`: browsing intent only
 
- ## Final analysis pattern
+ Do not use `record_list` for grouped conclusions, ratios, rankings, trends, or any final statistical claim.
 
- 1. Run `record_schema_get`
- 2. Generate one or more field_id-based DSLs
- 3. Run `record_analyze(strict_full=true)` for summary/distribution/trend/cross analysis
- 4. Run `record_query(query_mode="list")` only if you still need sample rows or examples
- 5. Report `scanned_count`, `presentation.statement_scope`, and whether the result is safe for a final conclusion
- 6. If `status=error` or `safe_for_final_conclusion=false`, stop at “partial result” instead of presenting a final business conclusion
- 7. If list rows are sample-only, separate the answer into:
-   - `全量可信结论`
-   - `样本观察(不作为最终结论)`
-   - optional `待验证假设`
+ ## Detail Pattern
 
- ## Analysis anti-pattern
+ Use `record_schema_get -> record_get` when:
 
- Do not do this:
+ - the exact `record_id` is known
+ - the user needs one record in detail
+ - a write target needs verification before action
+
+ Prefer passing explicit `columns` when the user only needs a subset of fields.
+
+ ## Write Pattern
+
+ Use `record_schema_get -> record_write(mode="plan") -> record_write(mode="apply")`.
 
- 1. Run only `record_query(query_mode="list")`
- 2. Get `200` rows back
- 3. Report平均值、占比、地域分布 as if they were based on all records
+ 1. Confirm the target app
+ 2. Resolve fields with `record_schema_get`
+ 3. Decide whether the task is `insert`, `update`, or `delete`
+ 4. Build SQL-like JSON clauses
+ 5. Run `record_write(mode="plan")`
+ 6. If blockers are empty, run `record_write(mode="apply")`
+ 7. For important writes, keep `verify_write=true`
 
- This is not acceptable because the list endpoint can be capped. Use `record_schema_get -> record_analyze` first, then treat list rows as sample-only evidence.
+ ### Insert
 
- ## Create pattern
+ ```json
+ {
+   "operation": "insert",
+   "mode": "plan",
+   "values": [
+     { "field_id": 12, "value": "测试客户" },
+     { "field_id": 18, "value": 1000 }
+   ],
+   "submit_type": "submit",
+   "verify_write": true
+ }
+ ```
 
- 1. Confirm target app
- 2. Resolve fields with `record_schema_get` if needed. Prefer exact schema titles first; only rely on platform-neutral aliases such as `创建时间`, `负责人`, or `部门` when they resolve cleanly, and do not assume business-domain shorthand like `销售` is portable across apps
- 3. Run `record_write_plan` for non-trivial payloads or any `fields`-based write
- 4. For relation fields, query the target app first and resolve the referenced record `apply_id`
- 5. For attachments, call `file_upload_local` first and reuse the returned `attachment_value`
- 6. For subtable fields, pass a list of row objects keyed by subfield title. When updating existing rows, include `rowId` / `row_id` / `__row_id__` only if the current record already exposes it
- 7. Inspect `record_write_plan.data.support_matrix` or each field's `write_format.support_level` before submit:
-   - `full`: direct write is supported
-   - `restricted`: follow the documented presteps first
-   - `unsupported`: stop and explain the limitation
- 8. For complex forms, production writes, attachments, relation-heavy payloads, or subtables, create with `verify_write=true`
- 9. If verification fails, treat the write as not yet successful and inspect the missing or empty fields before reporting back
- 10. Re-query or fetch the record when validation matters
+ ### Update
 
- ## Update pattern
+ ```json
+ {
+   "operation": "update",
+   "mode": "plan",
+   "record_id": 123,
+   "set": [
+     { "field_id": 18, "value": 2000 }
+   ],
+   "verify_write": true
+ }
+ ```
 
- 1. Query the target records
- 2. Resolve exact `apply_id`
- 3. Run `record_write_plan`
- 4. Update only the intended fields
- 5. Prefer `verify_write=true` for attachment, relation, subtable, or production updates
- 6. Re-read the record if the change is important, attachment-related, subtable-related, or the form has linkage
+ ### Delete
 
- ## Delete pattern
+ ```json
+ {
+   "operation": "delete",
+   "mode": "plan",
+   "record_ids": [123, 124]
+ }
+ ```
 
- 1. Query or fetch the exact record first
- 2. Confirm the target ids
- 3. Delete
- 4. Report affected ids and remaining count when relevant
+ ## Write Anti-Patterns
 
- ## Unsupported direct writes
+ Do not do this:
+
+ - do not send raw SQL text
+ - do not build free-form `WHERE` updates or deletes
+ - do not invent formulas or expressions
+ - do not auto-fill missing required fields
+ - do not guess relation targets without first resolving them
+ - do not skip `mode="plan"` on non-trivial writes
+
+ ## Unsupported Direct Writes
 
  Do not attempt direct app-user writes for these field types:
 
@@ -84,13 +101,10 @@ Do not attempt direct app-user writes for these field types:
  - `35` image generation
  - `36` document parsing
 
- If the payload includes them, stop at `record_write_plan` and explain that the tool does not build a reliable native payload for those fields yet.
-
- ## Relation fields
+ If the payload includes them, stop at `record_write(mode="plan")` and explain that the tool does not support a reliable direct write for those fields yet.
 
- Relation fields are record-id based.
+ ## Relation, Attachment, and Subtable Rules
 
- - Query the referenced app first
- - Resolve the target record `apply_id`
- - Write the relation field with that id
- - Do not write relation fields with display titles, business keys, or guessed identifiers unless they have already been resolved to the real record id
+ - Relation fields are record-id based. Resolve the referenced target first, then write the relation field with the real `record_id`.
+ - Attachment fields are two-step: upload first with `file_upload_local`, then reuse the returned attachment payload in `record_write`.
+ - Subtable writes require the current schema shape; when updating existing subtable rows, preserve row ids if the current record exposes them.
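The relation rule in the final hunk (resolve the real `record_id` before writing a relation field, never write display text) can be sketched as a resolve-then-write helper. `call_tool`, the field ids, and the response shapes below are illustrative assumptions stubbed for self-containment, not the real API:

```python
# Hedged sketch of the relation-write rule: look up the referenced record,
# require exactly one match, then write its id into the relation field.
# `call_tool` is a hypothetical MCP client function; shapes are assumed.

def call_tool(name: str, payload: dict) -> dict:
    # Stub lookup: pretend the target app has one record titled "Acme".
    if name == "record_list":
        rows = [{"record_id": 456, "title": "Acme"}]
        return {"items": [r for r in rows if r["title"] == payload["where"]["title"]]}
    if name == "record_write":
        return {"status": "success"}
    return {}

def resolve_relation_target(app_id: int, title: str) -> int:
    found = call_tool("record_list", {"app_id": app_id, "where": {"title": title}})
    items = found.get("items", [])
    if len(items) != 1:
        # Ambiguous or missing target: ask the user instead of guessing.
        raise ValueError(f"expected exactly one match for {title!r}, got {len(items)}")
    return items[0]["record_id"]

target_id = resolve_relation_target(app_id=99, title="Acme")
result = call_tool("record_write", {
    "operation": "update",
    "mode": "plan",
    "record_id": 123,
    "set": [{"field_id": 30, "value": target_id}],  # relation field takes the resolved id
})
assert target_id == 456
assert result["status"] == "success"
```

The exactly-one-match guard mirrors the skill's instruction to confirm ambiguous targets with the user rather than writing a guessed identifier.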