@josephyan/qingflow-app-user-mcp 0.2.0-beta.52 → 0.2.0-beta.54

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -3,13 +3,13 @@
 Install:
 
 ```bash
-npm install @josephyan/qingflow-app-user-mcp@0.2.0-beta.52
+npm install @josephyan/qingflow-app-user-mcp@0.2.0-beta.54
 ```
 
 Run:
 
 ```bash
-npx -y -p @josephyan/qingflow-app-user-mcp@0.2.0-beta.52 qingflow-app-user-mcp
+npx -y -p @josephyan/qingflow-app-user-mcp@0.2.0-beta.54 qingflow-app-user-mcp
 ```
 
 Environment:
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "@josephyan/qingflow-app-user-mcp",
-  "version": "0.2.0-beta.52",
+  "version": "0.2.0-beta.54",
   "description": "Operational end-user MCP for Qingflow records, tasks, comments, and directory workflows.",
   "license": "MIT",
   "type": "module",
package/pyproject.toml CHANGED
@@ -4,7 +4,7 @@ build-backend = "setuptools.build_meta"
 
 [project]
 name = "qingflow-mcp"
-version = "0.2.0b52"
+version = "0.2.0b54"
 description = "User-authenticated MCP server for Qingflow"
 readme = "README.md"
 license = "MIT"
@@ -34,8 +34,9 @@ Route to exactly one of these specialized paths:
 - If the app is known but the available data range is unclear, call `app_get` first and inspect `accessible_views`
 - If the task is about browsing, reading, creating, updating, deleting, attachments, relations, subtable writes, member/department-field candidate lookup, code-block field execution, import templates, import capability discovery, import-file verification, authorized local file repair, import execution, or import status, switch to `$qingflow-record-crud`
 - If the task is about todo discovery, task context, approval actions, rollback or transfer, associated report review, or workflow log review, switch to `$qingflow-task-ops`
-- If the task is about creating new records or importing data, prefer `$qingflow-record-crud` under applicant-node schema semantics
-- If the task is about updating an existing record directly, require an explicit accessible `view_id` and then route to `$qingflow-record-crud`
+- If the task is about creating new records or importing data, prefer `$qingflow-record-crud` under applicant-node create semantics
+- If the task is about updating an existing record directly, route to `$qingflow-record-crud`, which uses `record_update` and defaults to `system:all` before requiring an explicit accessible `view_id`
+- If the task involves member, department, or relation fields and the user only has natural names/titles, still route to `$qingflow-record-crud`; direct write now supports backend-native auto resolution and may return `needs_confirmation` with candidates instead of failing blind
 - If the task is about subtable writes, still route to `$qingflow-record-crud`, but shape the payload through the parent subtable field `rows/tableValues`; do not route users toward top-level leaf selectors
 - If the user sounds like an ordinary workflow assignee rather than a system operator, prefer `$qingflow-task-ops` over direct record mutation whenever both paths could fit
 - If the task is about grouped distributions, ratios, rankings, trends, insights, or any final statistical conclusion, switch to `$qingflow-record-analysis`
@@ -11,20 +11,18 @@ For final statistics, grouped distributions, rankings, trends, or insight-style
 
 ## Write Preflight
 
-- `record_write` always performs internal static preflight before any apply
-- If `record_write` returns `ok=false`, the write was blocked and not executed
+- `record_insert`, `record_update`, and `record_delete` always perform internal static preflight before any apply
+- If a direct-write tool returns `ok=false`, the write was blocked and not executed
 - Use `record_schema_get` when field titles are uncertain instead of guessing ids
 - Prefer `verify_write=true` for complex, relation-heavy, subtable, or production writes
-- Even when `record_write` returns `ok=true`, it may still surface verification failures; do not report success before checking them
+- Even when a direct-write tool returns `ok=true`, it may still surface verification failures; do not report success before checking them
 
 ## Write Semantics
 
-- `insert` uses `values`
-- `update` uses `set`
-- `delete` uses `record_id` or `record_ids`
-- Do not send raw SQL strings
+- `record_insert` uses an applicant-node `fields` map
+- `record_update` uses a view-scoped `fields` map
+- `record_delete` uses `record_id` or `record_ids`
 - Do not fake formula or expression fields
-- Do not perform free-form bulk updates or deletes
 - Do not guess relation targets from display text; resolve the real `record_id` first
 
 ## Attachments
@@ -50,7 +50,7 @@ Production behavior:
 Production guardrails:
 
 - never assume a record id, app id, or workspace id
-- treat `record_write(operation="delete")` as high risk
+- treat `record_delete` as high risk
 - if the task can be answered read-only, do not write
 
 ## Reporting Rule
@@ -32,13 +32,13 @@ Prefer passing explicit `columns` when the user only needs a subset of fields.
 
 ## Write Pattern
 
-Use `record_schema_get -> record_write`.
+Use `record_schema_get -> record_insert / record_update / record_delete`.
 
 1. Confirm the target app
 2. Resolve fields with `record_schema_get`
 3. Decide whether the task is `insert`, `update`, or `delete`
-4. Build SQL-like JSON clauses
-5. Run `record_write`
+4. Build a field-title keyed `fields` map for insert/update
+5. Run `record_insert`, `record_update`, or `record_delete`
 6. If `ok=false`, explain `field_errors` first, then summarize blockers; stop because the write was not executed
 7. If `ok=true`, report the affected resource and any verification outcome
 8. For important writes, keep `verify_write=true`
@@ -47,11 +47,11 @@ Use `record_schema_get -> record_write`.
 
 ```json
 {
-  "operation": "insert",
-  "values": [
-    { "field_id": 12, "value": "测试客户" },
-    { "field_id": 18, "value": 1000 }
-  ],
+  "app_key": "APP_1",
+  "fields": {
+    "客户名称": "测试客户",
+    "合同金额": 1000
+  },
   "submit_type": "submit",
   "verify_write": true
 }
@@ -61,11 +61,12 @@ Use `record_schema_get -> record_write`.
 
 ```json
 {
-  "operation": "update",
+  "app_key": "APP_1",
   "record_id": 123,
-  "set": [
-    { "field_id": 18, "value": 2000 }
-  ],
+  "fields": {
+    "合同金额": 2000
+  },
+  "view_id": "system:all",
   "verify_write": true
 }
 ```
@@ -74,7 +75,7 @@ Use `record_schema_get -> record_write`.
 
 ```json
 {
-  "operation": "delete",
+  "app_key": "APP_1",
  "record_ids": [123, 124]
 }
 ```
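The three JSON bodies above share a common skeleton, which can be sketched as plain dicts. A minimal illustration, assuming the documented examples show the full required shape; the `build_*` helper names are hypothetical and not part of the MCP surface:

```python
# Hypothetical helpers that assemble the three direct-write payload shapes.
# Only the dict shapes come from the documented JSON bodies above.

def build_insert(app_key: str, fields: dict) -> dict:
    # record_insert: applicant-node fields map keyed by field title
    return {"app_key": app_key, "fields": dict(fields),
            "submit_type": "submit", "verify_write": True}

def build_update(app_key: str, record_id: int, fields: dict,
                 view_id: str = "system:all") -> dict:
    # record_update: view-scoped fields map; system:all is the documented default
    return {"app_key": app_key, "record_id": record_id,
            "fields": dict(fields), "view_id": view_id, "verify_write": True}

def build_delete(app_key: str, record_ids: list) -> dict:
    # record_delete: record_id or record_ids only, no fields map
    return {"app_key": app_key, "record_ids": list(record_ids)}

insert_payload = build_insert("APP_1", {"客户名称": "测试客户", "合同金额": 1000})
update_payload = build_update("APP_1", 123, {"合同金额": 2000})
delete_payload = build_delete("APP_1", [123, 124])
```

Note how the delete payload carries no `fields` map at all, which mirrors the semantics split introduced in this release.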
@@ -88,7 +89,7 @@ Do not do this:
 - do not invent formulas or expressions
 - do not auto-fill missing required fields
 - do not guess relation targets without first resolving them
-- do not claim a blocked `record_write` was executed
+- do not claim a blocked direct write was executed
 
 ## Unsupported Direct Writes
 
@@ -99,10 +100,10 @@ Do not attempt direct app-user writes for these field types:
 - `35` image generation
 - `36` document parsing
 
-If the payload includes them, stop after the blocked `record_write` response and explain that the tool does not support a reliable direct write for those fields yet.
+If the payload includes them, stop after the blocked response and explain that the tool does not support a reliable direct write for those fields yet.
 
 ## Relation, Attachment, and Subtable Rules
 
 - Relation fields are record-id based. Resolve the referenced target first, then write the relation field with the real `record_id`.
-- Attachment fields are two-step: upload first with `file_upload_local`, then reuse the returned attachment payload in `record_write`.
+- Attachment fields are two-step: upload first with `file_upload_local`, then reuse the returned attachment payload in `record_insert` or `record_update`.
 - Subtable writes require the current schema shape; when updating existing subtable rows, preserve row ids if the current record exposes them.
@@ -1,6 +1,6 @@
 ---
 name: qingflow-record-crud
-description: Browse, read, create, update, delete, and import Qingflow records after the MCP is already connected and authenticated. Use when the user wants schema-first record CRUD, import-template verification/import execution, or SQL-like JSON DSL writes. Do not use this skill for task-center workflow actions or final statistical analysis.
+description: Browse, read, create, update, delete, and import Qingflow records after the MCP is already connected and authenticated. Use when the user wants schema-first record CRUD or import-template verification/import execution. Do not use this skill for task-center workflow actions or final statistical analysis.
 metadata:
   short-description: Schema-first Qingflow record CRUD and import
 ---
@@ -18,17 +18,20 @@ Use exactly one of these default paths:
 
 1. Browse records: `app_get -> record_schema_get(schema_mode="browse", view_id=...) -> record_list`
 2. Read one record: `app_get -> record_schema_get(schema_mode="browse", view_id=...) -> record_get`
-3. Insert records: `record_schema_get(schema_mode="applicant") -> record_write(operation="insert")`
-4. Update records: `app_get -> choose accessible view -> record_schema_get(schema_mode="browse", view_id=...) -> record_write(operation="update", view_id=...)`
-5. Run a code-block field: `record_schema_get(schema_mode="applicant") -> record_code_block_run`
-6. Import records: `record_import_template_get -> record_import_verify -> (optional authorized file repair) -> record_import_start -> record_import_status_get`
+3. Insert records: `record_schema_get(schema_mode="applicant") -> record_insert`
+4. Update records: `app_get -> choose accessible view -> record_schema_get(schema_mode="browse", view_id=...) -> record_update`
+5. Delete records: `record_list / record_get -> record_delete`
+6. Run a code-block field: `record_schema_get(schema_mode="applicant") -> record_code_block_run`
+7. Import records: `record_import_template_get -> record_import_verify -> (optional authorized file repair) -> record_import_start -> record_import_status_get`
 
 ## Core Tools
 
 - `record_schema_get`
 - `record_list`
 - `record_get`
-- `record_write`
+- `record_insert`
+- `record_update`
+- `record_delete`
 - `record_code_block_run`
 - `record_import_template_get`
 - `record_import_verify`
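The seven default paths added above each end in exactly one terminal tool. A hypothetical dispatch-table sketch of that property; the task labels and tuple structure are illustrative, and only the tool names come from the documented chains:

```python
# Illustrative routing table: one default path per task kind, each ending
# in a single terminal tool. Task labels are hypothetical shorthand.
DEFAULT_PATHS = {
    "browse":     ("app_get", "record_schema_get", "record_list"),
    "read":       ("app_get", "record_schema_get", "record_get"),
    "insert":     ("record_schema_get", "record_insert"),
    "update":     ("app_get", "record_schema_get", "record_update"),
    "delete":     ("record_list", "record_delete"),
    "code_block": ("record_schema_get", "record_code_block_run"),
    "import":     ("record_import_template_get", "record_import_verify",
                   "record_import_start", "record_import_status_get"),
}

def terminal_tool(task: str) -> str:
    # Unknown task kinds fail loudly instead of guessing a write tool.
    return DEFAULT_PATHS[task][-1]
```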
@@ -39,7 +42,7 @@ Use exactly one of these default paths:
 `record_schema_get(schema_mode="applicant")` exposes the current user's applicant-node visible write/create fields.
 Its top-level `fields` now list only direct applicant-write fields; subtable leaf columns stay under the parent subtable field's `write_format.subfields`.
 `record_schema_get(schema_mode="browse", view_id=...)` exposes browse-schema fields for the selected accessible view.
-In browse mode, `writable=true` means `record_write(operation="update", view_id=...)` can target that field in the selected view; `writable=false` means the chosen view blocks direct update for that field.
+In browse mode, `writable=true` means `record_update` can target that field in the selected view; `writable=false` means the chosen view blocks direct update for that field.
 Read top-level `fields` and `suggested_*`; if a field is missing, treat it as unavailable in the current permission scope.
 
 ## Supporting Tools
@@ -72,7 +75,7 @@ Use `record_member_candidates` / `record_department_candidates` as the default l
 10. If the request is analysis-like, switch to [$qingflow-record-analysis](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-record-analysis/SKILL.md)
 11. If the request is write-like, decide `insert / update / delete` before building any payload
 12. For `insert`, stay on `schema_mode="applicant"` and do not introduce any view selector
-13. For `update`, call `app_get` first, choose an accessible view, and carry that `view_id` into `record_write`
+13. For `update`, call `app_get` first, choose an accessible view, and carry that `view_id` into `record_update`
 14. For subtable writes, use the parent subtable field and submit `rows/tableValues`; do not treat subtable leaf titles as top-level write selectors
 15. If fields are still ambiguous after `record_schema_get`, ask the user to confirm from a short candidate list instead of guessing
 16. For high-risk writes or production changes, read the current state first whenever practical
@@ -92,7 +95,10 @@ Use `record_member_candidates` / `record_department_candidates` as the default l
 
 ## Record Write Rules
 
-Use `record_write` as the only default write tool.
+Use the three direct-write tools as separate paths:
+- `record_insert` for create
+- `record_update` for edit
+- `record_delete` for delete
 
 ### Write workflow
 
@@ -100,56 +106,43 @@
 2. Decide whether the task is `insert`, `update`, or `delete`
 3. If this is workflow work for an ordinary assignee, switch to [$qingflow-task-ops](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-task-ops/SKILL.md) instead of forcing direct CRUD
 4. If this is `insert`, stay on applicant schema only and do not pass any `view_*` selector
-5. If this is `update`, choose an accessible view first, inspect `record_schema_get(schema_mode="browse", view_id=...)`, and pass that `view_id` into `record_write`
-6. For relation fields, read `target_app_key / target_app_name` from schema first
-7. For member fields with unknown ids, run `record_member_candidates`
-8. For department fields with unknown ids, run `record_department_candidates`
-9. For subtable writes, target the parent subtable field and fill `rows/tableValues`; subtable leaf titles are not valid top-level `record_write` selectors
-10. Build SQL-like JSON clauses
-11. Run `record_write`
+5. If this is `update`, choose an accessible view first, inspect `record_schema_get(schema_mode="browse", view_id=...)`, and pass that `view_id` into `record_update`
+6. For relation fields, read `target_app_key / target_app_name / searchable_fields` from schema first
+7. For member, department, and relation fields, natural strings are allowed; the write path now tries backend-native auto resolution before it asks for confirmation
+8. If a member or department field still needs explicit lookup, run `record_member_candidates` or `record_department_candidates`
+9. For subtable writes, target the parent subtable field and fill row objects or `tableValues`; subtable leaf titles are not valid top-level selectors
+10. Build a field-title keyed `fields` map
+11. Run `record_insert`, `record_update`, or `record_delete`
 12. If `ok=false`, explain `field_errors` first, then summarize blockers; do not report a write as executed
 13. If `ok=true`, report the affected `record_id` or created resource
 14. For important writes, keep `verify_write=true`
 
-### SQL-like JSON DSL
-
-The DSL is clause-shaped like SQL, but it is **not raw SQL text**.
-
-| operation | required shape | compact example |
-|-----------|----------------|-----------------|
-| `insert` | `values` | `{"operation":"insert","values":[{"field_id":12,"value":"测试客户"}]}` |
-| `update` | `record_id + set` | `{"operation":"update","record_id":123,"set":[{"field_id":18,"value":2000}]}` |
-| `delete` | `record_id` or `record_ids` | `{"operation":"delete","record_ids":[123,124]}` |
-
 ### Write discipline
 
-- `insert` uses `values`
-- `update` uses `set`
-- `delete` uses `record_id` or `record_ids`
-- `insert` strictly follows applicant-node write scope and does not accept `view_id / list_type / view_key / view_name`
-- `update` must carry an explicit `view_id / list_type / view_key / view_name`; prefer `view_id` chosen from `app_get.accessible_views`
+- `record_insert` uses an applicant-node `fields` map keyed by field title
+- `record_update` uses a view-scoped `fields` map keyed by field title
+- `record_delete` uses `record_id` or `record_ids`
+- `record_insert` strictly follows applicant-node write scope and does not accept `view_id / list_type / view_key / view_name`
+- `record_update` should receive an explicit `view_id`; if omitted, the tool tries `system:all` first and fails clearly when that view is not accessible
 - `update` should be treated as “view-scoped direct edit”, not as a task or approval shortcut
 - `record_schema_get(schema_mode="browse").fields[*].writable` is the contract for whether the selected view allows direct update on that field
-- subtable writes must go through the parent subtable field with `rows/tableValues`; top-level subtable leaf selectors now fail fast
-- Do not send raw SQL text
-- Do not invent formulas or expressions
-- Do not use free-form `WHERE` updates or deletes
+- subtable writes must go through the parent subtable field with row objects or `tableValues`; top-level subtable leaf selectors now fail fast
 - Do not auto-fill missing fields
-- Do not auto-resolve relation targets without first querying them
-- Do not assume member display names resolve automatically; `record_member_candidates` returns the current field candidate scope, and write paths now reject member values outside that scope when it is supported
-- Do not assume department names resolve automatically; `record_department_candidates` returns the current field candidate scope, and write paths now reject department values outside that scope when it is supported
+- Member, department, and relation fields may accept natural strings when the backend candidate scope resolves them uniquely
+- If any lookup field returns `status="needs_confirmation"`, stop and surface `confirmation_requests`; do not submit a partial write
+- For relation fields, use schema `searchable_fields` as the mental model for what the backend will search; do not invent a custom match-field title
 - For default-all member or department fields, prefer the field candidate tools; do not start with `directory_*`
 - Do not invent member or department ids/names when a candidate tool is available; choose an exact returned candidate item
-- If member or department candidate lookup fails, stop and surface the lookup error; do not retry the write with guessed ids or names
+- If member, department, or relation auto resolution fails, stop and surface the lookup error; do not retry the write with guessed ids or names
 - Do not assume `record_schema_get` is a builder/full-field schema.
 
 ### Complex field quick examples
 
-- `member`: `✅ {"field_id":5,"value":{"id":7,"value":"张三"}}` / `✅ {"field_id":5,"value":"张三"}` only when it exactly matches a current candidate / `❌ {"field_id":5,"value":"不存在的成员"}`
-- `department`: `✅ {"field_id":22,"value":{"id":336193,"value":"北斗组"}}` / `✅ {"field_id":22,"value":"北斗组"}` only when it exactly matches a current candidate / `❌ {"field_id":22,"value":"不存在的部门"}`
-- `relation`: `✅ {"field_id":25,"value":{"apply_id":5001}}` / `❌ {"field_id":25,"value":"客户A"}`
-- `attachment`: upload first, then `✅ {"field_id":13,"value":{"value":"https://.../a.pdf","name":"a.pdf"}}` / `❌ {"field_id":13,"value":"/tmp/a.pdf"}`
-- `subtable`: `✅ {"field_id":18,"value":{"rows":[{"产品名称":"企业版","数量":2}]}}` / `✅ {"field_id":18,"value":{"tableValues":[{"queId":101,"values":[{"value":"企业版"}]},{"queId":102,"values":[{"value":2}]}]}}` / `❌ {"field_id":101,"value":"企业版"}`
+- `member`: `✅ {"负责人":{"id":7,"value":"张三"}}` / `✅ {"负责人":"张三"}` when it resolves uniquely in backend scope / `⚠️ {"负责人":"张三"}` may return `needs_confirmation` when multiple candidates match
+- `department`: `✅ {"所属部门":{"id":336193,"value":"北斗组"}}` / `✅ {"所属部门":"北斗组"}` when it resolves uniquely in backend scope / `⚠️ {"所属部门":"研发部"}` may return `needs_confirmation` when paths collide
+- `relation`: `✅ {"关联客户":{"apply_id":5001}}` / `✅ {"关联客户":"客户A"}` when it resolves uniquely inside schema `searchable_fields` / `⚠️ {"关联客户":"客户A"}` may return `needs_confirmation` when multiple records match
+- `attachment`: upload first, then `✅ {"合同附件":{"value":"https://.../a.pdf","name":"a.pdf"}}` / `❌ {"合同附件":"/tmp/a.pdf"}`
+- `subtable`: `✅ {"销售明细":[{"产品名称":"企业版","数量":2}]}` / `✅ {"销售明细":{"tableValues":[{"queId":101,"values":[{"value":"企业版"}]},{"queId":102,"values":[{"value":2}]}]}}` / `❌ {"产品名称":"企业版"}`
 
 ## Code Block Rules
 
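The confirmation-aware write flow introduced in this hunk can be sketched as a small outcome classifier. The response keys (`ok`, `status`, `confirmation_requests`, `field_errors`) are the ones the skill text names; `call_tool` and the outcome tuples are hypothetical glue, not part of the MCP surface:

```python
# Sketch of the confirmation-aware direct-write loop: a natural string may
# auto-resolve, come back as needs_confirmation with candidates, or be
# blocked by preflight. call_tool is a hypothetical transport callable.

def submit_direct_write(call_tool, tool_name: str, payload: dict):
    resp = call_tool(tool_name, payload)
    if resp.get("status") == "needs_confirmation":
        # Stop and surface candidates; never submit a partial write.
        return "needs_confirmation", resp.get("confirmation_requests", [])
    if not resp.get("ok"):
        # Preflight blocked the write; explain field_errors first.
        return "blocked", resp.get("field_errors", [])
    return "applied", resp
```

The key point is that `needs_confirmation` is checked before `ok`, so an ambiguous member/department/relation string halts the flow instead of degrading into a guessed write.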
@@ -182,7 +175,7 @@ Use `record_code_block_run` when the user wants to run a form code-block field a
 
 ## Import Rules
 
-Use the import tools for file-based bulk data loading, not `record_write`.
+Use the import tools for file-based bulk data loading, not `record_insert` or `record_update`.
 
 ### Import workflow
 
@@ -215,14 +208,14 @@ Use the import tools for file-based bulk data loading, not `record_write`.
 - `record_list` returns browse/sample data, not final analysis conclusions
 - If `view_id` is a custom view, treat the result as convenience browse output only when `warnings` includes `CUSTOM_VIEW_FILTER_UNVERIFIED`. In that case do not state the saved filter result as a verified fact.
 - Prefer `system:all` plus explicit `where` filters whenever the user needs a trustworthy scoped dataset.
-- `record_write` always performs internal static preflight before any apply
-- `record_write insert` is validated strictly against applicant-node create scope
-- `record_write update` is validated against the selected view for field scope, then against real record edit permission
+- `record_insert`, `record_update`, and `record_delete` always perform internal static preflight before any apply
+- `record_insert` is validated strictly against applicant-node create scope
+- `record_update` is validated against the selected view for field scope, then against real record edit permission
 - if a subtable leaf is missing from applicant top-level schema, that is expected; inspect the parent subtable field's `write_format.subfields`
-- If `record_write` returns `ok=false`, the write was blocked and not executed
-- If `record_write` raises `WRITE_PERMISSION_DENIED`, explain that direct edit permission is missing and prefer task-center actions for ordinary workflow users
+- If a direct-write tool returns `ok=false`, the write was blocked and not executed
+- If a direct-write tool raises `WRITE_PERMISSION_DENIED`, explain that direct edit permission is missing and prefer task-center actions for ordinary workflow users
 - Prefer explaining `field_errors` before summarizing top-level blockers
-- If `record_write` returns `ok=true`, still check `verification` and `warnings` before claiming success
+- If a direct-write tool returns `ok=true`, still check `verification` and `warnings` before claiming success
 - Prefer canonical schema titles and aliases in your final wording
 - If only part of the requested work is completed, explicitly disclose which parts are done and which are not
 
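The reporting discipline above, that `ok=true` alone is not success, can be sketched as a small summarizer. The top-level keys (`ok`, `verification`, `warnings`, `field_errors`) come from this section; the inner `verification["passed"]` shape and the wording of the summaries are assumptions for illustration:

```python
# Illustrative summarizer: ok=false means blocked, and ok=true still
# requires checking verification and warnings before claiming success.
# The verification sub-shape ("passed") is a hypothetical assumption.

def report_outcome(resp: dict) -> str:
    if not resp.get("ok"):
        errors = resp.get("field_errors", [])
        return "blocked: " + ("; ".join(map(str, errors)) or "see blockers")
    verification = resp.get("verification") or {}
    if verification.get("passed") is False:
        return "applied, but verification failed; do not claim success"
    if resp.get("warnings"):
        return "applied with warnings: " + ", ".join(resp["warnings"])
    return "applied"
```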
@@ -1,5 +1,4 @@
 interface:
   display_name: "Qingflow Record CRUD"
   short_description: "Browse, read, and write Qingflow records with schema-first DSLs"
-  default_prompt: "Use $qingflow-record-crud for Qingflow record browsing, detail lookup, and record_write operations. Start with record_schema_get, choose record_list, record_get, or record_write, and keep analysis work in $qingflow-record-analysis."
-
+  default_prompt: "Use $qingflow-record-crud for Qingflow record browsing, detail lookup, and direct record edits. Start with record_schema_get, choose record_list, record_get, record_insert, record_update, or record_delete, and keep analysis work in $qingflow-record-analysis."
@@ -13,21 +13,19 @@ For final statistics, grouped distributions, rankings, trends, or insight-style
 
 ## Write Preflight
 
-- `record_write` always performs internal static preflight before any apply
-- If `record_write` returns `ok=false`, the write was blocked and not executed
+- `record_insert`, `record_update`, and `record_delete` always perform internal static preflight before any apply
+- If a direct-write tool returns `ok=false`, the write was blocked and not executed
 - Explain `field_errors` before summarizing blockers
 - Use `record_schema_get` when field titles are uncertain instead of guessing ids
 - Prefer `verify_write=true` for complex, relation-heavy, subtable, or production writes
-- Even when `record_write` returns `ok=true`, it may still surface verification failures; do not report success before checking them
+- Even when a direct-write tool returns `ok=true`, it may still surface verification failures; do not report success before checking them
 
 ## Write Semantics
 
-- `insert` uses `values`
-- `update` uses `set`
-- `delete` uses `record_id` or `record_ids`
-- Do not send raw SQL strings
+- `record_insert` uses an applicant-node `fields` map
+- `record_update` uses a view-scoped `fields` map
+- `record_delete` uses `record_id` or `record_ids`
 - Do not fake formula or expression fields
-- Do not perform free-form bulk updates or deletes
 - Do not guess relation targets from display text; resolve the real `record_id` first
 
 ## Attachments
@@ -44,7 +44,7 @@ Production behavior:
 Production guardrails:
 
 - never assume a record id, app id, or workspace id
-- treat `record_write(operation="delete")` as high risk
+- treat `record_delete` as high risk
 - if the task can be answered read-only, do not write
 
 ## Reporting Rule
@@ -55,4 +55,3 @@ For record CRUD operations, always report:
 - target app
 - operation type: read, create, update, or delete
 - affected record count or ids
-
@@ -35,13 +35,13 @@ Without `columns`, `record_get` still returns only applicant-node visible fields
 
 ## Write Pattern
 
-Use `record_schema_get -> record_write`.
+Use `record_schema_get -> record_insert / record_update / record_delete`.
 
 1. Confirm the target app
 2. Resolve fields with `record_schema_get`
 3. Decide whether the task is `insert`, `update`, or `delete`
-4. Build SQL-like JSON clauses
-5. Run `record_write`
+4. Build a field-title keyed `fields` map for insert/update
+5. Run `record_insert`, `record_update`, or `record_delete`
 6. If `ok=false`, explain `field_errors` first, then summarize blockers; stop because the write was not executed
 7. If `ok=true`, report the affected resource and any verification outcome
 8. For important writes, keep `verify_write=true`
@@ -50,11 +50,11 @@ Use `record_schema_get -> record_write`.
 
 ```json
 {
-  "operation": "insert",
-  "values": [
-    { "field_id": 12, "value": "测试客户" },
-    { "field_id": 18, "value": 1000 }
-  ],
+  "app_key": "APP_1",
+  "fields": {
+    "客户名称": "测试客户",
+    "合同金额": 1000
+  },
   "submit_type": "submit",
   "verify_write": true
 }
@@ -64,11 +64,12 @@ Use `record_schema_get -> record_write`.
 
 ```json
 {
-  "operation": "update",
+  "app_key": "APP_1",
   "record_id": 123,
-  "set": [
-    { "field_id": 18, "value": 2000 }
-  ],
+  "fields": {
+    "合同金额": 2000
+  },
+  "view_id": "system:all",
   "verify_write": true
 }
 ```
@@ -77,7 +78,7 @@ Use `record_schema_get -> record_write`.
 
 ```json
 {
-  "operation": "delete",
+  "app_key": "APP_1",
  "record_ids": [123, 124]
 }
 ```
@@ -92,7 +93,7 @@ Do not do this:
 - do not auto-fill missing required fields
 - do not guess relation targets without first resolving them
 - do not guess hidden or missing fields from prior builder knowledge; if the field is absent from applicant-node schema, stop and explain the permission boundary
-- do not claim a blocked `record_write` was executed
+- do not claim a blocked direct write was executed
 
 ## Unsupported Direct Writes
 
@@ -103,12 +104,12 @@ Do not attempt direct writes for these field types:
 - `35` image generation
 - `36` document parsing
 
-If the payload includes them, stop after the blocked `record_write` response and explain that the tool does not support a reliable direct write for those fields yet.
+If the payload includes them, stop after the blocked response and explain that the tool does not support a reliable direct write for those fields yet.
 
 ## Relation, Attachment, and Subtable Rules
 
 - Relation fields are record-id based. Resolve the referenced target first, then write the relation field with the real `record_id`.
-- Attachment fields are two-step: upload first with `file_upload_local`, then reuse the returned attachment payload in `record_write`.
+- Attachment fields are two-step: upload first with `file_upload_local`, then reuse the returned attachment payload in `record_insert` or `record_update`.
 - Subtable writes require the current schema shape; when updating existing subtable rows, preserve row ids if the current record exposes them.
 
 ### Quick field examples
@@ -53,7 +53,8 @@ If an accessible view has `analysis_supported=false`, do not use it for `record_
 
 ## Schema-First Rule
 
-Call `record_schema_get(schema_mode="applicant")` before `record_write`.
+Call `record_schema_get(schema_mode="applicant")` before `record_insert`.
+Call `record_schema_get(schema_mode="applicant")` before `record_code_block_run`.
 Call `app_get` first when the data range is unclear, then use `record_schema_get(schema_mode="browse", view_id=...)` before `record_list`, `record_get`, or `record_analyze`.
 
 - All `field_id` values must come from the schema response.
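The schema-first rule above, that every `field_id` must come from the schema response rather than memory, can be sketched as a lookup that fails instead of guessing. The schema shape used here (`fields` as a list of `{field_id, title}` objects) is an assumption for illustration, not the documented response contract:

```python
# Schema-first sketch: resolve field ids only from the current
# record_schema_get response; a missing title means the field is
# unavailable in the current permission scope, so stop, don't guess.

def resolve_field_id(schema: dict, title: str) -> int:
    for field in schema.get("fields", []):
        if field.get("title") == title:
            return field["field_id"]
    raise KeyError(f"field {title!r} not in current schema scope")

schema = {"fields": [{"field_id": 12, "title": "客户名称"},
                     {"field_id": 18, "title": "合同金额"}]}
```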
@@ -94,7 +95,9 @@ Analysis answers must include concrete numbers. When applicable, include percent
  ## Record CRUD Path

  `app_get -> record_schema_get(schema_mode="browse", view_id=...) -> record_list / record_get`
- `record_schema_get(schema_mode="applicant") -> record_write`
+ `record_schema_get(schema_mode="applicant") -> record_insert`
+ `app_get -> record_schema_get(schema_mode="browse", view_id=...) -> record_update`
+ `record_list / record_get -> record_delete`
  `record_schema_get(schema_mode="applicant") -> record_code_block_run`

  - Use `columns` as `[{{field_id}}]`
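The canonical DSL shapes named in the hunk above (`columns` as a list of `{field_id}` objects, `order_by` items as `{field_id, direction}`) can be sketched as a payload literal. The field and view ids below are placeholders; per the schema-first rule, real ids must come from `record_schema_get`.

```python
# Hypothetical record_list payload in the canonical DSL shape.
# Ids are placeholders for illustration only.
payload = {
    "view_id": "view-123",  # placeholder view id
    "columns": [{"field_id": 1001}, {"field_id": 1002}],
    "order_by": [{"field_id": 1001, "direction": "desc"}],
}
```

Note the contrast with the legacy forms (bare integer `field_id`, `fieldId`, `order`): those may still parse, but only the object shapes shown here are canonical.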
@@ -102,14 +105,12 @@ Analysis answers must include concrete numbers. When applicable, include percent
  - Use `order_by` items as `{{field_id, direction}}`
  - Legacy forms such as bare integer `field_id`, `fieldId`, `operator`, `values`, or `order` may still parse, but they are compatibility-only and not the canonical DSL

- `record_write` uses SQL-like JSON clauses:
-
- - `insert` -> `values`
- - `update` -> `record_id + set`
- - `delete` -> `record_id` or `record_ids`
+ - `record_insert` uses an applicant-node `fields` map keyed by field title.
+ - `record_update` uses a view-scoped `fields` map keyed by field title.
+ - `record_delete` deletes by `record_id` or `record_ids`.

  - Read relation targets from `record_schema_get.target_app_key` / `target_app_name` before preparing relation writes.
- - If a member or department field id is known but candidate ids are not, use `record_member_candidates` or `record_department_candidates` before `record_write`.
+ - If a member or department field id is known but candidate ids are not, use `record_member_candidates` or `record_department_candidates` before `record_insert` or `record_update`.
  - For default-all member or department fields, prefer those field candidate tools instead of starting with `directory_*`.

  ## Code Block Path
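The three write shapes added in the hunk above can be sketched as payload literals. Everything here is a placeholder sketch under the stated rules (insert and update take a `fields` map keyed by field title, update is view-scoped, delete takes record ids); the exact parameter names are assumptions.

```python
# Hypothetical payloads for the three write tools replacing record_write.
# Field titles, record ids, and the view id are placeholders.

insert_payload = {
    "fields": {"Title": "Q3 report", "Owner": "member-42"},
}

update_payload = {
    "record_id": "rec-001",
    "view_id": "view-123",  # record_update is view-scoped
    "fields": {"Status": "Approved"},
}

delete_payload = {"record_ids": ["rec-001", "rec-002"]}
```

Compared with the old SQL-like `record_write` clauses, the key change is that writes are keyed by field title rather than by `values`/`set` clauses.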
@@ -41,7 +41,8 @@ If an accessible view has `analysis_supported=false`, do not use it for `record_

  ## Schema-First Rule

- Call `record_schema_get(schema_mode="applicant")` before `record_write`.
+ Call `record_schema_get(schema_mode="applicant")` before `record_insert`.
+ Call `record_schema_get(schema_mode="applicant")` before `record_code_block_run`.
  Call `app_get` first when the data range is unclear, then use `record_schema_get(schema_mode="browse", view_id=...)` before `record_list`, `record_get`, or `record_analyze`.

  - All `field_id` values must come from the schema response.
@@ -82,7 +83,9 @@ Analysis answers must include concrete numbers. When applicable, include percent
  ## Record CRUD Path

  `app_get -> record_schema_get(schema_mode="browse", view_id=...) -> record_list / record_get`
- `record_schema_get(schema_mode="applicant") -> record_write`
+ `record_schema_get(schema_mode="applicant") -> record_insert`
+ `app_get -> record_schema_get(schema_mode="browse", view_id=...) -> record_update`
+ `record_list / record_get -> record_delete`
  `record_schema_get(schema_mode="applicant") -> record_code_block_run`

  - Use `columns` as `[{{field_id}}]`
@@ -90,14 +93,12 @@ Analysis answers must include concrete numbers. When applicable, include percent
  - Use `order_by` items as `{{field_id, direction}}`
  - Legacy forms such as bare integer `field_id`, `fieldId`, `operator`, `values`, or `order` may still parse, but they are compatibility-only and not the canonical DSL

- `record_write` uses SQL-like JSON clauses:
-
- - `insert` -> `values`
- - `update` -> `record_id + set`
- - `delete` -> `record_id` or `record_ids`
+ - `record_insert` uses an applicant-node `fields` map keyed by field title.
+ - `record_update` uses a view-scoped `fields` map keyed by field title.
+ - `record_delete` deletes by `record_id` or `record_ids`.

  - Read relation targets from `record_schema_get.target_app_key` / `target_app_name` before preparing relation writes.
- - If a member or department field id is known but candidate ids are not, use `record_member_candidates` or `record_department_candidates` before `record_write`.
+ - If a member or department field id is known but candidate ids are not, use `record_member_candidates` or `record_department_candidates` before `record_insert` or `record_update`.
  - For default-all member or department fields, prefer those field candidate tools instead of starting with `directory_*`.

  ## Code Block Path