@josephyan/qingflow-app-user-mcp 0.2.0-beta.21 → 0.2.0-beta.23

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,181 @@
+ ---
+ name: qingflow-record-crud
+ description: Browse, read, create, update, and delete Qingflow records after the MCP is already connected and authenticated. Use when the user wants schema-first record CRUD with a SQL-like JSON DSL. Do not use this skill for task-center workflow actions or final statistical analysis.
+ metadata:
+   short-description: Schema-first Qingflow record CRUD
+ ---
+
+ # Qingflow Record CRUD
+
+ ## Overview
+
+ This skill is for **record CRUD only** inside existing Qingflow apps.
+
+ Use it for:
+
+ - record browsing
+ - record detail lookup
+ - record create / update / delete
+ - attachment preflight before a write
+ - relation/member/department lookup that directly supports a write
+
+ Do **not** use this skill for:
+
+ - grouped analysis, ratios, rankings, trends, or final statistical conclusions
+   Switch to [$qingflow-record-analysis](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-record-analysis/SKILL.md)
+ - task-center workflow actions, comments, or directory-driven operational workflows
+   Switch to [$qingflow-task-ops](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-task-ops/SKILL.md)
+
+ This skill assumes the MCP is already connected, authenticated, and bound to the correct workspace.
+ If not, switch to [$qingflow-mcp-setup](/Users/yanqidong/.codex/skills/qingflow-mcp-setup/SKILL.md) first.
+
+ Before operating on data, identify whether the task targets `test` or `prod` and read [references/environments.md](references/environments.md).
+ If the user did not specify one, default to `prod`.
+
+ ## Default Paths
+
+ Use exactly one of these default paths:
+
+ 1. Browse records
+    `record_schema_get -> record_list`
+
+ 2. Read one record
+    `record_schema_get -> record_get`
+
+ 3. Write records
+    `record_schema_get -> record_write`
+
+ 4. Analysis
+    Switch to [$qingflow-record-analysis](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-record-analysis/SKILL.md)
+
+ ## Core Tools
+
+ - `record_schema_get`
+ - `record_list`
+ - `record_get`
+ - `record_write`
+
+ `record_schema_get` now returns the **current user's applicant-node schema only**:
+
+ - only fields visible to the current user at the applicant node are returned
+ - hidden fields are omitted entirely
+ - treat a missing field as not visible or not usable to the current user at the applicant node, not as a reason to guess a different field
+
+ ## Supporting Tools
+
+ - `directory_search`
+ - `directory_list_internal_users`
+ - `directory_list_internal_departments`
+ - `directory_list_sub_departments`
+ - `file_get_upload_info`
+ - `file_upload_local`
+
+ ## Standard Operating Order
+
+ 1. Ensure auth exists
+ 2. Ensure workspace is selected
+ 3. Confirm target app and whether the task is browse / detail / write / analysis
+ 4. Run `record_schema_get` before any non-trivial record read or write
+ 5. If the request is analysis-like, switch to [$qingflow-record-analysis](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-record-analysis/SKILL.md)
+ 6. If the request is write-like, decide `insert / update / delete` before building any payload
+ 7. If fields are still ambiguous after `record_schema_get`, ask the user to confirm from a short candidate list instead of guessing
+ 8. For high-risk writes or production changes, read the current state first whenever practical
+ 9. After actions, report the affected `record_id`, counts, or returned item count
+
+ ## Record Read Rules
+
+ - Use `record_list` for browse/export/sample inspection only
+ - Use `record_get` when `record_id` is known
+ - `record_get` without explicit `columns` still returns only applicant-node visible fields; do not assume it exposes the full builder-side record
+ - `record_list` accepts:
+   - `columns`
+   - `where`
+   - `order_by`
+   - `limit`
+   - `page`
+ - `record_list` and `record_get` may reject hidden-field `field_id`s because record tools now validate against the applicant-node visible schema only
+ - `record_list` is **not** an analysis tool
+ - If a request turns into grouped distributions, ratios, rankings, trends, or final statistical conclusions, switch to [$qingflow-record-analysis](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-record-analysis/SKILL.md)
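+
+ The accepted arguments above can be combined into one browse payload. A minimal sketch — the `field_id`s and values are illustrative, and the `op`/`direction` keys inside `where` and `order_by` are assumptions, since this skill does not spell out the exact filter shape:
+
+ ```json
+ {
+   "columns": [12, 18],
+   "where": [
+     { "field_id": 12, "op": "=", "value": "测试客户" }
+   ],
+   "order_by": [{ "field_id": 18, "direction": "desc" }],
+   "limit": 20,
+   "page": 1
+ }
+ ```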
+
+ ## Record Write Rules
+
+ Use `record_write` as the only default write tool.
+
+ ### Write workflow
+
+ 1. Run `record_schema_get`
+ 2. Decide whether the task is `insert`, `update`, or `delete`
+ 3. Build SQL-like JSON clauses
+ 4. Run `record_write`
+ 5. If `ok=false`, explain `field_errors` first, then summarize blockers; do not report a write as executed
+ 6. If `ok=true`, report the affected `record_id` or created resource
+ 7. For important writes, keep `verify_write=true`
+
+ ### SQL-like JSON DSL
+
+ The DSL is clause-shaped like SQL, but it is **not raw SQL text**.
+
+ #### Insert
+
+ ```json
+ {
+   "operation": "insert",
+   "values": [
+     { "field_id": 12, "value": "测试客户" },
+     { "field_id": 18, "value": 1000 }
+   ],
+   "submit_type": "submit",
+   "verify_write": true
+ }
+ ```
+
+ #### Update
+
+ ```json
+ {
+   "operation": "update",
+   "record_id": 123,
+   "set": [
+     { "field_id": 18, "value": 2000 }
+   ],
+   "verify_write": true
+ }
+ ```
+
+ #### Delete
+
+ ```json
+ {
+   "operation": "delete",
+   "record_ids": [123, 124]
+ }
+ ```
+
+ ### Write discipline
+
+ - `insert` uses `values`
+ - `update` uses `set`
+ - `delete` uses `record_id` or `record_ids`
+ - Do not send raw SQL text
+ - Do not invent formulas or expressions
+ - Do not use free-form `WHERE` updates or deletes
+ - Do not auto-fill missing fields
+ - Do not auto-resolve relation targets without first querying them
+ - Do not assume `record_schema_get` is a builder/full-field schema. It is the current user's applicant-node visible schema only.
+
+ ## Response Interpretation
+
+ - `record_list` returns browse/sample data, not final analysis conclusions
+ - `record_write` always performs internal static preflight before any apply
+ - If `record_write` returns `ok=false`, the write was blocked and not executed
+ - Prefer explaining `field_errors` before summarizing top-level blockers
+ - If `record_write` returns `ok=true`, still check `verification` and `warnings` before claiming success
+ - Treat `request_route` as the source of truth for live route debugging
+ - Prefer canonical schema titles and aliases in your final wording
+ - If only part of the requested work is completed, explicitly disclose which parts are done and which are not
+
+ ## Resources
+
+ - Environment switching: [references/environments.md](references/environments.md)
+ - Record operation patterns: [references/record-patterns.md](references/record-patterns.md)
+ - Data gotchas: [references/data-gotchas.md](references/data-gotchas.md)
@@ -0,0 +1,5 @@
+ interface:
+   display_name: "Qingflow Record CRUD"
+   short_description: "Browse, read, and write Qingflow records with schema-first DSLs"
+   default_prompt: "Use $qingflow-record-crud for Qingflow record browsing, detail lookup, and record_write operations. Start with record_schema_get, choose record_list, record_get, or record_write, and keep analysis work in $qingflow-record-analysis."
+
@@ -0,0 +1,44 @@
1
+ # Data Gotchas
2
+
3
+ For final statistics, grouped distributions, rankings, trends, or insight-style conclusions, use [$qingflow-record-analysis](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-record-analysis/SKILL.md) instead of keeping that reasoning inside `$qingflow-record-crud`.
4
+
5
+ ## Record Reads
6
+
7
+ - `record_list` is for browsing, export, and sample inspection only
8
+ - `record_get` is for one exact record
9
+ - `record_schema_get` is applicant-node visible-only schema, not a builder/full-field schema
10
+ - if a field is absent from `record_schema_get`, treat it as not visible or not usable for the current user at the applicant node
11
+ - Do not present paged browse output as if it were a grouped or full-population conclusion
12
+ - If the browser and MCP disagree, compare `request_route.base_url` and `request_route.qf_version` first
13
+
14
+ ## Write Preflight
15
+
16
+ - `record_write` always performs internal static preflight before any apply
17
+ - If `record_write` returns `ok=false`, the write was blocked and not executed
18
+ - Explain `field_errors` before summarizing blockers
19
+ - Use `record_schema_get` when field titles are uncertain instead of guessing ids
20
+ - Prefer `verify_write=true` for complex, relation-heavy, subtable, or production writes
21
+ - Even when `record_write` returns `ok=true`, it may still surface verification failures; do not report success before checking them
22
+
23
+ ## Write Semantics
24
+
25
+ - `insert` uses `values`
26
+ - `update` uses `set`
27
+ - `delete` uses `record_id` or `record_ids`
28
+ - Do not send raw SQL strings
29
+ - Do not fake formula or expression fields
30
+ - Do not perform free-form bulk updates or deletes
31
+ - Do not guess relation targets from display text; resolve the real `record_id` first
32
+
33
+ ## Attachments
34
+
35
+ - Attachment fields are two-step: upload first, then write the returned URL object into the record
36
+ - `file_upload_local` may report `effective_upload_kind=login` even when the requested kind was `attachment`; this is an implementation fallback, not necessarily an error
37
+ - When debugging uploads, surface both `effective_upload_kind` and `upload_protocol`
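+
+ After `file_upload_local` succeeds, its returned payload is written like any other field value. A sketch of step 2 — the attachment `field_id` 21 is hypothetical, and the placeholder stands for whatever payload the upload returned:
+
+ ```json
+ {
+   "operation": "update",
+   "record_id": 123,
+   "set": [
+     { "field_id": 21, "value": "<attachment payload returned by file_upload_local>" }
+   ],
+   "verify_write": true
+ }
+ ```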
+
+ ## Subtables
+
+ - Subtable fields accept row objects keyed by subfield title, or native `tableValues`
+ - Use the current form schema's subfield titles; do not guess nested ids
+ - When updating existing subtable rows, preserve row ids if the source record returns them
+ - Nested subtable writes are still unsupported
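+
+ A hypothetical update writing two subtable rows keyed by subfield title, per the rules above. The subtable `field_id` 30 and the titles 产品名称/数量 are illustrative only; take the real titles from the current schema:
+
+ ```json
+ {
+   "operation": "update",
+   "record_id": 123,
+   "set": [
+     {
+       "field_id": 30,
+       "value": [
+         { "产品名称": "商品A", "数量": 2 },
+         { "产品名称": "商品B", "数量": 1 }
+       ]
+     }
+   ],
+   "verify_write": true
+ }
+ ```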
@@ -0,0 +1,58 @@
+ # Environment Switching
+
+ Use this reference before any data creation, update, or delete.
+
+ ## Step 1: Resolve the active environment
+
+ Decide explicitly whether the task targets:
+
+ - `test`: demos, mock data, smoke usage validation, training scenarios
+ - `prod`: real operational data and live record changes
+
+ If the user did not specify an environment, default to `prod`.
+
+ ## Test Environment
+
+ Use test for:
+
+ - mock or smoke data entry
+ - user acceptance demos
+ - data correction rehearsals
+
+ Test behavior:
+
+ - creating demo data is acceptable
+ - default to at least `5` records for mock or smoke datasets unless the user asks for fewer
+ - destructive cleanup is acceptable only when the record scope is explicit
+
+ ## Production Environment
+
+ Use production for:
+
+ - live data entry
+ - live business record updates
+ - controlled data correction or deletion
+
+ Production behavior:
+
+ - prefer browse or detail reads before any write
+ - restate the exact app and record scope before update or delete
+ - do not create mock, smoke, or demo data unless the user explicitly asks for it
+ - for bulk changes, summarize the target count before execution and the affected ids after execution
+ - destructive actions need explicit confirmation in the conversation context
+
+ Production guardrails:
+
+ - never assume a record id, app id, or workspace id
+ - treat `record_write(operation="delete")` as high risk
+ - if the task can be answered read-only, do not write
+
+ ## Reporting Rule
+
+ For record CRUD operations, always report:
+
+ - active environment
+ - target app
+ - operation type: read, create, update, or delete
+ - affected record count or ids
+
@@ -0,0 +1,112 @@
+ # Record Patterns
+
+ If the task shifts into grouped analysis, ratio, ranking, trend, or any final statistical conclusion, switch to [$qingflow-record-analysis](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-record-analysis/SKILL.md).
+
+ ## Browse Pattern
+
+ Use `record_schema_get -> record_list` when:
+
+ - the user wants to browse records
+ - the target `record_id` is unknown
+ - a delete or update target still needs confirmation
+ - the user needs sample rows or a small export
+
+ Remember that `record_schema_get` only exposes the current user's applicant-node visible fields. If a field is missing from that schema, treat it as unavailable in the current permission scope instead of trying to guess another `field_id`.
+
+ Keep the browse DSL simple:
+
+ - `columns`: field ids only
+ - `where`: flat AND filters only
+ - `order_by`: field sorting only
+ - `limit` and `page`: browsing intent only
+
+ Do not use `record_list` for grouped conclusions, ratios, rankings, trends, or any final statistical claim.
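+
+ The constraints above can be sketched as one flat request — a single AND-ed filter list, field ids only. The `op` and `direction` keys are assumptions, since the DSL's exact filter shape is not spelled out in this reference:
+
+ ```json
+ {
+   "columns": [12, 18],
+   "where": [
+     { "field_id": 12, "op": "=", "value": "测试客户" },
+     { "field_id": 18, "op": ">", "value": 500 }
+   ],
+   "order_by": [{ "field_id": 18, "direction": "desc" }],
+   "limit": 20,
+   "page": 1
+ }
+ ```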
+
+ ## Detail Pattern
+
+ Use `record_schema_get -> record_get` when:
+
+ - the exact `record_id` is known
+ - the user needs one record in detail
+ - a write target needs verification before action
+
+ Prefer passing explicit `columns` when the user only needs a subset of fields.
+ Without `columns`, `record_get` still returns only applicant-node visible fields, not the full builder-side record payload.
+
+ ## Write Pattern
+
+ Use `record_schema_get -> record_write`.
+
+ 1. Confirm the target app
+ 2. Resolve fields with `record_schema_get`
+ 3. Decide whether the task is `insert`, `update`, or `delete`
+ 4. Build SQL-like JSON clauses
+ 5. Run `record_write`
+ 6. If `ok=false`, explain `field_errors` first, then summarize blockers; stop because the write was not executed
+ 7. If `ok=true`, report the affected resource and any verification outcome
+ 8. For important writes, keep `verify_write=true`
+
+ ### Insert
+
+ ```json
+ {
+   "operation": "insert",
+   "values": [
+     { "field_id": 12, "value": "测试客户" },
+     { "field_id": 18, "value": 1000 }
+   ],
+   "submit_type": "submit",
+   "verify_write": true
+ }
+ ```
+
+ ### Update
+
+ ```json
+ {
+   "operation": "update",
+   "record_id": 123,
+   "set": [
+     { "field_id": 18, "value": 2000 }
+   ],
+   "verify_write": true
+ }
+ ```
+
+ ### Delete
+
+ ```json
+ {
+   "operation": "delete",
+   "record_ids": [123, 124]
+ }
+ ```
+
+ ## Write Anti-Patterns
+
+ Do not do this:
+
+ - do not send raw SQL text
+ - do not build free-form `WHERE` updates or deletes
+ - do not invent formulas or expressions
+ - do not auto-fill missing required fields
+ - do not guess relation targets without first resolving them
+ - do not guess hidden or missing fields from prior builder knowledge; if the field is absent from the applicant-node schema, stop and explain the permission boundary
+ - do not claim a blocked `record_write` was executed
+
+ ## Unsupported Direct Writes
+
+ Do not attempt direct writes for these field types:
+
+ - `14` time range
+ - `34` image recognition
+ - `35` image generation
+ - `36` document parsing
+
+ If the payload includes them, stop after the blocked `record_write` response and explain that the tool does not support a reliable direct write for those fields yet.
+
+ ## Relation, Attachment, and Subtable Rules
+
+ - Relation fields are record-id based. Resolve the referenced target first, then write the relation field with the real `record_id`.
+ - Attachment fields are two-step: upload first with `file_upload_local`, then reuse the returned attachment payload in `record_write`.
+ - Subtable writes require the current schema shape; when updating existing subtable rows, preserve row ids if the current record exposes them.
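+
+ The relation rule as a sketch: suppose a `record_list` lookup in the referenced app already resolved the target's `record_id` as `456`. The relation `field_id` 25 and the id itself are illustrative, not real values:
+
+ ```json
+ {
+   "operation": "update",
+   "record_id": 123,
+   "set": [
+     { "field_id": 25, "value": 456 }
+   ],
+   "verify_write": true
+ }
+ ```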
@@ -0,0 +1,148 @@
+ ---
+ name: qingflow-task-ops
+ description: Use Qingflow task center, workflow usage actions, comments, and directory lookup after the MCP is already connected and authenticated. Do not use this skill for record CRUD or final statistical analysis.
+ metadata:
+   short-description: Qingflow task center and workflow operations
+ ---
+
+ # Qingflow Task Ops
+
+ ## Overview
+
+ This skill is for **task-center and workflow usage operations** inside existing Qingflow apps.
+
+ Use it for:
+
+ - task-center browsing
+ - headline task counts
+ - grouped worksheet or workflow-node workload views
+ - approve / reject / rollback / transfer / urge
+ - record comments
+ - directory lookup that supports task or comment actions
+
+ Do **not** use this skill for:
+
+ - record create / update / delete
+   Switch to [$qingflow-record-crud](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-record-crud/SKILL.md)
+ - grouped analysis, ratios, rankings, trends, or final statistical conclusions
+   Switch to [$qingflow-record-analysis](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-record-analysis/SKILL.md)
+
+ This skill assumes the MCP is already connected, authenticated, and bound to the correct workspace.
+ If not, switch to [$qingflow-mcp-setup](/Users/yanqidong/.codex/skills/qingflow-mcp-setup/SKILL.md) first.
+
+ Before operating on live work, identify whether the task targets `test` or `prod` and read [references/environments.md](references/environments.md).
+ If the user did not specify one, default to `prod`.
+
+ ## Default Paths
+
+ Use exactly one of these default paths:
+
+ 1. Task headline counts
+    `task_summary`
+
+ 2. Flat task browsing
+    `task_list`
+
+ 3. Grouped workload buckets
+    `task_facets`
+
+ 4. Task or workflow action
+    `task_list / task_facets -> exact target -> task_* action`
+
+ 5. Comments and directory support
+    `record_get -> record_comment_*` or `directory_*`
+
+ ## Core Tools
+
+ - `task_summary`
+ - `task_list`
+ - `task_facets`
+ - `task_mark_read`
+ - `task_mark_all_cc_read`
+ - `task_urge`
+ - `task_approve`
+ - `task_reject`
+ - `task_rollback_candidates`
+ - `task_rollback`
+ - `task_transfer_candidates`
+ - `task_transfer`
+ - `record_comment_write`
+ - `record_comment_list`
+ - `record_comment_mentions`
+ - `record_comment_mark_read`
+
+ ## Supporting Tools
+
+ - `directory_search`
+ - `directory_list_internal_users`
+ - `directory_list_all_internal_users`
+ - `directory_list_internal_departments`
+ - `directory_list_all_departments`
+ - `directory_list_sub_departments`
+ - `directory_list_external_members`
+ - `record_get`
+
+ ## Standard Operating Order
+
+ 1. Ensure auth exists
+ 2. Ensure workspace is selected
+ 3. Confirm target app and whether the task is task browse / grouped workload / comment / workflow action
+ 4. Use `task_summary`, `task_list`, or `task_facets` to locate the exact target first
+ 5. If a workflow action is required, identify the exact `task_id`, `record_id`, and `workflow_node_id` whenever practical
+ 6. Use directory tools only when member/department lookup is needed to support the action
+ 7. For production actions, read current task or record state first whenever practical
+ 8. After actions, report the affected `task_id`, `record_id`, or returned item count
+
+ ## Task-Center Rules
+
+ - Use `task_summary` for headline counts
+ - Use `task_list` for flat browsing
+ - Use `task_facets` for grouped worksheet or workflow-node buckets
+ - `task_box` must be one of:
+   - `todo`
+   - `initiated`
+   - `cc`
+   - `done`
+ - `flow_status` must be one of:
+   - `all`
+   - `in_progress`
+   - `approved`
+   - `rejected`
+   - `pending_fix`
+   - `urged`
+   - `overdue`
+   - `due_soon`
+   - `unread`
+   - `ended`
+ - Task counts are task-center counts, not record counts
+ - If the user asks for workload by worksheet or node, use `task_facets`
+ - If a result set is truncated, describe it as covering only the returned groups (`已返回分组中`) or the main groups (`主要分组`)
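+
+ A minimal `task_list` argument sketch using only the enumerations above; any pagination or app-scoping keys are omitted because they are not specified here:
+
+ ```json
+ {
+   "task_box": "todo",
+   "flow_status": "in_progress"
+ }
+ ```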
+
+ ## Workflow Usage Actions
+
+ - Find the exact target first
+ - For approve or reject, identify the exact `workflow_node_id` first; prefer task-center results or current audit info
+ - Avoid workflow actions on ambiguous tasks or records
+ - For rollback or transfer, fetch candidates first
+ - Summarize the final action and target task ids or record ids
+
+ ## Comments and Directory
+
+ - Use `record_comment_write` only after the exact `record_id` is known
+ - Use `record_comment_mentions` to resolve mention candidates before building complex comment payloads
+ - Use `directory_search` for fuzzy member/department lookup
+ - Use `directory_list_all_internal_users` and `directory_list_all_departments` only when the user explicitly wants a complete export
+
+ ## Response Interpretation
+
+ - `task_summary` gives headline counts only
+ - `task_list` returns flat task rows, not grouped workload conclusions
+ - `task_facets` is the only default grouped workload path
+ - Treat `request_route` as the source of truth for live route debugging
+ - If only part of the requested work is completed, explicitly disclose which parts are done and which are not
+
+ ## Resources
+
+ - Environment switching: [references/environments.md](references/environments.md)
+ - Workflow and task usage actions: [references/workflow-usage.md](references/workflow-usage.md)
+
@@ -0,0 +1,4 @@
+ interface:
+   display_name: "Qingflow Task Ops"
+   short_description: "Use Qingflow task center, comments, directory, and workflow actions"
+   default_prompt: "Use $qingflow-task-ops for Qingflow task-center browsing, workload facets, comments, directory lookup, and workflow usage actions. Locate the exact target first, then run the smallest explicit task or comment action."
@@ -0,0 +1,44 @@
+ # Environment Switching
+
+ Use this reference before any workflow usage action, comment, or task-center operation that might affect live work.
+
+ ## Step 1: Resolve the active environment
+
+ Decide explicitly whether the task targets:
+
+ - `test`: demos, mock data, smoke usage validation, training scenarios
+ - `prod`: real operational tasks, comments, and workflow actions
+
+ If the user did not specify an environment, default to `prod`.
+
+ ## Test Environment
+
+ Use test for:
+
+ - workflow walkthroughs
+ - user acceptance demos
+ - comment or transfer rehearsals
+
+ ## Production Environment
+
+ Use production for:
+
+ - live task-center operations
+ - live comments on real business records
+ - approve / reject / rollback / transfer / urge on real work
+
+ Production guardrails:
+
+ - never assume a task id, record id, or workflow node id
+ - find the exact target first
+ - if the task can be answered read-only, do not act
+
+ ## Reporting Rule
+
+ For task ops, always report:
+
+ - active environment
+ - target app or task box
+ - operation type: read, comment, approve, reject, rollback, transfer, urge, or mark_read
+ - affected task ids or record ids
+
@@ -0,0 +1,27 @@
+ # Workflow and Task Usage Actions
+
+ Use these when the user is operating inside an existing process, not redesigning it.
+
+ Examples:
+
+ - add a comment to a record
+ - approve or reject a workflow task
+ - transfer a task
+ - roll back a task
+ - list todo, initiated, done, or cc tasks
+ - inspect workload by worksheet or workflow node
+ - urge a pending task
+
+ Rules:
+
+ - if the user starts from inbox, todo, workload, cc, or bottleneck language, use `task_*` first
+ - use `task_summary` for headline counts
+ - use `task_list` for flat browsing
+ - use `task_facets` when worksheet or workflow-node buckets matter
+ - treat task counts as task-center counts, not record counts
+ - switch to `record_get` only after locating the exact business record behind a task
+ - identify the exact target first
+ - for approve or reject, identify the exact `workflow_node_id` first; prefer task-center results or current audit info, then use `task_approve` or `task_reject`
+ - avoid usage-side workflow actions on ambiguous records
+ - summarize the final action and target task ids or record ids
+
@@ -2,4 +2,4 @@ from __future__ import annotations
  
  __all__ = ["__version__"]
  
- __version__ = "0.2.0b21"
+ __version__ = "0.2.0b23"
@@ -29,8 +29,10 @@ def build_server() -> FastMCP:
  "Use auth_login first, then workspace_list and workspace_select. "
  "All resource tools operate with the logged-in user's Qingflow permissions.\n\n"
  "For analytics, use record_schema_get first, let the model build field_id-based DSL, "
- "then call record_analyze. For operational record reads, use record_schema_get first, then record_list or record_get. "
- "For writes, use record_schema_get and then record_write with mode=plan or apply.\n\n"
+ "then call record_analyze. record_analyze returns compact business-first output as query/result/ranking/ratios/completeness/presentation; use verbose only for route/debug details. "
+ "record_schema_get returns the current user's applicant-node visible schema only; hidden fields are omitted and missing fields should be treated as not visible in the current permission scope. "
+ "For operational record reads, use record_schema_get first, then record_list or record_get. "
+ "For writes, use record_schema_get and then call record_write once; it performs internal preflight before any apply and refuses fields outside the applicant-node writable schema.\n\n"
  "Task Center (待办/已办) handling:\n"
  "- Use task_summary to get headline counts.\n"
  "- Use task_list for flat task browsing with task_box and flow_status.\n"
@@ -20,7 +20,8 @@ def build_user_server() -> FastMCP:
  instructions=(
  "Use this server for Qingflow operational workflows with a schema-first path. "
  "For records, start with record_schema_get, then choose record_list, record_get, or record_write. "
- "For analytics, switch to record_schema_get and record_analyze. "
+ "record_schema_get returns the current user's applicant-node visible schema only; hidden fields are omitted and missing fields should be treated as not visible in the current permission scope. "
+ "For analytics, switch to record_schema_get and record_analyze; its default output is compact query/result/ranking/ratios/completeness/presentation, with route/debug only in verbose mode. "
  "For task center, use task_summary, task_list, and task_facets before any explicit task action. "
  "Avoid builder-side app or schema changes here."
  ),