@josephyan/qingflow-app-user-mcp 0.2.0-beta.21 → 0.2.0-beta.22

@@ -0,0 +1,173 @@
---
name: qingflow-record-crud
description: Browse, read, create, update, and delete Qingflow records after the MCP is already connected and authenticated. Use when the user wants schema-first record CRUD with a SQL-like JSON DSL. Do not use this skill for task-center workflow actions or final statistical analysis.
metadata:
  short-description: Schema-first Qingflow record CRUD
---

# Qingflow Record CRUD

## Overview

This skill is for **record CRUD only** inside existing Qingflow apps.

Use it for:

- record browsing
- record detail lookup
- record create / update / delete
- attachment preflight before a write
- relation/member/department lookup that directly supports a write

Do **not** use this skill for:

- grouped analysis, ratios, rankings, trends, or final statistical conclusions
  Switch to [$qingflow-record-analysis](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-record-analysis/SKILL.md)
- task-center workflow actions, comments, or directory-driven operational workflows
  Switch to [$qingflow-task-ops](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-task-ops/SKILL.md)

This skill assumes the MCP is already connected, authenticated, and bound to the correct workspace.
If not, switch to [$qingflow-mcp-setup](/Users/yanqidong/.codex/skills/qingflow-mcp-setup/SKILL.md) first.

Before operating on data, identify whether the task targets `test` or `prod` and read [references/environments.md](references/environments.md).
If the user did not specify one, default to `prod`.

## Default Paths

Use exactly one of these default paths:

1. Browse records
   `record_schema_get -> record_list`

2. Read one record
   `record_schema_get -> record_get`

3. Write records
   `record_schema_get -> record_write`

4. Analysis
   Switch to [$qingflow-record-analysis](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-record-analysis/SKILL.md)

## Core Tools

- `record_schema_get`
- `record_list`
- `record_get`
- `record_write`

## Supporting Tools

- `directory_search`
- `directory_list_internal_users`
- `directory_list_internal_departments`
- `directory_list_sub_departments`
- `file_get_upload_info`
- `file_upload_local`

## Standard Operating Order

1. Ensure auth exists
2. Ensure a workspace is selected
3. Confirm the target app and whether the task is browse / detail / write / analysis
4. Run `record_schema_get` before any non-trivial record read or write
5. If the request is analysis-like, switch to [$qingflow-record-analysis](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-record-analysis/SKILL.md)
6. If the request is write-like, decide `insert / update / delete` before building any payload
7. If fields are still ambiguous after `record_schema_get`, ask the user to confirm from a short candidate list instead of guessing
8. For high-risk writes or production changes, read the current state first whenever practical
9. After actions, report the affected `record_id`, counts, or returned item count

## Record Read Rules

- Use `record_list` for browse/export/sample inspection only
- Use `record_get` when the `record_id` is known
- `record_list` accepts:
  - `columns`
  - `where`
  - `order_by`
  - `limit`
  - `page`
- `record_list` is **not** an analysis tool
- If a request turns into grouped distributions, ratios, rankings, trends, or final statistical conclusions, switch to [$qingflow-record-analysis](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-record-analysis/SKILL.md)

## Record Write Rules

Use `record_write` as the only default write tool.

### Write workflow

1. Run `record_schema_get`
2. Decide whether the task is `insert`, `update`, or `delete`
3. Build SQL-like JSON clauses
4. Run `record_write`
5. If `ok=false`, explain `field_errors` first, then summarize blockers; do not report the write as executed
6. If `ok=true`, report the affected `record_id` or created resource
7. For important writes, keep `verify_write=true`

### SQL-like JSON DSL

The DSL is clause-shaped like SQL, but it is **not raw SQL text**.

#### Insert

```json
{
  "operation": "insert",
  "values": [
    { "field_id": 12, "value": "测试客户" },
    { "field_id": 18, "value": 1000 }
  ],
  "submit_type": "submit",
  "verify_write": true
}
```

#### Update

```json
{
  "operation": "update",
  "record_id": 123,
  "set": [
    { "field_id": 18, "value": 2000 }
  ],
  "verify_write": true
}
```

#### Delete

```json
{
  "operation": "delete",
  "record_ids": [123, 124]
}
```

### Write discipline

- `insert` uses `values`
- `update` uses `set`
- `delete` uses `record_id` or `record_ids`
- Do not send raw SQL text
- Do not invent formulas or expressions
- Do not use free-form `WHERE` updates or deletes
- Do not auto-fill missing fields
- Do not auto-resolve relation targets without first querying them

## Response Interpretation

- `record_list` returns browse/sample data, not final analysis conclusions
- `record_write` always performs an internal static preflight before any apply
- If `record_write` returns `ok=false`, the write was blocked and not executed
- Prefer explaining `field_errors` before summarizing top-level blockers
- If `record_write` returns `ok=true`, still check `verification` and `warnings` before claiming success
- Treat `request_route` as the source of truth for live route debugging
- Prefer canonical schema titles and aliases in your final wording
- If only part of the requested work is completed, explicitly disclose which parts are done and which are not

## Resources

- Environment switching: [references/environments.md](references/environments.md)
- Record operation patterns: [references/record-patterns.md](references/record-patterns.md)
- Data gotchas: [references/data-gotchas.md](references/data-gotchas.md)
@@ -0,0 +1,5 @@
interface:
  display_name: "Qingflow Record CRUD"
  short_description: "Browse, read, and write Qingflow records with schema-first DSLs"
  default_prompt: "Use $qingflow-record-crud for Qingflow record browsing, detail lookup, and record_write operations. Start with record_schema_get, choose record_list, record_get, or record_write, and keep analysis work in $qingflow-record-analysis."
@@ -0,0 +1,43 @@
# Data Gotchas

For final statistics, grouped distributions, rankings, trends, or insight-style conclusions, use [$qingflow-record-analysis](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-record-analysis/SKILL.md) instead of keeping that reasoning inside `$qingflow-record-crud`.

## Record Reads

- `record_list` is for browsing, export, and sample inspection only
- `record_get` is for one exact record
- Do not present paged browse output as if it were a grouped or full-population conclusion
- If the browser and the MCP disagree, compare `request_route.base_url` and `request_route.qf_version` first

## Write Preflight

- `record_write` always performs an internal static preflight before any apply
- If `record_write` returns `ok=false`, the write was blocked and not executed
- Explain `field_errors` before summarizing blockers
- Use `record_schema_get` when field titles are uncertain instead of guessing ids
- Prefer `verify_write=true` for complex, relation-heavy, subtable, or production writes
- Even when `record_write` returns `ok=true`, it may still surface verification failures; do not report success before checking them

## Write Semantics

- `insert` uses `values`
- `update` uses `set`
- `delete` uses `record_id` or `record_ids`
- Do not send raw SQL strings
- Do not fake formula or expression fields
- Do not perform free-form bulk updates or deletes
- Do not guess relation targets from display text; resolve the real `record_id` first

## Attachments

- Attachment fields are two-step: upload first, then write the returned URL object into the record
- `file_upload_local` may report `effective_upload_kind=login` even when the requested kind was `attachment`; this is an implementation fallback, not necessarily an error
- When debugging uploads, surface both `effective_upload_kind` and `upload_protocol`
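The second step of that two-step flow can be sketched as below. This is an assumption-laden illustration: `27` is a hypothetical attachment field id, and the `fileName`/`fileUrl` keys are stand-ins for whatever payload `file_upload_local` actually returns — reuse the returned object verbatim rather than these names:

```json
{
  "operation": "update",
  "record_id": 123,
  "set": [
    {
      "field_id": 27,
      "value": [
        { "fileName": "contract.pdf", "fileUrl": "https://example.com/uploads/contract.pdf" }
      ]
    }
  ],
  "verify_write": true
}
```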
36
+
37
+ ## Subtables
38
+
39
+ - Subtable fields accept row objects keyed by subfield title, or native `tableValues`
40
+ - Use the current form schema's subfield titles; do not guess nested ids
41
+ - When updating existing subtable rows, preserve row ids if the source record returns them
42
+ - Nested subtable writes are still unsupported
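A subtable write in the title-keyed shape can be sketched as follows. The field id `30` and the subfield titles `产品名称` (product name) and `数量` (quantity) are placeholders — read the real titles from the current form schema via `record_schema_get` instead of reusing these:

```json
{
  "operation": "update",
  "record_id": 123,
  "set": [
    {
      "field_id": 30,
      "value": [
        { "产品名称": "A", "数量": 2 },
        { "产品名称": "B", "数量": 1 }
      ]
    }
  ],
  "verify_write": true
}
```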
@@ -0,0 +1,58 @@
# Environment Switching

Use this reference before any data creation, update, or delete.

## Resolve the active environment

Decide explicitly whether the task targets:

- `test`: demo, mock data, smoke usage validation, training scenarios
- `prod`: real operational data and live record changes

If the user did not specify an environment, default to `prod`.

## Test Environment

Use test for:

- mock or smoke data entry
- user acceptance demos
- data correction rehearsals

Test behavior:

- creating demo data is acceptable
- default to at least `5` records for mock or smoke datasets unless the user asks for fewer
- destructive cleanup is acceptable only when the record scope is explicit

## Production Environment

Use production for:

- live data entry
- live business record updates
- controlled data correction or deletion

Production behavior:

- prefer browse or detail reads before any write
- restate the exact app and record scope before an update or delete
- do not create mock, smoke, or demo data unless the user explicitly asks for it
- for bulk changes, summarize the target count before execution and the affected ids after execution
- destructive actions need explicit confirmation in the conversation context

Production guardrails:

- never assume a record id, app id, or workspace id
- treat `record_write(operation="delete")` as high risk
- if the task can be answered read-only, do not write

## Reporting Rule

For record CRUD operations, always report:

- active environment
- target app
- operation type: read, create, update, or delete
- affected record count or ids
@@ -0,0 +1,109 @@
# Record Patterns

If the task shifts into grouped analysis, ratios, rankings, trends, or any final statistical conclusion, switch to [$qingflow-record-analysis](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-record-analysis/SKILL.md).

## Browse Pattern

Use `record_schema_get -> record_list` when:

- the user wants to browse records
- the target `record_id` is unknown
- a delete or update target still needs confirmation
- the user needs sample rows or a small export

Keep the browse DSL simple:

- `columns`: field ids only
- `where`: flat AND filters only
- `order_by`: field sorting only
- `limit` and `page`: browsing intent only

Do not use `record_list` for grouped conclusions, ratios, rankings, trends, or any final statistical claim.
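Sketched against these constraints, a browse payload with two flat AND filters might look like this; the field ids are placeholders, and the exact key names inside the `where` and `order_by` entries are assumptions to verify against the tool's input schema:

```json
{
  "columns": [12, 18],
  "where": [
    { "field_id": 12, "op": "contains", "value": "客户" },
    { "field_id": 18, "op": ">=", "value": 1000 }
  ],
  "order_by": [
    { "field_id": 18, "direction": "desc" }
  ],
  "limit": 10,
  "page": 1
}
```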
22
+
23
+ ## Detail Pattern
24
+
25
+ Use `record_schema_get -> record_get` when:
26
+
27
+ - the exact `record_id` is known
28
+ - the user needs one record in detail
29
+ - a write target needs verification before action
30
+
31
+ Prefer passing explicit `columns` when the user only needs a subset of fields.
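A minimal `record_get` sketch with an explicit column subset (the ids are placeholders, and the parameter names should be confirmed against the live tool schema):

```json
{
  "record_id": 123,
  "columns": [12, 18]
}
```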
32
+
33
+ ## Write Pattern
34
+
35
+ Use `record_schema_get -> record_write`.
36
+
37
+ 1. Confirm the target app
38
+ 2. Resolve fields with `record_schema_get`
39
+ 3. Decide whether the task is `insert`, `update`, or `delete`
40
+ 4. Build SQL-like JSON clauses
41
+ 5. Run `record_write`
42
+ 6. If `ok=false`, explain `field_errors` first, then summarize blockers; stop because the write was not executed
43
+ 7. If `ok=true`, report the affected resource and any verification outcome
44
+ 8. For important writes, keep `verify_write=true`
45
+
46
+ ### Insert
47
+
48
+ ```json
49
+ {
50
+ "operation": "insert",
51
+ "values": [
52
+ { "field_id": 12, "value": "测试客户" },
53
+ { "field_id": 18, "value": 1000 }
54
+ ],
55
+ "submit_type": "submit",
56
+ "verify_write": true
57
+ }
58
+ ```
59
+
60
+ ### Update
61
+
62
+ ```json
63
+ {
64
+ "operation": "update",
65
+ "record_id": 123,
66
+ "set": [
67
+ { "field_id": 18, "value": 2000 }
68
+ ],
69
+ "verify_write": true
70
+ }
71
+ ```
72
+
73
+ ### Delete
74
+
75
+ ```json
76
+ {
77
+ "operation": "delete",
78
+ "record_ids": [123, 124]
79
+ }
80
+ ```
81
+
82
+ ## Write Anti-Patterns
83
+
84
+ Do not do this:
85
+
86
+ - do not send raw SQL text
87
+ - do not build free-form `WHERE` updates or deletes
88
+ - do not invent formulas or expressions
89
+ - do not auto-fill missing required fields
90
+ - do not guess relation targets without first resolving them
91
+ - do not claim a blocked `record_write` was executed
92
+
93
+ ## Unsupported Direct Writes
94
+
95
+ Do not attempt direct writes for these field types:
96
+
97
+ - `14` time range
98
+ - `34` image recognition
99
+ - `35` image generation
100
+ - `36` document parsing
101
+
102
+ If the payload includes them, stop after the blocked `record_write` response and explain that the tool does not support a reliable direct write for those fields yet.
103
+
104
+ ## Relation, Attachment, and Subtable Rules
105
+
106
+ - Relation fields are record-id based. Resolve the referenced target first, then write the relation field with the real `record_id`.
107
+ - Attachment fields are two-step: upload first with `file_upload_local`, then reuse the returned attachment payload in `record_write`.
108
+ - Subtable writes require the current schema shape; when updating existing subtable rows, preserve row ids if the current record exposes them.
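The relation rule can be sketched as a two-stage flow: first resolve the referenced record (for example by browsing the related app with `record_list`), then write its real id. In this hypothetical payload, `22` is a stand-in relation field id and `456` is the previously resolved `record_id`; whether the value is a bare id, a list, or a wrapped object depends on the actual schema, so confirm with `record_schema_get`:

```json
{
  "operation": "update",
  "record_id": 123,
  "set": [
    { "field_id": 22, "value": [456] }
  ],
  "verify_write": true
}
```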
@@ -0,0 +1,148 @@
---
name: qingflow-task-ops
description: Use the Qingflow task center, workflow usage actions, comments, and directory lookup after the MCP is already connected and authenticated. Do not use this skill for record CRUD or final statistical analysis.
metadata:
  short-description: Qingflow task center and workflow operations
---

# Qingflow Task Ops

## Overview

This skill is for **task-center and workflow usage operations** inside existing Qingflow apps.

Use it for:

- task-center browsing
- headline task counts
- grouped worksheet or workflow-node workload views
- approve / reject / rollback / transfer / urge
- record comments
- directory lookup that supports task or comment actions

Do **not** use this skill for:

- record create / update / delete
  Switch to [$qingflow-record-crud](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-record-crud/SKILL.md)
- grouped analysis, ratios, rankings, trends, or final statistical conclusions
  Switch to [$qingflow-record-analysis](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-record-analysis/SKILL.md)

This skill assumes the MCP is already connected, authenticated, and bound to the correct workspace.
If not, switch to [$qingflow-mcp-setup](/Users/yanqidong/.codex/skills/qingflow-mcp-setup/SKILL.md) first.

Before operating on live work, identify whether the task targets `test` or `prod` and read [references/environments.md](references/environments.md).
If the user did not specify one, default to `prod`.

## Default Paths

Use exactly one of these default paths:

1. Task headline counts
   `task_summary`

2. Flat task browsing
   `task_list`

3. Grouped workload buckets
   `task_facets`

4. Task or workflow action
   `task_list / task_facets -> exact target -> task_* action`

5. Comments and directory support
   `record_get -> record_comment_*` or `directory_*`

## Core Tools

- `task_summary`
- `task_list`
- `task_facets`
- `task_mark_read`
- `task_mark_all_cc_read`
- `task_urge`
- `task_approve`
- `task_reject`
- `task_rollback_candidates`
- `task_rollback`
- `task_transfer_candidates`
- `task_transfer`
- `record_comment_write`
- `record_comment_list`
- `record_comment_mentions`
- `record_comment_mark_read`

## Supporting Tools

- `directory_search`
- `directory_list_internal_users`
- `directory_list_all_internal_users`
- `directory_list_internal_departments`
- `directory_list_all_departments`
- `directory_list_sub_departments`
- `directory_list_external_members`
- `record_get`

## Standard Operating Order

1. Ensure auth exists
2. Ensure a workspace is selected
3. Confirm the target app and whether the task is task browse / grouped workload / comment / workflow action
4. Use `task_summary`, `task_list`, or `task_facets` to locate the exact target first
5. If a workflow action is required, identify the exact `task_id`, `record_id`, and `workflow_node_id` whenever practical
6. Use directory tools only when member/department lookup is needed to support the action
7. For production actions, read the current task or record state first whenever practical
8. After actions, report the affected `task_id`, `record_id`, or returned item count

## Task-Center Rules

- Use `task_summary` for headline counts
- Use `task_list` for flat browsing
- Use `task_facets` for grouped worksheet or workflow-node buckets
- `task_box` must be one of:
  - `todo`
  - `initiated`
  - `cc`
  - `done`
- `flow_status` must be one of:
  - `all`
  - `in_progress`
  - `approved`
  - `rejected`
  - `pending_fix`
  - `urged`
  - `overdue`
  - `due_soon`
  - `unread`
  - `ended`
- Task counts are task-center counts, not record counts
- If the user asks for workload by worksheet or node, use `task_facets`
- If a result set is truncated, describe it as `已返回分组中` ("within the returned groups") or `主要分组` ("top groups")
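A flat browse of in-progress todos can be sketched like this; `task_box` and `flow_status` take the enumerated values above, while the paging key is an assumption to check against the `task_list` input schema:

```json
{
  "task_box": "todo",
  "flow_status": "in_progress",
  "limit": 20
}
```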

## Workflow Usage Actions

- Find the exact target first
- For approve or reject, identify the exact `workflow_node_id` first; prefer task-center results or current audit info
- Avoid workflow actions on ambiguous tasks or records
- For rollback or transfer, fetch candidates first
- Summarize the final action and the target task ids or record ids

## Comments and Directory

- Use `record_comment_write` only after the exact `record_id` is known
- Use `record_comment_mentions` to resolve mention candidates before building complex comment payloads
- Use `directory_search` for fuzzy member/department lookup
- Use `directory_list_all_internal_users` and `directory_list_all_departments` only when the user explicitly wants a complete export

## Response Interpretation

- `task_summary` gives headline counts only
- `task_list` returns flat task rows, not grouped workload conclusions
- `task_facets` is the only default grouped workload path
- Treat `request_route` as the source of truth for live route debugging
- If only part of the requested work is completed, explicitly disclose which parts are done and which are not

## Resources

- Environment switching: [references/environments.md](references/environments.md)
- Workflow and task usage actions: [references/workflow-usage.md](references/workflow-usage.md)
@@ -0,0 +1,4 @@
interface:
  display_name: "Qingflow Task Ops"
  short_description: "Use Qingflow task center, comments, directory, and workflow actions"
  default_prompt: "Use $qingflow-task-ops for Qingflow task-center browsing, workload facets, comments, directory lookup, and workflow usage actions. Locate the exact target first, then run the smallest explicit task or comment action."
@@ -0,0 +1,44 @@
# Environment Switching

Use this reference before any workflow usage action, comment, or task-center operation that might affect live work.

## Resolve the active environment

Decide explicitly whether the task targets:

- `test`: demo, mock data, smoke usage validation, training scenarios
- `prod`: real operational tasks, comments, and workflow actions

If the user did not specify an environment, default to `prod`.

## Test Environment

Use test for:

- workflow walkthroughs
- user acceptance demos
- comment or transfer rehearsals

## Production Environment

Use production for:

- live task-center operations
- live comments on real business records
- approve / reject / rollback / transfer / urge on real work

Production guardrails:

- never assume a task id, record id, or workflow node id
- find the exact target first
- if the task can be answered read-only, do not act

## Reporting Rule

For task ops, always report:

- active environment
- target app or task box
- operation type: read, comment, approve, reject, rollback, transfer, urge, or mark_read
- affected task ids or record ids
@@ -0,0 +1,27 @@
# Workflow and Task Usage Actions

Use these when the user is operating inside an existing process, not redesigning it.

Examples:

- add a comment to a record
- approve or reject a workflow task
- transfer a task
- roll back a task
- list todo, initiated, done, or cc tasks
- inspect workload by worksheet or workflow node
- urge a pending task

Rules:

- if the user starts from inbox, todo, workload, cc, or bottleneck language, use `task_*` first
- use `task_summary` for headline counts
- use `task_list` for flat browsing
- use `task_facets` when worksheet or workflow-node buckets matter
- treat task counts as task-center counts, not record counts
- switch to `record_get` only after locating the exact business record behind a task
- identify the exact target first
- for approve or reject, identify the exact `workflow_node_id` first; prefer task-center results or current audit info, then use `task_approve` or `task_reject`
- avoid usage-side workflow actions on ambiguous records
- summarize the final action and target task ids or record ids
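Under these rules, an approve action reduces to a small explicit payload once the target is located. This sketch assumes the call takes the ids directly; only `task_id` and `workflow_node_id` are named by the rules above, both values here are placeholders, and any additional parameters should be taken from the `task_approve` input schema rather than invented:

```json
{
  "task_id": 9876,
  "workflow_node_id": 12
}
```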
@@ -2,4 +2,4 @@ from __future__ import annotations
 
  __all__ = ["__version__"]
 
- __version__ = "0.2.0b21"
+ __version__ = "0.2.0b22"
@@ -29,8 +29,9 @@ def build_server() -> FastMCP:
  "Use auth_login first, then workspace_list and workspace_select. "
  "All resource tools operate with the logged-in user's Qingflow permissions.\n\n"
  "For analytics, use record_schema_get first, let the model build field_id-based DSL, "
- "then call record_analyze. For operational record reads, use record_schema_get first, then record_list or record_get. "
- "For writes, use record_schema_get and then record_write with mode=plan or apply.\n\n"
+ "then call record_analyze. record_analyze returns compact business-first output as query/result/ranking/ratios/completeness/presentation; use verbose only for route/debug details. "
+ "For operational record reads, use record_schema_get first, then record_list or record_get. "
+ "For writes, use record_schema_get and then call record_write once; it performs internal preflight before any apply.\n\n"
  "Task Center (待办/已办) handling:\n"
  "- Use task_summary to get headline counts.\n"
  "- Use task_list for flat task browsing with task_box and flow_status.\n"
@@ -20,7 +20,7 @@ def build_user_server() -> FastMCP:
  instructions=(
  "Use this server for Qingflow operational workflows with a schema-first path. "
  "For records, start with record_schema_get, then choose record_list, record_get, or record_write. "
- "For analytics, switch to record_schema_get and record_analyze. "
+ "For analytics, switch to record_schema_get and record_analyze; its default output is compact query/result/ranking/ratios/completeness/presentation, with route/debug only in verbose mode. "
  "For task center, use task_summary, task_list, and task_facets before any explicit task action. "
  "Avoid builder-side app or schema changes here."
  ),