@josephyan/qingflow-app-user-mcp 0.2.0-beta.20 → 0.2.0-beta.21

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -7,18 +7,20 @@ Examples:
  - add a comment to a record
  - approve or reject a workflow task
  - transfer a task
- - roll back a record
- - list pending, processed, or cc tasks
+ - roll back a task
+ - list todo, initiated, done, or cc tasks
+ - inspect workload by worksheet or workflow node
  - urge a pending task

  Rules:

  - if the user starts from inbox, todo, workload, cc, or bottleneck language, use `task_*` first
- - use `task_statistics` for counts and `task_list` or `task_list_grouped` for browsing
- - use `task_list_grouped` when grouped workload browsing matters
+ - use `task_summary` for headline counts
+ - use `task_list` for flat browsing
+ - use `task_facets` when worksheet or workflow-node buckets matter
  - treat task counts as task-center counts, not record counts
- - switch to `record_*` after locating the exact business record behind a task
- - identify the exact record first
- - for approve or reject, identify the exact `nodeId` first; prefer task-center results or audit info, then use `task_approve` or `task_reject`
- - avoid usage-side flow actions on ambiguous records
+ - switch to `record_*` only after locating the exact business record behind a task
+ - identify the exact target first
+ - for approve or reject, identify the exact `workflow_node_id` first; prefer task-center results or current audit info, then use `task_approve` or `task_reject`
+ - avoid usage-side workflow actions on ambiguous records
  - summarize the final action and target task ids or record ids
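The routing rules in this hunk can be read as a small dispatcher. A minimal sketch, assuming plain keyword matching: the tool names (`task_summary`, `task_list`, `task_facets`, `record_schema_get`) come from this diff, but the `first_tool` function and its phrase buckets are hypothetical illustrations, not part of the package.

```python
# Hypothetical router for the task-center rules above.
# Tool names are from this diff; the function and buckets are illustrative.

TASK_PHRASES = {"inbox", "todo", "workload", "cc", "bottleneck"}

def first_tool(user_intent: str) -> str:
    """Pick the first tool to call for a task-center style request."""
    words = set(user_intent.lower().split())
    if not words & TASK_PHRASES:
        return "record_schema_get"   # not task-center language: record path first
    if "workload" in words or "bottleneck" in words:
        return "task_facets"         # worksheet / workflow-node buckets matter
    if "count" in words or "how" in words:
        return "task_summary"        # headline counts
    return "task_list"               # flat browsing

print(first_tool("show my todo list"))      # task_list
print(first_tool("workload by worksheet"))  # task_facets
```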
@@ -22,7 +22,7 @@ Use these tools as the core analysis surface:
  - `record_schema_get`
  - `record_analyze`

- Use `record_query(query_mode="list")` or `record_get` only when you need sample rows or a specific supporting example after the main analysis path.
+ Use `record_list` or `record_get` only when you need sample rows or a specific supporting example after the main analysis path.

  ## Hard Rules
 
@@ -34,8 +34,8 @@ Use `record_query(query_mode="list")` or `record_get` only when you need sample
  - Never send impossible dates such as `2026-02-29`; if the intended month is February 2026, the legal upper bound is `2026-02-28`
  - If the schema still leaves multiple plausible fields, stop and ask the user to confirm from a short candidate list instead of guessing
  - Do not keep retrying different guessed field names in a loop
- - `record_query(list)` is never the basis for a final statistical conclusion
- - If `record_query(list)` reports `row_cap_hit`, `sample_only`, capped `returned_items`, or compact output, treat it as sample-only evidence
+ - `record_list` is never the basis for a final statistical conclusion
+ - If `record_list` is capped or paged, treat it as sample-only evidence
  - Do not mix full totals from `record_analyze` with sample-only list observations as one combined `全量结论`
  - Do not manually tune paging or scan-budget parameters for analysis; `record_analyze` hides them
  - For final conclusions, prefer `strict_full=true`
  - For final conclusions, prefer `strict_full=true`
@@ -57,7 +57,7 @@ For analysis:
  3. Inspect fields, aliases, suggested dimensions, suggested metrics, and suggested time fields
  4. Generate one or more field_id-based DSLs
  5. Run `record_analyze` once per DSL
- 6. Run `record_query(query_mode="list")` only if you still need sample rows, examples, or manual inspection
+ 6. Run `record_list` only if you still need sample rows, examples, or manual inspection
  7. Before answering, separate:
  - `全量可信结论`
  - `样本观察`
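Step 7's separation of `全量可信结论` (fully trusted, full-data conclusions) from `样本观察` (sample observations) amounts to keeping two evidence buckets. A minimal sketch; the `Findings` shape is hypothetical, only the two labels come from the diffed prompt:

```python
# Hypothetical bucketing of findings per step 7 above.
from dataclasses import dataclass, field

@dataclass
class Findings:
    full: list[str] = field(default_factory=list)    # 全量可信结论
    sample: list[str] = field(default_factory=list)  # 样本观察

    def add(self, text: str, *, sample_only: bool) -> None:
        # Route each observation to exactly one bucket; never merge them
        # into a single "based on all data" sentence.
        (self.sample if sample_only else self.full).append(text)

f = Findings()
f.add("42% of records are approved", sample_only=False)           # from record_analyze
f.add("row 17 shows a typo in the city field", sample_only=True)  # from record_list
print(len(f.full), len(f.sample))  # 1 1
```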
@@ -1,4 +1,4 @@
  interface:
  display_name: "Qingflow Record Analysis"
  short_description: "Analyze Qingflow record data with schema-first DSL execution"
- default_prompt: "Use $qingflow-record-analysis for grouped distributions, ratios, rankings, trends, and final statistical conclusions in Qingflow apps. Start with record_schema_get, build one or more field_id-based DSLs, then run record_analyze. Treat record_query(query_mode=\"list\") as sample-only when capped, and separate full conclusions from sample observations."
+ default_prompt: "Use $qingflow-record-analysis for grouped distributions, ratios, rankings, trends, and final statistical conclusions in Qingflow apps. Start with record_schema_get, build one or more field_id-based DSLs, then run record_analyze. Treat record_list as sample-only when capped or paged, and separate full conclusions from sample observations."
@@ -2,7 +2,7 @@

  ## Do not skip schema

- If the task is analysis-style and you jump straight to `record_query(query_mode="list")` or `record_analyze`, you are already off the stable path.
+ If the task is analysis-style and you jump straight to `record_list` or `record_analyze`, you are already off the stable path.

  Correct recovery:

@@ -23,7 +23,7 @@ Do not pass vague time phrases or impossible dates into MCP.

  ## Do not treat 200-row list output as full data

- `record_query(query_mode="list")` can hit:
+ `record_list` can hit:

  - `row_cap=200`
  - `row_cap_hit=true`
@@ -19,7 +19,7 @@ Use this skill when the user asks for:
  2. decide whether the question needs `count`, `sum`, `avg`, `distinct_count`, `ratio`, or `ranking`
  3. build one or more field_id-based DSLs
  4. `record_analyze`
- 5. `record_query(query_mode="list")` only for sample inspection
+ 5. `record_list` only for sample inspection

  ## Distribution / ratio pattern

@@ -73,7 +73,7 @@ Use this skill when the user asks for:

  ## Sample inspection pattern

- Only use `record_query(query_mode="list")` after schema/analyze when you need:
+ Only use `record_list` after schema/analyze when you need:

  - example rows
  - spot checks
@@ -23,7 +23,7 @@ Only write `全量可信结论` when:

  Put evidence into `样本观察` when:

- - it came from `record_query(query_mode="list")`
+ - it came from `record_list`
  - the tool reports `row_cap_hit`
  - the tool reports `sample_only`
  - the result is compact/capped and not complete
@@ -37,7 +37,7 @@ If `record_schema_get` was not used for an analysis task, downgrade the overall
  Do not combine:

  - full totals from `record_analyze`
- - sample-only details from `record_query(query_mode="list")`
+ - sample-only details from `record_list`

  into one sentence like “基于全部数据分析...”.

@@ -2,4 +2,4 @@ from __future__ import annotations

  __all__ = ["__version__"]

- __version__ = "0.2.0b20"
+ __version__ = "0.2.0b21"
@@ -29,15 +29,15 @@ def build_server() -> FastMCP:
  "Use auth_login first, then workspace_list and workspace_select. "
  "All resource tools operate with the logged-in user's Qingflow permissions.\n\n"
  "For analytics, use record_schema_get first, let the model build field_id-based DSL, "
- "then call record_analyze. Use record_query for list/detail browsing only.\n\n"
+ "then call record_analyze. For operational record reads, use record_schema_get first, then record_list or record_get. "
+ "For writes, use record_schema_get and then record_write with mode=plan or apply.\n\n"
  "Task Center (待办/已办) handling:\n"
- "- Use task_statistics to get counts of pending tasks (todo_count), timeouts, urged, etc.\n"
- "- Use task_list to query tasks. Type values: 1=todo (待办), 2=initiated (我发起的), 3=cc (抄送), 5=done (已办).\n"
- "- Use task_list_grouped to get tasks grouped by form/worksheet.\n"
+ "- Use task_summary to get headline counts.\n"
+ "- Use task_list for flat task browsing with task_box and flow_status.\n"
+ "- Use task_facets when worksheet or workflow-node buckets matter.\n"
  "- Use task_mark_read to mark a specific task as read.\n"
  "- Use task_urge to send an urgent reminder for a pending task.\n"
- "- Process status values: 1=all, 2=processing, 3=passed, 4=refused, 5=need_supply, 6=urged, 7=timeout, 8=pre_timeout, 9=unread.\n"
- "- After identifying the exact task node and record, use record_approve, record_reject, record_rollback, record_transfer, record_reassign, or record_countersign as needed."
+ "- After identifying the exact task node and record, use task_approve, task_reject, task_rollback, or task_transfer as needed."
  ),
  )
  sessions = SessionStore()
@@ -18,9 +18,11 @@ def build_user_server() -> FastMCP:
  server = FastMCP(
  "Qingflow App User MCP",
  instructions=(
- "Use this server for Qingflow record queries, schema-first analytics via record_schema_get and record_analyze, "
- "record writes, task center operations, directory lookups, and approval actions. "
- "Use record_query for list/detail reads only. Avoid builder-side app or schema changes here."
+ "Use this server for Qingflow operational workflows with a schema-first path. "
+ "For records, start with record_schema_get, then choose record_list, record_get, or record_write. "
+ "For analytics, switch to record_schema_get and record_analyze. "
+ "For task center, use task_summary, task_list, and task_facets before any explicit task action. "
+ "Avoid builder-side app or schema changes here."
  ),
  )
  sessions = SessionStore()
@@ -137,189 +139,12 @@ def build_user_server() -> FastMCP:
  bucket_type=bucket_type,
  path_id=path_id,
  file_related_url=file_related_url,
- )
+ )

  RecordTools(sessions, backend).register(server)
  DirectoryTools(sessions, backend).register(server)
-
- @server.tool()
- def record_comment_add(
- profile: str = DEFAULT_PROFILE,
- app_key: str = "",
- apply_id: int = 0,
- payload: dict | None = None,
- ) -> dict:
- return approvals.record_comment_add(profile=profile, app_key=app_key, apply_id=apply_id, payload=payload or {})
-
- @server.tool()
- def record_comment_list(
- profile: str = DEFAULT_PROFILE,
- app_key: str = "",
- apply_id: int = 0,
- page_size: int = 20,
- list_type: int | None = None,
- page_num: int | None = 1,
- ) -> dict:
- return approvals.record_comment_list(
- profile=profile,
- app_key=app_key,
- apply_id=apply_id,
- page_size=page_size,
- list_type=list_type,
- page_num=page_num,
- )
-
- @server.tool()
- def record_comment_mention_candidates(
- profile: str = DEFAULT_PROFILE,
- app_key: str = "",
- apply_id: int = 0,
- page_size: int = 20,
- page_num: int = 1,
- list_type: int | None = None,
- keyword: str | None = None,
- ) -> dict:
- return approvals.record_comment_mention_candidates(
- profile=profile,
- app_key=app_key,
- apply_id=apply_id,
- page_size=page_size,
- page_num=page_num,
- list_type=list_type,
- keyword=keyword,
- )
-
- @server.tool()
- def record_comment_mark_read(profile: str = DEFAULT_PROFILE, app_key: str = "", apply_id: int = 0) -> dict:
- return approvals.record_comment_mark_read(profile=profile, app_key=app_key, apply_id=apply_id)
-
- @server.tool()
- def record_comment_stats(profile: str = DEFAULT_PROFILE, app_key: str = "", apply_id: int = 0) -> dict:
- return approvals.record_comment_stats(profile=profile, app_key=app_key, apply_id=apply_id)
-
- @server.tool()
- def task_list(
- profile: str = DEFAULT_PROFILE,
- type: int = 1,
- process_status: int = 1,
- app_key: str | None = None,
- node_id: int | None = None,
- search_key: str | None = None,
- page_num: int = 1,
- page_size: int = 20,
- create_time_asc: bool | None = None,
- ) -> dict:
- return tasks.task_list(
- profile=profile,
- type=type,
- process_status=process_status,
- app_key=app_key,
- node_id=node_id,
- search_key=search_key,
- page_num=page_num,
- page_size=page_size,
- create_time_asc=create_time_asc,
- )
-
- @server.tool()
- def task_list_grouped(
- profile: str = DEFAULT_PROFILE,
- type: int = 1,
- process_status: int = 1,
- app_key: str | None = None,
- node_id: int | None = None,
- search_key: str | None = None,
- page_num: int = 1,
- page_size: int = 20,
- ) -> dict:
- return tasks.task_list_grouped(
- profile=profile,
- type=type,
- process_status=process_status,
- app_key=app_key,
- node_id=node_id,
- search_key=search_key,
- page_num=page_num,
- page_size=page_size,
- )
-
- @server.tool()
- def task_statistics(profile: str = DEFAULT_PROFILE, app_key: str | None = None) -> dict:
- return tasks.task_statistics(profile=profile, app_key=app_key)
-
- @server.tool()
- def task_urge(profile: str = DEFAULT_PROFILE, app_key: str = "", row_record_id: int = 0) -> dict:
- return tasks.task_urge(profile=profile, app_key=app_key, row_record_id=row_record_id)
-
- @server.tool(description=approvals._high_risk_tool_description(operation="approve", target="workflow task"))
- def task_approve(
- profile: str = DEFAULT_PROFILE,
- app_key: str = "",
- apply_id: int = 0,
- payload: dict | None = None,
- ) -> dict:
- return approvals.record_approve(profile=profile, app_key=app_key, apply_id=apply_id, payload=payload or {})
-
- @server.tool(description=approvals._high_risk_tool_description(operation="reject", target="workflow task"))
- def task_reject(
- profile: str = DEFAULT_PROFILE,
- app_key: str = "",
- apply_id: int = 0,
- payload: dict | None = None,
- ) -> dict:
- return approvals.record_reject(profile=profile, app_key=app_key, apply_id=apply_id, payload=payload or {})
-
- @server.tool()
- def task_rollback_candidates(
- profile: str = DEFAULT_PROFILE,
- app_key: str = "",
- apply_id: int = 0,
- audit_node_id: int = 0,
- ) -> dict:
- return approvals.record_rollback_candidates(
- profile=profile,
- app_key=app_key,
- apply_id=apply_id,
- audit_node_id=audit_node_id,
- )
-
- @server.tool()
- def task_rollback(
- profile: str = DEFAULT_PROFILE,
- app_key: str = "",
- apply_id: int = 0,
- payload: dict | None = None,
- ) -> dict:
- return approvals.record_rollback(profile=profile, app_key=app_key, apply_id=apply_id, payload=payload or {})
-
- @server.tool()
- def task_transfer_candidates(
- profile: str = DEFAULT_PROFILE,
- app_key: str = "",
- apply_id: int = 0,
- page_size: int = 20,
- page_num: int = 1,
- audit_node_id: int = 0,
- keyword: str | None = None,
- ) -> dict:
- return approvals.record_transfer_candidates(
- profile=profile,
- app_key=app_key,
- apply_id=apply_id,
- page_size=page_size,
- page_num=page_num,
- audit_node_id=audit_node_id,
- keyword=keyword,
- )
-
- @server.tool()
- def task_transfer(
- profile: str = DEFAULT_PROFILE,
- app_key: str = "",
- apply_id: int = 0,
- payload: dict | None = None,
- ) -> dict:
- return approvals.record_transfer(profile=profile, app_key=app_key, apply_id=apply_id, payload=payload or {})
+ approvals.register(server)
+ tasks.register(server)

  return server