@josephyan/qingflow-app-user-mcp 0.2.0-beta.30 → 0.2.0-beta.32

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -3,13 +3,13 @@
  Install:
 
  ```bash
- npm install @josephyan/qingflow-app-user-mcp@0.2.0-beta.30
+ npm install @josephyan/qingflow-app-user-mcp@0.2.0-beta.32
  ```
 
  Run:
 
  ```bash
- npx -y -p @josephyan/qingflow-app-user-mcp@0.2.0-beta.30 qingflow-app-user-mcp
+ npx -y -p @josephyan/qingflow-app-user-mcp@0.2.0-beta.32 qingflow-app-user-mcp
  ```
 
  Environment:
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "@josephyan/qingflow-app-user-mcp",
-   "version": "0.2.0-beta.30",
+   "version": "0.2.0-beta.32",
    "description": "Operational end-user MCP for Qingflow records, tasks, comments, and directory workflows.",
    "license": "MIT",
    "type": "module",
package/pyproject.toml CHANGED
@@ -4,7 +4,7 @@ build-backend = "setuptools.build_meta"
 
  [project]
  name = "qingflow-mcp"
- version = "0.2.0b30"
+ version = "0.2.0b32"
  description = "User-authenticated MCP server for Qingflow"
  readme = "README.md"
  license = "MIT"
@@ -19,16 +19,20 @@ Route to exactly one of these specialized paths:
  1. Record CRUD
     Switch to [$qingflow-record-crud](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-record-crud/SKILL.md)
 
- 2. Analysis
+ 2. Task workflow operations
+    Switch to [$qingflow-task-ops](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-task-ops/SKILL.md)
+
+ 3. Analysis
     Switch to [$qingflow-record-analysis](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-record-analysis/SKILL.md)
 
- 3. MCP connection / auth / workspace selection
+ 4. MCP connection / auth / workspace selection
     Switch to [$qingflow-mcp-setup](/Users/yanqidong/.codex/skills/qingflow-mcp-setup/SKILL.md)
 
  ## Routing Rules
 
  - If the user does not know the target `app_key`, discover apps first with `app_list` or `app_search`, then route to the specialized skill
  - If the task is about browsing, reading, creating, updating, deleting, attachments, relations, subtable writes, or member/department-field candidate lookup, switch to `$qingflow-record-crud`
+ - If the task is about todo discovery, task context, approval actions, rollback or transfer, associated report review, or workflow log review, switch to `$qingflow-task-ops`
  - If the task is about grouped distributions, ratios, rankings, trends, insights, or any final statistical conclusion, switch to `$qingflow-record-analysis`
  - If the MCP is not connected, authenticated, or bound to the right workspace, switch to `$qingflow-mcp-setup`
 
@@ -60,7 +60,9 @@ Use `record_member_candidates` / `record_department_candidates` as the default l
  ## Record Read Rules
 
  - Use `record_list` for browse/export/sample inspection only
- - For `columns`, prefer `[{ "field_id": 12 }]`; bare integer field ids are accepted for compatibility
+ - For `columns`, use `[{ "field_id": 12 }]`
+ - For `where`, use `{ "field_id": 12, "op": "eq", "value": "进行中" }`
+ - For `order_by`, use `{ "field_id": 18, "direction": "desc" }`
  - Use `record_get` when `record_id` is known
  - `record_get` without explicit `columns` still returns only applicant-node visible fields; do not assume it exposes the full builder-side record
  - `record_list` and `record_get` may reject hidden-field `field_id`s because record tools now validate against the applicant-node visible schema only
@@ -15,9 +15,9 @@ Remember that `record_schema_get` only exposes the current user's applicant-node
 
  Keep the browse DSL simple:
 
- - `columns`: prefer `[{ "field_id": 12 }]`; bare integers are accepted for compatibility
- - `where`: flat AND filters only
- - `order_by`: field sorting only
+ - `columns`: use `[{ "field_id": 12 }]`
+ - `where`: flat AND filters only, using `{ "field_id": 12, "op": "eq", "value": "进行中" }`
+ - `order_by`: field sorting only, using `{ "field_id": 18, "direction": "desc" }`
  - `limit` and `page`: browsing intent only
 
  Do not use `record_list` for grouped conclusions, ratios, rankings, trends, or any final statistical claim.
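The canonical browse DSL in the hunks above can be sketched as a single JSON payload. The `app_key` and field ids below are illustrative placeholders, not values from a real workspace:

```python
import json

# Minimal record_list payload using only the canonical DSL forms:
# columns as [{field_id}], where as {field_id, op, value},
# order_by as {field_id, direction}. All ids and values are made up.
payload = {
    "app_key": "demo-app-key",
    "columns": [{"field_id": 12}, {"field_id": 18}],
    "where": [{"field_id": 12, "op": "eq", "value": "进行中"}],
    "order_by": [{"field_id": 18, "direction": "desc"}],
    "limit": 20,  # browsing intent only
    "page": 1,
}

# The payload survives a JSON round trip unchanged, so it can be sent as-is.
decoded = json.loads(json.dumps(payload, ensure_ascii=False))
print(decoded["order_by"][0]["direction"])  # desc
```

Because every clause is a flat object, none of the compatibility-only spellings (`fieldId`, `operator`, `values`, bare integers) appear, so this payload triggers no legacy warnings.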
@@ -1,82 +1,62 @@
  ---
  name: qingflow-task-ops
- description: Use Qingflow task center, workflow usage actions, comments, and directory lookup after the MCP is already connected and authenticated. Do not use this skill for record CRUD or final statistical analysis.
+ description: Use Qingflow todo discovery, workflow task context, associated approval context, workflow logs, and unified task actions after the MCP is already connected and authenticated. Do not use this skill for record CRUD or final statistical analysis.
  metadata:
-   short-description: Qingflow task center and workflow operations
+   short-description: Qingflow task workflow context and actions
  ---
 
  # Qingflow Task Ops
 
  ## Overview
 
- This skill is for task-center and workflow usage operations only.
+ This skill is for task workflow operations only.
  Assumes the MCP is connected, authenticated, and on the correct workspace.
 
  ## Default Paths
 
  Use exactly one of these default paths:
 
- 1. Task headline counts
-    `task_summary`
-
- 2. Flat task browsing
+ 1. Find target todos
     `task_list`
 
- 3. Grouped workload buckets
-    `task_facets`
+ 2. Read one task context
+    `task_list -> exact target -> task_get`
 
- 4. Task or workflow action
-    `task_list / task_facets -> exact target -> task_* action`
+ 3. Read associated approval context
+    `task_get -> task_associated_report_detail_get` or `task_workflow_log_get`
 
- 5. Comments and directory support
-    `record_get -> record_comment_*` or `directory_*`
+ 4. Execute workflow action
+    `task_list -> exact target -> task_get -> task_action_execute`
 
  ## Core Tools
 
- - `task_summary`
  - `task_list`
- - `task_facets`
- - `task_mark_read`
- - `task_mark_all_cc_read`
- - `task_urge`
- - `task_approve`
- - `task_reject`
- - `task_rollback_candidates`
- - `task_rollback`
- - `task_transfer_candidates`
- - `task_transfer`
- - `record_comment_write`
- - `record_comment_list`
- - `record_comment_mentions`
- - `record_comment_mark_read`
+ - `task_get`
+ - `task_action_execute`
+ - `task_associated_report_detail_get`
+ - `task_workflow_log_get`
 
  ## Supporting Tools
 
- - `directory_search`
- - `directory_list_internal_users`
- - `directory_list_all_internal_users`
- - `directory_list_internal_departments`
- - `directory_list_all_departments`
- - `directory_list_sub_departments`
- - `directory_list_external_members`
- - `record_get`
+ - `app_list`
+ - `app_search`
 
  ## Standard Operating Order
 
  1. Ensure auth exists
  2. Ensure workspace is selected
- 3. Confirm target app and whether the task is task browse / grouped workload / comment / workflow action
- 4. Use `task_summary`, `task_list`, or `task_facets` to locate the exact target first
- 5. If a workflow action is required, identify the exact `task_id`, `record_id`, and `workflow_node_id` whenever practical
- 6. Use directory tools only when member/department lookup is needed to support the action
- 7. For production actions, read current task or record state first whenever practical
- 8. After actions, report the affected `task_id`, `record_id`, or returned item count
+ 3. Discover the exact target with `task_list`
+ 4. Read node context with `task_get`
+ 5. Before giving any approval recommendation, read `task_workflow_log_get`
+ 6. If `task_get` returns any `associated_reports`, read every visible report through `task_associated_report_detail_get`
+ 7. Give an approval recommendation only after reviewing the node context, workflow log, and associated report details
+ 8. Wait for explicit user confirmation before `task_action_execute`
+ 9. Execute through `task_action_execute`
+ 10. After actions, report the exact `app_key`, `record_id`, `workflow_node_id`, and executed action
 
  ## Task-Center Rules
 
- - Use `task_summary` for headline counts
  - Use `task_list` for flat browsing
- - Use `task_facets` for grouped worksheet or workflow-node buckets
  - `task_box` must be one of:
    - `todo`
    - `initiated`
@@ -93,30 +73,39 @@ Use exactly one of these default paths:
    - `due_soon`
    - `unread`
    - `ended`
- - Task counts are task-center counts, not record counts
- - If the user asks for workload by worksheet or node, use `task_facets`
- - If a result set is truncated, describe it as `已返回分组中` or `主要分组`
+ - `task_list` is the only public task discovery path in this MCP surface
+ - Treat `task_id` as a locator only; the action primary key is `app_key + record_id + workflow_node_id`
+ - Default box usage:
+   - `todo`: `task_list -> task_get -> task_workflow_log_get / task_associated_report_detail_get -> recommendation -> explicit user confirmation -> task_action_execute`
+   - `initiated`: `task_list -> record_get`
+   - `done`: `task_list -> record_get`
+   - `cc`: `task_list -> record_get`
+ - Treat `initiated`, `done`, and `cc` primarily as list-plus-record-detail flows, not task action flows
 
  ## Workflow Usage Actions
 
- - Find the exact target first
- - For approve or reject, identify the exact `workflow_node_id` first; prefer task-center results or current audit info
- - Avoid workflow actions on ambiguous tasks or records
- - For rollback or transfer, fetch candidates first
- - Summarize the final action and target task ids or record ids
-
- ## Comments and Directory
-
- - Use `record_comment_write` only after the exact `record_id` is known
- - Use `record_comment_mentions` to resolve mention candidates before building complex comment payloads
- - Use `directory_search` for fuzzy member/department lookup
- - Use `directory_list_all_internal_users` and `directory_list_all_departments` only when the user explicitly wants a complete export
+ - `task_get.capabilities.available_actions` is the source of truth for v1 executable actions
+ - Current public actions are:
+   - `approve`
+   - `reject`
+   - `rollback`
+   - `transfer`
+   - `urge`
+ - Before any approve/reject/rollback/transfer recommendation, always review `task_workflow_log_get` when `task_get.visibility.audit_record_visible=true`
+ - If `task_get` returns visible `associated_reports`, review each one with `task_associated_report_detail_get`; do not rely on the report summary alone
+ - Do not give an approval recommendation based only on `task_get`
+ - Do not execute `task_action_execute` until the user explicitly confirms the chosen action
+ - Avoid actions on ambiguous tasks or records
+ - Summarize the final action and the exact `app_key / record_id / workflow_node_id`
 
  ## Response Interpretation
 
- - `task_summary` gives headline counts only
- - `task_list` returns flat task rows, not grouped workload conclusions
- - `task_facets` is the only default grouped workload path
+ - `task_list` returns normalized todo rows and is the only default discovery path
+ - `task_get` returns a node context summary, not full historical report data
+ - `task_associated_report_detail_get` may return either:
+   - `result_type=view_list`
+   - `result_type=chart_data`
+ - `task_workflow_log_get` returns workflow log detail only when the node grants log visibility
  - Treat `request_route` as the source of truth for live route debugging
  - If only part of the requested work is completed, explicitly disclose which parts are done and which are not
 
@@ -2,4 +2,4 @@ from __future__ import annotations
 
  __all__ = ["__version__"]
 
- __version__ = "0.2.0b30"
+ __version__ = "0.2.0b32"
@@ -17,6 +17,7 @@ from .tools.qingbi_report_tools import QingbiReportTools
  from .tools.record_tools import RecordTools
  from .tools.role_tools import RoleTools
  from .tools.solution_tools import SolutionTools
+ from .tools.task_context_tools import TaskContextTools
  from .tools.view_tools import ViewTools
  from .tools.workflow_tools import WorkflowTools
  from .tools.workspace_tools import WorkspaceTools
@@ -78,7 +79,10 @@ Analysis answers must include concrete numbers. When applicable, include percent
 
  `record_schema_get -> record_list / record_get / record_write`
 
- - For `columns`, prefer `[{{field_id}}]`; bare integer field ids remain supported for compatibility.
+ - Use `columns` as `[{{field_id}}]`
+ - Use `where` items as `{{field_id, op, value}}`
+ - Use `order_by` items as `{{field_id, direction}}`
+ - Legacy forms such as bare integer `field_id`, `fieldId`, `operator`, `values`, or `order` may still parse, but they are compatibility-only and not the canonical DSL
 
  `record_write` uses SQL-like JSON clauses:
 
@@ -90,6 +94,14 @@ Analysis answers must include concrete numbers. When applicable, include percent
  - If a member or department field id is known but candidate ids are not, use `record_member_candidates` or `record_department_candidates` before `record_write`.
  - For default-all member or department fields, prefer those field candidate tools instead of starting with `directory_*`.
 
+ ## Task Workflow Path
+
+ `task_list -> task_get -> task_action_execute`
+
+ - Use `task_associated_report_detail_get` for associated view or report details.
+ - Use `task_workflow_log_get` for full workflow log history.
+ - Task actions operate on `app_key + record_id + workflow_node_id`, not `task_id`.
+
  ## Time Handling
 
  Normalize relative dates before building DSL.
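The identity rule in the Task Workflow Path above (actions operate on `app_key + record_id + workflow_node_id`, never on `task_id` alone) can be sketched as follows. The helper name and the sample task dict are hypothetical, not part of the MCP surface:

```python
# Sketch of the action identity rule: task_id only locates a todo row,
# while the action primary key is (app_key, record_id, workflow_node_id).
def action_key(task: dict) -> tuple:
    """Build the action primary key, ignoring the task_id locator."""
    missing = [k for k in ("app_key", "record_id", "workflow_node_id") if task.get(k) is None]
    if missing:
        raise ValueError(f"cannot act on task: missing {missing}")
    return (task["app_key"], task["record_id"], task["workflow_node_id"])

# Fabricated todo row as task_list might surface it.
todo = {"task_id": 9001, "app_key": "demo", "record_id": 42, "workflow_node_id": 7}
print(action_key(todo))  # ('demo', 42, 7)
```

Keeping `task_id` out of the key makes re-discovery safe: a stale locator cannot silently redirect an approve or reject to the wrong node.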
@@ -112,6 +124,7 @@ Avoid builder-side app or schema changes here.""",
      WorkspaceTools(sessions, backend).register(server)
      FileTools(sessions, backend).register(server)
      RecordTools(sessions, backend).register(server)
+     TaskContextTools(sessions, backend).register(server)
      RoleTools(sessions, backend).register(server)
      AppTools(sessions, backend).register(server)
      QingbiReportTools(sessions, backend).register(server)
@@ -12,6 +12,7 @@ from .tools.auth_tools import AuthTools
  from .tools.directory_tools import DirectoryTools
  from .tools.file_tools import FileTools
  from .tools.record_tools import RecordTools
+ from .tools.task_context_tools import TaskContextTools
  from .tools.workspace_tools import WorkspaceTools
 
 
@@ -66,7 +67,10 @@ Analysis answers must include concrete numbers. When applicable, include percent
 
  `record_schema_get -> record_list / record_get / record_write`
 
- - For `columns`, prefer `[{{field_id}}]`; bare integer field ids remain supported for compatibility.
+ - Use `columns` as `[{{field_id}}]`
+ - Use `where` items as `{{field_id, op, value}}`
+ - Use `order_by` items as `{{field_id, direction}}`
+ - Legacy forms such as bare integer `field_id`, `fieldId`, `operator`, `values`, or `order` may still parse, but they are compatibility-only and not the canonical DSL
 
  `record_write` uses SQL-like JSON clauses:
 
@@ -78,6 +82,14 @@ Analysis answers must include concrete numbers. When applicable, include percent
  - If a member or department field id is known but candidate ids are not, use `record_member_candidates` or `record_department_candidates` before `record_write`.
  - For default-all member or department fields, prefer those field candidate tools instead of starting with `directory_*`.
 
+ ## Task Workflow Path
+
+ `task_list -> task_get -> task_action_execute`
+
+ - Use `task_associated_report_detail_get` for associated view or report details.
+ - Use `task_workflow_log_get` for full workflow log history.
+ - Task actions operate on `app_key + record_id + workflow_node_id`, not `task_id`.
+
  ## Time Handling
 
  Normalize relative dates before building DSL.
@@ -218,6 +230,7 @@ Avoid builder-side app or schema changes here.""",
      )
 
      RecordTools(sessions, backend).register(server)
+     TaskContextTools(sessions, backend).register(server)
      DirectoryTools(sessions, backend).register(server)
 
      return server
@@ -611,9 +611,11 @@ class ApprovalTools(ToolBase):
          self._normalize_alias(body, "formId", "form_id")
 
          node_id = self._extract_node_id(body)
-         body["nodeId"] = node_id
+         body["nodeId"] = self._resolve_actionable_node_id(context, app_key, apply_id, node_id)
          body["applyId"] = self._match_or_fill_int(body, field_name="applyId", expected_value=apply_id)
          body["formId"] = self._resolve_form_id(profile, context, app_key, explicit_form_id=body.get("formId"))
+         if body.get("answers") is None:
+             body["answers"] = self._fetch_current_todo_answers(context, app_key, apply_id, body["nodeId"])
 
          self._validate_approval_payload(body)
          return body
@@ -672,6 +674,54 @@ class ApprovalTools(ToolBase):
          elif alias_value is not None and payload.get(canonical_key) != alias_value:
              raise_tool_error(QingflowApiError.config_error(f"payload.{canonical_key} and payload.{alias_key} must match when both are provided"))
 
+     def _resolve_actionable_node_id(self, context, app_key: str, apply_id: int, node_id: int) -> int:  # type: ignore[no-untyped-def]
+         infos = self.backend.request(
+             "GET",
+             context,
+             f"/app/{app_key}/apply/{apply_id}/auditInfo",
+             params={"type": 1},
+         )
+         if not isinstance(infos, list) or not infos:
+             raise_tool_error(
+                 QingflowApiError.config_error(
+                     f"apply_id={apply_id} is not currently actionable for the logged-in user in todo list"
+                 )
+             )
+         actionable_node_ids = {
+             candidate
+             for item in infos
+             if isinstance(item, dict)
+             for candidate in (item.get("auditNodeId"), item.get("nodeId"))
+             if isinstance(candidate, int) and candidate > 0
+         }
+         if node_id not in actionable_node_ids:
+             raise_tool_error(
+                 QingflowApiError.config_error(
+                     f"payload.nodeId={node_id} is not an actionable todo node for apply_id={apply_id}"
+                 )
+             )
+         return node_id
+
+     def _fetch_current_todo_answers(self, context, app_key: str, apply_id: int, node_id: int) -> list[dict[str, Any]]:  # type: ignore[no-untyped-def]
+         detail = self.backend.request(
+             "GET",
+             context,
+             f"/app/{app_key}/apply/{apply_id}",
+             params={"role": 3, "listType": 1, "auditNodeId": node_id},
+         )
+         answers = detail.get("answers") if isinstance(detail, dict) else None
+         if not isinstance(answers, list):
+             raise_tool_error(
+                 QingflowApiError.config_error(
+                     f"cannot resolve current answers for apply_id={apply_id} nodeId={node_id}"
+                 )
+             )
+         normalized_answers: list[dict[str, Any]] = []
+         for item in answers:
+             if isinstance(item, dict):
+                 normalized_answers.append(dict(item))
+         return normalized_answers
+
      def _validate_approval_payload(self, payload: dict[str, Any]) -> None:
          self._reject_unsupported_fields(payload)
          if not isinstance(payload.get("formId"), int) or payload["formId"] <= 0:
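The actionable-node guard added in the hunk above filters the todo `auditInfo` rows down to positive integer node ids before accepting a `nodeId`. A standalone sketch of just that set-building step, with fabricated rows:

```python
# Collect every positive integer auditNodeId/nodeId from auditInfo-style
# rows, skipping non-dict entries and non-positive values. Mirrors the set
# comprehension in _resolve_actionable_node_id; sample rows are fabricated.
def actionable_node_ids(infos: list) -> set[int]:
    return {
        candidate
        for item in infos
        if isinstance(item, dict)
        for candidate in (item.get("auditNodeId"), item.get("nodeId"))
        if isinstance(candidate, int) and candidate > 0
    }

rows = [{"auditNodeId": 31}, {"nodeId": 31}, {"nodeId": 0}, "garbage", {"auditNodeId": 55}]
print(sorted(actionable_node_ids(rows)))  # [31, 55]
```

A requested `nodeId` outside this set is rejected up front, so approve/reject payloads can no longer target a node the current user cannot act on.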
@@ -680,6 +730,9 @@ class ApprovalTools(ToolBase):
              raise_tool_error(QingflowApiError.config_error("payload.applyId must be a positive integer"))
          if not isinstance(payload.get("nodeId"), int) or payload["nodeId"] <= 0:
              raise_tool_error(QingflowApiError.config_error("payload.nodeId must be a positive integer"))
+         answers = payload.get("answers")
+         if answers is not None and not isinstance(answers, list):
+             raise_tool_error(QingflowApiError.config_error("payload.answers must be an array when provided"))
 
      def _validate_audit_payload(self, payload: dict[str, Any], *, require_uid: bool = False) -> None:
          self._reject_unsupported_fields(payload)
@@ -565,6 +565,12 @@ class RecordTools(ToolBase):
              raise_tool_error(QingflowApiError.config_error("app_key is required"))
          if limit <= 0:
              raise_tool_error(QingflowApiError.config_error("limit must be positive"))
+         legacy_warnings = _detect_analyze_legacy_warnings(
+             dimensions=dimensions,
+             metrics=metrics,
+             filters=filters,
+             sort=sort,
+         )
 
          def runner(session_profile, context):
              index = self._get_field_index(profile, context, app_key, force_refresh=False)
@@ -594,6 +600,7 @@ class RecordTools(ToolBase):
                  limit=limit,
                  strict_full=strict_full,
                  output_profile=output_profile,
+                 extra_warnings=legacy_warnings,
              )
 
          return self._run_record_tool(profile, runner)
@@ -615,6 +622,7 @@ class RecordTools(ToolBase):
          normalized_output_profile = self._normalize_public_output_profile(output_profile)
          if not app_key:
              raise_tool_error(QingflowApiError.config_error("app_key is required"))
+         legacy_warnings = _detect_record_list_legacy_warnings(columns=columns, where=where, order_by=order_by)
          normalized_columns = _normalize_public_column_selectors(columns)
          if not normalized_columns:
              raise_tool_error(QingflowApiError.config_error("columns is required"))
@@ -651,6 +659,7 @@ class RecordTools(ToolBase):
          list_data = cast(JSONObject, cast(JSONObject, raw["data"])["list"])
          pagination = cast(JSONObject, list_data["pagination"])
          warnings: list[JSONObject] = []
+         warnings.extend(legacy_warnings)
          warning = _normalize_optional_text(list_data.get("analysis_warning"))
          if warning:
              warnings.append({"code": "BROWSE_ONLY", "message": warning})
@@ -1906,6 +1915,7 @@ class RecordTools(ToolBase):
          limit: int,
          strict_full: bool,
          output_profile: str,
+         extra_warnings: list[JSONObject] | None = None,
      ) -> JSONObject:
          started_at = time.perf_counter()
          analysis_paging = _fixed_analysis_scan_policy()
@@ -2068,7 +2078,11 @@ class RecordTools(ToolBase):
              for idx, row in enumerate(rows, start=1)
          ]
 
-         warnings = self._build_analyze_warnings(local_filtering=local_filtering, rows_truncated=rows_truncated)
+         warnings = self._build_analyze_warnings(
+             local_filtering=local_filtering,
+             rows_truncated=rows_truncated,
+             extra_warnings=extra_warnings or [],
+         )
          completeness: JSONObject = {
              "status": completeness_status,
              "safe_for_final_conclusion": completeness_status == "complete",
@@ -2230,8 +2244,15 @@ class RecordTools(ToolBase):
          )
          return sorted_rows
 
-     def _build_analyze_warnings(self, *, local_filtering: bool, rows_truncated: bool) -> list[JSONObject]:
+     def _build_analyze_warnings(
+         self,
+         *,
+         local_filtering: bool,
+         rows_truncated: bool,
+         extra_warnings: list[JSONObject],
+     ) -> list[JSONObject]:
          warnings: list[JSONObject] = []
+         warnings.extend(extra_warnings)
          if local_filtering:
              warnings.append({"code": "LOCAL_VIEW_FILTERING"})
          if rows_truncated:
@@ -3734,18 +3755,23 @@ class RecordTools(ToolBase):
          for idx, item in enumerate(where):
              if not isinstance(item, dict):
                  raise_tool_error(QingflowApiError.config_error(f"where[{idx}] must be an object"))
+             _ensure_allowed_record_list_keys(
+                 item,
+                 location=f"where[{idx}]",
+                 allowed_keys={"field_id", "fieldId", "op", "operator", "value", "values"},
+                 example="{'field_id': 12, 'op': 'eq', 'value': '进行中'}",
+             )
              field_id = _coerce_count(item.get("field_id", item.get("fieldId")))
              if field_id is None:
                  raise_tool_error(QingflowApiError.config_error(f"where[{idx}] requires field_id"))
              payload: JSONObject = {"field_id": field_id}
-             if "op" in item:
-                 payload["op"] = item["op"]
-             if "operator" in item:
-                 payload["operator"] = item["operator"]
+             op = item.get("op", item.get("operator"))
+             if op is not None:
+                 payload["op"] = op
              if "value" in item:
                  payload["value"] = item["value"]
              elif "values" in item:
-                 payload["values"] = item["values"]
+                 payload["value"] = item["values"]
              normalized.append(payload)
          return normalized
 
@@ -3754,6 +3780,12 @@ class RecordTools(ToolBase):
          for idx, item in enumerate(order_by):
              if not isinstance(item, dict):
                  raise_tool_error(QingflowApiError.config_error(f"order_by[{idx}] must be an object"))
+             _ensure_allowed_record_list_keys(
+                 item,
+                 location=f"order_by[{idx}]",
+                 allowed_keys={"field_id", "fieldId", "direction", "order"},
+                 example="{'field_id': 18, 'direction': 'desc'}",
+             )
              field_id = _coerce_count(item.get("field_id", item.get("fieldId")))
              if field_id is None:
                  raise_tool_error(QingflowApiError.config_error(f"order_by[{idx}] requires field_id"))
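The `where` normalization in the hunks above now collapses `op`/`operator` into a single `op` key and carries the legacy `values` key forward under `value`. A standalone sketch of that merge step, with a made-up legacy input (error handling and `field_id` coercion from the real helper are omitted):

```python
# Normalize one where item: op/operator merge into "op", and legacy
# "values" is re-keyed as "value". Mirrors the hunk above in isolation.
def normalize_where_item(item: dict) -> dict:
    payload = {"field_id": item.get("field_id", item.get("fieldId"))}
    op = item.get("op", item.get("operator"))
    if op is not None:
        payload["op"] = op
    if "value" in item:
        payload["value"] = item["value"]
    elif "values" in item:
        payload["value"] = item["values"]
    return payload

legacy = {"fieldId": 12, "operator": "in", "values": ["a", "b"]}
print(normalize_where_item(legacy))  # {'field_id': 12, 'op': 'in', 'value': ['a', 'b']}
```

Note that `op` wins over `operator` and `value` wins over `values` when both spellings appear, so canonical input always passes through untouched.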
@@ -5361,6 +5393,12 @@ def _normalize_public_column_selectors(columns: list[JSONObject | int]) -> list[
          if isinstance(item, int):
              field_id = item
          elif isinstance(item, dict):
+             _ensure_allowed_record_list_keys(
+                 item,
+                 location="columns[]",
+                 allowed_keys={"field_id", "fieldId"},
+                 example="{'field_id': 12}",
+             )
              field_id = _coerce_count(item.get("field_id", item.get("fieldId")))
              if field_id is None or field_id < 0:
                  raise_tool_error(
@@ -5376,6 +5414,92 @@ def _column_selector_payload(field_id: int) -> JSONObject:
      return {"field_id": field_id}
 
 
+ def _ensure_allowed_record_list_keys(
+     item: JSONObject,
+     *,
+     location: str,
+     allowed_keys: set[str],
+     example: str,
+ ) -> None:
+     unexpected_keys = sorted(str(key) for key in item.keys() if str(key) not in allowed_keys)
+     if unexpected_keys:
+         raise_tool_error(
+             QingflowApiError.config_error(
+                 f"{location} contains unsupported keys: {unexpected_keys}. Use {example}."
+             )
+         )
+
+
+ def _detect_record_list_legacy_warnings(
+     *,
+     columns: list[JSONObject | int],
+     where: list[JSONObject],
+     order_by: list[JSONObject],
+ ) -> list[JSONObject]:
+     warnings: list[JSONObject] = []
+     if any(isinstance(item, int) or (isinstance(item, dict) and "fieldId" in item) for item in columns):
+         warnings.append(
+             {
+                 "code": "LEGACY_LIST_COLUMNS_DSL",
+                 "message": "Use columns as [{field_id}] objects. Bare integers and fieldId are compatibility-only.",
+             }
+         )
+     if any(isinstance(item, dict) and any(key in item for key in ("fieldId", "operator", "values")) for item in where):
+         warnings.append(
+             {
+                 "code": "LEGACY_LIST_FILTER_DSL",
+                 "message": "Use where items as {field_id, op, value}. fieldId/operator/values are compatibility-only.",
+             }
+         )
+     if any(isinstance(item, dict) and any(key in item for key in ("fieldId", "order")) for item in order_by):
+         warnings.append(
+             {
+                 "code": "LEGACY_LIST_SORT_DSL",
+                 "message": "Use order_by items as {field_id, direction}. fieldId/order are compatibility-only.",
+             }
+         )
+     return warnings
+
+
+ def _detect_analyze_legacy_warnings(
+     *,
+     dimensions: list[JSONObject],
+     metrics: list[JSONObject],
+     filters: list[JSONObject],
+     sort: list[JSONObject],
+ ) -> list[JSONObject]:
+     warnings: list[JSONObject] = []
+     if any(isinstance(item, dict) and "fieldId" in item for item in dimensions):
+         warnings.append(
+             {
+                 "code": "LEGACY_ANALYZE_DIMENSION_DSL",
+                 "message": "Use dimensions as {field_id, alias, bucket}. fieldId is compatibility-only.",
+             }
+         )
+     if any(isinstance(item, dict) and any(key in item for key in ("fieldId", "type", "agg", "aggregation")) for item in metrics):
+         warnings.append(
+             {
+                 "code": "LEGACY_ANALYZE_METRIC_DSL",
+                 "message": "Use metrics as {op, field_id, alias}. fieldId/type/agg/aggregation are compatibility-only.",
+             }
+         )
+     if any(isinstance(item, dict) and any(key in item for key in ("fieldId", "operator", "values")) for item in filters):
+         warnings.append(
+             {
+                 "code": "LEGACY_ANALYZE_FILTER_DSL",
+                 "message": "Use filters as {field_id, op, value}. fieldId/operator/values are compatibility-only.",
+             }
+         )
+     if any(isinstance(item, dict) and "direction" in item for item in sort):
+         warnings.append(
+             {
+                 "code": "LEGACY_ANALYZE_SORT_DSL",
+                 "message": "Use sort items as {by, order}. direction is compatibility-only.",
+             }
+         )
+     return warnings
+
+
  def _resolve_sort_ascend(item: JSONObject) -> bool:
      if "isAscend" in item:
          return bool(item["isAscend"])
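The legacy-DSL detection added above is pure key inspection: the presence of any compatibility-only key yields at most one warning per clause type. A minimal standalone replica for `where` items only (the real helper also covers `columns` and `order_by` and attaches human-readable messages):

```python
# One warning if any where item uses a compatibility-only key; no warnings
# for canonical {field_id, op, value} input. Replicates only the where branch.
def detect_where_legacy(where: list) -> list[dict]:
    warnings = []
    if any(
        isinstance(item, dict) and any(k in item for k in ("fieldId", "operator", "values"))
        for item in where
    ):
        warnings.append({"code": "LEGACY_LIST_FILTER_DSL"})
    return warnings

print(detect_where_legacy([{"field_id": 12, "op": "eq", "value": "x"}]))  # []
print(detect_where_legacy([{"fieldId": 12, "op": "eq", "value": "x"}]))  # [{'code': 'LEGACY_LIST_FILTER_DSL'}]
```

Because detection runs before normalization, legacy payloads still execute but surface the warning alongside the results, nudging callers toward the canonical DSL without breaking them.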