@josephyan/qingflow-app-user-mcp 0.2.0-beta.24 → 0.2.0-beta.26

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -3,13 +3,13 @@
  Install:

  ```bash
- npm install @josephyan/qingflow-app-user-mcp@0.2.0-beta.24
+ npm install @josephyan/qingflow-app-user-mcp@0.2.0-beta.26
  ```

  Run:

  ```bash
- npx -y -p @josephyan/qingflow-app-user-mcp@0.2.0-beta.24 qingflow-app-user-mcp
+ npx -y -p @josephyan/qingflow-app-user-mcp@0.2.0-beta.26 qingflow-app-user-mcp
  ```

  Environment:
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@josephyan/qingflow-app-user-mcp",
- "version": "0.2.0-beta.24",
+ "version": "0.2.0-beta.26",
  "description": "Operational end-user MCP for Qingflow records, tasks, comments, and directory workflows.",
  "license": "MIT",
  "type": "module",
package/pyproject.toml CHANGED
@@ -4,7 +4,7 @@ build-backend = "setuptools.build_meta"

  [project]
  name = "qingflow-mcp"
- version = "0.2.0b24"
+ version = "0.2.0b26"
  description = "User-authenticated MCP server for Qingflow"
  readme = "README.md"
  license = "MIT"
@@ -41,6 +41,7 @@ Route to exactly one of these specialized paths:

  ## Routing Rules

+ - If the user does not know the target `app_key`, discover apps first with `app_list` or `app_search`, then route to the specialized skill
  - If the task is about browsing, reading, creating, updating, deleting, attachments, relations, or subtable writes, switch to `$qingflow-record-crud`
  - If the task is about inbox, todo, cc, task-center workload, comments, approval, reject, rollback, transfer, urge, or directory lookup, switch to `$qingflow-task-ops`
  - If the task is about grouped distributions, ratios, rankings, trends, insights, or any final statistical conclusion, switch to `$qingflow-record-analysis`
@@ -7,15 +7,21 @@ metadata:

  # Qingflow Record Analysis

+ Analysis tasks must start with `record_schema_get`.
+ Use field_id-based DSLs only.
+
  ## Step 1: `record_schema_get` → Step 2: build DSL → Step 3: `record_analyze`

  This is the ONLY execution order. Never skip step 1. Never call `record_analyze` without a schema.

- Tools: `record_schema_get`, `record_analyze`. Use `record_list`/`record_get` only for sample rows AFTER analysis.
+ Tools: `record_schema_get`, `record_analyze`. Use `record_list`/`record_get` only for sample rows AFTER analysis, and treat those read paths as belonging to [$qingflow-record-crud](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-record-crud/SKILL.md).
+ Comments, approvals, rollback, transfer, urge, and directory lookup stay in [$qingflow-task-ops](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-task-ops/SKILL.md), not in this analysis skill.

  ---

- ## DSL FORMAT (CRITICAL — read this FIRST)
+ ## DSL Contract
+
+ ### DSL FORMAT (CRITICAL — read this FIRST)

  ### ✅ Correct vs ❌ Wrong — learn from these before building ANY DSL

@@ -130,6 +136,20 @@ More templates:

  ## RULES

+ - Normalize relative time phrases into explicit legal date ranges.
+ - Penetration-rate, conversion-rate, and share-of-total conclusions must define the numerator and denominator first.
+ - Do not claim a metric you did not query.
+ - Derived ratios must be computed outside the DSL.
+ - Before choosing a DSL shape, first decide whether the question needs `count`, `sum`, `avg`, `distinct_count`, `ratio`, or `ranking`.
+ - If a field is still ambiguous after `record_schema_get`, do not guess; ask the user to confirm from a short candidate list.
+ - Rankings must come from structured sorted results.
+ - For partial answers, explicitly disclose which parts are complete and which parts remain unresolved.
+ - Complex answers should default to structure first, interpretation second.
+ - `between`: pass a two-item array.
+ - Sort entries must reference an alias already defined in `dimensions` or `metrics`.
+ - Final wording should stay as close as possible to schema titles.
+ - Do not pass field titles, aliases, or guessed ids.
+ - If `completeness.statement_scope=returned_groups_only` or `completeness.rows_truncated=true`, downgrade wording to returned groups only.
  - All `field_id` MUST come from `record_schema_get`. Never guess or use field titles.
  - One DSL per question. Multiple small DSLs > one overloaded request.
  - Normalize relative dates to concrete ranges BEFORE building DSL. Never send impossible dates (e.g. `2026-02-29`).
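The rules above can be illustrated with a minimal DSL sketch. The field ids, aliases, and overall payload shape here are hypothetical (the real schema comes from `record_schema_get`, and the exact `record_analyze` payload contract is not shown in this diff); the sketch only demonstrates the invariants the rules name:

```python
# Hypothetical record_analyze DSL following the RULES above. Field ids
# (101, 102) and aliases are illustrative only, not real schema values.
dsl = {
    "dimensions": [{"field_id": 101, "alias": "部门"}],  # field_id from record_schema_get, never a title
    "metrics": [{"op": "count", "alias": "记录数"}],      # count must NOT carry a field_id
    "filters": [
        # the relative phrase "last month" normalized to an explicit legal range
        {"field_id": 102, "op": "between", "value": ["2026-01-01", "2026-01-31"]},
    ],
    "sort": [{"alias": "记录数", "order": "desc"}],       # alias must already be defined above
}

# The three structural invariants from the RULES, checked mechanically:
defined_aliases = {d["alias"] for d in dsl["dimensions"]} | {m["alias"] for m in dsl["metrics"]}
assert all(s["alias"] in defined_aliases for s in dsl["sort"])
assert all("field_id" not in m for m in dsl["metrics"] if m["op"] == "count")
assert all(len(f["value"]) == 2 for f in dsl["filters"] if f["op"] == "between")
```

Derived ratios (e.g. a conversion rate from two such counts) would then be computed outside the DSL, as the rules require.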
@@ -63,6 +63,8 @@ Use exactly one of these default paths:

  ## Supporting Tools

+ - `app_list`
+ - `app_search`
  - `directory_search`
  - `directory_list_internal_users`
  - `directory_list_internal_departments`
@@ -75,12 +77,13 @@ Use exactly one of these default paths:
  1. Ensure auth exists
  2. Ensure workspace is selected
  3. Confirm target app and whether the task is browse / detail / write / analysis
- 4. Run `record_schema_get` before any non-trivial record read or write
- 5. If the request is analysis-like, switch to [$qingflow-record-analysis](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-record-analysis/SKILL.md)
- 6. If the request is write-like, decide `insert / update / delete` before building any payload
- 7. If fields are still ambiguous after `record_schema_get`, ask the user to confirm from a short candidate list instead of guessing
- 8. For high-risk writes or production changes, read the current state first whenever practical
- 9. After actions, report the affected `record_id`, counts, or returned item count
+ 4. If `app_key` is unknown, use `app_list` or `app_search` first
+ 5. Run `record_schema_get` before any non-trivial record read or write
+ 6. If the request is analysis-like, switch to [$qingflow-record-analysis](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-record-analysis/SKILL.md)
+ 7. If the request is write-like, decide `insert / update / delete` before building any payload
+ 8. If fields are still ambiguous after `record_schema_get`, ask the user to confirm from a short candidate list instead of guessing
+ 9. For high-risk writes or production changes, read the current state first whenever practical
+ 10. After actions, report the affected `record_id`, counts, or returned item count

  ## Record Read Rules

@@ -2,4 +2,4 @@ from __future__ import annotations

  __all__ = ["__version__"]

- __version__ = "0.2.0b23"
+ __version__ = "0.2.0b26"
@@ -28,6 +28,7 @@ def build_server() -> FastMCP:
  instructions=(
  "Use auth_login first, then workspace_list and workspace_select. "
  "All resource tools operate with the logged-in user's Qingflow permissions.\n\n"
+ "If app_key is unknown, use app_list or app_search first to discover current-user visible apps in the selected workspace. "
  "For analytics, use record_schema_get first, let the model build field_id-based DSL, "
  "then call record_analyze. record_analyze returns compact business-first output as query/result/ranking/ratios/completeness/presentation; use verbose only for route/debug details. "
  "record_schema_get returns the current user's applicant-node visible schema only; hidden fields are omitted and missing fields should be treated as not visible in the current permission scope. "
@@ -6,6 +6,7 @@ from .backend_client import BackendClient
  from .config import DEFAULT_PROFILE
  from .session_store import SessionStore
  from .tools.approval_tools import ApprovalTools
+ from .tools.app_tools import AppTools
  from .tools.auth_tools import AuthTools
  from .tools.directory_tools import DirectoryTools
  from .tools.file_tools import FileTools
@@ -19,6 +20,7 @@ def build_user_server() -> FastMCP:
  "Qingflow App User MCP",
  instructions=(
  "Use this server for Qingflow operational workflows with a schema-first path. "
+ "If app_key is unknown, use app_list or app_search first to discover current-user visible apps in the selected workspace. "
  "For records, start with record_schema_get, then choose record_list, record_get, or record_write. "
  "record_schema_get returns the current user's applicant-node visible schema only; hidden fields are omitted and missing fields should be treated as not visible in the current permission scope. "
  "For analytics, switch to record_schema_get and record_analyze; its default output is compact query/result/ranking/ratios/completeness/presentation, with route/debug only in verbose mode. "
@@ -29,6 +31,7 @@ def build_user_server() -> FastMCP:
  sessions = SessionStore()
  backend = BackendClient()
  auth = AuthTools(sessions, backend)
+ apps = AppTools(sessions, backend)
  workspace = WorkspaceTools(sessions, backend)
  files = FileTools(sessions, backend)
  approvals = ApprovalTools(sessions, backend)
@@ -96,6 +99,14 @@ def build_user_server() -> FastMCP:
  def workspace_select(profile: str = DEFAULT_PROFILE, ws_id: int = 0) -> dict:
  return workspace.workspace_select(profile=profile, ws_id=ws_id)

+ @server.tool()
+ def app_list(profile: str = DEFAULT_PROFILE) -> dict:
+ return apps.app_list(profile=profile)
+
+ @server.tool()
+ def app_search(profile: str = DEFAULT_PROFILE, keyword: str = "", page_num: int = 1, page_size: int = 50) -> dict:
+ return apps.app_search(profile=profile, keyword=keyword, page_num=page_num, page_size=page_size)
+
  @server.tool()
  def file_get_upload_info(
  profile: str = DEFAULT_PROFILE,
@@ -76,14 +76,20 @@ class AppTools(ToolBase):
  return self.app_publish(profile=profile, app_key=app_key, payload=payload or {})

  def app_list(self, *, profile: str, ship_auth: bool = False) -> JSONObject:
- """Get all apps with full hierarchy from tag/apps endpoint."""
+ """List current-user visible apps in the selected workspace."""
  def runner(session_profile, context):
  result = self.backend.request("GET", context, "/tag/apps")
- return {
+ items, source_shape = self._extract_visible_apps(result)
+ response = {
  "profile": profile,
  "ws_id": session_profile.selected_ws_id,
- "items": result,
+ "items": items,
+ "count": len(items),
+ "source_shape": source_shape,
  }
+ if ship_auth:
+ response["raw"] = result
+ return response

  return self._run(profile, runner)
@@ -98,19 +104,20 @@ class AppTools(ToolBase):

  result = self.backend.request("GET", context, "/app/item", params=params)

- # Extract app list from the response
  apps = []
  if isinstance(result, dict):
  items = result.get("list", [])
  for item in items:
  if isinstance(item, dict):
- apps.append({
- "app_key": item.get("appKey"),
- "title": item.get("title") or item.get("formTitle"),
- "form_id": item.get("formId"),
- "tag_id": item.get("tagId"),
- "group_id": item.get("groupId"),
- })
+ normalized = self._normalize_visible_app(
+ item,
+ package_tag_id=_coerce_positive_int(item.get("tagId")),
+ package_name=str(item.get("tagName") or "").strip() or None,
+ group_id=_coerce_positive_int(item.get("groupId")),
+ group_name=str(item.get("groupName") or "").strip() or None,
+ )
+ if normalized is not None:
+ apps.append(normalized)

  return {
  "profile": profile,
@@ -119,6 +126,7 @@
  "page_num": page_num,
  "page_size": page_size,
  "total": result.get("total") if isinstance(result, dict) else len(apps),
+ "items": apps,
  "apps": apps,
  }

@@ -424,6 +432,88 @@ class AppTools(ToolBase):
  }
  return {key: value for key, value in compact.items() if value is not None}

+ def _extract_visible_apps(self, result: Any) -> tuple[list[JSONObject], str]:
+ apps: list[JSONObject] = []
+ seen: set[str] = set()
+
+ def walk(
+ node: Any,
+ *,
+ package_tag_id: int | None = None,
+ package_name: str | None = None,
+ group_id: int | None = None,
+ group_name: str | None = None,
+ ) -> None:
+ if isinstance(node, list):
+ for item in node:
+ walk(
+ item,
+ package_tag_id=package_tag_id,
+ package_name=package_name,
+ group_id=group_id,
+ group_name=group_name,
+ )
+ return
+ if not isinstance(node, dict):
+ return
+
+ next_package_tag_id = _coerce_positive_int(node.get("tagId")) or package_tag_id
+ next_package_name = str(node.get("tagName") or "").strip() or package_name
+ next_group_id = _coerce_positive_int(node.get("groupId")) or group_id
+ next_group_name = str(node.get("groupName") or node.get("groupTitle") or "").strip() or group_name
+
+ normalized = self._normalize_visible_app(
+ node,
+ package_tag_id=next_package_tag_id,
+ package_name=next_package_name,
+ group_id=next_group_id,
+ group_name=next_group_name,
+ )
+ if normalized is not None:
+ app_key = str(normalized.get("app_key") or "").strip()
+ if app_key and app_key not in seen:
+ seen.add(app_key)
+ apps.append(normalized)
+
+ for value in node.values():
+ if isinstance(value, (list, dict)):
+ walk(
+ value,
+ package_tag_id=next_package_tag_id,
+ package_name=next_package_name,
+ group_id=next_group_id,
+ group_name=next_group_name,
+ )
+
+ walk(result)
+ return apps, type(result).__name__
+
+ def _normalize_visible_app(
+ self,
+ item: dict[str, Any],
+ *,
+ package_tag_id: int | None,
+ package_name: str | None,
+ group_id: int | None,
+ group_name: str | None,
+ ) -> JSONObject | None:
+ app_key = str(item.get("appKey") or item.get("app_key") or "").strip()
+ if not app_key:
+ return None
+ title = str(item.get("title") or item.get("formTitle") or item.get("appName") or item.get("name") or app_key).strip() or app_key
+ tag_ids = item.get("tagIds") if isinstance(item.get("tagIds"), list) else []
+ compact = {
+ "app_key": app_key,
+ "title": title,
+ "form_id": item.get("formId"),
+ "tag_id": package_tag_id,
+ "package_name": package_name,
+ "group_id": group_id,
+ "group_name": group_name,
+ "tag_ids": [value for value in (_coerce_positive_int(tag_id) for tag_id in tag_ids) if value is not None],
+ }
+ return {key: value for key, value in compact.items() if value not in (None, [], "", {})}
+
  def _count_auth_members(self, auth_payload: Any, member_key: str) -> int:
  if not isinstance(auth_payload, dict):
  return 0
@@ -470,3 +560,11 @@ def _normalize_form_type(value: int | str) -> int:
  if text in FORM_TYPE_ALIASES:
  return FORM_TYPE_ALIASES[text]
  raise_tool_error(QingflowApiError.config_error("form_type must be a positive integer or one of: default, form, schema, new, draft, edit"))
+
+
+ def _coerce_positive_int(value: Any) -> int | None:
+ try:
+ number = int(value)
+ except (TypeError, ValueError):
+ return None
+ return number if number > 0 else None
@@ -303,15 +303,57 @@ class DirectoryTools(ToolBase):
  page_num: int,
  page_size: int,
  ) -> dict[str, Any]:
- if not keyword:
- raise_tool_error(QingflowApiError.config_error("keyword is required"))
+ if page_num <= 0:
+ raise_tool_error(QingflowApiError.config_error("page_num must be positive"))
+ if page_size <= 0:
+ raise_tool_error(QingflowApiError.config_error("page_size must be positive"))
+ normalized_keyword = keyword.strip()
+
+ if not normalized_keyword:
+ def runner(session_profile, context):
+ fetch_limit = max((page_num + 1) * page_size + 1, page_size + 1)
+ items, truncated, deepest_depth = self._walk_department_tree(
+ context,
+ parent_dept_id=None,
+ max_depth=20,
+ max_items=fetch_limit,
+ )
+ start = (page_num - 1) * page_size
+ page_items = items[start : start + page_size]
+ reported_total = None if truncated else len(items)
+ page_amount = None if truncated else ((len(items) + page_size - 1) // page_size if items else 0)
+ if truncated and page_items:
+ page_amount = max(page_num + 1, (start + len(page_items) + page_size - 1) // page_size)
+ return {
+ "profile": profile,
+ "ws_id": session_profile.selected_ws_id,
+ "request_route": self._request_route_payload(context),
+ "items": page_items,
+ "pagination": {
+ "page": page_num,
+ "page_size": page_size,
+ "returned_items": len(page_items),
+ "reported_total": reported_total,
+ "page_amount": page_amount,
+ "depth_scanned": deepest_depth + 1 if page_items else 0,
+ },
+ }
+
+ raw = self._run(profile, runner)
+ items = [item for item in raw.get("items", []) if isinstance(item, dict)]
+ return self._public_directory_response(
+ raw,
+ items=items,
+ pagination=raw.get("pagination", {}),
+ selection={"keyword": None},
+ )

  def runner(session_profile, context):
  result = self.backend.request(
  "GET",
  context,
  "/contact/deptByPage",
- params={"keyword": keyword, "pageNum": page_num, "pageSize": page_size},
+ params={"keyword": normalized_keyword, "pageNum": page_num, "pageSize": page_size},
  )
  return {
  "profile": profile,
@@ -332,7 +374,7 @@ class DirectoryTools(ToolBase):
  "reported_total": _coerce_int(_payload_value(raw.get("page"), "total")),
  "page_amount": _coerce_int(_payload_value(raw.get("page"), "pageAmount")),
  },
- selection={"keyword": keyword},
+ selection={"keyword": normalized_keyword},
  )

  def directory_list_all_departments(
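The keywordless branch added above pages a walked department tree in memory, and it deliberately reports `None` totals when the walk hit `max_items` (a truncated walk cannot know the true total). The slicing and page-count arithmetic can be sketched in isolation (a minimal extraction of that branch, not the full tool):

```python
def paginate(items: list, page_num: int, page_size: int, truncated: bool):
    # Mirrors the arithmetic in the keywordless directory_search branch:
    # slice the collected items for the requested page, then report totals
    # only when the tree walk was not truncated.
    start = (page_num - 1) * page_size
    page_items = items[start : start + page_size]
    reported_total = None if truncated else len(items)
    # Ceiling division (n + size - 1) // size gives the page count.
    page_amount = None if truncated else ((len(items) + page_size - 1) // page_size if items else 0)
    if truncated and page_items:
        # Truncated but this page is full/non-empty: advertise at least one
        # more page so callers keep paging instead of trusting a low count.
        page_amount = max(page_num + 1, (start + len(page_items) + page_size - 1) // page_size)
    return page_items, reported_total, page_amount
```

For example, 7 items at page 2 with size 3 yields items 3..5 and a page count of 3; the same walk truncated at page 1 yields `reported_total=None` and a page count of at least 2.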
@@ -1156,8 +1156,12 @@ class RecordTools(ToolBase):
  details={"location": f"metrics[{idx}]", "field": _field_ref_payload(field), "op": op},
  )
  elif item.get("field_id", item.get("fieldId")) is not None:
- # LLMs often pass field_id with count; silently ignore instead of raising
- pass
+ raise RecordInputError(
+ message=f"metrics[{idx}] with op 'count' must not include field_id",
+ error_code="INVALID_ANALYZE_METRIC",
+ fix_hint="For count, omit field_id and use only {'op': 'count', 'alias': '记录数'}.",
+ details={"location": f"metrics[{idx}]", "op": op},
+ )
  alias = _normalize_optional_text(item.get("alias"))
  if alias is None:
  if op == "count":
@@ -1682,6 +1686,9 @@ class RecordTools(ToolBase):
  support_matrix = _summarize_write_support(resolved_fields)
  invalid_fields: list[JSONObject] = []
  normalized_answers: list[JSONObject] = []
+ validation_warnings = [
+ "record_write performs static preflight from form metadata before apply; runtime visibility and dynamic linkage can still reject writes."
+ ]
  try:
  normalized_answers = self._resolve_answers(
  profile,
@@ -1702,6 +1709,16 @@ class RecordTools(ToolBase):
  "received_value": error.details.get("received_value") if error.details else None,
  }
  )
+ validation_answers = normalized_answers
+ if operation == "update" and apply_id is not None and not invalid_fields:
+ try:
+ existing_answers = self._load_record_answers_for_preflight(context, app_key=app_key, apply_id=apply_id)
+ except QingflowApiError:
+ validation_warnings.append(
+ "update preflight could not load the current record; required-field completeness was not revalidated."
+ )
+ else:
+ validation_answers = self._merge_record_answers(existing_answers, normalized_answers)
  readonly_or_system_fields = [
  {
  "que_id": entry.get("que_id"),
@@ -1717,8 +1734,9 @@ class RecordTools(ToolBase):
  ]
  provided_field_ids = {
  str(answer.get("queId"))
- for answer in normalized_answers
+ for answer in validation_answers
  if isinstance(answer.get("queId"), int) and int(answer["queId"]) > 0
+ and _answer_has_meaningful_content(answer)
  }
  missing_required_fields = []
  for field in index.by_id.values():
@@ -1734,9 +1752,6 @@ class RecordTools(ToolBase):
  )
  question_relations = _collect_question_relations(schema)
  option_links = _collect_option_links(resolved_fields)
- validation_warnings = [
- "record_write performs static preflight from form metadata before apply; runtime visibility and dynamic linkage can still reject writes."
- ]
  if question_relations:
  validation_warnings.append(
  "form contains questionRelations; linked visibility and runtime required rules may differ at submit time."
@@ -1791,6 +1806,39 @@ class RecordTools(ToolBase):
  "recommended_next_actions": actions,
  }

+ def _load_record_answers_for_preflight(
+ self,
+ context,  # type: ignore[no-untyped-def]
+ *,
+ app_key: str,
+ apply_id: int,
+ ) -> list[JSONObject]:
+ record = self.backend.request(
+ "GET",
+ context,
+ f"/app/{app_key}/apply/{apply_id}",
+ params={"role": 1, "listType": DEFAULT_RECORD_LIST_TYPE},
+ )
+ answers = record.get("answers") if isinstance(record, dict) else None
+ return [item for item in answers if isinstance(item, dict)] if isinstance(answers, list) else []
+
+ def _merge_record_answers(
+ self,
+ existing_answers: list[JSONObject],
+ patch_answers: list[JSONObject],
+ ) -> list[JSONObject]:
+ merged_by_id: dict[int, JSONObject] = {}
+ order: list[int] = []
+ for source in (existing_answers, patch_answers):
+ for item in source:
+ que_id = _coerce_count(item.get("queId")) if isinstance(item, dict) else None
+ if que_id is None or que_id <= 0:
+ continue
+ if que_id not in merged_by_id:
+ order.append(que_id)
+ merged_by_id[que_id] = item
+ return [merged_by_id[que_id] for que_id in order]
+
  def record_query(
  self,
  *,
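The update-preflight merge above combines the stored record's answers with the patch before checking required-field completeness. A standalone sketch of that merge-by-`queId` (the diff viewer flattens indentation, so this sketch assumes the intended semantics are "first-seen order, patch overrides existing", which matches the preflight's goal of validating the post-update record):

```python
def merge_answers(existing: list[dict], patch: list[dict]) -> list[dict]:
    # Merge answer lists keyed by queId, preserving first-seen order.
    # The patch is processed second, so its entry replaces an existing
    # answer for the same queId (assumed intent; see lead-in).
    merged: dict[int, dict] = {}
    order: list[int] = []
    for source in (existing, patch):
        for item in source:
            que_id = item.get("queId")
            if not isinstance(que_id, int) or que_id <= 0:
                continue  # skip malformed or system entries
            if que_id not in merged:
                order.append(que_id)
            merged[que_id] = item
    return [merged[q] for q in order]

existing = [{"queId": 1, "values": ["a"]}, {"queId": 2, "values": ["b"]}]
patch = [{"queId": 2, "values": ["B"]}, {"queId": 3, "values": ["c"]}]
out = merge_answers(existing, patch)
```

The merged view lets the completeness check see required fields that the patch does not touch but the stored record already fills.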
@@ -3942,6 +3990,26 @@ def _normalize_audit_nodes(payload: JSONValue) -> list[JSONObject]:
  return []


+ def _answer_has_meaningful_content(answer: JSONObject) -> bool:
+ table_values = answer.get("tableValues")
+ if isinstance(table_values, list) and table_values:
+ for row in table_values:
+ if isinstance(row, list) and any(_answer_has_meaningful_content(item) for item in row if isinstance(item, dict)):
+ return True
+ return False
+ values = answer.get("values")
+ if not isinstance(values, list) or not values:
+ return False
+ for item in values:
+ if isinstance(item, dict):
+ if any(value not in (None, "", [], {}) for value in item.values()):
+ return True
+ continue
+ if item not in (None, "", [], {}):
+ return True
+ return False
+
+
  def _extract_applicant_node(payload: JSONValue) -> WorkflowNodeRef | None:
  for item in _normalize_audit_nodes(payload):
  node_type = _coerce_count(item.get("type"))