@josephyan/qingflow-app-user-mcp 0.2.0-beta.22 → 0.2.0-beta.23

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -3,13 +3,13 @@
 Install:
 
 ```bash
-npm install @josephyan/qingflow-app-user-mcp@0.2.0-beta.22
+npm install @josephyan/qingflow-app-user-mcp@0.2.0-beta.23
 ```
 
 Run:
 
 ```bash
-npx -y -p @josephyan/qingflow-app-user-mcp@0.2.0-beta.22 qingflow-app-user-mcp
+npx -y -p @josephyan/qingflow-app-user-mcp@0.2.0-beta.23 qingflow-app-user-mcp
 ```
 
 Environment:
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "@josephyan/qingflow-app-user-mcp",
-  "version": "0.2.0-beta.22",
+  "version": "0.2.0-beta.23",
   "description": "Operational end-user MCP for Qingflow records, tasks, comments, and directory workflows.",
   "license": "MIT",
   "type": "module",
package/pyproject.toml CHANGED
@@ -4,7 +4,7 @@ build-backend = "setuptools.build_meta"
 
 [project]
 name = "qingflow-mcp"
-version = "0.2.0b22"
+version = "0.2.0b23"
 description = "User-authenticated MCP server for Qingflow"
 readme = "README.md"
 license = "MIT"
@@ -24,6 +24,12 @@ Use these tools as the core analysis surface:
 
 Use `record_list` or `record_get` only when you need sample rows or a specific supporting example after the main analysis path.
 
+`record_schema_get` now returns the **current user's applicant-node visible schema only**:
+
+- hidden fields are omitted entirely
+- absent fields should be interpreted as `当前用户在申请人节点下不可见/不可用`
+- do not treat the schema as a builder/full-field metadata dump
+
 ## Hard Rules
 
 - Analysis tasks must start with `record_schema_get`
@@ -71,6 +77,7 @@ For analysis:
 - whether each side needs its own DSL
 - If you cannot name the denominator from real schema fields and filters, do not use words like `渗透率`, `转化率`, `占比`, `比例`, or `%`
 - If a field is still ambiguous after `record_schema_get`, do not guess; either select one unique `field_id` from the schema or ask the user to confirm from a short candidate list
+- If a business field is absent from `record_schema_get`, do not infer or guess a hidden `field_id`; explain that the field is not visible in the current applicant-node permission scope
 - If a statement depends on `count`, query `count`
 - If a statement depends on total amount, query `sum`
 - If a statement depends on average level, query `avg` or derive it from trusted `sum + count`
@@ -11,6 +11,8 @@ Correct recovery:
 3. build one or more small DSLs
 4. run `record_analyze`
 
+The schema here is applicant-node visible-only. If a field is absent, treat it as not available to the current user rather than switching to guessed ids or builder-side memory.
+
 ## Normalize relative time phrases before building the DSL.
 
 Examples:
@@ -77,6 +79,8 @@ Correct recovery:
 2. if several plausible candidates remain, ask the user to confirm from a short list
 3. build the DSL only after the field is clear
 
+If the intended field is absent from the schema altogether, stop and explain that it is not visible in the current applicant-node permission scope.
+
 Examples of the right recovery question:
 
 - “我找到两个可能的字段:`线索来源`、`来源渠道`。你要按哪个字段统计?” (“I found two possible fields: `线索来源`, `来源渠道`. Which one should I aggregate by?”)
@@ -30,6 +30,8 @@ Result reading order:
 5. `completeness`
 6. `presentation`
 
+Treat `record_schema_get` as applicant-node visible-only schema. Missing fields are permission boundaries, not invitations to guess hidden ids.
+
 ## Distribution / ratio pattern
 
 1. Run `record_schema_get`
@@ -55,6 +55,12 @@ Use exactly one of these default paths:
 - `record_get`
 - `record_write`
 
+`record_schema_get` now returns the **current user's applicant-node schema only**:
+
+- only fields visible to the current user at the applicant node are returned
+- hidden fields are omitted entirely
+- missing fields should be treated as `当前用户在申请人节点下不可见/不可用`, not as a reason to guess a different field
+
 ## Supporting Tools
 
 - `directory_search`
@@ -80,12 +86,14 @@ Use exactly one of these default paths:
 
 - Use `record_list` for browse/export/sample inspection only
 - Use `record_get` when `record_id` is known
+- `record_get` without explicit `columns` still returns only applicant-node visible fields; do not assume it exposes the full builder-side record
 - `record_list` accepts:
   - `columns`
   - `where`
   - `order_by`
   - `limit`
   - `page`
+- `record_list` and `record_get` may reject hidden-field `field_id`s because record tools now validate against the applicant-node visible schema only
 - `record_list` is **not** an analysis tool
 - If a request turns into grouped distributions, ratios, rankings, trends, or final statistical conclusions, switch to [$qingflow-record-analysis](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-record-analysis/SKILL.md)
 
@@ -153,6 +161,7 @@ The DSL is clause-shaped like SQL, but it is **not raw SQL text**.
 - Do not use free-form `WHERE` updates or deletes
 - Do not auto-fill missing fields
 - Do not auto-resolve relation targets without first querying them
+- Do not assume `record_schema_get` is a builder/full-field schema. It is the current user's applicant-node visible schema only.
 
 ## Response Interpretation
 
@@ -170,4 +179,3 @@ The DSL is clause-shaped like SQL, but it is **not raw SQL text**.
 - Environment switching: [references/environments.md](references/environments.md)
 - Record operation patterns: [references/record-patterns.md](references/record-patterns.md)
 - Data gotchas: [references/data-gotchas.md](references/data-gotchas.md)
-
@@ -6,6 +6,8 @@ For final statistics, grouped distributions, rankings, trends, or insight-style
 
 - `record_list` is for browsing, export, and sample inspection only
 - `record_get` is for one exact record
+- `record_schema_get` is applicant-node visible-only schema, not a builder/full-field schema
+- if a field is absent from `record_schema_get`, treat it as not visible or not usable for the current user at the applicant node
 - Do not present paged browse output as if it were a grouped or full-population conclusion
 - If the browser and MCP disagree, compare `request_route.base_url` and `request_route.qf_version` first
 
@@ -40,4 +42,3 @@ For final statistics, grouped distributions, rankings, trends, or insight-style
 - Use the current form schema's subfield titles; do not guess nested ids
 - When updating existing subtable rows, preserve row ids if the source record returns them
 - Nested subtable writes are still unsupported
-
@@ -11,6 +11,8 @@ Use `record_schema_get -> record_list` when:
 - a delete or update target still needs confirmation
 - the user needs sample rows or a small export
 
+Remember that `record_schema_get` only exposes the current user's applicant-node visible fields. If a field is missing from that schema, treat it as unavailable in the current permission scope instead of trying to guess another `field_id`.
+
 Keep the browse DSL simple:
 
 - `columns`: field ids only
@@ -29,6 +31,7 @@ Use `record_schema_get -> record_get` when:
 - a write target needs verification before action
 
 Prefer passing explicit `columns` when the user only needs a subset of fields.
+Without `columns`, `record_get` still returns only applicant-node visible fields, not the full builder-side record payload.
 
 ## Write Pattern
 
@@ -88,6 +91,7 @@ Do not do this:
 - do not invent formulas or expressions
 - do not auto-fill missing required fields
 - do not guess relation targets without first resolving them
+- do not guess hidden or missing fields from prior builder knowledge; if the field is absent from the applicant-node schema, stop and explain the permission boundary
 - do not claim a blocked `record_write` was executed
 
 ## Unsupported Direct Writes
@@ -106,4 +110,3 @@ If the payload includes them, stop after the blocked `record_write` response and
 - Relation fields are record-id based. Resolve the referenced target first, then write the relation field with the real `record_id`.
 - Attachment fields are two-step: upload first with `file_upload_local`, then reuse the returned attachment payload in `record_write`.
 - Subtable writes require the current schema shape; when updating existing subtable rows, preserve row ids if the current record exposes them.
-
@@ -2,4 +2,4 @@ from __future__ import annotations
 
 __all__ = ["__version__"]
 
-__version__ = "0.2.0b22"
+__version__ = "0.2.0b23"
@@ -30,8 +30,9 @@ def build_server() -> FastMCP:
         "All resource tools operate with the logged-in user's Qingflow permissions.\n\n"
         "For analytics, use record_schema_get first, let the model build field_id-based DSL, "
         "then call record_analyze. record_analyze returns compact business-first output as query/result/ranking/ratios/completeness/presentation; use verbose only for route/debug details. "
+        "record_schema_get returns the current user's applicant-node visible schema only; hidden fields are omitted and missing fields should be treated as not visible in the current permission scope. "
         "For operational record reads, use record_schema_get first, then record_list or record_get. "
-        "For writes, use record_schema_get and then call record_write once; it performs internal preflight before any apply.\n\n"
+        "For writes, use record_schema_get and then call record_write once; it performs internal preflight before any apply and refuses fields outside the applicant-node writable schema.\n\n"
         "Task Center (待办/已办) handling:\n"
         "- Use task_summary to get headline counts.\n"
         "- Use task_list for flat task browsing with task_box and flow_status.\n"
@@ -20,6 +20,7 @@ def build_user_server() -> FastMCP:
         instructions=(
             "Use this server for Qingflow operational workflows with a schema-first path. "
             "For records, start with record_schema_get, then choose record_list, record_get, or record_write. "
+            "record_schema_get returns the current user's applicant-node visible schema only; hidden fields are omitted and missing fields should be treated as not visible in the current permission scope. "
             "For analytics, switch to record_schema_get and record_analyze; its default output is compact query/result/ranking/ratios/completeness/presentation, with route/debug only in verbose mode. "
             "For task center, use task_summary, task_list, and task_facets before any explicit task action. "
             "Avoid builder-side app or schema changes here."
@@ -90,6 +90,14 @@ class ViewSelection:
     conditions: list[list[ViewFilterCondition]]
 
 
+@dataclass(slots=True)
+class WorkflowNodeRef:
+    workflow_node_id: int
+    name: str
+    type: str
+    raw: JSONObject
+
+
 @dataclass(slots=True)
 class RecordInputError(Exception):
     message: str
@@ -134,7 +142,8 @@ FIELD_LOOKUP_STRIP_RE = re.compile(r"[\s_()()\[\]【】{}<>·/\\::-]+")
 class RecordTools(ToolBase):
     def __init__(self, sessions, backend) -> None:  # type: ignore[no-untyped-def]
         super().__init__(sessions, backend)
-        self._form_cache: dict[tuple[str, str], JSONObject] = {}
+        self._form_cache: dict[tuple[str, str, str, int], JSONObject] = {}
+        self._applicant_node_cache: dict[tuple[str, str], WorkflowNodeRef] = {}
         self._view_list_cache: dict[tuple[str, str], list[JSONObject]] = {}
         self._view_config_cache: dict[tuple[str, str], JSONObject] = {}
@@ -286,9 +295,10 @@ class RecordTools(ToolBase):
             raise_tool_error(QingflowApiError.config_error("app_key is required"))
 
         def runner(session_profile, context):
+            applicant_node = self._resolve_applicant_node(profile, context, app_key, force_refresh=False)
             index = self._get_field_index(profile, context, app_key, force_refresh=False)
             view_selection = self._resolve_view_selection(profile, context, app_key, view_key=view_key, view_name=view_name)
-            fields = [self._schema_field_payload(field) for field in index.by_id.values()]
+            fields = [self._schema_field_payload(field, workflow_node_id=applicant_node.workflow_node_id) for field in index.by_id.values()]
             suggested_dimensions = [
                 {"field_id": item["field_id"], "title": item["title"]}
                 for item in fields
@@ -312,6 +322,12 @@ class RecordTools(ToolBase):
                 "request_route": self._request_route_payload(context),
                 "data": {
                     "app_key": app_key,
+                    "schema_scope": "applicant_node",
+                    "workflow_node": {
+                        "workflow_node_id": applicant_node.workflow_node_id,
+                        "name": applicant_node.name,
+                        "type": applicant_node.type,
+                    },
                     "view_resolution": _view_selection_payload(view_selection),
                     "fields": fields,
                     "suggested_dimensions": suggested_dimensions,
@@ -531,31 +547,39 @@ class RecordTools(ToolBase):
             }
             return response
 
-        raw = self.record_get(
-            profile=profile,
-            app_key=app_key,
-            apply_id=record_id,
-            role=1,
-            list_type=None,
-            audit_node_id=workflow_node_id,
-        )
-        return {
-            "profile": profile,
-            "ws_id": raw.get("ws_id"),
-            "ok": bool(raw.get("ok", True)),
-            "request_route": raw.get("request_route"),
-            "warnings": [],
-            "output_profile": normalized_output_profile,
-            "data": {
-                "app_key": app_key,
-                "record_id": record_id,
-                "record": raw.get("result"),
-                "selection": {
-                    "columns": columns,
-                    "workflow_node_id": workflow_node_id,
+        def runner(session_profile, context):
+            index = self._get_field_index(profile, context, app_key, force_refresh=False)
+            selected_fields = list(index.by_id.values())
+            result = self.backend.request(
+                "GET",
+                context,
+                f"/app/{app_key}/apply/{record_id}",
+                params={"role": 1},
+            )
+            answer_list = result.get("answers") if isinstance(result, dict) and isinstance(result.get("answers"), list) else []
+            row = _build_flat_row(cast(list[JSONValue], answer_list), selected_fields, apply_id=record_id)
+            response: JSONObject = {
+                "profile": profile,
+                "ws_id": session_profile.selected_ws_id,
+                "ok": True,
+                "request_route": self._request_route_payload(context),
+                "warnings": [],
+                "output_profile": normalized_output_profile,
+                "data": {
+                    "app_key": app_key,
+                    "record_id": record_id,
+                    "record": row,
+                    "selection": {
+                        "columns": columns,
+                        "workflow_node_id": workflow_node_id,
+                    },
                 },
-            },
-        }
+            }
+            if normalized_output_profile == "verbose":
+                response["data"]["debug"] = {"raw_record": result}
+            return response
+
+        return self._run_record_tool(profile, runner)
 
     def record_write(
         self,
@@ -712,7 +736,7 @@ class RecordTools(ToolBase):
             preflight=None,
         )
 
-    def _schema_field_payload(self, field: FormField) -> JSONObject:
+    def _schema_field_payload(self, field: FormField, *, workflow_node_id: int) -> JSONObject:
         write_hints = self._schema_write_hints(field)
         return {
             "field_id": field.que_id,
@@ -725,6 +749,8 @@ class RecordTools(ToolBase):
             "role_hints": self._schema_role_hints(field),
             "readable": True,
             "writable": write_hints["writable"],
+            "permission_scope": "applicant_node",
+            "workflow_node_id": workflow_node_id,
             "write_kind": write_hints["write_kind"],
             "supported_read_ops": write_hints["supported_read_ops"],
             "supported_write_ops": write_hints["supported_write_ops"],
@@ -2390,10 +2416,16 @@ class RecordTools(ToolBase):
         return self._run_record_tool(profile, runner)
 
     def _get_form_schema(self, profile: str, context, app_key: str, *, force_refresh: bool) -> JSONObject:  # type: ignore[no-untyped-def]
-        cache_key = (profile, app_key)
+        applicant_node = self._resolve_applicant_node(profile, context, app_key, force_refresh=force_refresh)
+        cache_key = (profile, app_key, "applicant_node", applicant_node.workflow_node_id)
         if not force_refresh and cache_key in self._form_cache:
             return self._form_cache[cache_key]
-        schema = self.backend.request("GET", context, f"/app/{app_key}/form", params={"type": 1})
+        schema = self.backend.request(
+            "GET",
+            context,
+            f"/app/{app_key}/form",
+            params={"type": 1, "beingApply": True, "auditNodeId": applicant_node.workflow_node_id},
+        )
         normalized = _normalize_form_schema(schema)
         self._form_cache[cache_key] = normalized
         return normalized
@@ -2401,6 +2433,26 @@ class RecordTools(ToolBase):
     def _get_field_index(self, profile: str, context, app_key: str, *, force_refresh: bool) -> FieldIndex:  # type: ignore[no-untyped-def]
         return _build_field_index(self._get_form_schema(profile, context, app_key, force_refresh=force_refresh))
 
+    def _resolve_applicant_node(self, profile: str, context, app_key: str, *, force_refresh: bool) -> WorkflowNodeRef:  # type: ignore[no-untyped-def]
+        cache_key = (profile, app_key)
+        if not force_refresh and cache_key in self._applicant_node_cache:
+            return self._applicant_node_cache[cache_key]
+        payload = self.backend.request("GET", context, f"/app/{app_key}/auditNodes")
+        applicant_node = _extract_applicant_node(payload)
+        if applicant_node is None:
+            raise_tool_error(
+                QingflowApiError(
+                    category="config",
+                    message=f"cannot resolve applicant node for app {app_key}",
+                    details={
+                        "error_code": "APPLICANT_NODE_NOT_FOUND",
+                        "fix_hint": "Ensure the app has a workflow applicant node before using user-side record tools.",
+                    },
+                )
+            )
+        self._applicant_node_cache[cache_key] = applicant_node
+        return applicant_node
+
     def _get_view_list(self, profile: str, context, app_key: str) -> list[JSONObject]:  # type: ignore[no-untyped-def]
         cache_key = (profile, app_key)
         if cache_key in self._view_list_cache:
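The caching relationship introduced by this hunk is worth spelling out: the applicant node is cached per `(profile, app_key)`, while the form schema is now keyed by `(profile, app_key, "applicant_node", node_id)`, so a refreshed applicant node naturally points schema lookups at a new cache entry. A minimal sketch with stubbed fetchers (the real methods take a request context and call `self.backend.request`; the fetcher signatures here are simplifications):

```python
class ApplicantNodeCacheSketch:
    """Simplified model of the two cache layers in RecordTools.

    fetch_node stands in for the /auditNodes request, fetch_schema for the
    /form request; both are plain callables here rather than backend calls.
    """

    def __init__(self, fetch_node, fetch_schema):
        self._fetch_node = fetch_node
        self._fetch_schema = fetch_schema
        self._node_cache = {}   # (profile, app_key) -> node_id
        self._form_cache = {}   # (profile, app_key, "applicant_node", node_id) -> schema

    def resolve_node(self, profile, app_key, *, force_refresh=False):
        key = (profile, app_key)
        if not force_refresh and key in self._node_cache:
            return self._node_cache[key]
        node_id = self._fetch_node(app_key)
        self._node_cache[key] = node_id
        return node_id

    def get_form_schema(self, profile, app_key, *, force_refresh=False):
        # Resolving the node first means the schema cache key always
        # reflects the currently known applicant node.
        node_id = self.resolve_node(profile, app_key, force_refresh=force_refresh)
        key = (profile, app_key, "applicant_node", node_id)
        if not force_refresh and key in self._form_cache:
            return self._form_cache[key]
        schema = self._fetch_schema(app_key, node_id)
        self._form_cache[key] = schema
        return schema
```

Repeated lookups for the same profile and app hit both caches; `force_refresh=True` refetches the node and the schema in one pass.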
@@ -3883,6 +3935,30 @@ def _normalize_view_list(payload: JSONValue) -> list[JSONObject]:
     return flattened
 
 
+def _normalize_audit_nodes(payload: JSONValue) -> list[JSONObject]:
+    if isinstance(payload, list):
+        return [item for item in payload if isinstance(item, dict)]
+    if isinstance(payload, dict):
+        return [item for item in payload.values() if isinstance(item, dict)]
+    return []
+
+
+def _extract_applicant_node(payload: JSONValue) -> WorkflowNodeRef | None:
+    for item in _normalize_audit_nodes(payload):
+        node_type = _coerce_count(item.get("type"))
+        deal_type = _coerce_count(item.get("dealType"))
+        workflow_node_id = _coerce_count(item.get("auditNodeId"))
+        if workflow_node_id is None or node_type != 0 or deal_type != 3:
+            continue
+        return WorkflowNodeRef(
+            workflow_node_id=workflow_node_id,
+            name=_normalize_optional_text(item.get("auditNodeName")) or str(workflow_node_id),
+            type="applicant",
+            raw=item,
+        )
+    return None
+
+
 def _compile_view_conditions(config: JSONObject) -> list[list[ViewFilterCondition]]:
     raw_limit = config.get("viewgraphLimit")
     if not isinstance(raw_limit, list):
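The new `_extract_applicant_node` selection rule (the first node with `type == 0`, `dealType == 3`, and a present `auditNodeId`) can be restated as a self-contained sketch. The package's `_coerce_count` and `_normalize_optional_text` helpers are replaced here with plain `dict.get` handling, so this is a behavioral approximation, not the shipped implementation:

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class WorkflowNodeRef:
    workflow_node_id: int
    name: str
    type: str
    raw: dict

def extract_applicant_node(payload: Any) -> "WorkflowNodeRef | None":
    # Accept either a list of node dicts or a dict whose values are node dicts,
    # mirroring _normalize_audit_nodes in the diff above.
    if isinstance(payload, list):
        nodes = [n for n in payload if isinstance(n, dict)]
    elif isinstance(payload, dict):
        nodes = [n for n in payload.values() if isinstance(n, dict)]
    else:
        nodes = []
    for node in nodes:
        # The applicant node is identified by type == 0 and dealType == 3.
        if node.get("type") == 0 and node.get("dealType") == 3 and node.get("auditNodeId") is not None:
            return WorkflowNodeRef(
                workflow_node_id=int(node["auditNodeId"]),
                name=str(node.get("auditNodeName") or node["auditNodeId"]),
                type="applicant",
                raw=node,
            )
    return None
```

When no node matches, the sketch returns `None`, which is the condition that makes `_resolve_applicant_node` raise the `APPLICANT_NODE_NOT_FOUND` config error in the hunk above.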