@josephyan/qingflow-app-user-mcp 0.2.0-beta.40 → 0.2.0-beta.42

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -3,13 +3,13 @@
  Install:
 
  ```bash
- npm install @josephyan/qingflow-app-user-mcp@0.2.0-beta.40
+ npm install @josephyan/qingflow-app-user-mcp@0.2.0-beta.42
  ```
 
  Run:
 
  ```bash
- npx -y -p @josephyan/qingflow-app-user-mcp@0.2.0-beta.40 qingflow-app-user-mcp
+ npx -y -p @josephyan/qingflow-app-user-mcp@0.2.0-beta.42 qingflow-app-user-mcp
  ```
 
  Environment:
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "@josephyan/qingflow-app-user-mcp",
-   "version": "0.2.0-beta.40",
+   "version": "0.2.0-beta.42",
    "description": "Operational end-user MCP for Qingflow records, tasks, comments, and directory workflows.",
    "license": "MIT",
    "type": "module",
package/pyproject.toml CHANGED
@@ -4,7 +4,7 @@ build-backend = "setuptools.build_meta"
 
  [project]
  name = "qingflow-mcp"
- version = "0.2.0b40"
+ version = "0.2.0b42"
  description = "User-authenticated MCP server for Qingflow"
  readme = "README.md"
  license = "MIT"
@@ -26,8 +26,10 @@ dependencies = [
      "mcp>=1.9.4,<2.0.0",
      "httpx>=0.27,<1.0",
      "keyring>=25.5,<26.0",
+     "openpyxl>=3.1,<4.0",
      "pydantic>=2.8,<3.0",
      "pycryptodome>=3.20,<4.0",
+     "python-socketio[client]>=5.11,<6.0",
  ]
 
  [project.optional-dependencies]
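The two new runtime dependencies line up with the import feature added in this release: `openpyxl` backs local `.xlsx` verification and repair, and `python-socketio[client]` drives the socket-based import handshake in `start_socket_data_import` further below. A minimal sketch of the kind of openpyxl work the repair path plausibly performs; the file names and header layout here are illustrative assumptions, not taken from the package:

```python
# Hedged sketch only: file paths and header handling are assumptions.
from openpyxl import load_workbook

wb = load_workbook("import_candidate.xlsx")  # hypothetical user upload
ws = wb.active

# Read the header row so it can be compared against the official template.
headers = [cell.value for cell in ws[1]]

# Per the skill rules below, write a repaired copy rather than
# overwriting the user's original file.
wb.save("import_candidate.repaired.xlsx")
```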
@@ -16,7 +16,7 @@ Assumes MCP is connected, authenticated, and on the correct workspace.
 
  Route to exactly one of these specialized paths:
 
- 1. Record CRUD
+ 1. Record CRUD and import
  Switch to [$qingflow-record-crud](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-record-crud/SKILL.md)
 
  2. Task workflow operations
@@ -32,7 +32,7 @@ Route to exactly one of these specialized paths:
 
  - If the user does not know the target `app_key`, discover apps first with `app_list` or `app_search`, then route to the specialized skill
  - If the app is known but the available data range is unclear, call `app_get` first and inspect `accessible_views`
- - If the task is about browsing, reading, creating, updating, deleting, attachments, relations, subtable writes, or member/department-field candidate lookup, switch to `$qingflow-record-crud`
+ - If the task is about browsing, reading, creating, updating, deleting, attachments, relations, subtable writes, member/department-field candidate lookup, import templates, import-file verification, authorized local file repair, import execution, or import status, switch to `$qingflow-record-crud`
  - If the task is about todo discovery, task context, approval actions, rollback or transfer, associated report review, or workflow log review, switch to `$qingflow-task-ops`
  - If the task is about grouped distributions, ratios, rankings, trends, insights, or any final statistical conclusion, switch to `$qingflow-record-analysis`
  - If the MCP is not connected, authenticated, or bound to the right workspace, switch to `$qingflow-mcp-setup`
@@ -42,6 +42,7 @@ Route to exactly one of these specialized paths:
  - prefer canonical app ids, record ids, task ids, and workflow node ids over guessed names
  - if a field or target is still ambiguous after schema/task lookup, ask the user to confirm from a short candidate list instead of guessing
  - if the task can stay read-only, do not write or act
+ - if the task involves a user-uploaded import file, do not modify the file unless the user explicitly authorizes repair or normalization
  - if the current MCP capability is unsupported, the workflow is awkward, or the user's need still cannot be satisfied after reasonable use, summarize the gap, ask whether to submit feedback, and call `feedback_submit` only after explicit user confirmation
 
  ## Shared Helper
@@ -109,6 +109,7 @@ Top-level arguments:
  - `limit`: limits returned rows only, not scan scope.
  - `view_id`: the canonical browse selector. Prefer choosing it from `app_get.accessible_views`.
  - Prefer `view_id` entries where `analysis_supported=true`. If a view is `boardView` or `ganttView`, switch to a system or table-style custom view before calling `record_analyze`.
+ - If a chosen `view_id` is `custom:*`, treat the output as analysis over an unverified saved-filter scope unless `verification.view_filter_verified=true`. For critical conclusions, prefer `system:all` plus explicit filters in the DSL.
  - `bucket` in dimensions: only for `suggested_time_fields`. Values: `day`/`week`/`month`/`quarter`/`year`/`null`.
 
  ---
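The new `custom:*` caveat is easiest to see as a concrete request. Below is a hedged sketch of a `record_analyze` call scoped the recommended way; the DSL key names and field ids are illustrative assumptions, since only the `view_id` values and the `verification.view_filter_verified` flag appear in this diff:

```python
# Hedged sketch: DSL keys and field ids are assumptions for illustration.
request = {
    "view_id": "system:all",  # verifiable scope instead of a saved custom-view filter
    "dsl": {
        "where": [{"field_id": 12, "op": "eq", "value": "approved"}],
        "dimensions": [{"field_id": 8, "bucket": "month"}],
    },
    "limit": 50,
}

# After the call, only trust saved-filter scopes the server has verified.
response = {"verification": {"view_filter_verified": False}}  # placeholder response
if not response["verification"]["view_filter_verified"]:
    print("scope unverified: do not report this as a final conclusion")
```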
@@ -1,8 +1,8 @@
  ---
  name: qingflow-record-crud
- description: Browse, read, create, update, and delete Qingflow records after the MCP is already connected and authenticated. Use when the user wants schema-first record CRUD with SQL-like JSON DSL. Do not use this skill for task-center workflow actions or final statistical analysis.
+ description: Browse, read, create, update, delete, and import Qingflow records after the MCP is already connected and authenticated. Use when the user wants schema-first record CRUD, import-template verification/import execution, or SQL-like JSON DSL writes. Do not use this skill for task-center workflow actions or final statistical analysis.
  metadata:
- short-description: Schema-first Qingflow record CRUD
+ short-description: Schema-first Qingflow record CRUD and import
  ---
 
  # Qingflow Record CRUD
@@ -19,6 +19,7 @@ Use exactly one of these default paths:
  1. Browse records: `app_get -> record_schema_get(schema_mode="browse", view_id=...) -> record_list`
  2. Read one record: `app_get -> record_schema_get(schema_mode="browse", view_id=...) -> record_get`
  3. Write records: `record_schema_get(schema_mode="applicant") -> record_write`
+ 4. Import records: `record_import_template_get -> record_import_verify -> (optional authorized file repair) -> record_import_start -> record_import_status_get`
 
  ## Core Tools
 
@@ -26,6 +27,11 @@ Use exactly one of these default paths:
  - `record_list`
  - `record_get`
  - `record_write`
+ - `record_import_template_get`
+ - `record_import_verify`
+ - `record_import_repair_local`
+ - `record_import_start`
+ - `record_import_status_get`
 
  `record_schema_get(schema_mode="applicant")` exposes the current user's applicant-node visible write/create fields.
  `record_schema_get(schema_mode="browse", view_id=...)` exposes browse-schema fields for the selected accessible view.
@@ -56,11 +62,12 @@ Use `record_member_candidates` / `record_department_candidates` as the default l
  5. If browse/read range is unclear, run `app_get` and choose from `accessible_views`
  6. Run `record_schema_get(schema_mode="browse", view_id=...)` before browse/read
  7. Run `record_schema_get(schema_mode="applicant")` before write
- 6. If the request is analysis-like, switch to [$qingflow-record-analysis](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-record-analysis/SKILL.md)
- 7. If the request is write-like, decide `insert / update / delete` before building any payload
- 8. If fields are still ambiguous after `record_schema_get`, ask the user to confirm from a short candidate list instead of guessing
- 9. For high-risk writes or production changes, read the current state first whenever practical
- 10. After actions, report the affected `record_id`, counts, or returned item count
+ 8. If the request is import-like, decide whether the user needs template download, file verification, file repair, import execution, or import status before changing any file
+ 9. If the request is analysis-like, switch to [$qingflow-record-analysis](/Users/yanqidong/Documents/qingflow-next/.codex/skills/qingflow-record-analysis/SKILL.md)
+ 10. If the request is write-like, decide `insert / update / delete` before building any payload
+ 11. If fields are still ambiguous after `record_schema_get`, ask the user to confirm from a short candidate list instead of guessing
+ 12. For high-risk writes or production changes, read the current state first whenever practical
+ 13. After actions, report the affected `record_id`, counts, or returned item count
 
  ## Record Read Rules
 
@@ -124,9 +131,38 @@ The DSL is clause-shaped like SQL, but it is **not raw SQL text**.
  - `relation`: `✅ {"field_id":25,"value":{"apply_id":5001}}` / `❌ {"field_id":25,"value":"客户A"}`
  - `attachment`: upload first, then `✅ {"field_id":13,"value":{"value":"https://.../a.pdf","name":"a.pdf"}}` / `❌ {"field_id":13,"value":"/tmp/a.pdf"}`
 
+ ## Import Rules
+ 
+ Use the import tools for file-based bulk data loading, not `record_write`.
+ 
+ ### Import workflow
+ 
+ 1. Get the official template with `record_import_template_get`
+ 2. Verify the uploaded file with `record_import_verify`
+ 3. If verification fails, explain the issues first
+ 4. Only modify the uploaded file if the user explicitly authorizes repair or normalization
+ 5. If authorized, preserve the original file and write a repaired copy instead of overwriting the source file by default
+ 6. Use `record_import_repair_local` for authorized `.xlsx` repair
+ 7. Re-run `record_import_verify` on the repaired copy
+ 8. Start import only from a successful verification result
+ 9. Track the job with `record_import_status_get`
+ 
+ ### Import discipline
+ 
+ - Do not modify a user-uploaded Excel or CSV file unless the user explicitly authorizes file repair
+ - Do not silently normalize, rename, reorder, or delete columns
+ - Do not fabricate business values to satisfy validation
+ - Only fix format-level issues that keep the user's intended data semantics intact, such as header alignment, date/number formatting, enum spelling aligned to the template, blank trailing rows, workbook sheet shape, or attachment/url cell normalization
+ - If a repair would change business meaning or requires guessing missing values, stop and ask the user instead of editing the file
+ - After any authorized repair, report exactly what changed and which file path is the repaired import candidate
+ - Do not start import unless the repaired file passes `record_import_verify`
+ - `record_import_start` must receive an explicit `being_enter_auditing` choice; do not assume a default import mode
+ 
  ## Response Interpretation
 
  - `record_list` returns browse/sample data, not final analysis conclusions
+ - If `view_id` is a custom view, treat the result as convenience browse output only when `warnings` includes `CUSTOM_VIEW_FILTER_UNVERIFIED`. In that case do not state the saved filter result as a verified fact.
+ - Prefer `system:all` plus explicit `where` filters whenever the user needs a trustworthy scoped dataset.
  - `record_write` always performs internal static preflight before any apply
  - If `record_write` returns `ok=false`, the write was blocked and not executed
  - Prefer explaining `field_errors` before summarizing top-level blockers
@@ -128,3 +128,19 @@ If the payload includes them, stop after the blocked `record_write` response and
  ```json
  { "field_id": 13, "value": { "value": "https://.../a.pdf", "name": "a.pdf" } }
  ```
+ 
+ ## Import Pattern
+ 
+ Use `record_import_template_get -> record_import_verify -> record_import_repair_local -> record_import_start -> record_import_status_get` when file repair is authorized; otherwise skip the repair step.
+ 
+ If verification fails:
+ 
+ 1. summarize the validation issues first
+ 2. do not modify the uploaded file unless the user explicitly authorizes repair
+ 3. if authorized, preserve the original file and create a repaired copy
+ 4. only repair format-level problems that do not require guessing business meaning
+ 5. use `record_import_repair_local` only after explicit authorization and only for safe format repairs
+ 6. re-run `record_import_verify`
+ 7. import only after verification passes, and pass an explicit `being_enter_auditing` choice to `record_import_start`
+ 
+ Do not silently "fix" business data to force an import through.
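Read end to end, the pattern above is a small state machine. A hedged orchestration sketch follows: the five tool names come from the diff, while the argument names, result shapes, and the `call` helper are illustrative assumptions:

```python
# Hedged sketch: tool names are from the diff; argument and result shapes,
# and the `call` helper itself, are assumptions for illustration.
def run_import(call, app_key: str, file_path: str, *, repair_authorized: bool) -> dict:
    call("record_import_template_get", {"app_key": app_key})
    report = call("record_import_verify", {"app_key": app_key, "file_path": file_path})
    if not report.get("ok") and repair_authorized:
        # The original file is preserved; repair writes a separate copy.
        repaired = call("record_import_repair_local", {"file_path": file_path})
        file_path = repaired["repaired_path"]
        report = call("record_import_verify", {"app_key": app_key, "file_path": file_path})
    if not report.get("ok"):
        # Surface issues and stop; never force the import through.
        return {"status": "blocked", "issues": report.get("issues", [])}
    job = call("record_import_start", {
        "app_key": app_key,
        "file_path": file_path,
        "being_enter_auditing": False,  # must be the user's explicit choice
    })
    return call("record_import_status_get", {"import_id": job["import_id"]})
```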
@@ -97,6 +97,9 @@ Use exactly one of these default paths:
  - Do not execute `task_action_execute` until the user explicitly confirms the chosen action
  - Avoid actions on ambiguous tasks or records
  - Summarize the final action and the exact `app_key / record_id / workflow_node_id`
+ - `task_action_execute` now distinguishes action execution from workflow continuation. Read `verification.runtime_continuation_verified` before claiming the workflow actually moved on.
+ - If `task_action_execute` returns `partial_success` with `WORKFLOW_CONTINUATION_UNVERIFIED`, report the action as sent but the downstream continuation as unverified.
+ - If `task_action_execute` returns `TASK_CONTEXT_VISIBILITY_UNVERIFIED` after a `46001`-style context loss, do not claim the task was already processed unless the workflow log or record state proves it.
 
  ## Feedback Escalation
 
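The three new bullets describe a reporting contract. A hedged sketch of client-side handling: the identifiers come from the diff, while the surrounding response envelope (a status field plus a warnings list of coded entries) is an assumed shape:

```python
# Hedged sketch: identifiers are from the diff; the response envelope
# (status + coded warnings list) is an assumed shape.
def summarize_task_action(resp: dict) -> str:
    codes = {w.get("code") for w in resp.get("warnings", [])}
    if "TASK_CONTEXT_VISIBILITY_UNVERIFIED" in codes:
        return "context lost (46001-style); do not claim the task was already processed"
    if resp.get("status") == "partial_success" and "WORKFLOW_CONTINUATION_UNVERIFIED" in codes:
        return "action sent; downstream workflow continuation unverified"
    if resp.get("verification", {}).get("runtime_continuation_verified"):
        return "action executed and workflow continuation verified"
    return "action executed; continuation not yet verified"
```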
@@ -2,4 +2,4 @@ from __future__ import annotations
 
  __all__ = ["__version__"]
 
- __version__ = "0.2.0b40"
+ __version__ = "0.2.0b42"
@@ -1,6 +1,9 @@
  from __future__ import annotations
 
  from dataclasses import dataclass
+ from threading import Event
+ from typing import Any
+ from urllib.parse import urlsplit, urlunsplit
  from uuid import uuid4
 
  import httpx
@@ -201,6 +204,182 @@ class BackendClient:
              "body": body,
          }
 
+     def download_binary(self, url: str, *, headers: dict[str, str] | None = None) -> bytes:
+         try:
+             response = self._client.get(url, headers=headers or None)
+         except httpx.RequestError as exc:
+             raise QingflowApiError(category="network", message=str(exc))
+         if response.status_code >= 400:
+             raise QingflowApiError(
+                 category="http",
+                 message=self._extract_message(response.text) or f"HTTP {response.status_code}",
+                 http_status=response.status_code,
+             )
+         return response.content
+ 
+     def request_multipart(
+         self,
+         method: str,
+         context: BackendRequestContext,
+         path: str,
+         *,
+         data: dict[str, Any] | None = None,
+         files: dict[str, tuple[str, bytes, str | None]] | None = None,
+         unwrap: bool = True,
+     ) -> JSONValue:
+         return self.request_multipart_with_meta(
+             method,
+             context,
+             path,
+             data=data,
+             files=files,
+             unwrap=unwrap,
+         ).data
+ 
+     def request_multipart_with_meta(
+         self,
+         method: str,
+         context: BackendRequestContext,
+         path: str,
+         *,
+         data: dict[str, Any] | None = None,
+         files: dict[str, tuple[str, bytes, str | None]] | None = None,
+         unwrap: bool = True,
+     ) -> BackendResponse:
+         headers = self._base_headers(
+             context.token,
+             context.ws_id,
+             context.qf_request_id,
+             qf_version=context.qf_version,
+         )
+         request_files: dict[str, tuple[str, bytes, str | None]] | None = None
+         if files:
+             request_files = {key: value for key, value in files.items()}
+         try:
+             response = self._client.request(
+                 method.upper(),
+                 self._build_url(context.base_url, path),
+                 data=data,
+                 files=request_files,
+                 headers=headers,
+             )
+         except httpx.RequestError as exc:
+             raise QingflowApiError(category="network", message=str(exc), request_id=headers["Qf-Request-Id"])
+         parsed = self._parse_response(response, headers["Qf-Request-Id"], unwrap=unwrap)
+         return BackendResponse(
+             data=parsed,
+             headers=dict(response.headers),
+             request_id=headers["Qf-Request-Id"],
+             http_status=response.status_code,
+             qf_response_version=self._extract_response_qf_version(response.headers),
+         )
+ 
+     def start_socket_data_import(
+         self,
+         context: BackendRequestContext,
+         *,
+         app_key: str,
+         being_enter_auditing: bool,
+         view_key: str | None,
+         excel_url: str,
+         excel_name: str,
+         ack_timeout_seconds: float = 8.0,
+         initial_wait_seconds: float = 4.0,
+     ) -> dict[str, Any]:
+         try:
+             import socketio  # type: ignore[import-not-found]
+         except ImportError as exc:
+             raise QingflowApiError(
+                 category="config",
+                 message=f"socket.io client dependency is missing: {exc}",
+             )
+ 
+         socket_base_url = self._build_socket_base_url(context.base_url)
+         import_result: dict[str, Any] = {
+             "import_id": None,
+             "process_id_str": None,
+             "status": "accepted",
+             "warnings": [],
+             "initial_event": None,
+             "failure_event": None,
+         }
+         initial_event_received = Event()
+         failure_event_received = Event()
+         sio = socketio.Client(reconnection=False, logger=False, engineio_logger=False)
+ 
+         def _handle_initial(payload: Any) -> None:
+             if isinstance(payload, dict):
+                 import_result["initial_event"] = payload
+                 process_id = payload.get("processIdStr") or payload.get("process_id_str") or payload.get("processId")
+                 if process_id is not None:
+                     import_result["process_id_str"] = str(process_id)
+             initial_event_received.set()
+ 
+         def _handle_failure(payload: Any) -> None:
+             if isinstance(payload, dict):
+                 import_result["failure_event"] = payload
+                 process_id = payload.get("processIdStr") or payload.get("process_id_str") or payload.get("processId")
+                 if process_id is not None:
+                     import_result["process_id_str"] = str(process_id)
+             failure_event_received.set()
+ 
+         try:
+             sio.connect(
+                 socket_base_url,
+                 transports=["polling", "websocket"],
+                 socketio_path="socket.io",
+                 headers=self._base_headers(
+                     context.token,
+                     context.ws_id,
+                     qf_version=context.qf_version,
+                 ),
+                 wait_timeout=ack_timeout_seconds,
+             )
+             ack = sio.call(
+                 "dataImport",
+                 [
+                     context.token,
+                     app_key,
+                     bool(being_enter_auditing),
+                     view_key,
+                     False,
+                     excel_url,
+                     excel_name,
+                 ],
+                 timeout=ack_timeout_seconds,
+             )
+             import_id = ack[0] if isinstance(ack, list) and ack else ack
+             if not import_id:
+                 raise QingflowApiError(category="backend", message="socket import ack did not return import_id")
+             import_result["import_id"] = str(import_id)
+             sio.on(f"dataImportRes_{import_result['import_id']}", _handle_initial)
+             sio.on(f"dataImportFail_{import_result['import_id']}", _handle_failure)
+             if not initial_event_received.wait(timeout=initial_wait_seconds) and not failure_event_received.wait(timeout=0.1):
+                 import_result["warnings"].append(
+                     {
+                         "code": "IMPORT_SOCKET_INITIAL_EVENT_PENDING",
+                         "message": "Import ack received, but no initial progress payload arrived within the initial wait window.",
+                     }
+                 )
+         except Exception as exc:
+             message = str(exc)
+             if "timeout" in message.lower():
+                 raise QingflowApiError(
+                     category="network",
+                     message="socket import ack timed out",
+                     details={"error_code": "IMPORT_SOCKET_ACK_TIMEOUT"},
+                 )
+             if isinstance(exc, QingflowApiError):
+                 raise
+             raise QingflowApiError(category="network", message=message or "socket import failed")
+         finally:
+             try:
+                 if sio.connected:
+                     sio.disconnect()
+             except Exception:
+                 pass
+         return import_result
+ 
      def _request_with_meta(
          self,
          method: str,
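For orientation, a hedged usage sketch of the new entry point. Only the context attributes the method actually reads (`base_url`, `token`, `ws_id`, `qf_version`) appear in the diff; the constructor signatures of `BackendClient` and `BackendRequestContext` are not shown and are assumed:

```python
# Hedged usage sketch: constructor signatures are assumptions; only the
# context attributes read by start_socket_data_import appear in the diff.
client = BackendClient()
context = BackendRequestContext(
    base_url="https://example.qingflow.com/api",  # placeholder endpoint
    token="<user-token>",
    ws_id="<workspace-id>",
    qf_version=None,
)
result = client.start_socket_data_import(
    context,
    app_key="<app-key>",
    being_enter_auditing=False,  # explicit choice, per the skill rules
    view_key=None,
    excel_url="https://files.example.com/import.xlsx",  # placeholder upload URL
    excel_name="import.xlsx",
)
print(result["import_id"], result["status"], result["warnings"])
```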
@@ -334,3 +513,13 @@ class BackendClient:
          if not normalized:
              raise QingflowApiError.config_error("base_url is required")
          return f"{normalized}/{path.lstrip('/')}"
+ 
+     def _build_socket_base_url(self, base_url: str) -> str:
+         normalized = normalize_base_url(base_url)
+         if not normalized:
+             raise QingflowApiError.config_error("base_url is required")
+         parsed = urlsplit(normalized)
+         path = parsed.path.rstrip("/")
+         if path.endswith("/api"):
+             path = path[:-4]
+         return urlunsplit((parsed.scheme, parsed.netloc, path or "", "", ""))
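This helper exists because the REST base URL typically ends in `/api` while the socket.io handshake is served from the host root. A self-contained restatement of the same path logic with worked examples; it assumes `normalize_base_url` would pass these example URLs through unchanged apart from trailing-slash trimming:

```python
# Standalone re-statement of the path logic, for illustration only.
from urllib.parse import urlsplit, urlunsplit

def strip_api_suffix(url: str) -> str:
    parsed = urlsplit(url)
    path = parsed.path.rstrip("/")
    if path.endswith("/api"):
        path = path[:-4]  # drop the trailing "/api" segment
    return urlunsplit((parsed.scheme, parsed.netloc, path or "", "", ""))

assert strip_api_suffix("https://example.qingflow.com/api") == "https://example.qingflow.com"
assert strip_api_suffix("https://host/prefix/api") == "https://host/prefix"
assert strip_api_suffix("https://host") == "https://host"
```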
@@ -266,6 +266,8 @@ class FieldPatch(StrictModel):
      description: str | None = None
      options: list[str] = Field(default_factory=list)
      target_app_key: str | None = None
+     display_field: FieldSelector | None = None
+     visible_fields: list[FieldSelector] = Field(default_factory=list)
      subfields: list["FieldPatch"] = Field(default_factory=list)
 
      @model_validator(mode="after")
@@ -274,6 +276,8 @@ class FieldPatch(StrictModel):
              raise ValueError("relation field requires target_app_key")
          if self.type != PublicFieldType.relation and self.target_app_key:
              raise ValueError("target_app_key is only allowed for relation fields")
+         if self.type != PublicFieldType.relation and (self.display_field is not None or self.visible_fields):
+             raise ValueError("display_field and visible_fields are only allowed for relation fields")
          if self.type == PublicFieldType.subtable and not self.subfields:
              raise ValueError("subtable field requires subfields")
          if self.type != PublicFieldType.subtable and self.subfields:
@@ -293,12 +297,16 @@ class FieldMutation(StrictModel):
      description: str | None = None
      options: list[str] | None = None
      target_app_key: str | None = None
+     display_field: FieldSelector | None = None
+     visible_fields: list[FieldSelector] | None = None
      subfields: list[FieldPatch] | None = None
 
      @model_validator(mode="after")
      def validate_shape(self) -> "FieldMutation":
          if self.type == PublicFieldType.relation and not self.target_app_key:
              raise ValueError("relation field requires target_app_key")
+         if self.type is not None and self.type != PublicFieldType.relation and (self.display_field is not None or self.visible_fields):
+             raise ValueError("display_field and visible_fields are only allowed for relation fields")
          if self.type == PublicFieldType.subtable and not self.subfields:
              raise ValueError("subtable field requires subfields")
          return self
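The new validators gate the relation display options to relation fields only. A hedged illustration of payloads that would pass or fail the new check; `FieldSelector`'s exact shape is not shown in this diff, so the selector values are assumptions:

```python
# Hedged illustration: FieldSelector's shape is assumed ({"field_id": ...}).
ok_relation_patch = {
    "type": "relation",
    "target_app_key": "app_orders",  # required for relation fields
    "display_field": {"field_id": 3},  # allowed only on relation fields
    "visible_fields": [{"field_id": 3}, {"field_id": 7}],
}

bad_text_patch = {
    "type": "text",
    "display_field": {"field_id": 3},  # rejected: "display_field and
}                                      # visible_fields are only allowed
                                       # for relation fields"
```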