@josephyan/qingflow-app-builder-mcp 0.2.0-beta.52 → 0.2.0-beta.53

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -3,13 +3,13 @@
 Install:
 
 ```bash
-npm install @josephyan/qingflow-app-builder-mcp@0.2.0-beta.52
+npm install @josephyan/qingflow-app-builder-mcp@0.2.0-beta.53
 ```
 
 Run:
 
 ```bash
-npx -y -p @josephyan/qingflow-app-builder-mcp@0.2.0-beta.52 qingflow-app-builder-mcp
+npx -y -p @josephyan/qingflow-app-builder-mcp@0.2.0-beta.53 qingflow-app-builder-mcp
 ```
 
 Environment:
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "@josephyan/qingflow-app-builder-mcp",
-  "version": "0.2.0-beta.52",
+  "version": "0.2.0-beta.53",
   "description": "Builder MCP for Qingflow app/package/system design and staged solution workflows.",
   "license": "MIT",
   "type": "module",
package/pyproject.toml CHANGED
@@ -4,7 +4,7 @@ build-backend = "setuptools.build_meta"
 
 [project]
 name = "qingflow-mcp"
-version = "0.2.0b52"
+version = "0.2.0b53"
 description = "User-authenticated MCP server for Qingflow"
 readme = "README.md"
 license = "MIT"
@@ -53,7 +53,8 @@ If an accessible view has `analysis_supported=false`, do not use it for `record_
 
 ## Schema-First Rule
 
-Call `record_schema_get(schema_mode="applicant")` before `record_write`.
+Call `record_schema_get(schema_mode="applicant")` before `record_insert`.
+Call `record_schema_get(schema_mode="applicant")` before `record_code_block_run`.
 Call `app_get` first when the data range is unclear, then use `record_schema_get(schema_mode="browse", view_id=...)` before `record_list`, `record_get`, or `record_analyze`.
 
 - All `field_id` values must come from the schema response.
@@ -94,7 +95,9 @@ Analysis answers must include concrete numbers. When applicable, include percent
 ## Record CRUD Path
 
 `app_get -> record_schema_get(schema_mode="browse", view_id=...) -> record_list / record_get`
-`record_schema_get(schema_mode="applicant") -> record_write`
+`record_schema_get(schema_mode="applicant") -> record_insert`
+`app_get -> record_schema_get(schema_mode="browse", view_id=...) -> record_update`
+`record_list / record_get -> record_delete`
 `record_schema_get(schema_mode="applicant") -> record_code_block_run`
 
 - Use `columns` as `[{{field_id}}]`
@@ -102,14 +105,12 @@ Analysis answers must include concrete numbers. When applicable, include percent
 - Use `order_by` items as `{{field_id, direction}}`
 - Legacy forms such as bare integer `field_id`, `fieldId`, `operator`, `values`, or `order` may still parse, but they are compatibility-only and not the canonical DSL
 
-`record_write` uses SQL-like JSON clauses:
-
-- `insert` -> `values`
-- `update` -> `record_id + set`
-- `delete` -> `record_id` or `record_ids`
+- `record_insert` uses an applicant-node `fields` map keyed by field title.
+- `record_update` uses a view-scoped `fields` map keyed by field title.
+- `record_delete` deletes by `record_id` or `record_ids`.
 
 - Read relation targets from `record_schema_get.target_app_key` / `target_app_name` before preparing relation writes.
-- If a member or department field id is known but candidate ids are not, use `record_member_candidates` or `record_department_candidates` before `record_write`.
+- If a member or department field id is known but candidate ids are not, use `record_member_candidates` or `record_department_candidates` before `record_insert` or `record_update`.
 - For default-all member or department fields, prefer those field candidate tools instead of starting with `directory_*`.
 
 ## Code Block Path
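For orientation, a minimal sketch of the argument shapes this split write path implies. The app key, view id, field titles, and record ids below are hypothetical placeholders, and any argument names beyond `fields`, `record_id`, and `record_ids` are assumptions rather than confirmed tool signatures.

```python
# Hypothetical argument shapes for the staged CRUD path above (not confirmed signatures).
record_insert_args = {
    "app_key": "app_xxx",  # assumed parameter name
    # Applicant-node fields map keyed by field title, per the note above.
    "fields": {"Customer Name": "Acme Ltd", "Status": "Active"},
}

record_update_args = {
    "app_key": "app_xxx",
    "view_id": "view_xxx",  # view-scoped, per the browse-schema step
    "record_id": "rec_123",
    "fields": {"Status": "Closed"},
}

record_delete_args = {
    "app_key": "app_xxx",
    "record_ids": ["rec_123", "rec_456"],  # or a single record_id
}
```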
@@ -41,7 +41,8 @@ If an accessible view has `analysis_supported=false`, do not use it for `record_
 
 ## Schema-First Rule
 
-Call `record_schema_get(schema_mode="applicant")` before `record_write`.
+Call `record_schema_get(schema_mode="applicant")` before `record_insert`.
+Call `record_schema_get(schema_mode="applicant")` before `record_code_block_run`.
 Call `app_get` first when the data range is unclear, then use `record_schema_get(schema_mode="browse", view_id=...)` before `record_list`, `record_get`, or `record_analyze`.
 
 - All `field_id` values must come from the schema response.
@@ -82,7 +83,9 @@ Analysis answers must include concrete numbers. When applicable, include percent
 ## Record CRUD Path
 
 `app_get -> record_schema_get(schema_mode="browse", view_id=...) -> record_list / record_get`
-`record_schema_get(schema_mode="applicant") -> record_write`
+`record_schema_get(schema_mode="applicant") -> record_insert`
+`app_get -> record_schema_get(schema_mode="browse", view_id=...) -> record_update`
+`record_list / record_get -> record_delete`
 `record_schema_get(schema_mode="applicant") -> record_code_block_run`
 
 - Use `columns` as `[{{field_id}}]`
@@ -90,14 +93,12 @@ Analysis answers must include concrete numbers. When applicable, include percent
 - Use `order_by` items as `{{field_id, direction}}`
 - Legacy forms such as bare integer `field_id`, `fieldId`, `operator`, `values`, or `order` may still parse, but they are compatibility-only and not the canonical DSL
 
-`record_write` uses SQL-like JSON clauses:
-
-- `insert` -> `values`
-- `update` -> `record_id + set`
-- `delete` -> `record_id` or `record_ids`
+- `record_insert` uses an applicant-node `fields` map keyed by field title.
+- `record_update` uses a view-scoped `fields` map keyed by field title.
+- `record_delete` deletes by `record_id` or `record_ids`.
 
 - Read relation targets from `record_schema_get.target_app_key` / `target_app_name` before preparing relation writes.
-- If a member or department field id is known but candidate ids are not, use `record_member_candidates` or `record_department_candidates` before `record_write`.
+- If a member or department field id is known but candidate ids are not, use `record_member_candidates` or `record_department_candidates` before `record_insert` or `record_update`.
 - For default-all member or department fields, prefer those field candidate tools instead of starting with `directory_*`.
 
 ## Code Block Path
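As a quick illustration of the canonical query DSL shapes listed above, a hypothetical `record_list` argument sketch; the field ids and any argument names other than `columns` and `order_by` are assumptions, not confirmed tool signatures.

```python
# Hypothetical record_list arguments following the canonical DSL shapes above.
record_list_args = {
    "app_key": "app_xxx",  # assumed parameter name
    "view_id": "view_xxx",  # from record_schema_get(schema_mode="browse", view_id=...)
    "columns": [{"field_id": 101}, {"field_id": 102}],  # [{field_id}]
    "order_by": [{"field_id": 101, "direction": "desc"}],  # {field_id, direction}
}
```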
@@ -3,6 +3,7 @@ from __future__ import annotations
 import hashlib
 import json
 import mimetypes
+import re
 import shutil
 import tempfile
 from io import BytesIO
@@ -35,6 +36,7 @@ SAFE_REPAIRS = {
     "normalize_number_formats",
     "normalize_url_cells",
 }
+EMAIL_PATTERN = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")
 
 
 class ImportTools(ToolBase):
@@ -135,7 +137,7 @@ class ImportTools(ToolBase):
 
         def runner(session_profile, context):
             import_capability, import_warnings = self._fetch_import_capability(context, app_key)
-            expected_columns, schema_fingerprint = self._expected_import_columns(
+            field_index, expected_columns, schema_fingerprint = self._resolve_import_schema_bundle(
                 profile,
                 context,
                 app_key,
@@ -261,7 +263,7 @@ class ImportTools(ToolBase):
                     "message": "record_import_verify could not determine import permission from app metadata; continuing with file verification only.",
                 }
             ]
-            expected_columns, schema_fingerprint = self._expected_import_columns(
+            field_index, expected_columns, schema_fingerprint = self._resolve_import_schema_bundle(
                 profile,
                 context,
                 app_key,
@@ -275,8 +277,11 @@ class ImportTools(ToolBase):
             )
             template_header_titles = template_header_profile.get("allowed_titles")
             local_check = self._local_verify(
+                profile=profile,
+                context=context,
                 path=path,
                 app_key=app_key,
+                field_index=field_index,
                 expected_columns=expected_columns,
                 allowed_header_titles=template_header_titles,
                 schema_fingerprint=schema_fingerprint,
@@ -292,8 +297,11 @@ class ImportTools(ToolBase):
             if auto_normalization is not None:
                 effective_path = Path(str(auto_normalization["verified_file_path"]))
                 effective_local_check = self._local_verify(
+                    profile=profile,
+                    context=context,
                     path=effective_path,
                     app_key=app_key,
+                    field_index=field_index,
                     expected_columns=expected_columns,
                     allowed_header_titles=list(auto_normalization["header_titles"]),
                     schema_fingerprint=schema_fingerprint,
@@ -550,7 +558,13 @@ class ImportTools(ToolBase):
                 message="the file changed after verification; run record_import_verify again",
                 extra={"accepted": False},
             )
-        _, current_schema_fingerprint = self._expected_import_columns(profile, context, app_key)
+        stored_import_capability = stored.get("import_capability")
+        _, current_schema_fingerprint = self._expected_import_columns(
+            profile,
+            context,
+            app_key,
+            import_capability=stored_import_capability if isinstance(stored_import_capability, dict) else None,
+        )
         if current_schema_fingerprint != stored.get("schema_fingerprint"):
             return self._failed_start_result(
                 error_code="IMPORT_SCHEMA_CHANGED_AFTER_VERIFY",
@@ -702,14 +716,14 @@ class ImportTools(ToolBase):
         except RuntimeError as exc:
             return self._runtime_error_as_result(exc, error_code="IMPORT_STATUS_AMBIGUOUS")
 
-    def _expected_import_columns(
+    def _resolve_import_schema_bundle(
         self,
         profile: str,
         context,
         app_key: str,
         *,
         import_capability: JSONObject | None = None,
-    ) -> tuple[list[JSONObject], str]:  # type: ignore[no-untyped-def]
+    ) -> tuple[Any, list[JSONObject], str]:  # type: ignore[no-untyped-def]
         auth_source = _normalize_optional_text((import_capability or {}).get("auth_source")) or "unknown"
         if auth_source == "data_manage_auth":
             schema = self.backend.request("GET", context, f"/app/{app_key}/form", params={"type": 1})
@@ -744,16 +758,33 @@ class ImportTools(ToolBase):
                 }
             )
         expected_columns.sort(key=lambda item: int(item["field_id"]))
-        schema_fingerprint = hashlib.sha256(
-            json.dumps(expected_columns, ensure_ascii=False, sort_keys=True).encode("utf-8")
-        ).hexdigest()
+        schema_fingerprint = _stable_import_schema_fingerprint(expected_columns)
+        return index, expected_columns, schema_fingerprint
+
+    def _expected_import_columns(
+        self,
+        profile: str,
+        context,
+        app_key: str,
+        *,
+        import_capability: JSONObject | None = None,
+    ) -> tuple[list[JSONObject], str]:  # type: ignore[no-untyped-def]
+        _, expected_columns, schema_fingerprint = self._resolve_import_schema_bundle(
+            profile,
+            context,
+            app_key,
+            import_capability=import_capability,
+        )
         return expected_columns, schema_fingerprint
 
     def _local_verify(
         self,
         *,
+        profile: str,
+        context,
         path: Path,
         app_key: str,
+        field_index: Any,
         expected_columns: list[JSONObject],
         allowed_header_titles: list[str] | None,
         schema_fingerprint: str,
@@ -815,6 +846,16 @@ class ImportTools(ToolBase):
             )
             base_result["issues"].extend(header_analysis["issues"])
             base_result["repair_suggestions"].extend(header_analysis["repair_suggestions"])
+            if not any(issue.get("severity") == "error" for issue in base_result["issues"]):
+                semantic_issues, semantic_warnings = self._inspect_semantic_cells(
+                    profile=profile,
+                    context=context,
+                    sheet=sheet,
+                    expected_columns=expected_columns,
+                    field_index=field_index,
+                )
+                base_result["issues"].extend(semantic_issues)
+                base_result["warnings"].extend(semantic_warnings)
             trailing_blank_rows = _count_trailing_blank_rows(sheet)
             if trailing_blank_rows > 0:
                 base_result["warnings"].append(
@@ -840,6 +881,190 @@ class ImportTools(ToolBase):
             base_result["error_code"] = "IMPORT_VERIFICATION_FAILED"
         return base_result
 
+    def _inspect_semantic_cells(
+        self,
+        *,
+        profile: str,
+        context,
+        sheet,
+        expected_columns: list[JSONObject],
+        field_index: Any,
+    ) -> tuple[list[JSONObject], list[JSONObject]]:  # type: ignore[no-untyped-def]
+        issues: list[JSONObject] = []
+        warnings: list[JSONObject] = []
+        header_positions = _sheet_header_positions(sheet)
+        expected_by_key: dict[str, list[JSONObject]] = {}
+        for column in expected_columns:
+            key = _normalize_header_key(column.get("title"))
+            if key:
+                expected_by_key.setdefault(key, []).append(column)
+        for key, columns in expected_by_key.items():
+            positions = header_positions.get(key, [])
+            if len(columns) != 1 or len(positions) != 1:
+                continue
+            column = columns[0]
+            column_index = positions[0]
+            write_kind = _normalize_optional_text(column.get("write_kind")) or "scalar"
+            if column.get("options"):
+                issue = _inspect_enum_column(sheet, column_index=column_index, column=column)
+                if issue is not None:
+                    issues.append(issue)
+                continue
+            if write_kind == "relation":
+                issue = _inspect_relation_column(sheet, column_index=column_index, column=column)
+                if issue is not None:
+                    issues.append(issue)
+                continue
+            field = field_index.by_id.get(str(column.get("field_id"))) if field_index is not None else None
+            if (
+                write_kind == "member"
+                and field is not None
+                and (
+                    field.member_select_scope_type is not None
+                    or field.member_select_scope is not None
+                )
+            ):
+                member_issue, member_warning = self._inspect_member_column(
+                    context=context,
+                    sheet=sheet,
+                    column_index=column_index,
+                    column=column,
+                    field=field,
+                )
+                if member_issue is not None:
+                    issues.append(member_issue)
+                    continue
+                if member_warning is not None:
+                    warnings.append(member_warning)
+                continue
+            if (
+                write_kind == "department"
+                and field is not None
+                and (
+                    field.dept_select_scope_type is not None
+                    or field.dept_select_scope is not None
+                )
+            ):
+                department_issue, department_warning = self._inspect_department_column(
+                    context=context,
+                    sheet=sheet,
+                    column_index=column_index,
+                    column=column,
+                    field=field,
+                )
+                if department_issue is not None:
+                    issues.append(department_issue)
+                    continue
+                if department_warning is not None:
+                    warnings.append(department_warning)
+                continue
+        return issues, warnings
+
+    def _inspect_member_column(
+        self,
+        *,
+        context,
+        sheet,
+        column_index: int,
+        column: JSONObject,
+        field,
+    ) -> tuple[JSONObject | None, JSONObject | None]:  # type: ignore[no-untyped-def]
+        invalid_email_samples: list[str] = []
+        scope_miss_samples: list[str] = []
+        checked_values: set[str] = set()
+        for row_index in range(2, sheet.max_row + 1):
+            text = _normalize_optional_text(sheet.cell(row=row_index, column=column_index).value)
+            if text is None:
+                continue
+            normalized = text.strip()
+            if normalized in checked_values:
+                continue
+            checked_values.add(normalized)
+            if not EMAIL_PATTERN.fullmatch(normalized):
+                invalid_email_samples.append(f"row {row_index}: {normalized}")
+                if len(invalid_email_samples) >= 3:
+                    break
+                continue
+            try:
+                candidates = self._record_tools._resolve_member_candidates(context, field, keyword=normalized)
+                matches = self._record_tools._match_member_candidates(candidates, normalized)
+            except QingflowApiError as exc:
+                if exc.category == "not_supported":
+                    return None, {
+                        "code": "MEMBER_CANDIDATE_VALIDATION_SKIPPED",
+                        "message": f"Member candidate scope for column '{column['title']}' could not be resolved safely during local precheck.",
+                    }
+                raise
+            except RuntimeError:
+                return None, {
+                    "code": "MEMBER_CANDIDATE_VALIDATION_SKIPPED",
+                    "message": f"Member candidate scope for column '{column['title']}' could not be resolved safely during local precheck.",
+                }
+            if len(matches) != 1:
+                scope_miss_samples.append(f"row {row_index}: {normalized}")
+                if len(scope_miss_samples) >= 3:
+                    break
+        if invalid_email_samples:
+            return _issue(
+                "MEMBER_IMPORT_REQUIRES_EMAIL",
+                f"Column '{column['title']}' must use member email values in import files. Samples: {', '.join(invalid_email_samples)}",
+                severity="error",
+            ), None
+        if scope_miss_samples:
+            return _issue(
+                "MEMBER_NOT_IN_CANDIDATE_SCOPE",
+                f"Column '{column['title']}' contains members outside the current candidate scope. Samples: {', '.join(scope_miss_samples)}",
+                severity="error",
+            ), None
+        return None, None
+
+    def _inspect_department_column(
+        self,
+        *,
+        context,
+        sheet,
+        column_index: int,
+        column: JSONObject,
+        field,
+    ) -> tuple[JSONObject | None, JSONObject | None]:  # type: ignore[no-untyped-def]
+        scope_miss_samples: list[str] = []
+        checked_values: set[str] = set()
+        for row_index in range(2, sheet.max_row + 1):
+            value = sheet.cell(row=row_index, column=column_index).value
+            text = _normalize_optional_text(value)
+            if text is None:
+                continue
+            normalized = text.strip()
+            if normalized in checked_values:
+                continue
+            checked_values.add(normalized)
+            try:
+                candidates = self._record_tools._resolve_department_candidates(context, field, keyword=normalized)
+                matches = self._record_tools._match_department_candidates(candidates, normalized)
+            except QingflowApiError as exc:
+                if exc.category == "not_supported":
+                    return None, {
+                        "code": "DEPARTMENT_CANDIDATE_VALIDATION_SKIPPED",
+                        "message": f"Department candidate scope for column '{column['title']}' could not be resolved safely during local precheck.",
+                    }
+                raise
+            except RuntimeError:
+                return None, {
+                    "code": "DEPARTMENT_CANDIDATE_VALIDATION_SKIPPED",
+                    "message": f"Department candidate scope for column '{column['title']}' could not be resolved safely during local precheck.",
+                }
+            if len(matches) != 1:
+                scope_miss_samples.append(f"row {row_index}: {normalized}")
+                if len(scope_miss_samples) >= 3:
+                    break
+        if scope_miss_samples:
+            return _issue(
+                "DEPARTMENT_NOT_IN_CANDIDATE_SCOPE",
+                f"Column '{column['title']}' contains departments outside the current candidate scope. Samples: {', '.join(scope_miss_samples)}",
+                severity="error",
+            ), None
+        return None, None
+
     def _load_template_header_profile(
         self,
         context,
@@ -1215,12 +1440,17 @@ def _analyze_headers(
     *,
     allowed_titles: list[str] | None = None,
 ) -> dict[str, Any]:
-    expected_by_key = {_normalize_header_key(item["title"]): item for item in expected_columns}
-    allowed_by_key = (
-        {_normalize_header_key(title): title for title in allowed_titles if _normalize_optional_text(title)}
-        if allowed_titles
-        else {key: item["title"] for key, item in expected_by_key.items()}
-    )
+    expected_titles = [str(item["title"]) for item in expected_columns]
+    allowed_title_list = allowed_titles if allowed_titles else expected_titles
+    allowed_counts = _header_title_counts(allowed_title_list)
+    allowed_by_key = {
+        key: title
+        for key, title in (
+            (_normalize_header_key(title), _normalize_optional_text(title))
+            for title in allowed_title_list
+        )
+        if key and title
+    }
     seen: dict[str, int] = {}
     actual_headers: list[str] = []
     for item in header_row:
@@ -1231,10 +1461,24 @@ def _analyze_headers(
         actual_headers.append(text)
         key = _normalize_header_key(text)
         seen[key] = seen.get(key, 0) + 1
-    actual_keys = {key for key, count in seen.items() if key and count > 0}
-    missing = [title for key, title in allowed_by_key.items() if key not in actual_keys]
+    missing: list[str] = []
+    for key, expected_count in allowed_counts.items():
+        actual_count = seen.get(key, 0)
+        if actual_count >= expected_count:
+            continue
+        title = allowed_by_key.get(key) or key
+        if expected_count <= 1:
+            missing.append(title)
+        else:
+            missing.append(f"{title} (need {expected_count}, got {actual_count})")
     extra = [text for text in actual_headers if text and _normalize_header_key(text) not in allowed_by_key]
-    duplicates = [text for text in actual_headers if text and seen.get(_normalize_header_key(text), 0) > 1]
+    duplicates = []
+    for key, count in seen.items():
+        if not key:
+            continue
+        allowed_count = allowed_counts.get(key, 0)
+        if count > max(allowed_count, 1 if allowed_count == 0 else allowed_count):
+            duplicates.append(allowed_by_key.get(key) or key)
     issues: list[JSONObject] = []
     repair_suggestions: list[str] = []
     if missing:
@@ -1279,6 +1523,111 @@ def _analyze_headers(
     return {"issues": issues, "repair_suggestions": repair_suggestions}
 
 
+def _header_title_counts(titles: list[str]) -> dict[str, int]:
+    counts: dict[str, int] = {}
+    for title in titles:
+        key = _normalize_header_key(title)
+        if not key:
+            continue
+        counts[key] = counts.get(key, 0) + 1
+    return counts
+
+
+def _sheet_header_positions(sheet) -> dict[str, list[int]]:  # type: ignore[no-untyped-def]
+    mapping: dict[str, list[int]] = {}
+    for index, cell in enumerate(next(sheet.iter_rows(min_row=1, max_row=1), []), start=1):
+        key = _normalize_header_key(cell.value)
+        if not key:
+            continue
+        mapping.setdefault(key, []).append(index)
+    return mapping
+
+
+def _inspect_enum_column(sheet, *, column_index: int, column: JSONObject) -> JSONObject | None:  # type: ignore[no-untyped-def]
+    options = [str(item).strip() for item in column.get("options", []) if str(item).strip()]
+    if not options:
+        return None
+    option_map = {_normalize_header_key(item): item for item in options}
+    invalid_samples: list[str] = []
+    for row_index in range(2, sheet.max_row + 1):
+        text = _normalize_optional_text(sheet.cell(row=row_index, column=column_index).value)
+        if text is None:
+            continue
+        if _normalize_header_key(text) in option_map:
+            continue
+        invalid_samples.append(f"row {row_index}: {text}")
+        if len(invalid_samples) >= 3:
+            break
+    if not invalid_samples:
+        return None
+    return _issue(
+        "INVALID_ENUM_VALUES",
+        f"Column '{column['title']}' contains values outside the allowed options. Samples: {', '.join(invalid_samples)}",
+        severity="error",
+    )
+
+
+def _inspect_relation_column(sheet, *, column_index: int, column: JSONObject) -> JSONObject | None:  # type: ignore[no-untyped-def]
+    invalid_samples: list[str] = []
+    for row_index in range(2, sheet.max_row + 1):
+        value = sheet.cell(row=row_index, column=column_index).value
+        text = _normalize_optional_text(value)
+        if text is None:
+            continue
+        relation_id = _coerce_positive_relation_id(value)
+        if relation_id is not None:
+            continue
+        invalid_samples.append(f"row {row_index}: {text}")
+        if len(invalid_samples) >= 3:
+            break
+    if not invalid_samples:
+        return None
+    return _issue(
+        "RELATION_IMPORT_REQUIRES_APPLY_ID",
+        f"Column '{column['title']}' must use target record apply_id values during import. Samples: {', '.join(invalid_samples)}",
+        severity="error",
+    )
+
+
+def _stable_import_schema_fingerprint(expected_columns: list[JSONObject]) -> str:
+    stable_columns = []
+    for item in expected_columns:
+        stable_columns.append(
+            {
+                "field_id": item["field_id"],
+                "title": item["title"],
+                "que_type": item["que_type"],
+                "required": item["required"],
+                "write_kind": item["write_kind"],
+                "options": item.get("options", []),
+                "requires_lookup": bool(item.get("requires_lookup")),
+                "requires_upload": bool(item.get("requires_upload")),
+                "target_app_key": item.get("target_app_key"),
+            }
+        )
+    return hashlib.sha256(
+        json.dumps(stable_columns, ensure_ascii=False, sort_keys=True).encode("utf-8")
+    ).hexdigest()
+
+
+def _coerce_positive_relation_id(value: Any) -> int | None:
+    if isinstance(value, bool):
+        return None
+    if isinstance(value, int):
+        return value if value > 0 else None
+    if isinstance(value, float):
+        if value.is_integer() and value > 0:
+            return int(value)
+        return None
+    text = _normalize_optional_text(value)
+    if text is None:
+        return None
+    if text.isdigit():
+        parsed = int(text)
+        return parsed if parsed > 0 else None
+    return None
+
+
 def _infer_header_depth(sheet) -> int:  # type: ignore[no-untyped-def]
     header_depth = 1
     merged_cells = getattr(sheet, "merged_cells", None)