@josephyan/qingflow-app-user-mcp 0.2.0-beta.13 → 0.2.0-beta.15

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -3,13 +3,13 @@
  Install:

  ```bash
- npm install @josephyan/qingflow-app-user-mcp@0.2.0-beta.13
+ npm install @josephyan/qingflow-app-user-mcp@0.2.0-beta.15
  ```

  Run:

  ```bash
- npx -y -p @josephyan/qingflow-app-user-mcp@0.2.0-beta.13 qingflow-app-user-mcp
+ npx -y -p @josephyan/qingflow-app-user-mcp@0.2.0-beta.15 qingflow-app-user-mcp
  ```

  Environment:
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "@josephyan/qingflow-app-user-mcp",
-   "version": "0.2.0-beta.13",
+   "version": "0.2.0-beta.15",
    "description": "Operational end-user MCP for Qingflow records, tasks, comments, and directory workflows.",
    "license": "MIT",
    "type": "module",
package/pyproject.toml CHANGED
@@ -4,7 +4,7 @@ build-backend = "setuptools.build_meta"

  [project]
  name = "qingflow-mcp"
- version = "0.2.0b13"
+ version = "0.2.0b15"
  description = "User-authenticated MCP server for Qingflow"
  readme = "README.md"
  license = "MIT"
@@ -81,6 +81,7 @@ Do not use builder-side tools here:

  - Prefer `record_query` as the default read entry
  - Treat `record_query(list)` as the default wide-table browse and export endpoint; pass explicit `select_columns`, do not expect raw answer arrays there, and let the tool auto-batch columns when the backend per-request field cap is hit
+ - If the user is asking for `分析 / 洞察 / 分布 / 占比 / 平均 / 排名 / 趋势 / 所有 / 全部 / 全国 / 高价值` (analysis, insight, distribution, share, average, ranking, trend, all, entire, nationwide, high-value), treat the task as analysis-first and start with `record_query_plan`
  - Use `request_route` from tool responses to verify the active `base_url` and `qf_version` whenever route mismatches are plausible
  - Use `directory_search` for fuzzy internal lookup across both members and departments
  - Use `directory_list_all_internal_users` when the user explicitly wants a complete internal member list within the current workspace or within a specific department or role
@@ -90,6 +91,7 @@ Do not use builder-side tools here:
  - Use `task_list_grouped` when worksheet or group buckets matter
  - Use `task_urge` only when the user clearly wants a reminder sent for a pending task
  - Use `record_query_plan` before final statistics or when field selectors are ambiguous
+ - If the target fields are still uncertain, use the fixed order `record_field_resolve -> record_query_plan -> record_query/record_aggregate`; do not bounce between read tools by trial and error
  - For precise record lookup, use `record_get` when `apply_id` is known
  - Use `record_field_resolve` when the user gives field titles and you are not fully sure about the exact schema; do not guess ambiguous fields silently
  - Treat field selectors as schema-first and platform-generic. Prefer exact field titles, then neutral aliases such as `创建时间`, `新增时间`, `负责人`, `部门`, `时间`, or `阶段` only when the tool resolves them clearly. Do not assume CRM shorthand like `销售`, `商机阶段`, `客户全称`, or similar domain shortcuts apply across arbitrary Qingflow apps
@@ -97,6 +99,13 @@ Do not use builder-side tools here:
  - For deletes, confirm the exact record scope and report the deleted ids
  - When validating business data volume, use `effective_count` over raw backend totals
  - For summary or aggregate conclusions, prefer `strict_full=true`
+ - For distribution, ratio, or final-count analysis, prefer the fixed order:
+   1. `record_query_plan`
+   2. `record_query(query_mode="summary")`
+   3. `record_aggregate`
+ - For analysis routes, prefer `auto_expand_pages=true` unless the user explicitly wants a quick exploratory sample
+ - Do not use `record_query(list)` as the basis for final averages, ratios, rankings, trends, or “全部数据” (“all data”) claims; when capped, it is sample browsing only
+ - If `record_query_plan` returns `estimate.recommended_arguments` or `suggested_next_call`, reuse those arguments instead of guessing scan parameters manually
  - In `prod`, prefer read-first even more strictly and avoid deletes unless the record scope is explicit in the conversation
  - For attachments, first run `file_upload_local`, then pass the returned `attachment_value` into `record_create` or `record_update`; do not try to write local file paths directly into attachment fields
  - For relation fields, first query the target app and resolve the referenced record `apply_id`; do not assume titles, numbers, or business keys can be written directly into a relation field
@@ -126,6 +135,14 @@ When the user asks for demo data, seed, smoke data, or mock data:

  - low-level list totals from the backend may report `0` while rows are present; prefer `record_query(summary)` or `record_aggregate` for final conclusions
  - `record_query(summary)` and `record_aggregate` expose `completeness`; do not treat partial scans as final conclusions
+ - `record_query(summary)` and `record_aggregate` now also expose `analysis_status`, `safe_for_final_conclusion`, and `analysis_counts`; if `status=partial_success` or `safe_for_final_conclusion=false`, do not present the result as final
+ - If an analysis answer did not use `record_query_plan`, downgrade the wording to `初步观察` (preliminary observation); do not call it a `结论` (conclusion) or an `洞察` (insight)
+ - If `record_query(list)` reports `row_cap_hit`, `sample_only`, capped `returned_items`, or compact output, explicitly say it is a sample rather than full data
+ - If summary/aggregate is full but list evidence is sample-only, report them separately as:
+   - `全量可信结论` (fully covered, trustworthy conclusion)
+   - `样本观察(不作为最终结论)` (sample observation, not a final conclusion)
+   - optional `待验证假设` (hypothesis pending verification)
+ - For aggregate or summary answers, report both `backend_total_count` and `scanned_count` when coverage matters
  - `record_write_plan` is static preflight, not a guarantee that submit will pass runtime linkage or visibility checks
  - `record_create` now returns integer `apply_id`; you can pass that id directly into `record_get`, `record_update`, or `record_delete`
  - `verify_write=true` means the tool read the record back and compared the written fields; if it returns `status=verification_failed` or `ok=false`, do not report the create or update as successful
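The `analysis_status` / `safe_for_final_conclusion` gating in the bullets above amounts to a small wording guard. The sketch below is illustrative only, not part of the package; the payload shape and the `used_record_query_plan` key (standing in for "the answer was plan-backed") are assumptions:

```python
# Hypothetical wording gate over a record_query(summary) / record_aggregate
# payload. Field names mirror the bullets above; the payload shape is assumed.

def conclusion_label(payload: dict) -> str:
    """Choose the reporting register for an analysis result."""
    if payload.get("status") == "partial_success":
        return "初步观察"  # partial scan: preliminary observation only
    if not payload.get("safe_for_final_conclusion", False):
        return "初步观察"
    if not payload.get("used_record_query_plan", False):
        return "初步观察"  # answer skipped record_query_plan: downgrade wording
    return "全量可信结论"  # full, plan-backed scan: safe to present as final

print(conclusion_label({"status": "success",
                        "safe_for_final_conclusion": True,
                        "used_record_query_plan": True}))
# 全量可信结论
```

Any falsy or missing flag downgrades the label, so the default path is the cautious one.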
@@ -149,6 +166,10 @@ When the user asks for demo data, seed, smoke data, or mock data:
  - Attachment write: upload first, write the returned URL object second, and prefer `verify_write=true`
  - Relation write: query the target app first, capture the referenced record `apply_id`, then write the relation field and verify the readback
  - Production discrepancy triage: compare the response `request_route` with the browser environment before assuming the data query is wrong
+ - Final analysis reporting template:
+   - `全量可信结论`
+   - `样本观察`
+   - `待验证假设`

  ## Resources

@@ -3,8 +3,16 @@
  ## Counts

  - Prefer `effective_count`
- - For `record_query(summary)` and `record_aggregate`, inspect `completeness` before concluding
+ - For `record_query(summary)` and `record_aggregate`, inspect `completeness`, `analysis_status`, and `safe_for_final_conclusion` before concluding
+ - If `status=partial_success`, treat the result as exploratory unless the user explicitly asked for a partial sample
+ - `record_query(list)` is for browsing and sample inspection. If it reports `row_cap_hit`, `sample_only`, or capped `returned_items`, do not present it as full data
+ - When coverage matters, surface:
+   - `backend_total_count`
+   - `scanned_count`
+   - `unscanned_count`
+ - Reuse `suggested_next_call` or `estimate.recommended_arguments` instead of inventing bigger scan settings by hand
  - If the browser and MCP disagree, compare `request_route.base_url` and `request_route.qf_version` first
+ - Do not mix a full aggregate total with sample-only list detail in one sentence like “基于全部数据分析” (“based on analysis of all data”); split the answer into `全量结论` and `样本观察`

  ## Record titles

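The three coverage counts above are linked by simple arithmetic (`unscanned_count = backend_total_count - scanned_count`). A quick illustrative helper, not a package API:

```python
def coverage_line(backend_total_count: int, scanned_count: int) -> str:
    """Format the coverage numbers a final answer should surface."""
    unscanned_count = backend_total_count - scanned_count
    pct = 100.0 * scanned_count / backend_total_count if backend_total_count else 0.0
    return (f"scanned {scanned_count}/{backend_total_count} "
            f"({pct:.1f}%), unscanned {unscanned_count}")

print(coverage_line(1200, 900))
# scanned 900/1200 (75.0%), unscanned 300
```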
@@ -16,6 +24,7 @@
  - `record_write_plan` is static preflight only; linked visibility and runtime required rules can still reject writes
  - `record_write_plan` now exposes `write_format.support_level`; check `full / restricted / unsupported` before attempting non-trivial writes
  - Use `record_field_resolve` when field titles are uncertain instead of guessing ids
+ - For analysis tasks, use the fixed preflight order `record_field_resolve -> record_query_plan -> summary/aggregate`; do not switch tools blindly after `FIELD_NOT_FOUND` or ambiguity
  - Prefer `strict_full=true` for final statistics or business conclusions
  - `record_create` and `record_update` can do post-write verification with `verify_write=true`; use that for complex, subtable, or production writes
  - `apply_id` is normalized to an integer; pass it directly into later record tools
@@ -15,6 +15,32 @@ Use `record_query_plan` first when:
  - filters are still in natural-language shape
  - the result may be used as a final conclusion
  - scan scope or completeness is unclear
+ - the user asks for a distribution, ratio, ranking, top-N, or any grouped aggregate
+ - the user asks for `分析 / 洞察 / 分布 / 占比 / 平均 / 排名 / 趋势 / 所有 / 全部 / 全国 / 高价值`
+
+ ## Final analysis pattern
+
+ 1. Run `record_query_plan`
+ 2. If the plan exposes `estimate.recommended_arguments` or `suggested_next_call`, prefer those arguments directly
+ 3. Run `record_query(query_mode="summary", strict_full=true, auto_expand_pages=true)` to confirm the total scope
+ 4. Run `record_aggregate(strict_full=true, auto_expand_pages=true)` for grouped results
+ 5. Run `record_query(query_mode="list")` only if you still need sample rows or examples
+ 6. Report `backend_total_count`, `scanned_count`, and whether the result is safe for a final conclusion
+ 7. If `status=partial_success` or `safe_for_final_conclusion=false`, stop at “partial result” instead of presenting a final business conclusion
+ 8. If list rows are sample-only, separate the answer into:
+    - `全量可信结论`
+    - `样本观察(不作为最终结论)`
+    - optional `待验证假设`
+
+ ## Analysis anti-pattern
+
+ Do not do this:
+
+ 1. Run only `record_query(query_mode="list")`
+ 2. Get `200` rows back
+ 3. Report averages, shares, and regional distribution (平均值、占比、地域分布) as if they were based on all records
+
+ This is not acceptable because the list endpoint can be capped. Use `record_query_plan -> summary -> aggregate` first, then treat list rows as sample-only evidence.

  ## Create pattern

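The numbered pattern above can be sketched as one driver function. This is a sketch under assumptions: `call_tool`, the keyword argument names, and any payload fields beyond those documented above are hypothetical stand-ins, not the package's actual client API:

```python
from typing import Any, Callable

def run_final_analysis(call_tool: Callable[..., dict],
                       app_id: str) -> dict[str, Any]:
    """Fixed-order analysis: plan -> summary -> aggregate; list only for samples."""
    plan = call_tool("record_query_plan", app_id=app_id)
    # Step 2: reuse the plan's recommended arguments instead of guessing scan knobs.
    suggested = plan.get("suggested_next_call") or {}
    args = (suggested.get("arguments")
            or plan.get("estimate", {}).get("recommended_arguments", {}))
    summary = call_tool("record_query", app_id=app_id, query_mode="summary",
                        strict_full=True, auto_expand_pages=True, **args)
    if (summary.get("status") == "partial_success"
            or not summary.get("safe_for_final_conclusion", False)):
        # Step 7: stop at a partial result, not a final business conclusion.
        return {"label": "初步观察", "summary": summary}
    aggregate = call_tool("record_aggregate", app_id=app_id, strict_full=True,
                          auto_expand_pages=True, **args)
    return {"label": "全量可信结论", "summary": summary, "aggregate": aggregate}
```

A run that comes back `partial_success` at the summary step never reaches `record_aggregate`, which is exactly the stop rule in step 7.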
@@ -2,4 +2,4 @@ from __future__ import annotations

  __all__ = ["__version__"]

- __version__ = "0.2.0b13"
+ __version__ = "0.2.0b15"
@@ -318,15 +318,65 @@ class FieldRemovePatch(StrictModel):
          return self


+ def _coerce_layout_columns(value: Any) -> int | None:
+     if isinstance(value, bool):
+         return None
+     if isinstance(value, int):
+         return value if value > 0 else None
+     if isinstance(value, str):
+         stripped = value.strip()
+         if stripped.isdigit():
+             parsed = int(stripped)
+             return parsed if parsed > 0 else None
+     return None
+
+
+ def _normalize_layout_rows(value: Any, *, columns: int | None = None) -> Any:
+     if not isinstance(value, list):
+         return value
+     if value and all(isinstance(item, list) for item in value):
+         return value
+     if not value:
+         return []
+     width = columns if columns and columns > 0 else None
+     if width is None:
+         return [list(value)]
+     return [list(value[index : index + width]) for index in range(0, len(value), width) if value[index : index + width]]
+
+
  class LayoutSectionPatch(StrictModel):
      section_id: str | None = Field(default=None, validation_alias=AliasChoices("section_id", "sectionId"))
      title: str
-     rows: list[list[str]] = Field(default_factory=list)
+     rows: list[list[Any]] = Field(default_factory=list)
+
+     @model_validator(mode="before")
+     @classmethod
+     def normalize_aliases(cls, value: Any) -> Any:
+         if not isinstance(value, dict):
+             return value
+         payload = dict(value)
+         if "name" in payload and "title" not in payload:
+             payload["title"] = payload.pop("name")
+         shorthand: Any | None = None
+         if "rows" not in payload:
+             if "fields" in payload:
+                 shorthand = payload.pop("fields")
+             elif "field_ids" in payload:
+                 shorthand = payload.pop("field_ids")
+         if shorthand is not None:
+             payload["rows"] = _normalize_layout_rows(
+                 shorthand,
+                 columns=_coerce_layout_columns(payload.pop("columns", None)),
+             )
+         return payload

      @model_validator(mode="after")
      def validate_rows(self) -> "LayoutSectionPatch":
          if not self.rows:
              raise ValueError("section rows must be a non-empty list")
+         for row in self.rows:
+             if not isinstance(row, list) or not row:
+                 raise ValueError("section rows must be a non-empty list")
          if not self.section_id:
              self.section_id = _slugify_title(self.title)
          return self
@@ -392,6 +442,7 @@ class FlowTransitionPatch(StrictModel):

  class ViewUpsertPatch(StrictModel):
      name: str
+     view_key: str | None = Field(default=None, validation_alias=AliasChoices("view_key", "viewKey"))
      type: PublicViewType
      columns: list[str] = Field(default_factory=list)
      group_by: str | None = None