@clickzetta/cz-cli-darwin-arm64 0.3.19 → 0.3.21

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (40)
  1. package/bin/cz-cli +0 -0
  2. package/bin/skills/clickzetta-access-control/eval_cases.jsonl +1 -1
  3. package/bin/skills/clickzetta-batch-sync-pipeline/eval_cases.jsonl +5 -0
  4. package/bin/skills/clickzetta-cdc-sync-pipeline/eval_cases.jsonl +5 -0
  5. package/bin/skills/clickzetta-dba-guide/SKILL.md +542 -0
  6. package/bin/skills/clickzetta-dba-guide/eval_cases.jsonl +3 -0
  7. package/bin/skills/clickzetta-dw-modeling/eval_cases.jsonl +1 -1
  8. package/bin/skills/clickzetta-dynamic-table/eval_cases.jsonl +5 -0
  9. package/bin/skills/clickzetta-file-import-pipeline/eval_cases.jsonl +5 -0
  10. package/bin/skills/clickzetta-lakehouse-connect/SKILL.md +218 -0
  11. package/bin/skills/clickzetta-lakehouse-connect/eval_cases.jsonl +3 -0
  12. package/bin/skills/clickzetta-lakehouse-connect/evals/evals.json +35 -0
  13. package/bin/skills/clickzetta-lakehouse-connect/references/config-file.md +435 -0
  14. package/bin/skills/clickzetta-lakehouse-connect/references/jdbc.md +478 -0
  15. package/bin/skills/clickzetta-lakehouse-connect/references/python-sdk.md +225 -0
  16. package/bin/skills/clickzetta-lakehouse-connect/references/sqlalchemy.md +468 -0
  17. package/bin/skills/clickzetta-lakehouse-connect/references/zettapark-session.md +445 -0
  18. package/bin/skills/clickzetta-manage-comments/SKILL.md +219 -0
  19. package/bin/skills/clickzetta-manage-comments/eval_cases.jsonl +3 -0
  20. package/bin/skills/clickzetta-metadata/SKILL.md +483 -0
  21. package/bin/skills/clickzetta-metadata/eval_cases.jsonl +5 -0
  22. package/bin/skills/clickzetta-metadata/references/instance-views-reference.md +276 -0
  23. package/bin/skills/clickzetta-metadata/references/metering-views-reference.md +137 -0
  24. package/bin/skills/clickzetta-metadata/references/show-desc-reference.md +326 -0
  25. package/bin/skills/clickzetta-metadata/references/views-reference.md +271 -0
  26. package/bin/skills/clickzetta-oss-ingest-pipeline/eval_cases.jsonl +5 -0
  27. package/bin/skills/clickzetta-overview/SKILL.md +102 -0
  28. package/bin/skills/clickzetta-overview/eval_cases.jsonl +5 -0
  29. package/bin/skills/clickzetta-overview/references/brands-and-endpoints.md +79 -0
  30. package/bin/skills/clickzetta-overview/references/object-model.md +311 -0
  31. package/bin/skills/clickzetta-overview/references/studio-modules.md +173 -0
  32. package/bin/skills/clickzetta-realtime-sync-pipeline/eval_cases.jsonl +5 -0
  33. package/bin/skills/clickzetta-sql-pipeline-manager/eval_cases.jsonl +12 -0
  34. package/bin/skills/clickzetta-table-stream-pipeline/eval_cases.jsonl +5 -0
  35. package/bin/skills/clickzetta-vcluster-manager/eval_cases.jsonl +5 -0
  36. package/bin/skills/clickzetta-volume-manager/eval_cases.jsonl +5 -0
  37. package/bin/skills/cz-cli-inner/SKILL.md +5 -4
  38. package/package.json +1 -1
  39. package/bin/skills/clickzetta-data-ingest-pipeline/SKILL.md +0 -220
  40. package/bin/skills/clickzetta-data-ingest-pipeline/eval_cases.jsonl +0 -5
package/bin/skills/clickzetta-sql-pipeline-manager/eval_cases.jsonl ADDED
@@ -0,0 +1,12 @@
+ {"case_id":"001","type":"should_call","user_input":"帮我创建一个动态表,每 5 分钟从 raw_events 聚合数据","expected_skill":"clickzetta-sql-pipeline-manager","expected_output_contains":["CREATE DYNAMIC TABLE","REFRESH INTERVAL"]}
+ {"case_id":"002","type":"should_call","user_input":"怎么创建物化视图?","expected_skill":"clickzetta-sql-pipeline-manager","expected_output_contains":["CREATE MATERIALIZED VIEW"]}
+ {"case_id":"003","type":"should_call","user_input":"创建一个 Table Stream 捕获 orders 表的变更","expected_skill":"clickzetta-sql-pipeline-manager","expected_output_contains":["CREATE TABLE STREAM"]}
+ {"case_id":"004","type":"should_call","user_input":"怎么暂停动态表的刷新","expected_skill":"clickzetta-sql-pipeline-manager","expected_output_contains":["ALTER","SUSPEND"]}
+ {"case_id":"005","type":"should_call","user_input":"怎么查看动态表的刷新历史","expected_skill":"clickzetta-sql-pipeline-manager","expected_output_contains":["SHOW DYNAMIC TABLE REFRESH HISTORY"]}
+ {"case_id":"006","type":"should_call","user_input":"帮我设计一个 Medallion 架构的数据管道","expected_skill":"clickzetta-sql-pipeline-manager","expected_output_contains":["Bronze","Silver","Gold"]}
+ {"case_id":"007","type":"should_call","user_input":"从 Kafka 持续导入数据到 Lakehouse 用什么方式","expected_skill":"clickzetta-sql-pipeline-manager","expected_output_contains":["Pipe","read_kafka"]}
+ {"case_id":"008","type":"should_not_call","user_input":"帮我写一个 Node.js 后端","forbidden_skill":"clickzetta-sql-pipeline-manager"}
+ {"case_id":"009","type":"should_not_call","user_input":"怎么创建用户和授权","forbidden_skill":"clickzetta-sql-pipeline-manager"}
+ {"case_id":"010","type":"should_not_call","user_input":"Kubernetes 怎么部署","forbidden_skill":"clickzetta-sql-pipeline-manager"}
+ {"case_id":"011","type":"should_not_call","user_input":"怎么连接 Superset","forbidden_skill":"clickzetta-sql-pipeline-manager"}
+ {"case_id":"012","type":"should_not_call","user_input":"帮我优化一个慢查询","forbidden_skill":"clickzetta-sql-pipeline-manager"}
package/bin/skills/clickzetta-table-stream-pipeline/eval_cases.jsonl ADDED
@@ -0,0 +1,5 @@
+ {"case_id":"001","type":"should_call","user_input":"怎么创建 Table Stream 捕获表的增量变更?","expected_skill":"clickzetta-table-stream-pipeline","expected_output_contains":["CREATE TABLE STREAM","change_tracking"]}
+ {"case_id":"002","type":"should_call","user_input":"Table Stream 的 STANDARD 和 APPEND_ONLY 模式有什么区别?","expected_skill":"clickzetta-table-stream-pipeline","expected_output_contains":["STANDARD","APPEND_ONLY","INSERT"]}
+ {"case_id":"003","type":"should_call","user_input":"Table Stream 消费后 offset 怎么管理?","expected_skill":"clickzetta-table-stream-pipeline","expected_output_contains":["offset"]}
+ {"case_id":"004","type":"should_call","user_input":"怎么用 Table Stream + MERGE 实现幂等增量消费?","expected_skill":"clickzetta-table-stream-pipeline","expected_output_contains":["MERGE","幂等"]}
+ {"case_id":"005","type":"should_call","user_input":"Table Stream 的 __change_type 元数据字段有哪些值?","expected_skill":"clickzetta-table-stream-pipeline","expected_output_contains":["__change_type","INSERT","UPDATE","DELETE"]}
package/bin/skills/clickzetta-vcluster-manager/eval_cases.jsonl ADDED
@@ -0,0 +1,5 @@
+ {"case_id":"001","type":"should_call","user_input":"创建分析型集群的语法是什么?需要哪些参数?","expected_skill":"clickzetta-vcluster-manager","expected_output_contains":["CREATE VCLUSTER","ANALYTICS","VCLUSTER_SIZE"]}
+ {"case_id":"002","type":"should_call","user_input":"VCluster 的三种集群类型分别适合什么场景?","expected_skill":"clickzetta-vcluster-manager","expected_output_contains":["GENERAL","ANALYTICS","INTEGRATION"]}
+ {"case_id":"003","type":"should_call","user_input":"分析型集群的副本扩缩容参数怎么配置?","expected_skill":"clickzetta-vcluster-manager","expected_output_contains":["MIN_REPLICAS","MAX_REPLICAS"]}
+ {"case_id":"004","type":"should_call","user_input":"PRELOAD_TABLES 缓存预加载的语法和限制是什么?","expected_skill":"clickzetta-vcluster-manager","expected_output_contains":["PRELOAD_TABLES","ALTER VCLUSTER"]}
+ {"case_id":"005","type":"should_call","user_input":"集群的 AUTO_SUSPEND 和 AUTO_RESUME 机制是怎样的?","expected_skill":"clickzetta-vcluster-manager","expected_output_contains":["AUTO_SUSPEND_IN_SECOND","AUTO_RESUME"]}
package/bin/skills/clickzetta-volume-manager/eval_cases.jsonl ADDED
@@ -0,0 +1,5 @@
+ {"case_id":"001","type":"should_call","user_input":"我要挂载一个阿里云 OSS bucket 到 Lakehouse,需要先创建 Storage Connection,endpoint 是 oss-cn-hangzhou-internal.aliyuncs.com,帮我说明完整流程","expected_skill":"clickzetta-volume-manager","expected_output_contains":["connection","volume"]}
+ {"case_id":"002","type":"should_call","user_input":"创建一个外部 Volume 叫 eval_oss_volume,挂载 oss://studio-dev-hz/,用 eval_oss_conn 连接,开启目录自动刷新","expected_skill":"clickzetta-volume-manager","expected_output_contains":["eval_oss_volume"]}
+ {"case_id":"003","type":"should_call","user_input":"查看 eval_oss_volume 里有哪些文件","expected_skill":"clickzetta-volume-manager","expected_output_contains":["eval_oss_volume"]}
+ {"case_id":"004","type":"should_call","user_input":"直接查询 eval_oss_volume 里的 CSV 文件,看前 10 行","expected_skill":"clickzetta-volume-manager","expected_output_contains":["eval_oss_volume"]}
+ {"case_id":"005","type":"should_call","user_input":"删除 eval_oss_volume 和 eval_oss_conn,清理测试资源","expected_skill":"clickzetta-volume-manager","expected_output_contains":["eval_oss"]}
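The eval_cases.jsonl files added above share one schema per line. A minimal sketch of how such a case could be scored, assuming a runner that supplies the skill a run chose and the model output; the `check_case` helper and the sample answer are hypothetical, and only the field names (`type`, `expected_skill`, `forbidden_skill`, `expected_output_contains`) come from the diff:

```python
import json

def check_case(case: dict, chosen_skill: str, output: str) -> bool:
    """Score one eval case against the skill a run chose and its output."""
    if case["type"] == "should_call":
        # The expected skill must fire, and every expected substring must appear.
        if chosen_skill != case["expected_skill"]:
            return False
        return all(s in output for s in case.get("expected_output_contains", []))
    if case["type"] == "should_not_call":
        # Only requirement: the forbidden skill must not fire.
        return chosen_skill != case["forbidden_skill"]
    raise ValueError(f"unknown case type: {case['type']!r}")

line = ('{"case_id":"001","type":"should_call","user_input":"...",'
        '"expected_skill":"clickzetta-vcluster-manager",'
        '"expected_output_contains":["CREATE VCLUSTER","VCLUSTER_SIZE"]}')
case = json.loads(line)
print(check_case(case, "clickzetta-vcluster-manager",
                 "CREATE VCLUSTER ... VCLUSTER_SIZE = 'SMALL'"))  # True
```

Note that `should_not_call` cases ignore the output entirely; they only assert routing.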
package/bin/skills/cz-cli-inner/SKILL.md CHANGED
@@ -37,8 +37,8 @@ cz-cli task save-content <task> --file <f> Save task script
  cz-cli task save-config <task> Save task non-cron config, like retry, dependency
  cz-cli task save-cron <task> Save task schedule config
  cz-cli task deps <task> Show task dependencies (draft)
- cz-cli task online <task> Publish a task
- cz-cli task offline <task> Take task offline (irreversible)
+ cz-cli task deploy <task> Publish/deploy a task (alias: online)
+ cz-cli task undeploy <task> Undeploy a task, irreversible (alias: offline)
  cz-cli task execute <task> Execute ad-hoc
  cz-cli task delete <task> Delete draft/offline task
  cz-cli task flow dag <task> Get flow DAG
@@ -78,6 +78,7 @@ cz-cli datasource objects <name_or_id> <catalog>
  List objects (tables/topics/collections) in a catalog
  cz-cli datasource describe <name_or_id> <catalog> <object>
  Show object metadata (columns, types)
+ cz-cli datasource test <name_or_id> Test data source connectivity
  ```

  ## Output Formats
@@ -92,9 +93,9 @@ cz-cli datasource describe <name_or_id> <catalog> <object>
  1. **SQL is async by default**. Use `--sync` for SELECT when you need data immediately.
  2. **Write operations require `--write` flag** (INSERT/UPDATE/DELETE/CREATE/DROP).
  3. **Always pass `--type` when creating tasks** (SQL/PYTHON/SHELL/SPARK/FLOW).
- 4. **Flow tasks use `task flow *` commands exclusively** — never use `task save-content` or `task online` on flow nodes.
+ 4. **Flow tasks use `task flow *` commands exclusively** — never use `task save-content` or `task deploy` on flow nodes.
  5. **Paginated results**: `list` commands return page 1 only. Check `ai_message` in response for next-page hints.
- 6. **State-changing operations** (online/offline/execute/delete/refill): confirm intent with user first.
+ 6. **State-changing operations** (deploy/undeploy/execute/delete/refill): confirm intent with user first.
  7. **Multi-environment**: use `--profile <name>` to target a specific environment.
  8. **On `NO_PROFILE` error**: guide user to run `cz-cli setup`.

package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@clickzetta/cz-cli-darwin-arm64",
- "version": "0.3.19",
+ "version": "0.3.21",
  "description": "cz-cli binary for macOS ARM64 (Apple Silicon)",
  "os": ["darwin"],
  "cpu": ["arm64"],
package/bin/skills/clickzetta-data-ingest-pipeline/SKILL.md REMOVED
@@ -1,220 +0,0 @@
- ---
- name: clickzetta-data-ingest-pipeline
- description: |
-   ClickZetta Lakehouse data-ingestion overview and routing. Recommends the most suitable
-   ingestion method based on the user's data source type, latency requirements, and data volume,
-   then routes to the matching specialized Skill or directly performs simple imports.
-   Triggered when the user says "import data into the Lakehouse", "load data into the warehouse",
-   "load data into the lake", "how do I get my data in", "data collection", "data loading",
-   "ingest data", "load data", or "help me choose a data ingestion approach".
-   Keywords: data ingestion, import, routing, pipeline selection, data source
- ---
-
- # Lakehouse Data Ingestion Overview and Routing
-
- Recommend the most suitable ingestion method based on the user's data source, latency
- requirements, and data volume, then route to the matching specialized Pipeline Skill or
- directly perform simple imports.
-
- ## When to Use
-
- - The user wants to import data into ClickZetta Lakehouse but is unsure which method to use
- - The user describes a data source (Kafka, MySQL, OSS, files, etc.) and needs a recommended approach
- - The user wants to understand the use cases and trade-offs of the various ingestion methods
- - Keywords: data import, warehouse loading, lake loading, data collection, data loading, pipeline selection
-
- ## Prerequisites
-
- - A ClickZetta Lakehouse account with permission to create workspaces, schemas, tables, PIPEs, and tasks
- - **Execution environment (either one suffices; prefer cz-cli)**:
-   - **cz-cli path**: cz-cli installed (`pip install cz-cli`) and configured via `cz-cli configure`
-   - **MCP path**: clickzetta-studio-mcp or clickzetta-mcp-server tools available (`LH_execute_query`, `create_task`, `save_integration_task`, etc.)
-
- ## Environment Detection (read before executing)
-
- Before any operation, determine the current execution environment:
-
- **Step 1: check whether cz-cli is available**
- ```bash
- cz-cli --version
- ```
- - If the command exists → **use the cz-cli path** (see the "cz-cli Alternative Path" section at the end of this document, plus each specialized Skill's cz-cli section)
- - If not → continue with the MCP check
-
- **Step 2: check whether MCP is available (only if cz-cli is not)**
-
- Try calling the `LH_execute_query` tool with a trivial statement (e.g. `SELECT 1`).
- - If the tool appears in the tool list → **use the MCP path** (this document's default)
- - If not → stop and tell the user:
-   > "Neither cz-cli nor the MCP tools are available in this environment; please install one and retry.
-   > cz-cli: `pip install cz-cli`, then run `cz-cli configure`
-   > MCP: see the clickzetta-studio-mcp or clickzetta-mcp-server setup docs"
-
- ## Ingestion Decision Tree
-
- ### Step 1: confirm the data source type and requirements
-
- Collect the following from the user:
-
- 1. **Data source type**: Kafka / object storage (OSS/S3/COS) / relational database (MySQL/PostgreSQL/SQL Server) / local files / URL or web files / Java SDK / ZettaPark
- 2. **Latency requirement**: real-time (seconds) / near-real-time (minutes) / offline batch (hours or days)
- 3. **Sync scope**: single table / multiple tables / whole database
- 4. **Continuous sync needed**: one-off import / continuous incremental sync
- 5. **CDC (change data capture) needed**: yes / no
-
- ### Step 2: recommend a method from the decision matrix
-
- | Data source | Latency | Scope | Recommended method | Skill |
- |-------------|---------|-------|--------------------|-------|
- | Kafka | real-time / near-real-time | single topic | Kafka PIPE continuous ingestion (SQL) | `clickzetta-kafka-ingest-pipeline` |
- | Kafka | real-time | multiple topics | Studio real-time sync | `clickzetta-realtime-sync-pipeline` |
- | Object storage (OSS/S3/COS) | near-real-time / batch | continuously arriving files | PIPE continuous ingestion | `clickzetta-oss-ingest-pipeline` |
- | Object storage | one-off | batch of files | COPY INTO | `clickzetta-file-import-pipeline` (COPY INTO section) |
- | MySQL/PostgreSQL/SQL Server | real-time CDC | single table | Studio real-time sync | `clickzetta-realtime-sync-pipeline` |
- | MySQL/PostgreSQL/SQL Server | real-time CDC | multi-table / whole DB | Studio multi-table real-time sync | `clickzetta-cdc-sync-pipeline` |
- | MySQL/PostgreSQL/SQL Server | offline batch | single table | Studio batch sync | `clickzetta-batch-sync-pipeline` |
- | MySQL/PostgreSQL/SQL Server | offline batch | multiple tables | Studio multi-table batch sync | `clickzetta-batch-sync-pipeline` |
- | Local files / URL | one-off | single or multiple files | URL download + COPY INTO | `clickzetta-file-import-pipeline` |
- | Streaming incremental compute | near-real-time | driven by table changes | Dynamic Table + Stream | `clickzetta-incremental-compute-pipeline` |
- | Java application | real-time / batch | programmatic writes | Java SDK | (see SDK guidance below) |
- | Python/ZettaPark | batch | DataFrame | ZettaPark save_as_table | (see SDK guidance below) |
-
- ### Step 3: route to a specialized Skill or execute directly
-
- Based on the recommendation, apply the following routing logic:
-
- **Scenarios with a dedicated Skill** → tell the user the recommended method and hand off to the Skill:
- - `clickzetta-kafka-ingest-pipeline`: Kafka PIPE pipelines
- - `clickzetta-oss-ingest-pipeline`: object-storage PIPE pipelines
- - `clickzetta-batch-sync-pipeline`: Studio batch sync tasks
- - `clickzetta-realtime-sync-pipeline`: Studio real-time sync tasks
- - `clickzetta-cdc-sync-pipeline`: Studio multi-table real-time sync (CDC)
- - `clickzetta-incremental-compute-pipeline`: Dynamic Table + Stream incremental pipelines
- - `clickzetta-file-import-pipeline`: URL/file download imports
- - `clickzetta-table-stream-pipeline`: Table Stream change data capture
-
- **Simple scenarios without a dedicated Skill** → execute directly:
-
- #### SQL INSERT (small data volumes)
- ```sql
- -- executed via LH_execute_query
- INSERT INTO schema_name.table_name (col1, col2, col3)
- VALUES ('val1', 'val2', 'val3');
- ```
-
- #### Quick COPY INTO import (from a Volume)
- ```sql
- -- 1. Confirm the Volume contains files
- SHOW VOLUME DIRECTORY volume_name;
-
- -- 2. Run COPY INTO
- COPY INTO schema_name.table_name
- FROM VOLUME volume_name
- USING CSV
- OPTIONS('header' = 'true');
- ```
-
- #### Java SDK guidance
- Provide the key Java SDK configuration:
- - Maven dependency coordinates
- - Connection config (endpoint, workspace, schema, vcluster)
- - Bulk write API: `BulkloadWriter`
- - Real-time write API: `RealtimeWriter`
- - Point the user to the official doc: `comprehensive_guide_to_ingesting_javasdk_buckload_realtime`
-
- #### ZettaPark (Python) guidance
- - `INSERT`: `session.sql("INSERT INTO ...")`
- - `save_as_table`: `df.write.save_as_table("table_name")`
- - Point the user to the official doc: `comprehensive_guide_to_ingesting_zettapark_save_as_table`
-
- ## Loading into the Warehouse vs. into the Lake
-
- | Dimension | Warehouse loading | Lake loading |
- |-----------|-------------------|--------------|
- | Target | Lakehouse managed tables | user Volume (object storage) |
- | Format | converted to the internal columnar format | original file format preserved |
- | Query performance | high (columnar storage + indexes) | lower (raw files must be scanned) |
- | Use cases | analytical queries, BI reports, data warehousing | staging, raw-data archiving, cross-system sharing |
- | Common methods | Studio sync, PIPE, COPY INTO, SDK | PUT files, Python upload scripts |
-
- ## Examples
-
- ### Example 1: the user is unsure which method to use
-
- The user says: "I have a MySQL database and want to sync its orders table to the Lakehouse in real time."
-
- Routing logic:
- 1. Data source: MySQL (relational database)
- 2. Latency: real-time
- 3. Scope: single table
- 4. CDC needed: yes (real-time sync implies capturing changes)
- → Recommend: Studio real-time sync
- → Route to the `clickzetta-realtime-sync-pipeline` Skill
-
- ### Example 2: mixed data sources
-
- The user says: "We have Kafka user-behavior logs plus MySQL business data, and both must land in the Lakehouse."
-
- Routing logic:
- 1. Kafka user-behavior logs → `clickzetta-kafka-ingest-pipeline` (PIPE continuous ingestion)
- 2. MySQL business data → confirm the latency requirement:
-    - real-time → `clickzetta-realtime-sync-pipeline` or `clickzetta-cdc-sync-pipeline`
-    - offline → `clickzetta-batch-sync-pipeline`
- → Hand each off to its Skill
-
- ### Example 3: simple one-off file import
-
- The user says: "I have a CSV file to import."
-
- Routing logic:
- 1. Data source: local file
- 2. One-off import
- → Route to the `clickzetta-file-import-pipeline` Skill (file upload + COPY INTO)
-
- ## Error Handling
-
- | Scenario | Handling |
- |----------|----------|
- | The user cannot identify the data source type | Ask where the data currently lives (which system/service) and help classify it |
- | The requirement spans multiple ingestion methods | Split it into independent import tasks and route each to its Skill |
- | The recommended Skill does not exist yet | Provide the basic steps and key SQL/APIs for that method and point to the official docs |
- | The user's cloud environment lacks a connection type | Check available connection types with `LH_show_object_list` (object_type=CONNECTIONS) and suggest an alternative |
- | Very large data volumes (TB scale) | Suggest importing in batches; prefer PIPE or Studio sync tasks (both resumable) |
-
- ## Notes
-
- - This Skill is a routing entry point; it hands complex pipeline builds off to specialized Skills rather than executing them itself
- - Simple cases (SQL INSERT, one-off COPY INTO) can be completed directly within this Skill
- - Recommendations must account for the user's cloud environment (Alibaba Cloud/Tencent Cloud/AWS); supported connection types differ
- - Use `LH_show_object_list` (object_type=VCLUSTERS) to confirm available virtual clusters; sync tasks require a SYNC-type VCluster
- - Warehouse loading is the most common scenario; lake loading is mainly for raw-data staging or cross-system sharing
-
- ---
-
- ## cz-cli Alternative Path
-
- > Use this section only when cz-cli is available and MCP is not.
- > This Skill is a routing entry point; the core cz-cli logic lives in each specialized Skill's "cz-cli Alternative Path" section.
-
- ### Routing
-
- When MCP is unavailable, every specialized Skill provides a cz-cli alternative path:
-
- | Data source | Recommended method | Skill's cz-cli path |
- |-------------|--------------------|---------------------|
- | Kafka | PIPE continuous ingestion | `clickzetta-kafka-ingest-pipeline` → cz-cli Alternative Path |
- | Object storage (OSS/S3/COS) | PIPE continuous ingestion | `clickzetta-oss-ingest-pipeline` → cz-cli Alternative Path |
- | MySQL/PostgreSQL/SQL Server (real-time, single table) | Studio real-time sync | `clickzetta-realtime-sync-pipeline` → cz-cli Alternative Path |
- | MySQL/PostgreSQL/SQL Server (real-time, multi-table/whole DB) | Studio multi-table real-time sync | `clickzetta-cdc-sync-pipeline` → cz-cli Alternative Path |
- | MySQL/PostgreSQL/SQL Server (offline batch) | Studio batch sync | `clickzetta-batch-sync-pipeline` → cz-cli Alternative Path |
-
- ### Simple cases executed directly (cz-cli version)
-
- For simple cases that need no specialized Skill, the cz-cli agent can do the work directly:
-
- ```bash
- # SQL INSERT (small data volumes)
- cz-cli agent run "insert into <schema_name>.<table_name>: <col1>=<val1>, <col2>=<val2>" \
-   --format a2a --dangerously-skip-permissions
-
- # Quick COPY INTO import (from a Volume)
- cz-cli agent run "load CSV data (with header) from Volume <volume_name> into table <schema_name>.<table_name>" \
-   --format a2a --dangerously-skip-permissions
- ```
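The removed skill's decision matrix survives in the specialized skills it routed to; for reference, its routing logic reduces to a small lookup. A sketch with illustrative source labels: the skill names are real, but the function itself is not part of any package and the parameterization (a boolean for real-time vs. batch) is our simplification.

```python
def recommend_skill(source: str, realtime: bool, multi_table: bool = False) -> str:
    """Map (data source, latency, scope) to a pipeline skill, per the removed matrix."""
    if source == "kafka":
        # Single topic → PIPE ingestion; multiple topics → Studio real-time sync.
        return ("clickzetta-realtime-sync-pipeline" if multi_table
                else "clickzetta-kafka-ingest-pipeline")
    if source in {"oss", "s3", "cos"}:
        # Continuously arriving files → PIPE; one-off batches → COPY INTO import.
        return ("clickzetta-oss-ingest-pipeline" if realtime
                else "clickzetta-file-import-pipeline")
    if source in {"mysql", "postgresql", "sqlserver"}:
        if realtime:  # real-time CDC
            return ("clickzetta-cdc-sync-pipeline" if multi_table
                    else "clickzetta-realtime-sync-pipeline")
        return "clickzetta-batch-sync-pipeline"
    if source in {"file", "url"}:
        return "clickzetta-file-import-pipeline"
    raise ValueError(f"no routing rule for source: {source!r}")

print(recommend_skill("mysql", realtime=True, multi_table=True))
# clickzetta-cdc-sync-pipeline
```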
package/bin/skills/clickzetta-data-ingest-pipeline/eval_cases.jsonl REMOVED
@@ -1,5 +0,0 @@
- {"case_id":"001","type":"should_call","user_input":"我想把数据导入 Lakehouse,但不确定用哪种方式","expected_skill":"clickzetta-data-ingest-pipeline","expected_output_contains":["数据源","实时","批量"]}
- {"case_id":"002","type":"should_call","user_input":"数据入仓有哪些方案?怎么选择?","expected_skill":"clickzetta-data-ingest-pipeline","expected_output_contains":["Kafka","对象存储","MySQL"]}
- {"case_id":"003","type":"should_call","user_input":"我有 MySQL 和 Kafka 两个数据源要导入 Lakehouse,分别用什么方式?","expected_skill":"clickzetta-data-ingest-pipeline","expected_output_contains":["CDC"]}
- {"case_id":"004","type":"should_call","user_input":"数据导入方案怎么选?实时和离线有什么区别?","expected_skill":"clickzetta-data-ingest-pipeline","expected_output_contains":["实时","离线","延迟"]}
- {"case_id":"005","type":"should_call","user_input":"ingest data into ClickZetta Lakehouse, what options do I have?","expected_skill":"clickzetta-data-ingest-pipeline","expected_output_contains":["Kafka","OSS","SDK"]}