@bgicli/bgicli 2.2.0 → 2.2.1
This diff shows the content of publicly released package versions as published to a supported registry. It is provided for informational purposes only and reflects the changes between the two versions as they appear in the public registry.
- package/README.md +152 -74
- package/dist/bgi.js +100 -1
- package/package.json +1 -1
package/README.md
CHANGED

@@ -1,12 +1,13 @@
 # BGI CLI
 
-**BGI CLI**
+**BGI CLI** is an AI terminal tool for biology researchers in China: ready to use out of the box, no extra configuration needed.
 
-- ✅
-- ✅
-- ✅
-- ✅
-- ✅
+- ✅ **Works out of the box** — just `npm install -g @bgicli/bgicli`
+- ✅ **889 built-in skills** — 21 bioinformatics workflows + 868 OpenClaw medical skills, installed automatically
+- ✅ **Smart skill routing** — describe a task and the matching skill activates automatically, no manual search
+- ✅ **Chinese AI providers** — Bailian (DashScope) aggregation: 20+ models including Qwen3.5, DeepSeek, Kimi, and MiniMax
+- ✅ **Real tool calls** — runs bash, reads and writes files, executes R/Python scripts
+- ✅ **Intranet support** — can connect to privately deployed in-house LLMs
 
 ---
 
@@ -14,96 +15,178 @@
 
 ```bash
 # Requires Node.js 18+
-npm install -g @bgicli/bgicli
+npm install -g @bgicli/bgicli --registry https://registry.npmjs.org
+```
 
-
-
-
-
-npm run build
-npm link
+On first run, the workflows and skill library (~16 MB) are initialized automatically; no extra steps are needed.
+
+```bash
+bgi
 ```
 
+---
+
+## Uninstall
+
+```bash
+# Uninstall the npm package
+npm uninstall -g @bgicli/bgicli
+
+# Remove local data (config, workflows, skill library)
+# Linux / macOS
+rm -rf ~/.bgicli
+
+# Windows PowerShell
+Remove-Item -Recurse -Force "$env:USERPROFILE\.bgicli"
+```
+
+---
+
 ## Quick Start
 
 ```bash
-bgi
+bgi          # Start
+/connect     # Configure an API key on first run
+/cat         # Browse the skill category catalog
+/sk deseq2   # Search for and activate the DESeq2 workflow
+/help        # Show all commands
 ```
 
-
+On first run you are prompted to configure a Bailian (DashScope) API key:
+- Get one at [bailian.console.aliyun.com](https://bailian.console.aliyun.com/) → API Key management
 
 ---
 
 ## Supported AI Providers
 
-
-
-
-
-
-
+### Bailian · Alibaba Cloud (DashScope) — default
+Bailian provides unified access to several mainstream Chinese models:
+
+| Model | Command |
+|------|------|
+| Qwen3.5-plus (default) | `/model qwen3.5-plus` |
+| Qwen3-235B | `/model qwen3-235b-a22b` |
+| DeepSeek-R1 | `/model deepseek-r1` |
+| DeepSeek-V3 | `/model deepseek-v3` |
+| Kimi-K2.5 | `/model kimi-k2.5` |
+| MiniMax-M2.5 | `/model MiniMax-M2.5` |
+| QwQ-Plus (reasoning) | `/model qwq-plus` |
+
+Get an API key: [bailian.console.aliyun.com](https://bailian.console.aliyun.com/)
+
+### Private intranet deployment
+```bash
+/provider intranet   # Switch to the intranet Qwen3-235B (no key needed)
+```
+
+### Custom OpenAI-compatible services
+```bash
+/connect custom      # Point at any vLLM / Ollama / FastChat endpoint
+```
 
 ---
 
 ## Command Reference
 
+### Providers / models
 | Command | Description |
 |------|------|
-| `/provider <name>` |
+| `/provider <name>` | Switch provider (`bailian` / `intranet` / `custom`) |
 | `/model <name>` | Switch model |
-| `/models` |
+| `/models` | List all models available from the current provider |
 | `/providers` | List all providers |
 | `/connect [provider]` | Configure an API key |
 | `/status` | Show the current configuration |
-| `/clear` | Clear conversation history |
-| `/help` | Show help |
-| `exit` / `quit` | Quit |
 
-
-
-
-
-
+### Skills and workflows
+| Command | Description |
+|------|------|
+| `/cat` | Browse the skill catalog by domain (11 domains) |
+| `/sk` | List all skills (workflows + OpenClaw Medical) |
+| `/sk <keyword>` | Search for and activate a skill (e.g. `/sk deseq2`, `/sk alphafold`) |
+| `/wf` | Alias for `/sk` |
 
-
-- `bulk-rnaseq-counts-to-de-deseq2` — DESeq2 differential expression analysis
-- `bulk-omics-clustering` — sample/feature clustering
-- `scrnaseq-scanpy-core-analysis` — single-cell analysis (Scanpy/Python)
-- `scrnaseq-seurat-core-analysis` — single-cell analysis (Seurat/R)
-- `spatial-transcriptomics` — spatial transcriptomics
-- `coexpression-network` — co-expression networks (WGCNA)
-- `functional-enrichment-from-degs` — functional enrichment (GO/KEGG/GSEA)
-- `grn-pyscenic` — gene regulatory networks (pySCENIC)
+> **Smart routing**: just describe your task and BGI CLI identifies and activates the matching skill automatically.
 
-###
-
-
-
-
-
+### Conversation management
+| Command | Description |
+|------|------|
+| `/clear` | Clear conversation history (resets activated skills) |
+| `/history` | Show conversation statistics (turns / token estimate) |
+| `/save [filename]` | Save the conversation as a Markdown file |
+| `/think [on\|off]` | Toggle thinking mode (Qwen3 `/think` prefix) |
 
-###
-
+### Files and directories
+| Command | Description |
+|------|------|
+| `/cd <path>` | Change the working directory |
+| `/cwd` | Show the current working directory |
+| `/tools` | List the tools the AI can call |
+| `@path` | Embed a file's contents in a message (e.g. `@data.csv what's in here?`) |
 
-###
-
-
+### Other
+| Command | Description |
+|------|------|
+| `/help` | Show help |
+| `exit` / `quit` / `q` | Quit |
 
 ---
 
-##
+## Built-in Skill Library
+
+### Bioinformatics workflows (21)
+
+#### Transcriptomics
+| ID | Description |
+|----|------|
+| `bulk-rnaseq-counts-to-de-deseq2` | DESeq2 differential expression analysis |
+| `bulk-omics-clustering` | Sample/feature clustering (K-Means / HDBSCAN) |
+| `scrnaseq-scanpy-core-analysis` | Single-cell RNA-seq (Scanpy/Python) |
+| `scrnaseq-seurat-core-analysis` | Single-cell RNA-seq (Seurat/R) |
+| `spatial-transcriptomics` | Spatial transcriptomics (Visium) |
+| `coexpression-network` | Co-expression networks (WGCNA) |
+| `functional-enrichment-from-degs` | Functional enrichment (GO / KEGG / GSEA) |
+| `grn-pyscenic` | Gene regulatory networks (pySCENIC) |
+
+#### Genomics
+| ID | Description |
+|----|------|
+| `genetic-variant-annotation` | VCF variant annotation (VEP / ANNOVAR) |
+| `gwas-to-function-twas` | GWAS → TWAS causal genes |
+| `mendelian-randomization-twosamplemr` | Mendelian randomization |
+| `polygenic-risk-score-prs-catalog` | Polygenic risk scores (PRS) |
+| `pooled-crispr-screens` | Pooled CRISPR screens (MAGeCK / BAGEL2) |
+
+#### Epigenomics
+| ID | Description |
+|----|------|
+| `chip-atlas-peak-enrichment` | ChIP-seq peak enrichment |
+| `chip-atlas-diff-analysis` | Differential binding analysis |
+| `chip-atlas-target-genes` | Transcription factor target gene identification |
+
+#### Clinical & epidemiology
+| ID | Description |
+|----|------|
+| `clinicaltrials-landscape` | Clinical trial landscape analysis |
+| `literature-preclinical` | Systematic extraction from preclinical literature |
+| `experimental-design-statistics` | Experimental design and statistical testing |
+| `lasso-biomarker-panel` | LASSO biomarker panel selection |
+| `pcr-primer-design` | PCR/qPCR primer design |
+
+### OpenClaw Medical Skills (868)
+
+Covers structural biology, single-cell analysis, drug discovery, antibody design, literature search, and more; use `/cat` to browse the catalog.
 
-
-
-```bash
-# Copy from the bgicli-opencode directory (if you cloned the old repo)
-cp -r /path/to/old/workflows ~/.bgicli/workflows/
-```
+---
 
-
+## Install from Source
 
 ```bash
-
+git clone https://github.com/zja2004/BGI-CLI.git
+cd BGI-CLI
+npm install
+npm run build
+npm link
 ```
 
 ---
@@ -111,22 +194,17 @@ bash install.sh
 ## Architecture
 
 ```
-bgi
-├── src/index.ts
-├── src/chat.ts
-├── src/tools.ts
-├── src/
-├── src/
-
+bgi
+├── src/index.ts — CLI entry point, command handling, smart routing
+├── src/chat.ts — streaming chat engine (tool-call loop)
+├── src/tools.ts — tool implementations (bash / read_file / write_file, etc.)
+├── src/skillRouter.ts — keyword routing table (auto-matches 35 core skills)
+├── src/prompt.ts — bioinformatics system prompt
+├── src/providers.ts — Chinese AI provider configuration
+├── src/config.ts — configuration management (~/.bgicli/config.json)
+└── data/ — bundled data (workflows + skills + Python tools)
 ```
 
-**Tool-call flow**:
-1. The user asks a question → it is sent to the LLM (with tool definitions)
-2. The LLM decides to call a tool (bash/read_file, etc.)
-3. BGI CLI executes the tool and returns the result to the LLM
-4. The LLM continues its answer based on the result
-5. Loop until the LLM finishes answering
-
 ---
 
 ## License
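The tool-call flow the README describes (question goes to the LLM with tool definitions, the LLM requests tool calls, BGI CLI executes them and feeds results back, looping until the model answers) can be sketched as below. Note that `llm` and `runTool` are hypothetical stand-ins, not BGI CLI's actual internals:

```javascript
// Minimal sketch of an LLM tool-call loop with OpenAI-style replies.
// `llm` and `runTool` are hypothetical stand-ins for the chat client
// and the tool implementations (bash, read_file, ...).
async function toolCallLoop(llm, runTool, messages) {
  for (;;) {
    const reply = await llm(messages); // model sees the tool definitions
    if (!reply.toolCalls || reply.toolCalls.length === 0) {
      return reply.content; // no further tool calls: final answer
    }
    // record the assistant turn, then execute each requested tool
    messages.push({ role: "assistant", content: reply.content ?? "", tool_calls: reply.toolCalls });
    for (const call of reply.toolCalls) {
      const result = await runTool(call);
      messages.push({ role: "tool", tool_call_id: call.id, content: result });
    }
  }
}
```

The loop terminates only when the model returns a reply with no tool calls, which matches step 5 of the flow the README removed.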
package/dist/bgi.js
CHANGED

@@ -13930,6 +13930,33 @@ async function streamOnce(client, messages, model) {
     finishReason
   };
 }
+async function compactMessages(messages, config) {
+  const prov = PROVIDERS[config.provider];
+  if (!prov) throw new Error(`Unknown provider: ${config.provider}`);
+  const baseURL = config.provider === "custom" ? config.customUrl : prov.baseURL;
+  const apiKey = getApiKey(config);
+  const client = new openai_default({ apiKey: apiKey || "none", baseURL });
+  const transcript = messages.filter((m2) => m2.role === "user" || m2.role === "assistant").map((m2) => `[${m2.role === "user" ? "\u7528\u6237" : "AI"}]: ${String(m2.content ?? "").slice(0, 2e3)}`).join("\n\n");
+  const resp = await client.chat.completions.create({
+    model: config.model,
+    messages: [
+      {
+        role: "system",
+        content: "\u4F60\u662F\u4E00\u4E2A\u5BF9\u8BDD\u6458\u8981\u52A9\u624B\u3002\u8BF7\u5C06\u4EE5\u4E0B\u5BF9\u8BDD\u5386\u53F2\u538B\u7F29\u4E3A\u7B80\u6D01\u7684\u4E2D\u6587\u6458\u8981\uFF0C\u4FDD\u7559\u6240\u6709\u5173\u952E\u6280\u672F\u4FE1\u606F\uFF1A\u6587\u4EF6\u8DEF\u5F84\u3001\u547D\u4EE4\u3001\u5206\u6790\u7ED3\u679C\u3001\u7528\u6237\u51B3\u7B56\u3001\u5DF2\u6FC0\u6D3B\u7684\u5DE5\u4F5C\u6D41/\u6280\u80FD\u3002\u6458\u8981\u5E94\u8BA9\u5BF9\u8BDD\u80FD\u591F\u65E0\u7F1D\u7EE7\u7EED\u3002"
+      },
+      {
+        role: "user",
+        content: `\u8BF7\u538B\u7F29\u4EE5\u4E0B\u5BF9\u8BDD\u5386\u53F2\uFF1A
+
+${transcript}
+
+\u8F93\u51FA\u683C\u5F0F\uFF1A\u76F4\u63A5\u8F93\u51FA\u6458\u8981\u6587\u672C\uFF0C\u4E0D\u9700\u8981\u4EFB\u4F55\u524D\u7F00\u3002`
+      }
+    ],
+    stream: false
+  });
+  return resp.choices[0]?.message?.content ?? "\uFF08\u5BF9\u8BDD\u5386\u53F2\u5DF2\u538B\u7F29\uFF09";
+}
 function getApiKey(cfg) {
   const prov = PROVIDERS[cfg.provider];
   if (prov?.envKey && process.env[prov.envKey]) return process.env[prov.envKey];
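The transcript that `compactMessages` flattens before summarizing (user/assistant turns only, each truncated to 2,000 characters, speaker-labeled, joined by blank lines) can be restated as a standalone helper for readability; the logic below mirrors the one-liner in the hunk above:

```javascript
// Readable restatement of the transcript construction in compactMessages:
// drop non-conversation roles, truncate each turn to 2000 chars, label the
// speaker ("用户" = user), and join turns with blank lines.
function buildTranscript(messages) {
  return messages
    .filter((m) => m.role === "user" || m.role === "assistant")
    .map((m) => `[${m.role === "user" ? "用户" : "AI"}]: ${String(m.content ?? "").slice(0, 2000)}`)
    .join("\n\n");
}

console.log(buildTranscript([
  { role: "system", content: "ignored" },
  { role: "user", content: "run DESeq2" },
  { role: "assistant", content: "ok" },
]));
// prints "[用户]: run DESeq2" and "[AI]: ok" separated by a blank line
```

The per-turn truncation bounds the prompt the summarizer sees, so even a very long history produces a fixed-cost summarization request.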
@@ -14707,6 +14734,7 @@ function printHelp() {
   console.log(source_default.bold.cyan("\u2500\u2500\u2500 \u5BF9\u8BDD\u7BA1\u7406 \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500"));
   console.log(` ${source_default.cyan("/clear")} \u6E05\u7A7A\u5BF9\u8BDD\u5386\u53F2`);
   console.log(` ${source_default.cyan("/history")} \u67E5\u770B\u5BF9\u8BDD\u7EDF\u8BA1\uFF08\u8F6E\u6B21 / Token \u4F30\u7B97\uFF09`);
+  console.log(` ${source_default.cyan("/compact")} \u7ACB\u5373\u538B\u7F29\u5BF9\u8BDD\u5386\u53F2\uFF08\u8D85 60k token \u81EA\u52A8\u89E6\u53D1\uFF09`);
   console.log(` ${source_default.cyan("/save")} [\u6587\u4EF6\u540D] \u4FDD\u5B58\u5BF9\u8BDD\u4E3A Markdown \u6587\u4EF6`);
   console.log(` ${source_default.cyan("/think")} [on|off] \u5207\u6362\u601D\u8003\u6A21\u5F0F (Qwen3 /think \u524D\u7F00)`);
   console.log();
@@ -14934,6 +14962,45 @@ ${msg.content}
   (0, import_fs4.writeFileSync)(outPath, lines.join("\n"), "utf8");
   console.log(source_default.green(`\u2713 \u5BF9\u8BDD\u5DF2\u4FDD\u5B58: ${outPath}`));
 }
+var COMPACT_TOKEN_THRESHOLD = 6e4;
+var COMPACT_KEEP_RECENT = 8;
+function estimateTokens(messages) {
+  const chars = messages.reduce((n2, m2) => n2 + String(m2.content ?? "").length, 0);
+  return Math.round(chars / 3.5);
+}
+async function maybeCompact(history, cfg) {
+  const tokens = estimateTokens(history);
+  if (tokens < COMPACT_TOKEN_THRESHOLD) return history;
+  const recent = history.slice(-COMPACT_KEEP_RECENT);
+  const old = history.slice(0, -COMPACT_KEEP_RECENT);
+  if (old.length === 0) return history;
+  process.stdout.write(source_default.dim(`
+[\u4E0A\u4E0B\u6587\u5DF2\u8FBE ~${Math.round(tokens / 1e3)}k tokens\uFF0C\u6B63\u5728\u81EA\u52A8\u538B\u7F29...]
+`));
+  try {
+    const summary = await compactMessages(old, cfg);
+    const compacted = [
+      {
+        role: "user",
+        content: `[\u5BF9\u8BDD\u5386\u53F2\u6458\u8981 \u2014 \u8BF7\u5728\u6B64\u57FA\u7840\u4E0A\u7EE7\u7EED]
+
+${summary}`
+      },
+      {
+        role: "assistant",
+        content: "\u2713 \u5DF2\u7406\u89E3\u4E4B\u524D\u7684\u5BF9\u8BDD\u6458\u8981\uFF0C\u8BF7\u7EE7\u7EED\u3002"
+      },
+      ...recent
+    ];
+    const saved = estimateTokens(history) - estimateTokens(compacted);
+    process.stdout.write(source_default.dim(`[\u538B\u7F29\u5B8C\u6210\uFF0C\u91CA\u653E\u7EA6 ~${Math.round(saved / 1e3)}k tokens]
+
+`));
+    return compacted;
+  } catch {
+    return history.slice(-COMPACT_KEEP_RECENT * 2);
+  }
+}
 async function handleCommand(input, rl, history, thinkMode) {
   const [cmd, ...rest] = input.slice(1).trim().split(/\s+/);
   const arg = rest.join(" ");
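The `estimateTokens` heuristic in the hunk above gates compaction: total characters across all message contents divided by 3.5, a rough chars-per-token average for mixed Chinese/English text. At the 60,000-token threshold that corresponds to roughly 210k characters of history. In isolation:

```javascript
// The chars/3.5 heuristic from the diff above, in isolation. Real
// tokenizers vary by model, so this is only a cheap trigger for
// compaction, not an exact count.
function estimateTokens(messages) {
  const chars = messages.reduce((n, m) => n + String(m.content ?? "").length, 0);
  return Math.round(chars / 3.5);
}

const history = [
  { role: "user", content: "a".repeat(140) },
  { role: "assistant", content: "b".repeat(210) },
];
console.log(estimateTokens(history)); // 350 chars / 3.5 → 100
```

Because `m.content ?? ""` coalesces before `String(...)`, null contents (e.g. assistant turns that carried only tool calls) count as zero characters rather than the string "null".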
@@ -15037,6 +15104,37 @@ async function handleCommand(input, rl, history, thinkMode) {
       saveConversation(history, arg || void 0);
       break;
     }
+    case "compact": {
+      const tokens = estimateTokens(history);
+      if (history.length < 4) {
+        console.log(source_default.dim("\u5BF9\u8BDD\u592A\u77ED\uFF0C\u65E0\u9700\u538B\u7F29"));
+        break;
+      }
+      console.log(source_default.dim(`\u5F53\u524D\u5BF9\u8BDD\u7EA6 ~${Math.round(tokens / 1e3)}k tokens\uFF0C\u6B63\u5728\u538B\u7F29...`));
+      try {
+        const currentCfg = loadConfig();
+        const recent = history.slice(-COMPACT_KEEP_RECENT);
+        const old = history.slice(0, -COMPACT_KEEP_RECENT);
+        if (old.length === 0) {
+          console.log(source_default.dim("\u8FD1\u671F\u6D88\u606F\u4E0D\u8DB3\uFF0C\u65E0\u9700\u538B\u7F29"));
+          break;
+        }
+        const summary = await compactMessages(old, currentCfg);
+        const newHistory = [
+          { role: "user", content: `[\u5BF9\u8BDD\u5386\u53F2\u6458\u8981 \u2014 \u8BF7\u5728\u6B64\u57FA\u7840\u4E0A\u7EE7\u7EED]
+
+${summary}` },
+          { role: "assistant", content: "\u2713 \u5DF2\u7406\u89E3\u4E4B\u524D\u7684\u5BF9\u8BDD\u6458\u8981\uFF0C\u8BF7\u7EE7\u7EED\u3002" },
+          ...recent
+        ];
+        const after = estimateTokens(newHistory);
+        console.log(source_default.green(`\u2713 \u538B\u7F29\u5B8C\u6210: ${history.length} \u6761\u6D88\u606F \u2192 ${newHistory.length} \u6761\uFF0C~${Math.round(after / 1e3)}k tokens`));
+        return { injectHistory: newHistory };
+      } catch (err) {
+        console.error(source_default.red(`\u538B\u7F29\u5931\u8D25: ${err instanceof Error ? err.message : String(err)}`));
+      }
+      break;
+    }
     case "think": {
       const val = arg.toLowerCase();
       if (val === "on" || val === "1" || val === "true") {
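Both the automatic path (`maybeCompact`) and the manual `/compact` command rebuild history in the same shape: a two-message summary preamble followed by the last `COMPACT_KEEP_RECENT` messages verbatim. A hypothetical helper (not present in the shipped code, and with English stand-ins for the Chinese strings) factors that shape out:

```javascript
// Hypothetical helper factoring out the compacted-history shape used by
// both maybeCompact and the /compact command: summary preamble + recent tail.
const COMPACT_KEEP_RECENT = 8;

function rebuildHistory(history, summary) {
  const recent = history.slice(-COMPACT_KEEP_RECENT);
  return [
    // English equivalents of the strings in the diff above
    { role: "user", content: `[Conversation summary - continue from here]\n\n${summary}` },
    { role: "assistant", content: "Understood the earlier summary; please continue." },
    ...recent,
  ];
}

const long = Array.from({ length: 20 }, (_, i) => (
  { role: i % 2 ? "assistant" : "user", content: `turn ${i}` }
));
console.log(rebuildHistory(long, "prior analysis of counts.csv").length); // 2 + 8 = 10
```

Pairing the summary with a short acknowledging assistant turn keeps the history alternating user/assistant, which some chat APIs require.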
@@ -15182,6 +15280,7 @@ async function main() {
         history = [];
         injectedSkills.clear();
       }
+      if (result.injectHistory) history = result.injectHistory;
       if (result.thinkMode !== void 0) thinkMode = result.thinkMode;
       continue;
     }
@@ -15210,7 +15309,7 @@ ${expanded}` : expanded;
       const currentCfg = loadConfig();
       const reply = await chat(history, currentCfg, systemPrompt);
       history.push({ role: "assistant", content: reply });
-
+      history = await maybeCompact(history, currentCfg);
     } catch (err) {
       const msg = err instanceof Error ? err.message : String(err);
       console.error(source_default.red(`