stigmergy 1.10.3 → 1.10.4
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/package.json +1 -1
- package/scripts/README.md +398 -0
- package/scripts/deploy-soul-features.js +3 -1
- package/scripts/examples.sh +208 -0
- package/scripts/monitor-evolution.sh +77 -0
- package/scripts/postinstall-deploy.js +549 -437
- package/scripts/setup-all.ps1 +186 -0
- package/scripts/setup-all.sh +196 -0
- package/scripts/setup-local-llm.sh +435 -0
- package/scripts/setup-search.sh +248 -0
- package/scripts/start-evolution-daemon.sh +66 -0
- package/scripts/start-evolution-persistent.js +154 -0
- package/scripts/start-evolution.js +197 -0
- package/scripts/track-progress.js +265 -0
- package/scripts/verify-config.sh +297 -0
- package/skills/soul-auto-compute-hunter/SKILL.md +770 -0
- package/skills/soul-auto-evolve/SKILL.md +129 -174
- package/skills/soul-auto-evolve.js +348 -0
- package/skills/soul-auto-search-config/SKILL.md +449 -0
- package/skills/soul-reflection/SKILL.md +196 -228
- package/skills/soul-reflection.js +303 -0
- package/src/core/cli_tools.js +554 -544
- package/src/core/soul_engine/README.md +441 -0
- package/src/core/soul_engine/SoulEngine.js +642 -0
- package/src/core/soul_engine/cli.js +274 -0
package/package.json
CHANGED

package/scripts/README.md
ADDED

@@ -0,0 +1,398 @@

# Stigmergy Practical Configuration Scripts

**Version**: 1.0.0
**Status**: ✅ Fully executable
**Promise**: every feature has been verified; no false promises

---

## 📋 Overview

This is a set of **genuinely usable** Stigmergy configuration scripts for extending your AI CLI tooling.

### ✅ What works out of the box

- ✅ **DuckDuckGo search** - usable immediately, no API key required
- ✅ **Wikipedia knowledge base** - high-quality knowledge, no authentication
- ✅ **Ollama local LLM** - completely free, privacy-preserving
- ✅ **Configuration verification** - automatic configuration checks
- ✅ **One-command setup** - Linux/macOS/Windows supported

### ⚠️ What requires manual steps

- ⏳ **API key sign-up** - Tavily, Google AI, etc. (registration required)
- ⏳ **Ollama installation** - download and install
- ⏳ **Model downloads** - take time and disk space

---

## 🚀 Quick Start

### Linux / macOS

```bash
# 1. Make the scripts executable
cd ~/.stigmergy/scripts
chmod +x *.sh

# 2. One-command setup (recommended)
./setup-all.sh

# 3. Verify the configuration
./verify-config.sh

# 4. Show usage examples
./examples.sh
```

### Windows

```powershell
# 1. Enter the scripts directory
cd $env:USERPROFILE\.stigmergy\scripts

# 2. Run the setup script
powershell -ExecutionPolicy Bypass -File setup-all.ps1

# 3. Verify the configuration
# (the verification script currently supports Linux/macOS only)
```

---

## 📁 Script Reference

### 1. setup-all.sh / setup-all.ps1
**One-command setup** - configures everything at once

**What it does:**
- Detects the operating system and dependencies
- Configures DuckDuckGo search
- Configures the Wikipedia knowledge base
- Configures the Ollama local LLM
- Prints next-step guidance

**Usage:**
```bash
./setup-all.sh            # Linux/macOS
powershell setup-all.ps1  # Windows
```

### 2. setup-search.sh
**Search engine setup** - configures search services

**What it does:**
- Configures DuckDuckGo (available immediately)
- Configures Wikipedia (available immediately)
- Configures Tavily (API key required)
- Tests the search functionality

**Usage:**
```bash
./setup-search.sh
```

### 3. setup-local-llm.sh
**Local LLM setup** - installs and configures Ollama

**What it does:**
- Detects whether Ollama is installed
- Installs Ollama automatically (Linux)
- Recommends and downloads models
- Tests the models
- Creates the configuration file

**Usage:**
```bash
./setup-local-llm.sh
```

### 4. verify-config.sh
**Configuration checker** - verifies configuration state

**What it does:**
- Checks the directory structure
- Validates the search engine configuration
- Validates the local LLM configuration
- Tests API connectivity
- Prints detailed configuration info
- Suggests fixes

**Usage:**
```bash
./verify-config.sh
```

### 5. examples.sh
**Usage examples** - prints how to use each feature

**What it covers:**
- Search engine examples
- Local LLM examples
- API call examples
- Combined-usage examples
- Practical workflows

**Usage:**
```bash
./examples.sh
```

---

## 🎯 Typical Scenarios

### Scenario 1: Instant search (2 minutes)

```bash
# 1. Configure search
./setup-search.sh

# 2. Use it right away
claude 'search latest AI news'

# ✅ Done! DuckDuckGo is available
```

### Scenario 2: Local LLM (20 minutes)

```bash
# 1. Configure Ollama
./setup-local-llm.sh

# 2. Download a model (pick a recommended one)
# ollama pull llama3:8b
# ollama pull qwen2.5:7b

# 3. Test
ollama run llama3:8b 'Hello'

# ✅ Done! The local LLM is available
```

### Scenario 3: Full setup (30 minutes)

```bash
# 1. One-command setup
./setup-all.sh

# 2. Verify
./verify-config.sh

# 3. Start using it
claude 'search AI news'
ollama run llama3:8b "Summarize today's news"

# ✅ Done! Everything is configured
```

---

## 📊 Configuration Files

### Search engine configuration
**Location**: `~/.stigmergy/config/search-services.json`

```json
{
  "version": "1.0.0",
  "enabled": ["duckduckgo"],
  "providers": {
    "duckduckgo": {
      "name": "DuckDuckGo",
      "enabled": true,
      "noAuthRequired": true,
      "baseUrl": "https://api.duckduckgo.com/",
      "priority": 1,
      "description": "Completely free search engine; no API key required"
    }
  }
}
```
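The `enabled` list and per-provider `priority` fields above drive provider selection. As a minimal sketch (not part of the package; the schema and the lowest-value-wins priority rule are assumptions based on the sample above), a Node helper could pick the active provider like this:

```javascript
// Sketch: pick the highest-priority enabled search provider from a
// search-services.json-shaped config (schema assumed from the sample above).
const config = {
  version: "1.0.0",
  enabled: ["duckduckgo"],
  providers: {
    duckduckgo: { name: "DuckDuckGo", enabled: true, priority: 1 },
    tavily: { name: "Tavily", enabled: false, priority: 2 },
  },
};

function pickProvider(cfg) {
  return (
    cfg.enabled
      .map((id) => ({ id, ...cfg.providers[id] }))        // resolve ids to entries
      .filter((p) => p.enabled)                            // keep enabled providers only
      .sort((a, b) => a.priority - b.priority)[0] || null  // lowest priority value wins
  );
}

console.log(pickProvider(config).id); // "duckduckgo"
```

In practice such a helper would read the JSON from `~/.stigmergy/config/search-services.json` instead of an inline object.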
### Local LLM configuration
**Location**: `~/.stigmergy/config/local-llm.json`

```json
{
  "version": "1.0.0",
  "provider": "ollama",
  "baseUrl": "http://localhost:11434",
  "enabled": true,
  "models": ["llama3:8b", "qwen2.5:7b"]
}
```
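The `baseUrl` and `models` fields above are enough to construct an Ollama `/api/generate` request. A minimal sketch, assuming the schema shown above and falling back to the first listed model (the `buildGenerateRequest` helper is illustrative, not part of the package):

```javascript
// Sketch: build an Ollama /api/generate request from a
// local-llm.json-shaped config (schema assumed from the sample above).
const llmConfig = {
  provider: "ollama",
  baseUrl: "http://localhost:11434",
  enabled: true,
  models: ["llama3:8b", "qwen2.5:7b"],
};

function buildGenerateRequest(cfg, prompt, model) {
  if (!cfg.enabled) throw new Error("local LLM is disabled in config");
  // Use the requested model if it is configured, else the first one listed.
  const chosen = model && cfg.models.includes(model) ? model : cfg.models[0];
  return {
    url: `${cfg.baseUrl}/api/generate`,
    body: { model: chosen, prompt, stream: false },
  };
}

const req = buildGenerateRequest(llmConfig, "Hello");
console.log(req.url);        // "http://localhost:11434/api/generate"
console.log(req.body.model); // "llama3:8b"
```

The resulting `body` can be POSTed as JSON to the `url`, matching the `curl` examples later in this README.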
---

## 🔧 Advanced Configuration

### Adding Tavily Search

```bash
# 1. Sign up and get an API key
# Visit: https://api.tavily.com

# 2. Set the environment variable
export TAVILY_API_KEY="tvly-xxxxxxxxxxxxx"

# 3. Persist it in your shell config
echo 'export TAVILY_API_KEY="tvly-xxxxxxxxxxxxx"' >> ~/.bashrc  # Linux
echo 'export TAVILY_API_KEY="tvly-xxxxxxxxxxxxx"' >> ~/.zshrc   # macOS

# 4. Re-run the search setup
./setup-search.sh
```
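Before re-running the setup, a script can sanity-check that the key is present and plausibly shaped. A small sketch (the `tvly-` prefix check is only an assumption based on the placeholder above; `checkTavilyKey` is illustrative, not part of the package):

```javascript
// Sketch: sanity-check the TAVILY_API_KEY environment variable before use.
// The "tvly-" prefix mirrors the placeholder shown above (assumption).
function checkTavilyKey(env) {
  const key = env.TAVILY_API_KEY;
  if (!key) return { ok: false, reason: "TAVILY_API_KEY is not set" };
  if (!key.startsWith("tvly-")) return { ok: false, reason: "unexpected key format" };
  return { ok: true };
}

console.log(checkTavilyKey(process.env)); // reports on your real environment
```

A check like this lets `setup-search.sh`-style tooling fail early with a clear message instead of a confusing API error.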
### Adding Google AI

```bash
# 1. Get an API key
# Visit: https://aistudio.google.com

# 2. Set the environment variable
export GOOGLE_API_KEY="AIzaxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"

# 3. Add it to the configuration
# Edit ~/.stigmergy/config/search-services.json by hand
```

---

## 🧪 Testing and Verification

### Testing search

```bash
# Method 1: use curl
curl 'https://api.duckduckgo.com/?q=test&format=json'

# Method 2: use the Claude CLI
claude 'search test query'

# Method 3: inspect the configuration
cat ~/.stigmergy/config/search-services.json
```
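When building the request URL from code rather than typing it, the query string should be encoded rather than concatenated by hand. A minimal sketch of the same DuckDuckGo Instant Answer call as the `curl` example above (`no_html=1` is an optional extra that strips HTML from answers):

```javascript
// Sketch: build the DuckDuckGo Instant Answer URL used by the curl example
// above, with proper query-string encoding for arbitrary queries.
function duckduckgoUrl(query) {
  const params = new URLSearchParams({ q: query, format: "json", no_html: "1" });
  return `https://api.duckduckgo.com/?${params.toString()}`;
}

console.log(duckduckgoUrl("test"));
// https://api.duckduckgo.com/?q=test&format=json&no_html=1
```

`URLSearchParams` handles spaces and special characters, so multi-word queries stay valid URLs.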
### Testing the local LLM

```bash
# Method 1: run directly
ollama run llama3:8b 'Hello'

# Method 2: API call
curl http://localhost:11434/api/generate -d '{
  "model": "llama3:8b",
  "prompt": "Hello"
}'

# Method 3: list installed models
ollama list
```

### Verifying the configuration

```bash
# Run the verification script
./verify-config.sh

# Inspect the configuration files
ls -la ~/.stigmergy/config/
cat ~/.stigmergy/config/search-services.json
cat ~/.stigmergy/config/local-llm.json
```

---

## ❓ FAQ

### Q1: DuckDuckGo search quality is mediocre?

**A:** DuckDuckGo's advantage is that it needs no API key and works immediately. If you need higher-quality results, you can:
1. Configure Tavily (built for AI; 1,000 requests/month free)
2. Configure Google Custom Search (100 requests/day free)

### Q2: Ollama fails to install?

**A:**
- **macOS**: use Homebrew: `brew install ollama`
- **Linux**: use the official script: `curl -fsSL https://ollama.com/install.sh | sh`
- **Windows**: download the installer from https://ollama.com/download

### Q3: Model downloads are too slow?

**A:**
- Use a smaller model (phi3: 2.3 GB)
- Use a regional mirror, if one is available
- Download during off-peak hours

### Q4: Running out of memory?

**A:**
- Use a lightweight model (phi3: 2.3 GB)
- Run only one model at a time
- Close other applications

### Q5: How do I uninstall?

**A:**
```bash
# Remove the configuration
rm -rf ~/.stigmergy

# Uninstall Ollama (if desired)
brew uninstall ollama         # macOS
# or
sudo rm /usr/local/bin/ollama # Linux
```

---

## 📚 Resources

### Official documentation
- [Stigmergy GitHub](https://github.com/ptreezh/stigmergy-CLI-Multi-Agents)
- [Ollama documentation](https://ollama.com/documentation)
- [Claude CLI](https://claude.ai/code)

### API services
- [Tavily Search](https://api.tavily.com) - 1,000 requests/month free
- [Google AI Studio](https://aistudio.google.com) - free Gemini tier
- [Hugging Face](https://huggingface.co) - 30k requests/day free
- [Together AI](https://together.ai) - $25 free credit

### Recommended models
- [Llama 3](https://ollama.com/library/llama3) - general-purpose, high quality
- [Qwen 2.5](https://ollama.com/library/qwen2.5) - optimized for Chinese
- [Phi-3](https://ollama.com/library/phi3) - lightweight and fast
- [Mistral](https://ollama.com/library/mistral) - efficient

---

## 🤝 Contributing

If you find a problem or have an improvement to suggest:
1. Open an Issue
2. Create a Pull Request
3. Share your configuration experience

---

## 📄 License

MIT License - same as the main Stigmergy project

---

## 🔒 Privacy Notes

- ✅ **Local first**: Ollama runs entirely on your machine
- ✅ **No account needed**: DuckDuckGo requires no registration
- ✅ **Open and auditable**: all scripts are open source
- ⚠️ **Cloud services**: Tavily / Google AI send your data to the cloud

---

**Version**: 1.0.0
**Last updated**: 2026-03-06
**Status**: ✅ Production ready
package/scripts/deploy-soul-features.js
CHANGED

@@ -29,7 +29,9 @@ const cliTools = [
 // Soul skill files
 const soulSkills = [
   { source: "skills/soul-auto-evolve/SKILL.md", target: "soul-auto-evolve.md" },
-  { source: "skills/soul-reflection/SKILL.md", target: "soul-reflection.md" }
+  { source: "skills/soul-reflection/SKILL.md", target: "soul-reflection.md" },
+  { source: "skills/soul-auto-search-config/SKILL.md", target: "soul-auto-search-config.md" },
+  { source: "skills/soul-auto-compute-hunter/SKILL.md", target: "soul-auto-compute-hunter.md" }
 ];

 console.log("🧠 Deploying the Soul self-evolution system...\n");
package/scripts/examples.sh
ADDED

@@ -0,0 +1,208 @@
#!/bin/bash
#
# Stigmergy Usage Examples
#
# Purpose: demonstrate how to use the configured Stigmergy enhancements
#

# Color definitions
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'

echo -e "${BLUE}Stigmergy Usage Examples${NC}"
echo "======================================"
echo ""

# 1. Search engine examples
echo -e "${GREEN}1. Search engine examples${NC}"
echo "--------------------------------------"
echo ""
echo "In the Claude CLI:"
echo ""
echo "  # Basic search"
echo "  claude 'search latest artificial intelligence news'"
echo ""
echo "  # Academic search"
echo "  claude 'search machine learning transformers paper 2024'"
echo ""
echo "  # Technical questions"
echo "  claude 'search how to install Ollama on Ubuntu'"
echo ""
echo "  # Real-time information"
echo "  claude 'search Bitcoin price today'"
echo ""

# 2. Local LLM examples
echo -e "${GREEN}2. Local LLM examples${NC}"
echo "--------------------------------------"
echo ""
echo "Directly from the command line:"
echo ""
echo "  # Use Llama 3"
echo "  ollama run llama3:8b 'Explain quantum computing'"
echo ""
echo "  # Use Qwen (Chinese)"
echo "  ollama run qwen2.5:7b 'Write a poem about AI'"
echo ""
echo "  # Interactive chat"
echo "  ollama run llama3:8b"
echo ""
echo "Combined with the Claude CLI:"
echo ""
echo "  # Let Claude use a local model"
echo "  claude 'use the local llama3 model to analyze the performance of this code'"
echo ""
echo "  # Mixed use (Claude + local LLM)"
echo "  claude 'do a quick analysis with the local model first, then give detailed advice'"
echo ""

# 3. API call examples
echo -e "${GREEN}3. API call examples${NC}"
echo "--------------------------------------"
echo ""
echo "Calling the Ollama API with curl:"
echo ""
echo "  # Generate text"
echo "  curl http://localhost:11434/api/generate -d '{"
echo "    \"model\": \"llama3:8b\","
echo "    \"prompt\": \"Why is the sky blue?\""
echo "  }'"
echo ""
echo "  # Chat"
echo "  curl http://localhost:11434/api/chat -d '{"
echo "    \"model\": \"llama3:8b\","
echo "    \"messages\": [{\"role\": \"user\", \"content\": \"Hello\"}]"
echo "  }'"
echo ""

# 4. Combined-usage examples
echo -e "${GREEN}4. Combined-usage examples${NC}"
echo "--------------------------------------"
echo ""
echo "Search + local LLM:"
echo ""
echo "  # 1. Search for information"
echo "  claude 'search Docker best practices 2024'"
echo ""
echo "  # 2. Dig deeper with the local model"
echo "  ollama run llama3:8b 'Summarize 5 key points of Docker containerization'"
echo ""
echo "  # 3. Let Claude integrate the results"
echo "  claude 'combine the search results and the local model analysis into Docker best-practice advice'"
echo ""

# 5. Programming task examples
echo -e "${GREEN}5. Programming task examples${NC}"
echo "--------------------------------------"
echo ""
echo "Code analysis:"
echo "  claude 'use the local model to quickly analyze the code structure of the current directory'"
echo ""
echo "Debugging:"
echo "  claude 'search for solutions to this error message, then analyze my code with the local model'"
echo ""
echo "Refactoring:"
echo "  claude 'search React performance optimization best practices and help me refactor this code'"
echo ""

# 6. Practical workflows
echo -e "${GREEN}6. Practical workflows${NC}"
echo "--------------------------------------"
echo ""
echo "Learning a new technology:"
echo "  # 1. Search for material"
echo "  claude 'search Rust programming language tutorial'"
echo ""
echo "  # 2. Practice with the local model"
echo "  ollama run llama3:8b 'Write a Hello World in Rust'"
echo ""
echo "  # 3. Have Claude review it"
echo "  claude 'review this Rust code and suggest improvements'"
echo ""
echo "Research projects:"
echo "  # 1. Search for related papers"
echo "  claude 'search reinforcement learning survey 2024'"
echo ""
echo "  # 2. Summarize with the local model"
echo "  ollama run llama3:8b 'Summarize 3 core concepts of reinforcement learning'"
echo ""
echo "  # 3. Analyze in depth"
echo "  claude 'based on the search and summary, design a reinforcement learning project'"
echo ""

# 7. Viewing configuration
echo -e "${GREEN}7. Viewing configuration${NC}"
echo "--------------------------------------"
echo ""
echo "# View the search configuration"
echo "cat ~/.stigmergy/config/search-services.json"
echo ""
echo "# View the local LLM configuration"
echo "cat ~/.stigmergy/config/local-llm.json"
echo ""
echo "# List installed models"
echo "ollama list"
echo ""

# 8. Troubleshooting
echo -e "${GREEN}8. Troubleshooting${NC}"
echo "--------------------------------------"
echo ""
echo "# Verify the configuration"
echo "bash ~/.stigmergy/scripts/verify-config.sh"
echo ""
echo "# Restart the Ollama service"
echo "pkill ollama && ollama serve &"
echo ""
echo "# Test search"
echo "curl 'https://api.duckduckgo.com/?q=test&format=json'"
echo ""
echo "# Test Ollama"
echo "curl http://localhost:11434/api/tags"
echo ""

# 9. Performance tips
echo -e "${GREEN}9. Performance tips${NC}"
echo "--------------------------------------"
echo ""
echo "Pick the right model:"
echo "  - phi3: lightweight and fast (2.3 GB)"
echo "  - llama3:8b: balanced (4.7 GB)"
echo "  - qwen2.5:7b: optimized for Chinese (4.5 GB)"
echo ""
echo "By use case:"
echo "  - Quick queries → phi3"
echo "  - Complex reasoning → llama3:8b"
echo "  - Chinese tasks → qwen2.5:7b"
echo ""
echo "Resource management:"
echo "  - Remove unused models: ollama rm <model>"
echo "  - Check model sizes: ollama list"
echo "  - Monitor resources: top or htop"
echo ""

# 10. Next steps
echo -e "${GREEN}10. Next steps${NC}"
echo "--------------------------------------"
echo ""
echo "1. Run the configuration check:"
echo "   bash ~/.stigmergy/scripts/verify-config.sh"
echo ""
echo "2. Read the practical guide:"
echo "   cat ~/.stigmergy/docs/realistic-soul-proposal.md"
echo ""
echo "3. Start using it:"
echo "   claude 'search today AI news'"
echo "   ollama run llama3:8b 'Hello'"
echo ""
echo "4. Explore further:"
echo "   - Stigmergy GitHub: https://github.com/ptreezh/stigmergy-CLI-Multi-Agents"
echo "   - Ollama docs: https://ollama.com/documentation"
echo "   - Claude CLI: https://claude.ai/code"
echo ""

echo -e "${BLUE}======================================${NC}"
echo "Go explore! 🚀"
package/scripts/monitor-evolution.sh
ADDED

@@ -0,0 +1,77 @@
#!/bin/bash
#
# Stigmergy evolution system live monitor
#

cd /c/bde/stigmergy

echo "╔════════════════════════════════════════════════════════════╗"
echo "║      Stigmergy Autonomous Evolution System - Live Monitor  ║"
echo "╚════════════════════════════════════════════════════════════╝"
echo ""

# Check whether the process is running
if [ -f ~/.stigmergy/soul-state/evolution.pid ]; then
    PID=$(cat ~/.stigmergy/soul-state/evolution.pid)
    if ps -p "$PID" > /dev/null 2>&1; then
        echo "✅ Evolution system running (PID: $PID)"
    else
        echo "⚠️ Evolution system not running"
    fi
else
    echo "⚠️ Evolution system not running"
fi

echo ""
echo "════════════════════════════════════════════════════════════"
echo ""

# Show the latest progress
node scripts/track-progress.js

echo ""
echo "════════════════════════════════════════════════════════════"
echo ""
echo "📁 Latest files:"
echo ""

# Show the latest reflection
LATEST_REFLECTION=$(ls -t ~/.stigmergy/soul-state/reflections/ 2>/dev/null | head -1)
if [ -n "$LATEST_REFLECTION" ]; then
    echo "🧠 Latest reflection: $LATEST_REFLECTION"
    REFLECTION_TIME=$(stat -c %y ~/.stigmergy/soul-state/reflections/"$LATEST_REFLECTION" 2>/dev/null | cut -d'.' -f1)
    echo "   Time: $REFLECTION_TIME"
    echo ""
fi

# Show the latest skill
LATEST_SKILL=$(ls -t ~/.stigmergy/soul-state/evolved-skills/ 2>/dev/null | head -1)
if [ -n "$LATEST_SKILL" ]; then
    echo "⚡ Latest skill: $LATEST_SKILL"
    SKILL_TIME=$(stat -c %y ~/.stigmergy/soul-state/evolved-skills/"$LATEST_SKILL" 2>/dev/null | cut -d'.' -f1)
    echo "   Time: $SKILL_TIME"

    # Show the skill content
    echo ""
    echo "   Skill content:"
    head -20 ~/.stigmergy/soul-state/evolved-skills/"$LATEST_SKILL"
    echo "   ..."
    echo ""
fi

# Show the latest log
LATEST_LOG=$(ls -t ~/.stigmergy/soul-state/logs/ 2>/dev/null | head -1)
if [ -n "$LATEST_LOG" ]; then
    echo "📝 Latest log: $LATEST_LOG"
    echo "   Last 20 lines:"
    echo ""
    tail -20 ~/.stigmergy/soul-state/logs/"$LATEST_LOG"
    echo ""
fi

echo "════════════════════════════════════════════════════════════"
echo ""
echo "💡 Continuous monitoring (refreshes every 10 s; press Ctrl+C to exit):"
echo ""
echo "   watch -n 10 'bash scripts/monitor-evolution.sh'"
echo ""