scientify 1.11.0 → 1.12.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (72)
  1. package/README.md +2 -2
  2. package/README.zh.md +2 -2
  3. package/dist/index.d.ts.map +1 -1
  4. package/dist/index.js +16 -1
  5. package/dist/index.js.map +1 -1
  6. package/dist/src/cli/research.d.ts +10 -0
  7. package/dist/src/cli/research.d.ts.map +1 -0
  8. package/dist/src/cli/research.js +283 -0
  9. package/dist/src/cli/research.js.map +1 -0
  10. package/dist/src/commands/metabolism-status.d.ts +6 -0
  11. package/dist/src/commands/metabolism-status.d.ts.map +1 -0
  12. package/dist/src/commands/metabolism-status.js +100 -0
  13. package/dist/src/commands/metabolism-status.js.map +1 -0
  14. package/dist/src/hooks/cron-skill-inject.d.ts +21 -0
  15. package/dist/src/hooks/cron-skill-inject.d.ts.map +1 -0
  16. package/dist/src/hooks/cron-skill-inject.js +53 -0
  17. package/dist/src/hooks/cron-skill-inject.js.map +1 -0
  18. package/dist/src/hooks/research-mode.d.ts +1 -1
  19. package/dist/src/hooks/research-mode.d.ts.map +1 -1
  20. package/dist/src/hooks/research-mode.js +28 -2
  21. package/dist/src/hooks/research-mode.js.map +1 -1
  22. package/dist/src/knowledge-state/render.d.ts +8 -1
  23. package/dist/src/knowledge-state/render.d.ts.map +1 -1
  24. package/dist/src/knowledge-state/render.js +31 -0
  25. package/dist/src/knowledge-state/render.js.map +1 -1
  26. package/dist/src/knowledge-state/store.d.ts.map +1 -1
  27. package/dist/src/knowledge-state/store.js +340 -6
  28. package/dist/src/knowledge-state/store.js.map +1 -1
  29. package/dist/src/knowledge-state/types.d.ts +19 -0
  30. package/dist/src/knowledge-state/types.d.ts.map +1 -1
  31. package/dist/src/literature/subscription-state.d.ts +2 -0
  32. package/dist/src/literature/subscription-state.d.ts.map +1 -1
  33. package/dist/src/literature/subscription-state.js +7 -0
  34. package/dist/src/literature/subscription-state.js.map +1 -1
  35. package/dist/src/research-subscriptions/prompt.d.ts.map +1 -1
  36. package/dist/src/research-subscriptions/prompt.js +44 -16
  37. package/dist/src/research-subscriptions/prompt.js.map +1 -1
  38. package/dist/src/templates/bootstrap.d.ts +8 -0
  39. package/dist/src/templates/bootstrap.d.ts.map +1 -0
  40. package/dist/src/templates/bootstrap.js +153 -0
  41. package/dist/src/templates/bootstrap.js.map +1 -0
  42. package/dist/src/tools/arxiv-download.d.ts +2 -2
  43. package/dist/src/tools/arxiv-download.d.ts.map +1 -1
  44. package/dist/src/tools/arxiv-download.js +9 -11
  45. package/dist/src/tools/arxiv-download.js.map +1 -1
  46. package/dist/src/tools/scientify-cron.d.ts +2 -0
  47. package/dist/src/tools/scientify-cron.d.ts.map +1 -1
  48. package/dist/src/tools/scientify-cron.js +33 -1
  49. package/dist/src/tools/scientify-cron.js.map +1 -1
  50. package/dist/src/tools/scientify-literature-state.d.ts +4 -0
  51. package/dist/src/tools/scientify-literature-state.d.ts.map +1 -1
  52. package/dist/src/tools/scientify-literature-state.js +45 -0
  53. package/dist/src/tools/scientify-literature-state.js.map +1 -1
  54. package/dist/src/tools/unpaywall-download.d.ts +2 -2
  55. package/dist/src/tools/unpaywall-download.js +4 -4
  56. package/dist/src/tools/unpaywall-download.js.map +1 -1
  57. package/openclaw.plugin.json +4 -2
  58. package/package.json +1 -1
  59. package/skills/idea-generation/SKILL.md +24 -29
  60. package/skills/metabolism/SKILL.md +118 -0
  61. package/skills/metabolism-init/SKILL.md +80 -0
  62. package/skills/{literature-survey → research-collect}/SKILL.md +23 -33
  63. package/skills/research-experiment/SKILL.md +1 -1
  64. package/skills/research-implement/SKILL.md +1 -1
  65. package/skills/research-pipeline/SKILL.md +6 -11
  66. package/skills/research-plan/SKILL.md +3 -3
  67. package/skills/research-review/SKILL.md +1 -1
  68. package/skills/research-subscription/SKILL.md +16 -0
  69. package/skills/research-survey/SKILL.md +6 -6
  70. package/skills/write-review-paper/SKILL.md +14 -14
  71. package/skills/_shared/workspace-spec.md +0 -164
  72. package/skills/install-scientify/SKILL.md +0 -106
@@ -19,32 +19,27 @@ Generate innovative research ideas grounded in literature analysis. This skill r
 
  **Core principle:** Ideas MUST be grounded in actual papers, not generated from model knowledge.
 
- **Workspace:** See `../_shared/workspace-spec.md` for directory structure. Outputs go to `$WORKSPACE/ideas/`.
+ **Workspace:** `$W` = working directory provided in task parameter. Outputs go to `$W/ideas/`.
 
  ---
 
  ## Step 1: Check Workspace Resources
 
- First, check what resources already exist:
+ First, check what resources already exist in `$W`:
 
  ```bash
- # Check active project
- cat ~/.openclaw/workspace/projects/.active 2>/dev/null
-
- # Check papers
- ls ~/.openclaw/workspace/projects/*/papers/ 2>/dev/null | head -20
-
- # Check survey results
- cat ~/.openclaw/workspace/projects/*/survey/clusters.json 2>/dev/null | head -5
+ ls $W/papers/ 2>/dev/null | head -20
+ ls $W/papers/_meta/ 2>/dev/null | head -10
+ ls $W/survey/ 2>/dev/null
  ```
 
  ### Assess Available Resources
 
  | Resource | Location | Status |
  |----------|----------|--------|
- | Papers | `$WORKSPACE/papers/` | Count: ? |
- | Survey clusters | `$WORKSPACE/survey/clusters.json` | Exists: Y/N |
- | Repos | `$WORKSPACE/repos/` | Count: ? |
+ | Papers | `$W/papers/` | Count: ? |
+ | Survey clusters | `$W/survey/clusters.json` | Exists: Y/N |
+ | Repos | `$W/repos/` | Count: ? |
 
  ---
 
@@ -57,14 +52,14 @@ Based on workspace state, ask user:
  >
  > Options:
  > 1. **Use existing papers** - Generate ideas from current collection
- > 2. **Search more** - Run `/literature-survey` to expand collection
+ > 2. **Search more** - Run `/research-collect` to expand collection
  > 3. **Quick search** - Add 5-10 more papers on specific topic
 
  **If no papers:**
  > 📭 No papers found in workspace.
  >
  > To generate grounded ideas, I need literature. Options:
- > 1. **Run /literature-survey** - Comprehensive search (100+ papers, recommended)
+ > 1. **Run /research-collect** - Comprehensive search (100+ papers, recommended)
  > 2. **Quick search** - Fetch 10-15 papers on your topic now
  > 3. **You provide papers** - Point me to existing PDFs/tex files
@@ -72,17 +67,17 @@ Based on workspace state, ask user:
 
  ## Step 3: Acquire Resources (if needed)
 
- ### Option A: Delegate to /literature-survey (Recommended)
+ ### Option A: Delegate to /research-collect (Recommended)
 
  If user wants comprehensive search:
  ```
- Please run: /literature-survey {topic}
+ Please run: /research-collect {topic}
 
  This will:
  - Search 100+ papers systematically
  - Filter by relevance (score ≥4)
  - Cluster into research directions
- - Save to $WORKSPACE/papers/
+ - Save to $W/papers/
 
  After survey completes, run /idea-generation again.
  ```
@@ -101,21 +96,21 @@ Arguments:
 
  2. **Clone 3-5 reference repos:**
  ```bash
- mkdir -p $WORKSPACE/repos
- git clone --depth 1 {repo_url} $WORKSPACE/repos/{name}
+ mkdir -p $W/repos
+ git clone --depth 1 {repo_url} $W/repos/{name}
  ```
 
  3. **Download paper sources:**
  ```bash
- mkdir -p $WORKSPACE/papers/{arxiv_id}
- curl -L "https://arxiv.org/src/{arxiv_id}" | tar -xz -C $WORKSPACE/papers/{arxiv_id}
+ mkdir -p $W/papers/{arxiv_id}
+ curl -L "https://arxiv.org/src/{arxiv_id}" | tar -xz -C $W/papers/{arxiv_id}
  ```
 
  ---
 
  ## Step 4: Analyze Literature
 
- **Prerequisites:** At least 5 papers in `$WORKSPACE/papers/`
+ **Prerequisites:** At least 5 papers in `$W/papers/`
 
  ### 4.1 Read Papers
 
@@ -135,7 +130,7 @@ Look for:
  - Scalability issues
  - Assumptions that could be relaxed
 
- Document gaps in `$WORKSPACE/ideas/gaps.md`:
+ Document gaps in `$W/ideas/gaps.md`:
  ```markdown
  # Research Gaps Identified
 
@@ -151,7 +146,7 @@ Document gaps in `$WORKSPACE/ideas/gaps.md`:
 
  ## Step 5: Generate 5 Ideas
 
- Create `$WORKSPACE/ideas/idea_1.md` through `idea_5.md` using template in `references/idea-template.md`.
+ Create `$W/ideas/idea_1.md` through `idea_5.md` using template in `references/idea-template.md`.
 
  **Requirements:**
  - Each idea cites ≥2 papers by arXiv ID
@@ -180,7 +175,7 @@ Create `$WORKSPACE/ideas/idea_1.md` through `idea_5.md` using template in `refer
 
  ### 6.2 Enhance Selected Idea
 
- Create `$WORKSPACE/ideas/selected_idea.md` with:
+ Create `$W/ideas/selected_idea.md` with:
  - Detailed math (loss functions, gradients)
  - Architecture choices
  - Hyperparameters
@@ -210,13 +205,13 @@ Map idea concepts to reference implementations.
 
  See `references/code-mapping.md` for template.
 
- **Output:** `$WORKSPACE/ideas/implementation_report.md`
+ **Output:** `$W/ideas/implementation_report.md`
 
  ---
 
  ## Step 8: Summary
 
- Create `$WORKSPACE/ideas/summary.md`:
+ Create `$W/ideas/summary.md`:
  - All 5 ideas with scores
  - Selected idea details
  - Next steps: `/research-pipeline` to implement
@@ -236,6 +231,6 @@ Create `$WORKSPACE/ideas/summary.md`:
 
  ## Integration
 
- - **Before:** `/literature-survey` to collect papers
+ - **Before:** `/research-collect` to collect papers
  - **After:** `/research-pipeline` to implement selected idea
  - **Alternative:** `/write-review-paper` to write survey instead
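Step 8 implicitly assumes all five idea files from Step 5 exist before `summary.md` is written. A minimal guard, sketched in shell (the helper name is hypothetical):

```shell
#!/bin/sh
# Hypothetical guard for Step 8: confirm idea_1.md..idea_5.md exist under
# $W/ideas/ before assembling summary.md.
idea_files_present() {
  w="$1"
  for i in 1 2 3 4 5; do
    if [ ! -f "$w/ideas/idea_$i.md" ]; then
      echo "missing idea_$i.md"
      return 1
    fi
  done
  echo "all 5 ideas present"
}
```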
@@ -0,0 +1,118 @@
+ ---
+ name: metabolism
+ description: "Knowledge metabolism cycle: ingest new papers, update knowledge state, detect cross-topic links, generate hypotheses. Use /metabolism to trigger manually."
+ user-invokable: true
+ ---
+
+ # Continuous Knowledge Metabolism — Incremental Cycle
+
+ You are executing a knowledge metabolism cycle. Follow the steps below exactly.
+
+ **Prerequisite:** `metabolism/config.json` must already exist with `currentDay >= 1`. If it is missing or `currentDay` is 0, tell the user to run /metabolism-init first to complete initialization.
+
+ ## Preparation
+
+ 1. Read `metabolism/config.json` for the keywords, arXiv categories, `processed_ids`, and `currentDay`
+ 2. Read `metabolism/knowledge/_index.md` for the current knowledge state
+
+ ## Step 1: Search (incremental)
+
+ Search a sliding window (the past 5 days) and deduplicate against `processed_ids`:
+
+ ```
+ arxiv_search({
+   query: "{keywords} AND cat:{category}",
+   date_from: "{5 days ago, YYYY-MM-DD}",
+   sort_by: "submittedDate",
+   max_results: 30
+ })
+
+ openalex_search({
+   query: "{keywords}",
+   filter: "from_publication_date:{5 days ago, YYYY-MM-DD}",
+   sort: "publication_date",
+   max_results: 20
+ })
+ ```
+
+ Merge the results, deduplicate by arXiv ID / DOI, and **skip papers already in `processed_ids`**.
+
+ Download the new papers:
+
+ ```
+ arxiv_download({ arxiv_ids: ["{id1}", "{id2}", ...] })
+ unpaywall_download({ dois: ["{doi1}", "{doi2}", ...] })
+ ```
+
+ ## Step 2: Read
+
+ For each new paper:
+ - Read the .tex source (preferred) or the PDF
+ - Extract: the core method, key conclusions, and its relation to existing knowledge
+
+ Append each paper's arXiv ID / DOI to `processed_ids` in `config.json`.
+
+ ## Step 3: Update Knowledge
+
+ Read the current `metabolism/knowledge/_index.md` and the relevant `topic-*.md` files, then update them based on today's papers.
+
+ **Update principles:**
+ - New finding → add it to the relevant section
+ - Confirms existing knowledge → add the supporting source
+ - Contradicts existing knowledge → note the disagreement and keep both sides' evidence
+ - Cross-domain connection → record the link
+
+ **Length management:** Keep each topic file under 200 lines. When approaching the limit, compress earlier content (merge similar conclusions, drop low-value entries) while preserving information density. Do not lose key conclusions or source citations for the sake of compression.
+
+ ## Step 4: Hypothesize
+
+ After updating the knowledge files, review today's additions and ask yourself:
+
+ - Is there a recurring but still unverified pattern?
+ - Do two independent findings, taken together, suggest a new direction?
+ - Is there an obvious gap in existing methods?
+
+ **If you have an idea** → write `metabolism/hypotheses/hyp-{NNN}.md`:
+
+ ```markdown
+ # Hypothesis {NNN}
+
+ ## Hypothesis
+ {one sentence}
+
+ ## Reasoning
+ {which papers/knowledge it builds on, 2-3 paragraphs}
+
+ ## Source papers
+ - {arxiv_id}: {title}
+
+ ## Self-assessment
+ - Novelty: {1-5}
+ - Feasibility: {1-5}
+ - Impact: {1-5}
+ ```
+
+ Then notify the main session via `sessions_send`.
+
+ **No idea** → skip this step; do not force one.
+
+ ## Step 5: Log & Finish
+
+ Write `metabolism/log/{YYYY-MM-DD}.md`:
+
+ ```markdown
+ # Day {currentDay} — {YYYY-MM-DD}
+
+ New papers: {N}
+ Knowledge updates: {brief summary of main changes}
+ Hypothesis: {yes/no}
+ ```
+
+ Update `config.json`: increment `currentDay` by 1.
+
+ ## Behavioral constraints
+
+ 1. Do not fabricate factual claims that do not appear in the papers; you may still use your own knowledge for reasoning and for judging connections
+ 2. Do not generate a hypothesis when you have no idea
+ 3. Run autonomously; do not ask the human questions
+ 4. Always read a knowledge file's current contents before modifying it
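The Step 1 deduplication against `processed_ids` is just a set difference. A dependency-free sketch, using a plain-text id list rather than the JSON array in `config.json`, with an illustrative helper name:

```shell
#!/bin/sh
# Illustrative dedup: print only candidate IDs not already recorded.
# Real runs would read processed_ids from metabolism/config.json instead.
dedup_new_ids() {
  seen_file="$1"; shift
  for id in "$@"; do
    # -x: match whole line, -F: literal (arXiv IDs contain dots)
    grep -qxF "$id" "$seen_file" 2>/dev/null || echo "$id"
  done
}

seen=$(mktemp)
printf '2401.11111\n2402.22222\n' > "$seen"
dedup_new_ids "$seen" 2401.11111 2402.22222 2403.33333   # prints only 2403.33333
```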
@@ -0,0 +1,80 @@
+ ---
+ name: metabolism-init
+ description: "Initialize knowledge metabolism for a research topic: broad literature survey, build baseline knowledge state, set up metabolism workspace"
+ user-invokable: true
+ ---
+
+ # Metabolism Initialization — Day 0 Baseline Building
+
+ You are initializing knowledge metabolism for a research direction. This is Day 0: build the field's baseline knowledge.
+
+ ## Preparation
+
+ 1. Check whether `metabolism/config.json` exists
+    - If not: ask the user for the research direction, then create `metabolism/config.json` (containing `keywords`, `categories`, `currentDay: 0`, `processed_ids: []`)
+    - If it exists and `currentDay > 0`: tell the user initialization is already complete and need not be repeated
+ 2. Create the directory structure (if missing):
+    ```
+    metabolism/
+      knowledge/
+      hypotheses/
+      experiments/
+      conversations/
+      log/
+    ```
+
+ ## Step 1: Broad Survey
+
+ Delegate a broad survey (no date limit) to /research-collect to build the initial knowledge:
+
+ ```
+ sessions_spawn({
+   task: "/research-collect\nResearch topic: {extract from the keywords in config.json}\nThis is the Day 0 baseline build; run a broad survey (no date limit) covering the field's classic work and recent progress.\nExpected output: papers/_meta/*.json + papers/_downloads/",
+   label: "Day 0 Baseline Survey",
+   runTimeoutSeconds: 1800
+ })
+ ```
+
+ The spawned session shares the working directory, so no path needs to be passed. After it completes, read `papers/_meta/*.json` for the paper list.
+
+ ## Step 2: Read and Extract Knowledge
+
+ For each paper:
+ - Read the .tex source (preferred) or the PDF
+ - Extract: the core method, key conclusions, and the state of the field
+
+ Append each paper's arXiv ID / DOI to `processed_ids` in `metabolism/config.json`.
+
+ ## Step 3: Build the Initial Knowledge State
+
+ Create `metabolism/knowledge/_index.md` with:
+ - Research Goal (from config.json)
+ - Topics table (papers clustered by theme)
+ - Cross-topic Links (if any)
+ - Timeline (recording Day 0)
+
+ For each identified topic, create `metabolism/knowledge/topic-{name}.md` with:
+ - Known methods
+ - Key papers and conclusions
+ - Open problems
+
+ ## Step 4: Log
+
+ Write `metabolism/log/{YYYY-MM-DD}-init.md`:
+
+ ```markdown
+ # Day 0 — Initialization
+
+ Date: {YYYY-MM-DD}
+ Papers: {N}
+ Topics: {list the identified topics}
+ Status: baseline build complete
+ ```
+
+ Update `metabolism/config.json`: set `currentDay` to 1.
+
+ ## Behavioral constraints
+
+ 1. Do not fabricate factual claims that do not appear in the papers
+ 2. Run autonomously; do not ask the human questions (beyond the initial configuration)
+ 3. Read a knowledge file's current contents before modifying it (if it exists)
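The Preparation step's directory and config bootstrap can be sketched as one helper. A sketch under stated assumptions: the skill only names `keywords`, `categories`, `currentDay`, and `processed_ids`, so the JSON below contains exactly those fields and nothing else; the helper name and argument values are illustrative.

```shell
#!/bin/sh
# Sketch of the metabolism-init bootstrap: directory layout plus a minimal
# config.json with only the fields the skill names.
init_metabolism() {
  dir="$1"; keywords="$2"; categories="$3"
  mkdir -p "$dir/knowledge" "$dir/hypotheses" "$dir/experiments" \
           "$dir/conversations" "$dir/log"
  cat > "$dir/config.json" <<EOF
{
  "keywords": "$keywords",
  "categories": "$categories",
  "currentDay": 0,
  "processed_ids": []
}
EOF
}

init_metabolism "${W:-.}/metabolism" "example topic" "cs.LG"
```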
@@ -1,6 +1,6 @@
  ---
- name: literature-survey
- description: "Use this when the user wants to find, download, or collect academic papers on a topic. Searches arXiv, filters by relevance, downloads PDFs and sources, clusters by research direction."
+ name: research-collect
+ description: "[Read when prompt contains /research-collect]"
  metadata:
  {
  "openclaw":
@@ -14,10 +14,12 @@ metadata:
 
  **Don't ask permission. Just do it.**
 
+ **Workspace:** `$W` = working directory provided in task parameter.
+
  ## Output Structure
 
  ```
- ~/.openclaw/workspace/projects/{project-id}/
+ $W/
  ├── survey/
  │ ├── search_terms.json # list of search terms
  │ └── report.md # final report
@@ -38,17 +40,13 @@ metadata:
 
  ### Phase 1: Preparation
 
+ Ensure the working directory structure exists:
+
  ```bash
- ACTIVE=$(cat ~/.openclaw/workspace/projects/.active 2>/dev/null)
- if [ -z "$ACTIVE" ]; then
-   PROJECT_ID="<topic-slug>"
-   mkdir -p ~/.openclaw/workspace/projects/$PROJECT_ID/{survey,papers/_downloads,papers/_meta}
-   echo "$PROJECT_ID" > ~/.openclaw/workspace/projects/.active
- fi
- PROJECT_DIR="$HOME/.openclaw/workspace/projects/$(cat ~/.openclaw/workspace/projects/.active)"
+ mkdir -p "$W/survey" "$W/papers/_downloads" "$W/papers/_meta"
  ```
 
- Generate 4-8 search terms and save them to `survey/search_terms.json`.
+ Generate 4-8 search terms and save them to `$W/survey/search_terms.json`.
 
  ---
 
@@ -76,13 +74,13 @@ arxiv_search({ query: "<term>", max_results: 30 })
  ```
  arxiv_download({
    arxiv_ids: ["<useful paper IDs>"],
-   output_dir: "$PROJECT_DIR/papers/_downloads"
+   output_dir: "papers/_downloads"
  })
  ```
 
  #### 2.4 Write metadata
 
- For each downloaded paper, create a metadata file `papers/_meta/{arxiv_id}.json`:
+ For each downloaded paper, create a metadata file `$W/papers/_meta/{arxiv_id}.json`:
 
  ```json
  {
@@ -105,7 +103,7 @@ arxiv_download({
 
  #### 3.1 Select high-scoring papers
 
- Read the papers scoring ≥4 under `papers/_meta/` and pick the **Top 5** most relevant.
+ Read the papers scoring ≥4 under `$W/papers/_meta/` and pick the **Top 5** most relevant.
 
  #### 3.2 Search for reference repos
@@ -116,41 +114,32 @@ arxiv_download({
 
  Use the `github_search` tool:
  ```javascript
- // Example:
  github_search({
    query: "{paper_title} implementation",
    max_results: 10,
    sort: "stars",
-   language: "python" // optional: pick by the paper's field
- })
-
- // If you have a specific method name:
- github_search({
-   query: "{method_name} {author_last_name}",
-   max_results: 5
+   language: "python"
  })
  ```
 
- **Tip:** If you need a higher GitHub API rate limit, set the `GITHUB_TOKEN` environment variable.
-
  #### 3.3 Filter and clone
 
  For each repo found, evaluate:
  - Star count (ideally >100)
  - Code quality (has a README, has requirements.txt, clear code structure)
- - Fit with the paper (README cites the paper / implements the paper's method)
+ - Fit with the paper
 
- Pick the **3-5** most relevant repos and clone them into `repos/`:
+ Pick the **3-5** most relevant repos and clone them into `$W/repos/`:
 
  ```bash
- mkdir -p "$PROJECT_DIR/repos"
- cd "$PROJECT_DIR/repos"
+ mkdir -p "$W/repos"
+ cd "$W/repos"
  git clone --depth 1 <repo_url>
  ```
 
  #### 3.4 Write the selection report
 
- Create `$PROJECT_DIR/prepare_res.md`:
+ Create `$W/prepare_res.md`:
 
  ```markdown
  # Reference repo selection
179
168
  #### 4.1 读取所有元数据
180
169
 
181
170
  ```bash
182
- ls $PROJECT_DIR/papers/_meta/
171
+ ls $W/papers/_meta/
183
172
  ```
184
173
 
185
174
  读取所有 `.json` 文件,汇总论文列表。
@@ -191,15 +180,15 @@ ls $PROJECT_DIR/papers/_meta/
 
  #### 4.3 Create folders and move papers
 
  ```bash
- mkdir -p "$PROJECT_DIR/papers/data-driven"
- mv "$PROJECT_DIR/papers/_downloads/2401.12345" "$PROJECT_DIR/papers/data-driven/"
+ mkdir -p "$W/papers/data-driven"
+ mv "$W/papers/_downloads/2401.12345" "$W/papers/data-driven/"
  ```
 
  ---
 
  ### Phase 5: Generate the report
 
- Create `survey/report.md`:
+ Create `$W/survey/report.md`:
  - Survey overview (number of search terms, papers, directions)
  - Summary of each research direction
@@ -222,3 +211,4 @@ mv "$PROJECT_DIR/papers/_downloads/2401.12345" "$PROJECT_DIR/papers/data-driven/
  |------|---------|
  | `arxiv_search` | Search papers (no side effects) |
  | `arxiv_download` | Download .tex/.pdf (requires absolute path) |
+ | `github_search` | Search reference repos |
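The per-paper metadata write in Phase 2.4 can be sketched with an abridged field set. The full JSON schema in the skill may contain more fields; `write_meta` is a hypothetical helper:

```shell
#!/bin/sh
# Abridged sketch of writing one metadata file under $W/papers/_meta/.
write_meta() {
  w="$1"; id="$2"; title="$3"; score="$4"
  mkdir -p "$w/papers/_meta"
  printf '{\n  "arxiv_id": "%s",\n  "title": "%s",\n  "score": %s\n}\n' \
    "$id" "$title" "$score" > "$w/papers/_meta/$id.json"
}

write_meta "${W:-.}" "2401.12345" "Example Paper" 5
```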
@@ -15,7 +15,7 @@ metadata:
 
  **Don't ask permission. Just do it.**
 
- **Workspace:** See `../_shared/workspace-spec.md`. Set `$W` to the active project directory.
+ **Workspace:** `$W` = working directory provided in task parameter.
 
  ## Prerequisites
 
@@ -15,7 +15,7 @@ metadata:
 
  **Don't ask permission. Just do it.**
 
- **Workspace:** See `../_shared/workspace-spec.md`. Set `$W` to the active project directory.
+ **Workspace:** `$W` = working directory provided in task parameter.
 
  ## Prerequisites
 
@@ -94,22 +94,17 @@ The task must start with `/skill-name` (to trigger slash-command parsing); subsequent lines
 
  ## Workspace
 
- See `../_shared/workspace-spec.md`. Set `$W` to the active project directory.
+ `$W` = agent workspace root (see AGENTS.md for layout).
 
  ---
 
  ## Step 0: Initialization
 
- ```bash
- ACTIVE=$(cat ~/.openclaw/workspace/projects/.active 2>/dev/null)
- ```
+ `$W` is the current agent's working directory (defined in AGENTS.md).
 
- If there is no active project:
- 1. Ask the user for the research topic
- 2. Create the project directory
- 3. Write `task.json`
+ Check whether `$W/SOUL.md` contains research-direction information. If it does not (BOOTSTRAP incomplete), tell the user to finish the BOOTSTRAP configuration first.
 
- Set `$W = ~/.openclaw/workspace/projects/{project-id}`
+ Ensure the necessary subdirectories exist under `$W` (such as `survey/`, `papers/`).
 
  ---
 
@@ -122,8 +117,8 @@ ACTIVE=$(cat ~/.openclaw/workspace/projects/.active 2>/dev/null)
  **Check:** does the `$W/papers/_meta/` directory exist and contain `.json` files?
 
  **If missing, call the sessions_spawn tool (then stop and wait for the completion notification):**
- - task: `"/literature-survey\nWorking directory: {absolute path of $W}\nResearch topic: {extract from task.json}\nSearch, filter, and download papers into papers/ under the working directory."`
- - label: `"Literature Survey"`
+ - task: `"/research-collect\nWorking directory: {absolute path of $W}\nResearch topic: {extract from task.json}\nSearch, filter, and download papers into papers/ under the working directory."`
+ - label: `"Research Collect"`
  - runTimeoutSeconds: `1800`
 
  **Verify:** `ls $W/papers/_meta/*.json` yields at least 3 files
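The verification above (at least 3 metadata files) can be sketched as a shell gate. The threshold comes from the skill text; the helper name and messages are illustrative:

```shell
#!/bin/sh
# Gate the pipeline on at least 3 paper metadata files under $W/papers/_meta/.
meta_ready() {
  count=$(ls "$1"/papers/_meta/*.json 2>/dev/null | wc -l | tr -d ' ')
  [ "$count" -ge 3 ]
}

if meta_ready "${W:-.}"; then
  echo "papers ready"
else
  echo "rerun /research-collect"
fi
```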
@@ -14,7 +14,7 @@ metadata:
 
  **Don't ask permission. Just do it.**
 
- **Workspace:** See `../_shared/workspace-spec.md`. Set `$W` to the active project directory.
+ **Workspace:** `$W` = working directory provided in task parameter.
 
  ## Prerequisites
 
@@ -23,8 +23,8 @@ metadata:
  | `$W/task.json` | /research-pipeline or user |
  | `$W/survey_res.md` | /research-survey |
  | `$W/notes/paper_*.md` | /research-survey |
- | `$W/repos/` | /literature-survey Phase 3 |
- | `$W/prepare_res.md` | /literature-survey Phase 3 |
+ | `$W/repos/` | /research-collect Phase 3 |
+ | `$W/prepare_res.md` | /research-collect Phase 3 |
 
  **If `survey_res.md` is missing, STOP:** "Run /research-survey first to complete the deep analysis"
 
@@ -15,7 +15,7 @@ metadata:
 
  **Don't ask permission. Just do it.**
 
- **Workspace:** See `../_shared/workspace-spec.md`. Set `$W` to the active project directory.
+ **Workspace:** `$W` = working directory provided in task parameter.
 
  ## Prerequisites
 
@@ -25,6 +25,13 @@ Use this skill when the user asks for:
 
  Do not stop at explanation.
  Create a real cron job via `scientify_cron_job`.
+ Do not claim "still running/in progress" unless you created a real async handle (a cron job id or task id). If no handle exists, finish the run in the same turn.
+ For high-effort, strict-quality runs that are unlikely to finish in one turn, start real async execution instead of replying "not executed":
+ - if the user forbids schedule creation, prefer `sessions_spawn` and return the task id
+ - otherwise prefer `scientify_cron_job` with `run_now: true` and return the job id
+ If the current turn is already cron-triggered, never call `scientify_cron_job` again from inside that run (to avoid nested cron/run_now recursion).
+ When using `scientify_literature_state`, keep `scope/topic` consistent across prepare -> record -> status (reuse the prepare output; do not replace the scope with a project id).
+ If the user gave hard constraints (for example an exact or minimum core-paper count), do not return status `ok` unless they are satisfied; otherwise persist `degraded_quality` with the unmet reasons.
 
  ## Tool to call
 
@@ -33,6 +40,7 @@ Create a real cron job via `scientify_cron_job`.
  - `action: "upsert"`: create or update a schedule
  - `action: "list"`: show current schedules
  - `action: "remove"`: cancel schedules
+ - Optional `run_now: true` (upsert only): trigger one immediate execution right after creation and return a real handle
 
  Routing rules:
 
@@ -65,6 +73,7 @@ For `action: "upsert"`, set `schedule` to one of:
  - Optional aliases: `webui`, `tui` (both map to `last`)
  - Optional `to`: channel-specific user or chat id (required only for concrete channels like `feishu`/`telegram`, not for `last`/`webui`/`tui`)
  - Optional `no_deliver: true`: run in background without push
+ - `no_deliver` only disables delivery; research runs must still call `scientify_literature_state.record` to persist state
 
  If the user does not specify a destination, leave `channel` and `to` unset to use default routing.
 
@@ -88,10 +97,17 @@ For selected core papers, prefer full-text reading first:
  - evidence-binding rate >= 90% (key conclusions should be backed by section+locator+quote)
  - citation error rate < 2%
  - if full text is missing, do not keep high-confidence conclusions
+ - Reflection guardrail:
+   - when `knowledge_changes` has BRIDGE (or a REVISE+CONFIRM contradiction signal), execute at least one immediate reflection query and write it into `exploration_trace`
+   - do not emit BRIDGE unless the `evidence_ids` resolve to this run's papers and include at least one full-text-backed paper
+ - Hypothesis gate:
+   - avoid speculative guesses; each hypothesis should include >=2 `evidence_ids`, a `dependency_path` of length >=2, and novelty/feasibility/impact scores
  If an incremental pass returns no unseen papers, run one fallback representative pass before returning empty.
  If the user gives explicit preference feedback during follow-up (read/skip/star style intent, source preference, direction preference),
  persist it via `scientify_literature_state` action=`feedback` (backend-only memory, not user-facing by default).
  If the user asks "which papers did you push just now?", call `scientify_literature_state` action=`status` first and answer from `recent_papers` + `knowledge_state_summary` (do not claim you must re-search unless status is empty).
+ After each research `record`, call `scientify_literature_state` action=`status` and include `run_id`/`latest_run_id` in your reply for traceability.
+ Each research cycle should use a unique `run_id` (a cron run id preferred, otherwise timestamp-based) to avoid idempotent no-op writes.
 
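The unique `run_id` rule above can be sketched as a tiny fallback generator. The helper and the exact id format are my assumptions; the skill only requires that each research cycle's id be unique, with a real cron run id preferred when available:

```shell
#!/bin/sh
# Hypothetical fallback run_id generator: prefer the cron run id when given,
# otherwise derive a unique UTC-timestamp+PID id.
make_run_id() {
  if [ -n "$1" ]; then
    echo "$1"
  else
    echo "run-$(date -u +%Y%m%dT%H%M%S)-$$"
  fi
}

make_run_id "cron-123"   # -> cron-123
make_run_id              # e.g. run-20250101T000000-4242
```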
  ## Message field (plain reminder)
 
@@ -14,7 +14,7 @@ metadata:
 
  **Don't ask permission. Just do it.**
 
- **Workspace:** See `../_shared/workspace-spec.md`. Set `$W` to the active project directory.
+ **Workspace:** `$W` = working directory provided in task parameter.
 
  ## Prerequisites
 
@@ -22,12 +22,12 @@ Read and verify these files exist before starting:
 
  | File | Source |
  |------|--------|
- | `$W/papers/_meta/*.json` | /literature-survey |
- | `$W/papers/_downloads/` or `$W/papers/{direction}/` | /literature-survey |
- | `$W/repos/` | /literature-survey Phase 3 |
- | `$W/prepare_res.md` | /literature-survey Phase 3 |
+ | `$W/papers/_meta/*.json` | /research-collect |
+ | `$W/papers/_downloads/` or `$W/papers/{direction}/` | /research-collect |
+ | `$W/repos/` | /research-collect Phase 3 |
+ | `$W/prepare_res.md` | /research-collect Phase 3 |
 
- **If papers are missing, STOP:** "Run /literature-survey first to download the papers"
+ **If papers are missing, STOP:** "Run /research-collect first to download the papers"
 
  **Note:** If `prepare_res.md` states "no usable reference repos", the code-mapping step may be skipped, but note this in survey_res.md.