scientify 1.7.1 → 1.7.3

package/README.md CHANGED
@@ -2,253 +2,381 @@
 
  **AI-powered research workflow automation for OpenClaw.**
 
+ Scientify is an [OpenClaw](https://github.com/openclaw/openclaw) plugin that automates the full academic research pipeline — from literature survey to experiment execution — using LLM-driven sub-agents.
+
  [中文文档](./README.zh.md)
 
  ---
 
- ## Features
+ ## What It Does
 
- ### Skills (LLM-powered)
+ Scientify turns a single research prompt into a complete automated pipeline. Each phase runs as an independent sub-agent — the orchestrator verifies outputs between steps and passes context forward.
 
- | Skill | Description |
- |-------|-------------|
- | **research-pipeline** | Orchestrator for end-to-end ML research. Spawns sub-agents for each phase, verifies outputs between steps. |
- | **research-survey** | Deep analysis of downloaded papers: extract formulas, map to code, produce method comparison table. |
- | **research-plan** | Create structured 4-part implementation plan (Dataset/Model/Training/Testing) from survey results. |
- | **research-implement** | Implement ML code from plan, run 2-epoch validation with `uv` venv isolation, verify real results. |
- | **research-review** | Review implementation against plan and survey. Iterates fix-rerun-review up to 3 times. |
- | **research-experiment** | Full training run + ablation experiments + result analysis. Requires review PASS. |
- | **literature-survey** | Comprehensive literature survey: search → filter → download → cluster → report. |
- | **idea-generation** | Generate innovative research ideas from a topic. Searches arXiv/GitHub, downloads papers, outputs 5 ideas. |
+ ### Scenario 1 — End-to-End Research Pipeline
 
- ### Commands (Direct, no LLM)
+ > *"Research scaling laws for classical ML classifiers on Fashion-MNIST"*
 
- | Command | Description |
- |---------|-------------|
- | `/research-status` | Show workspace status |
- | `/papers` | List downloaded papers |
- | `/ideas` | List generated ideas |
- | `/projects` | List all projects |
- | `/project-switch <id>` | Switch project |
- | `/project-delete <id>` | Delete project |
+ The **research-pipeline** orchestrator runs all 6 phases in sequence, spawning a dedicated sub-agent for each:
 
- ### Tools
+ ```mermaid
+ flowchart LR
+ A["Literature\nSurvey"] --> B["Deep\nAnalysis"] --> C["Implementation\nPlan"] --> D["Code\nImplementation"] --> E["Automated\nReview"] --> F["Full\nExperiment"]
+ ```
 
- | Tool | Description |
- |------|-------------|
- | **arxiv_search** | Search arXiv.org API for papers. Returns metadata only (title, authors, abstract, arxiv_id). No side effects. |
- | **arxiv_download** | Download arXiv papers by ID. Tries .tex source first, falls back to PDF. Requires absolute `output_dir` path. |
- | **github_search** | Search GitHub repositories by keyword, filter by language, sort by stars/updated |
+ <details>
+ <summary><b>What each phase produces</b></summary>
+
+ | Phase | What Happens | Output File |
+ |:------|:-------------|:------------|
+ | **1. Literature Survey** | Search arXiv + OpenAlex, filter, download .tex sources, cluster by direction | `survey/report.md` |
+ | **2. Deep Analysis** | Extract formulas, map methods to code, build cross-comparison | `survey_res.md` |
+ | **3. Implementation Plan** | Design 4-part plan — Dataset / Model / Training / Testing | `plan_res.md` |
+ | **4. Code Implementation** | Write ML code in `uv`-isolated venv, validate with 2-epoch run | `project/run.py` |
+ | **5. Automated Review** | Review code → fix issues → rerun → re-review (up to 3 rounds) | `iterations/judge_v*.md` |
+ | **6. Full Experiment** | Complete training + ablation studies with final analysis | `experiment_res.md` |
+
+ </details>
 
  ---
 
- ## Quick Start
+ ### Scenario 2 — Idea Generation
 
- ```bash
- # Install the plugin
- openclaw plugins install scientify
+ > *"Explore recent advances in protein folding and generate innovative research ideas"*
 
- # Start using
- openclaw "Research transformer efficiency and generate ideas"
- ```
+ The **idea-generation** skill surveys the field, then:
+
+ 1. Generates **5 diverse research ideas** grounded in real papers
+ 2. Scores each on novelty, feasibility, and impact
+ 3. Selects the best and produces an **enhanced proposal** with detailed methodology
+
+ > [!TIP]
+ > **Output:** `ideas/selected_idea.md` — a ready-to-develop research proposal.
 
  ---
 
- ## Installation
+ ### Scenario 3 — Standalone Literature Survey
 
- ```bash
- openclaw plugins install scientify
- ```
+ > *"Survey the latest papers on vision-language models for medical imaging"*
+
+ Run just the survey phase when you need a structured reading list without running the full pipeline:
 
- > **Note:** Do NOT use `npm install scientify`. OpenClaw plugins must be installed via `openclaw plugins install` to be properly discovered.
+ - Searches **arXiv** (CS/ML) and **OpenAlex** (cross-disciplinary, broader coverage)
+ - Downloads `.tex` source files; retrieves open-access PDFs via **Unpaywall**
+ - Clusters papers by sub-topic and extracts key findings
+ - Generates a structured survey report
 
- The plugin will be installed to `~/.openclaw/extensions/scientify/` and automatically enabled.
+ > [!TIP]
+ > **Output:** `survey/report.md` + raw papers in `papers/_downloads/`
 
  ---
 
- ## Usage Scenarios
+ ### Scenario 4 — Review Paper Drafting
 
- ### 1. One-shot Idea Generation
+ > *"Write a survey paper based on my project's research outputs"*
 
- ```
- You: Research "long document summarization" and generate some innovative ideas
-
- Agent: [Auto-executes]
- 1. Search arXiv papers
- 2. Search GitHub repositories
- 3. Download and analyze .tex sources
- 4. Generate 5 innovative ideas
- 5. Select and enhance the best idea
- 6. Map to code implementations
- ```
+ After completing a research pipeline (or just a literature survey + deep analysis), the **write-review-paper** skill assembles a draft:
 
- ### 2. Daily Literature Tracking
+ - Synthesizes survey reports, analysis notes, and comparison tables
+ - Structures the paper with Introduction, Related Work, Methods, and Discussion
+ - Produces a publication-ready draft in Markdown
 
- ```
- You: Set up a daily task at 9am to check arXiv for new papers on
- "transformer efficiency", summarize relevant ones and send to Feishu
-
- Agent: Setting up:
- 1. Create scheduled Hook (cron: "0 9 * * *")
- 2. Daily arxiv search for papers from last 24h
- 3. Compare against your idea (selected_idea.md)
- 4. Filter relevant papers, generate summary
- 5. Push via Feishu webhook
-
- [Example push]
- 📚 Today's Relevant Papers (3)
- • "FlashAttention-3: Fast Attention with ..." - Highly relevant to your idea
- • "Efficient Long-Context Transformers" - Medium relevance
- • "..."
- ```
+ > [!TIP]
+ > **Output:** a survey/review paper draft based on all accumulated project artifacts.
 
- ### 3. Research Progress Reporter
  ---
 
- ```
- You: Generate weekly research progress report every Friday and send to Slack
+ ### Advanced Scenarios — Combining OpenClaw Platform Capabilities
+
+ As an OpenClaw plugin, Scientify can leverage the platform's MCP servers, browser automation, multi-session concurrency, and more to build powerful composite workflows.
+
+ ---
 
- Agent: [Auto-executes every Friday]
- 1. Read $WORKSPACE changes
- 2. Stats: X new papers, Y new ideas, Z code commits
- 3. Generate weekly report markdown
- 4. Push to Slack #research channel
+ ### Scenario 5 — Literature Monitoring Bot
+
+ > *"Automatically search for new diffusion model papers every day and push a digest to our Slack channel"*
+
+ Combine OpenClaw's **MCP integration** (Slack / Feishu / Email) with **scheduled triggers** to build automated literature monitoring:
+
+ ```mermaid
+ flowchart LR
+ A["Scheduled Trigger\n(cron / webhook)"] --> B["arxiv_search\n+ openalex_search"]
+ B --> C["LLM Filtering\n+ Summary"]
+ C --> D["Push to\nSlack / Feishu / Email"]
  ```
 
- ### 4. Competitive Paper Monitor
+ 1. External cron job or OpenClaw webhook triggers a session periodically
+ 2. Scientify's `arxiv_search` + `openalex_search` fetch the latest papers
+ 3. LLM scores and filters by your research interests, generates concise summaries
+ 4. MCP tools push the digest to Slack, Feishu, or email
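The scheduled-trigger step can be as small as one crontab entry. A minimal sketch — the one-shot `openclaw "<prompt>"` invocation shown here is an assumption carried over from earlier README examples, and the 9:00 schedule is illustrative; adapt both to your setup:

```
# m h dom mon dow  command
0 9 * * * openclaw "Search arXiv for diffusion model papers from the last 24h and push a digest to Slack"
```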
+
+ > [!NOTE]
+ > **Requires:** A configured MCP server (e.g., `slack-mcp`, `feishu-mcp`). OpenClaw supports declaring MCP servers in `openclaw.json`.
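For orientation, such a declaration might look roughly like this — a sketch only: the `mcpServers` key and the `slack-mcp` package name are assumptions, not the documented schema, so check the OpenClaw docs for the exact shape:

```json
{
  "mcpServers": {
    "slack": {
      "command": "npx",
      "args": ["-y", "slack-mcp"]
    }
  }
}
```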
+
+ ---
+
+ ### Scenario 6 — Download Paywalled Papers via Browser
+
+ > *"Download these 5 IEEE papers using my university VPN"*
 
+ Scientify's built-in `arxiv_download` and `unpaywall_download` only handle open-access papers. For paywalled content, combine with OpenClaw's **browser automation** (Playwright MCP):
+
+ ```mermaid
+ flowchart LR
+ A["Scientify\nprovides paper URLs"] --> B["Playwright MCP\nopens browser"]
+ B --> C["Institutional Proxy\nauto-authenticate"]
+ C --> D["Navigate to Publisher\ndownload PDF"]
  ```
- You: Monitor new papers from "Yann LeCun" and "Meta AI"
 
- Agent: Setting up monitoring:
- - Daily check arxiv author "Yann LeCun"
- - Check arxiv affiliation "Meta AI"
- - Push notification when new papers appear
+ - OpenClaw launches a controlled browser via Playwright MCP server
+ - The browser accesses publisher sites through your institutional proxy / VPN
+ - Automatically navigates to the paper page and downloads the PDF to `papers/_downloads/`
+ - Works with IEEE, Springer, Elsevier, ACM, and other subscription-based publishers
+
+ > [!NOTE]
+ > **Requires:** Playwright MCP server configured, and institutional network access to the papers.
+
+ ---
+
+ ### Scenario 7 — Multi-Topic Parallel Research
+
+ > *"Research 3 directions simultaneously: LoRA fine-tuning, MoE architectures, KV-Cache optimization"*
+
+ Leverage OpenClaw's **multi-session concurrency** (`sessions_spawn`) to run multiple research pipelines in parallel:
+
+ ```mermaid
+ flowchart TD
+ O["Main Agent\n(Orchestrator)"] --> A["Sub-session 1\nLoRA Fine-tuning"]
+ O --> B["Sub-session 2\nMoE Architectures"]
+ O --> C["Sub-session 3\nKV-Cache Optimization"]
+ A --> D["Independent project dirs\nisolated from each other"]
+ B --> D
+ C --> D
  ```
 
- ### 5. Paper Reading Assistant
+ - Each sub-topic runs a full pipeline with its own project directory
+ - The main agent collects results and produces a cross-topic comparative analysis
+ - Ideal for quickly scouting multiple directions during the topic-selection phase of a survey paper
+
+ ---
+
+ ### Scenario 8 — Interactive Paper Reading Assistant
+
+ > *"Walk me through 'Attention Is All You Need' section by section, explain every formula"*
+
+ Combine OpenClaw's conversational interface with Scientify's `paper_browser` tool for interactive deep reading:
+
+ - `paper_browser` loads papers page-by-page, avoiding context overflow
+ - Discuss section by section: LLM explains derivations, compares with related work, highlights contributions
+ - Follow up on implementation details — LLM uses `github_search` to find corresponding open-source code
+ - All analysis notes are saved to `notes/paper_{id}.md`
 
+ ---
+
+ ### Scenario 9 — Paper-to-Reproducible-Experiment
+
+ > *"Reproduce the results from Table 2 of this paper"*
+
+ End-to-end automation: understand paper → implement code → run experiment → compare results:
+
+ ```mermaid
+ flowchart LR
+ A["paper_browser\nDeep read paper"] --> B["research-plan\nExtract experiment design"]
+ B --> C["research-implement\nWrite code"]
+ C --> D["research-experiment\nRun experiment"]
+ D --> E["Compare with\npaper's Table 2"]
  ```
- You: Read papers/2401.12345/ and compare its method with my idea
 
- Agent: [Reading paper .tex files]
+ 1. `paper_browser` reads the method and experiment sections in detail
+ 2. `research-plan` extracts experiment config (hyperparameters, datasets, metrics)
+ 3. `research-implement` generates code and validates in a `uv`-isolated environment
+ 4. `research-experiment` runs the full experiment
+ 5. LLM automatically compares your results against the paper's reported numbers
 
- 📄 Paper: "Efficient Attention for Long Documents"
+ ---
 
- ## Comparison with Your Idea
+ ## Prerequisites
 
- | Aspect | Paper Method | Your Method |
- |--------|-------------|-------------|
- | Attention | Sparse Attention | Hierarchical Attention |
- | Complexity | O(n√n) | O(n log n) |
- | Advantage | Simple implementation | Preserves more info |
+ - **Node.js** >= 18
+ - **Python 3** + **uv** (for ML code execution)
+ - **git**
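A quick way to confirm these tools are on your `PATH` before installing (a shell sketch; the version requirements themselves are as listed above):

```shell
# Report which prerequisite tools are installed
for tool in node python3 uv git; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "ok: $tool"
  else
    echo "missing: $tool"
  fi
done
```

Run `node --version` afterwards to confirm you are on v18 or later.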
 
- ## Suggestions
- Consider combining both approaches...
+ ---
+
+ ## Install OpenClaw
+
+ ```bash
+ # Install OpenClaw globally
+ pnpm add -g openclaw # or: npm install -g openclaw
+
+ # Run onboarding wizard (configures model provider, API key, workspace)
+ openclaw onboard
+
+ # Start the gateway (runs the WebUI server)
+ openclaw gateway
  ```
 
- ### 6. Related Work Writer
+ After `openclaw gateway`, the WebUI is available at **http://127.0.0.1:18789/** (default port).
+
+ > **Proxy users:** If you have `http_proxy` set, exempt `127.0.0.1` from proxying (for example, set `no_proxy=127.0.0.1`) or configure your browser's proxy exclusions accordingly.
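For example, in a POSIX shell (`no_proxy` is the widely honored convention; some tools read `NO_PROXY` instead):

```shell
# Exempt the local WebUI address from the HTTP proxy
export no_proxy=127.0.0.1
```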
+
+ ---
 
+ ## Install Scientify
+
+ ### From npm (recommended)
+
+ ```bash
+ openclaw plugins install scientify
  ```
- You: Write Related Work section based on papers in my project
 
- Agent: [Reading papers/ and literature/notes/]
+ The plugin installs to `~/.openclaw/extensions/scientify/` and is automatically enabled.
 
- Generating related_work.tex:
+ ### From source (development)
 
- \section{Related Work}
+ ```bash
+ git clone https://github.com/user/scientify.git
+ cd scientify && pnpm install && pnpm build
+
+ # Link as dev plugin
+ openclaw plugins install -l ./
+ ```
 
- \subsection{Long Document Summarization}
- Recent works have explored various approaches...
- \cite{paper1} proposed...
- \cite{paper2} extended this by...
+ ### Verify installation
 
- \subsection{Efficient Transformers}
- ...
+ ```bash
+ openclaw plugins list
+ # Should show: scientify (enabled)
  ```
 
- ### 7. Idea Mashup
+ After installation, **restart the gateway** to load the plugin:
 
+ ```bash
+ # Stop the running gateway (Ctrl+C), then:
+ openclaw gateway
  ```
- You: Combine "reinforcement learning" and "text summarization" to spark new ideas
 
- Agent: [Searching papers from both fields]
+ ---
 
- 🔥 Mashup Results:
+ ## Usage via WebUI
 
- 1. **RL-guided Summarization**
- Use RL to optimize readability and information coverage
- References: [2301.xxx], [2302.xxx]
+ ### 1. Open the WebUI
 
- 2. **Interactive Summarization Agent**
- User feedback as reward signal for iterative optimization
+ Navigate to **http://127.0.0.1:18789/** in your browser.
 
- 3. ...
+ ### 2. Start a research task
+
+ Type a research prompt in the chat. Scientify skills are auto-matched by the LLM:
+
+ ```
+ Research "transformer efficiency" and generate some innovative ideas
  ```
 
- ### 8. Experiment Design Helper
+ Or invoke a specific skill directly with a slash command:
 
  ```
- You: My idea is xxx, help me design experiments
+ /research-pipeline
+ /literature-survey
+ /idea-generation
+ ```
 
- Agent: [Reading selected_idea.md and related papers]
+ ### 3. Monitor sub-agent progress
 
- ## Experiment Design
+ When the orchestrator spawns sub-agents, you'll see:
+ - **Spawn notification** — "Phase 1: Literature Survey started"
+ - **Completion announcement** — automatic message when the sub-agent finishes
+ - **Progress updates** — the orchestrator verifies outputs and advances to the next phase
 
- ### Datasets
- 1. CNN/DailyMail - Standard news summarization (287k samples)
- 2. arXiv - Long scientific papers (215k samples)
- 3. ...
+ You can also check status anytime with:
 
- ### Baselines
- 1. BART-large (ref: paper_001.md)
- 2. LED (ref: paper_003.md)
+ ```
+ /research-status
+ ```
 
- ### Metrics
- - ROUGE-1/2/L
- - BERTScore
- - Human evaluation: fluency, information coverage
+ ### 4. Manage projects
 
- ### Ablation Studies
- 1. Remove xxx module
- 2. ...
  ```
+ /projects # List all projects
+ /project-switch <id> # Switch to a different project
+ /papers # List downloaded papers
+ /ideas # List generated ideas
+ ```
+
+ ---
+
+ ## Skills
+
+ ### Pipeline Skills (LLM-powered)
+
+ | Skill | Slash Command | Description |
+ |-------|---------------|-------------|
+ | **research-pipeline** | `/research-pipeline` | Orchestrator. Spawns sub-agents for each phase, verifies outputs between steps. |
+ | **literature-survey** | `/literature-survey` | Search arXiv → filter → download .tex sources → cluster → generate survey report. |
+ | **research-survey** | `/research-survey` | Deep analysis of papers: extract formulas, map to code, produce method comparison table. |
+ | **research-plan** | `/research-plan` | Create 4-part implementation plan (Dataset/Model/Training/Testing) from survey results. |
+ | **research-implement** | `/research-implement` | Implement ML code from plan, run 2-epoch validation with `uv` venv isolation. |
+ | **research-review** | `/research-review` | Review implementation. Iterates fix → rerun → review up to 3 times. |
+ | **research-experiment** | `/research-experiment` | Full training + ablation experiments. Requires review PASS. |
+ | **idea-generation** | `/idea-generation` | Generate 5 innovative research ideas from a topic, select and enhance the best one. |
+
+ ### Standalone Skills
+
+ | Skill | Description |
+ |-------|-------------|
+ | **write-review-paper** | Draft a review/survey paper from project research outputs. |
+
+ ### Tools (available to LLM)
+
+ | Tool | Description |
+ |------|-------------|
+ | `arxiv_search` | Search arXiv papers. Returns metadata (title, authors, abstract, ID). Does not download files. Supports sorting by relevance/date and date filtering. |
+ | `arxiv_download` | Batch download papers by arXiv ID. Prefers .tex source files (PDF fallback). Requires absolute output directory path. |
+ | `openalex_search` | Search cross-disciplinary academic papers via OpenAlex API. Returns DOI, authors, citation count, OA status. Broader coverage than arXiv. |
+ | `unpaywall_download` | Download open-access PDFs by DOI via Unpaywall API. Non-OA papers are silently skipped (no failure). |
+ | `github_search` | Search GitHub repositories. Returns repo name, description, stars, URL. Supports language filtering and sorting. |
+ | `paper_browser` | Paginated browsing of large paper files (.tex/.md) to avoid loading thousands of lines into context. Returns the specified line range with navigation info. |
+
+ ### Commands (direct, no LLM)
+
+ | Command | Description |
+ |---------|-------------|
+ | `/research-status` | Show workspace status and active project |
+ | `/papers` | List downloaded papers with metadata |
+ | `/ideas` | List generated ideas |
+ | `/projects` | List all projects |
+ | `/project-switch <id>` | Switch active project |
+ | `/project-delete <id>` | Delete a project |
 
  ---
 
  ## Workspace Structure
 
+ All research data is organized under `~/.openclaw/workspace/projects/`:
+
  ```
- ~/.openclaw/workspace/projects/
- ├── .active # Current project ID
- ├── nlp-summarization/ # Project A
- │ ├── project.json # Metadata
- │ ├── task.json # Task definition
- │ ├── survey/
- │ │ ├── search_terms.json # Search terms used
- │ │ └── report.md # Final survey report
+ projects/
+ ├── .active # Current project ID
+ ├── scaling-law-fashion-mnist/ # Example project
+ │ ├── project.json # Metadata
+ │ ├── task.json # Task definition
  │ ├── papers/
- │ │ ├── _downloads/ # Raw downloaded files
- │ │ ├── _meta/ # Paper metadata JSON files
- │ │ └── {arxiv_id}.json
- │ │ └── {direction}/ # Clustered papers by research direction
- │ ├── repos/ # Cloned repos
- │ ├── notes/ # /research-survey: per-paper analysis
+ │ │ ├── _meta/ # Paper metadata (*.json)
+ │ │ └── _downloads/ # Raw .tex/.pdf files
+ │ ├── survey/
+ │ │ └── report.md # Literature survey report
+ │ ├── notes/ # Per-paper deep analysis
  │ │ └── paper_{arxiv_id}.md
- │ ├── survey_res.md # /research-survey: method comparison
- │ ├── plan_res.md # /research-plan: implementation plan
- │ ├── project/ # /research-implement: ML code
- │ │ ├── model/
- │ │ ├── data/
+ │ ├── survey_res.md # Method comparison table
+ │ ├── plan_res.md # Implementation plan
+ │ ├── project/ # ML code
  │ │ ├── run.py
  │ │ └── requirements.txt
- │ ├── ml_res.md # /research-implement: execution report
- │ ├── iterations/ # /research-review: judge reports
+ │ ├── ml_res.md # Implementation results
+ │ ├── iterations/ # Review iterations
  │ │ └── judge_v*.md
- │ ├── experiment_res.md # /research-experiment: final results
- │ └── ideas/ # Generated ideas
- │ ├── idea_1.md
- ├── idea_2.md
- │ └── selected_idea.md # Best idea
+ │ ├── experiment_res.md # Final experiment results
+ │ └── ideas/ # Generated ideas
+ │ │ ├── idea_*.md
+ │ │ └── selected_idea.md
  └── another-project/
  ```
 
@@ -256,58 +384,53 @@ Agent: [Reading selected_idea.md and related papers]
 
  ## Configuration
 
- After installation, the plugin is automatically enabled. You can customize settings in `~/.openclaw/openclaw.json`:
+ Plugin settings in `~/.openclaw/openclaw.json`:
 
  ```json
  {
  "plugins": {
  "entries": {
  "scientify": {
- "enabled": true,
- "workspaceRoot": "~/my-research",
- "defaultMaxPapers": 15
+ "enabled": true
  }
  }
  }
  }
  ```
 
- ### Plugin Management
+ ### Plugin management
 
  ```bash
- # List installed plugins
- openclaw plugins list
-
- # Disable plugin
- openclaw plugins disable scientify
-
- # Enable plugin
- openclaw plugins enable scientify
-
- # Update to latest version
- openclaw plugins update scientify
+ openclaw plugins list # List installed plugins
+ openclaw plugins enable scientify # Enable
+ openclaw plugins disable scientify # Disable
+ openclaw plugins update scientify # Update to latest
+ openclaw plugins doctor # Diagnose issues
  ```
 
  ---
 
  ## Known Limitations
 
- ### Sandbox & GPU
-
- The `research-pipeline` skill's code execution step depends on your OpenClaw agent configuration:
-
- - If `sandbox.mode: "off"` (default for CLI), commands run directly on host
- - Current sandbox does NOT support GPU (`--gpus`) or custom shared memory (`--shm-size`)
-
- For GPU-accelerated ML training, consider:
- 1. Running outside sandbox (configure agent with `sandbox.mode: "off"`)
- 2. Using a dedicated cloud GPU instance
- 3. Waiting for OpenClaw GPU support
+ - **Sub-agent timeout**: Each sub-agent has a 30-minute timeout (`runTimeoutSeconds: 1800`). Complex literature surveys with many papers may need longer.
+ - **GPU/Sandbox**: Code execution runs on host by default. OpenClaw sandbox does not support GPU passthrough yet.
+ - **Model dependency**: Research quality depends heavily on the LLM model used. Claude Opus 4.5+ or GPT-5+ recommended.
 
307
419
  ---
308
420
 
309
421
  ## Development
310
422
 
423
+ ```bash
424
+ git clone https://github.com/user/scientify.git
425
+ cd scientify
426
+ pnpm install
427
+ pnpm build # Build TypeScript
428
+ pnpm dev # Watch mode
429
+
430
+ # Link to OpenClaw for testing
431
+ openclaw plugins install -l ./
432
+ ```
433
+
311
434
  See [CLAUDE.md](./CLAUDE.md) for version update SOP and contribution guide.
312
435
 
313
436
  ---