open-research 0.1.1 → 0.1.2

This diff shows the content of publicly available package versions as released to one of the supported registries. It is provided for informational purposes only and reflects the changes between versions as they appear in the public registries.
Files changed (3)
  1. package/README.md +65 -93
  2. package/dist/cli.js +250 -31
  3. package/package.json +1 -1
package/README.md CHANGED
@@ -1,137 +1,109 @@
- # Open Research
+ <p align="center">
+ <img src="assets/hero-banner.png" alt="Open Research" width="720" />
+ </p>
 
- Local-first research CLI agent. Discover papers, synthesize notes, run analysis, and draft artifacts from your terminal.
+ <h3 align="center">The research-native CLI agent.</h3>
 
- ## Install
+ <p align="center">
+ <a href="https://www.npmjs.com/package/open-research"><img src="https://img.shields.io/npm/v/open-research.svg" alt="npm" /></a>
+ <a href="https://github.com/gangj277/open-research/blob/main/LICENSE"><img src="https://img.shields.io/npm/l/open-research.svg" alt="license" /></a>
+ </p>
+
+ <p align="center">
+ <img src="assets/workflow-concept.png" alt="Papers → Analysis → Synthesis → Code" width="620" />
+ </p>
 
- Requires Node.js 20+.
+ ## Install
 
- **curl**
  ```bash
+ # curl
  curl -fsSL https://raw.githubusercontent.com/gangj277/open-research/main/install.sh | bash
  ```
 
- **npm**
  ```bash
+ # npm
  npm install -g open-research
  ```
 
- **bun**
  ```bash
+ # bun
  bun install -g open-research
  ```
 
- **pnpm**
  ```bash
+ # pnpm
  pnpm add -g open-research
  ```
 
- **yarn**
- ```bash
- yarn global add open-research
- ```
-
- **npx** (no install, runs latest)
  ```bash
+ # npx (no install)
  npx open-research
  ```
 
- ## Quick Start
+ > [!TIP]
+ > Requires Node.js 20+. Run `node -v` to check.
+
+ ## Usage
 
  ```bash
- # Launch the TUI
  open-research
+ ```
 
- # Connect your OpenAI account (inside the TUI)
- /auth
-
- # Initialize a workspace
- /init
+ Inside the TUI:
 
- # Start researching
- > What are the latest advances in transformer attention mechanisms?
+ ```
+ /auth Connect your OpenAI account
+ /init Initialize a workspace
+ /help Show all commands
  ```
 
- ## What It Does
-
- Open Research is an AI-powered research agent that runs in your terminal. It connects to OpenAI's API and gives you a full research workflow:
+ Then ask anything:
 
- - **Discover papers** across arXiv, Semantic Scholar, and OpenAlex
- - **Read and analyze** PDFs, datasets, and web pages
- - **Run code** Python scripts, R analysis, LaTeX compilation, anything
- - **Write artifacts** — notes, syntheses, paper drafts grounded in sources
- - **Review changes** — risky edits go to a review queue for your approval
+ ```
+ > Find the most-cited papers on transformer attention since 2022
+ and identify gaps in the literature
+ ```
 
- ## Tools
+ The agent searches arXiv, Semantic Scholar, and OpenAlex — reads papers, runs analysis scripts, writes source-grounded notes, and drafts artifacts in your local workspace.
 
- The agent has access to:
+ ## How is this different from Cursor / Claude Code?
 
- | Tool | What it does |
- |---|---|
- | `read_file` | Read any file on disk (text, with binary detection) |
- | `read_pdf` | Extract text from PDFs |
- | `list_directory` | Explore directory trees |
- | `run_command` | Execute shell commands (python, R, LaTeX, curl, etc.) |
- | `search_workspace` | Search across workspace files |
- | `write_new_file` | Create new workspace files |
- | `update_existing_file` | Edit existing files |
- | `search_external_sources` | Search academic paper databases |
- | `fetch_url` | Fetch web pages and APIs |
- | `ask_user` | Ask you questions when clarification is needed |
- | `load_skill` | Activate research skills |
- | `create_paper` | Create LaTeX paper drafts |
+ Those are coding agents. Open Research is a **research agent**.
 
- ## Slash Commands
+ It has tools that coding agents don't: federated academic paper search, PDF extraction, source-grounded synthesis, and pluggable research skills (devil's advocate, methodology critic, experiment designer, etc.).
 
- | Command | Description |
- |---|---|
- | `/auth` | Connect OpenAI account via browser |
- | `/auth-codex` | Import existing Codex CLI auth |
- | `/init` | Initialize workspace in current directory |
- | `/skills` | List available research skills |
- | `/config` | View or change settings |
- | `/clear` | Start a new conversation |
- | `/help` | Show all commands |
- | `/exit` | Quit |
+ Everything stays local. Your workspace is a directory with `sources/`, `notes/`, `papers/`, `experiments/`. The agent reads and writes to it. Risky edits go to a review queue.
 
  ## Skills
 
- Built-in research skills that guide the agent's methodology:
-
- - **source-scout** — Find citation gaps and discover relevant papers
- - **devils-advocate** — Stress-test claims and assumptions
- - **methodology-critic** — Critique research methodology
- - **evidence-adjudicator** — Evaluate evidence quality
- - **experiment-designer** — Design experiments and studies
- - **draft-paper** — Draft LaTeX papers from workspace evidence
- - **paper-explainer** — Explain complex papers
- - **synthesis-updater** — Update research syntheses
- - **skill-creator** — Create custom skills
-
- Type `/skill-name` in the TUI to activate any skill, or create your own in `~/.open-research/skills/`.
+ Built-in research methodologies. Type `/skill-name` to activate:
 
- ## Workspace Structure
+ - **source-scout** — find citation gaps, discover papers
+ - **devils-advocate** — stress-test claims and assumptions
+ - **methodology-critic** — critique research methodology
+ - **evidence-adjudicator** — evaluate evidence quality
+ - **experiment-designer** — design experiments
+ - **draft-paper** — draft LaTeX papers from workspace evidence
+ - **paper-explainer** — explain complex papers
+ - **synthesis-updater** — update syntheses with new findings
 
- ```
- my-research/
- sources/ # PDFs, papers, raw data
- notes/ # Research notes and briefs
- artifacts/ # Generated outputs
- papers/ # LaTeX paper drafts
- experiments/ # Analysis scripts and results
- .open-research/ # Workspace metadata
- ```
+ Create custom skills in `~/.open-research/skills/`.
 
- ## Features
+ ## Tools
 
- - **Markdown rendering** in terminal output (bold, italic, code blocks, lists, headings)
- - **Slash command autocomplete** with arrow-key navigation
- - **@file mentions** to reference workspace files inline
- - **Shift+Enter** for multi-line input
- - **Context management**automatic compaction when conversation gets long
- - **Token tracking** see context usage in the status bar
- - **Tool activity streaming** see what the agent is doing in real-time
- - **Review queue** risky edits require your approval before applying
+ | Tool | Description |
+ |---|---|
+ | `read_file` | Read any file with streaming, binary detection |
+ | `read_pdf` | Extract text from PDFs |
+ | `run_command` | Shell execution Python, R, LaTeX, anything |
+ | `list_directory` | Explore directory trees |
+ | `search_external_sources` | arXiv + Semantic Scholar + OpenAlex |
+ | `fetch_url` | Fetch web pages and APIs |
+ | `write_new_file` | Create workspace files |
+ | `update_existing_file` | Edit with review policy |
+ | `ask_user` | Pause and ask for clarification |
+ | `search_workspace` | Full-text search across files |
+ | `create_paper` | Create LaTeX drafts |
 
  ## Development
 
@@ -139,9 +111,9 @@ my-research/
  git clone https://github.com/gangj277/open-research.git
  cd open-research
  npm install
- npm run dev # Run in dev mode
- npm test # Run tests
- npm run build # Build for production
+ npm run dev # dev mode
+ npm test # 63 tests
+ npm run build # production build
  ```
 
  ## License
package/dist/cli.js CHANGED
@@ -2,7 +2,7 @@
 
  // src/cli.ts
  import React4 from "react";
- import path17 from "path";
+ import path18 from "path";
  import { Command } from "commander";
  import { render } from "ink";
 
@@ -848,7 +848,7 @@ async function ensureOpenResearchConfig(options) {
  }
 
  // src/tui/app.tsx
- import path16 from "path";
+ import path17 from "path";
  import {
  startTransition,
  useDeferredValue,
@@ -4496,7 +4496,7 @@ var MODEL_CONTEXT_WINDOWS = {
  "gpt-5.1": 272e3,
  "gpt-5": 272e3,
  "gpt-4o": 128e3,
- "gpt-4o-mini": 128e3,
+ "gpt-5.4-mini": 128e3,
  "o3": 2e5,
  "o4-mini": 2e5
  };
@@ -4610,6 +4610,177 @@ async function maybeCompact(messages, model, provider, usage, signal) {
  return { messages: compacted, didCompact: true };
  }
 
+ // src/lib/memory/store.ts
+ import fs14 from "fs/promises";
+ import path13 from "path";
+ function getMemoryFile(options) {
+ return path13.join(getOpenResearchRoot(options), "memory.json");
+ }
+ async function loadMemories(options) {
+ const file = getMemoryFile(options);
+ try {
+ const raw = await fs14.readFile(file, "utf8");
+ const store = JSON.parse(raw);
+ return store.memories ?? [];
+ } catch {
+ return [];
+ }
+ }
+ async function saveMemories(memories, options) {
+ const file = getMemoryFile(options);
+ await fs14.mkdir(path13.dirname(file), { recursive: true });
+ const store = { version: 1, memories };
+ await fs14.writeFile(file, JSON.stringify(store, null, 2), "utf8");
+ }
+ var MAX_MEMORIES = 100;
+ async function addMemory(memory, options) {
+ const memories = await loadMemories(options);
+ const existing = memories.find((m) => {
+ const a = m.content.toLowerCase().replace(/\s+/g, " ");
+ const b = memory.content.toLowerCase().replace(/\s+/g, " ");
+ const wordsA = new Set(a.split(" "));
+ const wordsB = new Set(b.split(" "));
+ const intersection = [...wordsA].filter((w) => wordsB.has(w));
+ const similarity = intersection.length / Math.max(wordsA.size, wordsB.size);
+ return similarity > 0.7;
+ });
+ if (existing) {
+ existing.lastRelevantAt = (/* @__PURE__ */ new Date()).toISOString();
+ existing.relevanceCount++;
+ if (memory.content.length > existing.content.length) {
+ existing.content = memory.content;
+ }
+ await saveMemories(memories, options);
+ return existing;
+ }
+ const newMemory = {
+ id: crypto.randomUUID(),
+ content: memory.content,
+ category: memory.category,
+ createdAt: (/* @__PURE__ */ new Date()).toISOString(),
+ lastRelevantAt: (/* @__PURE__ */ new Date()).toISOString(),
+ relevanceCount: 1
+ };
+ memories.push(newMemory);
+ if (memories.length > MAX_MEMORIES) {
+ memories.sort((a, b) => {
+ const aScore = new Date(a.lastRelevantAt).getTime() + a.relevanceCount * 864e5;
+ const bScore = new Date(b.lastRelevantAt).getTime() + b.relevanceCount * 864e5;
+ return bScore - aScore;
+ });
+ memories.length = MAX_MEMORIES;
+ }
+ await saveMemories(memories, options);
+ return newMemory;
+ }
+ async function deleteMemory(id, options) {
+ const memories = await loadMemories(options);
+ const idx = memories.findIndex((m) => m.id === id);
+ if (idx === -1) return false;
+ memories.splice(idx, 1);
+ await saveMemories(memories, options);
+ return true;
+ }
+ async function clearMemories(options) {
+ await saveMemories([], options);
+ }
+ function formatMemoriesForPrompt(memories) {
+ if (memories.length === 0) return "";
+ const grouped = {};
+ for (const m of memories) {
+ (grouped[m.category] ??= []).push(m);
+ }
+ const sections = ["## What I Remember About You"];
+ const categoryLabels = {
+ user: "About you",
+ preference: "Your preferences",
+ project: "Your projects",
+ methodology: "Methodology preferences",
+ context: "Context"
+ };
+ for (const [cat, mems] of Object.entries(grouped)) {
+ sections.push(`**${categoryLabels[cat] ?? cat}:**`);
+ for (const m of mems) {
+ sections.push(`- ${m.content}`);
+ }
+ }
+ return sections.join("\n");
+ }
+
+ // src/lib/memory/extractor.ts
+ var EXTRACTION_PROMPT = `You are a memory extraction system. Your job is to identify facts worth remembering about the user from a conversation exchange.
+
+ Focus on:
+ - Who they are (role, field, institution, expertise level)
+ - What they're working on (current research projects, topics, deadlines)
+ - How they prefer to work (preferred tools, languages, writing style, methodologies)
+ - Methodological preferences (statistical approaches, theoretical frameworks, citation style)
+ - Important context (collaborators, advisors, publication targets, funding constraints)
+
+ Rules:
+ - Only extract facts that would be useful in FUTURE conversations
+ - Be specific and concise \u2014 each memory should be one clear fact
+ - Do NOT extract task-specific details that only matter for the current conversation
+ - Do NOT extract obvious things ("user asked about papers" is not useful)
+ - If there is nothing meaningful to remember, return an empty array
+ - Maximum 3 new memories per exchange
+
+ Existing memories (do not duplicate these):
+ {EXISTING_MEMORIES}
+
+ Respond with a JSON array of objects, each with "content" (string) and "category" (one of: "user", "preference", "project", "methodology", "context"). If nothing worth remembering, respond with [].
+
+ Example response:
+ [{"content": "PhD student in computational neuroscience at MIT", "category": "user"}, {"content": "Prefers Python with statsmodels for statistical analysis over R", "category": "preference"}]`;
+ async function extractMemories(input2) {
+ const existing = await loadMemories({ homeDir: input2.homeDir });
+ if (input2.userMessage.startsWith("/") || input2.userMessage.length < 20) {
+ return [];
+ }
+ const existingList = existing.length > 0 ? existing.map((m) => `- [${m.category}] ${m.content}`).join("\n") : "(none)";
+ const prompt2 = EXTRACTION_PROMPT.replace("{EXISTING_MEMORIES}", existingList);
+ const conversationSnippet = [
+ `User: ${input2.userMessage.slice(0, 2e3)}`,
+ `Agent: ${input2.agentResponse.slice(0, 2e3)}`
+ ].join("\n\n");
+ try {
+ const response = await input2.provider.callLLM({
+ messages: [
+ { role: "system", content: prompt2 },
+ { role: "user", content: conversationSnippet }
+ ],
+ model: input2.model ?? "gpt-5.4-mini",
+ maxTokens: 500,
+ temperature: 0
+ });
+ const raw = response.content.trim();
+ const jsonStr = raw.startsWith("```") ? raw.replace(/^```(?:json)?\n?/, "").replace(/\n?```$/, "") : raw;
+ const parsed = JSON.parse(jsonStr);
+ if (!Array.isArray(parsed)) return [];
+ const valid = [];
+ for (const item of parsed) {
+ if (typeof item.content === "string" && item.content.length > 5 && ["user", "preference", "project", "methodology", "context"].includes(item.category)) {
+ valid.push({
+ content: item.content,
+ category: item.category
+ });
+ }
+ }
+ return valid.slice(0, 3);
+ } catch {
+ return [];
+ }
+ }
+ async function extractAndStoreMemories(input2) {
+ const extracted = await extractMemories(input2);
+ const stored = [];
+ for (const mem of extracted) {
+ const saved = await addMemory(mem, { homeDir: input2.homeDir });
+ stored.push(saved);
+ }
+ return stored;
+ }
+
  // src/lib/agent/runtime.ts
  var TOOL_DESCRIPTIONS = {
  read_file: (a) => `Reading ${a.file_path ?? "file"}`,
@@ -4696,8 +4867,11 @@ async function runAgentTurn(input2) {
  const systemPrompt = isPlanning ? buildPlanningSystemPrompt(input2.workspace, activeSkills) : buildSystemPrompt(input2.workspace, activeSkills);
  const model = input2.model ?? "gpt-5.4";
  const usage = input2.sessionUsage ?? createSessionUsage();
+ const memories = await loadMemories({ homeDir: input2.homeDir });
+ const memoryBlock = formatMemoriesForPrompt(memories);
+ const fullSystemPrompt = memoryBlock ? systemPrompt + "\n\n" + memoryBlock : systemPrompt;
  let messages = [
- { role: "system", content: systemPrompt },
+ { role: "system", content: fullSystemPrompt },
  ...input2.history,
  { role: "user", content: input2.message }
  ];
@@ -4750,6 +4924,18 @@ async function runAgentTurn(input2) {
  detectedCharter = charterMatch[1].trim();
  }
  }
+ extractAndStoreMemories({
+ userMessage: input2.message,
+ agentResponse: fullText,
+ provider: input2.provider,
+ model: "gpt-5.4-mini",
+ homeDir: input2.homeDir
+ }).then((stored) => {
+ if (stored.length > 0) {
+ input2.onMemoryExtracted?.(stored.map((m) => m.content));
+ }
+ }).catch(() => {
+ });
  return {
  text: fullText,
  proposedUpdates,
@@ -4841,8 +5027,8 @@ function classifyUpdateRisk(update) {
  }
 
  // src/lib/workspace/apply-update.ts
- import fs14 from "fs/promises";
- import path13 from "path";
+ import fs15 from "fs/promises";
+ import path14 from "path";
  function resolveRelativePath(update) {
  if (update.key.startsWith("path:")) {
  return update.key.slice(5);
@@ -4863,20 +5049,20 @@ function resolveRelativePath(update) {
  }
  }
  async function applyProposedUpdate(workspaceDir, update) {
  const relativePath = resolveRelativePath(update);
- const absolutePath = path13.join(workspaceDir, relativePath);
- await fs14.mkdir(path13.dirname(absolutePath), { recursive: true });
- await fs14.writeFile(absolutePath, update.content, "utf8");
+ const absolutePath = path14.join(workspaceDir, relativePath);
+ await fs15.mkdir(path14.dirname(absolutePath), { recursive: true });
+ await fs15.writeFile(absolutePath, update.content, "utf8");
  return absolutePath;
  }
 
  // src/lib/workspace/sessions.ts
- import fs15 from "fs/promises";
- import path14 from "path";
+ import fs16 from "fs/promises";
+ import path15 from "path";
  async function appendSessionEvent(workspaceDir, sessionId, event) {
  const sessionsDir = getWorkspaceSessionsDir(workspaceDir);
- await fs15.mkdir(sessionsDir, { recursive: true });
- const sessionFile = path14.join(sessionsDir, `${sessionId}.jsonl`);
- await fs15.appendFile(sessionFile, `${JSON.stringify(event)}
+ await fs16.mkdir(sessionsDir, { recursive: true });
+ const sessionFile = path15.join(sessionsDir, `${sessionId}.jsonl`);
+ await fs16.appendFile(sessionFile, `${JSON.stringify(event)}
  `, "utf8");
  }
  function parseEvents(raw) {
@@ -4892,7 +5078,7 @@ async function listSessions(workspaceDir) {
  const sessionsDir = getWorkspaceSessionsDir(workspaceDir);
  let files;
  try {
- files = await fs15.readdir(sessionsDir);
+ files = await fs16.readdir(sessionsDir);
  } catch {
  return [];
  }
@@ -4900,7 +5086,7 @@ async function listSessions(workspaceDir) {
  for (const file of files) {
  if (!file.endsWith(".jsonl")) continue;
  const id = file.replace(/\.jsonl$/, "");
- const raw = await fs15.readFile(path14.join(sessionsDir, file), "utf8");
+ const raw = await fs16.readFile(path15.join(sessionsDir, file), "utf8");
  const events = parseEvents(raw);
  if (events.length === 0) continue;
  const chatTurns = events.filter((e) => e.type === "chat.turn");
@@ -4921,8 +5107,8 @@ async function listSessions(workspaceDir) {
  }
  async function loadSessionHistory(workspaceDir, sessionId) {
  const sessionsDir = getWorkspaceSessionsDir(workspaceDir);
- const sessionFile = path14.join(sessionsDir, `${sessionId}.jsonl`);
- const raw = await fs15.readFile(sessionFile, "utf8");
+ const sessionFile = path15.join(sessionsDir, `${sessionId}.jsonl`);
+ const raw = await fs16.readFile(sessionFile, "utf8");
  const events = parseEvents(raw);
  const messages = [];
  const llmHistory = [];
@@ -5034,23 -5220,23 @@ function ConfigScreen({ items, onUpdate, onClose }) {
  }
 
  // src/lib/cli/update-check.ts
- import fs16 from "fs/promises";
- import path15 from "path";
+ import fs17 from "fs/promises";
+ import path16 from "path";
  import os4 from "os";
  var PACKAGE_NAME = "open-research";
  var CHECK_INTERVAL_MS = 4 * 60 * 60 * 1e3;
- var STATE_FILE = path15.join(os4.homedir(), ".open-research", "update-check.json");
+ var STATE_FILE = path16.join(os4.homedir(), ".open-research", "update-check.json");
  async function readState() {
  try {
- const raw = await fs16.readFile(STATE_FILE, "utf8");
+ const raw = await fs17.readFile(STATE_FILE, "utf8");
  return JSON.parse(raw);
  } catch {
  return { lastCheck: 0, latestVersion: null };
  }
  }
  async function writeState(state) {
- await fs16.mkdir(path15.dirname(STATE_FILE), { recursive: true });
- await fs16.writeFile(STATE_FILE, JSON.stringify(state), "utf8");
+ await fs17.mkdir(path16.dirname(STATE_FILE), { recursive: true });
+ await fs17.writeFile(STATE_FILE, JSON.stringify(state), "utf8");
  }
  function getCurrentVersion() {
  try {
@@ -5120,6 +5306,7 @@ var SLASH_COMMANDS = [
  { name: "clear", aliases: ["/new"], description: "Clear conversation and start fresh", category: "session" },
  { name: "help", aliases: ["/commands"], description: "Show available commands", category: "system" },
  { name: "config", aliases: ["/settings"], description: "View or change settings (e.g. /config theme dark)", category: "system" },
+ { name: "memory", aliases: ["/memories"], description: "View or clear stored memories about you", category: "system" },
  { name: "exit", aliases: ["/quit", "/q"], description: "Exit Open Research", category: "system" }
  ];
  function matchSlashCommand(input2) {
@@ -5945,6 +6132,33 @@ function App({
  addSystemMessage(" Esc unfocus prompt");
  break;
  }
+ case "memory": {
+ if (args === "clear") {
+ await clearMemories({ homeDir });
+ addSystemMessage("All memories cleared.");
+ break;
+ }
+ if (args.startsWith("delete ")) {
+ const memId = args.slice(7).trim();
+ const deleted = await deleteMemory(memId, { homeDir });
+ addSystemMessage(deleted ? `Deleted memory ${memId.slice(0, 8)}...` : "Memory not found.");
+ break;
+ }
+ const mems = await loadMemories({ homeDir });
+ if (mems.length === 0) {
+ addSystemMessage("No memories stored yet. I'll learn about you as we talk.");
+ } else {
+ addSystemMessage(`${mems.length} memories:`);
+ for (const m of mems) {
+ addSystemMessage(` [${m.category}] ${m.content}`);
+ addSystemMessage(` id: ${m.id.slice(0, 8)}... \xB7 reinforced ${m.relevanceCount}x`);
+ }
+ addSystemMessage("");
+ addSystemMessage(" /memory clear \u2014 delete all");
+ addSystemMessage(" /memory delete <id> \u2014 delete one");
+ }
+ break;
+ }
  case "exit": {
  app.exit();
  break;
@@ -6156,6 +6370,11 @@ function App({
  addSystemMessage(` \u2713 ${activity.description ?? activity.name}${dur}`);
  }
  },
+ onMemoryExtracted: (mems) => {
+ for (const m of mems) {
+ addSystemMessage(` \u25CA remembered: ${m}`);
+ }
+ },
  onCompaction: () => {
  addSystemMessage(" \u25CA Context compacted \u2014 older messages summarized");
  },
@@ -6588,7 +6807,7 @@ function App({
  statusParts,
  statusColor,
  tokenDisplay,
- workspaceName: hasWorkspace ? path16.basename(workspacePath) : process.cwd(),
+ workspaceName: hasWorkspace ? path17.basename(workspacePath) : process.cwd(),
  mode: agentMode,
  planningStatus: planningState.status
  }
@@ -6600,7 +6819,7 @@ function App({
  var program = new Command();
  program.name("open-research").description("Local-first research CLI powered by ChatGPT/Codex auth.").argument("[workspacePath]", "Optional workspace path to open").action(async (workspacePath) => {
  await ensureOpenResearchConfig();
- const target = workspacePath ? path17.resolve(workspacePath) : process.cwd();
+ const target = workspacePath ? path18.resolve(workspacePath) : process.cwd();
  const project = await loadWorkspaceProject(target);
  const auth2 = await loadStoredAuth();
  render(
@@ -6622,7 +6841,7 @@ program.name("open-research").description("Local-first research CLI powered by C
  });
  program.command("init").argument("[workspacePath]").description("Initialize an Open Research workspace.").action(async (workspacePath) => {
  await ensureOpenResearchConfig();
- const target = path17.resolve(workspacePath ?? process.cwd());
+ const target = path18.resolve(workspacePath ?? process.cwd());
  const project = await initWorkspace({ workspaceDir: target });
  console.log(`Initialized workspace: ${target}`);
  console.log(`Title: ${project.title}`);
@@ -6691,8 +6910,8 @@ skills.command("create").argument("[name]").description("Scaffold a new user ski
  });
  skills.command("edit").argument("<name>").description("Open a user skill in $EDITOR.").action(async (name) => {
  await ensureOpenResearchConfig();
- const skillDir = path17.join(getOpenResearchSkillsDir(), name);
- openInEditor(path17.join(skillDir, "SKILL.md"));
+ const skillDir = path18.join(getOpenResearchSkillsDir(), name);
+ openInEditor(path18.join(skillDir, "SKILL.md"));
  const validation = await validateSkillDirectory({ skillDir });
  if (!validation.ok) {
  console.error(validation.errors.join("\n"));
@@ -6703,9 +6922,9 @@ skills.command("edit").argument("<name>").description("Open a user skill in $EDI
  });
  skills.command("validate").argument("[nameOrPath]").description("Validate one user skill.").action(async (nameOrPath) => {
  await ensureOpenResearchConfig();
- const skillDir = nameOrPath ? path17.isAbsolute(nameOrPath) ? nameOrPath : path17.join(getOpenResearchSkillsDir(), nameOrPath) : getOpenResearchSkillsDir();
+ const skillDir = nameOrPath ? path18.isAbsolute(nameOrPath) ? nameOrPath : path18.join(getOpenResearchSkillsDir(), nameOrPath) : getOpenResearchSkillsDir();
  const stat = await import("fs/promises").then(
- (fs17) => fs17.stat(skillDir).catch(() => null)
+ (fs18) => fs18.stat(skillDir).catch(() => null)
  );
  if (!stat) {
  throw new Error(`Skill path not found: ${skillDir}`);
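The headline change in this release is the new memory subsystem added to `dist/cli.js`. Its two core heuristics are visible in the diff: `addMemory` dedupes by word-overlap similarity (shared words divided by the larger word set, merge threshold 0.7), and when the 100-memory cap is exceeded it evicts by a score that treats each reinforcement as one extra day of recency (`relevanceCount * 864e5` ms). A minimal standalone sketch of those two heuristics, paraphrased from the bundled code above (these function names are illustrative, not the package's exports):

```javascript
// Word-overlap similarity as used by addMemory: |A ∩ B| / max(|A|, |B|)
function similarity(a, b) {
  const words = (s) => s.toLowerCase().replace(/\s+/g, " ").split(" ");
  const wordsA = new Set(words(a));
  const wordsB = new Set(words(b));
  const shared = [...wordsA].filter((w) => wordsB.has(w)).length;
  return shared / Math.max(wordsA.size, wordsB.size);
}

// Eviction score as used by the MAX_MEMORIES pruning: last-relevant
// timestamp plus one day (864e5 ms) of credit per reinforcement.
function evictionScore(memory) {
  return new Date(memory.lastRelevantAt).getTime() + memory.relevanceCount * 864e5;
}

// Two near-identical memories score well above the 0.7 merge threshold,
// so the second one reinforces the first instead of being stored twice.
console.log(similarity(
  "Prefers Python with statsmodels for statistical analysis",
  "Prefers Python with statsmodels for analysis"
));
```

One consequence of the `max(|A|, |B|)` denominator is that a short memory that is a strict subset of a much longer one can still fall below 0.7 and be stored separately; the asymmetry is intentional in neither direction, just a cheap heuristic.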
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "open-research",
- "version": "0.1.1",
+ "version": "0.1.2",
  "description": "Local-first research CLI agent — discover papers, synthesize notes, run analysis, and draft artifacts from your terminal.",
  "type": "module",
  "license": "MIT",