open-research 0.1.0 → 0.1.2

This diff shows the published contents of two package versions as they appear in their public registry, and is provided for informational purposes only.
Files changed (3)
  1. package/README.md +79 -81
  2. package/dist/cli.js +322 -25
  3. package/package.json +1 -1
package/README.md CHANGED
@@ -1,111 +1,109 @@
- # Open Research
+ <p align="center">
+ <img src="assets/hero-banner.png" alt="Open Research" width="720" />
+ </p>

- Local-first research CLI agent. Discover papers, synthesize notes, run analysis, and draft artifacts from your terminal.
+ <h3 align="center">The research-native CLI agent.</h3>
+
+ <p align="center">
+ <a href="https://www.npmjs.com/package/open-research"><img src="https://img.shields.io/npm/v/open-research.svg" alt="npm" /></a>
+ <a href="https://github.com/gangj277/open-research/blob/main/LICENSE"><img src="https://img.shields.io/npm/l/open-research.svg" alt="license" /></a>
+ </p>
+
+ <p align="center">
+ <img src="assets/workflow-concept.png" alt="Papers → Analysis → Synthesis → Code" width="620" />
+ </p>

  ## Install

  ```bash
+ # curl
+ curl -fsSL https://raw.githubusercontent.com/gangj277/open-research/main/install.sh | bash
+ ```
+
+ ```bash
+ # npm
  npm install -g open-research
  ```

- Requires Node.js 20+.
+ ```bash
+ # bun
+ bun install -g open-research
+ ```

- ## Quick Start
+ ```bash
+ # pnpm
+ pnpm add -g open-research
+ ```

  ```bash
- # Launch the TUI
- open-research
+ # npx (no install)
+ npx open-research
+ ```

- # Connect your OpenAI account (inside the TUI)
- /auth
+ > [!TIP]
+ > Requires Node.js 20+. Run `node -v` to check.

- # Initialize a workspace
- /init
+ ## Usage

- # Start researching
- > What are the latest advances in transformer attention mechanisms?
+ ```bash
+ open-research
  ```

- ## What It Does
+ Inside the TUI:

- Open Research is an AI-powered research agent that runs in your terminal. It connects to OpenAI's API and gives you a full research workflow:
-
- - **Discover papers** across arXiv, Semantic Scholar, and OpenAlex
- - **Read and analyze** PDFs, datasets, and web pages
- - **Run code** — Python scripts, R analysis, LaTeX compilation, anything
- - **Write artifacts** — notes, syntheses, paper drafts grounded in sources
- - **Review changes** — risky edits go to a review queue for your approval
+ ```
+ /auth Connect your OpenAI account
+ /init Initialize a workspace
+ /help Show all commands
+ ```

- ## Tools
+ Then ask anything:

- The agent has access to:
+ ```
+ > Find the most-cited papers on transformer attention since 2022
+ and identify gaps in the literature
+ ```

- | Tool | What it does |
- |---|---|
- | `read_file` | Read any file on disk (text, with binary detection) |
- | `read_pdf` | Extract text from PDFs |
- | `list_directory` | Explore directory trees |
- | `run_command` | Execute shell commands (python, R, LaTeX, curl, etc.) |
- | `search_workspace` | Search across workspace files |
- | `write_new_file` | Create new workspace files |
- | `update_existing_file` | Edit existing files |
- | `search_external_sources` | Search academic paper databases |
- | `fetch_url` | Fetch web pages and APIs |
- | `ask_user` | Ask you questions when clarification is needed |
- | `load_skill` | Activate research skills |
- | `create_paper` | Create LaTeX paper drafts |
+ The agent searches arXiv, Semantic Scholar, and OpenAlex — reads papers, runs analysis scripts, writes source-grounded notes, and drafts artifacts in your local workspace.

- ## Slash Commands
+ ## How is this different from Cursor / Claude Code?

- | Command | Description |
- |---|---|
- | `/auth` | Connect OpenAI account via browser |
- | `/auth-codex` | Import existing Codex CLI auth |
- | `/init` | Initialize workspace in current directory |
- | `/skills` | List available research skills |
- | `/config` | View or change settings |
- | `/clear` | Start a new conversation |
- | `/help` | Show all commands |
- | `/exit` | Quit |
+ Those are coding agents. Open Research is a **research agent**.

- ## Skills
+ It has tools that coding agents don't: federated academic paper search, PDF extraction, source-grounded synthesis, and pluggable research skills (devil's advocate, methodology critic, experiment designer, etc.).

- Built-in research skills that guide the agent's methodology:
+ Everything stays local. Your workspace is a directory with `sources/`, `notes/`, `papers/`, `experiments/`. The agent reads and writes to it. Risky edits go to a review queue.

- - **source-scout** — Find citation gaps and discover relevant papers
- - **devils-advocate** — Stress-test claims and assumptions
- - **methodology-critic** — Critique research methodology
- - **evidence-adjudicator** — Evaluate evidence quality
- - **experiment-designer** — Design experiments and studies
- - **draft-paper** — Draft LaTeX papers from workspace evidence
- - **paper-explainer** — Explain complex papers
- - **synthesis-updater** — Update research syntheses
- - **skill-creator** — Create custom skills
+ ## Skills

- Type `/skill-name` in the TUI to activate any skill, or create your own in `~/.open-research/skills/`.
+ Built-in research methodologies. Type `/skill-name` to activate:

- ## Workspace Structure
+ - **source-scout** — find citation gaps, discover papers
+ - **devils-advocate** — stress-test claims and assumptions
+ - **methodology-critic** — critique research methodology
+ - **evidence-adjudicator** — evaluate evidence quality
+ - **experiment-designer** — design experiments
+ - **draft-paper** — draft LaTeX papers from workspace evidence
+ - **paper-explainer** — explain complex papers
+ - **synthesis-updater** — update syntheses with new findings

- ```
- my-research/
- sources/ # PDFs, papers, raw data
- notes/ # Research notes and briefs
- artifacts/ # Generated outputs
- papers/ # LaTeX paper drafts
- experiments/ # Analysis scripts and results
- .open-research/ # Workspace metadata
- ```
+ Create custom skills in `~/.open-research/skills/`.

- ## Features
+ ## Tools

- - **Markdown rendering** in terminal output (bold, italic, code blocks, lists, headings)
- - **Slash command autocomplete** with arrow-key navigation
- - **@file mentions** to reference workspace files inline
- - **Shift+Enter** for multi-line input
- - **Context management**automatic compaction when conversation gets long
- - **Token tracking** see context usage in the status bar
- - **Tool activity streaming** see what the agent is doing in real-time
- - **Review queue** risky edits require your approval before applying
+ | Tool | Description |
+ |---|---|
+ | `read_file` | Read any file with streaming, binary detection |
+ | `read_pdf` | Extract text from PDFs |
+ | `run_command` | Shell execution Python, R, LaTeX, anything |
+ | `list_directory` | Explore directory trees |
+ | `search_external_sources` | arXiv + Semantic Scholar + OpenAlex |
+ | `fetch_url` | Fetch web pages and APIs |
+ | `write_new_file` | Create workspace files |
+ | `update_existing_file` | Edit with review policy |
+ | `ask_user` | Pause and ask for clarification |
+ | `search_workspace` | Full-text search across files |
+ | `create_paper` | Create LaTeX drafts |

  ## Development

@@ -113,9 +111,9 @@ my-research/
  git clone https://github.com/gangj277/open-research.git
  cd open-research
  npm install
- npm run dev # Run in dev mode
- npm test # Run tests
- npm run build # Build for production
+ npm run dev # dev mode
+ npm test # 63 tests
+ npm run build # production build
  ```

  ## License
package/dist/cli.js CHANGED
@@ -2,7 +2,7 @@

  // src/cli.ts
  import React4 from "react";
- import path16 from "path";
+ import path18 from "path";
  import { Command } from "commander";
  import { render } from "ink";

@@ -848,7 +848,7 @@ async function ensureOpenResearchConfig(options) {
  }

  // src/tui/app.tsx
- import path15 from "path";
+ import path17 from "path";
  import {
  startTransition,
  useDeferredValue,
@@ -4496,7 +4496,7 @@ var MODEL_CONTEXT_WINDOWS = {
  "gpt-5.1": 272e3,
  "gpt-5": 272e3,
  "gpt-4o": 128e3,
- "gpt-4o-mini": 128e3,
+ "gpt-5.4-mini": 128e3,
  "o3": 2e5,
  "o4-mini": 2e5
  };
@@ -4610,6 +4610,177 @@ async function maybeCompact(messages, model, provider, usage, signal) {
  return { messages: compacted, didCompact: true };
  }

+ // src/lib/memory/store.ts
+ import fs14 from "fs/promises";
+ import path13 from "path";
+ function getMemoryFile(options) {
+ return path13.join(getOpenResearchRoot(options), "memory.json");
+ }
+ async function loadMemories(options) {
+ const file = getMemoryFile(options);
+ try {
+ const raw = await fs14.readFile(file, "utf8");
+ const store = JSON.parse(raw);
+ return store.memories ?? [];
+ } catch {
+ return [];
+ }
+ }
+ async function saveMemories(memories, options) {
+ const file = getMemoryFile(options);
+ await fs14.mkdir(path13.dirname(file), { recursive: true });
+ const store = { version: 1, memories };
+ await fs14.writeFile(file, JSON.stringify(store, null, 2), "utf8");
+ }
+ var MAX_MEMORIES = 100;
+ async function addMemory(memory, options) {
+ const memories = await loadMemories(options);
+ const existing = memories.find((m) => {
+ const a = m.content.toLowerCase().replace(/\s+/g, " ");
+ const b = memory.content.toLowerCase().replace(/\s+/g, " ");
+ const wordsA = new Set(a.split(" "));
+ const wordsB = new Set(b.split(" "));
+ const intersection = [...wordsA].filter((w) => wordsB.has(w));
+ const similarity = intersection.length / Math.max(wordsA.size, wordsB.size);
+ return similarity > 0.7;
+ });
+ if (existing) {
+ existing.lastRelevantAt = (/* @__PURE__ */ new Date()).toISOString();
+ existing.relevanceCount++;
+ if (memory.content.length > existing.content.length) {
+ existing.content = memory.content;
+ }
+ await saveMemories(memories, options);
+ return existing;
+ }
+ const newMemory = {
+ id: crypto.randomUUID(),
+ content: memory.content,
+ category: memory.category,
+ createdAt: (/* @__PURE__ */ new Date()).toISOString(),
+ lastRelevantAt: (/* @__PURE__ */ new Date()).toISOString(),
+ relevanceCount: 1
+ };
+ memories.push(newMemory);
+ if (memories.length > MAX_MEMORIES) {
+ memories.sort((a, b) => {
+ const aScore = new Date(a.lastRelevantAt).getTime() + a.relevanceCount * 864e5;
+ const bScore = new Date(b.lastRelevantAt).getTime() + b.relevanceCount * 864e5;
+ return bScore - aScore;
+ });
+ memories.length = MAX_MEMORIES;
+ }
+ await saveMemories(memories, options);
+ return newMemory;
+ }
+ async function deleteMemory(id, options) {
+ const memories = await loadMemories(options);
+ const idx = memories.findIndex((m) => m.id === id);
+ if (idx === -1) return false;
+ memories.splice(idx, 1);
+ await saveMemories(memories, options);
+ return true;
+ }
+ async function clearMemories(options) {
+ await saveMemories([], options);
+ }
+ function formatMemoriesForPrompt(memories) {
+ if (memories.length === 0) return "";
+ const grouped = {};
+ for (const m of memories) {
+ (grouped[m.category] ??= []).push(m);
+ }
+ const sections = ["## What I Remember About You"];
+ const categoryLabels = {
+ user: "About you",
+ preference: "Your preferences",
+ project: "Your projects",
+ methodology: "Methodology preferences",
+ context: "Context"
+ };
+ for (const [cat, mems] of Object.entries(grouped)) {
+ sections.push(`**${categoryLabels[cat] ?? cat}:**`);
+ for (const m of mems) {
+ sections.push(`- ${m.content}`);
+ }
+ }
+ return sections.join("\n");
+ }
+
+ // src/lib/memory/extractor.ts
+ var EXTRACTION_PROMPT = `You are a memory extraction system. Your job is to identify facts worth remembering about the user from a conversation exchange.
+
+ Focus on:
+ - Who they are (role, field, institution, expertise level)
+ - What they're working on (current research projects, topics, deadlines)
+ - How they prefer to work (preferred tools, languages, writing style, methodologies)
+ - Methodological preferences (statistical approaches, theoretical frameworks, citation style)
+ - Important context (collaborators, advisors, publication targets, funding constraints)
+
+ Rules:
+ - Only extract facts that would be useful in FUTURE conversations
+ - Be specific and concise \u2014 each memory should be one clear fact
+ - Do NOT extract task-specific details that only matter for the current conversation
+ - Do NOT extract obvious things ("user asked about papers" is not useful)
+ - If there is nothing meaningful to remember, return an empty array
+ - Maximum 3 new memories per exchange
+
+ Existing memories (do not duplicate these):
+ {EXISTING_MEMORIES}
+
+ Respond with a JSON array of objects, each with "content" (string) and "category" (one of: "user", "preference", "project", "methodology", "context"). If nothing worth remembering, respond with [].
+
+ Example response:
+ [{"content": "PhD student in computational neuroscience at MIT", "category": "user"}, {"content": "Prefers Python with statsmodels for statistical analysis over R", "category": "preference"}]`;
+ async function extractMemories(input2) {
+ const existing = await loadMemories({ homeDir: input2.homeDir });
+ if (input2.userMessage.startsWith("/") || input2.userMessage.length < 20) {
+ return [];
+ }
+ const existingList = existing.length > 0 ? existing.map((m) => `- [${m.category}] ${m.content}`).join("\n") : "(none)";
+ const prompt2 = EXTRACTION_PROMPT.replace("{EXISTING_MEMORIES}", existingList);
+ const conversationSnippet = [
+ `User: ${input2.userMessage.slice(0, 2e3)}`,
+ `Agent: ${input2.agentResponse.slice(0, 2e3)}`
+ ].join("\n\n");
+ try {
+ const response = await input2.provider.callLLM({
+ messages: [
+ { role: "system", content: prompt2 },
+ { role: "user", content: conversationSnippet }
+ ],
+ model: input2.model ?? "gpt-5.4-mini",
+ maxTokens: 500,
+ temperature: 0
+ });
+ const raw = response.content.trim();
+ const jsonStr = raw.startsWith("```") ? raw.replace(/^```(?:json)?\n?/, "").replace(/\n?```$/, "") : raw;
+ const parsed = JSON.parse(jsonStr);
+ if (!Array.isArray(parsed)) return [];
+ const valid = [];
+ for (const item of parsed) {
+ if (typeof item.content === "string" && item.content.length > 5 && ["user", "preference", "project", "methodology", "context"].includes(item.category)) {
+ valid.push({
+ content: item.content,
+ category: item.category
+ });
+ }
+ }
+ return valid.slice(0, 3);
+ } catch {
+ return [];
+ }
+ }
+ async function extractAndStoreMemories(input2) {
+ const extracted = await extractMemories(input2);
+ const stored = [];
+ for (const mem of extracted) {
+ const saved = await addMemory(mem, { homeDir: input2.homeDir });
+ stored.push(saved);
+ }
+ return stored;
+ }
+
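The dedup and retention heuristics inside the new `addMemory` are worth spelling out. A standalone sketch (the helper names `wordOverlapSimilarity` and `pruneScore` are illustrative; the published bundle inlines both computations): a candidate memory is merged into an existing one when the share of common words, relative to the larger word set, exceeds 0.7, and when the store passes `MAX_MEMORIES` (100) entries, memories are ranked by last-relevant time plus one day (864e5 ms) per reinforcement, so frequently reinforced memories outlive merely recent ones.

```javascript
// Standalone sketch of the two heuristics inside addMemory. Helper names
// are illustrative; the published bundle inlines both computations.

// Dedup: fraction of words shared, relative to the larger word set.
// addMemory treats a candidate as a duplicate when this exceeds 0.7.
function wordOverlapSimilarity(a, b) {
  const words = (s) => new Set(s.toLowerCase().replace(/\s+/g, " ").split(" "));
  const wordsA = words(a);
  const wordsB = words(b);
  const shared = [...wordsA].filter((w) => wordsB.has(w)).length;
  return shared / Math.max(wordsA.size, wordsB.size);
}

// Retention score used when the store exceeds MAX_MEMORIES:
// last-relevant timestamp plus one day (864e5 ms) per reinforcement.
function pruneScore(m) {
  return new Date(m.lastRelevantAt).getTime() + m.relevanceCount * 864e5;
}

const sim = wordOverlapSimilarity(
  "Prefers Python with statsmodels",
  "prefers python with statsmodels too"
);
console.log(sim); // 4 shared words / max(4, 5) = 0.8, so merged as a duplicate

// A memory reinforced 5 times outranks one touched 3 days more recently.
const reinforced = { lastRelevantAt: "2025-01-01T00:00:00Z", relevanceCount: 5 };
const recent = { lastRelevantAt: "2025-01-04T00:00:00Z", relevanceCount: 1 };
console.log(pruneScore(reinforced) > pruneScore(recent)); // true
```

The metric ignores word order and frequency, so it is cheap but coarse; when a duplicate is found, `addMemory` keeps the longer of the two contents and bumps `relevanceCount`.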
  // src/lib/agent/runtime.ts
  var TOOL_DESCRIPTIONS = {
  read_file: (a) => `Reading ${a.file_path ?? "file"}`,
@@ -4696,8 +4867,11 @@ async function runAgentTurn(input2) {
  const systemPrompt = isPlanning ? buildPlanningSystemPrompt(input2.workspace, activeSkills) : buildSystemPrompt(input2.workspace, activeSkills);
  const model = input2.model ?? "gpt-5.4";
  const usage = input2.sessionUsage ?? createSessionUsage();
+ const memories = await loadMemories({ homeDir: input2.homeDir });
+ const memoryBlock = formatMemoriesForPrompt(memories);
+ const fullSystemPrompt = memoryBlock ? systemPrompt + "\n\n" + memoryBlock : systemPrompt;
  let messages = [
- { role: "system", content: systemPrompt },
+ { role: "system", content: fullSystemPrompt },
  ...input2.history,
  { role: "user", content: input2.message }
  ];
@@ -4750,6 +4924,18 @@ async function runAgentTurn(input2) {
  detectedCharter = charterMatch[1].trim();
  }
  }
+ extractAndStoreMemories({
+ userMessage: input2.message,
+ agentResponse: fullText,
+ provider: input2.provider,
+ model: "gpt-5.4-mini",
+ homeDir: input2.homeDir
+ }).then((stored) => {
+ if (stored.length > 0) {
+ input2.onMemoryExtracted?.(stored.map((m) => m.content));
+ }
+ }).catch(() => {
+ });
  return {
  text: fullText,
  proposedUpdates,
@@ -4841,8 +5027,8 @@ function classifyUpdateRisk(update) {
  }

  // src/lib/workspace/apply-update.ts
- import fs14 from "fs/promises";
- import path13 from "path";
+ import fs15 from "fs/promises";
+ import path14 from "path";
  function resolveRelativePath(update) {
  if (update.key.startsWith("path:")) {
  return update.key.slice(5);
@@ -4863,20 +5049,20 @@ function resolveRelativePath(update) {
  }
  async function applyProposedUpdate(workspaceDir, update) {
  const relativePath = resolveRelativePath(update);
- const absolutePath = path13.join(workspaceDir, relativePath);
- await fs14.mkdir(path13.dirname(absolutePath), { recursive: true });
- await fs14.writeFile(absolutePath, update.content, "utf8");
+ const absolutePath = path14.join(workspaceDir, relativePath);
+ await fs15.mkdir(path14.dirname(absolutePath), { recursive: true });
+ await fs15.writeFile(absolutePath, update.content, "utf8");
  return absolutePath;
  }

  // src/lib/workspace/sessions.ts
- import fs15 from "fs/promises";
- import path14 from "path";
+ import fs16 from "fs/promises";
+ import path15 from "path";
  async function appendSessionEvent(workspaceDir, sessionId, event) {
  const sessionsDir = getWorkspaceSessionsDir(workspaceDir);
- await fs15.mkdir(sessionsDir, { recursive: true });
- const sessionFile = path14.join(sessionsDir, `${sessionId}.jsonl`);
- await fs15.appendFile(sessionFile, `${JSON.stringify(event)}
+ await fs16.mkdir(sessionsDir, { recursive: true });
+ const sessionFile = path15.join(sessionsDir, `${sessionId}.jsonl`);
+ await fs16.appendFile(sessionFile, `${JSON.stringify(event)}
  `, "utf8");
  }
  function parseEvents(raw) {
@@ -4892,7 +5078,7 @@ async function listSessions(workspaceDir) {
  const sessionsDir = getWorkspaceSessionsDir(workspaceDir);
  let files;
  try {
- files = await fs15.readdir(sessionsDir);
+ files = await fs16.readdir(sessionsDir);
  } catch {
  return [];
  }
@@ -4900,7 +5086,7 @@ async function listSessions(workspaceDir) {
  for (const file of files) {
  if (!file.endsWith(".jsonl")) continue;
  const id = file.replace(/\.jsonl$/, "");
- const raw = await fs15.readFile(path14.join(sessionsDir, file), "utf8");
+ const raw = await fs16.readFile(path15.join(sessionsDir, file), "utf8");
  const events = parseEvents(raw);
  if (events.length === 0) continue;
  const chatTurns = events.filter((e) => e.type === "chat.turn");
@@ -4921,8 +5107,8 @@ async function listSessions(workspaceDir) {
  }
  async function loadSessionHistory(workspaceDir, sessionId) {
  const sessionsDir = getWorkspaceSessionsDir(workspaceDir);
- const sessionFile = path14.join(sessionsDir, `${sessionId}.jsonl`);
- const raw = await fs15.readFile(sessionFile, "utf8");
+ const sessionFile = path15.join(sessionsDir, `${sessionId}.jsonl`);
+ const raw = await fs16.readFile(sessionFile, "utf8");
  const events = parseEvents(raw);
  const messages = [];
  const llmHistory = [];
@@ -5033,6 +5219,81 @@ function ConfigScreen({ items, onUpdate, onClose }) {
  ] });
  }

+ // src/lib/cli/update-check.ts
+ import fs17 from "fs/promises";
+ import path16 from "path";
+ import os4 from "os";
+ var PACKAGE_NAME = "open-research";
+ var CHECK_INTERVAL_MS = 4 * 60 * 60 * 1e3;
+ var STATE_FILE = path16.join(os4.homedir(), ".open-research", "update-check.json");
+ async function readState() {
+ try {
+ const raw = await fs17.readFile(STATE_FILE, "utf8");
+ return JSON.parse(raw);
+ } catch {
+ return { lastCheck: 0, latestVersion: null };
+ }
+ }
+ async function writeState(state) {
+ await fs17.mkdir(path16.dirname(STATE_FILE), { recursive: true });
+ await fs17.writeFile(STATE_FILE, JSON.stringify(state), "utf8");
+ }
+ function getCurrentVersion() {
+ try {
+ const pkgPath = new URL("../../../package.json", import.meta.url);
+ return process.env.npm_package_version ?? "0.0.0";
+ } catch {
+ return "0.0.0";
+ }
+ }
+ async function fetchLatestVersion() {
+ try {
+ const controller = new AbortController();
+ const timer = setTimeout(() => controller.abort(), 5e3);
+ const res = await fetch(`https://registry.npmjs.org/${PACKAGE_NAME}/latest`, {
+ signal: controller.signal
+ });
+ clearTimeout(timer);
+ if (!res.ok) return null;
+ const data = await res.json();
+ return data.version ?? null;
+ } catch {
+ return null;
+ }
+ }
+ function isNewer(latest, current) {
+ const l = latest.split(".").map(Number);
+ const c = current.split(".").map(Number);
+ for (let i = 0; i < 3; i++) {
+ if ((l[i] ?? 0) > (c[i] ?? 0)) return true;
+ if ((l[i] ?? 0) < (c[i] ?? 0)) return false;
+ }
+ return false;
+ }
+ async function checkForUpdate() {
+ try {
+ const state = await readState();
+ const now = Date.now();
+ if (now - state.lastCheck < CHECK_INTERVAL_MS && state.latestVersion) {
+ const current2 = getCurrentVersion();
+ if (isNewer(state.latestVersion, current2)) {
+ return `Update available: ${current2} \u2192 ${state.latestVersion}. Run: npm update -g open-research`;
+ }
+ return null;
+ }
+ const latest = await fetchLatestVersion();
+ await writeState({ lastCheck: now, latestVersion: latest });
+ if (!latest) return null;
+ const current = getCurrentVersion();
+ if (isNewer(latest, current)) {
+ return `Update available: ${current} \u2192 ${latest}. Run: npm update -g open-research`;
+ }
+ return null;
+ } catch {
+ return null;
+ }
+ }
+
5297
  // src/tui/commands.ts
5037
5298
  var SLASH_COMMANDS = [
5038
5299
  { name: "auth", aliases: ["/connect", "/login"], description: "Connect your OpenAI account via browser OAuth", category: "auth" },
@@ -5045,6 +5306,7 @@ var SLASH_COMMANDS = [
5045
5306
  { name: "clear", aliases: ["/new"], description: "Clear conversation and start fresh", category: "session" },
5046
5307
  { name: "help", aliases: ["/commands"], description: "Show available commands", category: "system" },
5047
5308
  { name: "config", aliases: ["/settings"], description: "View or change settings (e.g. /config theme dark)", category: "system" },
5309
+ { name: "memory", aliases: ["/memories"], description: "View or clear stored memories about you", category: "system" },
5048
5310
  { name: "exit", aliases: ["/quit", "/q"], description: "Exit Open Research", category: "system" }
5049
5311
  ];
5050
5312
  function matchSlashCommand(input2) {
@@ -5605,6 +5867,9 @@ function App({
5605
5867
  setTheme(cfg.theme);
5606
5868
  const auth2 = await loadStoredAuth({ homeDir });
5607
5869
  setAuthStatus(auth2 ? "connected" : "missing");
5870
+ checkForUpdate().then((msg) => {
5871
+ if (msg) addSystemMessage(msg);
5872
+ });
5608
5873
  })();
5609
5874
  }, [homeDir]);
5610
5875
  useEffect2(() => {
@@ -5867,6 +6132,33 @@ function App({
5867
6132
  addSystemMessage(" Esc unfocus prompt");
5868
6133
  break;
5869
6134
  }
6135
+ case "memory": {
6136
+ if (args === "clear") {
6137
+ await clearMemories({ homeDir });
6138
+ addSystemMessage("All memories cleared.");
6139
+ break;
6140
+ }
6141
+ if (args.startsWith("delete ")) {
6142
+ const memId = args.slice(7).trim();
6143
+ const deleted = await deleteMemory(memId, { homeDir });
6144
+ addSystemMessage(deleted ? `Deleted memory ${memId.slice(0, 8)}...` : "Memory not found.");
6145
+ break;
6146
+ }
6147
+ const mems = await loadMemories({ homeDir });
6148
+ if (mems.length === 0) {
6149
+ addSystemMessage("No memories stored yet. I'll learn about you as we talk.");
6150
+ } else {
6151
+ addSystemMessage(`${mems.length} memories:`);
6152
+ for (const m of mems) {
6153
+ addSystemMessage(` [${m.category}] ${m.content}`);
6154
+ addSystemMessage(` id: ${m.id.slice(0, 8)}... \xB7 reinforced ${m.relevanceCount}x`);
6155
+ }
6156
+ addSystemMessage("");
6157
+ addSystemMessage(" /memory clear \u2014 delete all");
6158
+ addSystemMessage(" /memory delete <id> \u2014 delete one");
6159
+ }
6160
+ break;
6161
+ }
5870
6162
  case "exit": {
5871
6163
  app.exit();
5872
6164
  break;
@@ -6078,6 +6370,11 @@ function App({
6078
6370
  addSystemMessage(` \u2713 ${activity.description ?? activity.name}${dur}`);
6079
6371
  }
6080
6372
  },
6373
+ onMemoryExtracted: (mems) => {
6374
+ for (const m of mems) {
6375
+ addSystemMessage(` \u25CA remembered: ${m}`);
6376
+ }
6377
+ },
6081
6378
  onCompaction: () => {
6082
6379
  addSystemMessage(" \u25CA Context compacted \u2014 older messages summarized");
6083
6380
  },
@@ -6510,7 +6807,7 @@ function App({
6510
6807
  statusParts,
6511
6808
  statusColor,
6512
6809
  tokenDisplay,
6513
- workspaceName: hasWorkspace ? path15.basename(workspacePath) : process.cwd(),
6810
+ workspaceName: hasWorkspace ? path17.basename(workspacePath) : process.cwd(),
6514
6811
  mode: agentMode,
6515
6812
  planningStatus: planningState.status
6516
6813
  }
@@ -6522,7 +6819,7 @@ function App({
6522
6819
  var program = new Command();
6523
6820
  program.name("open-research").description("Local-first research CLI powered by ChatGPT/Codex auth.").argument("[workspacePath]", "Optional workspace path to open").action(async (workspacePath) => {
6524
6821
  await ensureOpenResearchConfig();
6525
- const target = workspacePath ? path16.resolve(workspacePath) : process.cwd();
6822
+ const target = workspacePath ? path18.resolve(workspacePath) : process.cwd();
6526
6823
  const project = await loadWorkspaceProject(target);
6527
6824
  const auth2 = await loadStoredAuth();
6528
6825
  render(
@@ -6544,7 +6841,7 @@ program.name("open-research").description("Local-first research CLI powered by C
6544
6841
  });
6545
6842
  program.command("init").argument("[workspacePath]").description("Initialize an Open Research workspace.").action(async (workspacePath) => {
6546
6843
  await ensureOpenResearchConfig();
6547
- const target = path16.resolve(workspacePath ?? process.cwd());
6844
+ const target = path18.resolve(workspacePath ?? process.cwd());
6548
6845
  const project = await initWorkspace({ workspaceDir: target });
6549
6846
  console.log(`Initialized workspace: ${target}`);
6550
6847
  console.log(`Title: ${project.title}`);
@@ -6613,8 +6910,8 @@ skills.command("create").argument("[name]").description("Scaffold a new user ski
6613
6910
  });
6614
6911
  skills.command("edit").argument("<name>").description("Open a user skill in $EDITOR.").action(async (name) => {
6615
6912
  await ensureOpenResearchConfig();
6616
- const skillDir = path16.join(getOpenResearchSkillsDir(), name);
6617
- openInEditor(path16.join(skillDir, "SKILL.md"));
6913
+ const skillDir = path18.join(getOpenResearchSkillsDir(), name);
6914
+ openInEditor(path18.join(skillDir, "SKILL.md"));
6618
6915
  const validation = await validateSkillDirectory({ skillDir });
6619
6916
  if (!validation.ok) {
6620
6917
  console.error(validation.errors.join("\n"));
@@ -6625,9 +6922,9 @@ skills.command("edit").argument("<name>").description("Open a user skill in $EDI
6625
6922
  });
6626
6923
  skills.command("validate").argument("[nameOrPath]").description("Validate one user skill.").action(async (nameOrPath) => {
6627
6924
  await ensureOpenResearchConfig();
6628
- const skillDir = nameOrPath ? path16.isAbsolute(nameOrPath) ? nameOrPath : path16.join(getOpenResearchSkillsDir(), nameOrPath) : getOpenResearchSkillsDir();
6925
+ const skillDir = nameOrPath ? path18.isAbsolute(nameOrPath) ? nameOrPath : path18.join(getOpenResearchSkillsDir(), nameOrPath) : getOpenResearchSkillsDir();
6629
6926
  const stat = await import("fs/promises").then(
6630
- (fs16) => fs16.stat(skillDir).catch(() => null)
6927
+ (fs18) => fs18.stat(skillDir).catch(() => null)
6631
6928
  );
6632
6929
  if (!stat) {
6633
6930
  throw new Error(`Skill path not found: ${skillDir}`);
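For orientation, the `memory.json` envelope that `loadMemories` and `saveMemories` exchange in the diff above has the shape `{ version, memories }`. A hypothetical payload (every field value below is invented for illustration) round-tripped through the same two-space `JSON.stringify` the store uses:

```javascript
// Illustrative memory.json contents; field names match store.ts, values
// are made up. In the real code, id comes from crypto.randomUUID() and
// the timestamps from new Date().toISOString().
const store = {
  version: 1,
  memories: [
    {
      id: "00000000-0000-0000-0000-000000000000", // placeholder UUID
      content: "Prefers Python with statsmodels for statistical analysis",
      category: "preference", // user | preference | project | methodology | context
      createdAt: "2025-01-01T00:00:00.000Z",
      lastRelevantAt: "2025-01-04T00:00:00.000Z",
      relevanceCount: 2
    }
  ]
};

// Round-trip through the same serialization saveMemories uses.
const parsed = JSON.parse(JSON.stringify(store, null, 2));
console.log(parsed.memories[0].category); // "preference"
```

Because `loadMemories` swallows read and parse errors and returns `[]`, a missing or corrupt file simply resets the store rather than crashing the TUI.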
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "open-research",
- "version": "0.1.0",
+ "version": "0.1.2",
  "description": "Local-first research CLI agent — discover papers, synthesize notes, run analysis, and draft artifacts from your terminal.",
  "type": "module",
  "license": "MIT",