recallmem 0.1.3 → 0.1.4

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (2)
  1. package/README.md +9 -8
  2. package/package.json +1 -1
package/README.md CHANGED
@@ -6,11 +6,11 @@
  </p>
 
  <p align="center">
- <strong>Persistent personal AI that actually remembers you.</strong>
+ <strong>Your Persistent Private AI that actually remembers you.</strong>
  </p>
 
  <p align="center">
- LLMs like ChatGPT, Claude.ai, and Gemini tend to forget you the moment you end your session. RecallMEM doesn't. It builds a profile of who you are, extracts facts after every conversation, and runs vector search across your entire history to find relevant context. By the time you've used it for a week, it knows you better than any AI ever will.
+ Chatbots like ChatGPT, Claude, and Gemini tend to forget you the moment you end your session. RecallMEM doesn't. It builds a profile of who you are, extracts facts after every conversation, and runs vector search across your entire history to find relevant context. By the time you've used it for a week, it knows you better than any AI ever will.
  </p>
 
  <p align="center">
@@ -29,14 +29,14 @@
 
  ## What is this
 
- A personal AI chatbot with REAL memory. Plug in any LLM you want and RecallMEM gives it persistent memory of who you are, what you've talked about, and what's currently true vs historical.
+ A personal AI chatbot with REAL memory. Plug in any LLM you want and RecallMEM gives it persistent memory of who you are, what you've talked about, and what's currently true vs historical. All your memory is stored in a local Postgres database on your machine, with pgvector powering the semantic search across your past conversations.
 
  The best part is that the LLM will never touch your memory in the database. Every retrieval is deterministic SQL + cosine similarity, assembled by TypeScript before the LLM ever sees it. The LLM only proposes new facts; a TypeScript validator decides what gets stored. Facts have timestamps and get auto-retired when you contradict them ("works at Acme" → "left Acme"). [Deep dive on the architecture →](./docs/ARCHITECTURE.md)
 
  You can run it three ways:
 
  - **Cloud LLMs (recommended for most people).** Add a Claude or OpenAI API key in Settings. Fast, smart, works on any computer. Your memory still stays local in your own Postgres database. Only the chat messages go to the provider.
- - **Local LLMs (recommended for privacy).** Run Gemma 4 via Ollama. Nothing leaves your machine, ever. Slower setup (~18 GB model download) and slower responses, but truly air-gappable.
+ - **Local LLMs (recommended for privacy).** Run Gemma 4 via Ollama. Nothing leaves your machine, ever. Slower setup (~7-20 GB model download) and slower responses, but truly air-gappable.
  - **Both.** Use cloud for daily chat, switch to local for the sensitive stuff. The model dropdown lets you pick per-conversation.
 
  ## Features
@@ -58,7 +58,7 @@ Two options. Pick whichever fits your priority.
 
  ### Option A: Cloud LLM (Claude or OpenAI) — fastest, ~5 minutes
 
- You need Node.js 20+ and [Homebrew](https://brew.sh). Then:
+ You need Node.js 20+ and [Homebrew](https://brew.sh). The installer uses Homebrew to set up Postgres + pgvector (where your memory and vector search live) and Ollama (for local AI models). Then:
 
  ```bash
  npx recallmem
@@ -78,9 +78,10 @@ The installer sets up Postgres, pgvector, and Ollama (for the embedding model th
 
  Same `npx recallmem` command. When the app opens, click **Settings → Manage models** and download one of these:
 
- - **Gemma 4 E4B** (4 GB, ~5 minute download) — fastest to test
- - **Gemma 4 26B** (18 GB, ~20-30 minute download) — recommended for daily use
- - **Gemma 4 31B** (19 GB, slower, best quality)
+ - **Gemma 4 E2B** (~7 GB, fastest download) — good for a quick test or older laptops
+ - **Gemma 4 E4B** (~10 GB) — good for most laptops
+ - **Gemma 4 26B** (~18 GB, ~20-30 minute download) — recommended for daily use
+ - **Gemma 4 31B** (~20 GB, slower, best quality)
 
  Then pick that model from the dropdown and chat. Nothing leaves your machine.
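The README text in the diff above describes retrieval as deterministic SQL plus cosine similarity, assembled before the LLM ever sees it. As an illustrative sketch of just the ranking math (the data and function names here are invented; in the actual package this runs inside Postgres via a pgvector query, not in application code):

```typescript
// Illustrative only: cosine-similarity ranking as a pgvector query
// would compute it. Embeddings here are tiny made-up vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error("dimension mismatch");
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank stored memories against a query embedding, most similar first.
const queryEmbedding = [0.1, 0.9, 0.2];
const memories = [
  { text: "user works at Acme", embedding: [0.1, 0.8, 0.3] },
  { text: "user likes hiking", embedding: [0.9, 0.1, 0.0] },
];
const ranked = [...memories].sort(
  (x, y) =>
    cosineSimilarity(queryEmbedding, y.embedding) -
    cosineSimilarity(queryEmbedding, x.embedding)
);
// ranked[0] is the memory closest to the query in embedding space.
```

The point of the sketch is that ranking is pure arithmetic over stored vectors, which is what makes the retrieval step deterministic rather than LLM-driven.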
 
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "recallmem",
- "version": "0.1.3",
+ "version": "0.1.4",
  "description": "Private, local-first AI chatbot with persistent working memory. One command install via npx.",
  "license": "Apache-2.0",
  "author": "Chris Sean",