@danielzfliu/memory 1.0.0 → 1.0.2

package/README.md CHANGED
@@ -1,51 +1,55 @@
+ [![npm version](https://img.shields.io/npm/v/@danielzfliu/memory.svg)](https://www.npmjs.com/package/@danielzfliu/memory)
+
  # Memory
+
  A fully local Node.js library and REST API for storing, searching, and querying tagged text pieces using ChromaDB for vector storage and Ollama for embeddings + generation.
 
- ## Prerequisites
+ Two ways to use Memory:
 
- - **Node.js** 18
- - **Ollama** running locally ([install](https://ollama.com))
- - **ChromaDB** server running locally
+ - REST API Server — Clone the repo and run a standalone HTTP server with CRUD, semantic search, and RAG endpoints.
+ - npm Package — Install `@danielzfliu/memory` in your own project and use the classes directly, or embed the Express server in your app.
 
- ### Start Ollama & pull models
+ ---
 
+ ## Prerequisites
+
+ - Node.js ≥ 18
+ - Ollama running locally ([install](https://ollama.com))
+ - ChromaDB server running locally
+
+ Pull the required models:
  ```bash
  ollama pull nomic-embed-text-v2-moe
  ollama pull llama3.2
- npm run ollama
  ```
 
- or on a specific port:
+ ---
 
- ```bash
- npm run ollama:port 11435
- ```
+ ## Option A: REST API Server
 
- ### Start ChromaDB
- ```bash
- npm run db
- ```
+ Use this option to run Memory as a standalone HTTP service.
 
- or on a specific port:
+ ### 1. Setup
 
  ```bash
- npm run db:port 9000
+ git clone https://github.com/DanielZFLiu/memory.git
+ cd memory
+ npm install
  ```
 
- **Windows note:** If `chroma` is not recognized, the `Scripts` directory may not be on your PATH. Either add it (e.g. `%APPDATA%\Python\Python3xx\Scripts`) or run the executable directly:
- ```powershell
- & "$env:APPDATA\Python\Python313\Scripts\chroma.exe" run --port 8000
- ```
+ ### 2. Start external services
 
- ## Install
+ The repo includes convenience scripts for starting Ollama and ChromaDB:
 
  ```bash
- npm install
- ```
+ npm run ollama                # start Ollama on default port 11434
+ npm run ollama:port -- 11435  # start Ollama on a custom port
 
- ## Usage
+ npm run db                    # start ChromaDB on default port 8000
+ npm run db:port -- 9000       # start ChromaDB on a custom port
+ ```
 
- ### REST API Server
+ ### 3. Start the server
 
  ```bash
  npm run dev
@@ -113,10 +117,38 @@ Returns:
  }
  ```
 
- ### Programmatic Usage (Library)
+ ---
+
+ ## Option B: npm Package
+
+ Use this option to integrate Memory into your own Node.js/TypeScript project.
+
+ ### 1. Install
+
+ ```bash
+ npm install @danielzfliu/memory
+ ```
+
+ ### 2. Start external services
+
+ You are responsible for running Ollama and ChromaDB yourself:
+
+ ```bash
+ ollama serve            # default port 11434
+ chroma run --port 8000  # default port 8000
+ ```
+
+ **Windows note:** If `chroma` is not recognized, the `Scripts` directory may not be on your PATH. Either add it (e.g. `%APPDATA%\Python\Python3xx\Scripts`) or run the executable directly:
+ ```powershell
+ & "$env:APPDATA\Python\Python313\Scripts\chroma.exe" run --port 8000
+ ```
+
+ ### 3. Programmatic usage
+
+ #### Using PieceStore and RagPipeline directly
 
  ```typescript
- import { PieceStore, RagPipeline, MemoryConfig } from "memory";
+ import { PieceStore, RagPipeline, MemoryConfig } from "@danielzfliu/memory";
 
  async function main() {
    const config: MemoryConfig = {
@@ -125,6 +157,7 @@ async function main() {
      embeddingModel: "nomic-embed-text-v2-moe",
    };
 
+   // Store: CRUD + semantic search
    const store = new PieceStore(config);
    await store.init();
 
@@ -146,7 +179,8 @@ async function main() {
    });
    console.log("filtered", filtered);
 
-   const rag = new RagPipeline(store, "http://localhost:11434", "llama3.2");
+   // RAG: retrieve relevant pieces, then generate an answer via Ollama
+   const rag = new RagPipeline(store, config.ollamaUrl!, "llama3.2");
    const answer = await rag.query("What is TypeScript?", {
      tags: ["programming"],
    });
@@ -159,16 +193,52 @@ main().catch((err) => {
  });
  ```
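The hunks above elide parts of this example. For orientation, a condensed sketch of the same flow (ESM top-level await); the `store.query` call is an assumption, since the diff does not show which `PieceStore` method produced `filtered`:

```typescript
import { PieceStore, RagPipeline, MemoryConfig } from "@danielzfliu/memory";

const config: MemoryConfig = {
  embeddingModel: "nomic-embed-text-v2-moe",
};

// Store: CRUD + semantic search (shown in the diff)
const store = new PieceStore(config);
await store.init();

// Hypothetical search call; the method name `query` is an assumption.
// Options follow the documented QueryOptions shape: { tags?, topK? }.
const filtered = await store.query("typed languages", {
  tags: ["programming"],
  topK: 5,
});
console.log("filtered", filtered);

// RAG: the generation model is passed to the constructor (shown in the diff)
const rag = new RagPipeline(store, "http://localhost:11434", "llama3.2");
const answer = await rag.query("What is TypeScript?", { tags: ["programming"] });
console.log(answer);
```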
 
+ #### Embedding the REST API in your own Express app
+
+ `createServer` returns a configured Express app you can mount or extend:
+
+ ```typescript
+ import { createServer } from "@danielzfliu/memory";
+
+ const app = createServer({
+   chromaUrl: "http://localhost:8000",
+   ollamaUrl: "http://localhost:11434",
+ });
+
+ app.listen(4000, () => console.log("Running on :4000"));
+ ```
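Because `createServer` returns a plain Express app, mounting it under a path prefix of a larger app should also work. A minimal sketch; the `/memory` prefix and the `express` import are assumptions, not from the README:

```typescript
import express from "express";
import { createServer } from "@danielzfliu/memory";

const host = express();

// Mount every Memory endpoint under /memory in an existing app.
host.use("/memory", createServer({ chromaUrl: "http://localhost:8000" }));

host.listen(4000, () => console.log("Host app on :4000"));
```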
+
+ ### Exports
+
+ | Export | Description |
+ |--------|-------------|
+ | `PieceStore` | CRUD + semantic search over tagged text pieces |
+ | `RagPipeline` | Retrieve-then-generate pipeline using `PieceStore` + Ollama |
+ | `EmbeddingClient` | Low-level Ollama embedding wrapper |
+ | `createServer` | Express app factory with all REST endpoints pre-configured |
+ | `MemoryConfig` | Configuration interface (all fields optional with defaults) |
+ | `DEFAULT_MEMORY_CONFIG` | The default values for `MemoryConfig` |
+ | `Piece` | `{ id, content, tags }` |
+ | `QueryOptions` | `{ tags?, topK? }` |
+ | `QueryResult` | `{ piece, score }` |
+ | `RagResult` | `{ answer, sources }` |
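For reference, a sketch of consuming the result types above. The field types (`score: number`, `tags: string[]`, `sources` as an array) are inferred from the shapes in the table, not spelled out in the diff:

```typescript
import type { QueryResult, RagResult } from "@danielzfliu/memory";

// Render search hits; assumes score is a number and tags is a string array.
function formatHits(hits: QueryResult[]): string {
  return hits
    .map(({ piece, score }) => `${score.toFixed(3)} [${piece.tags.join(", ")}] ${piece.content}`)
    .join("\n");
}

// Summarize a RAG result; assumes sources is an array of retrieved pieces.
function formatAnswer(result: RagResult): string {
  return `${result.answer}\n(${result.sources.length} sources)`;
}
```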
+
+ ---
+
  ## Configuration (`MemoryConfig`)
 
+ All fields are optional. Defaults are applied automatically.
+
  | Option | Default | Description |
  |--------|---------|-------------|
  | `chromaUrl` | `http://localhost:8000` | ChromaDB server URL |
  | `ollamaUrl` | `http://localhost:11434` | Ollama server URL |
  | `embeddingModel` | `nomic-embed-text-v2-moe` | Ollama model for embeddings |
- | `generationModel` | `llama3.2` | Ollama model for RAG generation |
+ | `generationModel` | `llama3.2` | Ollama model for RAG generation (used by `createServer`) |
  | `collectionName` | `pieces` | ChromaDB collection name |
 
+ > **Note:** `generationModel` is only used by `createServer`. When constructing `RagPipeline` directly, you pass the model name to its constructor.
+
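A sketch of combining the defaults with an override. Spreading `DEFAULT_MEMORY_CONFIG` is illustrative only; per the table and note above, omitted fields are defaulted automatically:

```typescript
import { DEFAULT_MEMORY_CONFIG, MemoryConfig } from "@danielzfliu/memory";

// Override one field; every omitted field keeps its documented default.
const config: MemoryConfig = {
  ...DEFAULT_MEMORY_CONFIG,
  chromaUrl: "http://localhost:9000",
};
```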
  ## Testing
 
  ```bash
@@ -191,5 +261,6 @@ src/
  tests/
  ├── helpers/     # Shared test fixtures (in-memory ChromaDB mock, etc.)
  ├── unit/        # Unit tests (embeddings, store, rag)
+ ├── manual/      # Manual API tests (used by AI agents with access to a terminal)
  └── integration/ # API integration tests (supertest)
  ```
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "@danielzfliu/memory",
-   "version": "1.0.0",
+   "version": "1.0.2",
    "description": "A local RAG system for storing, searching, and querying tagged text pieces using ChromaDB and Ollama",
    "main": "dist/index.js",
    "types": "dist/index.d.ts",
@@ -13,8 +13,7 @@
    "files": [
      "dist",
      "README.md",
-     "LICENSE",
-     "package_scripts"
+     "LICENSE"
    ],
    "repository": {
      "type": "git",
@@ -1,17 +0,0 @@
- const { spawn } = require("child_process");
-
- // Get port from argument or default to 8000
- const port = process.argv[2] || "8000";
-
- console.log(`🚀 Starting ChromaDB on port ${port}`);
-
- const command = `chroma run --port ${port}`;
-
- const child = spawn(command, [], {
-   stdio: "inherit",
-   shell: true,
- });
-
- child.on("close", (code) => {
-   process.exit(code);
- });
@@ -1,20 +0,0 @@
- const { spawn } = require("child_process");
-
- // Get the port from the command line argument, or default to 11434
- const port = process.argv[2] || "11434";
-
- console.log(`🚀 Starting Ollama on 127.0.0.1:${port}`);
-
- // Run the command with the modified environment variable
- const child = spawn("ollama serve", [], {
-   stdio: "inherit", // Show Ollama's output in your terminal
-   shell: true, // Ensure compatibility across OS
-   env: {
-     ...process.env, // Keep existing environment variables
-     OLLAMA_HOST: `127.0.0.1:${port}`,
-   },
- });
-
- child.on("close", (code) => {
-   process.exit(code);
- });