@danielzfliu/memory 1.0.1 → 1.0.2
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +100 -30
- package/package.json +1 -1
package/README.md
CHANGED
@@ -1,54 +1,57 @@
 [](https://www.npmjs.com/package/@danielzfliu/memory)
 
 # Memory
+
 A fully local Node.js library and REST API for storing, searching, and querying tagged text pieces using ChromaDB for vector storage and Ollama for embeddings + generation.
 
+Two ways to use Memory:
+
+- REST API Server — Clone the repo and run a standalone HTTP server with CRUD, semantic search, and RAG endpoints.
+- npm Package — Install `@danielzfliu/memory` in your own project and use the classes directly, or embed the Express server in your app.
+
+---
+
 ## Prerequisites
 
--
--
--
+- Node.js ≥ 18
+- Ollama running locally ([install](https://ollama.com))
+- ChromaDB server running locally
 
-
-To pull models, run:
+Pull the required models:
 ```bash
 ollama pull nomic-embed-text-v2-moe
 ollama pull llama3.2
 ```
 
-
-```bash
-npm run ollama # or
-npm run ollama:port 11435
-```
+---
 
-
-
-
-
+## Option A: REST API Server
+
+Use this option to run Memory as a standalone HTTP service.
+
+### 1. Setup
 
-### Start ChromaDB
-If used as api:
 ```bash
-
-
+git clone https://github.com/DanielZFLiu/memory.git
+cd memory
+npm install
 ```
 
-
+### 2. Start external services
+
+The repo includes convenience scripts for starting Ollama and ChromaDB:
+
 ```bash
-
-
+npm run ollama                # start Ollama on default port 11434
+npm run ollama:port -- 11435  # start Ollama on a custom port
 
-
-
-& "$env:APPDATA\Python\Python313\Scripts\chroma.exe" run --port 8000
+npm run db                    # start ChromaDB on default port 8000
+npm run db:port -- 9000       # start ChromaDB on a custom port
 ```
 
-
+### 3. Start the server
 
-### REST API Server
 ```bash
-npm install
 npm run dev
 ```
 
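For a quick sanity check of these prerequisites, a small preflight script along the following lines can confirm that both services answer and that the two models are pulled. This is an illustration, not part of the package: Ollama lists pulled models at `GET /api/tags`, and the ChromaDB heartbeat path shown is the v1 API, which may differ on newer ChromaDB releases.

```typescript
// preflight.ts (hypothetical file name): confirm the prerequisites above.
// Run with a runner that supports top-level await, e.g. `npx tsx preflight.ts`.
// Requires Node >= 18 for the global fetch.
const OLLAMA = "http://localhost:11434";
const CHROMA = "http://localhost:8000";

// Ollama lists pulled models at GET /api/tags.
const tags = await fetch(`${OLLAMA}/api/tags`).then((r) => r.json());
const names: string[] = tags.models.map((m: { name: string }) => m.name);
for (const model of ["nomic-embed-text-v2-moe", "llama3.2"]) {
  const pulled = names.some((n) => n.startsWith(model));
  console.log(`${model}: ${pulled ? "pulled" : "missing"}`);
}

// ChromaDB heartbeat: /api/v1/heartbeat on older servers
// (newer releases use /api/v2/heartbeat).
const beat = await fetch(`${CHROMA}/api/v1/heartbeat`);
console.log(`ChromaDB: ${beat.ok ? "reachable" : `HTTP ${beat.status}`}`);
```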
@@ -114,7 +117,35 @@ Returns:
 }
 ```
 
-
+---
+
+## Option B: npm Package
+
+Use this option to integrate Memory into your own Node.js/TypeScript project.
+
+### 1. Install
+
+```bash
+npm install @danielzfliu/memory
+```
+
+### 2. Start external services
+
+You are responsible for running Ollama and ChromaDB yourself:
+
+```bash
+ollama serve            # default port 11434
+chroma run --port 8000  # default port 8000
+```
+
+**Windows note:** If `chroma` is not recognized, the `Scripts` directory may not be on your PATH. Either add it (e.g. `%APPDATA%\Python\Python3xx\Scripts`) or run the executable directly:
+```powershell
+& "$env:APPDATA\Python\Python313\Scripts\chroma.exe" run --port 8000
+```
+
+### 3. Programmatic usage
+
+#### Using PieceStore and RagPipeline directly
 
 ```typescript
 import { PieceStore, RagPipeline, MemoryConfig } from "@danielzfliu/memory";
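Before wiring Memory into application code, the `PieceStore` constructor and `init()` call from the example below can double as a connectivity smoke test. This sketch is illustrative, not from the README; it assumes `init()` creates or opens the ChromaDB collection, so an unreachable server fails here rather than on the first query.

```typescript
// smoke.ts (hypothetical file name): fail fast if either service is down.
// Run with e.g. `npx tsx smoke.ts`.
import { PieceStore } from "@danielzfliu/memory";

const store = new PieceStore({
  chromaUrl: "http://localhost:8000",
  ollamaUrl: "http://localhost:11434",
});

try {
  // Assumption: init() creates/opens the ChromaDB collection, so an
  // unreachable server surfaces here rather than on the first query.
  await store.init();
  console.log("Memory is ready");
} catch (err) {
  console.error("External service check failed:", err);
  process.exit(1);
}
```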
@@ -126,6 +157,7 @@ async function main() {
     embeddingModel: "nomic-embed-text-v2-moe",
   };
 
+  // Store: CRUD + semantic search
   const store = new PieceStore(config);
   await store.init();
 
@@ -147,7 +179,8 @@ async function main() {
   });
   console.log("filtered", filtered);
 
-
+  // RAG: retrieve relevant pieces → generate an answer via Ollama
+  const rag = new RagPipeline(store, config.ollamaUrl!, "llama3.2");
   const answer = await rag.query("What is TypeScript?", {
     tags: ["programming"],
   });
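Per the exports table added further below, the options object accepted by `rag.query` is `QueryOptions` (`{ tags?, topK? }`). A condensed, illustrative variant of the example above with an explicit `topK`, assumed here to cap how many stored pieces are retrieved as context:

```typescript
// Illustrative only; both QueryOptions fields are optional.
import { PieceStore, RagPipeline } from "@danielzfliu/memory";
import type { QueryOptions } from "@danielzfliu/memory";

const store = new PieceStore({ ollamaUrl: "http://localhost:11434" });
await store.init();

const rag = new RagPipeline(store, "http://localhost:11434", "llama3.2");
// Assumption: topK bounds retrieval to the 3 closest pieces before generation.
const opts: QueryOptions = { tags: ["programming"], topK: 3 };
console.log(await rag.query("What is TypeScript?", opts));
```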
@@ -160,16 +193,52 @@ main().catch((err) => {
 });
 ```
 
+#### Embedding the REST API in your own Express app
+
+`createServer` returns a configured Express app you can mount or extend:
+
+```typescript
+import { createServer } from "@danielzfliu/memory";
+
+const app = createServer({
+  chromaUrl: "http://localhost:8000",
+  ollamaUrl: "http://localhost:11434",
+});
+
+app.listen(4000, () => console.log("Running on :4000"));
+```
+
+### Exports
+
+| Export | Description |
+|--------|-------------|
+| `PieceStore` | CRUD + semantic search over tagged text pieces |
+| `RagPipeline` | Retrieve-then-generate pipeline using `PieceStore` + Ollama |
+| `EmbeddingClient` | Low-level Ollama embedding wrapper |
+| `createServer` | Express app factory with all REST endpoints pre-configured |
+| `MemoryConfig` | Configuration interface (all fields optional with defaults) |
+| `DEFAULT_MEMORY_CONFIG` | The default values for `MemoryConfig` |
+| `Piece` | `{ id, content, tags }` |
+| `QueryOptions` | `{ tags?, topK? }` |
+| `QueryResult` | `{ piece, score }` |
+| `RagResult` | `{ answer, sources }` |
+
+---
+
 ## Configuration (`MemoryConfig`)
 
+All fields are optional. Defaults are applied automatically.
+
 | Option | Default | Description |
 |--------|---------|-------------|
 | `chromaUrl` | `http://localhost:8000` | ChromaDB server URL |
 | `ollamaUrl` | `http://localhost:11434` | Ollama server URL |
 | `embeddingModel` | `nomic-embed-text-v2-moe` | Ollama model for embeddings |
-| `generationModel` | `llama3.2` | Ollama model for RAG generation |
+| `generationModel` | `llama3.2` | Ollama model for RAG generation (used by `createServer`) |
 | `collectionName` | `pieces` | ChromaDB collection name |
 
+> **Note:** `generationModel` is only used by `createServer`. When constructing `RagPipeline` directly, you pass the model name to its constructor.
+
 ## Testing
 
 ```bash
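The new `createServer` section states that the returned Express app can be mounted or extended. A minimal sketch of the mount case, relying only on standard Express behavior; the `/memory` prefix and port 4000 are arbitrary choices, not package defaults:

```typescript
import express from "express";
import { createServer } from "@danielzfliu/memory";

// Build the pre-configured Memory app, then mount it under a prefix inside
// a larger application. An Express app is itself valid middleware.
const memory = createServer({
  chromaUrl: "http://localhost:8000",
  ollamaUrl: "http://localhost:11434",
});

const app = express();
app.use("/memory", memory);
app.listen(4000, () => console.log("App with embedded Memory API on :4000"));
```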
@@ -192,5 +261,6 @@ src/
 tests/
 ├── helpers/       # Shared test fixtures (in-memory ChromaDB mock, etc.)
 ├── unit/          # Unit tests (embeddings, store, rag)
+├── manual/        # Manual API tests (used by AI agents with access to a terminal)
 └── integration/   # API integration tests (supertest)
 ```
package/package.json
CHANGED