mcp-local-rag 0.2.3 → 0.4.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +194 -372
- package/dist/chunker/index.d.ts +1 -32
- package/dist/chunker/index.d.ts.map +1 -1
- package/dist/chunker/index.js +3 -62
- package/dist/chunker/index.js.map +1 -1
- package/dist/chunker/semantic-chunker.d.ts +81 -0
- package/dist/chunker/semantic-chunker.d.ts.map +1 -0
- package/dist/chunker/semantic-chunker.js +248 -0
- package/dist/chunker/semantic-chunker.js.map +1 -0
- package/dist/chunker/sentence-splitter.d.ts +16 -0
- package/dist/chunker/sentence-splitter.d.ts.map +1 -0
- package/dist/chunker/sentence-splitter.js +114 -0
- package/dist/chunker/sentence-splitter.js.map +1 -0
- package/dist/embedder/index.d.ts +3 -3
- package/dist/embedder/index.d.ts.map +1 -1
- package/dist/embedder/index.js +8 -5
- package/dist/embedder/index.js.map +1 -1
- package/dist/index.js +52 -2
- package/dist/index.js.map +1 -1
- package/dist/parser/index.d.ts +1 -1
- package/dist/parser/index.d.ts.map +1 -1
- package/dist/parser/index.js +8 -6
- package/dist/parser/index.js.map +1 -1
- package/dist/server/index.d.ts +8 -5
- package/dist/server/index.d.ts.map +1 -1
- package/dist/server/index.js +27 -17
- package/dist/server/index.js.map +1 -1
- package/dist/vectordb/index.d.ts +66 -7
- package/dist/vectordb/index.d.ts.map +1 -1
- package/dist/vectordb/index.js +271 -19
- package/dist/vectordb/index.js.map +1 -1
- package/package.json +2 -4
package/README.md
CHANGED
@@ -1,14 +1,35 @@
# MCP Local RAG

-
+[](https://www.npmjs.com/package/mcp-local-rag)
+[](https://opensource.org/licenses/MIT)

-
+Local RAG for developers using MCP.
+Hybrid search (BM25 + semantic) for exact technical terms — fully private, zero setup.
+
+## Features
+
+- **Code-aware hybrid search**
+  Keyword (BM25) + semantic search combined. Exact terms like `useEffect`, error codes, and class names are matched reliably—not just semantically guessed.
+
+- **Smart semantic chunking**
+  Chunks documents by meaning, not character count. Uses embedding similarity to find natural topic boundaries—keeping related content together and splitting where topics change.
+
+- **Quality-first result filtering**
+  Groups results by relevance gaps instead of arbitrary top-K cutoffs. Get fewer but more trustworthy chunks.
+
+- **Runs entirely locally**
+  No API keys, no cloud, no data leaving your machine. Works fully offline after the first model download.
+
+- **Zero-friction setup**
+  One `npx` command. No Docker, no Python, no servers to manage. Designed for Cursor, Codex, and Claude Code via MCP.

## Quick Start

-
+Set `BASE_DIR` to the folder you want to search. Documents must live under it.
+
+Add the MCP server to your AI coding tool:

-**For Cursor**
+**For Cursor** — Add to `~/.cursor/mcp.json`:
```json
{
"mcpServers": {
@@ -23,7 +44,7 @@ Add the MCP server to your AI coding tool. Choose your tool below:
}
```

-**For Codex**
+**For Codex** — Add to `~/.codex/config.toml`:
```toml
[mcp_servers.local-rag]
command = "npx"
@@ -33,103 +54,129 @@ args = ["-y", "mcp-local-rag"]
BASE_DIR = "/path/to/your/documents"
```

-**For Claude Code**
+**For Claude Code** — Run this command:
```bash
claude mcp add local-rag --scope user --env BASE_DIR=/path/to/your/documents -- npx -y mcp-local-rag
```

-Restart your tool, then start using:
+Restart your tool, then start using it:
+
```
-"Ingest api-spec.pdf"
-
+You: "Ingest api-spec.pdf"
+Assistant: Successfully ingested api-spec.pdf (47 chunks created)
+
+You: "What does the API documentation say about authentication?"
+Assistant: Based on the documentation, authentication uses OAuth 2.0 with JWT tokens.
+The flow is described in section 3.2...
```

That's it. No installation, no Docker, no complex setup.

## Why This Exists

-You want
+You want AI to search your documents—technical specs, research papers, internal docs. But most solutions send your files to external APIs.

-This
+**Privacy.** Your documents might contain sensitive data. This runs entirely locally.

-**
+**Cost.** External embedding APIs charge per use. This is free after the initial model download.

-**
+**Offline.** Works without internet after setup.

-**
+**Code search.** Pure semantic search misses exact terms like `useEffect` or `ERR_CONNECTION_REFUSED`. Hybrid search catches both meaning and exact matches.

-
+## Usage

-
+The server provides 5 MCP tools: ingest, search, list, delete, status
+(`ingest_file`, `query_documents`, `list_files`, `delete_file`, `status`).
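For context, the five tools listed in the new README are ordinary MCP tools, so any MCP client can drive them programmatically. A minimal sketch using the official TypeScript SDK (the tool names come from the README above; the argument shapes are assumptions, not the package's documented schema):

```typescript
// Sketch: calling mcp-local-rag's tools from an MCP client.
// Argument names ("path", "query", "limit") are assumed for illustration.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

const transport = new StdioClientTransport({
  command: "npx",
  args: ["-y", "mcp-local-rag"],
  env: { BASE_DIR: "/path/to/your/documents" },
});

const client = new Client({ name: "example-client", version: "1.0.0" });
await client.connect(transport);

// Ingest a file, then query it.
await client.callTool({
  name: "ingest_file",
  arguments: { path: "/path/to/your/documents/api-spec.pdf" },
});
const result = await client.callTool({
  name: "query_documents",
  arguments: { query: "authentication", limit: 10 },
});
console.log(result.content);
```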

-
+### Ingesting Documents

-
+```
+"Ingest the document at /Users/me/docs/api-spec.pdf"
+```

-
+Supports PDF, DOCX, TXT, and Markdown. The server extracts text, splits it into chunks, generates embeddings locally, and stores everything in a local vector database.

-
+Re-ingesting the same file replaces the old version automatically.

-
+### Searching Documents
+
+```
+"What does the API documentation say about authentication?"
+"Find information about rate limiting"
+"Search for error handling best practices"
+```

-
+The hybrid search combines keyword matching (BM25) with semantic search. This means `useEffect` finds documents containing that exact term, not just semantically similar React concepts.
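For intuition, hybrid ranking of this kind is typically a weighted combination of a keyword score and a semantic score. A toy sketch (hypothetical names and formula, not this package's internals; compare `RAG_HYBRID_WEIGHT` in the tuning table below):

```typescript
// Toy hybrid ranking: a BM25-style keyword score boosts the semantic score.
// All names and the formula are hypothetical, for illustration only.
interface ScoredChunk {
  text: string;
  semanticScore: number; // e.g. cosine similarity, higher is better
  keywordScore: number;  // e.g. normalized BM25 score
}

function hybridRank(chunks: ScoredChunk[], keywordWeight = 0.6): ScoredChunk[] {
  return chunks
    .map((c) => ({
      ...c,
      // weight 0 = semantic only; higher = stronger keyword boost
      combined: c.semanticScore + keywordWeight * c.keywordScore,
    }))
    .sort((a, b) => b.combined - a.combined);
}
```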

-
-- **LanceDB** for vector storage (file-based, no server needed)
-- **Transformers.js** for embeddings (runs in Node.js, no Python)
-- **all-MiniLM-L6-v2** model (384 dimensions, good balance of speed and accuracy)
-- **RecursiveCharacterTextSplitter** for intelligent text chunking
+Results include text content, source file, and relevance score. Adjust result count with `limit` (1-20, default 10).

-
+### Managing Files

-
+```
+"List all ingested files"        # See what's indexed
+"Delete old-spec.pdf from RAG"   # Remove a file
+"Show RAG server status"         # Check system health
+```

-
-- **Download size**: ~90MB (model files)
-- **Disk usage after caching**: ~120MB (includes ONNX runtime cache)
-- **Time**: 1-2 minutes on a decent connection
-- **First operation delay**: Your initial ingest or search request will wait for the model download to complete
+## Search Tuning

-
+Adjust these for your use case:

-
+| Variable | Default | Description |
+|----------|---------|-------------|
+| `RAG_HYBRID_WEIGHT` | `0.6` | Keyword boost factor. 0 = semantic only, higher = stronger keyword boost. |
+| `RAG_GROUPING` | (not set) | `similar` for top group only, `related` for top 2 groups. |
+| `RAG_MAX_DISTANCE` | (not set) | Filter out low-relevance results (e.g., `0.5`). |

-
+Example (stricter, code-focused):
+```json
+"env": {
+  "RAG_HYBRID_WEIGHT": "0.7",
+  "RAG_GROUPING": "similar"
+}
+```
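The `RAG_GROUPING` values above correspond to splitting the ranked list at unusually large relevance gaps. A toy sketch of the idea (assumed logic, for illustration only): `similar` would return only the first group, `related` the first two.

```typescript
// Toy relevance-gap grouping: split a distance-sorted result list where the
// gap between neighbors is much larger than the average gap so far.
interface Hit {
  text: string;
  distance: number; // lower = more relevant
}

function groupByGaps(hits: Hit[], gapFactor = 2): Hit[][] {
  if (hits.length === 0) return [];
  const sorted = [...hits].sort((a, b) => a.distance - b.distance);
  const groups: Hit[][] = [[sorted[0]]];
  for (let i = 1; i < sorted.length; i++) {
    const gap = sorted[i].distance - sorted[i - 1].distance;
    const meanGap =
      (sorted[i].distance - sorted[0].distance) / i || Number.EPSILON;
    if (gap > gapFactor * meanGap) groups.push([]); // unusually large gap
    groups[groups.length - 1].push(sorted[i]);
  }
  return groups; // "similar" ~ groups[0]; "related" ~ groups[0] and groups[1]
}
```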

-##
+## How It Works

-**
+**TL;DR:**
+- Documents are chunked by semantic similarity, not fixed character counts
+- Each chunk is embedded locally using Transformers.js
+- Search uses semantic similarity with keyword boost for exact matches
+- Results are filtered based on relevance gaps, not raw scores

-
+### Details

-
+When you ingest a document, the parser extracts text based on file type (PDF via `unpdf`, DOCX via `mammoth`, text files directly).

-
+The semantic chunker splits text into sentences, then groups them using embedding similarity. It finds natural topic boundaries where the meaning shifts—keeping related content together instead of cutting at arbitrary character limits. This produces chunks that are coherent units of meaning, typically 500-1000 characters.
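A compact sketch of that boundary detection, assuming a sentence list and an embedding function (illustrative only; the actual logic ships in `dist/chunker/semantic-chunker.js` and may differ):

```typescript
// Sketch: split where the similarity of neighboring sentence embeddings
// drops below a threshold, i.e. where the topic appears to shift.
type Embed = (text: string) => Promise<number[]>;

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

async function semanticChunks(
  sentences: string[],
  embed: Embed,
  threshold = 0.5, // hypothetical boundary threshold
): Promise<string[]> {
  if (sentences.length === 0) return [];
  const vectors = await Promise.all(sentences.map(embed));
  const chunks: string[][] = [[sentences[0]]];
  for (let i = 1; i < sentences.length; i++) {
    if (cosine(vectors[i - 1], vectors[i]) < threshold) chunks.push([]);
    chunks[chunks.length - 1].push(sentences[i]);
  }
  return chunks.map((c) => c.join(" "));
}
```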

-
+Each chunk goes through the Transformers.js embedding model (`all-MiniLM-L6-v2`), converting text into 384-dimensional vectors. Vectors are stored in LanceDB, a file-based vector database requiring no server process.
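Producing such a vector with Transformers.js looks roughly like this (standard `@xenova/transformers` usage, shown for context; not code from this package):

```typescript
// Local embedding with Transformers.js; the model is downloaded and cached
// on first use, then everything runs offline on the CPU.
import { pipeline } from "@xenova/transformers";

const extractor = await pipeline(
  "feature-extraction",
  "Xenova/all-MiniLM-L6-v2",
);

// Mean-pooled, normalized 384-dimensional vector for one chunk of text.
const output = await extractor("How does authentication work?", {
  pooling: "mean",
  normalize: true,
});
const vector = Array.from(output.data as Float32Array);
console.log(vector.length); // 384
```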

-
+When you search:
+1. Your query becomes a vector using the same model
+2. Semantic (vector) search finds the most relevant chunks
+3. Quality filters apply (distance threshold, grouping)
+4. Keyword matches boost rankings for exact term matching

-
+The keyword boost ensures exact terms like `useEffect` or error codes rank higher when they match.
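Steps 1 and 2 map onto a plain LanceDB vector query, roughly as below (generic `@lancedb/lancedb` usage; the table name `chunks` and its schema are assumptions, not this package's actual layout):

```typescript
// Sketch: nearest-neighbor lookup over locally stored chunk vectors.
// Table name and schema are assumed for illustration.
import * as lancedb from "@lancedb/lancedb";

async function vectorSearch(queryVector: number[], dbPath = "./lancedb") {
  const db = await lancedb.connect(dbPath); // file-based, no server process
  const table = await db.openTable("chunks");
  return table.search(queryVector).limit(10).toArray();
}
```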

-
-
-command = "npx"
-args = ["-y", "mcp-local-rag"]
+<details>
+<summary><strong>Configuration</strong></summary>

-
-BASE_DIR = "/path/to/your/documents"
-DB_PATH = "./lancedb"
-CACHE_DIR = "./models"
-```
+### Environment Variables

-
+| Variable | Default | Description |
+|----------|---------|-------------|
+| `BASE_DIR` | Current directory | Document root directory (security boundary) |
+| `DB_PATH` | `./lancedb/` | Vector database location |
+| `CACHE_DIR` | `./models/` | Model cache directory |
+| `MODEL_NAME` | `Xenova/all-MiniLM-L6-v2` | HuggingFace model ID ([available models](https://huggingface.co/models?library=transformers.js&pipeline_tag=feature-extraction)) |
+| `MAX_FILE_SIZE` | `104857600` (100MB) | Maximum file size in bytes |

-###
+### Client-Specific Setup

-
-- **Global** (all projects): `~/.cursor/mcp.json`
-- **Project-specific**: `.cursor/mcp.json` in your project root
+**Cursor** — Global: `~/.cursor/mcp.json`, Project: `.cursor/mcp.json`

```json
{
@@ -138,154 +185,125 @@ Add to your Cursor settings:
"command": "npx",
"args": ["-y", "mcp-local-rag"],
"env": {
-"BASE_DIR": "/path/to/your/documents"
-"DB_PATH": "./lancedb",
-"CACHE_DIR": "./models"
+"BASE_DIR": "/path/to/your/documents"
}
}
}
}
```

-
-
-Run in your project directory to enable for that project:
+**Codex** — `~/.codex/config.toml` (note: must use `mcp_servers` with underscore)

-```
-
-
-
-
-Or add globally for all projects:
+```toml
+[mcp_servers.local-rag]
+command = "npx"
+args = ["-y", "mcp-local-rag"]

-
-
+[mcp_servers.local-rag.env]
+BASE_DIR = "/path/to/your/documents"
```

-**
+**Claude Code**:

```bash
claude mcp add local-rag --scope user \
--env BASE_DIR=/path/to/your/documents \
---env DB_PATH=./lancedb \
---env CACHE_DIR=./models \
-- npx -y mcp-local-rag
```

-###
+### First Run

-
-|----------|---------|-------------|-------------|
-| `BASE_DIR` | Current directory | Document root directory. Server only accesses files within this path (prevents accidental system file access). | Any valid path |
-| `DB_PATH` | `./lancedb/` | Vector database storage location. Can grow large with many documents. | Any valid path |
-| `CACHE_DIR` | `./models/` | Model cache directory. After first download, model stays here for offline use. | Any valid path |
-| `MODEL_NAME` | `Xenova/all-MiniLM-L6-v2` | HuggingFace model identifier. Must be Transformers.js compatible. See [available models](https://huggingface.co/models?library=transformers.js&pipeline_tag=feature-extraction&sort=trending). **Note:** Changing models requires re-ingesting all documents as embeddings from different models are incompatible. | HF model ID |
-| `MAX_FILE_SIZE` | `104857600` (100MB) | Maximum file size in bytes. Larger files rejected to prevent memory issues. | 1MB - 500MB |
-| `CHUNK_SIZE` | `512` | Characters per chunk. Larger = more context but slower processing. | 128 - 2048 |
-| `CHUNK_OVERLAP` | `100` | Overlap between chunks. Preserves context across boundaries. | 0 - (CHUNK_SIZE/2) |
+The embedding model (~90MB) downloads on first use. Takes 1-2 minutes, then works offline.

-
+### Security

-**
-- **
-- **
-- **Claude Code**: No restart needed—changes apply immediately
+- **Path restriction**: Only files within `BASE_DIR` are accessible
+- **Local only**: No network requests after model download
+- **Model source**: Official HuggingFace repository ([verify here](https://huggingface.co/Xenova/all-MiniLM-L6-v2))

-
+</details>

-
+<details>
+<summary><strong>Performance</strong></summary>

-
+Tested on MacBook Pro M1 (16GB RAM), Node.js 22:

-
-"Ingest the document at /Users/me/docs/api-spec.pdf"
-```
+**Query Speed**: ~1.2 seconds for 10,000 chunks (p90 < 3s)

-**
+**Ingestion** (10MB PDF):
+- PDF parsing: ~8s
+- Chunking: ~2s
+- Embedding: ~30s
+- DB insertion: ~5s

-
-codex "Ingest the document at /Users/me/docs/api-spec.pdf into the RAG system"
-```
+**Memory**: ~200MB idle, ~800MB peak (50MB file ingestion)

-**
+**Concurrency**: Handles 5 parallel queries without degradation.

-
-"Ingest the document at /Users/me/docs/api-spec.pdf"
-```
+</details>

-
+<details>
+<summary><strong>Troubleshooting</strong></summary>

-
-1. Validates the file exists and is under 100MB
-2. Extracts text (handling PDF/DOCX/TXT/MD formats)
-3. Splits into chunks (512 chars, 100 char overlap)
-4. Generates embeddings for each chunk
-5. Stores in the vector database
+### "No results found"

-
-
-### Searching Documents
+Documents must be ingested first. Run `"List all ingested files"` to verify.

-
+### Model download failed

-
-"What does the API documentation say about authentication?"
-"Find information about rate limiting"
-"Search for error handling best practices"
-```
+Check internet connection. If behind a proxy, configure network settings. The model can also be [downloaded manually](https://huggingface.co/Xenova/all-MiniLM-L6-v2).

-
-1. Converts your query to an embedding vector
-2. Searches the vector database for similar chunks
-3. Returns the top 5 matches with similarity scores
+### "File too large"

-
+Default limit is 100MB. Split large files or increase `MAX_FILE_SIZE`.

-
+### Slow queries

-
-"Search for database optimization tips, return 10 results"
-```
+Check chunk count with `status`. Large documents with many chunks may slow queries. Consider splitting very large files.

-
+### "Path outside BASE_DIR"

-
+Ensure file paths are within `BASE_DIR`. Use absolute paths.

-
+### MCP client doesn't see tools

-
-
-
+1. Verify config file syntax
+2. Restart client completely (Cmd+Q on Mac for Cursor)
+3. Test directly: `npx mcp-local-rag` should run without errors

-
+</details>

-
+<details>
+<summary><strong>FAQ</strong></summary>

-
-
-```
+**Is this really private?**
+Yes. After model download, nothing leaves your machine. Verify with network monitoring.

-
+**Can I use this offline?**
+Yes, after the first model download (~90MB).

-
+**How does this compare to cloud RAG?**
+Cloud services offer better accuracy at scale but require sending data externally. This trades some accuracy for complete privacy and zero runtime cost.

-
-
-```
+**What file formats are supported?**
+PDF, DOCX, TXT, Markdown. Not yet: Excel, PowerPoint, images, HTML.

-
+**Can I change the embedding model?**
+Yes, but you must delete your database and re-ingest all documents. Different models produce incompatible vector dimensions.

-
+**GPU acceleration?**
+Transformers.js runs on CPU. GPU support is experimental. CPU performance is adequate for most use cases.

-
+**Multi-user support?**
+No. Designed for single-user, local access. Multi-user would require authentication/access control.

-
-
-```
+**How to backup?**
+Copy `DB_PATH` directory (default: `./lancedb/`).

-
+</details>

-
+<details>
+<summary><strong>Development</strong></summary>

### Building from Source

@@ -295,247 +313,51 @@ cd mcp-local-rag
npm install
```

-###
+### Testing

```bash
-# Run all tests
-npm test
-
-# Run with coverage
-npm run test:coverage
-
-# Watch mode for development
-npm run test:watch
+npm test              # Run all tests
+npm run test:coverage # With coverage
+npm run test:watch    # Watch mode
```

-The test suite includes:
-- Unit tests for each component
-- Integration tests for the full ingestion and search flow
-- Security tests for path traversal protection
-- Performance tests verifying query speed targets
-
### Code Quality

```bash
-#
-npm run
-
-#
-npm run check:fix
-
-# Check circular dependencies
-npm run check:deps
-
-# Full quality check (runs everything)
-npm run check:all
+npm run type-check    # TypeScript check
+npm run check:fix     # Lint and format
+npm run check:deps    # Circular dependency check
+npm run check:all     # Full quality check
```

### Project Structure

```
src/
-index.ts
-server/
-parser/
-chunker/
-embedder/
-vectordb/
-__tests__/
+index.ts      # Entry point
+server/       # MCP tool handlers
+parser/       # PDF, DOCX, TXT, MD parsing
+chunker/      # Text splitting
+embedder/     # Transformers.js embeddings
+vectordb/     # LanceDB operations
+__tests__/    # Test suites
```

-
-- **Parser** validates file paths and extracts text
-- **Chunker** splits text into overlapping segments
-- **Embedder** generates 384-dimensional vectors
-- **VectorStore** handles all database operations
-- **RAGServer** orchestrates everything and exposes MCP tools
-
-## Performance
-
-**Test Environment**: MacBook Pro M1 (16GB RAM), tested with v0.1.3 on Node.js 22 (January 2025)
-
-**Query Performance**:
-- Average: 1.2 seconds for 10,000 indexed chunks (5 results)
-- Target: p90 < 3 seconds ✓
-
-**Ingestion Speed** (10MB PDF):
-- Total: ~45 seconds
-- PDF parsing: ~8 seconds (17%)
-- Text chunking: ~2 seconds (4%)
-- Embedding generation: ~30 seconds (67%)
-- Database insertion: ~5 seconds (11%)
-
-**Memory Usage**:
-- Baseline: ~200MB idle
-- Peak: ~800MB when ingesting 50MB file
-- Target: < 1GB ✓
-
-**Concurrent Queries**: Handles 5 parallel queries without degradation. LanceDB's async API allows non-blocking operations.
-
-**Note**: Your results will vary based on hardware, especially CPU speed (embeddings run on CPU, not GPU).
-
-## Troubleshooting
-
-### "No results found" when searching
-
-**Cause**: Documents must be ingested before searching.
-
-**Solution**:
-1. First ingest documents: `"Ingest /path/to/document.pdf"`
-2. Verify ingestion: `"List all ingested files"`
-3. Then search: `"Search for [your query]"`
-
-**Common mistake**: Trying to search immediately after configuration without ingesting any documents.
-
-### "Model download failed"
-
-The embedding model downloads from HuggingFace on first use (when you ingest or search for the first time). If you're behind a proxy or firewall, you might need to configure network settings.
-
-**When it happens**: Your first ingest or search operation will trigger the download. If it fails, you'll see a detailed error message with troubleshooting guidance (network issues, disk space, cache corruption).
-
-**What to do**: The error message provides specific recommendations. Common solutions:
-1. Check your internet connection and retry the operation
-2. Ensure you have sufficient disk space (~120MB needed)
-3. If problems persist, delete the cache directory and try again
-
-Alternatively, download the model manually:
-1. Visit https://huggingface.co/Xenova/all-MiniLM-L6-v2
-2. Download the model files
-3. Set CACHE_DIR to where you saved them
-
-### "File too large" error
-
-Default limit is 100MB. For larger files:
-- Split them into smaller documents
-- Or increase MAX_FILE_SIZE in your config (be aware of memory usage)
-
-### Slow query performance
-
-If queries take longer than expected:
-- Check how many chunks you have indexed (`status` command)
-- Consider the hardware (embeddings are CPU-intensive)
-- Try reducing CHUNK_SIZE to create fewer chunks
-
-### "Path outside BASE_DIR" error
-
-The server restricts file access to BASE_DIR for security. Make sure your file path is within that directory. Check for:
-- Correct BASE_DIR setting in your MCP config
-- Relative paths vs absolute paths
-- Typos in the file path
-
-### MCP client doesn't see the tools
-
-**For Cursor:**
-1. Open Settings → Features → Model Context Protocol
-2. Verify the server configuration is saved
-3. Restart Cursor completely
-4. Check the MCP connection status in the status bar
-
-**For Codex CLI:**
-1. Check `~/.codex/config.toml` to verify the configuration
-2. Ensure the section name is `[mcp_servers.local-rag]` (with underscore)
-3. Test the server directly: `npx mcp-local-rag` should run without errors
-4. Restart Codex CLI or IDE extension
-5. Check for error messages when Codex starts
-
-**For Claude Code:**
-1. Run `claude mcp list` to see configured servers
-2. Verify the server appears in the list
-3. Check `~/.config/claude/mcp_config.json` for syntax errors
-4. Test the server directly: `npx mcp-local-rag` should run without errors
-
-**Common issues:**
-- Invalid JSON syntax in config files
-- Wrong file paths in BASE_DIR setting
-- Server binary not found (try global install: `npm install -g mcp-local-rag`)
-- Firewall blocking local communication
-
-## How It Works
-
-When you ingest a document, the parser extracts text based on the file type. PDFs use `pdf-parse`, DOCX uses `mammoth`, and text files are read directly.
-
-The chunker then splits the text using LangChain's RecursiveCharacterTextSplitter. It tries to break on natural boundaries (paragraphs, sentences) while keeping chunks around 512 characters. Adjacent chunks overlap by 100 characters to preserve context.
-
-Each chunk goes through the Transformers.js embedding model, which converts text into a 384-dimensional vector representing its semantic meaning. This happens in batches of 8 chunks at a time for efficiency.
-
-Vectors are stored in LanceDB, a columnar vector database that works with local files. No server process, no complex setup. It's just a directory with data files.
-
-When you search, your query becomes a vector using the same model. LanceDB finds the chunks with vectors most similar to your query vector (using cosine similarity). The top matches return to your MCP client with their original text and metadata.
-
-The beauty of this approach: semantically similar text has similar vectors, even if the words are different. "authentication process" and "how users log in" will match each other, unlike keyword search.
-
-## FAQ
-
-**Is this really private?**
-
-Yes. After the initial model download, nothing leaves your machine. You can verify with network monitoring tools—no outbound requests during ingestion or search.
-
-**Can I use this offline?**
-
-Yes, once the model is cached. The first run needs internet to download the model (~90MB), but after that, everything works offline.
-
-**How does this compare to cloud RAG services?**
-
-Cloud services (OpenAI, Pinecone, etc.) typically offer better accuracy and scale. But they require sending your documents externally, ongoing costs, and internet connectivity. This project trades some accuracy for complete privacy and zero runtime cost.
-
-**What file formats are supported?**
-
-Currently supported:
-- **PDF**: `.pdf` (uses pdf-parse)
-- **Microsoft Word**: `.docx` (uses mammoth, not `.doc`)
-- **Plain Text**: `.txt`
-- **Markdown**: `.md`, `.markdown`
-
-**Not yet supported**:
-- Excel/CSV (`.xlsx`, `.csv`)
-- PowerPoint (`.pptx`)
-- Images with OCR (`.jpg`, `.png`)
-- HTML (`.html`)
-- Old Word documents (`.doc`)
-
-Want support for another format? [Open an issue](https://github.com/shinpr/mcp-local-rag/issues/new) with your use case.
-
-**Can I customize the embedding model?**
-
-Yes, set MODEL_NAME to any Transformers.js-compatible model from HuggingFace. Keep in mind that different models have different vector dimensions, so you'll need to rebuild your database if you switch.
-
-**How much does accuracy depend on the model?**
-
-`all-MiniLM-L6-v2` is optimized for English and performs well for technical documentation. For other languages, consider multilingual models like `multilingual-e5-small`. For higher accuracy, try larger models—but expect slower processing.
-
-**What about GPU acceleration?**
-
-Transformers.js runs on CPU by default. GPU support is experimental and varies by platform. For most use cases, CPU performance is adequate (embeddings are reasonably fast even without GPU).
-
-**Can multiple people share a database?**
-
-The current design assumes single-user, local access. For multi-user scenarios, you'd need to implement authentication and access control—both out of scope for this project's privacy-first design.
-
-**How do I back up my data?**
-
-Copy your DB_PATH directory (default: `./lancedb/`). That's your entire vector database. Copy BASE_DIR for your original documents. Both are just files—no special export needed.
+</details>

## Contributing

-Contributions
+Contributions welcome. Before submitting a PR:

-1. Run
-2.
+1. Run tests: `npm test`
+2. Check quality: `npm run check:all`
3. Add tests for new features
-4. Update
+4. Update docs if behavior changes

## License

-MIT License
-
-Free for personal and commercial use. No attribution required, but appreciated.
+MIT License. Free for personal and commercial use.

## Acknowledgments

-Built with
-- [Model Context Protocol](https://modelcontextprotocol.io/) by Anthropic
-- [LanceDB](https://lancedb.com/) for vector storage
-- [Transformers.js](https://huggingface.co/docs/transformers.js) by HuggingFace
-- [LangChain.js](https://js.langchain.com/) for text splitting
-
-Created as a practical tool for developers who want AI-powered document search without compromising privacy.
+Built with [Model Context Protocol](https://modelcontextprotocol.io/) by Anthropic, [LanceDB](https://lancedb.com/), and [Transformers.js](https://huggingface.co/docs/transformers.js).