mcp-local-rag 0.2.3 → 0.4.0

package/README.md CHANGED
@@ -1,14 +1,35 @@
  # MCP Local RAG

- A privacy-first document search server that runs entirely on your machine. No API keys, no cloud services, no data leaving your computer.
+ [![npm version](https://img.shields.io/npm/v/mcp-local-rag.svg)](https://www.npmjs.com/package/mcp-local-rag)
+ [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)

- Built for the Model Context Protocol (MCP), this lets you use Cursor, Codex, Claude Code, or any MCP client to search through your local documents using semantic search—without sending anything to external services.
+ Local RAG for developers using MCP.
+ Hybrid search (BM25 + semantic) for exact technical terms — fully private, zero setup.
+
+ ## Features
+
+ - **Code-aware hybrid search**
+   Keyword (BM25) + semantic search combined. Exact terms like `useEffect`, error codes, and class names are matched reliably—not just semantically guessed.
+
+ - **Smart semantic chunking**
+   Chunks documents by meaning, not character count. Uses embedding similarity to find natural topic boundaries—keeping related content together and splitting where topics change.
+
+ - **Quality-first result filtering**
+   Groups results by relevance gaps instead of arbitrary top-K cutoffs. Get fewer but more trustworthy chunks.
+
+ - **Runs entirely locally**
+   No API keys, no cloud, no data leaving your machine. Works fully offline after the first model download.
+
+ - **Zero-friction setup**
+   One `npx` command. No Docker, no Python, no servers to manage. Designed for Cursor, Codex, and Claude Code via MCP.

  ## Quick Start

- Add the MCP server to your AI coding tool. Choose your tool below:
+ Set `BASE_DIR` to the folder you want to search. Documents must live under it.
+
+ Add the MCP server to your AI coding tool:

- **For Cursor** - Add to `~/.cursor/mcp.json`:
+ **For Cursor** - Add to `~/.cursor/mcp.json`:

  ```json
  {
    "mcpServers": {
@@ -23,7 +44,7 @@ Add the MCP server to your AI coding tool. Choose your tool below:
  }
  ```

- **For Codex** - Add to `~/.codex/config.toml`:
+ **For Codex** - Add to `~/.codex/config.toml`:

  ```toml
  [mcp_servers.local-rag]
  command = "npx"
@@ -33,103 +54,129 @@ args = ["-y", "mcp-local-rag"]
  BASE_DIR = "/path/to/your/documents"
  ```

- **For Claude Code** - Run this command:
+ **For Claude Code** - Run this command:

  ```bash
  claude mcp add local-rag --scope user --env BASE_DIR=/path/to/your/documents -- npx -y mcp-local-rag
  ```

- Restart your tool, then start using:
+ Restart your tool, then start using it:
+
  ```
- "Ingest api-spec.pdf"
- "What does this document say about authentication?"
+ You: "Ingest api-spec.pdf"
+ Assistant: Successfully ingested api-spec.pdf (47 chunks created)
+
+ You: "What does the API documentation say about authentication?"
+ Assistant: Based on the documentation, authentication uses OAuth 2.0 with JWT tokens.
+ The flow is described in section 3.2...
  ```

  That's it. No installation, no Docker, no complex setup.

  ## Why This Exists

- You want to use AI to search through your documents. Maybe they're technical specs, research papers, internal documentation, or meeting notes. The problem: most solutions require sending your files to external APIs.
+ You want AI to search your documents: technical specs, research papers, internal docs. But most solutions send your files to external APIs.

- This creates three issues:
+ **Privacy.** Your documents might contain sensitive data. This runs entirely locally.

- **Privacy concerns.** Your documents might contain sensitive information—client data, proprietary research, personal notes. Sending them to third-party services means trusting them with that data.
+ **Cost.** External embedding APIs charge per use. This is free after the initial model download.

- **Cost at scale.** External embedding APIs charge per use. For large document sets or frequent searches, costs add up quickly.
+ **Offline.** Works without internet after setup.

- **Network dependency.** If you're offline or have limited connectivity, you can't search your own documents.
+ **Code search.** Pure semantic search misses exact terms like `useEffect` or `ERR_CONNECTION_REFUSED`. Hybrid search catches both meaning and exact matches.

- This project solves these problems by running everything locally. Documents never leave your machine. The embedding model downloads once, then works offline. And it's free to use as much as you want.
+ ## Usage

- ## What You Get
+ The server provides five MCP tools: ingest, search, list, delete, and status
+ (`ingest_file`, `query_documents`, `list_files`, `delete_file`, `status`).

- The server provides five tools through MCP:
+ ### Ingesting Documents

- **Document ingestion** handles PDF, DOCX, TXT, and Markdown files. Point it at a file, and it extracts the text, splits it into searchable chunks, generates embeddings using a local model, and stores everything in a local vector database. If you ingest the same file again, it replaces the old version—no duplicate data.
+ ```
+ "Ingest the document at /Users/me/docs/api-spec.pdf"
+ ```

- **Semantic search** lets you query in natural language. Instead of keyword matching, it understands meaning. Ask "how does authentication work" and it finds relevant sections even if they use different words like "login flow" or "credential validation."
+ Supports PDF, DOCX, TXT, and Markdown. The server extracts text, splits it into chunks, generates embeddings locally, and stores everything in a local vector database.

- **File management** shows what you've ingested and when. You can see how many chunks each file produced and verify everything is indexed correctly.
+ Re-ingesting the same file replaces the old version automatically.

- **File deletion** removes ingested documents from the vector database. When you delete a file, all its chunks and embeddings are permanently removed. This is useful for removing outdated documents or sensitive data you no longer want indexed.
+ ### Searching Documents
+
+ ```
+ "What does the API documentation say about authentication?"
+ "Find information about rate limiting"
+ "Search for error handling best practices"
+ ```

- **System status** reports on your database—document count, total chunks, memory usage. Helpful for monitoring performance or debugging issues.
+ The hybrid search combines keyword matching (BM25) with semantic search. This means `useEffect` finds documents containing that exact term, not just semantically similar React concepts.

- All of this uses:
- - **LanceDB** for vector storage (file-based, no server needed)
- - **Transformers.js** for embeddings (runs in Node.js, no Python)
- - **all-MiniLM-L6-v2** model (384 dimensions, good balance of speed and accuracy)
- - **RecursiveCharacterTextSplitter** for intelligent text chunking
+ Results include text content, source file, and relevance score. Adjust the result count with `limit` (1-20, default 10).
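+
+ As a rough mental model only, a result entry can be pictured like this (the field names are illustrative assumptions, not the server's actual schema):
+
+ ```typescript
+ // Hypothetical shape of one search result; actual field names may differ.
+ interface SearchResult {
+   text: string;       // the chunk's content
+   sourceFile: string; // which ingested file the chunk came from
+   score: number;      // relevance score used for ranking
+ }
+ ```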

- The result: query responses typically under 3 seconds on a standard laptop, even with thousands of document chunks indexed.
+ ### Managing Files

- ## First Run
+ ```
+ "List all ingested files"        # See what's indexed
+ "Delete old-spec.pdf from RAG"   # Remove a file
+ "Show RAG server status"         # Check system health
+ ```

- The server starts instantly, but the embedding model downloads **on first use** (when you ingest or search for the first time):
- - **Download size**: ~90MB (model files)
- - **Disk usage after caching**: ~120MB (includes ONNX runtime cache)
- - **Time**: 1-2 minutes on a decent connection
- - **First operation delay**: Your initial ingest or search request will wait for the model download to complete
+ ## Search Tuning

- You'll see a message like "Initializing model (downloading ~90MB, may take 1-2 minutes)..." in the console. The model caches in `CACHE_DIR` (default: `./models/`) for offline use.
+ Adjust these for your use case:

- **Why lazy initialization?** This approach allows the server to start immediately without upfront model loading. You only download when actually needed, making the server more responsive for quick status checks or file management operations.
+ | Variable | Default | Description |
+ |----------|---------|-------------|
+ | `RAG_HYBRID_WEIGHT` | `0.6` | Keyword boost factor. 0 = semantic only, higher = stronger keyword boost. |
+ | `RAG_GROUPING` | (not set) | `similar` for top group only, `related` for top 2 groups. |
+ | `RAG_MAX_DISTANCE` | (not set) | Filter out low-relevance results (e.g., `0.5`). |

- **Offline Mode**: After first download, works completely offline—no internet required.
+ Example (stricter, code-focused):
+ ```json
+ "env": {
+   "RAG_HYBRID_WEIGHT": "0.7",
+   "RAG_GROUPING": "similar"
+ }
+ ```
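+
+ To see what `RAG_HYBRID_WEIGHT` does, here is a minimal sketch of layering a keyword boost on top of vector distances. It is an illustration under assumptions, not the package's actual scoring code: the function name and the exact formula are invented.
+
+ ```typescript
+ // Illustrative only: assumes smaller distance = more relevant.
+ interface ScoredChunk {
+   text: string;
+   distance: number; // semantic distance from the query vector
+ }
+
+ function applyKeywordBoost(
+   chunks: ScoredChunk[],
+   queryTerms: string[],
+   hybridWeight = 0.6 // plays the role of RAG_HYBRID_WEIGHT
+ ): ScoredChunk[] {
+   return chunks
+     .map((chunk) => {
+       // Count exact term hits (e.g. `useEffect`, error codes).
+       const hits = queryTerms.filter((t) => chunk.text.includes(t)).length;
+       // Shrink the distance for chunks with exact matches;
+       // a weight of 0 leaves pure semantic ranking untouched.
+       return { ...chunk, distance: chunk.distance / (1 + hybridWeight * hits) };
+     })
+     .sort((a, b) => a.distance - b.distance);
+ }
+ ```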

- ## Security
+ ## How It Works

- **Path Restriction**: This server only accesses files within your `BASE_DIR`. Any attempt to access files outside this directory (e.g., via `../` path traversal) will be rejected.
+ **TL;DR:**
+ - Documents are chunked by semantic similarity, not fixed character counts
+ - Each chunk is embedded locally using Transformers.js
+ - Search uses semantic similarity with keyword boost for exact matches
+ - Results are filtered based on relevance gaps, not raw scores

- **Local Only**: All processing happens on your machine. No network requests are made after the initial model download.
+ ### Details

- **Model Verification**: The embedding model downloads from HuggingFace's official repository (`Xenova/all-MiniLM-L6-v2`). Verify integrity by checking the [official model card](https://huggingface.co/Xenova/all-MiniLM-L6-v2).
+ When you ingest a document, the parser extracts text based on file type (PDF via `unpdf`, DOCX via `mammoth`, text files directly).
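+
+ For instance, the text-extraction step can be sketched like this (simplified; the package's parser module adds path validation and error handling on top):
+
+ ```typescript
+ import { extractText, getDocumentProxy } from "unpdf";
+ import mammoth from "mammoth";
+ import { readFile } from "node:fs/promises";
+
+ // Minimal sketch of per-format text extraction.
+ async function parseDocument(filePath: string): Promise<string> {
+   if (filePath.endsWith(".pdf")) {
+     const pdf = await getDocumentProxy(new Uint8Array(await readFile(filePath)));
+     const { text } = await extractText(pdf, { mergePages: true });
+     return text;
+   }
+   if (filePath.endsWith(".docx")) {
+     const { value } = await mammoth.extractRawText({ path: filePath });
+     return value;
+   }
+   return readFile(filePath, "utf8"); // .txt, .md, .markdown
+ }
+ ```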

- ## Configuration
+ The semantic chunker splits text into sentences, then groups them using embedding similarity. It finds natural topic boundaries where the meaning shifts—keeping related content together instead of cutting at arbitrary character limits. This produces chunks that are coherent units of meaning, typically 500-1000 characters.
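+
+ As an illustration of the idea only (not the package's actual chunker code), boundary detection by embedding similarity can be sketched like this; `embed` is a stand-in for the local embedding function:
+
+ ```typescript
+ // Sketch: split where consecutive sentences stop being similar.
+ async function semanticChunks(
+   sentences: string[],
+   embed: (text: string) => Promise<number[]>, // assumed helper
+   threshold = 0.5 // similarity below this starts a new chunk
+ ): Promise<string[]> {
+   const chunks: string[] = [];
+   let current: string[] = [];
+   let prev: number[] | null = null;
+   for (const sentence of sentences) {
+     const vec = await embed(sentence);
+     if (prev && cosineSimilarity(prev, vec) < threshold) {
+       chunks.push(current.join(" "));
+       current = [];
+     }
+     current.push(sentence);
+     prev = vec;
+   }
+   if (current.length > 0) chunks.push(current.join(" "));
+   return chunks;
+ }
+
+ function cosineSimilarity(a: number[], b: number[]): number {
+   let dot = 0;
+   let na = 0;
+   let nb = 0;
+   for (let i = 0; i < a.length; i++) {
+     dot += a[i] * b[i];
+     na += a[i] * a[i];
+     nb += b[i] * b[i];
+   }
+   return dot / (Math.sqrt(na) * Math.sqrt(nb));
+ }
+ ```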

- The server works out of the box with sensible defaults, but you can customize it through environment variables.
+ Each chunk goes through the Transformers.js embedding model (`all-MiniLM-L6-v2`), converting text into 384-dimensional vectors. Vectors are stored in LanceDB, a file-based vector database requiring no server process.
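+
+ Generating one of these vectors with Transformers.js looks roughly like this (the model ID matches the default configuration; the package wraps this in its own embedder module):
+
+ ```typescript
+ import { pipeline } from "@xenova/transformers";
+
+ // Load the feature-extraction pipeline once; model files are cached locally.
+ const extractor = await pipeline(
+   "feature-extraction",
+   "Xenova/all-MiniLM-L6-v2"
+ );
+
+ // Mean pooling + normalization yields one 384-dimensional vector per input.
+ const output = await extractor("How does authentication work?", {
+   pooling: "mean",
+   normalize: true,
+ });
+ const vector = Array.from(output.data as Float32Array); // length 384
+ ```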

- ### For Codex
+ When you search:
+ 1. Your query becomes a vector using the same model
+ 2. Semantic (vector) search finds the most relevant chunks
+ 3. Quality filters apply (distance threshold, grouping)
+ 4. Keyword matches boost rankings for exact term matching

- Add to `~/.codex/config.toml`:
+ The keyword boost ensures exact terms like `useEffect` or error codes rank higher when they match.
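+
+ The gap-based grouping in step 3 can be pictured with a sketch like the following. It is a simplified illustration rather than the package's actual algorithm; the gap heuristic and `gapFactor` are invented for the example:
+
+ ```typescript
+ interface Result {
+   text: string;
+   distance: number; // lower = more relevant
+ }
+
+ // Split sorted results into groups wherever the distance jumps
+ // far more than the gaps seen so far ("relevance gaps").
+ function groupByRelevanceGaps(results: Result[], gapFactor = 1.5): Result[][] {
+   const sorted = [...results].sort((a, b) => a.distance - b.distance);
+   const groups: Result[][] = [];
+   let current: Result[] = [];
+   for (let i = 0; i < sorted.length; i++) {
+     if (i > 1) {
+       const gap = sorted[i].distance - sorted[i - 1].distance;
+       const typical = (sorted[i - 1].distance - sorted[0].distance) / (i - 1);
+       if (typical > 0 && gap > gapFactor * typical) {
+         groups.push(current);
+         current = [];
+       }
+     }
+     current.push(sorted[i]);
+   }
+   if (current.length > 0) groups.push(current);
+   // RAG_GROUPING=similar would keep groups[0]; related keeps groups[0..1].
+   return groups;
+ }
+ ```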

- ```toml
- [mcp_servers.local-rag]
- command = "npx"
- args = ["-y", "mcp-local-rag"]
+ <details>
+ <summary><strong>Configuration</strong></summary>

- [mcp_servers.local-rag.env]
- BASE_DIR = "/path/to/your/documents"
- DB_PATH = "./lancedb"
- CACHE_DIR = "./models"
- ```
+ ### Environment Variables

- **Note:** The section name must be `mcp_servers` (with underscore). Using `mcp-servers` or `mcpservers` will cause Codex to ignore the configuration.
+ | Variable | Default | Description |
+ |----------|---------|-------------|
+ | `BASE_DIR` | Current directory | Document root directory (security boundary) |
+ | `DB_PATH` | `./lancedb/` | Vector database location |
+ | `CACHE_DIR` | `./models/` | Model cache directory |
+ | `MODEL_NAME` | `Xenova/all-MiniLM-L6-v2` | HuggingFace model ID ([available models](https://huggingface.co/models?library=transformers.js&pipeline_tag=feature-extraction)) |
+ | `MAX_FILE_SIZE` | `104857600` (100MB) | Maximum file size in bytes |

- ### For Cursor
+ ### Client-Specific Setup

- Add to your Cursor settings:
- - **Global** (all projects): `~/.cursor/mcp.json`
- - **Project-specific**: `.cursor/mcp.json` in your project root
+ **Cursor** - Global: `~/.cursor/mcp.json`, Project: `.cursor/mcp.json`

  ```json
  {
@@ -138,154 +185,125 @@ Add to your Cursor settings:
        "command": "npx",
        "args": ["-y", "mcp-local-rag"],
        "env": {
-         "BASE_DIR": "/path/to/your/documents",
-         "DB_PATH": "./lancedb",
-         "CACHE_DIR": "./models"
+         "BASE_DIR": "/path/to/your/documents"
        }
      }
    }
  }
  ```

- ### For Claude Code
-
- Run in your project directory to enable for that project:
+ **Codex** - `~/.codex/config.toml` (note: must use `mcp_servers` with underscore)

- ```bash
- cd /path/to/your/project
- claude mcp add local-rag --env BASE_DIR=/path/to/your/documents -- npx -y mcp-local-rag
- ```
-
- Or add globally for all projects:
+ ```toml
+ [mcp_servers.local-rag]
+ command = "npx"
+ args = ["-y", "mcp-local-rag"]

- ```bash
- claude mcp add local-rag --scope user --env BASE_DIR=/path/to/your/documents -- npx -y mcp-local-rag
+ [mcp_servers.local-rag.env]
+ BASE_DIR = "/path/to/your/documents"
  ```

- **With additional environment variables:**
+ **Claude Code**:

  ```bash
  claude mcp add local-rag --scope user \
    --env BASE_DIR=/path/to/your/documents \
-   --env DB_PATH=./lancedb \
-   --env CACHE_DIR=./models \
    -- npx -y mcp-local-rag
  ```

- ### Environment Variables
+ ### First Run

- | Variable | Default | Description | Valid Range |
- |----------|---------|-------------|-------------|
- | `BASE_DIR` | Current directory | Document root directory. Server only accesses files within this path (prevents accidental system file access). | Any valid path |
- | `DB_PATH` | `./lancedb/` | Vector database storage location. Can grow large with many documents. | Any valid path |
- | `CACHE_DIR` | `./models/` | Model cache directory. After first download, model stays here for offline use. | Any valid path |
- | `MODEL_NAME` | `Xenova/all-MiniLM-L6-v2` | HuggingFace model identifier. Must be Transformers.js compatible. See [available models](https://huggingface.co/models?library=transformers.js&pipeline_tag=feature-extraction&sort=trending). **Note:** Changing models requires re-ingesting all documents as embeddings from different models are incompatible. | HF model ID |
- | `MAX_FILE_SIZE` | `104857600` (100MB) | Maximum file size in bytes. Larger files rejected to prevent memory issues. | 1MB - 500MB |
- | `CHUNK_SIZE` | `512` | Characters per chunk. Larger = more context but slower processing. | 128 - 2048 |
- | `CHUNK_OVERLAP` | `100` | Overlap between chunks. Preserves context across boundaries. | 0 - (CHUNK_SIZE/2) |
+ The embedding model (~90MB) downloads on first use. Takes 1-2 minutes, then works offline.

- ## Usage
+ ### Security

- **After configuration**, restart your MCP client:
- - **Cursor**: Fully quit and relaunch (Cmd+Q on Mac, not just closing windows)
- - **Codex**: Restart the IDE/extension
- - **Claude Code**: No restart needed—changes apply immediately
+ - **Path restriction**: Only files within `BASE_DIR` are accessible
+ - **Local only**: No network requests after model download
+ - **Model source**: Official HuggingFace repository ([verify here](https://huggingface.co/Xenova/all-MiniLM-L6-v2))
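+
+ The `BASE_DIR` check amounts to path containment along these lines (a sketch, not the package's actual validation code):
+
+ ```typescript
+ import * as path from "node:path";
+
+ // Reject any path that resolves outside BASE_DIR (e.g. via ../ traversal).
+ function isInsideBaseDir(baseDir: string, filePath: string): boolean {
+   const root = path.resolve(baseDir);
+   const target = path.resolve(root, filePath);
+   return target === root || target.startsWith(root + path.sep);
+ }
+ ```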

- The server will appear as available tools that your AI assistant can use.
+ </details>

- ### Ingesting Documents
+ <details>
+ <summary><strong>Performance</strong></summary>

- **In Cursor**, the Composer Agent automatically uses MCP tools when needed:
+ Tested on MacBook Pro M1 (16GB RAM), Node.js 22:

- ```
- "Ingest the document at /Users/me/docs/api-spec.pdf"
- ```
+ **Query Speed**: ~1.2 seconds for 10,000 chunks (p90 < 3s)

- **In Codex CLI**, the assistant automatically uses configured MCP tools when needed:
+ **Ingestion** (10MB PDF):
+ - PDF parsing: ~8s
+ - Chunking: ~2s
+ - Embedding: ~30s
+ - DB insertion: ~5s

- ```bash
- codex "Ingest the document at /Users/me/docs/api-spec.pdf into the RAG system"
- ```
+ **Memory**: ~200MB idle, ~800MB peak (50MB file ingestion)

- **In Claude Code**, just ask naturally:
+ **Concurrency**: Handles 5 parallel queries without degradation.

- ```
- "Ingest the document at /Users/me/docs/api-spec.pdf"
- ```
+ </details>

- **Path Requirements**: The server requires **absolute paths** to files. Your AI assistant will typically convert natural language requests into absolute paths automatically. The `BASE_DIR` setting restricts access to only files within that directory tree for security, but you must still provide the full path.
+ <details>
+ <summary><strong>Troubleshooting</strong></summary>

- The server:
- 1. Validates the file exists and is under 100MB
- 2. Extracts text (handling PDF/DOCX/TXT/MD formats)
- 3. Splits into chunks (512 chars, 100 char overlap)
- 4. Generates embeddings for each chunk
- 5. Stores in the vector database
+ ### "No results found"

- This takes roughly 5-10 seconds per MB on a standard laptop. You'll see a confirmation when complete, including how many chunks were created.
-
- ### Searching Documents
+ Documents must be ingested first. Run `"List all ingested files"` to verify.

- Ask questions in natural language:
+ ### Model download failed

- ```
- "What does the API documentation say about authentication?"
- "Find information about rate limiting"
- "Search for error handling best practices"
- ```
+ Check your internet connection. If behind a proxy, configure network settings. The model can also be [downloaded manually](https://huggingface.co/Xenova/all-MiniLM-L6-v2).

- The server:
- 1. Converts your query to an embedding vector
- 2. Searches the vector database for similar chunks
- 3. Returns the top 5 matches with similarity scores
+ ### "File too large"

- Results include the text content, which file it came from, and a relevance score. Your AI assistant then uses these results to answer your question.
+ Default limit is 100MB. Split large files or increase `MAX_FILE_SIZE`.

- You can request more results:
+ ### Slow queries

- ```
- "Search for database optimization tips, return 10 results"
- ```
+ Check the chunk count with `status`. Large documents with many chunks may slow queries. Consider splitting very large files.

- The limit parameter accepts 1-20 results.
+ ### "Path outside BASE_DIR"

- ### Managing Files
+ Ensure file paths are within `BASE_DIR`. Use absolute paths.

- See what's indexed:
+ ### MCP client doesn't see tools

- ```
- "List all ingested files"
- ```
+ 1. Verify the config file syntax
+ 2. Restart the client completely (Cmd+Q on Mac for Cursor)
+ 3. Test directly: `npx mcp-local-rag` should run without errors

- This shows each file's path, how many chunks it produced, and when it was ingested.
+ </details>

- Delete a file from the database:
+ <details>
+ <summary><strong>FAQ</strong></summary>

- ```
- "Delete /Users/me/docs/old-spec.pdf from the RAG system"
- ```
+ **Is this really private?**
+ Yes. After model download, nothing leaves your machine. Verify with network monitoring.

- This permanently removes the file and all its chunks from the vector database. The operation is idempotent—deleting a file that doesn't exist succeeds without error.
+ **Can I use this offline?**
+ Yes, after the first model download (~90MB).

- Check system status:
+ **How does this compare to cloud RAG?**
+ Cloud services offer better accuracy at scale but require sending data externally. This trades some accuracy for complete privacy and zero runtime cost.

- ```
- "Show the RAG server status"
- ```
+ **What file formats are supported?**
+ PDF, DOCX, TXT, and Markdown. Not yet: Excel, PowerPoint, images, HTML.

- This reports total documents, total chunks, current memory usage, and uptime.
+ **Can I change the embedding model?**
+ Yes, but you must delete your database and re-ingest all documents. Different models produce incompatible vector dimensions.

- ### Re-ingesting Files
+ **GPU acceleration?**
+ Transformers.js runs on CPU. GPU support is experimental. CPU performance is adequate for most use cases.

- If you update a document, ingest it again:
+ **Multi-user support?**
+ No. Designed for single-user, local access. Multi-user support would require authentication and access control.

- ```
- "Re-ingest api-spec.pdf with the latest changes"
- ```
+ **How do I back up my data?**
+ Copy the `DB_PATH` directory (default: `./lancedb/`).

- The server automatically deletes old chunks for that file before adding new ones. No duplicates, no stale data.
+ </details>

- ## Development
+ <details>
+ <summary><strong>Development</strong></summary>

  ### Building from Source

@@ -295,247 +313,51 @@ cd mcp-local-rag
  npm install
  ```

- ### Running Tests
+ ### Testing

  ```bash
- # Run all tests
- npm test
-
- # Run with coverage
- npm run test:coverage
-
- # Watch mode for development
- npm run test:watch
+ npm test              # Run all tests
+ npm run test:coverage # With coverage
+ npm run test:watch    # Watch mode
  ```

- The test suite includes:
- - Unit tests for each component
- - Integration tests for the full ingestion and search flow
- - Security tests for path traversal protection
- - Performance tests verifying query speed targets
-
  ### Code Quality

  ```bash
- # Type check
- npm run type-check
-
- # Lint and format
- npm run check:fix
-
- # Check circular dependencies
- npm run check:deps
-
- # Full quality check (runs everything)
- npm run check:all
+ npm run type-check # TypeScript check
+ npm run check:fix  # Lint and format
+ npm run check:deps # Circular dependency check
+ npm run check:all  # Full quality check
  ```

  ### Project Structure

  ```
  src/
-   index.ts # Entry point, starts the MCP server
-   server/ # RAGServer class, MCP tool handlers
-   parser/ # Document parsing (PDF, DOCX, TXT, MD)
-   chunker/ # Text splitting logic
-   embedder/ # Embedding generation with Transformers.js
-   vectordb/ # LanceDB operations
-   __tests__/ # Test suites
+   index.ts   # Entry point
+   server/    # MCP tool handlers
+   parser/    # PDF, DOCX, TXT, MD parsing
+   chunker/   # Text splitting
+   embedder/  # Transformers.js embeddings
+   vectordb/  # LanceDB operations
+   __tests__/ # Test suites
  ```

- Each module has clear boundaries:
- - **Parser** validates file paths and extracts text
- - **Chunker** splits text into overlapping segments
- - **Embedder** generates 384-dimensional vectors
- - **VectorStore** handles all database operations
- - **RAGServer** orchestrates everything and exposes MCP tools
-
- ## Performance
-
- **Test Environment**: MacBook Pro M1 (16GB RAM), tested with v0.1.3 on Node.js 22 (January 2025)
-
- **Query Performance**:
- - Average: 1.2 seconds for 10,000 indexed chunks (5 results)
- - Target: p90 < 3 seconds ✓
-
- **Ingestion Speed** (10MB PDF):
- - Total: ~45 seconds
- - PDF parsing: ~8 seconds (17%)
- - Text chunking: ~2 seconds (4%)
- - Embedding generation: ~30 seconds (67%)
- - Database insertion: ~5 seconds (11%)
-
- **Memory Usage**:
- - Baseline: ~200MB idle
- - Peak: ~800MB when ingesting 50MB file
- - Target: < 1GB ✓
-
- **Concurrent Queries**: Handles 5 parallel queries without degradation. LanceDB's async API allows non-blocking operations.
-
- **Note**: Your results will vary based on hardware, especially CPU speed (embeddings run on CPU, not GPU).
-
- ## Troubleshooting
-
- ### "No results found" when searching
-
- **Cause**: Documents must be ingested before searching.
-
- **Solution**:
- 1. First ingest documents: `"Ingest /path/to/document.pdf"`
- 2. Verify ingestion: `"List all ingested files"`
- 3. Then search: `"Search for [your query]"`
-
- **Common mistake**: Trying to search immediately after configuration without ingesting any documents.
-
- ### "Model download failed"
-
- The embedding model downloads from HuggingFace on first use (when you ingest or search for the first time). If you're behind a proxy or firewall, you might need to configure network settings.
-
- **When it happens**: Your first ingest or search operation will trigger the download. If it fails, you'll see a detailed error message with troubleshooting guidance (network issues, disk space, cache corruption).
-
- **What to do**: The error message provides specific recommendations. Common solutions:
- 1. Check your internet connection and retry the operation
- 2. Ensure you have sufficient disk space (~120MB needed)
- 3. If problems persist, delete the cache directory and try again
-
- Alternatively, download the model manually:
- 1. Visit https://huggingface.co/Xenova/all-MiniLM-L6-v2
- 2. Download the model files
- 3. Set CACHE_DIR to where you saved them
-
- ### "File too large" error
-
- Default limit is 100MB. For larger files:
- - Split them into smaller documents
- - Or increase MAX_FILE_SIZE in your config (be aware of memory usage)
-
- ### Slow query performance
-
- If queries take longer than expected:
- - Check how many chunks you have indexed (`status` command)
- - Consider the hardware (embeddings are CPU-intensive)
- - Try reducing CHUNK_SIZE to create fewer chunks
-
- ### "Path outside BASE_DIR" error
-
- The server restricts file access to BASE_DIR for security. Make sure your file path is within that directory. Check for:
- - Correct BASE_DIR setting in your MCP config
- - Relative paths vs absolute paths
- - Typos in the file path
-
- ### MCP client doesn't see the tools
-
- **For Cursor:**
- 1. Open Settings → Features → Model Context Protocol
- 2. Verify the server configuration is saved
- 3. Restart Cursor completely
- 4. Check the MCP connection status in the status bar
-
- **For Codex CLI:**
- 1. Check `~/.codex/config.toml` to verify the configuration
- 2. Ensure the section name is `[mcp_servers.local-rag]` (with underscore)
- 3. Test the server directly: `npx mcp-local-rag` should run without errors
- 4. Restart Codex CLI or IDE extension
- 5. Check for error messages when Codex starts
-
- **For Claude Code:**
- 1. Run `claude mcp list` to see configured servers
- 2. Verify the server appears in the list
- 3. Check `~/.config/claude/mcp_config.json` for syntax errors
- 4. Test the server directly: `npx mcp-local-rag` should run without errors
-
- **Common issues:**
- - Invalid JSON syntax in config files
- - Wrong file paths in BASE_DIR setting
- - Server binary not found (try global install: `npm install -g mcp-local-rag`)
- - Firewall blocking local communication
-
- ## How It Works
-
- When you ingest a document, the parser extracts text based on the file type. PDFs use `pdf-parse`, DOCX uses `mammoth`, and text files are read directly.
-
- The chunker then splits the text using LangChain's RecursiveCharacterTextSplitter. It tries to break on natural boundaries (paragraphs, sentences) while keeping chunks around 512 characters. Adjacent chunks overlap by 100 characters to preserve context.
-
- Each chunk goes through the Transformers.js embedding model, which converts text into a 384-dimensional vector representing its semantic meaning. This happens in batches of 8 chunks at a time for efficiency.
-
- Vectors are stored in LanceDB, a columnar vector database that works with local files. No server process, no complex setup. It's just a directory with data files.
-
- When you search, your query becomes a vector using the same model. LanceDB finds the chunks with vectors most similar to your query vector (using cosine similarity). The top matches return to your MCP client with their original text and metadata.
-
- The beauty of this approach: semantically similar text has similar vectors, even if the words are different. "authentication process" and "how users log in" will match each other, unlike keyword search.
-
- ## FAQ
-
- **Is this really private?**
-
- Yes. After the initial model download, nothing leaves your machine. You can verify with network monitoring tools—no outbound requests during ingestion or search.
-
- **Can I use this offline?**
-
- Yes, once the model is cached. The first run needs internet to download the model (~90MB), but after that, everything works offline.
-
- **How does this compare to cloud RAG services?**
-
- Cloud services (OpenAI, Pinecone, etc.) typically offer better accuracy and scale. But they require sending your documents externally, ongoing costs, and internet connectivity. This project trades some accuracy for complete privacy and zero runtime cost.
-
- **What file formats are supported?**
-
- Currently supported:
- - **PDF**: `.pdf` (uses pdf-parse)
- - **Microsoft Word**: `.docx` (uses mammoth, not `.doc`)
- - **Plain Text**: `.txt`
- - **Markdown**: `.md`, `.markdown`
-
- **Not yet supported**:
- - Excel/CSV (`.xlsx`, `.csv`)
- - PowerPoint (`.pptx`)
- - Images with OCR (`.jpg`, `.png`)
- - HTML (`.html`)
- - Old Word documents (`.doc`)
-
- Want support for another format? [Open an issue](https://github.com/shinpr/mcp-local-rag/issues/new) with your use case.
-
- **Can I customize the embedding model?**
-
- Yes, set MODEL_NAME to any Transformers.js-compatible model from HuggingFace. Keep in mind that different models have different vector dimensions, so you'll need to rebuild your database if you switch.
-
- **How much does accuracy depend on the model?**
-
- `all-MiniLM-L6-v2` is optimized for English and performs well for technical documentation. For other languages, consider multilingual models like `multilingual-e5-small`. For higher accuracy, try larger models—but expect slower processing.
-
- **What about GPU acceleration?**
-
- Transformers.js runs on CPU by default. GPU support is experimental and varies by platform. For most use cases, CPU performance is adequate (embeddings are reasonably fast even without GPU).
-
- **Can multiple people share a database?**
-
- The current design assumes single-user, local access. For multi-user scenarios, you'd need to implement authentication and access control—both out of scope for this project's privacy-first design.
-
- **How do I back up my data?**
-
- Copy your DB_PATH directory (default: `./lancedb/`). That's your entire vector database. Copy BASE_DIR for your original documents. Both are just files—no special export needed.
+ </details>

  ## Contributing

- Contributions are welcome. Before submitting a PR:
+ Contributions welcome. Before submitting a PR:

- 1. Run the test suite: `npm test`
- 2. Ensure code quality: `npm run check:all`
+ 1. Run tests: `npm test`
+ 2. Check quality: `npm run check:all`
  3. Add tests for new features
- 4. Update documentation if you change behavior
+ 4. Update docs if behavior changes

  ## License

- MIT License - see LICENSE file for details.
-
- Free for personal and commercial use. No attribution required, but appreciated.
+ MIT License. Free for personal and commercial use.

  ## Acknowledgments

- Built with:
- - [Model Context Protocol](https://modelcontextprotocol.io/) by Anthropic
- - [LanceDB](https://lancedb.com/) for vector storage
- - [Transformers.js](https://huggingface.co/docs/transformers.js) by HuggingFace
- - [LangChain.js](https://js.langchain.com/) for text splitting
-
- Created as a practical tool for developers who want AI-powered document search without compromising privacy.
+ Built with [Model Context Protocol](https://modelcontextprotocol.io/) by Anthropic, [LanceDB](https://lancedb.com/), and [Transformers.js](https://huggingface.co/docs/transformers.js).