@softerist/heuristic-mcp 3.0.13 → 3.0.14
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +79 -56
- package/package.json +1 -1
package/README.md
CHANGED
@@ -99,31 +99,48 @@ Clears the cache for the current working directory (or `--workspace` if provided
 
 Configuration is loaded from your workspace root when the server runs with `--workspace` (this is how IDEs launch it). In server mode, it falls back to the package `config.jsonc` (or `config.json`) and then your current working directory.
 
-Example `config.jsonc`:
-
-```json
-{
-"excludePatterns": ["**/legacy-code/**", "**/*.test.ts"],
-"fileNames": ["Dockerfile", ".env.example", "Makefile"],
-"
-
-
-"
-
-
-"
-
-
-
-
-"
-
-
-
-"
-
-
-
+Example `config.jsonc`:
+
+```json
+{
+  "excludePatterns": ["**/legacy-code/**", "**/*.test.ts"],
+  "fileNames": ["Dockerfile", ".env.example", "Makefile"],
+  "indexing": {
+    "smartIndexing": true
+  },
+  "worker": {
+    "workerThreads": 0
+  },
+  "embedding": {
+    "embeddingModel": "jinaai/jina-embeddings-v2-base-code",
+    "embeddingBatchSize": null,
+    "embeddingProcessNumThreads": 8
+  },
+  "search": {
+    "recencyBoost": 0.1,
+    "recencyDecayDays": 30
+  },
+  "callGraph": {
+    "callGraphEnabled": true,
+    "callGraphBoost": 0.15
+  },
+  "ann": {
+    "annEnabled": true
+  },
+  "vectorStore": {
+    "vectorStoreFormat": "binary",
+    "vectorStoreContentMode": "external",
+    "vectorStoreLoadMode": "disk",
+    "contentCacheEntries": 256,
+    "vectorCacheEntries": 64
+  },
+  "memoryCleanup": {
+    "clearCacheAfterIndex": true
+  }
+}
+```
+
+Preferred style is namespaced keys (shown above). Legacy top-level keys are still supported for backward compatibility.
 
 ### Embedding Model & Dimension Options
 
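The backward-compatible legacy form mentioned in the added text would flatten the namespaces to top-level keys. A minimal sketch, assuming each legacy key is simply the runtime key without its namespace (as with `vectorStoreFormat` and `annEnabled`, which appear unnamespaced elsewhere in this README); this is illustrative, not the package's documented legacy schema:

```jsonc
// Hypothetical legacy (flat-key) equivalent of part of the namespaced
// example above; key names are assumed from runtime keys referenced
// elsewhere in this README.
{
  "excludePatterns": ["**/legacy-code/**", "**/*.test.ts"],
  "smartIndexing": true,
  "workerThreads": 0,
  "embeddingModel": "jinaai/jina-embeddings-v2-base-code",
  "vectorStoreFormat": "binary",
  "annEnabled": true,
  "clearCacheAfterIndex": true
}
```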
@@ -133,12 +150,14 @@ Example `config.jsonc`:
 
 For faster search with smaller embeddings, switch to an MRL-compatible model:
 
-```json
-{
-"
-
-
-
+```json
+{
+  "embedding": {
+    "embeddingModel": "nomic-ai/nomic-embed-text-v1.5",
+    "embeddingDimension": 128
+  }
+}
+```
 
 **MRL-compatible models:**
 - `nomic-ai/nomic-embed-text-v1.5` — recommended for 128d/256d
@@ -153,7 +172,9 @@ Cache location:
 
 ### Environment Variables
 
-Selected overrides (prefix `SMART_CODING_`):
+Selected overrides (prefix `SMART_CODING_`):
+
+Environment overrides target runtime keys and are synced back into namespaces by `lib/config.js`.
 
 - `SMART_CODING_VERBOSE=true|false` — enable detailed logging.
 - `SMART_CODING_WORKER_THREADS=auto|N` — worker thread count.
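Taken together, the override list amounts to a launch environment. A minimal sketch: only the `SMART_CODING_` variable names come from this README; the values are illustrative, and the actual server invocation is omitted:

```shell
# Illustrative values for overrides documented in this README.
export SMART_CODING_VERBOSE=false
export SMART_CODING_WORKER_THREADS=4
export SMART_CODING_VECTOR_STORE_FORMAT=binary
export SMART_CODING_VECTOR_STORE_LOAD_MODE=disk

# Show the effective overrides; in practice you would launch the
# server here instead (invocation omitted, see the README above).
env | grep '^SMART_CODING_' | sort
```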
@@ -179,35 +200,37 @@ Selected overrides (prefix `SMART_CODING_`):
 
 See `lib/config.js` for the full list.
 
-### Binary Vector Store
-
-Set `vectorStoreFormat` to `binary` to use the on-disk binary cache. This keeps vectors and content out of JS heap
-and reads on demand. Recommended for large repos.
-
-- `vectorStoreContentMode=external` keeps content in the binary file and only loads for top-N results.
-- `contentCacheEntries` controls the small in-memory LRU for decoded content strings.
-- `vectorStoreLoadMode=disk` streams vectors from disk to reduce memory usage.
-- `vectorCacheEntries` controls the small in-memory LRU for vectors when using disk mode.
-- `clearCacheAfterIndex=true` drops in-memory vectors after indexing and reloads lazily on next query.
-- `unloadModelAfterIndex=true` (default) unloads the embedding model after indexing to free ~500MB-1GB of RAM; the model will reload on the next search query.
-- Note: `annEnabled=true` with `vectorStoreLoadMode=disk` can increase disk reads during ANN rebuilds on large indexes.
+### Binary Vector Store
+
+Set `vectorStore.vectorStoreFormat` to `binary` to use the on-disk binary cache. This keeps vectors and content out of the JS heap
+and reads them on demand. Recommended for large repos.
+
+- `vectorStore.vectorStoreContentMode=external` keeps content in the binary file and loads it only for top-N results.
+- `vectorStore.contentCacheEntries` controls the small in-memory LRU for decoded content strings.
+- `vectorStore.vectorStoreLoadMode=disk` streams vectors from disk to reduce memory usage.
+- `vectorStore.vectorCacheEntries` controls the small in-memory LRU for vectors when using disk mode.
+- `memoryCleanup.clearCacheAfterIndex=true` drops in-memory vectors after indexing and reloads them lazily on the next query.
+- `memoryCleanup.unloadModelAfterIndex=true` (default) unloads the embedding model after indexing to free ~500MB-1GB of RAM; the model will reload on the next search query.
+- Note: `ann.annEnabled=true` with `vectorStore.vectorStoreLoadMode=disk` can increase disk reads during ANN rebuilds on large indexes.
 
 ### SQLite Vector Store
 
-Set `vectorStoreFormat` to `sqlite` to use SQLite for persistence. This provides:
+Set `vectorStore.vectorStoreFormat` to `sqlite` to use SQLite for persistence. This provides:
 
 - ACID transactions for reliable writes
 - Simpler concurrent access
 - Standard database format for inspection
 
-```json
-{
-"
-
-
-
-
-
+```json
+{
+  "vectorStore": {
+    "vectorStoreFormat": "sqlite"
+  }
+}
+```
+
+The vectors and content are stored in `vectors.sqlite` in your cache directory. You can inspect it with any SQLite browser.
+`vectorStore.vectorStoreContentMode` and `vectorStore.vectorStoreLoadMode` are respected for SQLite (use `vectorStore.vectorStoreLoadMode=disk` to avoid loading vectors into memory).
 
 **Tradeoffs vs Binary:**
 - Slightly higher read overhead (SQL queries vs direct memory access)
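The binary-store bullets combine naturally into a single low-memory profile. A sketch built only from keys documented in this README; the cache sizes are illustrative examples, not tuned recommendations:

```jsonc
// Illustrative memory-lean profile for a large repo.
{
  "vectorStore": {
    "vectorStoreFormat": "binary",
    "vectorStoreContentMode": "external",  // content stays in the binary file
    "vectorStoreLoadMode": "disk",         // stream vectors from disk
    "contentCacheEntries": 64,             // small LRU for decoded content
    "vectorCacheEntries": 32               // small LRU for vectors in disk mode
  },
  "memoryCleanup": {
    "clearCacheAfterIndex": true,
    "unloadModelAfterIndex": true
  }
}
```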
@@ -230,7 +253,7 @@ SMART_CODING_VECTOR_STORE_LOAD_MODE=disk node tools/scripts/benchmark-search.js
 SMART_CODING_VECTOR_STORE_FORMAT=binary SMART_CODING_VECTOR_STORE_LOAD_MODE=disk node tools/scripts/benchmark-search.js --runs 10
 ```
 
-Note: On small repos, disk mode may be slightly slower and show noisy RSS deltas; benefits are clearer on large indexes with a small `vectorCacheEntries`.
+Note: On small repos, disk mode may be slightly slower and show noisy RSS deltas; benefits are clearer on large indexes with a small `vectorStore.vectorCacheEntries`.
 
 ---
 
@@ -293,8 +316,8 @@ Native ONNX backend unavailable: The operating system cannot run %1.
 ...onnxruntime_binding.node. Falling back to WASM.
 ```
 
-The server will automatically disable workers and force `embeddingProcessPerBatch` to reduce memory spikes, but you
-should fix the native binding to restore stable memory usage:
+The server will automatically disable workers and force `embedding.embeddingProcessPerBatch` to reduce memory spikes, but you
+should fix the native binding to restore stable memory usage:
 
 - Ensure you are running **64-bit Node.js** (`node -p "process.arch"` should be `x64`).
 - Install **Microsoft Visual C++ 2015–2022 Redistributable (x64)**.
package/package.json
CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "@softerist/heuristic-mcp",
-  "version": "3.0.13",
+  "version": "3.0.14",
   "description": "An enhanced MCP server providing intelligent semantic code search with find-similar-code, recency ranking, and improved chunking. Fork of smart-coding-mcp.",
   "type": "module",
   "main": "index.js",