@ambicuity/kindx 0.1.0
- package/CHANGELOG.md +36 -0
- package/LICENSE +21 -0
- package/README.md +578 -0
- package/dist/catalogs.d.ts +137 -0
- package/dist/catalogs.js +349 -0
- package/dist/inference.d.ts +398 -0
- package/dist/inference.js +1131 -0
- package/dist/kindx.d.ts +1 -0
- package/dist/kindx.js +2621 -0
- package/dist/protocol.d.ts +21 -0
- package/dist/protocol.js +666 -0
- package/dist/renderer.d.ts +119 -0
- package/dist/renderer.js +350 -0
- package/dist/repository.d.ts +783 -0
- package/dist/repository.js +2787 -0
- package/dist/runtime.d.ts +33 -0
- package/dist/runtime.js +34 -0
- package/package.json +90 -0
package/CHANGELOG.md
ADDED
@@ -0,0 +1,36 @@
# Changelog

All notable changes to this project will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [0.1.0] - 2026-03-07

### Added

- Initial release of KINDX - On-Device Document Intelligence Engine
- BM25 full-text search via SQLite FTS5
- Vector semantic search via sqlite-vec with embeddinggemma-300M
- Hybrid search with Reciprocal Rank Fusion (RRF)
- LLM re-ranking via qwen3-reranker-0.6B
- Query expansion via fine-tuned model
- Smart document chunking with natural break point detection
- Collection management (add, remove, rename, list)
- Context management for collections and paths
- MCP (Model Context Protocol) server with stdio and HTTP transport
- Multi-get command for batch document retrieval
- Output formats: plain text, JSON, CSV, XML, Markdown
- Support for custom embedding models via KINDX_EMBED_MODEL
- Configurable reranker context size
- Position-aware score blending
- Code fence protection in chunking
- Document identification via 6-character hash (docid)
- Fuzzy path matching with suggestions
- LLM response caching
- Named indexes
- Schema migration support
- Comprehensive test suite (vitest)
- CI/CD via GitHub Actions
- CodeQL and Trivy security scanning
- Signed releases via Sigstore
package/LICENSE
ADDED
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2026 Ritesh Rana

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
package/README.md
ADDED
@@ -0,0 +1,578 @@
# KINDX -- On-Device Document Intelligence Engine

A local-first search engine for everything you need to remember. Index your markdown notes, meeting transcripts, documentation, and knowledge bases. Search with keywords or natural language. Designed for agentic workflows.

KINDX combines BM25 full-text search, vector semantic search, and LLM re-ranking -- all running locally via node-llama-cpp with GGUF models.

You can read more about KINDX's progress in the [CHANGELOG](./CHANGELOG.md).

## Quick Start

```bash
# Install globally (Node or Bun)
npm install -g @ambicuity/kindx
# or
bun install -g @ambicuity/kindx

# Or run directly
npx @ambicuity/kindx ...
bunx @ambicuity/kindx ...

# Create collections for your notes, docs, and meeting transcripts
kindx collection add ~/notes --name notes
kindx collection add ~/Documents/meetings --name meetings
kindx collection add ~/work/docs --name docs

# Add context to help with search results
kindx context add kindx://notes "Personal notes and ideas"
kindx context add kindx://meetings "Meeting transcripts and notes"
kindx context add kindx://docs "Work documentation"

# Generate embeddings for semantic search
kindx embed

# Search across everything
kindx search "project timeline"            # Fast keyword search
kindx vsearch "how to deploy"              # Semantic search
kindx query "quarterly planning process"   # Hybrid + reranking (best quality)

# Get a specific document
kindx get "meetings/2024-01-15.md"

# Get a document by docid (shown in search results)
kindx get "#abc123"

# Get multiple documents by glob pattern
kindx multi-get "journals/2025-05*.md"

# Search within a specific collection
kindx search "API" -c notes

# Export all matches for an agent
kindx search "API" --all --files --min-score 0.3
```
### Using with AI Agents

KINDX's `--json` and `--files` output formats are designed for agentic workflows:

```bash
# Get structured results for an LLM
kindx search "authentication" --json -n 10

# List all relevant files above a threshold
kindx query "error handling" --all --files --min-score 0.4

# Retrieve full document content
kindx get "docs/api-reference.md" --full
```

### MCP Server

Although the tool works perfectly fine when you just tell your agent to use it on the command line, it also exposes an MCP (Model Context Protocol) server for tighter integration.

Tools exposed:

- `kindx_search` -- Fast BM25 keyword search (supports collection filter)
- `kindx_vector_search` -- Semantic vector search (supports collection filter)
- `kindx_deep_search` -- Deep search with query expansion and reranking (supports collection filter)
- `kindx_get` -- Retrieve document by path or docid (with fuzzy matching suggestions)
- `kindx_multi_get` -- Retrieve multiple documents by glob pattern, list, or docids
- `kindx_status` -- Index health and collection info

Claude Desktop configuration (`~/Library/Application Support/Claude/claude_desktop_config.json`):

```json
{
  "mcpServers": {
    "kindx": {
      "command": "kindx",
      "args": ["mcp"]
    }
  }
}
```

#### HTTP Transport

By default, KINDX's MCP server uses stdio (launched as a subprocess by each client). For a shared, long-lived server that avoids repeated model loading, use the HTTP transport:

```bash
# Foreground (Ctrl-C to stop)
kindx mcp --http                # localhost:8181
kindx mcp --http --port 8080    # custom port

# Background daemon
kindx mcp --http --daemon       # start, writes PID to ~/.cache/kindx/mcp.pid
kindx mcp stop                  # stop via PID file
kindx status                    # shows "MCP: running (PID ...)" when active
```

The HTTP server exposes two endpoints:

- `POST /mcp` -- MCP Streamable HTTP (JSON responses, stateless)
- `GET /health` -- liveness check with uptime

LLM models stay loaded in VRAM across requests. Embedding/reranking contexts are disposed after 5 minutes idle and transparently recreated on the next request (~1 s penalty; the models themselves remain loaded).

Point any MCP client at `http://localhost:8181/mcp` to connect.
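As a sketch of what a client exchange looks like, the request below follows the standard MCP JSON-RPC `tools/call` envelope; the argument name (`query`) is an assumption about the `kindx_search` tool's input schema, not taken from the package itself:

```typescript
// Hypothetical MCP "tools/call" request for the kindx_search tool.
// The envelope shape is standard JSON-RPC 2.0 as used by MCP; the
// { query: ... } argument name is an assumption for illustration.
const request = {
  jsonrpc: "2.0" as const,
  id: 1,
  method: "tools/call",
  params: {
    name: "kindx_search",
    arguments: { query: "project timeline" },
  },
};

// Send it to a running `kindx mcp --http` daemon:
// await fetch("http://localhost:8181/mcp", {
//   method: "POST",
//   headers: { "Content-Type": "application/json", Accept: "application/json" },
//   body: JSON.stringify(request),
// });
```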

## Architecture

### Component Overview

```mermaid
graph TB
    subgraph CLI["CLI Layer (engine/kindx.ts)"]
        CMD["Command Parser"]
    end

    subgraph Engine["Core Engine"]
        REPO["Repository (engine/repository.ts)"]
        INF["Inference (engine/inference.ts)"]
        CAT["Catalogs (engine/catalogs.ts)"]
        REN["Renderer (engine/renderer.ts)"]
        RT["Runtime (engine/runtime.ts)"]
    end

    subgraph Protocol["Integration Layer"]
        MCP["MCP Server (engine/protocol.ts)"]
    end

    subgraph Storage["Data Layer"]
        SQLite["SQLite + FTS5"]
        VEC["sqlite-vec Vectors"]
        CACHE["LLM Response Cache"]
    end

    subgraph Models["Local GGUF Models"]
        EMBED["embeddinggemma-300M"]
        RERANK["qwen3-reranker-0.6B"]
        EXPAND["Query Expansion 1.7B"]
    end

    CMD --> REPO
    CMD --> CAT
    CMD --> REN
    MCP --> REPO
    REPO --> RT
    REPO --> INF
    RT --> SQLite
    RT --> VEC
    INF --> EMBED
    INF --> RERANK
    INF --> EXPAND
    REPO --> CACHE
    CAT --> SQLite
```

### Hybrid Search Pipeline

```mermaid
flowchart TD
    Q["User Query"] --> SPLIT

    subgraph SPLIT["Query Processing"]
        ORIG["Original Query (x2 weight)"]
        EXP["Query Expansion (fine-tuned LLM)"]
    end

    SPLIT --> P1 & P2 & P3

    subgraph P1["Original Query"]
        BM25_1["BM25 / FTS5"]
        VEC_1["Vector Search"]
    end

    subgraph P2["Expanded Query 1"]
        BM25_2["BM25 / FTS5"]
        VEC_2["Vector Search"]
    end

    subgraph P3["Expanded Query 2"]
        BM25_3["BM25 / FTS5"]
        VEC_3["Vector Search"]
    end

    P1 & P2 & P3 --> RRF

    subgraph RRF["Reciprocal Rank Fusion"]
        FUSE["RRF Merge (k=60)"]
        BONUS["Top-rank bonus (+0.05 for #1)"]
        TOP30["Select Top 30 Candidates"]
    end

    RRF --> RERANK_STEP

    subgraph RERANK_STEP["LLM Re-ranking"]
        LLM["qwen3-reranker (Yes/No + logprobs)"]
    end

    RERANK_STEP --> BLEND

    subgraph BLEND["Position-Aware Blending"]
        B1["Rank 1-3: 75% RRF / 25% Reranker"]
        B2["Rank 4-10: 60% RRF / 40% Reranker"]
        B3["Rank 11+: 40% RRF / 60% Reranker"]
    end

    BLEND --> RESULTS["Final Ranked Results"]
```

### Score Normalization and Fusion

#### Search Backends

- **BM25 (FTS5)**: raw FTS5 scores are taken as `Math.abs(score)`, then normalized via `score / 10`
- **Vector search**: cosine distance is mapped to a similarity via `1 / (1 + distance)`
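
The two normalizations above can be sketched directly (illustrative helpers, not the package's actual source):

```typescript
// FTS5's bm25() returns negative values (more negative = better match),
// so take the absolute value and scale into a rough 0..1 band.
function normalizeBm25(rawFtsScore: number): number {
  return Math.abs(rawFtsScore) / 10;
}

// Map a cosine distance (0 = identical) into (0, 1], higher = closer.
function normalizeVector(distance: number): number {
  return 1 / (1 + distance);
}
```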

#### Fusion Strategy

The `query` command uses Reciprocal Rank Fusion (RRF) with position-aware blending:

1. **Query Expansion**: Original query (x2 for weighting) + 1 LLM variation
2. **Parallel Retrieval**: Each query searches both FTS and vector indexes
3. **RRF Fusion**: Combine all result lists using `score = Sum(1/(k+rank+1))` where k=60
4. **Top-Rank Bonus**: Documents ranking #1 in any list get +0.05, #2-3 get +0.02
5. **Top-K Selection**: Take top 30 candidates for reranking
6. **Re-ranking**: LLM scores each document (yes/no with logprobs confidence)
7. **Position-Aware Blending**:
   - RRF rank 1-3: 75% retrieval, 25% reranker (preserves exact matches)
   - RRF rank 4-10: 60% retrieval, 40% reranker
   - RRF rank 11+: 40% retrieval, 60% reranker (trust reranker more)

Why this approach: Pure RRF can dilute exact matches when expanded queries don't match. The top-rank bonus preserves documents that score #1 for the original query. Position-aware blending prevents the reranker from destroying high-confidence retrieval results.
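
The fusion and blending steps above can be sketched as follows. This is an illustration of the described formulas, not the package's internals; in particular, whether the top-rank bonus applies once or per list is an assumption here (applied per list):

```typescript
const K = 60;

// Steps 3-4: RRF merge over ranked docid lists, plus the top-rank bonus.
function rrfFuse(resultLists: string[][]): Map<string, number> {
  const scores = new Map<string, number>();
  for (const list of resultLists) {
    list.forEach((docid, rank) => {
      let s = (scores.get(docid) ?? 0) + 1 / (K + rank + 1);
      if (rank === 0) s += 0.05;      // #1 in this list
      else if (rank <= 2) s += 0.02;  // #2-3 in this list
      scores.set(docid, s);
    });
  }
  return scores;
}

// Step 7: position-aware blending of RRF score and reranker score.
function blend(rrfRank: number, rrfScore: number, rerankScore: number): number {
  const w = rrfRank <= 3 ? 0.75 : rrfRank <= 10 ? 0.6 : 0.4;
  return w * rrfScore + (1 - w) * rerankScore;
}
```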

## Requirements

### System Requirements

- Node.js >= 22 or Bun >= 1.0.0
- macOS: Homebrew SQLite (for extension support)

```bash
brew install sqlite
```

### GGUF Models (via node-llama-cpp)

KINDX uses three local GGUF models (auto-downloaded on first use):

- `embeddinggemma-300M-Q8_0` -- embedding model
- `qwen3-reranker-0.6b-q8_0` -- cross-encoder reranker
- `kindx-query-expansion-1.7B-q4_k_m` -- query expansion (fine-tuned)

Models are downloaded from HuggingFace and cached in `~/.cache/kindx/models/`.

### Custom Embedding Model

Override the default embedding model via the `KINDX_EMBED_MODEL` environment variable. This is useful for multilingual corpora (e.g. Chinese, Japanese, Korean) where embeddinggemma-300M has limited coverage.

```bash
# Use Qwen3-Embedding-0.6B for better multilingual (CJK) support
export KINDX_EMBED_MODEL="hf:Qwen/Qwen3-Embedding-0.6B-GGUF/qwen3-embedding-0.6b-q8_0.gguf"

# After changing the model, re-embed all collections:
kindx embed -f
```

Supported model families:

- **embeddinggemma** (default) -- English-optimized, small footprint
- **Qwen3-Embedding** -- multilingual (119 languages including CJK), MTEB top-ranked

Note: when switching embedding models, you must re-index with `kindx embed -f`, since vectors from different models are not cross-compatible. The prompt format is adjusted automatically for each model family.

## Installation

```bash
npm install -g @ambicuity/kindx
# or
bun install -g @ambicuity/kindx
```

### Development

```bash
git clone https://github.com/ambicuity/KINDX
cd KINDX
npm install
npm link
```

## Usage

### Collection Management

```bash
# Create a collection from the current directory
kindx collection add . --name myproject

# Create a collection with an explicit path and custom glob mask
kindx collection add ~/Documents/notes --name notes --mask "**/*.md"

# List all collections
kindx collection list

# Remove a collection
kindx collection remove myproject

# Rename a collection
kindx collection rename myproject my-project

# List files in a collection
kindx ls notes
kindx ls notes/subfolder
```

### Generate Vector Embeddings

```bash
# Embed all indexed documents (900 tokens/chunk, 15% overlap)
kindx embed

# Force re-embed everything
kindx embed -f
```

### Context Management

Context adds descriptive metadata to collections and paths, helping search understand your content.

```bash
# Add context to a collection (using kindx:// virtual paths)
kindx context add kindx://notes "Personal notes and ideas"
kindx context add kindx://docs/api "API documentation"

# Add context from within a collection directory
cd ~/notes && kindx context add "Personal notes and ideas"
cd ~/notes/work && kindx context add "Work-related notes"

# Add global context (applies to all collections)
kindx context add / "Knowledge base for my projects"

# List all contexts
kindx context list

# Remove context
kindx context rm kindx://notes/old
```

### Search Commands

```
+------------------------------------------------------------+
|                        Search Modes                        |
+----------+-------------------------------------------------+
| search   | BM25 full-text search only                      |
| vsearch  | Vector semantic search only                     |
| query    | Hybrid: FTS + Vector + Query Expansion + Rerank |
+----------+-------------------------------------------------+
```

```bash
# Full-text search (fast, keyword-based)
kindx search "authentication flow"

# Vector search (semantic similarity)
kindx vsearch "how to login"

# Hybrid search with re-ranking (best quality)
kindx query "user authentication"
```

### Options

```bash
# Search options
-n <num>             # Number of results (default: 5, or 20 for --files/--json)
-c, --collection     # Restrict search to a specific collection
--all                # Return all matches (use with --min-score to filter)
--min-score <num>    # Minimum score threshold (default: 0)
--full               # Show full document content
--line-numbers       # Add line numbers to output
--explain            # Include retrieval score traces (query, JSON/CLI output)
--index <name>       # Use named index

# Output formats (for search and multi-get)
--files              # Output: docid,score,filepath,context
--json               # JSON output with snippets
--csv                # CSV output
--md                 # Markdown output
--xml                # XML output

# Get options
kindx get <file>[:line]   # Get document, optionally starting at line
-l <num>                  # Maximum lines to return
--from <num>              # Start from line number

# Multi-get options
-l <num>             # Maximum lines per file
--max-bytes <num>    # Skip files larger than N bytes (default: 10KB)
```

### Index Maintenance

```bash
# Show index status and collections with contexts
kindx status

# Re-index all collections
kindx update

# Re-index with git pull first (for remote repos)
kindx update --pull

# Get a document by filepath (with fuzzy matching suggestions)
kindx get notes/meeting.md

# Get a document by docid (from search results)
kindx get "#abc123"

# Get a document starting at line 50, max 100 lines
kindx get notes/meeting.md:50 -l 100

# Get multiple documents by glob pattern
kindx multi-get "journals/2025-05*.md"

# Get multiple documents by comma-separated list (supports docids)
kindx multi-get "doc1.md, doc2.md, #abc123"

# Limit multi-get to files under 20KB
kindx multi-get "docs/*.md" --max-bytes 20480

# Output multi-get results as JSON for agent processing
kindx multi-get "docs/*.md" --json

# Clean up cache and orphaned data
kindx cleanup
```

## Data Storage

Index stored in: `~/.cache/kindx/index.sqlite`

### Schema

```mermaid
erDiagram
    collections {
        text name PK
        text path
        text mask
    }
    path_contexts {
        text virtual_path PK
        text description
    }
    documents {
        text hash PK
        text path
        text title
        text content
        text docid
        text collection
        integer mtime
    }
    documents_fts {
        text content
        text title
    }
    content_vectors {
        text hash_seq PK
        text hash FK
        integer seq
        integer pos
        blob embedding
    }
    vectors_vec {
        text hash_seq PK
        blob vector
    }
    llm_cache {
        text key PK
        text response
        integer created
    }

    collections ||--o{ documents : contains
    documents ||--|{ documents_fts : indexes
    documents ||--o{ content_vectors : chunks
    content_vectors ||--|| vectors_vec : embeds
```

## Environment Variables

| Variable | Default | Description |
|----------|---------|-------------|
| `KINDX_EMBED_MODEL` | `embeddinggemma-300M` | Override embedding model (HuggingFace URI) |
| `KINDX_EXPAND_CONTEXT_SIZE` | `2048` | Context window for query expansion |
| `KINDX_CONFIG_DIR` | `~/.config/kindx` | Configuration directory override |
| `XDG_CACHE_HOME` | `~/.cache` | Cache base directory |
| `NO_COLOR` | (unset) | Disable terminal colors |

## How It Works

### Indexing Flow

```mermaid
flowchart LR
    COL["Collection Config"] --> GLOB["Glob Pattern Scan"]
    GLOB --> MD["Markdown Files"]
    MD --> PARSE["Parse Title + Hash Content"]
    PARSE --> DOCID["Generate docid (6-char hash)"]
    DOCID --> SQL["Store in SQLite"]
    SQL --> FTS["FTS5 Full-Text Index"]
```
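
The 6-character docid step can be illustrated as a truncated content hash. The choice of SHA-256 and of hashing the raw content are assumptions for illustration; the package's actual hash function and input are implementation details:

```typescript
import { createHash } from "node:crypto";

// Hypothetical docid derivation: hash the document content and keep the
// first 6 hex characters, giving a short, stable identifier like "#2cf24d".
function docid(content: string): string {
  return createHash("sha256").update(content).digest("hex").slice(0, 6);
}
```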

### Embedding Flow

Documents are chunked into ~900-token pieces with 15% overlap using smart boundary detection:

```mermaid
flowchart LR
    DOC["Document"] --> CHUNK["Smart Chunk (~900 tokens)"]
    CHUNK --> FMT["Format: title | text"]
    FMT --> LLM["node-llama-cpp embedBatch"]
    LLM --> STORE["Store Vectors in sqlite-vec"]
    CHUNK --> META["Metadata: hash, seq, pos"]
    META --> STORE
```
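
The window arithmetic implied by those parameters (900-token chunks, 15% overlap, so consecutive chunks advance by ~765 tokens) looks like this; a simplified sketch that ignores the smart boundary search described below, not the package's actual chunker:

```typescript
// Compute naive [start, end) token ranges for 900-token chunks with 15%
// overlap: each chunk starts 765 tokens after the previous one.
function chunkRanges(totalTokens: number, size = 900, overlap = 0.15): Array<[number, number]> {
  const step = Math.round(size * (1 - overlap)); // 765-token stride
  const ranges: Array<[number, number]> = [];
  for (let start = 0; start < totalTokens; start += step) {
    ranges.push([start, Math.min(start + size, totalTokens)]);
    if (start + size >= totalTokens) break; // last chunk reached the end
  }
  return ranges;
}
```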

### Smart Chunking

Instead of cutting at hard token boundaries, KINDX uses a scoring algorithm to find natural markdown break points. This keeps semantic units (sections, paragraphs, code blocks) together.

Algorithm:

1. Scan the document for all break points, assigning each a base score
2. When approaching the 900-token target, search a 200-token window before the cutoff
3. Score each break point: `finalScore = baseScore * (1 - (distance/window)^2 * 0.7)`
4. Cut at the highest-scoring break point

The squared distance decay means a heading 200 tokens back (score ~30) still beats a simple line break at the target (score 1), but a closer heading wins over a distant one.

Code fence protection: break points inside code blocks are ignored -- code stays together. If a code block exceeds the chunk size, it is kept whole when possible.
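
The distance decay in step 3 can be checked numerically. The base scores used here (~30 for a heading, 1 for a plain line break) come from the example in the text:

```typescript
// finalScore = baseScore * (1 - (distance/window)^2 * 0.7)
function breakScore(baseScore: number, distance: number, window = 200): number {
  const d = Math.min(distance, window) / window;
  return baseScore * (1 - d * d * 0.7);
}

// A heading a full window back keeps 30% of its score and still beats
// a plain line break right at the target:
breakScore(30, 200); // ~9
breakScore(1, 0);    // 1
```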

### Model Configuration

Models are configured in `engine/inference.ts` as HuggingFace URIs:

```typescript
const DEFAULT_EMBED_MODEL = "hf:ggml-org/embeddinggemma-300M-GGUF/embeddinggemma-300M-Q8_0.gguf";
const DEFAULT_RERANK_MODEL = "hf:ggml-org/Qwen3-Reranker-0.6B-Q8_0-GGUF/qwen3-reranker-0.6b-q8_0.gguf";
const DEFAULT_GENERATE_MODEL = "hf:ambicuity/kindx-query-expansion-1.7B-gguf/kindx-query-expansion-1.7B-q4_k_m.gguf";
```

## Contributing

See [CONTRIBUTING.md](./CONTRIBUTING.md) for the full contribution guide.

## Security

See [SECURITY.md](./SECURITY.md) for reporting vulnerabilities.

## License

MIT -- see [LICENSE](./LICENSE) for details.

---

Maintained by [Ritesh Rana](https://github.com/ambicuity) -- `contact@riteshrana.engineer`