llm-kb 0.2.0 → 0.4.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +322 -60
- package/bin/anthropic-5TIU2EED.js +5515 -0
- package/bin/azure-openai-responses-ZVUVMK3G.js +190 -0
- package/bin/chunk-2WV6TQRI.js +4792 -0
- package/bin/chunk-3YMNGUZZ.js +262 -0
- package/bin/chunk-5PYKQQLA.js +14295 -0
- package/bin/chunk-65KFH7OI.js +31 -0
- package/bin/chunk-DHOXVEIR.js +7261 -0
- package/bin/chunk-EAQYK3U2.js +41 -0
- package/bin/chunk-IFS3OKBN.js +428 -0
- package/bin/chunk-LDHOKBJA.js +86 -0
- package/bin/chunk-SLYBG6ZQ.js +32681 -0
- package/bin/chunk-UEODFF7H.js +17 -0
- package/bin/chunk-XCXTZJGO.js +174 -0
- package/bin/chunk-XFV534WU.js +7056 -0
- package/bin/cli.js +5496 -163
- package/bin/dist-3YH7P2QF.js +1244 -0
- package/bin/google-JFC43EFJ.js +371 -0
- package/bin/google-gemini-cli-K4XNMYDI.js +712 -0
- package/bin/google-vertex-Y42F254G.js +414 -0
- package/bin/indexer-KSYRIVVN.js +10 -0
- package/bin/mistral-ZU2JS5XZ.js +38406 -0
- package/bin/multipart-parser-CO464TZY.js +371 -0
- package/bin/openai-codex-responses-NW2LELBH.js +712 -0
- package/bin/openai-completions-TW3VKTHO.js +662 -0
- package/bin/openai-responses-VGL522MK.js +198 -0
- package/bin/src-Y22OHE3S.js +1408 -0
- package/package.json +16 -6
- package/PHASE2_SPEC.md +0 -274
- package/SPEC.md +0 -275
- package/bin/chunk-MYQ36JJB.js +0 -118
- package/bin/indexer-LSYSZXZX.js +0 -6
- package/plan.md +0 -55
- package/src/cli.ts +0 -132
- package/src/indexer.ts +0 -148
- package/src/pdf.ts +0 -119
- package/src/query.ts +0 -132
- package/src/resolve-kb.ts +0 -19
- package/src/scan.ts +0 -59
- package/src/watcher.ts +0 -84
- package/tsconfig.json +0 -14
package/README.md
CHANGED
@@ -1,8 +1,8 @@
# llm-kb

- Drop files into a folder. Get a knowledge base you can query.
+ Drop files into a folder. Get a knowledge base you can query — with a self-improving wiki that gets smarter every time you ask.

- Inspired by [Karpathy's LLM Knowledge Bases](https://x.com/karpathy/status/2039805659525644595).
+ Inspired by [Karpathy's LLM Knowledge Bases](https://x.com/karpathy/status/2039805659525644595) and [Farzapedia](https://x.com/FarzaTV).

## Quick Start

@@ -11,121 +11,383 @@ npm install -g llm-kb
llm-kb run ./my-documents
```

- That's it.
+ That's it. PDFs get parsed, an index is built, and an interactive chat opens — ready for questions.

+ ## Authentication

- - **Pi SDK** installed and authenticated (`npm install -g @mariozechner/pi-coding-agent` + run `pi` once to set up auth)
+ Two options (you need one):
+
+ **Option 1 — Pi SDK (recommended)**
+ ```bash
+ npm install -g @mariozechner/pi-coding-agent
+ pi # run once to authenticate
+ ```
+
+ **Option 2 — Anthropic API key**
+ ```bash
+ export ANTHROPIC_API_KEY=sk-ant-...
+ ```
+
+ If neither is configured, `llm-kb` shows a clear error with setup instructions.

## What It Does

- ###
+ ### Run — scan, parse, index, chat

```bash
llm-kb run ./my-documents
```

```
- llm-kb v0.
+ llm-kb v0.4.0

Scanning ./my-documents...
Found 9 files (9 PDF)
9 parsed

- Building index...
+ Building index... (claude-haiku-4-5)
Index built: .llm-kb/wiki/index.md

+ Ready. Ask a question or drop files in to re-index.
+
+ ────────────────────────────────────────────
+ > What are the key findings?
+ ────────────────────────────────────────────
+
+ ⟡ claude-sonnet-4-6
+
+ ▸ Thinking
+ Let me check the relevant source files...
+
+ ▸ Reading q3-report.md
+ ▸ Reading q4-report.md
+
+ ──────────────────────────────────────────────

+ ## Key Findings
+ Revenue grew 12% QoQ driven by...
+ (cited answer with page references)
+
+ ── 8.3s · 2 files read ──────────────────────
```

+ **What happens:**
+ 1. **Scans** — finds all supported files (PDF, DOCX, XLSX, PPTX, MD, TXT, CSV, images)
+ 2. **Parses** — PDFs converted to markdown + bounding boxes via [LiteParse](https://github.com/run-llama/liteparse)
+ 3. **Indexes** — Haiku reads sources, writes `index.md` with summary table
+ 4. **Watches** — drop new files while running, they get parsed and indexed automatically
+ 5. **Chat** — interactive TUI with Pi-style markdown rendering, thinking display, tool call progress
+ 6. **Learns** — every answer updates a concept-organized wiki; repeated questions answered instantly from cache
- ###
+ ### Continuous conversation

+ The chat maintains full conversation history. Follow-up questions work naturally:
+
+ ```
+ > What is BNS 2023?
+ (detailed answer)

+ > Tell me more about the mob lynching clause
+ (agent remembers context — answers about Clause 101 without re-reading)

+ > How does that compare to the old IPC?
+ (continues the thread with full context)
```

+ Sessions persist across restarts — run `llm-kb run` again and the conversation continues.

- **Research mode** (`--save`) — read + write + bash. The agent saves answers to `outputs/`, re-indexes, and can write scripts to read Excel/Word files. Answers compound over time.
+ ### Query — single question from CLI

+ ```bash
+ llm-kb query "compare Q3 vs Q4"
+ llm-kb query "summarize revenue data" --folder ./my-documents
+ llm-kb query "full analysis of lease terms" --save # research mode
+ ```

+ ### Eval — analyze and improve
+
+ ```bash
+ llm-kb eval
+ llm-kb eval --last 10
```
+
+ ```
+ llm-kb eval
+
+ Reading sessions...
+ Found 29 Q&A exchanges across sessions
+ Judging 1/29: "What are the 2023 new laws?"
+ ...
+ Judging 29/29: "How many files you have"
+
+ Results:
+ Queries analyzed: 29
+ Wiki hit rate: 66%
+ Wasted reads: 42
+ Issues: 22 errors 24 warnings
+ Wiki gaps: 28
+
+ Report: .llm-kb/wiki/outputs/eval-report.md
```

+ Eval reads your session files and uses Haiku as a judge to find:

+ | Check | What it catches |
+ |---|---|
+ | **Citation validity** | Agent claims "Clause 303" but source says "Clause 304" |
+ | **Contradictions** | Answer says "sedition retained" but source says "removed" |
+ | **Wiki gaps** | Topics asked 4 times but never cached in wiki |
+ | **Wasted reads** | Files read but never cited in the answer |
+ | **Performance** | Wiki hit rate, avg duration, most-read files |

+ The eval report includes actionable recommendations and updates `.llm-kb/guidelines.md` — learned rules the agent reads on-demand during queries. You can also add your own rules to this file (see [Guidelines](#guidelines) below).
+
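Aggregates like a wiki hit rate and a wasted-read count fall out mechanically from per-query traces. A sketch assuming an invented trace shape `{ usedWiki, filesRead, citedFiles }`; the actual trace JSON in `.llm-kb/traces/` may differ.

```javascript
// Compute two eval-style metrics from query traces.
// The trace shape is assumed for illustration, not llm-kb's real format.
function evalMetrics(traces) {
  const hits = traces.filter((t) => t.usedWiki).length;
  // A "wasted read" is a file that was read but never cited in the answer.
  const wastedReads = traces.reduce(
    (total, t) =>
      total + t.filesRead.filter((f) => !t.citedFiles.includes(f)).length,
    0
  );
  return {
    wikiHitRate: Math.round((100 * hits) / traces.length),
    wastedReads,
  };
}

module.exports = { evalMetrics };
```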
+ ### Status — KB overview

- **Local (default when enabled):**
```bash
+ llm-kb status
+ ```
+
+ ```
+ Knowledge Base Status
+ Folder:   /path/to/my-documents
+ Sources:  12 parsed sources
+ Index:    3 min ago
+ Articles: 15 compiled
+ Outputs:  2 saved answers
+ Models:   claude-sonnet-4-6 (query) claude-haiku-4-5 (index)
+ Auth:     Pi SDK
+ ```
+
+ ## The Three-Layer Architecture
+
+ The system separates **how to behave**, **what to know**, and **what went wrong** into three files with distinct lifecycles:
+
+ ```
+ ┌───────────────────────────────────────────────────────┐
+ │ AGENTS.md (runtime — built by code, not on disk)      │
+ │ How to answer: source list, tool patterns, citation   │
+ │ rules. Points to guidelines.md for learned behaviour. │
+ └───────────────────────────────────────────────────────┘
+                             │
+                ┌────────────┴───────────────────┐
+                ▼                                ▼
+ ┌─────────────────────────────┐  ┌─────────────────────────────┐
+ │ wiki.md                     │  │ guidelines.md               │
+ │ WHAT to know                │  │ HOW to behave better        │
+ │                             │  │                             │
+ │ Concept-organized knowledge │  │ Eval insights (auto)        │
+ │ synthesized from sources.   │  │ + your custom rules.        │
+ │ Updated after every query.  │  │ Read on-demand by agent.    │
+ └─────────────────────────────┘  └─────────────────────────────┘
+         ▲                                ▲
+         │ updated by wiki-updater        │ updated by eval
+         │                                │
+ ┌───────┴────────────────────────────────┴────────────────┐
+ │ llm-kb eval                                             │
+ │ Reads sessions → judges quality → updates guidelines.md │
+ │ + writes eval-report.md for humans                      │
+ └─────────────────────────────────────────────────────────┘
+ ```
+
+ | Layer | File | Changes when | Written by |
+ |---|---|---|---|
+ | Architecture | AGENTS.md (runtime) | Code deploys | Developer |
+ | Behaviour | `guidelines.md` | After eval / by you | Eval + user |
+ | Knowledge | `wiki.md` | After every query | Wiki updater |
+
+ The agent sees AGENTS.md in its system prompt (lean, stable). It reads `guidelines.md` and `wiki.md` on-demand via tool calls — progressive disclosure, not context bloat.
+
+ ## The Data Flywheel
+
+ Every query makes the system faster. Every eval makes it smarter.
+
+ ```
+     ┌─────────────────┐
+     │ User asks       │
+     │ a question      │
+     └────────┬────────┘
+              │
+              ▼
+ ┌────────────────────────┐
+ │ Agent checks wiki.md   │
+ │ + reads guidelines.md  │ ◄── on-demand, not forced
+ │ + reads source files   │
+ └────────────┬───────────┘
+              │
+              ▼
+ ┌────────────────────────┐
+ │ Wiki updated           │ ◄── knowledge compounds
+ │ (concept-organized)    │
+ └────────────┬───────────┘
+              │
+              ▼
+ ┌────────────────────────┐
+ │ Next similar query     │
+ │ answered from wiki     │ ── 0 file reads, 2s instead of 25s
+ └────────────┬───────────┘
+              │
+              ▼
+ ┌────────────────────────┐
+ │ llm-kb eval            │ ◄── behaviour compounds
+ │ analyzes sessions      │     updates guidelines.md
+ │ improves behaviour     │     with learned rules
+ └────────────────────────┘
```
- Uses Tesseract.js (built-in, slower but works everywhere).

- **
+ **Proven results:**
+ - First query about a topic: ~25s, reads source files
+ - Same question again: ~2s, answered from wiki, 0 files read
+ - Wiki hit rate grows with usage: 0% → 66% after 29 queries
+
+ ## The Concept Wiki
+
+ The wiki organizes knowledge by **concepts**, not source files. A single wiki entry can synthesize information from multiple sources:
+
+ ```markdown
+ ## Mob Lynching
+ First-ever criminalisation in Indian law under BNS 2023, Clause 101(2).
+ Group of 5+ persons, discriminatory grounds, minimum 7 years to death.
+ IPC had no equivalent — prosecuted under general S.302.
+ See also: [[Murder and Homicide]], [[BNS 2023 Overview]]
+ *Sources: indian penal code - new.md (p.137), Annotated comparison (p.15) · 2026-04-06*
+
+ ---
+
+ ## Electronic Evidence
+ Section 65B requires certificate from responsible official.
+ BSB 2023 expands: emails, WhatsApp, GPS, cloud docs all admissible.
+ See also: [[Evidence Law Overview]]
+ *Sources: Indian Evidence Act.md, Comparison Chart.md · 2026-04-06*
+ ```
+
+ ## Model Configuration
+
+ Auto-generated at `.llm-kb/config.json`:
+
+ ```json
+ {
+   "indexModel": "claude-haiku-4-5",
+   "queryModel": "claude-sonnet-4-6"
+ }
+ ```
+
+ | Task | Model | Why |
+ |---|---|---|
+ | Index | Haiku | Summarizing sources — cheap, fast |
+ | Wiki update | Haiku | Merging knowledge — cheap, fast |
+ | Eval judge | Haiku | Checking quality — cheap, fast |
+ | Query | Sonnet | Complex reasoning, citations — needs strength |
+
+ Override with env vars:
```bash
+ LLM_KB_INDEX_MODEL=claude-haiku-4-5 llm-kb run ./docs
+ LLM_KB_QUERY_MODEL=claude-sonnet-4-6 llm-kb query "question"
```
- Routes scanned pages to an Azure Document Intelligence bridge. Native-text pages still processed locally (free).

## Non-PDF Files

- PDFs are parsed at
+ PDFs are parsed at scan time. Other file types are read dynamically by the agent using bash scripts:

+ | File type | How it's read |
|---|---|
+ | `.pdf` | Pre-parsed to markdown + bounding boxes (LiteParse) |
+ | `.docx` | Selective XML reading via `adm-zip` (structure first, then relevant sections) |
+ | `.xlsx` | Specific sheets/cells via `exceljs` |
+ | `.pptx` | Text extraction via `officeparser` |
+ | `.md`, `.txt`, `.csv` | Read directly |
+
+ For large files, the agent reads the structure first, then extracts only the sections relevant to the question — never dumps the entire file.
+
+ ## OCR for Scanned PDFs
+
+ Most PDFs have native text. For scanned PDFs:
+
+ ```bash
+ OCR_ENABLED=true llm-kb run ./docs # local Tesseract
+ OCR_SERVER_URL="http://localhost:8080/ocr?key=KEY" llm-kb run . # remote Azure OCR
+ ```
+
+ ## Guidelines
+
+ `guidelines.md` is the agent’s learned behaviour file. Eval writes the `## Eval Insights` section automatically. You can add your own rules below it — eval will never overwrite them.
+
+ ```markdown
+ ## Eval Insights (auto-generated 2026-04-07)

+ ### Wiki Gaps — add to wiki when users ask about these topics
+ - Reserve requirements
+ - Engine types

+ ### Behaviour Fixes
+ - Double-check clause numbers against source text.

+ ### Performance
+ - Wiki hit rate: 82% (target: 80%+)
+ - Avg query time: 3.1s
+
+ ## My Rules
+
+ - Always use Hindi transliterations for legal terms
+ - Respond in bullet points for legal questions
+ - For aviation leases: always check both lessee and lessor obligations
+ ```
+
+ The agent reads this file on-demand — not on every query. It consults guidelines when unsure about citation accuracy, file selection, or when a question touches a topic that had issues before. This keeps the system prompt lean while making learned behaviour available when it matters.
+
+ You can create `guidelines.md` manually before ever running eval. The agent will find it.
+
+ ## What It Creates
+
+ ```
+ ./my-documents/
+ ├── (your files — untouched)
+ └── .llm-kb/
+     ├── config.json      ← model configuration
+     ├── guidelines.md    ← learned rules from eval + your custom rules
+     ├── sessions/        ← conversation history (JSONL)
+     ├── traces/          ← per-query traces (JSON)
+     │   └── .processed   ← prevents re-processing on restart
+     └── wiki/
+         ├── index.md     ← source summary table
+         ├── wiki.md      ← concept-organized knowledge wiki
+         ├── queries.md   ← query log (newest first)
+         ├── sources/     ← parsed markdown + bounding boxes
+         └── outputs/
+             ├── eval-report.md   ← eval analysis report
+             └── ...              ← saved research answers (--save)
+ ```
+
+ Your original files are never modified. Delete `.llm-kb/` to start fresh.
+
+ ## Display
+
+ The interactive TUI (via `@mariozechner/pi-tui`) shows the Claude Web UI pattern:
+
+ | Phase | What you see |
+ |---|---|
+ | Model | `⟡ claude-sonnet-4-6` |
+ | Thinking | `▸ Thinking` + streamed reasoning (dim) |
+ | Tool calls | `▸ Reading file.md` / `▸ Running bash` + code block |
+ | Answer | Separator line → markdown with tables, code blocks, headers |
+ | Done | `── 8.3s · 2 files read ──` |
+
+ Phases can interleave: think → read files → answer → think again → read more → continue answer.
+
+ The `llm-kb query` command uses stdout mode — same phases, works with pipes and scripts.

## Development

```bash
git clone https://github.com/satish860/llm-kb
cd llm-kb
+ npm install
+ npm run build
npm link

+ npm test           # 42 tests
+ npm run test:watch # vitest watch mode
+
llm-kb run ./test-folder
```

@@ -135,4 +397,4 @@ Building this in public: [themindfulai.dev](https://themindfulai.dev/articles/bu

## License

- MIT
+ MIT — [Satish Venkatakrishnan](https://deltaxy.ai)