rust-kgdb 0.6.18 → 0.6.19
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/CHANGELOG.md +25 -0
- package/README.md +116 -11
- package/package.json +1 -1
package/CHANGELOG.md
CHANGED
|
@@ -2,6 +2,31 @@
|
|
|
2
2
|
|
|
3
3
|
All notable changes to the rust-kgdb TypeScript SDK will be documented in this file.
|
|
4
4
|
|
|
5
|
+
## [0.6.19] - 2025-12-16
|
|
6
|
+
|
|
7
|
+
### Documentation: The Power of the Concept
|
|
8
|
+
|
|
9
|
+
Clear explanation of why HyperMind eliminates hallucinations.
|
|
10
|
+
|
|
11
|
+
#### README.md - Before/After Comparison
|
|
12
|
+
- **Side-by-side code**: Vanilla LLM (unreliable) vs HyperMind (verifiable)
|
|
13
|
+
- **Visual problem markers**: ❌ for vanilla problems, ✅ for HyperMind solutions
|
|
14
|
+
- **Concrete output examples**: Shows exactly what each approach returns
|
|
15
|
+
|
|
16
|
+
#### README.md - Architecture Deep Dive
|
|
17
|
+
- **4-step diagram**: Schema Injection → Typed Plan → Database Execution → Verified Answer
|
|
18
|
+
- **"Why Hallucination Is Impossible" table**: Each step explained
|
|
19
|
+
- **Key insight**: "The LLM is a planner, not an oracle"
|
|
20
|
+
|
|
21
|
+
#### The Core Message
|
|
22
|
+
```
|
|
23
|
+
The LLM decides WHAT to look for.
|
|
24
|
+
The database finds EXACTLY that.
|
|
25
|
+
The answer is the intersection of LLM intelligence and database truth.
|
|
26
|
+
```
|
|
27
|
+
|
|
28
|
+
---
|
|
29
|
+
|
|
5
30
|
## [0.6.18] - 2025-12-16
|
|
6
31
|
|
|
7
32
|
### Documentation: Progressive Disclosure Structure
|
package/README.md
CHANGED
|
@@ -27,6 +27,62 @@
|
|
|
27
27
|
|
|
28
28
|
---
|
|
29
29
|
|
|
30
|
+
## The Difference: Before & After
|
|
31
|
+
|
|
32
|
+
### Before: Vanilla LLM (Unreliable)
|
|
33
|
+
|
|
34
|
+
```javascript
|
|
35
|
+
// Ask LLM to query your database
|
|
36
|
+
const answer = await openai.chat.completions.create({
|
|
37
|
+
model: 'gpt-4o',
|
|
38
|
+
messages: [{ role: 'user', content: 'Find suspicious providers in my database' }]
|
|
39
|
+
});
|
|
40
|
+
|
|
41
|
+
console.log(answer.choices[0].message.content);
|
|
42
|
+
// "Based on my analysis, Provider P001 appears suspicious because..."
|
|
43
|
+
//
|
|
44
|
+
// PROBLEMS:
|
|
45
|
+
// ❌ Did it actually query your database? No - it's guessing
|
|
46
|
+
// ❌ Where's the evidence? None - it made up "Provider P001"
|
|
47
|
+
// ❌ Will this answer be the same tomorrow? No - probabilistic
|
|
48
|
+
// ❌ Can you audit this for regulators? No - black box
|
|
49
|
+
```
|
|
50
|
+
|
|
51
|
+
### After: HyperMind (Verifiable)
|
|
52
|
+
|
|
53
|
+
```javascript
|
|
54
|
+
// Ask HyperMind to query your database
|
|
55
|
+
const { HyperMindAgent, GraphDB } = require('rust-kgdb');
|
|
56
|
+
|
|
57
|
+
const db = new GraphDB('http://insurance.org/');
|
|
58
|
+
db.loadTtl(yourActualData, null); // Your real data
|
|
59
|
+
|
|
60
|
+
const agent = new HyperMindAgent({ kg: db, model: 'gpt-4o' });
|
|
61
|
+
const result = await agent.call('Find suspicious providers');
|
|
62
|
+
|
|
63
|
+
console.log(result.answer);
|
|
64
|
+
// "Provider PROV001 has risk score 0.87 with 47 claims over $50,000"
|
|
65
|
+
//
|
|
66
|
+
// VERIFIED:
|
|
67
|
+
// ✅ Queried your actual database (SPARQL executed)
|
|
68
|
+
// ✅ Evidence included (47 real claims found)
|
|
69
|
+
// ✅ Reproducible (same hash every time)
|
|
70
|
+
// ✅ Full audit trail for regulators
|
|
71
|
+
|
|
72
|
+
console.log(result.reasoningTrace);
|
|
73
|
+
// [
|
|
74
|
+
// { tool: 'kg.sparql.query', input: 'SELECT ?p WHERE...', output: '[PROV001]' },
|
|
75
|
+
// { tool: 'kg.datalog.apply', input: 'highRisk(?p) :- ...', output: 'MATCHED' }
|
|
76
|
+
// ]
|
|
77
|
+
|
|
78
|
+
console.log(result.hash);
|
|
79
|
+
// "sha256:8f3a2b1c..." - Same question = Same answer = Same hash
|
|
80
|
+
```
|
|
81
|
+
|
|
82
|
+
**The key insight**: The LLM plans WHAT to look for. The database finds EXACTLY that. Every answer traces back to your actual data.
|
|
83
|
+
|
|
84
|
+
---
|
|
85
|
+
|
|
30
86
|
## Quick Start
|
|
31
87
|
|
|
32
88
|
### Installation
|
|
@@ -154,21 +210,70 @@ const result = await agent.call('Calculate risk score for entity P001')
|
|
|
154
210
|
|
|
155
211
|
## How It Works
|
|
156
212
|
|
|
157
|
-
|
|
158
|
-
|
|
159
|
-
1. **Neural** (LLM): Understands your question in natural language
|
|
160
|
-
2. **Symbolic** (Database): Executes precise queries against your data
|
|
213
|
+
### The Architecture
|
|
161
214
|
|
|
162
215
|
```
|
|
163
|
-
|
|
164
|
-
|
|
165
|
-
"Find
|
|
166
|
-
|
|
167
|
-
|
|
168
|
-
|
|
216
|
+
┌─────────────────────────────────────────────────────────────────────────────┐
|
|
217
|
+
│ YOUR QUESTION │
|
|
218
|
+
│ "Find suspicious providers" │
|
|
219
|
+
└─────────────────────────────────┬───────────────────────────────────────────┘
|
|
220
|
+
│
|
|
221
|
+
▼
|
|
222
|
+
┌─────────────────────────────────────────────────────────────────────────────┐
|
|
223
|
+
│ STEP 1: SCHEMA INJECTION │
|
|
224
|
+
│ │
|
|
225
|
+
│ LLM receives your question PLUS your actual data schema: │
|
|
226
|
+
│ • Classes: Claim, Provider, Policy (from YOUR database) │
|
|
227
|
+
│ • Properties: amount, riskScore, claimCount (from YOUR database) │
|
|
228
|
+
│ │
|
|
229
|
+
│ The LLM can ONLY reference things that actually exist in your data. │
|
|
230
|
+
└─────────────────────────────────┬───────────────────────────────────────────┘
|
|
231
|
+
│
|
|
232
|
+
▼
|
|
233
|
+
┌─────────────────────────────────────────────────────────────────────────────┐
|
|
234
|
+
│ STEP 2: TYPED EXECUTION PLAN │
|
|
235
|
+
│ │
|
|
236
|
+
│ LLM generates a plan using typed tools: │
|
|
237
|
+
│ 1. kg.sparql.query("SELECT ?p WHERE { ?p :riskScore ?r . FILTER(?r > 0.8)}")│
|
|
238
|
+
│ 2. kg.datalog.apply("suspicious(?p) :- highRisk(?p), highClaimCount(?p)") │
|
|
239
|
+
│ │
|
|
240
|
+
│ Each tool has defined inputs/outputs. Invalid combinations rejected. │
|
|
241
|
+
└─────────────────────────────────┬───────────────────────────────────────────┘
|
|
242
|
+
│
|
|
243
|
+
▼
|
|
244
|
+
┌─────────────────────────────────────────────────────────────────────────────┐
|
|
245
|
+
│ STEP 3: DATABASE EXECUTION │
|
|
246
|
+
│ │
|
|
247
|
+
│ The database executes the plan against YOUR ACTUAL DATA: │
|
|
248
|
+
│ • SPARQL query runs → finds 3 providers with riskScore > 0.8 │
|
|
249
|
+
│ • Datalog rules run → 1 provider matches "suspicious" pattern │
|
|
250
|
+
│ │
|
|
251
|
+
│ Every step is recorded in the reasoning trace. │
|
|
252
|
+
└─────────────────────────────────┬───────────────────────────────────────────┘
|
|
253
|
+
│
|
|
254
|
+
▼
|
|
255
|
+
┌─────────────────────────────────────────────────────────────────────────────┐
|
|
256
|
+
│ STEP 4: VERIFIED ANSWER │
|
|
257
|
+
│ │
|
|
258
|
+
│ Answer: "Provider PROV001 is suspicious (riskScore: 0.87, claims: 47)" │
|
|
259
|
+
│ │
|
|
260
|
+
│ + Reasoning Trace: Every query, every rule, every result │
|
|
261
|
+
│ + Hash: sha256:8f3a2b1c... (reproducible) │
|
|
262
|
+
│ │
|
|
263
|
+
│ Run the same question tomorrow → Same answer → Same hash │
|
|
264
|
+
└─────────────────────────────────────────────────────────────────────────────┘
|
|
169
265
|
```
|
|
170
266
|
|
|
171
|
-
|
|
267
|
+
### Why Hallucination Is Impossible
|
|
268
|
+
|
|
269
|
+
| Step | What Prevents Hallucination |
|
|
270
|
+
|------|----------------------------|
|
|
271
|
+
| Schema Injection | LLM only sees properties that exist in YOUR data |
|
|
272
|
+
| Typed Tools | Invalid query structures rejected before execution |
|
|
273
|
+
| Database Execution | Answers come from actual data, not LLM imagination |
|
|
274
|
+
| Reasoning Trace | Every claim is backed by recorded evidence |
|
|
275
|
+
|
|
276
|
+
**The key insight**: The LLM is a planner, not an oracle. It decides WHAT to look for. The database finds EXACTLY that. The answer is the intersection of LLM intelligence and database truth.
|
|
172
277
|
|
|
173
278
|
---
|
|
174
279
|
|
package/package.json
CHANGED
|
@@ -1,6 +1,6 @@
|
|
|
1
1
|
{
|
|
2
2
|
"name": "rust-kgdb",
|
|
3
|
-
"version": "0.6.
|
|
3
|
+
"version": "0.6.19",
|
|
4
4
|
"description": "Production-grade Neuro-Symbolic AI Framework with Schema-Aware GraphDB, Context Theory, and Memory Hypergraph: +86.4% accuracy over vanilla LLMs. Features Schema-Aware GraphDB (auto schema extraction), BYOO (Bring Your Own Ontology) for enterprise, cross-agent schema caching, LLM Planner for natural language to typed SPARQL, ProofDAG with Curry-Howard witnesses. High-performance (2.78µs lookups, 35x faster than RDFox). W3C SPARQL 1.1 compliant.",
|
|
5
5
|
"main": "index.js",
|
|
6
6
|
"types": "index.d.ts",
|