rust-kgdb 0.6.69 → 0.6.70
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +92 -1
- package/package.json +1 -1
package/README.md
CHANGED
@@ -8,7 +8,98 @@
 
 ---
 
-## The Problem
+## The Problem With AI Today
+
+Enterprise AI projects keep failing. Not because the technology is bad, but because organizations use it wrong.
+
+A claims investigator asks ChatGPT: *"Has Provider #4521 shown suspicious billing patterns?"*
+
+The AI responds confidently: *"Yes, Provider #4521 has a history of duplicate billing and upcoding."*
+
+The investigator opens a case. Weeks later, legal discovers Provider #4521 has a perfect record. **The AI made it up.** Lawsuit incoming.
+
+This keeps happening:
+
+- A lawyer cites "Smith v. Johnson (2019)" in court. The judge is confused. **That case doesn't exist.**
+- A doctor avoids prescribing "Nexapril" due to cardiac interactions. **Nexapril isn't a real drug.**
+- A fraud analyst flags Account #7842 for money laundering. **It belongs to a children's charity.**
+
+Every time, the same pattern: The AI sounds confident. The AI is wrong. People get hurt.
+
+---
+
+## The Engineering Problem
+
+The root cause is simple: **LLMs are language models, not databases.** They predict plausible text. They don't look up facts.
+
+When you ask "Has Provider #4521 shown suspicious patterns?", the LLM doesn't query your claims database. It generates text that *sounds like* an answer based on patterns from its training data.
+
+The industry's response? Add guardrails. Use RAG. Fine-tune models.
+
+These help, but they're patches:
+- **RAG** retrieves similar documents - similar isn't the same as correct
+- **Fine-tuning** teaches patterns, not facts
+- **Guardrails** catch obvious errors, but "Provider #4521 has billing anomalies" sounds perfectly plausible
+
+A real solution requires a different architecture. One built on solid engineering principles, not hope.
+
+---
+
+## The Solution: Query Generation, Not Answer Generation
+
+What if AI stopped providing answers and started **generating queries**?
+
+Think about it:
+- Your database knows the facts (claims, providers, transactions)
+- AI understands language (can parse "find suspicious patterns")
+- You need both working together
+
+**The AI translates intent into queries. The database finds facts. The AI never makes up data.**
+
+```
+Before (Dangerous):
+Human: "Is Provider #4521 suspicious?"
+AI: "Yes, they have billing anomalies" <-- FABRICATED
+
+After (Safe):
+Human: "Is Provider #4521 suspicious?"
+AI: Generates SPARQL query
+AI: Executes against YOUR database
+Database: Returns actual facts about Provider #4521
+Result: Real data with audit trail <-- VERIFIABLE
+```
+
+rust-kgdb is a knowledge graph database with an AI layer that **cannot hallucinate** because it only returns data from your actual systems.
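To make that flow concrete, here is a minimal sketch of what application code built on this pattern could look like. The names used (`GraphDB`, `HyperMindAgent`, `generateQuery`, `query`, `auditTrail`) are illustrative assumptions drawn from the package description, not the package's documented API; check `index.d.ts` for the real surface.

```typescript
// Minimal sketch only -- class, method, and option names are assumptions,
// not the actual rust-kgdb API.
import { GraphDB, HyperMindAgent } from 'rust-kgdb';

async function investigate(question: string) {
  // Embedded store: runs in-process, no server to manage (assumed constructor).
  const db = new GraphDB();

  // Schema-aware agent that turns natural language into SPARQL (assumed API).
  const agent = new HyperMindAgent({ db });

  // 1. The LLM layer only translates intent into a query -- it never answers directly.
  const sparql = await agent.generateQuery(question);

  // 2. The query runs against YOUR data; the results are stored facts, not predictions.
  const rows = await db.query(sparql);

  // 3. Query text, results, and provenance are returned so auditors can replay the decision.
  return { sparql, rows, audit: agent.auditTrail() };
}

investigate('Is Provider #4521 suspicious?').then(console.log);
```

The important property is structural: the only thing the model ever emits is a query, so anything it "says" about Provider #4521 has to come back from the database.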
+
+---
+
+## The Business Value
+
+**For Enterprises:**
+- **Zero hallucinations** - Every answer traces back to your actual data
+- **Full audit trail** - Regulators can verify every AI decision (SOX, GDPR, FDA 21 CFR Part 11)
+- **No infrastructure** - Runs embedded in your app, no servers to manage
+- **Instant deployment** - `npm install` and you're running
+
+**For Engineering Teams:**
+- **449ns lookups** - 35x faster than RDFox, the previous gold standard
+- **24 bytes per triple** - 25% more memory efficient than competitors
+- **132K writes/sec** - Handle enterprise transaction volumes
+- **94% recall** on memory retrieval - Agent remembers past queries accurately
+
+**For AI/ML Teams:**
+- **86.4% SPARQL accuracy** - vs 0% with vanilla LLMs on the LUBM benchmark
+- **16ms similarity search** - Find related entities across 10K vectors
+- **Recursive reasoning** - Datalog rules cascade automatically (fraud rings, compliance chains; see the rule sketch below)
+- **Schema-aware generation** - AI uses YOUR ontology, not guessed class names
+
+The math matters. When your fraud detection runs 35x faster, you catch fraud before payments clear. When your agent remembers with 94% accuracy, analysts don't repeat work. When every decision has a proof hash, you pass audits.
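The "recursive reasoning" bullet is easiest to see with a rule example. This is generic, textbook-style Datalog rather than rust-kgdb's exact rule syntax, and the predicate names (`claim`, `flagged`, `linked`, `in_ring`) are made up for illustration.

```
% Two providers are linked if they billed the same patient on the same day.
linked(P1, P2) :- claim(C1, P1, Patient, Date), claim(C2, P2, Patient, Date), P1 != P2.

% Start from any provider an investigator has already flagged...
in_ring(P) :- flagged(P).

% ...and let the rule cascade: anyone linked to a ring member joins the ring.
in_ring(P2) :- in_ring(P1), linked(P1, P2).
```

The recursion in the last rule is what "cascade automatically" means: the engine keeps applying it until no new ring members appear, which is the transitive chasing an analyst would otherwise do by hand.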
+
+---
+
+## The Technical Problem (SPARQL Generation)
+
+Beyond hallucination, there's a practical issue: **LLMs can't write correct SPARQL.**
 
 We asked GPT-4 to write a simple SPARQL query: *"Find all professors."*
 
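For context on that "Find all professors" task: the relevant terms come from LUBM's public univ-bench ontology, and a correct query looks roughly like the sketch below. The prefix and class IRIs are taken from LUBM itself, but the exact query the README's benchmark expects may differ, so treat this as an illustration of why schema awareness matters rather than the benchmark's reference answer.

```sparql
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX ub:  <http://swat.cse.lehigh.edu/onto/univ-bench.owl#>

# LUBM data types individuals as Full/Associate/AssistantProfessor, so without
# RDFS reasoning a query for plain ub:Professor can silently return nothing --
# exactly the kind of schema detail a vanilla LLM tends to get wrong.
SELECT ?prof
WHERE {
  VALUES ?type { ub:FullProfessor ub:AssociateProfessor ub:AssistantProfessor }
  ?prof rdf:type ?type .
}
```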
package/package.json
CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "rust-kgdb",
-  "version": "0.6.69",
+  "version": "0.6.70",
   "description": "High-performance RDF/SPARQL database with AI agent framework. GraphDB (449ns lookups, 35x faster than RDFox), GraphFrames analytics (PageRank, motifs), Datalog reasoning, HNSW vector embeddings. HyperMindAgent for schema-aware query generation with audit trails. W3C SPARQL 1.1 compliant. Native performance via Rust + NAPI-RS.",
   "main": "index.js",
   "types": "index.d.ts",