rust-kgdb 0.6.60 → 0.6.62

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (2)
  1. package/README.md +181 -132
  2. package/package.json +1 -1
package/README.md CHANGED
@@ -6,15 +6,15 @@
 
  ---
 
- ## Why I Built This
+ ## The Problem With AI Today
 
- I spent years watching enterprise AI projects fail. Not because the technology was bad, but because we were using it wrong.
+ Enterprise AI projects keep failing. Not because the technology is bad, but because organizations use it wrong.
 
  A claims investigator asks ChatGPT: *"Has Provider #4521 shown suspicious billing patterns?"*
 
  The AI responds confidently: *"Yes, Provider #4521 has a history of duplicate billing and upcoding."*
 
- The investigator opens a case. Weeks later, legal discovers **Provider #4521 has a perfect record**. The AI made it up. Now we're facing a lawsuit.
+ The investigator opens a case. Weeks later, legal discovers **Provider #4521 has a perfect record**. The AI made it up. Lawsuit incoming.
 
  This keeps happening:
 
@@ -30,8 +30,6 @@ Every time, the same pattern: The AI sounds confident. The AI is wrong. People g
 
  ## The Engineering Problem
 
- I'm an engineer. I don't accept "that's just how LLMs work." I wanted to understand *why* this happens and *how* to fix it properly.
-
  **The root cause is simple:** LLMs are language models, not databases. They predict plausible text. They don't look up facts.
 
  When you ask "Has Provider #4521 shown suspicious patterns?", the LLM doesn't query your claims database. It generates text that *sounds like* an answer based on patterns from its training data.
@@ -40,34 +38,34 @@ When you ask "Has Provider #4521 shown suspicious patterns?", the LLM doesn't qu
 
  These help, but they're patches. RAG retrieves *similar* documents - similar isn't the same as *correct*. Fine-tuning teaches patterns, not facts. Guardrails catch obvious errors, but "Provider #4521 has billing anomalies" sounds perfectly plausible.
 
- **I wanted a real solution.** One built on solid engineering principles, not hope.
+ **A real solution requires a different architecture.** One built on solid engineering principles, not hope.
 
  ---
 
- ## The Insight
+ ## The Solution
 
- What if we stopped asking AI for **answers** and started asking it for **questions**?
+ What if AI stopped providing **answers** and started generating **queries**?
 
  Think about it:
  - **Your database** knows the facts (claims, providers, transactions)
  - **AI** understands language (can parse "find suspicious patterns")
  - **You need both** working together
 
- The AI should translate intent into queries. The database should find facts. The AI should never make up data.
+ The AI translates intent into queries. The database finds facts. The AI never makes up data.
 
  ```
  Before (Dangerous):
  Human: "Is Provider #4521 suspicious?"
- AI: "Yes, they have billing anomalies" FABRICATED
+ AI: "Yes, they have billing anomalies" <- FABRICATED
 
  After (Safe):
  Human: "Is Provider #4521 suspicious?"
- AI: Generates SPARQL query Executes against YOUR database
+ AI: Generates SPARQL query -> Executes against YOUR database
  Database: Returns actual facts about Provider #4521
- Result: Real data with audit trail VERIFIABLE
+ Result: Real data with audit trail <- VERIFIABLE
  ```
 
- This is what I built. A knowledge graph database with an AI layer that **cannot hallucinate** because it only returns data from your actual systems.
+ rust-kgdb is a knowledge graph database with an AI layer that **cannot hallucinate** because it only returns data from your actual systems.
 
  ---
 
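A minimal sketch of the "After (Safe)" flow in TypeScript, using the GraphDB calls listed in the API reference further down in this README; the import path, the sample Turtle, and the planSparql step are assumptions for illustration, not part of the package:

```typescript
// Sketch only: the LLM's entire job is to produce a SPARQL string; every fact
// in the result comes from querySelect over data you loaded yourself.
// Import path and result shape are assumptions -- check index.d.ts.
import { GraphDB } from 'rust-kgdb';

const db = new GraphDB('http://example.org/claims/');
db.loadTtl(
  `@prefix ex: <http://example.org/claims/> .
   ex:Provider4521 ex:billedAmount "120.00" .`,
  'http://example.org/claims/graph'
);

// planSparql stands in for whatever LLM planner you use; it returns query text,
// never "facts", so there is nothing for it to fabricate.
async function answer(
  question: string,
  planSparql: (q: string) => Promise<string>
) {
  const sparql = await planSparql(question);
  const bindings = db.querySelect(sparql); // actual rows from YOUR data, per the reference
  return { sparql, bindings };             // keep the query itself as the audit trail
}
```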
@@ -104,18 +102,18 @@ This is what I built. A knowledge graph database with an AI layer that **cannot
  A high-performance RDF/SPARQL database that runs **inside your application**. No server. No Docker. No config.
 
  ```
- ┌─────────────────────────────────────────────────────────────────────────────┐
- rust-kgdb CORE ENGINE
-
- ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐
- GraphDB GraphFrame Embeddings Datalog
- (SPARQL) (Analytics) (HNSW) (Reasoning)
- 449ns PageRank 16ms/10K Semi-naive
- └─────────────┘ └─────────────┘ └─────────────┘ └─────────────┘
-
- Storage: InMemory | RocksDB | LMDB Standards: SPARQL 1.1 | RDF 1.2
- Memory: 24 bytes/triple Compliance: SHACL | PROV | OWL 2 RL
- └─────────────────────────────────────────────────────────────────────────────┘
+ +-----------------------------------------------------------------------------+
+ | rust-kgdb CORE ENGINE |
+ | |
+ | +-------------+ +-------------+ +-------------+ +-------------+ |
+ | | GraphDB | | GraphFrame | | Embeddings | | Datalog | |
+ | | (SPARQL) | | (Analytics) | | (HNSW) | | (Reasoning) | |
+ | | 449ns | | PageRank | | 16ms/10K | | Semi-naive | |
+ | +-------------+ +-------------+ +-------------+ +-------------+ |
+ | |
+ | Storage: InMemory | RocksDB | LMDB Standards: SPARQL 1.1 | RDF 1.2 |
+ | Memory: 24 bytes/triple Compliance: SHACL | PROV | OWL 2 RL |
+ +-----------------------------------------------------------------------------+
  ```
 
  **Performance (Verified on LUBM benchmark):**
@@ -134,18 +132,18 @@ A high-performance RDF/SPARQL database that runs **inside your application**. No
  An AI agent layer that uses **the database to prevent hallucinations**. The LLM plans, the database executes.
 
  ```
- ┌─────────────────────────────────────────────────────────────────────────────┐
- HYPERMIND AGENT FRAMEWORK
-
- ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐
- LLMPlanner WasmSandbox ProofDAG Memory
- (Claude/GPT) (Security) (Audit) (Hypergraph)
- └─────────────┘ └─────────────┘ └─────────────┘ └─────────────┘
-
- Type Theory: Hindley-Milner types ensure tool composition is valid
- Category Theory: Tools are morphisms (A B) with composition laws
- Proof Theory: Every execution produces cryptographic audit trail
- └─────────────────────────────────────────────────────────────────────────────┘
+ +-----------------------------------------------------------------------------+
+ | HYPERMIND AGENT FRAMEWORK |
+ | |
+ | +-------------+ +-------------+ +-------------+ +-------------+ |
+ | | LLMPlanner | | WasmSandbox | | ProofDAG | | Memory | |
+ | | (Claude/GPT)| | (Security) | | (Audit) | | (Hypergraph)| |
+ | +-------------+ +-------------+ +-------------+ +-------------+ |
+ | |
+ | Type Theory: Hindley-Milner types ensure tool composition is valid |
+ | Category Theory: Tools are morphisms (A -> B) with composition laws |
+ | Proof Theory: Every execution produces cryptographic audit trail |
+ +-----------------------------------------------------------------------------+
  ```
 
  **Agent Accuracy (LUBM Benchmark - 14 Queries, 3,272 Triples):**
@@ -217,25 +215,25 @@ Traditional AI tool calling (OpenAI Functions, LangChain Tools) has fundamental
 
  **The Traditional Approach:**
  ```
- LLM generates JSON Runtime validates schema Tool executes Hope it works
+ LLM generates JSON -> Runtime validates schema -> Tool executes -> Hope it works
  ```
 
  1. **Schema is decorative.** The LLM sees a JSON schema and tries to match it. No guarantee outputs are correct types.
- 2. **Composition is ad-hoc.** Chain Tool A Tool B? Pray that A's output format happens to match B's input.
+ 2. **Composition is ad-hoc.** Chain Tool A -> Tool B? Pray that A's output format happens to match B's input.
  3. **Errors happen at runtime.** You find out a tool chain is broken when a user hits it in production.
  4. **No mathematical guarantees.** "It usually works" is the best you get.
 
  **Our Approach: Tools as Typed Morphisms**
  ```
  Tools are arrows in a category:
- kg.sparql.query: Query BindingSet
- kg.motif.find: Pattern Matches
- kg.embeddings.search: EntityId SimilarEntities
+ kg.sparql.query: Query -> BindingSet
+ kg.motif.find: Pattern -> Matches
+ kg.embeddings.search: EntityId -> SimilarEntities
 
  Composition is verified:
- f: A B
- g: B C
- g f: A C Compiles only if types match
+ f: A -> B
+ g: B -> C
+ g o f: A -> C [x] Compiles only if types match
 
  Errors caught at plan time, not runtime.
  ```
@@ -260,6 +258,57 @@ The type system *guarantees* a tool that outputs `RiskScore` produces a valid ri
  **The Insight:** Category theory isn't academic overhead. It's the same math that makes your database transactions safe (ACID = category theory applied to data). We apply it to tool composition.
 
 
+ **Trust Model: Proxied Execution**
+
+ Traditional tool calling trusts the LLM output completely:
+ ```
+ LLM -> Tool (direct execution) -> Result
+ ```
+
+ The LLM decides what to execute. The tool runs it blindly. This is why prompt injection attacks work - the LLM's output *is* the program.
+
+ **Our approach: Agent -> Proxy -> Sandbox -> Tool**
+ ```
+ +---------------------------------------------------------------------+
+ | Agent Request: "Find suspicious claims" |
+ +----------------------------+----------------------------------------+
+ |
+ v
+ +---------------------------------------------------------------------+
+ | LLMPlanner: Generates tool call plan |
+ | -> kg.sparql.query(pattern) |
+ | -> kg.datalog.infer(rules) |
+ +----------------------------+----------------------------------------+
+ | Plan (NOT executed yet)
+ v
+ +---------------------------------------------------------------------+
+ | HyperAgentProxy: Validates plan against capabilities |
+ | [x] Does agent have ReadKG capability? Yes |
+ | [x] Is query schema-valid? Yes |
+ | [x] Are all types correct? Yes |
+ | [ ] Blocked: WriteKG not in capability set |
+ +----------------------------+----------------------------------------+
+ | Validated plan only
+ v
+ +---------------------------------------------------------------------+
+ | WasmSandbox: Executes with resource limits |
+ | * Fuel metering: 1M operations max |
+ | * Memory cap: 64MB |
+ | * Capability enforcement: Cannot exceed granted permissions |
+ +----------------------------+----------------------------------------+
+ | Execution with audit
+ v
+ +---------------------------------------------------------------------+
+ | ProofDAG: Records execution witness |
+ | * What tool ran |
+ | * What inputs were used |
+ | * What outputs were produced |
+ | * SHA-256 hash of entire execution |
+ +---------------------------------------------------------------------+
+ ```
+
+ The LLM never executes directly. It proposes. The proxy validates. The sandbox enforces. The proof records. Four independent layers of defense.
+
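The proxy step in that chain boils down to "the plan is data until it passes a capability check." A rough TypeScript sketch with hypothetical names (Capability, PlannedCall, and validatePlan are illustrative, not the package's actual exports):

```typescript
// Illustrative only: the planner's output is inert data; nothing reaches the
// sandbox until every step's required capability is in the granted set.
type Capability = 'ReadKG' | 'WriteKG' | 'Embeddings';

interface PlannedCall {
  tool: string;       // e.g. "kg.sparql.query"
  needs: Capability;  // capability this tool requires
  args: unknown;      // checked separately against the tool's input schema
}

function validatePlan(plan: PlannedCall[], granted: Set<Capability>): PlannedCall[] {
  for (const step of plan) {
    if (!granted.has(step.needs)) {
      throw new Error(`Blocked: ${step.tool} requires ${step.needs}`);
    }
  }
  return plan; // only a fully validated plan is handed on to the sandbox
}

// A read-only agent cannot smuggle in a write, no matter what the LLM emits:
validatePlan(
  [{ tool: 'kg.sparql.query', needs: 'ReadKG', args: { pattern: '?s ?p ?o' } }],
  new Set<Capability>(['ReadKG'])
);
```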
  ---
 
  ## What You Can Do
@@ -335,38 +384,38 @@ console.log(result.evidence);
  ## Architecture: Two Layers
 
  ```
- ┌─────────────────────────────────────────────────────────────────────────────────┐
- YOUR APPLICATION
- (Fraud Detection, Underwriting, Compliance)
- └────────────────────────────────────┬────────────────────────────────────────────┘
-
- ┌────────────────────────────────────▼────────────────────────────────────────────┐
- HYPERMIND AGENT FRAMEWORK (JavaScript)
- ┌────────────────────────────────────────────────────────────────────────────┐
- LLMPlanner: Natural language typed tool pipelines
- WasmSandbox: Capability-based security with fuel metering
- ProofDAG: Cryptographic audit trail (SHA-256)
- MemoryHypergraph: Temporal agent memory with KG integration
- TypeId: Hindley-Milner type system with refinement types
- └────────────────────────────────────────────────────────────────────────────┘
-
- Category Theory: Tools as Morphisms (A B)
- Proof Theory: Every execution has a witness
- └────────────────────────────────────┬────────────────────────────────────────────┘
- NAPI-RS Bindings
- ┌────────────────────────────────────▼────────────────────────────────────────────┐
- RUST CORE ENGINE (Native Performance)
- ┌────────────────────────────────────────────────────────────────────────────┐
- GraphDB RDF/SPARQL quad store 449ns lookups, 24 bytes/triple
- GraphFrame Graph algorithms WCOJ optimal joins, PageRank
- EmbeddingService Vector similarity HNSW index, 1-hop ARCADE cache
- DatalogProgram Rule-based reasoning Semi-naive evaluation
- Pregel BSP graph processing Billion-edge scale
- └────────────────────────────────────────────────────────────────────────────┘
-
- W3C Standards: SPARQL 1.1 (100%) | RDF 1.2 | OWL 2 RL | SHACL | PROV
- Storage Backends: InMemory | RocksDB | LMDB
- └──────────────────────────────────────────────────────────────────────────────────┘
+ +---------------------------------------------------------------------------------+
+ | YOUR APPLICATION |
+ | (Fraud Detection, Underwriting, Compliance) |
+ +------------------------------------+--------------------------------------------+
+ |
+ +------------------------------------v--------------------------------------------+
+ | HYPERMIND AGENT FRAMEWORK (JavaScript) |
+ | +----------------------------------------------------------------------------+ |
+ | | * LLMPlanner: Natural language -> typed tool pipelines | |
+ | | * WasmSandbox: Capability-based security with fuel metering | |
+ | | * ProofDAG: Cryptographic audit trail (SHA-256) | |
+ | | * MemoryHypergraph: Temporal agent memory with KG integration | |
+ | | * TypeId: Hindley-Milner type system with refinement types | |
+ | +----------------------------------------------------------------------------+ |
+ | |
+ | Category Theory: Tools as Morphisms (A -> B) |
+ | Proof Theory: Every execution has a witness |
+ +------------------------------------+--------------------------------------------+
+ | NAPI-RS Bindings
+ +------------------------------------v--------------------------------------------+
+ | RUST CORE ENGINE (Native Performance) |
+ | +----------------------------------------------------------------------------+ |
+ | | GraphDB | RDF/SPARQL quad store | 449ns lookups, 24 bytes/triple|
+ | | GraphFrame | Graph algorithms | WCOJ optimal joins, PageRank |
+ | | EmbeddingService | Vector similarity | HNSW index, 1-hop ARCADE cache|
+ | | DatalogProgram | Rule-based reasoning | Semi-naive evaluation |
+ | | Pregel | BSP graph processing | Billion-edge scale |
+ | +----------------------------------------------------------------------------+ |
+ | |
+ | W3C Standards: SPARQL 1.1 (100%) | RDF 1.2 | OWL 2 RL | SHACL | PROV |
+ | Storage Backends: InMemory | RocksDB | LMDB |
+ +----------------------------------------------------------------------------------+
  ```
 
  ---
@@ -490,15 +539,15 @@ const distances = pregelShortestPaths(graph, 'v0', 100);
  User: "Find all professors"
 
  Vanilla LLM Output:
- ┌───────────────────────────────────────────────────────────────────────┐
- ```sparql
- SELECT ?professor WHERE { ?professor a ub:Faculty . }
- ``` Parser rejects markdown
-
- This query retrieves faculty members.
- Mixed text breaks parsing
- └───────────────────────────────────────────────────────────────────────┘
- Result: ❌ PARSER ERROR - Invalid SPARQL syntax
+ +-----------------------------------------------------------------------+
+ | ```sparql |
+ | SELECT ?professor WHERE { ?professor a ub:Faculty . } |
+ | ``` <- Parser rejects markdown |
+ | |
+ | This query retrieves faculty members. |
+ | ^ Mixed text breaks parsing |
+ +-----------------------------------------------------------------------+
+ Result: FAIL PARSER ERROR - Invalid SPARQL syntax
  ```
 
  **Problems:** (1) Markdown code fences, (2) Wrong class name (Faculty vs Professor), (3) Mixed text
@@ -509,11 +558,11 @@ Result: ❌ PARSER ERROR - Invalid SPARQL syntax
  User: "Find all professors"
 
  HyperMind Output:
- ┌───────────────────────────────────────────────────────────────────────┐
- PREFIX ub: <http://swat.cse.lehigh.edu/onto/univ-bench.owl#>
- SELECT ?professor WHERE { ?professor a ub:Professor . }
- └───────────────────────────────────────────────────────────────────────┘
- Result: ✅ 15 results returned in 2.3ms
+ +-----------------------------------------------------------------------+
+ | PREFIX ub: <http://swat.cse.lehigh.edu/onto/univ-bench.owl#> |
+ | SELECT ?professor WHERE { ?professor a ub:Professor . } |
+ +-----------------------------------------------------------------------+
+ Result: OK 15 results returned in 2.3ms
  ```
 
  **Why it works:**
@@ -521,7 +570,7 @@ Result: ✅ 15 results returned in 2.3ms
  2. **Type-checked** - Query validated before execution
  3. **No text pollution** - Output is pure SPARQL, not markdown
 
- **Accuracy: 0% 86.4%** (LUBM benchmark, 14 queries)
+ **Accuracy: 0% -> 86.4%** (LUBM benchmark, 14 queries)
 
  ### Agent Components
 
@@ -564,10 +613,10 @@ const sandbox = new WasmSandbox({
  });
 
  // All tool calls are:
- // Capability-checked
- // Fuel-metered
- // Memory-bounded
- // Logged for audit
+ // [x] Capability-checked
+ // [x] Fuel-metered
+ // [x] Memory-bounded
+ // [x] Logged for audit
  ```
 
  ### Execution Witness (Audit Trail)
@@ -602,30 +651,30 @@ Most AI agents have amnesia. Ask the same question twice, they start from scratc
  ### Our Solution: Memory Hypergraph
 
  ```
- ┌─────────────────────────────────────────────────────────────────────────────┐
- MEMORY HYPERGRAPH
-
- AGENT MEMORY LAYER
- ┌─────────────┐ ┌─────────────┐ ┌─────────────┐
- Episode:001 Episode:002 Episode:003
- "Fraud ring "Denied "Follow-up
- detected" claim" on P001"
- Dec 10 Dec 12 Dec 15
- └──────┬──────┘ └──────┬──────┘ └──────┬──────┘
-
- └───────────────────┼───────────────────┘
- HyperEdges connect to KG
-
- KNOWLEDGE GRAPH LAYER
- ┌─────────────────────────────────────────────────────────────────────┐
- Provider:P001 ──────▶ Claim:C123 ◀────── Claimant:John
-
-
- riskScore: 0.87 amount: 50000 address: "123 Main"
- └─────────────────────────────────────────────────────────────────────┘
-
- SAME QUAD STORE - Single SPARQL query traverses BOTH!
- └─────────────────────────────────────────────────────────────────────────────┘
+ +-----------------------------------------------------------------------------+
+ | MEMORY HYPERGRAPH |
+ | |
+ | AGENT MEMORY LAYER |
+ | +-------------+ +-------------+ +-------------+ |
+ | | Episode:001 | | Episode:002 | | Episode:003 | |
+ | | "Fraud ring | | "Denied | | "Follow-up | |
+ | | detected" | | claim" | | on P001" | |
+ | | Dec 10 | | Dec 12 | | Dec 15 | |
+ | +------+------+ +------+------+ +------+------+ |
+ | | | | |
+ | +-------------------+-------------------+ |
+ | | HyperEdges connect to KG |
+ | v |
+ | KNOWLEDGE GRAPH LAYER |
+ | +---------------------------------------------------------------------+ |
+ | | Provider:P001 ------> Claim:C123 <------ Claimant:John | |
+ | | | | | | |
+ | | v v v | |
+ | | riskScore: 0.87 amount: 50000 address: "123 Main" | |
+ | +---------------------------------------------------------------------+ |
+ | |
+ | SAME QUAD STORE - Single SPARQL query traverses BOTH! |
+ +-----------------------------------------------------------------------------+
  ```
 
  ### Benchmarked Performance
@@ -647,7 +696,7 @@ const result1 = await agent.call("Analyze claims from Provider P001");
 
  // Second call (different wording): Cache HIT!
  const result2 = await agent.call("Show me P001's claim patterns");
- // Same semantic hash Same result
+ // Same semantic hash -> Same result
  ```
 
  ---
@@ -658,18 +707,18 @@ const result2 = await agent.call("Show me P001's claim patterns");
 
  ```
  Tools are typed arrows:
- kg.sparql.query: Query BindingSet
- kg.motif.find: Pattern Matches
- kg.datalog.apply: Rules InferredFacts
+ kg.sparql.query: Query -> BindingSet
+ kg.motif.find: Pattern -> Matches
+ kg.datalog.apply: Rules -> InferredFacts
 
  Composition is type-checked:
- f: A B
- g: B C
- g f: A C (valid only if B matches)
+ f: A -> B
+ g: B -> C
+ g o f: A -> C (valid only if B matches)
 
  Laws guaranteed:
- Identity: id f = f
- Associativity: (h g) f = h (g f)
+ Identity: id o f = f
+ Associativity: (h o g) o f = h o (g o f)
  ```
 
  **In practice:** The AI can only chain tools where outputs match inputs. Like Lego blocks that must fit.
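The same rule can be sketched in plain TypeScript (illustrative types only; the package's real signatures live in its index.d.ts): compose typechecks exactly when the middle type lines up, which is the `g o f: A -> C` condition above.

```typescript
// Illustrative only: tools as typed functions, composition checked by the compiler.
type Query = { sparql: string };
type BindingSet = { rows: Record<string, string>[] };
type RiskScore = { value: number };

const runQuery = (q: Query): BindingSet => ({ rows: [] });                       // stub for f: Query -> BindingSet
const scoreBindings = (b: BindingSet): RiskScore => ({ value: b.rows.length });  // stub for g: BindingSet -> RiskScore

// g o f: compiles because f's output type matches g's input type.
const compose =
  <A, B, C>(g: (b: B) => C, f: (a: A) => B) =>
  (a: A): C =>
    g(f(a));

const assessRisk = compose(scoreBindings, runQuery); // Query -> RiskScore

// compose(runQuery, scoreBindings) is rejected at compile time: RiskScore is not
// assignable to Query, so the broken chain never ships. Identity and associativity
// hold by construction: compose(f, x => x) behaves like f, and the nesting order
// of compose calls does not change the result.
```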
@@ -1096,8 +1145,8 @@ console.log('Underwriting decisions:', decisions);
  ```javascript
  const db = new GraphDB(baseUri) // Create database
  db.loadTtl(turtle, graphUri) // Load Turtle data
- db.querySelect(sparql) // SELECT query [{bindings}]
- db.queryConstruct(sparql) // CONSTRUCT query triples
+ db.querySelect(sparql) // SELECT query -> [{bindings}]
+ db.queryConstruct(sparql) // CONSTRUCT query -> triples
  db.countTriples() // Total triple count
  db.clear() // Clear all data
  db.getVersion() // SDK version
@@ -1131,7 +1180,7 @@ emb.getNeighborsOut(entityId) // Get outgoing neighbors
  const dl = new DatalogProgram()
  dl.addFact(factJson) // Add fact
  dl.addRule(ruleJson) // Add rule
- evaluateDatalog(dl) // Run evaluation facts JSON
+ evaluateDatalog(dl) // Run evaluation -> facts JSON
  queryDatalog(dl, queryJson) // Query specific predicate
  ```
 
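A usage sketch for the Datalog reference above (TypeScript): the call order follows the listed signatures, but the import path and every JSON payload are assumptions, so check index.d.ts for the real formats.

```typescript
// Call sequence per the reference above; every JSON payload below is an
// assumed shape, not the package's documented format.
import { DatalogProgram, evaluateDatalog, queryDatalog } from 'rust-kgdb';

const dl = new DatalogProgram();

dl.addFact(JSON.stringify({ predicate: 'parent', args: ['alice', 'bob'] }));  // assumed shape
dl.addRule(JSON.stringify({                                                   // assumed shape
  head: { predicate: 'ancestor', args: ['X', 'Y'] },
  body: [{ predicate: 'parent', args: ['X', 'Y'] }],
}));

const allFacts = evaluateDatalog(dl);  // run evaluation -> facts JSON
const ancestors = queryDatalog(        // query a specific predicate
  dl,
  JSON.stringify({ predicate: 'ancestor' })  // assumed shape
);
console.log(allFacts, ancestors);
```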
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "rust-kgdb",
- "version": "0.6.60",
+ "version": "0.6.62",
  "description": "High-performance RDF/SPARQL database with AI agent framework. GraphDB (449ns lookups, 35x faster than RDFox), GraphFrames analytics (PageRank, motifs), Datalog reasoning, HNSW vector embeddings. HyperMindAgent for schema-aware query generation with audit trails. W3C SPARQL 1.1 compliant. Native performance via Rust + NAPI-RS.",
  "main": "index.js",
  "types": "index.d.ts",