rust-kgdb 0.8.0 → 0.8.2
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/CHANGELOG.md +12 -0
- package/README.md +300 -0
- package/package.json +1 -1
package/CHANGELOG.md
CHANGED

@@ -2,6 +2,18 @@

All notable changes to the rust-kgdb TypeScript SDK will be documented in this file.

## [0.8.1] - 2025-12-21

### Documentation & README Updates

- Added comprehensive ThinkingReasoner documentation section to README
- Updated "What's New" section with v0.8.0 features at top
- Added ThinkingReasoner to Engineering Foundation table
- Expanded derivation chain examples with insurance fraud detection scenario
- Added "The Setup Data" section explaining synthetic ontologies

---

## [0.8.0] - 2025-12-21

### ThinkingReasoner: When AI Shows Its Work
package/README.md
CHANGED

@@ -7,6 +7,65 @@

> **Your knowledge is scattered. Your claims live in Snowflake. Your customer graph sits in Neo4j. Your risk models run on BigQuery. Your compliance docs are in SharePoint. And your AI? It hallucinates because it can't see the full picture.**
>
> rust-kgdb unifies scattered enterprise knowledge into a single queryable graph—with native embeddings, cross-database federation, and AI that generates queries instead of fabricating answers. No hallucinations. Full audit trails. One query across everything.
>
> **What makes it different?** A fully in-memory KGDB with 449ns lookups. Multi-way federated joins across KGDB + Snowflake + BigQuery in a single SQL statement. W3C DCAT/DPROD data cataloging for self-describing data products. And now: **ThinkingReasoner**—deductive AI with proof-carrying outputs where every conclusion has a cryptographic derivation chain.

---

## What's New in v0.8.0

**What if every AI conclusion came with a mathematical proof?**

| Feature | Description | Performance |
|---------|-------------|-------------|
| **ThinkingReasoner** | Generic ontology-driven deductive reasoning engine | 6 rules auto-generated from ontology |
| **Thinking Events** | Append-only event sourcing for AI reasoning steps | Observations, hypotheses, inferences |
| **Proof-Carrying Outputs** | Cryptographic proofs via Curry-Howard correspondence | SHA-256 hash per derivation |
| **Derivation Chain** | Step-by-step reasoning visualization | 7-step trace with premises |
| **Auto-Generated Rules** | Rules from OWL/RDFS properties, not hardcoded | Transitive, symmetric, subclass |

```javascript
const { ThinkingReasoner } = require('rust-kgdb')

// Load YOUR ontology - rules are auto-generated, not hardcoded
const reasoner = new ThinkingReasoner()
reasoner.loadOntology(`
  @prefix ins: <http://insurance.example.org/> .
  @prefix owl: <http://www.w3.org/2002/07/owl#> .

  # This single line auto-generates transitivity rules
  ins:transfers a owl:TransitiveProperty .
`)

// Record observations (ground truth from your data)
reasoner.observe("Alice transfers $10K to Bob", { subject: "alice", predicate: "transfers", object: "bob" })
reasoner.observe("Bob transfers $9.5K to Carol", { subject: "bob", predicate: "transfers", object: "carol" })
reasoner.observe("Carol transfers $9K to Alice", { subject: "carol", predicate: "transfers", object: "alice" })

// Run deduction - derives: alice transfers to carol (transitivity!)
const result = reasoner.deduce()
// result.derivedFacts: 3 new facts
// result.proofs: 3 cryptographic witnesses
// result.derivationChain: step-by-step reasoning trace
```

**The key insight**: The LLM proposes hypotheses. The ThinkingReasoner validates them against your ontology. Only facts with valid proofs become assertions. No hallucinations possible—every conclusion traces back through a derivation chain to ground truth observations.
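Stripped of the SDK, that gate is easy to sketch in plain JavaScript. The function below is illustrative only (`acceptAssertion` is not a rust-kgdb API): a hypothesis is promoted to an assertion only when it carries a proof whose premises are all established facts.

```javascript
// Hypothetical gate (not part of rust-kgdb): promote a hypothesis to an
// assertion only if every premise of its proof is an established fact.
function acceptAssertion(hypothesis, knownFacts) {
  const known = new Set(knownFacts)
  const proof = hypothesis.proof
  return Boolean(proof) && proof.premises.every(p => known.has(p))
}

const facts = ['alice transfers bob', 'bob transfers carol']

// Grounded hypothesis: both premises are observed facts
const grounded = {
  claim: 'alice transfers carol',
  proof: { premises: ['alice transfers bob', 'bob transfers carol'] }
}

// Ungrounded hypothesis: its premise was never observed or derived
const hallucinated = {
  claim: 'alice transfers dave',
  proof: { premises: ['alice transfers dave'] }
}

console.log(acceptAssertion(grounded, facts))     // true
console.log(acceptAssertion(hallucinated, facts)) // false
```

An ungrounded claim is simply never asserted, which is the whole anti-hallucination argument in one predicate.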
```
Derivation Chain (like Claude's thinking, but verifiable):

Step 1: [OBSERVATION] Alice transfers to Bob
Step 2: [OBSERVATION] Bob transfers to Carol
Step 3: [RULE: owl:TransitiveProperty] Alice transfers to Carol
        Premises: [Step 1, Step 2]
        Proof Hash: a3f8c2...
Step 4: [OBSERVATION] Carol transfers to Alice
Step 5: [RULE: circularPayment] Circular payment detected: Alice → Bob → Carol → Alice
        Premises: [Step 1, Step 2, Step 4]
        Confidence: 0.85
```

*See [ThinkingReasoner: Deductive AI](#thinkingreasoner-deductive-ai) for complete documentation.*

---

@@ -207,6 +266,7 @@ At no point does the AI "know" anything. It's a translator—from human intent to formal queries.

| **Analytics** | GraphFrames | PageRank, connected components, triangle count, motif matching |
| **Analytics** | Pregel API | Bulk synchronous parallel graph algorithms |
| **Reasoning** | Datalog Engine | Recursive rule evaluation with fixpoint semantics |
| **Reasoning** | ThinkingReasoner | Ontology-driven deduction with proof-carrying outputs |
| **AI Agent** | HyperMindAgent | Schema-aware SPARQL generation from natural language |
| **AI Agent** | Type System | Hindley-Milner type inference for query validation |
| **AI Agent** | Proof DAG | SHA-256 audit trail for every AI decision |

@@ -1660,6 +1720,246 @@ const agent = new AgentBuilder('scoped-agent')

---

## ThinkingReasoner: Deductive AI

### The Problem: AI That Can't Show Its Work

When a fraud analyst asks your AI: *"Is this circular payment pattern suspicious?"*

What happens today:

- **GPT-4**: "Yes, this appears to be money laundering." (Confidence: high. Evidence: none.)
- **Claude**: "The pattern suggests fraudulent activity." (Sounds authoritative. No proof.)
- **LLaMA**: "Based on typical patterns..." (Based on what exactly?)

Every AI system gives confident answers. None can explain *how* they reached them. None can prove they're correct. None can trace the reasoning chain back to your actual data.

This is the hallucination problem at its core: **AI generates conclusions without derivations.**

### The Solution: Proof-Carrying Outputs

What if every AI conclusion came with a cryptographic proof?

```
Traditional AI:
  Input:  "Is Alice → Bob → Carol → Alice suspicious?"
  Output: "Yes, this is suspicious."  ← UNVERIFIED CLAIM

ThinkingReasoner:
  Input:  "Is Alice → Bob → Carol → Alice suspicious?"
  Output:
    Conclusion: "Circular payment pattern detected"
    Proof Hash: a3f8c2e7...
    Derivation Chain:
      [1] OBSERVATION: Alice transfers to Bob   (fact from database)
      [2] OBSERVATION: Bob transfers to Carol   (fact from database)
      [3] OBSERVATION: Carol transfers to Alice (fact from database)
      [4] RULE: owl:TransitiveProperty → Alice transfers to Carol
      [5] RULE: circularPayment(A,B,C) :- A→B, B→C, C→A
      [6] CONCLUSION: circularPayment(Alice, Bob, Carol) ← VERIFIABLE
```

**The difference**: Every conclusion has a derivation chain. Every derivation step cites its source (observation or rule). Every chain can be replayed to verify correctness. No hallucinations possible.
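Replay can itself be sketched in a few lines. This is an illustrative checker, not the shipped verifier: it accepts a chain only if observations stand alone and every derived step cites exclusively steps established earlier.

```javascript
// Illustrative replay check (not the rust-kgdb verifier): a chain is valid
// when every non-observation step cites only already-established steps.
function replayChain(chain) {
  const established = new Set()
  for (const step of chain) {
    const grounded = step.rule === 'OBSERVATION' ||
      step.premises.every(p => established.has(p))
    if (!grounded) return false // cites a step not yet established
    established.add(step.step)
  }
  return true
}

const valid = [
  { step: 1, rule: 'OBSERVATION', premises: [] },
  { step: 2, rule: 'OBSERVATION', premises: [] },
  { step: 3, rule: 'owl:TransitiveProperty', premises: [1, 2] },
]
const forged = [
  { step: 1, rule: 'owl:TransitiveProperty', premises: [99] }, // phantom premise
]

console.log(replayChain(valid))  // true
console.log(replayChain(forged)) // false
```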
### The Mathematical Foundation

The ThinkingReasoner implements three interconnected theories:

**1. Event Sourcing (Ground Truth)**

```javascript
// Every observation is append-only, immutable, timestamped
reasoner.observe("Alice transfers $10K to Bob", {
  subject: "http://example.org/alice",
  predicate: "http://example.org/transfers",
  object: "http://example.org/bob",
  timestamp: "2025-12-21T10:30:00Z",
  source: "banking-system-export"
})
```

Observations are facts from your systems. They can't be modified. They form the ground truth for all reasoning.
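The append-only discipline can be illustrated with a toy store (hypothetical sketch; the real event store lives in the Rust core): the log exposes append and read but no update or delete, and freezes each event on the way in.

```javascript
// Minimal append-only event log (illustrative, not the SDK internals).
class EventLog {
  #events = []
  append(event) {
    // Freeze so a recorded event can never be mutated afterwards
    const entry = Object.freeze({ ...event, seq: this.#events.length })
    this.#events.push(entry)
    return entry.seq
  }
  read() {
    return [...this.#events] // return a copy; the log itself stays intact
  }
}

const log = new EventLog()
log.append({ type: 'observation', fact: 'alice transfers bob' })
log.append({ type: 'observation', fact: 'bob transfers carol' })
console.log(log.read().length) // 2
```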
**2. Ontology-Driven Rules (No Hardcoding)**

```javascript
// Load YOUR ontology - rules are auto-generated
reasoner.loadOntology(`
  @prefix : <http://example.org/> .
  @prefix owl: <http://www.w3.org/2002/07/owl#> .
  @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

  # This single line generates: transfers(A,C) :- transfers(A,B), transfers(B,C)
  :transfers a owl:TransitiveProperty .

  # This generates: relatedTo(B,A) :- relatedTo(A,B)
  :relatedTo a owl:SymmetricProperty .

  # This generates: Claim(X) :- FraudulentClaim(X)
  :FraudulentClaim rdfs:subClassOf :Claim .
`)
```

Rules aren't hardcoded in your application. They're derived from OWL/RDFS properties in your ontology. Change the ontology, change the rules. No code changes required.
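The axiom-to-rule expansion is mechanical, which is why no hardcoding is needed. A sketch of that expansion (illustrative only; the actual generator runs inside the Rust engine, and `rulesFromAxiom` is not a rust-kgdb API):

```javascript
// Hypothetical sketch of axiom → rule expansion, mirroring the comments above.
function rulesFromAxiom(property, axiom) {
  switch (axiom) {
    case 'owl:TransitiveProperty':
      return [`${property}(A,C) :- ${property}(A,B), ${property}(B,C)`]
    case 'owl:SymmetricProperty':
      return [`${property}(B,A) :- ${property}(A,B)`]
    default:
      return [] // unrecognized axioms generate nothing
  }
}

console.log(rulesFromAxiom('transfers', 'owl:TransitiveProperty'))
// [ 'transfers(A,C) :- transfers(A,B), transfers(B,C)' ]
console.log(rulesFromAxiom('relatedTo', 'owl:SymmetricProperty'))
// [ 'relatedTo(B,A) :- relatedTo(A,B)' ]
```

Because the mapping is a pure function of the ontology triples, swapping the ontology swaps the rule set with no application change.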
**3. Curry-Howard Correspondence (Proofs as Programs)**

```
Every assertion A has a proof P such that:
- P.conclusion = A
- P.premises ⊆ (Observations ∪ DerivedFacts)
- P.rules ⊆ OntologyRules
- P.hash = SHA-256(P.conclusion, P.premises, P.rules)
```

This is the Curry-Howard correspondence: proofs are programs, propositions are types. An assertion without a proof is a type without an inhabitant—it doesn't exist.
### Complete API Example

```javascript
const { ThinkingReasoner } = require('rust-kgdb')

// Create reasoner for fraud detection domain
const reasoner = new ThinkingReasoner()

// Load insurance/fraud ontology
reasoner.loadOntology(`
  @prefix ins: <http://insurance.example.org/> .
  @prefix owl: <http://www.w3.org/2002/07/owl#> .
  @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

  # Transitivity: if A transfers to B, and B transfers to C, then A transfers to C
  ins:transfers a owl:TransitiveProperty .

  # Symmetry: if A is related to B, then B is related to A
  ins:relatedTo a owl:SymmetricProperty .

  # Hierarchy: FraudulentClaim is a subclass of Claim
  ins:FraudulentClaim rdfs:subClassOf ins:Claim .
  ins:HighRiskClaim rdfs:subClassOf ins:Claim .
`)

// Record observations from your data sources
const obs1 = reasoner.observe("Alice transfers $10K to Bob", {
  subject: "ins:alice",
  predicate: "ins:transfers",
  object: "ins:bob"
})

const obs2 = reasoner.observe("Bob transfers $9.5K to Carol", {
  subject: "ins:bob",
  predicate: "ins:transfers",
  object: "ins:carol"
})

const obs3 = reasoner.observe("Carol transfers $9K to Alice", {
  subject: "ins:carol",
  predicate: "ins:transfers",
  object: "ins:alice"
})

// Record a hypothesis (LLM-proposed, needs verification)
const hyp = reasoner.hypothesize(
  "Circular payment fraud detected",
  {
    subject: "ins:alice",
    predicate: "ins:suspectedFraud",
    object: "circular-payment-pattern",
    confidence: 0.85
  },
  [obs1, obs2, obs3] // Supporting observations
)

// Run deduction - validates hypothesis against ontology rules
const result = reasoner.deduce()

console.log(`Rules fired: ${result.rulesFired}`)            // 6
console.log(`Derived facts: ${result.derivedFacts.length}`) // 3
console.log(`Proofs generated: ${result.proofs.length}`)    // 3

// Get the thinking graph (for visualization)
const graph = reasoner.getThinkingGraph()

console.log(`Nodes: ${graph.nodes.length}`) // Events + Facts
console.log(`Edges: ${graph.edges.length}`) // Causal relationships
console.log(`Derivation steps: ${graph.derivationChain.length}`) // 7

// Display derivation chain (like Claude's thinking, but verifiable)
for (const step of graph.derivationChain) {
  console.log(`Step ${step.step}: [${step.rule}] ${step.conclusion}`)
  if (step.premises.length > 0) {
    console.log(`  Premises: ${step.premises.join(', ')}`)
  }
}
```
### Output: Derivation Chain

```
[1] Creating ThinkingContext...
    Context ID: fraud-detection-session
    Actor ID: fraud-agent-001

[2] Loading insurance ontology...
    Auto-generated 6 rules from ontology
    - Transitivity rules for ins:transfers
    - Symmetry rules for ins:relatedTo
    - SubClass inference rules

[3] Recording observations...
    Observation 1: Alice → Bob (ID: obs_001)
    Observation 2: Bob → Carol (ID: obs_002)
    Observation 3: Carol → Alice (ID: obs_003)

[4] Recording hypothesis...
    Hypothesis recorded (ID: hyp_001)
    Confidence: 0.85
    Based on observations: [obs_001, obs_002, obs_003]

[5] Running deduction...
    Deduction complete!
    - Rules fired: 6
    - Iterations: 3
    - Derived facts: 3
    - Proofs generated: 3

[6] Derivation Chain:
    Step 1: [OBSERVATION] ins:alice ins:transfers ins:bob
    Step 2: [OBSERVATION] ins:bob ins:transfers ins:carol
    Step 3: [owl:TransitiveProperty] ins:alice ins:transfers ins:carol
            Premises: [Step 1, Step 2]
    Step 4: [OBSERVATION] ins:carol ins:transfers ins:alice
    Step 5: [owl:TransitiveProperty] ins:bob ins:transfers ins:alice
            Premises: [Step 2, Step 4]
    Step 6: [owl:TransitiveProperty] ins:alice ins:transfers ins:alice
            Premises: [Step 1, Step 5]
    Step 7: [circularPayment] Circular payment detected
            Premises: [Step 1, Step 2, Step 4]
            Confidence: 0.85
            Proof Hash: a3f8c2e7...
```
### Why This Matters

| Capability | Traditional AI | ThinkingReasoner |
|------------|----------------|------------------|
| **Confidence scores** | Made up by LLM | Derived from proof chain |
| **Explanation** | "Based on patterns..." | Step-by-step derivation |
| **Verification** | Trust the AI | Replay the proof |
| **Audit trail** | None | SHA-256 cryptographic hash |
| **Rule changes** | Retrain model | Update ontology |
| **Domain adaptation** | Fine-tuning ($$$) | Load new ontology (free) |

### The Setup Data

The ThinkingReasoner demo uses synthetic inline ontologies:

**Insurance Ontology** (fraud detection):
- `ins:transfers` as `owl:TransitiveProperty` (payment chain detection)
- `ins:relatedTo` as `owl:SymmetricProperty` (relationship inference)
- `ins:FraudulentClaim rdfs:subClassOf ins:Claim` (type hierarchy)

**Underwriting Ontology** (risk assessment):
- `uw:HighRiskApplicant rdfs:subClassOf uw:Applicant`
- `uw:employs` as `owl:TransitiveProperty` (employment verification)
- `uw:hasRiskIndicator` with domain/range constraints

No external datasets required. Load your own ontology for your domain.

---
## HyperFederate: Cross-Database Federation

### The Real Problem: Your Knowledge Lives Everywhere
package/package.json
CHANGED

@@ -1,6 +1,6 @@
 {
   "name": "rust-kgdb",
-  "version": "0.8.0",
+  "version": "0.8.2",
   "description": "High-performance RDF/SPARQL database with AI agent framework and cross-database federation. GraphDB (449ns lookups, 5-11x faster than RDFox), HyperFederate (KGDB + Snowflake + BigQuery), GraphFrames analytics, Datalog reasoning, HNSW vector embeddings. HyperMindAgent for schema-aware query generation with audit trails. W3C SPARQL 1.1 compliant. Native performance via Rust + NAPI-RS.",
   "main": "index.js",
   "types": "index.d.ts",