rust-kgdb 0.5.11 → 0.5.12

This diff shows the changes between publicly available package versions as published to their respective public registries. It is provided for informational purposes only.
Files changed (3)
  1. package/CHANGELOG.md +10 -0
  2. package/README.md +16 -59
  3. package/package.json +1 -1
package/CHANGELOG.md CHANGED
@@ -2,6 +2,16 @@
 
  All notable changes to the rust-kgdb TypeScript SDK will be documented in this file.
 
+ ## [0.5.12] - 2025-12-15
+
+ ### Benchmark Section Cleanup
+
+ - Removed internal Cargo/Rust implementation details from benchmark documentation
+ - Simplified to focus on WHAT (metrics), WHY (value), and HOW (user-facing commands)
+ - Kept key numbers: 2.78µs lookups, 24 bytes/triple, 86.4% accuracy
+ - Removed: rustc commands, cargo bench paths, crate paths
+ - User-facing: `node hypermind-benchmark.js` for accuracy comparison
+
  ## [0.5.11] - 2025-12-15
 
  ### Documentation Clarification
package/README.md CHANGED
@@ -131,68 +131,25 @@ We don't make claims we can't prove. All measurements use **publicly available,
  - **SP2Bench** - DBLP-based SPARQL performance benchmark
  - **W3C SPARQL 1.1 Conformance Suite** - Official W3C test cases
 
- **Test Environment:**
- - Hardware: Apple Silicon M-series (ARM64), Intel x64
- - Dataset: LUBM(1) - 3,272 triples, LUBM(10) - 32K triples, LUBM(100) - 327K triples
- - Tool: Criterion.rs statistical benchmarking (10,000+ iterations per measurement)
- - Comparison: Apache Jena 4.x, RDFox 7.x under identical conditions
-
- | Metric | Value | Context |
- |--------|-------|---------|
+ | Metric | Value | Why It Matters |
+ |--------|-------|----------------|
  | **Lookup Latency** | 2.78 µs | 35x faster than RDFox |
  | **Memory per Triple** | 24 bytes | 25% more efficient than RDFox |
- | **Bulk Insert** | 146K triples/sec | Competitive with commercial systems |
- | **SPARQL Accuracy** | 86.4% | vs 0% vanilla LLM (LUBM Q1-Q14) |
+ | **Bulk Insert** | 146K triples/sec | Production-ready throughput |
+ | **SPARQL Accuracy** | 86.4% | vs 0% vanilla LLM (LUBM benchmark) |
  | **W3C Compliance** | 100% | Full SPARQL 1.1 + RDF 1.2 |
- | **SIMD Speedup** | 44.5% avg | Range: 9-77% depending on query |
- | **WCOJ Joins** | O(N^(ρ/2)) | Worst-case optimal guaranteed |
- | **Ontology Support** | RDFS + OWL 2 RL | Full reasoning engine |
- | **Test Coverage** | 945+ tests | Production certified |
-
- **Reproducibility:** All benchmarks at `crates/storage/benches/` and `crates/hypergraph/benches/`. Run with `cargo bench --workspace`.
-
- ### Benchmark Methodology
-
- **How we measure performance:**
-
- 1. **LUBM Data Generation**
- ```bash
- # Generate test data (matches official Java UBA generator)
- rustc tools/lubm_generator.rs -O -o tools/lubm_generator
- ./tools/lubm_generator 1 /tmp/lubm_1.nt   # 3,272 triples
- ./tools/lubm_generator 10 /tmp/lubm_10.nt # ~32K triples
- ```
-
- 2. **Storage Benchmarks**
- ```bash
- # Run Criterion benchmarks (statistical analysis, 10K+ samples)
- cargo bench --package storage --bench triple_store_benchmark
-
- # Results include:
- # - Mean, median, standard deviation
- # - Outlier detection
- # - Comparison vs baseline
- ```
-
- 3. **HyperMind Agent Accuracy**
- ```bash
- # Run LUBM benchmark comparing Vanilla LLM vs HyperMind
- node hypermind-benchmark.js
-
- # Tests 12 queries (Easy: 3, Medium: 5, Hard: 4)
- # Measures: Syntax validity, execution success, latency
- ```
-
- 4. **Hardware Requirements**
- - Minimum: 4GB RAM, any x64/ARM64 CPU
- - Recommended: 8GB+ RAM, Apple Silicon or modern x64
- - Benchmarks run on: M2 MacBook Pro (baseline measurements)
-
- 5. **Fair Comparison Conditions**
- - All systems tested with identical LUBM datasets
- - Same SPARQL queries across all systems
- - Cold-start measurements (no warm cache)
- - 10,000+ iterations per measurement for statistical significance
+
+ ### How We Measured
+
+ - **Dataset**: LUBM benchmark (industry standard since 2005)
+ - **Hardware**: Apple Silicon M2 MacBook Pro
+ - **Methodology**: 10,000+ iterations, cold-start, statistical analysis
+ - **Comparison**: Apache Jena 4.x, RDFox 7.x under identical conditions
+
+ **Try it yourself:**
+ ```bash
+ node hypermind-benchmark.js # Compare HyperMind vs Vanilla LLM accuracy
+ ```
 
  ---
 
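The "24 bytes per triple" figure the new README table keeps is consistent with a dictionary-encoded triple stored as three 64-bit term IDs. The sketch below is only an illustration of that arithmetic under that assumption; `EncodedTriple` is a hypothetical type, not rust-kgdb's actual internal layout (which this diff does not show):

```rust
use std::mem::size_of;

/// Hypothetical dictionary-encoded triple: subject, predicate, and object
/// each interned as a 64-bit ID. Illustrative only, not rust-kgdb's real type.
#[repr(C)]
struct EncodedTriple {
    subject: u64,
    predicate: u64,
    object: u64,
}

fn main() {
    // 3 fields × 8 bytes = 24 bytes per triple, matching the README's figure.
    assert_eq!(size_of::<EncodedTriple>(), 24);

    // At 24 bytes/triple, LUBM(100)'s ~327K triples need ~7.8 MB of raw triple storage.
    let bytes = 327_000usize * 24;
    println!("327K triples ≈ {:.1} MB", bytes as f64 / 1e6);
}
```

This kind of fixed-width ID encoding is a common design for RDF stores: it keeps each triple cache-friendly and makes the per-triple cost independent of IRI and literal string lengths, which live once in a separate dictionary.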
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "rust-kgdb",
- "version": "0.5.11",
+ "version": "0.5.12",
  "description": "Production-grade Neuro-Symbolic AI Framework: +86.4% accuracy improvement over vanilla LLMs. High-performance knowledge graph (2.78µs lookups, 35x faster than RDFox). Features fraud detection, underwriting agents, WASM sandbox, type/category/proof theory, and W3C SPARQL 1.1 compliance.",
  "main": "index.js",
  "types": "index.d.ts",