ruvector 0.1.35 → 0.1.37

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (39)
  1. package/README.md +459 -1571
  2. package/bin/ruvector.js +1150 -0
  3. package/dist/index.d.mts +95 -0
  4. package/dist/index.d.ts +82 -87
  5. package/dist/index.js +89 -169
  6. package/dist/index.mjs +5 -0
  7. package/package.json +93 -41
  8. package/.claude-flow/metrics/agent-metrics.json +0 -1
  9. package/.claude-flow/metrics/performance.json +0 -87
  10. package/.claude-flow/metrics/task-metrics.json +0 -10
  11. package/PACKAGE_SUMMARY.md +0 -409
  12. package/bin/cli.js +0 -2094
  13. package/dist/core/agentdb-fast.d.ts +0 -149
  14. package/dist/core/agentdb-fast.d.ts.map +0 -1
  15. package/dist/core/agentdb-fast.js +0 -301
  16. package/dist/core/attention-fallbacks.d.ts +0 -221
  17. package/dist/core/attention-fallbacks.d.ts.map +0 -1
  18. package/dist/core/attention-fallbacks.js +0 -361
  19. package/dist/core/gnn-wrapper.d.ts +0 -143
  20. package/dist/core/gnn-wrapper.d.ts.map +0 -1
  21. package/dist/core/gnn-wrapper.js +0 -213
  22. package/dist/core/index.d.ts +0 -15
  23. package/dist/core/index.d.ts.map +0 -1
  24. package/dist/core/index.js +0 -39
  25. package/dist/core/sona-wrapper.d.ts +0 -215
  26. package/dist/core/sona-wrapper.d.ts.map +0 -1
  27. package/dist/core/sona-wrapper.js +0 -258
  28. package/dist/index.d.ts.map +0 -1
  29. package/dist/services/embedding-service.d.ts +0 -136
  30. package/dist/services/embedding-service.d.ts.map +0 -1
  31. package/dist/services/embedding-service.js +0 -294
  32. package/dist/services/index.d.ts +0 -6
  33. package/dist/services/index.d.ts.map +0 -1
  34. package/dist/services/index.js +0 -26
  35. package/dist/types.d.ts +0 -145
  36. package/dist/types.d.ts.map +0 -1
  37. package/dist/types.js +0 -2
  38. package/examples/api-usage.js +0 -211
  39. package/examples/cli-demo.sh +0 -85
package/README.md CHANGED
@@ -1,1739 +1,627 @@
1
- # ruvector
1
+ # RuVector
2
2
 
3
- [![npm version](https://badge.fury.io/js/ruvector.svg)](https://www.npmjs.com/package/ruvector)
4
- [![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](https://opensource.org/licenses/MIT)
5
- [![Node Version](https://img.shields.io/node/v/ruvector)](https://nodejs.org)
6
- [![Downloads](https://img.shields.io/npm/dm/ruvector)](https://www.npmjs.com/package/ruvector)
7
- [![Build Status](https://img.shields.io/badge/build-passing-brightgreen.svg)](https://github.com/ruvnet/ruvector)
8
- [![Performance](https://img.shields.io/badge/latency-<0.5ms-green.svg)](https://github.com/ruvnet/ruvector)
9
- [![GitHub Stars](https://img.shields.io/github/stars/ruvnet/ruvector?style=social)](https://github.com/ruvnet/ruvector)
10
-
11
- **The fastest vector database for Node.jsβ€”built in Rust, runs everywhere**
12
-
13
- Ruvector is a next-generation vector database that brings **enterprise-grade semantic search** to Node.js applications. Unlike cloud-only solutions or Python-first databases, Ruvector is designed specifically for JavaScript/TypeScript developers who need **blazing-fast vector similarity search** without the complexity of external services.
14
-
15
- > πŸš€ **Sub-millisecond queries** β€’ 🎯 **52,000+ inserts/sec** β€’ πŸ’Ύ **~50 bytes per vector** β€’ 🌍 **Runs anywhere**
16
-
17
- Built by [rUv](https://ruv.io) with production-grade Rust performance and intelligent platform detectionβ€”**automatically uses native bindings when available, falls back to WebAssembly when needed**.
18
-
19
- 🌐 **[Visit ruv.io](https://ruv.io)** | πŸ“¦ **[GitHub](https://github.com/ruvnet/ruvector)** | πŸ“š **[Documentation](https://github.com/ruvnet/ruvector/tree/main/docs)**
20
-
21
- ---
22
-
23
- ## 🌟 Why Ruvector?
24
-
25
- ### The Problem with Existing Vector Databases
26
-
27
- Most vector databases force you to choose between three painful trade-offs:
28
-
29
- 1. **Cloud-Only Services** (Pinecone, Weaviate Cloud) - Expensive, vendor lock-in, latency issues, API rate limits
30
- 2. **Python-First Solutions** (ChromaDB, Faiss) - Poor Node.js support, require separate Python processes
31
- 3. **Self-Hosted Complexity** (Milvus, Qdrant) - Heavy infrastructure, Docker orchestration, operational overhead
32
-
33
- **Ruvector eliminates these trade-offs.**
34
-
35
- ### The Ruvector Advantage
36
-
37
- Ruvector is purpose-built for **modern JavaScript/TypeScript applications** that need vector search:
38
-
39
- 🎯 **Native Node.js Integration**
40
- - Drop-in npm packageβ€”no Docker, no Python, no external services
41
- - Full TypeScript support with complete type definitions
42
- - Automatic platform detection with native Rust bindings
43
- - Seamless WebAssembly fallback for universal compatibility
3
+ [![MIT License](https://img.shields.io/badge/License-MIT-blue.svg)](https://opensource.org/licenses/MIT)
4
+ [![npm](https://img.shields.io/npm/v/ruvector.svg)](https://www.npmjs.com/package/ruvector)
5
+ [![npm downloads](https://img.shields.io/npm/dm/ruvector.svg)](https://www.npmjs.com/package/ruvector)
6
+ [![TypeScript](https://img.shields.io/badge/TypeScript-Ready-blue.svg)](https://www.typescriptlang.org/)
7
+ [![Node.js](https://img.shields.io/badge/Node.js-16+-green.svg)](https://nodejs.org/)
44
8
 
45
- ⚑ **Production-Grade Performance**
46
- - **52,000+ inserts/second** with native Rust (10x faster than Python alternatives)
47
- - **<0.5ms query latency** with HNSW indexing and SIMD optimizations
48
- - **~50 bytes per vector** with advanced memory optimization
49
- - Scales from edge devices to millions of vectors
9
+ **A distributed vector database that learns.** Store embeddings, query with Cypher, scale horizontally, and let the index improve itself through Graph Neural Networks.
50
10
 
51
- 🧠 **Built for AI Applications**
52
- - Optimized for LLM embeddings (OpenAI, Cohere, Hugging Face)
53
- - Perfect for RAG (Retrieval-Augmented Generation) systems
54
- - Agent memory and semantic caching
55
- - Real-time recommendation engines
11
+ ```bash
12
+ npx ruvector
13
+ ```
56
14
 
57
- 🌍 **Universal Deployment**
58
- - **Linux, macOS, Windows** with native performance
59
- - **Browser support** via WebAssembly (experimental)
60
- - **Edge computing** and serverless environments
61
- - **Alpine Linux** and non-glibc systems supported
15
+ > **All-in-One Package**: The `ruvector` package includes everything (vector search, graph queries, GNN layers, AI agent routing, and WASM support). No additional packages needed.
62
16
 
63
- πŸ’° **Zero Operational Costs**
64
- - No cloud API fees or usage limits
65
- - No infrastructure to manage
66
- - No separate database servers
67
- - Open source MIT license
17
+ ## Why RuVector?
68
18
 
69
- ### Key Advantages
19
+ Traditional vector databases just store and search. When you ask "find similar items," they return results but never get smarter. They can't handle complex relationships. They don't optimize your AI costs.
70
20
 
71
- - ⚑ **Blazing Fast**: <0.5ms p50 latency with native Rust, 10-50ms with WASM fallback
72
- - 🎯 **Automatic Platform Detection**: Uses native when available, falls back to WASM seamlessly
73
- - 🧠 **AI-Native**: Built specifically for embeddings, RAG, semantic search, and agent memory
74
- - πŸ”§ **CLI Tools Included**: Full command-line interface for database management
75
- - 🌍 **Universal Deployment**: Works on all platformsβ€”Linux, macOS, Windows, even browsers
76
- - πŸ’Ύ **Memory Efficient**: ~50 bytes per vector with advanced quantization
77
- - πŸš€ **Production Ready**: Battle-tested algorithms with comprehensive benchmarks
78
- - πŸ”“ **Open Source**: MIT licensed, community-driven
21
+ **RuVector is built for the agentic AI era:**
79
22
 
80
- ## πŸš€ Quick Start Tutorial
23
+ | Challenge | RuVector Solution |
24
+ |-----------|-------------------|
25
+ | RAG retrieval quality plateaus | **Self-learning GNN** improves results over time |
26
+ | Knowledge graphs need separate DB | **Cypher queries** built-in (Neo4j syntax) |
27
+ | LLM costs spiral out of control | **AI Router** sends simple queries to cheaper models |
28
+ | Memory usage explodes at scale | **Adaptive compression** (2-32x reduction) |
29
+ | Can't run AI in the browser | **Full WASM support** for client-side inference |
81
30
 
82
- ### Step 1: Installation
31
+ ## Quick Start
83
32
 
84
- Install Ruvector with a single npm command:
33
+ ### Installation
85
34
 
86
35
  ```bash
36
+ # Install the package
87
37
  npm install ruvector
88
- ```
89
38
 
90
- **What happens during installation:**
91
- - npm automatically detects your platform (Linux, macOS, Windows)
92
- - Downloads the correct native binary for maximum performance
93
- - Falls back to WebAssembly if native binaries aren't available
94
- - No additional setup, Docker, or external services required
95
-
96
- **Verify installation:**
97
- ```bash
98
- npx ruvector info
99
- ```
39
+ # Or try instantly without installing
40
+ npx ruvector
100
41
 
101
- You should see your platform and implementation type (native Rust or WASM fallback).
42
+ # With yarn
43
+ yarn add ruvector
102
44
 
103
- ### Step 2: Your First Vector Database
45
+ # With pnpm
46
+ pnpm add ruvector
47
+ ```
104
48
 
105
- Let's create a simple vector database and perform basic operations. This example demonstrates the complete CRUD (Create, Read, Update, Delete) workflow:
49
+ ### Basic Vector Search
106
50
 
107
51
  ```javascript
108
- const { VectorDb } = require('ruvector');
109
-
110
- async function tutorial() {
111
- // Step 2.1: Create a new vector database
112
- // The 'dimensions' parameter must match your embedding model
113
- // Common sizes: 128, 384 (sentence-transformers), 768 (BERT), 1536 (OpenAI)
114
- const db = new VectorDb({
115
- dimensions: 128, // Vector size - MUST match your embeddings
116
- maxElements: 10000, // Maximum vectors (can grow automatically)
117
- storagePath: './my-vectors.db' // Persist to disk (omit for in-memory)
118
- });
119
-
120
- console.log('βœ… Database created successfully');
121
-
122
- // Step 2.2: Insert vectors
123
- // In real applications, these would come from an embedding model
124
- const documents = [
125
- { id: 'doc1', text: 'Artificial intelligence and machine learning' },
126
- { id: 'doc2', text: 'Deep learning neural networks' },
127
- { id: 'doc3', text: 'Natural language processing' },
128
- ];
129
-
130
- for (const doc of documents) {
131
- // Generate random vector for demonstration
132
- // In production: use OpenAI, Cohere, or sentence-transformers
133
- const vector = new Float32Array(128).map(() => Math.random());
134
-
135
- await db.insert({
136
- id: doc.id,
137
- vector: vector,
138
- metadata: {
139
- text: doc.text,
140
- timestamp: Date.now(),
141
- category: 'AI'
142
- }
143
- });
144
-
145
- console.log(`βœ… Inserted: ${doc.id}`);
146
- }
52
+ const { VectorDB } = require('ruvector');
147
53
 
148
- // Step 2.3: Search for similar vectors
149
- // Create a query vector (in production, this would be from your search query)
150
- const queryVector = new Float32Array(128).map(() => Math.random());
54
+ // Create a vector database (384 dimensions, e.g. all-MiniLM-L6-v2 embeddings)
55
+ const db = new VectorDB(384);
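// Illustrative setup: `embedding1` and `queryEmbedding` used below are assumed
// to be 384-element numeric arrays from your embedding model; random vectors
// work as a stand-in for a quick local smoke test.
const randomEmbedding = (dim = 384) =>
  Array.from({ length: dim }, () => Math.random() * 2 - 1);
const embedding1 = randomEmbedding();
const queryEmbedding = randomEmbedding();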
151
56
 
152
- const results = await db.search({
153
- vector: queryVector,
154
- k: 5, // Return top 5 most similar vectors
155
- threshold: 0.7 // Only return results with similarity > 0.7
156
- });
157
-
158
- console.log('\nπŸ” Search Results:');
159
- results.forEach((result, index) => {
160
- console.log(`${index + 1}. ${result.id} - Score: ${result.score.toFixed(3)}`);
161
- console.log(` Text: ${result.metadata.text}`);
162
- });
163
-
164
- // Step 2.4: Retrieve a specific vector
165
- const retrieved = await db.get('doc1');
166
- if (retrieved) {
167
- console.log('\nπŸ“„ Retrieved document:', retrieved.metadata.text);
168
- }
169
-
170
- // Step 2.5: Get database statistics
171
- const count = await db.len();
172
- console.log(`\nπŸ“Š Total vectors in database: ${count}`);
173
-
174
- // Step 2.6: Delete a vector
175
- const deleted = await db.delete('doc1');
176
- console.log(`\nπŸ—‘οΈ Deleted doc1: ${deleted ? 'Success' : 'Not found'}`);
177
-
178
- // Final count
179
- const finalCount = await db.len();
180
- console.log(`πŸ“Š Final count: ${finalCount}`);
181
- }
57
+ // Insert vectors with metadata
58
+ await db.insert('doc1', embedding1, {
59
+ title: 'Introduction to AI',
60
+ category: 'tech',
61
+ date: '2024-01-15'
62
+ });
182
63
 
183
- // Run the tutorial
184
- tutorial().catch(console.error);
185
- ```
64
+ // Semantic search
65
+ const results = await db.search(queryEmbedding, 10);
186
66
 
187
- **Expected Output:**
67
+ // Filter by metadata
68
+ const filtered = await db.search(queryEmbedding, 10, {
69
+ category: 'tech',
70
+ date: { $gte: '2024-01-01' }
71
+ });
188
72
  ```
189
- βœ… Database created successfully
190
- βœ… Inserted: doc1
191
- βœ… Inserted: doc2
192
- βœ… Inserted: doc3
193
-
194
- πŸ” Search Results:
195
- 1. doc2 - Score: 0.892
196
- Text: Deep learning neural networks
197
- 2. doc1 - Score: 0.856
198
- Text: Artificial intelligence and machine learning
199
- 3. doc3 - Score: 0.801
200
- Text: Natural language processing
201
-
202
- πŸ“„ Retrieved document: Artificial intelligence and machine learning
203
-
204
- πŸ“Š Total vectors in database: 3
205
73
 
206
- πŸ—‘οΈ Deleted doc1: Success
207
- πŸ“Š Final count: 2
208
- ```
74
+ ### RAG (Retrieval-Augmented Generation)
209
75
 
210
- ### Step 3: TypeScript Tutorial
76
+ ```javascript
77
+ const { VectorDB } = require('ruvector');
78
+ const OpenAI = require('openai');
211
79
 
212
- Ruvector provides full TypeScript support with complete type safety. Here's how to use it:
80
+ const db = new VectorDB(1536); // text-embedding-3-small dimensions
81
+ const openai = new OpenAI();
213
82
 
214
- ```typescript
215
- import { VectorDb, VectorEntry, SearchQuery, SearchResult } from 'ruvector';
216
-
217
- // Step 3.1: Define your custom metadata type
218
- interface DocumentMetadata {
219
- title: string;
220
- content: string;
221
- author: string;
222
- date: Date;
223
- tags: string[];
83
+ // Index your documents
84
+ async function indexDocument(doc) {
85
+ const embedding = await openai.embeddings.create({
86
+ model: 'text-embedding-3-small',
87
+ input: doc.content
88
+ });
89
+ await db.insert(doc.id, embedding.data[0].embedding, {
90
+ title: doc.title,
91
+ content: doc.content
92
+ });
224
93
  }
225
94
 
226
- async function typescriptTutorial() {
227
- // Step 3.2: Create typed database
228
- const db = new VectorDb({
229
- dimensions: 384, // sentence-transformers/all-MiniLM-L6-v2
230
- maxElements: 10000,
231
- storagePath: './typed-vectors.db'
95
+ // RAG query
96
+ async function ragQuery(question) {
97
+ // 1. Embed the question
98
+ const questionEmb = await openai.embeddings.create({
99
+ model: 'text-embedding-3-small',
100
+ input: question
232
101
  });
233
102
 
234
- // Step 3.3: Type-safe vector entry
235
- const entry: VectorEntry<DocumentMetadata> = {
236
- id: 'article-001',
237
- vector: new Float32Array(384), // Your embedding here
238
- metadata: {
239
- title: 'Introduction to Vector Databases',
240
- content: 'Vector databases enable semantic search...',
241
- author: 'Jane Doe',
242
- date: new Date('2024-01-15'),
243
- tags: ['database', 'AI', 'search']
244
- }
245
- };
246
-
247
- // Step 3.4: Insert with type checking
248
- await db.insert(entry);
249
- console.log('βœ… Inserted typed document');
103
+ // 2. Retrieve relevant context
104
+ const context = await db.search(questionEmb.data[0].embedding, 5);
250
105
 
251
- // Step 3.5: Type-safe search
252
- const query: SearchQuery = {
253
- vector: new Float32Array(384),
254
- k: 10,
255
- threshold: 0.8
256
- };
106
+ // 3. Generate answer with context
107
+ const response = await openai.chat.completions.create({
108
+ model: 'gpt-4-turbo',
109
+ messages: [{
110
+ role: 'user',
111
+ content: `Context:\n${context.map(c => c.metadata.content).join('\n\n')}
257
112
 
258
- // Step 3.6: Fully typed results
259
- const results: SearchResult<DocumentMetadata>[] = await db.search(query);
260
-
261
- // TypeScript knows the exact shape of metadata
262
- results.forEach(result => {
263
- console.log(`Title: ${result.metadata.title}`);
264
- console.log(`Author: ${result.metadata.author}`);
265
- console.log(`Tags: ${result.metadata.tags.join(', ')}`);
266
- console.log(`Similarity: ${result.score.toFixed(3)}\n`);
113
+ Question: ${question}
114
+ Answer based only on the context above:`
115
+ }]
267
116
  });
268
117
 
269
- // Step 3.7: Type-safe retrieval
270
- const doc = await db.get('article-001');
271
- if (doc) {
272
- // TypeScript autocomplete works perfectly here
273
- const publishYear = doc.metadata.date.getFullYear();
274
- console.log(`Published in ${publishYear}`);
275
- }
118
+ return response.choices[0].message.content;
276
119
  }
277
-
278
- typescriptTutorial().catch(console.error);
279
120
  ```
280
121
 
281
- **TypeScript Benefits:**
282
- - βœ… Full autocomplete for all methods and properties
283
- - βœ… Compile-time type checking prevents errors
284
- - βœ… IDE IntelliSense shows documentation
285
- - βœ… Custom metadata types for your use case
286
- - βœ… No `any` types - fully typed throughout
287
-
288
- ## 🎯 Platform Detection
289
-
290
- Ruvector automatically detects the best implementation for your platform:
122
+ ### Knowledge Graphs (Cypher)
291
123
 
292
124
  ```javascript
293
- const { getImplementationType, isNative, isWasm } = require('ruvector');
294
-
295
- console.log(getImplementationType()); // 'native' or 'wasm'
296
- console.log(isNative()); // true if using native Rust
297
- console.log(isWasm()); // true if using WebAssembly fallback
298
-
299
- // Performance varies by implementation:
300
- // Native (Rust): <0.5ms latency, 50K+ ops/sec
301
- // WASM fallback: 10-50ms latency, ~1K ops/sec
302
- ```
303
-
304
- ## πŸ”§ CLI Tools
305
-
306
- Ruvector includes a full command-line interface for database management:
307
-
308
- ### Create Database
125
+ const { GraphDB } = require('ruvector');
126
+
127
+ const graph = new GraphDB();
128
+
129
+ // Create entities and relationships
130
+ graph.execute(`
131
+ CREATE (alice:Person {name: 'Alice', role: 'Engineer'})
132
+ CREATE (bob:Person {name: 'Bob', role: 'Manager'})
133
+ CREATE (techcorp:Company {name: 'TechCorp', industry: 'AI'})
134
+ CREATE (alice)-[:WORKS_AT {since: 2022}]->(techcorp)
135
+ CREATE (bob)-[:WORKS_AT {since: 2020}]->(techcorp)
136
+ CREATE (alice)-[:REPORTS_TO]->(bob)
137
+ `);
138
+
139
+ // Query relationships
140
+ const team = graph.execute(`
141
+ MATCH (p:Person)-[:WORKS_AT]->(c:Company {name: 'TechCorp'})
142
+ RETURN p.name, p.role
143
+ `);
144
+
145
+ // Find paths
146
+ const chain = graph.execute(`
147
+ MATCH path = (a:Person {name: 'Alice'})-[:REPORTS_TO*1..3]->(manager)
148
+ RETURN path
149
+ `);
150
+
151
+ // Combine with vector search
152
+ const similarPeople = graph.execute(`
153
+ MATCH (p:Person)
154
+ WHERE vector.similarity(p.embedding, $queryEmbedding) > 0.8
155
+ RETURN p ORDER BY vector.similarity(p.embedding, $queryEmbedding) DESC
156
+ LIMIT 10
157
+ `);
158
+ ```
159
+
160
+ ### GNN-Enhanced Search (Self-Learning)
309
161
 
310
- ```bash
311
- # Create a new vector database
312
- npx ruvector create mydb.vec --dimensions 384 --metric cosine
313
-
314
- # Options:
315
- # --dimensions, -d Vector dimensionality (required)
316
- # --metric, -m Distance metric (cosine, euclidean, dot)
317
- # --max-elements Maximum number of vectors (default: 10000)
318
- ```
319
-
320
- ### Insert Vectors
321
-
322
- ```bash
323
- # Insert vectors from JSON file
324
- npx ruvector insert mydb.vec vectors.json
325
-
326
- # JSON format:
327
- # [
328
- # { "id": "doc1", "vector": [0.1, 0.2, ...], "metadata": {...} },
329
- # { "id": "doc2", "vector": [0.3, 0.4, ...], "metadata": {...} }
330
- # ]
331
- ```
332
-
333
- ### Search Vectors
334
-
335
- ```bash
336
- # Search for similar vectors
337
- npx ruvector search mydb.vec --vector "[0.1,0.2,0.3,...]" --top-k 10
338
-
339
- # Options:
340
- # --vector, -v Query vector (JSON array)
341
- # --top-k, -k Number of results (default: 10)
342
- # --threshold Minimum similarity score
343
- ```
344
-
345
- ### Database Statistics
346
-
347
- ```bash
348
- # Show database statistics
349
- npx ruvector stats mydb.vec
350
-
351
- # Output:
352
- # Total vectors: 10,000
353
- # Dimensions: 384
354
- # Metric: cosine
355
- # Memory usage: ~500 KB
356
- # Index type: HNSW
357
- ```
358
-
359
- ### Benchmarking
360
-
361
- ```bash
362
- # Run performance benchmark
363
- npx ruvector benchmark --num-vectors 10000 --num-queries 1000
364
-
365
- # Options:
366
- # --num-vectors Number of vectors to insert
367
- # --num-queries Number of search queries
368
- # --dimensions Vector dimensionality (default: 128)
369
- ```
162
+ ```javascript
163
+ const { GNNLayer, VectorDB } = require('ruvector');
370
164
 
371
- ### System Information
165
+ // Create GNN layer for query enhancement
166
+ const gnn = new GNNLayer(384, 512, 4); // input_dim, output_dim, num_heads
372
167
 
373
- ```bash
374
- # Show platform and implementation info
375
- npx ruvector info
168
+ // The GNN learns from your search patterns
169
+ async function enhancedSearch(query) {
170
+ // Get initial results
171
+ const neighbors = await db.search(query, 20);
376
172
 
377
- # Output:
378
- # Platform: linux-x64-gnu
379
- # Implementation: native (Rust)
380
- # GNN Module: Available
381
- # Node.js: v18.17.0
382
- # Performance: <0.5ms p50 latency
383
- ```
173
+ // Compute attention weights based on user clicks/relevance
174
+ const weights = computeRelevanceWeights(neighbors);
384
175
 
385
- ### Install Optional Packages
176
+ // GNN enhances the query using graph structure
177
+ const enhancedQuery = gnn.forward(query,
178
+ neighbors.map(n => n.embedding),
179
+ weights
180
+ );
386
181
 
387
- Ruvector supports optional packages that extend functionality. Use the `install` command to add them:
182
+ // Re-rank with enhanced understanding
183
+ return db.search(enhancedQuery, 10);
184
+ }
388
185
 
389
- ```bash
390
- # List available packages
391
- npx ruvector install
392
-
393
- # Output:
394
- # Available Ruvector Packages:
395
- #
396
- # gnn not installed
397
- # Graph Neural Network layers, tensor compression, differentiable search
398
- # npm: @ruvector/gnn
399
- #
400
- # core βœ“ installed
401
- # Core vector database with native Rust bindings
402
- # npm: @ruvector/core
403
-
404
- # Install specific package
405
- npx ruvector install gnn
406
-
407
- # Install all optional packages
408
- npx ruvector install --all
409
-
410
- # Interactive selection
411
- npx ruvector install -i
186
+ // Train on user feedback
187
+ gnn.train({
188
+ queries: historicalQueries,
189
+ clicks: userClickData,
190
+ relevance: expertLabels
191
+ }, { epochs: 100 });
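// The example above assumes an application-supplied `computeRelevanceWeights`
// helper; a minimal sketch (illustrative only, not part of the package) could
// normalize per-result click counts into attention weights:
function computeRelevanceWeights(neighbors) {
  const clicks = neighbors.map(n => (n.metadata && n.metadata.clicks) || 0);
  const total = clicks.reduce((sum, c) => sum + c, 0) || 1;
  return clicks.map(c => c / total);
}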
412
192
  ```
413
193
 
414
- The install command auto-detects your package manager (npm, yarn, pnpm, bun).
194
+ ### AI Agent Routing (Tiny Dancer)
415
195
 
416
- ### GNN Commands
196
+ Route queries to the optimal LLM based on complexity and save 60-80% on API costs:
417
197
 
418
- Ruvector includes Graph Neural Network (GNN) capabilities for advanced tensor compression and differentiable search.
198
+ ```javascript
199
+ const { Router } = require('ruvector');
419
200
 
420
- #### GNN Info
201
+ const router = new Router({
202
+ confidenceThreshold: 0.85,
203
+ maxUncertainty: 0.15,
204
+ enableCircuitBreaker: true
205
+ });
421
206
 
422
- ```bash
423
- # Show GNN module information
424
- npx ruvector gnn info
425
-
426
- # Output:
427
- # GNN Module Information
428
- # Status: Available
429
- # Platform: linux
430
- # Architecture: x64
431
- #
432
- # Available Features:
433
- # β€’ RuvectorLayer - GNN layer with multi-head attention
434
- # β€’ TensorCompress - Adaptive tensor compression (5 levels)
435
- # β€’ differentiableSearch - Soft attention-based search
436
- # β€’ hierarchicalForward - Multi-layer GNN processing
437
- ```
207
+ // Define your model candidates
208
+ const models = [
209
+ { id: 'gpt-4-turbo', embedding: gpt4Emb, cost: 0.03, quality: 0.95 },
210
+ { id: 'gpt-3.5-turbo', embedding: gpt35Emb, cost: 0.002, quality: 0.80 },
211
+ { id: 'claude-3-haiku', embedding: haikuEmb, cost: 0.001, quality: 0.75 },
212
+ { id: 'llama-3-8b', embedding: llamaEmb, cost: 0.0005, quality: 0.70 }
213
+ ];
438
214
 
439
- #### GNN Layer
215
+ async function smartComplete(prompt) {
216
+ const promptEmb = await embed(prompt);
440
217
 
441
- ```bash
442
- # Create and test a GNN layer
443
- npx ruvector gnn layer -i 128 -h 256 --test
444
-
445
- # Options:
446
- # -i, --input-dim Input dimension (required)
447
- # -h, --hidden-dim Hidden dimension (required)
448
- # -a, --heads Number of attention heads (default: 4)
449
- # -d, --dropout Dropout rate (default: 0.1)
450
- # --test Run a test forward pass
451
- # -o, --output Save layer config to JSON file
452
- ```
218
+ // Router decides optimal model
219
+ const decision = router.route(promptEmb, models);
453
220
 
454
- #### GNN Compress
221
+ console.log(`Routing to ${decision.candidateId} (confidence: ${decision.confidence})`);
222
+ // Output: "Routing to gpt-3.5-turbo (confidence: 0.92)"
455
223
 
456
- ```bash
457
- # Compress embeddings using adaptive tensor compression
458
- npx ruvector gnn compress -f embeddings.json -l pq8 -o compressed.json
459
-
460
- # Options:
461
- # -f, --file Input JSON file with embeddings (required)
462
- # -l, --level Compression level: none|half|pq8|pq4|binary (default: auto)
463
- # -a, --access-freq Access frequency for auto compression (default: 0.5)
464
- # -o, --output Output file for compressed data
465
-
466
- # Compression levels:
467
- # none (freq > 0.8) - Full precision, hot data
468
- # half (freq > 0.4) - ~50% savings, warm data
469
- # pq8 (freq > 0.1) - ~8x compression, cool data
470
- # pq4 (freq > 0.01) - ~16x compression, cold data
471
- # binary (freq <= 0.01) - ~32x compression, archive
224
+ // Call the selected model
225
+ return callModel(decision.candidateId, prompt);
226
+ }
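// `embed` and `callModel` above are assumed application helpers. A minimal
// `callModel` sketch for the OpenAI-hosted candidates (other providers would
// need their own clients; shown purely as an illustration):
const OpenAI = require('openai');
const openai = new OpenAI();

async function callModel(modelId, prompt) {
  const response = await openai.chat.completions.create({
    model: modelId,
    messages: [{ role: 'user', content: prompt }]
  });
  return response.choices[0].message.content;
}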
472
227
  ```
473
228
 
474
- #### GNN Search
229
+ ### Compression (2-32x Memory Savings)
475
230
 
476
- ```bash
477
- # Differentiable search with soft attention
478
- npx ruvector gnn search -q "[1.0,0.0,0.0]" -c candidates.json -k 5
479
-
480
- # Options:
481
- # -q, --query Query vector as JSON array (required)
482
- # -c, --candidates Candidates file - JSON array of vectors (required)
483
- # -k, --top-k Number of results (default: 5)
484
- # -t, --temperature Softmax temperature (default: 1.0)
231
+ ```javascript
232
+ const { compress, decompress, CompressionTier } = require('ruvector');
233
+
234
+ // Automatic tier selection
235
+ const auto = compress(embedding, 0.3); // 30% quality threshold
236
+
237
+ // Explicit tiers
238
+ const f16 = compress(embedding, CompressionTier.F16); // 2x compression
239
+ const pq8 = compress(embedding, CompressionTier.PQ8); // 8x compression
240
+ const pq4 = compress(embedding, CompressionTier.PQ4); // 16x compression
241
+ const binary = compress(embedding, CompressionTier.Binary); // 32x compression
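// Round-trip sketch (illustrative; assumes `decompress` returns a numeric array
// of the original length). PQ8 is lossy, so expect a small reconstruction error
// rather than exact equality:
const restored = decompress(pq8);
const maxError = Math.max(...restored.map((v, i) => Math.abs(v - embedding[i])));
console.log(`PQ8 max reconstruction error: ${maxError.toFixed(4)}`);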
242
+
243
+ // Adaptive tiering based on access frequency
244
+ db.enableAdaptiveCompression({
245
+ hotThreshold: 0.8, // Keep hot data in f32
246
+ warmThreshold: 0.4, // Compress to f16
247
+ coldThreshold: 0.1, // Compress to PQ8
248
+ archiveThreshold: 0.01 // Compress to binary
249
+ });
485
250
  ```
486
251
 
487
- ### Attention Commands
488
-
489
- Ruvector includes high-performance attention mechanisms for transformer-based operations, hyperbolic embeddings, and graph attention.
252
+ ## CLI Usage
490
253
 
491
254
  ```bash
492
- # Install the attention module (optional)
493
- npm install @ruvector/attention
494
- ```
495
-
496
- #### Attention Mechanisms Reference
497
-
498
- | Mechanism | Type | Complexity | When to Use |
499
- |-----------|------|------------|-------------|
500
- | **DotProductAttention** | Core | O(nΒ²) | Standard scaled dot-product attention for transformers |
501
- | **MultiHeadAttention** | Core | O(nΒ²) | Parallel attention heads for capturing different relationships |
502
- | **FlashAttention** | Core | O(nΒ²) IO-optimized | Memory-efficient attention for long sequences |
503
- | **HyperbolicAttention** | Core | O(nΒ²) | Hierarchical data, tree-like structures, taxonomies |
504
- | **LinearAttention** | Core | O(n) | Very long sequences where O(nΒ²) is prohibitive |
505
- | **MoEAttention** | Core | O(n*k) | Mixture of Experts routing, specialized attention |
506
- | **GraphRoPeAttention** | Graph | O(nΒ²) | Graph data with rotary position embeddings |
507
- | **EdgeFeaturedAttention** | Graph | O(nΒ²) | Graphs with rich edge features/attributes |
508
- | **DualSpaceAttention** | Graph | O(nΒ²) | Combined Euclidean + hyperbolic representation |
509
- | **LocalGlobalAttention** | Graph | O(n*k) | Large graphs with local + global context |
510
-
511
- #### Attention Info
255
+ # Show system info and backend status
256
+ npx ruvector info
512
257
 
513
- ```bash
514
- # Show attention module information
515
- npx ruvector attention info
516
-
517
- # Output:
518
- # Attention Module Information
519
- # Status: Available
520
- # Version: 0.1.0
521
- # Platform: linux
522
- # Architecture: x64
523
- #
524
- # Core Attention Mechanisms:
525
- # β€’ DotProductAttention - Scaled dot-product attention
526
- # β€’ MultiHeadAttention - Multi-head self-attention
527
- # β€’ FlashAttention - Memory-efficient IO-aware attention
528
- # β€’ HyperbolicAttention - PoincarΓ© ball attention
529
- # β€’ LinearAttention - O(n) linear complexity attention
530
- # β€’ MoEAttention - Mixture of Experts attention
531
- ```
258
+ # Initialize a new index
259
+ npx ruvector init my-index --dimension 384 --type hnsw
532
260
 
533
- #### Attention List
261
+ # Insert vectors from JSON/JSONL
262
+ npx ruvector insert my-index vectors.json
263
+ npx ruvector insert my-index vectors.jsonl --format jsonl
534
264
 
535
- ```bash
536
- # List all available attention mechanisms
537
- npx ruvector attention list
265
+ # Search with a query
266
+ npx ruvector search my-index --query "[0.1, 0.2, ...]" -k 10
267
+ npx ruvector search my-index --text "machine learning" -k 10 # Auto-embed
538
268
 
539
- # With verbose details
540
- npx ruvector attention list -v
541
- ```
269
+ # Show index statistics
270
+ npx ruvector stats my-index
542
271
 
543
- #### Attention Benchmark
272
+ # Run performance benchmarks
273
+ npx ruvector benchmark --dimension 384 --num-vectors 10000
544
274
 
545
- ```bash
546
- # Benchmark attention mechanisms
547
- npx ruvector attention benchmark -d 256 -n 100 -i 100
548
-
549
- # Options:
550
- # -d, --dimension Vector dimension (default: 256)
551
- # -n, --num-vectors Number of vectors (default: 100)
552
- # -i, --iterations Benchmark iterations (default: 100)
553
- # -t, --types Attention types to benchmark (default: dot,flash,linear)
554
-
555
- # Example output:
556
- # Dimension: 256
557
- # Vectors: 100
558
- # Iterations: 100
559
- #
560
- # dot: 0.012ms/op (84,386 ops/sec)
561
- # flash: 0.012ms/op (82,844 ops/sec)
562
- # linear: 0.066ms/op (15,259 ops/sec)
275
+ # Export/import
276
+ npx ruvector export my-index backup.bin
277
+ npx ruvector import backup.bin restored-index
563
278
  ```
564
279
 
565
- #### Hyperbolic Operations
566
-
567
- ```bash
568
- # Calculate PoincarΓ© distance between two points
569
- npx ruvector attention hyperbolic -a distance -v "[0.1,0.2,0.3]" -b "[0.4,0.5,0.6]"
280
+ ## Integrations
570
281
 
571
- # Project vector to PoincarΓ© ball
572
- npx ruvector attention hyperbolic -a project -v "[1.5,2.0,0.8]"
282
+ ### LangChain
573
283
 
574
- # MΓΆbius addition in hyperbolic space
575
- npx ruvector attention hyperbolic -a mobius-add -v "[0.1,0.2]" -b "[0.3,0.4]"
284
+ ```javascript
285
+ const { RuVectorStore } = require('ruvector/langchain');
286
+ const { OpenAIEmbeddings } = require('@langchain/openai');
576
287
 
577
- # Exponential map (tangent space β†’ PoincarΓ© ball)
578
- npx ruvector attention hyperbolic -a exp-map -v "[0.1,0.2,0.3]"
288
+ const vectorStore = new RuVectorStore(
289
+ new OpenAIEmbeddings(),
290
+ { dimension: 1536 }
291
+ );
579
292
 
580
- # Options:
581
- # -a, --action Action: distance|project|mobius-add|exp-map|log-map
582
- # -v, --vector Input vector as JSON array (required)
583
- # -b, --vector-b Second vector for binary operations
584
- # -c, --curvature PoincarΓ© ball curvature (default: 1.0)
293
+ await vectorStore.addDocuments(documents);
294
+ const results = await vectorStore.similaritySearch("query", 5);
585
295
  ```
586
296
 
587
- #### When to Use Each Attention Type
588
-
589
- | Use Case | Recommended Attention | Reason |
590
- |----------|----------------------|--------|
591
- | **Standard NLP/Transformers** | MultiHeadAttention | Industry standard, well-tested |
592
- | **Long Documents (>4K tokens)** | FlashAttention or LinearAttention | Memory efficient |
593
- | **Hierarchical Classification** | HyperbolicAttention | Captures tree-like structures |
594
- | **Knowledge Graphs** | GraphRoPeAttention | Position-aware graph attention |
595
- | **Multi-Relational Graphs** | EdgeFeaturedAttention | Leverages edge attributes |
596
- | **Taxonomy/Ontology Search** | DualSpaceAttention | Best of both Euclidean + hyperbolic |
597
- | **Large-Scale Graphs** | LocalGlobalAttention | Efficient local + global context |
598
- | **Model Routing/MoE** | MoEAttention | Expert selection and routing |
599
-
600
- ## πŸ“Š Performance Benchmarks
601
-
602
- Tested on AMD Ryzen 9 5950X, 128-dimensional vectors:
603
-
604
- ### Native Performance (Rust)
605
-
606
- | Operation | Throughput | Latency (p50) | Latency (p99) |
607
- |-----------|------------|---------------|---------------|
608
- | Insert | 52,341 ops/sec | 0.019 ms | 0.045 ms |
609
- | Search (k=10) | 11,234 ops/sec | 0.089 ms | 0.156 ms |
610
- | Search (k=100) | 8,932 ops/sec | 0.112 ms | 0.203 ms |
611
- | Delete | 45,678 ops/sec | 0.022 ms | 0.051 ms |
612
-
613
- **Memory Usage**: ~50 bytes per 128-dim vector (including index)
614
-
615
- ### Comparison with Alternatives
616
-
617
- | Database | Insert (ops/sec) | Search (ops/sec) | Memory per Vector | Node.js | Browser |
618
- |----------|------------------|------------------|-------------------|---------|---------|
619
- | **Ruvector (Native)** | **52,341** | **11,234** | **50 bytes** | βœ… | ❌ |
620
- | **Ruvector (WASM)** | **~1,000** | **~100** | **50 bytes** | βœ… | βœ… |
621
- | Faiss (HNSW) | 38,200 | 9,800 | 68 bytes | ❌ | ❌ |
622
- | Hnswlib | 41,500 | 10,200 | 62 bytes | βœ… | ❌ |
623
- | ChromaDB | ~1,000 | ~20 | 150 bytes | βœ… | ❌ |
624
-
625
- *Benchmarks measured with 100K vectors, 128 dimensions, k=10*
626
-
627
- ## πŸ” Comparison with Other Vector Databases
628
-
629
- Comprehensive comparison of Ruvector against popular vector database solutions:
630
-
631
- | Feature | Ruvector | Pinecone | Qdrant | Weaviate | Milvus | ChromaDB | Faiss |
632
- |---------|----------|----------|--------|----------|--------|----------|-------|
633
- | **Deployment** |
634
- | Installation | `npm install` βœ… | Cloud API ☁️ | Docker 🐳 | Docker 🐳 | Docker/K8s 🐳 | `pip install` 🐍 | `pip install` 🐍 |
635
- | Node.js Native | βœ… First-class | ❌ API only | ⚠️ HTTP API | ⚠️ HTTP API | ⚠️ HTTP API | ❌ Python | ❌ Python |
636
- | Setup Time | < 1 minute | 5-10 minutes | 10-30 minutes | 15-30 minutes | 30-60 minutes | 5 minutes | 5 minutes |
637
- | Infrastructure | None required | Managed cloud | Self-hosted | Self-hosted | Self-hosted | Embedded | Embedded |
638
- | **Performance** |
639
- | Query Latency (p50) | **<0.5ms** | ~2-5ms | ~1-2ms | ~2-3ms | ~3-5ms | ~50ms | ~1ms |
640
- | Insert Throughput | **52,341 ops/sec** | ~10,000 ops/sec | ~20,000 ops/sec | ~15,000 ops/sec | ~25,000 ops/sec | ~1,000 ops/sec | ~40,000 ops/sec |
641
- | Memory per Vector (128d) | **50 bytes** | ~80 bytes | 62 bytes | ~100 bytes | ~70 bytes | 150 bytes | 68 bytes |
642
- | Recall @ k=10 | 95%+ | 93% | 94% | 92% | 96% | 85% | 97% |
643
- | **Platform Support** |
644
- | Linux | βœ… Native | ☁️ API | βœ… Docker | βœ… Docker | βœ… Docker | βœ… Python | βœ… Python |
645
- | macOS | βœ… Native | ☁️ API | βœ… Docker | βœ… Docker | βœ… Docker | βœ… Python | βœ… Python |
646
- | Windows | βœ… Native | ☁️ API | βœ… Docker | βœ… Docker | ⚠️ WSL2 | βœ… Python | βœ… Python |
647
- | Browser/WASM | βœ… Yes | ❌ No | ❌ No | ❌ No | ❌ No | ❌ No | ❌ No |
648
- | ARM64 | βœ… Native | ☁️ API | βœ… Yes | βœ… Yes | ⚠️ Limited | βœ… Yes | βœ… Yes |
649
- | Alpine Linux | βœ… WASM | ☁️ API | ⚠️ Build from source | ⚠️ Build from source | ❌ No | βœ… Yes | βœ… Yes |
650
- | **Features** |
651
- | Distance Metrics | Cosine, L2, Dot | Cosine, L2, Dot | 11 metrics | 10 metrics | 8 metrics | L2, Cosine, IP | L2, IP, Cosine |
652
- | Filtering | βœ… Metadata | βœ… Advanced | βœ… Advanced | βœ… Advanced | βœ… Advanced | βœ… Basic | ❌ Limited |
653
- | Persistence | βœ… File-based | ☁️ Managed | βœ… Disk | βœ… Disk | βœ… Disk | βœ… DuckDB | ❌ Memory |
654
- | Indexing | HNSW | Proprietary | HNSW | HNSW | IVF/HNSW | HNSW | IVF/HNSW |
655
- | Quantization | βœ… PQ | βœ… Yes | βœ… Scalar | βœ… PQ | βœ… PQ/SQ | ❌ No | βœ… PQ |
656
- | Batch Operations | βœ… Yes | βœ… Yes | βœ… Yes | βœ… Yes | βœ… Yes | βœ… Yes | βœ… Yes |
657
- | **Developer Experience** |
658
- | TypeScript Types | βœ… Full | βœ… Generated | ⚠️ Community | ⚠️ Community | ⚠️ Community | ⚠️ Partial | ❌ No |
659
- | Documentation | βœ… Excellent | βœ… Excellent | βœ… Good | βœ… Good | βœ… Good | βœ… Good | ⚠️ Technical |
660
- | Examples | βœ… Many | βœ… Many | βœ… Good | βœ… Good | βœ… Many | βœ… Good | ⚠️ Limited |
661
- | CLI Tools | βœ… Included | ⚠️ Limited | βœ… Yes | βœ… Yes | βœ… Yes | ⚠️ Basic | ❌ No |
662
- | **Operations** |
663
- | Monitoring | βœ… Metrics | βœ… Dashboard | βœ… Prometheus | βœ… Prometheus | βœ… Prometheus | ⚠️ Basic | ❌ No |
664
- | Backups | βœ… File copy | ☁️ Automatic | βœ… Snapshots | βœ… Snapshots | βœ… Snapshots | βœ… File copy | ❌ Manual |
665
- | High Availability | ⚠️ App-level | βœ… Built-in | βœ… Clustering | βœ… Clustering | βœ… Clustering | ❌ No | ❌ No |
666
- | Auto-Scaling | ⚠️ App-level | βœ… Automatic | ⚠️ Manual | ⚠️ Manual | ⚠️ K8s HPA | ❌ No | ❌ No |
667
- | **Cost** |
668
- | Pricing Model | Free (MIT) | Pay-per-use | Free (Apache) | Free (BSD) | Free (Apache) | Free (Apache) | Free (MIT) |
669
- | Monthly Cost (1M vectors) | **$0** | ~$70-200 | ~$20-50 (infra) | ~$30-60 (infra) | ~$50-100 (infra) | $0 | $0 |
670
- | Monthly Cost (10M vectors) | **$0** | ~$500-1000 | ~$100-200 (infra) | ~$150-300 (infra) | ~$200-400 (infra) | $0 | $0 |
671
- | API Rate Limits | None | Yes | None | None | None | None | None |
672
- | **Use Cases** |
673
- | RAG Systems | βœ… Excellent | βœ… Excellent | βœ… Excellent | βœ… Excellent | βœ… Excellent | βœ… Good | ⚠️ Limited |
674
- | Serverless | βœ… Perfect | βœ… Good | ❌ No | ❌ No | ❌ No | ⚠️ Possible | ⚠️ Possible |
675
- | Edge Computing | βœ… Excellent | ❌ No | ❌ No | ❌ No | ❌ No | ❌ No | ⚠️ Possible |
676
- | Production Scale (100M+) | ⚠️ Single node | βœ… Yes | βœ… Yes | βœ… Yes | βœ… Excellent | ⚠️ Limited | ⚠️ Manual |
677
- | Embedded Apps | βœ… Excellent | ❌ No | ❌ No | ❌ No | ❌ No | ⚠️ Possible | βœ… Good |
678
-
679
- ### When to Choose Ruvector
680
-
681
- βœ… **Perfect for:**
682
- - **Node.js/TypeScript applications** needing embedded vector search
683
- - **Serverless and edge computing** where external services aren't practical
684
- - **Rapid prototyping and development** with minimal setup time
685
- - **RAG systems** with LangChain, LlamaIndex, or custom implementations
686
- - **Cost-sensitive projects** that can't afford cloud API pricing
687
- - **Offline-first applications** requiring local vector search
688
- - **Browser-based AI** with WebAssembly fallback
689
- - **Small to medium scale** (up to 10M vectors per instance)
690
-
691
- ⚠️ **Consider alternatives for:**
692
- - **Massive scale (100M+ vectors)** - Consider Pinecone, Milvus, or Qdrant clusters
693
- - **Multi-tenancy requirements** - Weaviate or Qdrant offer better isolation
694
- - **Distributed systems** - Milvus provides better horizontal scaling
695
- - **Zero-ops cloud solution** - Pinecone handles all infrastructure
696
-
697
- ### Why Choose Ruvector Over...
698
-
699
- **vs Pinecone:**
700
- - βœ… No API costs (save $1000s/month)
701
- - βœ… No network latency (10x faster queries)
702
- - βœ… No vendor lock-in
703
- - βœ… Works offline and in restricted environments
704
- - ❌ No managed multi-region clusters
705
-
706
- **vs ChromaDB:**
707
- - βœ… 50x faster queries (native Rust vs Python)
708
- - βœ… True Node.js support (not HTTP API)
709
- - βœ… Better TypeScript integration
710
- - βœ… Lower memory usage
711
- - ❌ Smaller ecosystem and community
712
-
713
- **vs Qdrant:**
714
- - βœ… Zero infrastructure setup
715
- - βœ… Embedded in your app (no Docker)
716
- - βœ… Better for serverless environments
717
- - βœ… Native Node.js bindings
718
- - ❌ No built-in clustering or HA
719
-
720
- **vs Faiss:**
721
- - βœ… Full Node.js support (Faiss is Python-only)
722
- - βœ… Easier API and better developer experience
723
- - βœ… Built-in persistence and metadata
724
- - ⚠️ Slightly lower recall at same performance
725
-
726
- ## 🎯 Real-World Tutorials
727
-
728
- ### Tutorial 1: Building a RAG System with OpenAI
729
-
730
- **What you'll learn:** Create a production-ready Retrieval-Augmented Generation system that enhances LLM responses with relevant context from your documents.
731
-
732
- **Prerequisites:**
733
- ```bash
734
- npm install ruvector openai
735
- export OPENAI_API_KEY="your-api-key-here"
736
- ```
737
-
738
- **Complete Implementation:**
297
+ ### LlamaIndex
739
298
 
740
299
  ```javascript
741
- const { VectorDb } = require('ruvector');
742
- const OpenAI = require('openai');
743
-
744
- class RAGSystem {
745
- constructor() {
746
- // Initialize OpenAI client
747
- this.openai = new OpenAI({
748
- apiKey: process.env.OPENAI_API_KEY
749
- });
750
-
751
- // Create vector database for OpenAI embeddings
752
- // text-embedding-ada-002 produces 1536-dimensional vectors
753
- this.db = new VectorDb({
754
- dimensions: 1536,
755
- maxElements: 100000,
756
- storagePath: './rag-knowledge-base.db'
757
- });
758
-
759
- console.log('βœ… RAG System initialized');
760
- }
761
-
762
- // Step 1: Index your knowledge base
763
- async indexDocuments(documents) {
764
- console.log(`πŸ“š Indexing ${documents.length} documents...`);
765
-
766
- for (let i = 0; i < documents.length; i++) {
767
- const doc = documents[i];
768
-
769
- // Generate embedding for the document
770
- const response = await this.openai.embeddings.create({
771
- model: 'text-embedding-ada-002',
772
- input: doc.content
773
- });
774
-
775
- // Store in vector database
776
- await this.db.insert({
777
- id: doc.id || `doc_${i}`,
778
- vector: new Float32Array(response.data[0].embedding),
779
- metadata: {
780
- title: doc.title,
781
- content: doc.content,
782
- source: doc.source,
783
- date: doc.date || new Date().toISOString()
784
- }
785
- });
786
-
787
- console.log(` βœ… Indexed: ${doc.title}`);
788
- }
789
-
790
- const count = await this.db.len();
791
- console.log(`\nβœ… Indexed ${count} documents total`);
792
- }
793
-
794
- // Step 2: Retrieve relevant context for a query
795
- async retrieveContext(query, k = 3) {
796
- console.log(`πŸ” Searching for: "${query}"`);
797
-
798
- // Generate embedding for the query
799
- const response = await this.openai.embeddings.create({
800
- model: 'text-embedding-ada-002',
801
- input: query
802
- });
803
-
804
- // Search for similar documents
805
- const results = await this.db.search({
806
- vector: new Float32Array(response.data[0].embedding),
807
- k: k,
808
- threshold: 0.7 // Only use highly relevant results
809
- });
300
+ const { RuVectorIndex } = require('ruvector/llamaindex');
810
301
 
811
- console.log(`πŸ“„ Found ${results.length} relevant documents\n`);
812
-
813
- return results.map(r => ({
814
- content: r.metadata.content,
815
- title: r.metadata.title,
816
- score: r.score
817
- }));
818
- }
819
-
820
- // Step 3: Generate answer with retrieved context
821
- async answer(question) {
822
- // Retrieve relevant context
823
- const context = await this.retrieveContext(question, 3);
824
-
825
- if (context.length === 0) {
826
- return "I don't have enough information to answer that question.";
827
- }
828
-
829
- // Build prompt with context
830
- const contextText = context
831
- .map((doc, i) => `[${i + 1}] ${doc.title}\n${doc.content}`)
832
- .join('\n\n');
833
-
834
- const prompt = `Answer the question based on the following context. If the context doesn't contain the answer, say so.
835
-
836
- Context:
837
- ${contextText}
838
-
839
- Question: ${question}
840
-
841
- Answer:`;
842
-
843
- console.log('πŸ€– Generating answer...\n');
844
-
845
- // Generate completion
846
- const completion = await this.openai.chat.completions.create({
847
- model: 'gpt-4',
848
- messages: [
849
- { role: 'system', content: 'You are a helpful assistant that answers questions based on provided context.' },
850
- { role: 'user', content: prompt }
851
- ],
852
- temperature: 0.3 // Lower temperature for more factual responses
853
- });
854
-
855
- return {
856
- answer: completion.choices[0].message.content,
857
- sources: context.map(c => c.title)
858
- };
859
- }
860
- }
861
-
862
- // Example Usage
863
- async function main() {
864
- const rag = new RAGSystem();
865
-
866
- // Step 1: Index your knowledge base
867
- const documents = [
868
- {
869
- id: 'doc1',
870
- title: 'Ruvector Introduction',
871
- content: 'Ruvector is a high-performance vector database for Node.js built in Rust. It provides sub-millisecond query latency and supports over 52,000 inserts per second.',
872
- source: 'documentation'
873
- },
874
- {
875
- id: 'doc2',
876
- title: 'Vector Databases Explained',
877
- content: 'Vector databases store data as high-dimensional vectors, enabling semantic similarity search. They are essential for AI applications like RAG systems and recommendation engines.',
878
- source: 'blog'
879
- },
880
- {
881
- id: 'doc3',
882
- title: 'HNSW Algorithm',
883
- content: 'Hierarchical Navigable Small World (HNSW) is a graph-based algorithm for approximate nearest neighbor search. It provides excellent recall with low latency.',
884
- source: 'research'
885
- }
886
- ];
887
-
888
- await rag.indexDocuments(documents);
889
-
890
- // Step 2: Ask questions
891
- console.log('\n' + '='.repeat(60) + '\n');
892
-
893
- const result = await rag.answer('What is Ruvector and what are its performance characteristics?');
894
-
895
- console.log('πŸ“ Answer:', result.answer);
896
- console.log('\nπŸ“š Sources:', result.sources.join(', '));
897
- }
898
-
899
- main().catch(console.error);
900
- ```
302
+ const index = new RuVectorIndex({
303
+ dimension: 384,
304
+ enableGNN: true
305
+ });
901
306
 
902
- **Expected Output:**
307
+ await index.insert(documents);
308
+ const queryEngine = index.asQueryEngine();
309
+ const response = await queryEngine.query("What is machine learning?");
903
310
  ```
904
- βœ… RAG System initialized
905
- πŸ“š Indexing 3 documents...
906
- βœ… Indexed: Ruvector Introduction
907
- βœ… Indexed: Vector Databases Explained
908
- βœ… Indexed: HNSW Algorithm
909
311
 
910
- βœ… Indexed 3 documents total
312
+ ### OpenAI / Anthropic
911
313
 
912
- ============================================================
913
-
914
- πŸ” Searching for: "What is Ruvector and what are its performance characteristics?"
915
- πŸ“„ Found 2 relevant documents
314
+ ```javascript
315
+ const { createEmbedder } = require('ruvector');
916
316
 
917
- πŸ€– Generating answer...
317
+ // OpenAI
318
+ const openaiEmbed = createEmbedder('openai', {
319
+ model: 'text-embedding-3-small'
320
+ });
918
321
 
919
- πŸ“ Answer: Ruvector is a high-performance vector database built in Rust for Node.js applications. Its key performance characteristics include:
920
- - Sub-millisecond query latency
921
- - Over 52,000 inserts per second
922
- - Optimized for semantic similarity search
322
+ // Anthropic (via Voyage)
323
+ const anthropicEmbed = createEmbedder('voyage', {
324
+ model: 'voyage-2'
325
+ });
923
326
 
924
- πŸ“š Sources: Ruvector Introduction, Vector Databases Explained
327
+ // Cohere
328
+ const cohereEmbed = createEmbedder('cohere', {
329
+ model: 'embed-english-v3.0'
330
+ });
925
331
  ```
926
332
 
927
- **Production Tips:**
928
- - βœ… Use batch embedding for better throughput (OpenAI supports up to 2048 texts)
929
- - βœ… Implement caching for frequently asked questions
930
- - βœ… Add error handling for API rate limits
931
- - βœ… Monitor token usage and costs
932
- - βœ… Regularly update your knowledge base
333
+ ## Benchmarks
933
334
 
934
- ---
935
-
936
- ### Tutorial 2: Semantic Search Engine
937
-
938
- **What you'll learn:** Build a semantic search engine that understands meaning, not just keywords.
335
+ | Operation | Dimensions | Time | Throughput |
336
+ |-----------|------------|------|------------|
337
+ | **HNSW Search (k=10)** | 384 | 61µs | 16,400 QPS |
338
+ | **HNSW Search (k=100)** | 384 | 164µs | 6,100 QPS |
339
+ | **Cosine Similarity** | 1536 | 143ns | 7M ops/sec |
340
+ | **Dot Product** | 384 | 33ns | 30M ops/sec |
341
+ | **Insert** | 384 | 20µs | 50,000/sec |
342
+ | **GNN Forward** | 384→512 | 89µs | 11,200/sec |
343
+ | **Compression (PQ8)** | 384 | 12µs | 83,000/sec |
939
344
 
940
- **Prerequisites:**
345
+ Run your own benchmarks:
941
346
  ```bash
942
- npm install ruvector @xenova/transformers
943
- ```
944
-
945
- **Complete Implementation:**
946
-
947
- ```javascript
948
- const { VectorDb } = require('ruvector');
949
- const { pipeline } = require('@xenova/transformers');
950
-
951
- class SemanticSearchEngine {
952
- constructor() {
953
- this.db = null;
954
- this.embedder = null;
955
- }
956
-
957
- // Step 1: Initialize the embedding model
958
- async initialize() {
959
- console.log('πŸš€ Initializing semantic search engine...');
960
-
961
- // Load sentence-transformers model (runs locally, no API needed!)
962
- console.log('πŸ“₯ Loading embedding model...');
963
- this.embedder = await pipeline(
964
- 'feature-extraction',
965
- 'Xenova/all-MiniLM-L6-v2'
966
- );
967
-
968
- // Create vector database (384 dimensions for all-MiniLM-L6-v2)
969
- this.db = new VectorDb({
970
- dimensions: 384,
971
- maxElements: 50000,
972
- storagePath: './semantic-search.db'
973
- });
974
-
975
- console.log('βœ… Search engine ready!\n');
976
- }
977
-
978
- // Step 2: Generate embeddings
979
- async embed(text) {
980
- const output = await this.embedder(text, {
981
- pooling: 'mean',
982
- normalize: true
983
- });
984
-
985
- // Convert to Float32Array
986
- return new Float32Array(output.data);
987
- }
988
-
989
- // Step 3: Index documents
990
- async indexDocuments(documents) {
991
- console.log(`πŸ“š Indexing ${documents.length} documents...`);
992
-
993
- for (const doc of documents) {
994
- const vector = await this.embed(doc.content);
995
-
996
- await this.db.insert({
997
- id: doc.id,
998
- vector: vector,
999
- metadata: {
1000
- title: doc.title,
1001
- content: doc.content,
1002
- category: doc.category,
1003
- url: doc.url
1004
- }
1005
- });
1006
-
1007
- console.log(` βœ… ${doc.title}`);
1008
- }
1009
-
1010
- const count = await this.db.len();
1011
- console.log(`\nβœ… Indexed ${count} documents\n`);
1012
- }
1013
-
1014
- // Step 4: Semantic search
1015
- async search(query, options = {}) {
1016
- const {
1017
- k = 5,
1018
- category = null,
1019
- threshold = 0.3
1020
- } = options;
1021
-
1022
- console.log(`πŸ” Searching for: "${query}"`);
1023
-
1024
- // Generate query embedding
1025
- const queryVector = await this.embed(query);
1026
-
1027
- // Search vector database
1028
- const results = await this.db.search({
1029
- vector: queryVector,
1030
- k: k * 2, // Get more results for filtering
1031
- threshold: threshold
1032
- });
1033
-
1034
- // Filter by category if specified
1035
- let filtered = results;
1036
- if (category) {
1037
- filtered = results.filter(r => r.metadata.category === category);
1038
- }
1039
-
1040
- // Return top k after filtering
1041
- const final = filtered.slice(0, k);
1042
-
1043
- console.log(`πŸ“„ Found ${final.length} results\n`);
1044
-
1045
- return final.map(r => ({
1046
- id: r.id,
1047
- title: r.metadata.title,
1048
- content: r.metadata.content,
1049
- category: r.metadata.category,
1050
- score: r.score,
1051
- url: r.metadata.url
1052
- }));
1053
- }
1054
-
1055
- // Step 5: Find similar documents
1056
- async findSimilar(documentId, k = 5) {
1057
- const doc = await this.db.get(documentId);
1058
-
1059
- if (!doc) {
1060
- throw new Error(`Document ${documentId} not found`);
1061
- }
1062
-
1063
- const results = await this.db.search({
1064
- vector: doc.vector,
1065
- k: k + 1 // +1 because the document itself will be included
1066
- });
1067
-
1068
- // Remove the document itself from results
1069
- return results
1070
- .filter(r => r.id !== documentId)
1071
- .slice(0, k);
1072
- }
1073
- }
1074
-
1075
- // Example Usage
1076
- async function main() {
1077
- const engine = new SemanticSearchEngine();
1078
- await engine.initialize();
1079
-
1080
- // Sample documents (in production, load from your database)
1081
- const documents = [
1082
- {
1083
- id: '1',
1084
- title: 'Understanding Neural Networks',
1085
- content: 'Neural networks are computing systems inspired by biological neural networks. They learn to perform tasks by considering examples.',
1086
- category: 'AI',
1087
- url: '/docs/neural-networks'
1088
- },
1089
- {
1090
- id: '2',
1091
- title: 'Introduction to Machine Learning',
1092
- content: 'Machine learning is a subset of artificial intelligence that provides systems the ability to learn and improve from experience.',
1093
- category: 'AI',
1094
- url: '/docs/machine-learning'
1095
- },
1096
- {
1097
- id: '3',
1098
- title: 'Web Development Best Practices',
1099
- content: 'Modern web development involves responsive design, performance optimization, and accessibility considerations.',
1100
- category: 'Web',
1101
- url: '/docs/web-dev'
1102
- },
1103
- {
1104
- id: '4',
1105
- title: 'Deep Learning Applications',
1106
- content: 'Deep learning has revolutionized computer vision, natural language processing, and speech recognition.',
1107
- category: 'AI',
1108
- url: '/docs/deep-learning'
1109
- }
1110
- ];
1111
-
1112
- // Index documents
1113
- await engine.indexDocuments(documents);
1114
-
1115
- // Example 1: Basic semantic search
1116
- console.log('Example 1: Basic Search\n' + '='.repeat(60));
1117
- const results1 = await engine.search('AI and neural nets');
1118
- results1.forEach((result, i) => {
1119
- console.log(`${i + 1}. ${result.title} (Score: ${result.score.toFixed(3)})`);
1120
- console.log(` ${result.content.slice(0, 80)}...`);
1121
- console.log(` Category: ${result.category}\n`);
1122
- });
1123
-
1124
- // Example 2: Category-filtered search
1125
- console.log('\nExample 2: Category-Filtered Search\n' + '='.repeat(60));
1126
- const results2 = await engine.search('learning algorithms', {
1127
- category: 'AI',
1128
- k: 3
1129
- });
1130
- results2.forEach((result, i) => {
1131
- console.log(`${i + 1}. ${result.title} (Score: ${result.score.toFixed(3)})`);
1132
- });
347
+ npx ruvector benchmark --dimension 384 --num-vectors 100000
348
+ ```
349
+
350
+ ## Comparison
351
+
352
+ | Feature | RuVector | Pinecone | Qdrant | ChromaDB | Milvus | Weaviate |
353
+ |---------|----------|----------|--------|----------|--------|----------|
354
+ | **Latency (p50)** | **61µs** | ~2ms | ~1ms | ~50ms | ~5ms | ~3ms |
355
+ | **Graph Queries** | ✅ Cypher | ❌ | ❌ | ❌ | ❌ | ✅ GraphQL |
356
+ | **Self-Learning** | ✅ GNN | ❌ | ❌ | ❌ | ❌ | ❌ |
357
+ | **AI Routing** | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ |
358
+ | **Browser/WASM** | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ |
359
+ | **Compression** | 2-32x | ❌ | ✅ | ❌ | ✅ | ✅ |
360
+ | **Hybrid Search** | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ |
361
+ | **Multi-tenancy** | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
362
+ | **Open Source** | ✅ MIT | ❌ | ✅ Apache | ✅ Apache | ✅ Apache | ✅ BSD |
363
+ | **Pricing** | Free | $70+/mo | Free | Free | Free | Free |
364
+
365
+ ## npm Packages
366
+
367
+ | Package | Description |
368
+ |---------|-------------|
369
+ | [`ruvector`](https://www.npmjs.com/package/ruvector) | **All-in-one package (recommended)** |
370
+ | [`@ruvector/wasm`](https://www.npmjs.com/package/@ruvector/wasm) | Browser/WASM bindings |
371
+ | [`@ruvector/graph`](https://www.npmjs.com/package/@ruvector/graph) | Graph database with Cypher |
372
+ | [`@ruvector/gnn`](https://www.npmjs.com/package/@ruvector/gnn) | Graph Neural Network layers |
373
+ | [`@ruvector/tiny-dancer`](https://www.npmjs.com/package/@ruvector/tiny-dancer) | AI agent routing (FastGRNN) |
374
+ | [`@ruvector/router`](https://www.npmjs.com/package/@ruvector/router) | Semantic routing engine |
1133
375
 
1134
- // Example 3: Find similar documents
1135
- console.log('\n\nExample 3: Find Similar Documents\n' + '='.repeat(60));
1136
- const similar = await engine.findSimilar('1', 2);
1137
- console.log('Documents similar to "Understanding Neural Networks":');
1138
- similar.forEach((doc, i) => {
1139
- console.log(`${i + 1}. ${doc.metadata.title} (Score: ${doc.score.toFixed(3)})`);
1140
- });
1141
- }
376
+ ```bash
377
+ # Install all-in-one (recommended)
378
+ npm install ruvector
1142
379
 
1143
- main().catch(console.error);
380
+ # Or install specific packages
381
+ npm install @ruvector/graph @ruvector/gnn
1144
382
  ```
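The all-in-one package re-exports the classes documented in the API Reference below. A minimal sketch of the expected import shape (assumed from the package layout; check the shipped type definitions for the exact exports):

```typescript
// Assumed export names — taken from the API Reference below.
import { VectorDB, GraphDB, GNNLayer, Router } from 'ruvector';

// The scoped packages expose the same pieces individually, e.g.:
// import { GraphDB } from '@ruvector/graph';
```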
1145
383
 
1146
- **Key Features:**
1147
- - βœ… Runs completely locally (no API keys needed)
1148
- - βœ… Understands semantic meaning, not just keywords
1149
- - βœ… Category filtering for better results
1150
- - βœ… "Find similar" functionality
1151
- - βœ… Fast: ~10ms query latency
1152
-
1153
- ---
1154
-
1155
- ### Tutorial 3: AI Agent Memory System
1156
-
1157
- **What you'll learn:** Implement a memory system for AI agents that remembers past experiences and learns from them.
1158
-
1159
- **Complete Implementation:**
1160
-
1161
- ```javascript
1162
- const { VectorDb } = require('ruvector');
1163
-
1164
- class AgentMemory {
1165
- constructor(agentId) {
1166
- this.agentId = agentId;
1167
-
1168
- // Create separate databases for different memory types
1169
- this.episodicMemory = new VectorDb({
1170
- dimensions: 768,
1171
- storagePath: `./memory/${agentId}-episodic.db`
1172
- });
1173
-
1174
- this.semanticMemory = new VectorDb({
1175
- dimensions: 768,
1176
- storagePath: `./memory/${agentId}-semantic.db`
1177
- });
1178
-
1179
- console.log(`🧠 Memory system initialized for agent: ${agentId}`);
1180
- }
1181
-
1182
- // Step 1: Store an experience (episodic memory)
1183
- async storeExperience(experience) {
1184
- const {
1185
- state,
1186
- action,
1187
- result,
1188
- reward,
1189
- embedding
1190
- } = experience;
1191
-
1192
- const experienceId = `exp_${Date.now()}_${Math.random()}`;
1193
-
1194
- await this.episodicMemory.insert({
1195
- id: experienceId,
1196
- vector: new Float32Array(embedding),
1197
- metadata: {
1198
- state: state,
1199
- action: action,
1200
- result: result,
1201
- reward: reward,
1202
- timestamp: Date.now(),
1203
- type: 'episodic'
1204
- }
1205
- });
1206
-
1207
- console.log(`πŸ’Ύ Stored experience: ${action} -> ${result} (reward: ${reward})`);
1208
- return experienceId;
1209
- }
1210
-
1211
- // Step 2: Store learned knowledge (semantic memory)
1212
- async storeKnowledge(knowledge) {
1213
- const {
1214
- concept,
1215
- description,
1216
- embedding,
1217
- confidence = 1.0
1218
- } = knowledge;
1219
-
1220
- const knowledgeId = `know_${Date.now()}`;
1221
-
1222
- await this.semanticMemory.insert({
1223
- id: knowledgeId,
1224
- vector: new Float32Array(embedding),
1225
- metadata: {
1226
- concept: concept,
1227
- description: description,
1228
- confidence: confidence,
1229
- learned: Date.now(),
1230
- uses: 0,
1231
- type: 'semantic'
1232
- }
1233
- });
1234
-
1235
- console.log(`πŸ“š Learned: ${concept}`);
1236
- return knowledgeId;
1237
- }
1238
-
1239
- // Step 3: Recall similar experiences
1240
- async recallExperiences(currentState, k = 5) {
1241
- console.log(`πŸ” Recalling similar experiences...`);
1242
-
1243
- const results = await this.episodicMemory.search({
1244
- vector: new Float32Array(currentState.embedding),
1245
- k: k,
1246
- threshold: 0.6 // Only recall reasonably similar experiences
1247
- });
1248
-
1249
- // Sort by reward to prioritize successful experiences
1250
- const sorted = results.sort((a, b) => b.metadata.reward - a.metadata.reward);
1251
-
1252
- console.log(`πŸ“ Recalled ${sorted.length} relevant experiences`);
1253
-
1254
- return sorted.map(r => ({
1255
- state: r.metadata.state,
1256
- action: r.metadata.action,
1257
- result: r.metadata.result,
1258
- reward: r.metadata.reward,
1259
- similarity: r.score
1260
- }));
1261
- }
1262
-
1263
- // Step 4: Query knowledge base
1264
- async queryKnowledge(query, k = 3) {
1265
- const results = await this.semanticMemory.search({
1266
- vector: new Float32Array(query.embedding),
1267
- k: k
1268
- });
1269
-
1270
- // Update usage statistics
1271
- for (const result of results) {
1272
- const knowledge = await this.semanticMemory.get(result.id);
1273
- if (knowledge) {
1274
- knowledge.metadata.uses += 1;
1275
- // In production, update the entry
1276
- }
1277
- }
1278
-
1279
- return results.map(r => ({
1280
- concept: r.metadata.concept,
1281
- description: r.metadata.description,
1282
- confidence: r.metadata.confidence,
1283
- relevance: r.score
1284
- }));
1285
- }
1286
-
1287
- // Step 5: Reflect and learn from experiences
1288
- async reflect() {
1289
- console.log('\nπŸ€” Reflecting on experiences...');
384
+ ## API Reference
1290
385
 
1291
- // Get all experiences
1292
- const totalExperiences = await this.episodicMemory.len();
1293
- console.log(`πŸ“Š Total experiences: ${totalExperiences}`);
386
+ ### VectorDB
1294
387
 
1295
- // Analyze success rate
1296
- // In production, you'd aggregate experiences and extract patterns
1297
- console.log('πŸ’‘ Analysis complete');
1298
-
1299
- return {
1300
- totalExperiences: totalExperiences,
1301
- knowledgeItems: await this.semanticMemory.len()
1302
- };
1303
- }
1304
-
1305
- // Step 6: Get memory statistics
1306
- async getStats() {
1307
- return {
1308
- episodicMemorySize: await this.episodicMemory.len(),
1309
- semanticMemorySize: await this.semanticMemory.len(),
1310
- agentId: this.agentId
1311
- };
1312
- }
388
+ ```typescript
389
+ class VectorDB {
390
+ constructor(dimension: number, options?: VectorDBOptions);
391
+
392
+ // CRUD operations
393
+ insert(id: string, values: number[], metadata?: object): Promise<void>;
394
+ insertBatch(vectors: Vector[], options?: BatchOptions): Promise<void>;
395
+ get(id: string): Promise<Vector | null>;
396
+ update(id: string, values?: number[], metadata?: object): Promise<void>;
397
+ delete(id: string): Promise<boolean>;
398
+
399
+ // Search
400
+ search(query: number[], k?: number, filter?: Filter): Promise<SearchResult[]>;
401
+ hybridSearch(query: number[], text: string, k?: number): Promise<SearchResult[]>;
402
+
403
+ // Persistence
404
+ save(path: string): Promise<void>;
405
+ static load(path: string): Promise<VectorDB>;
406
+
407
+ // Management
408
+ stats(): Promise<IndexStats>;
409
+ optimize(): Promise<void>;
410
+ clear(): Promise<void>;
1313
411
  }
412
+ ```
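A short usage sketch of the VectorDB surface above. The import shape, the result fields, and `./index.bin` are assumptions for illustration; the method calls follow the signatures listed:

```typescript
import { VectorDB } from 'ruvector'; // assumed export name

async function demo(): Promise<void> {
  const db = new VectorDB(384); // 384-dimensional index, default options

  // Insert a vector with optional metadata.
  const values = Array.from({ length: 384 }, () => Math.random());
  await db.insert('doc-1', values, { title: 'Hello world' });

  // k-nearest-neighbour search (the filter argument is optional).
  const hits = await db.search(values, 5);
  console.log(hits);

  // Persist to disk and restore later.
  await db.save('./index.bin');
  const restored = await VectorDB.load('./index.bin');
  console.log(await restored.stats());
}

demo().catch(console.error);
```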
1314
413
 
1315
- // Example Usage: Simulated agent learning to navigate
1316
- async function main() {
1317
- const agent = new AgentMemory('agent-001');
1318
-
1319
- // Simulate embedding function (in production, use a real model)
1320
- function embed(text) {
1321
- return Array(768).fill(0).map(() => Math.random());
1322
- }
1323
-
1324
- console.log('\n' + '='.repeat(60));
1325
- console.log('PHASE 1: Learning from experiences');
1326
- console.log('='.repeat(60) + '\n');
1327
-
1328
- // Store some experiences
1329
- await agent.storeExperience({
1330
- state: { location: 'room1', goal: 'room3' },
1331
- action: 'move_north',
1332
- result: 'reached room2',
1333
- reward: 0.5,
1334
- embedding: embed('navigating from room1 to room2')
1335
- });
1336
-
1337
- await agent.storeExperience({
1338
- state: { location: 'room2', goal: 'room3' },
1339
- action: 'move_east',
1340
- result: 'reached room3',
1341
- reward: 1.0,
1342
- embedding: embed('navigating from room2 to room3')
1343
- });
1344
-
1345
- await agent.storeExperience({
1346
- state: { location: 'room1', goal: 'room3' },
1347
- action: 'move_south',
1348
- result: 'hit wall',
1349
- reward: -0.5,
1350
- embedding: embed('failed navigation attempt')
1351
- });
1352
-
1353
- // Store learned knowledge
1354
- await agent.storeKnowledge({
1355
- concept: 'navigation_strategy',
1356
- description: 'Moving north then east is efficient for reaching room3 from room1',
1357
- embedding: embed('navigation strategy knowledge'),
1358
- confidence: 0.9
1359
- });
1360
-
1361
- console.log('\n' + '='.repeat(60));
1362
- console.log('PHASE 2: Applying memory');
1363
- console.log('='.repeat(60) + '\n');
1364
-
1365
- // Agent encounters a similar situation
1366
- const currentState = {
1367
- location: 'room1',
1368
- goal: 'room3',
1369
- embedding: embed('navigating from room1 to room3')
1370
- };
1371
-
1372
- // Recall relevant experiences
1373
- const experiences = await agent.recallExperiences(currentState, 3);
1374
-
1375
- console.log('\nπŸ“– Recalled experiences:');
1376
- experiences.forEach((exp, i) => {
1377
- console.log(`${i + 1}. Action: ${exp.action} | Result: ${exp.result} | Reward: ${exp.reward} | Similarity: ${exp.similarity.toFixed(3)}`);
1378
- });
1379
-
1380
- // Query relevant knowledge
1381
- const knowledge = await agent.queryKnowledge({
1382
- embedding: embed('how to navigate efficiently')
1383
- }, 2);
414
+ ### GraphDB
1384
415
 
1385
- console.log('\nπŸ“š Relevant knowledge:');
1386
- knowledge.forEach((k, i) => {
1387
- console.log(`${i + 1}. ${k.concept}: ${k.description} (confidence: ${k.confidence})`);
1388
- });
416
+ ```typescript
417
+ class GraphDB {
418
+ constructor(options?: GraphDBOptions);
1389
419
 
1390
- console.log('\n' + '='.repeat(60));
1391
- console.log('PHASE 3: Reflection');
1392
- console.log('='.repeat(60) + '\n');
420
+ // Cypher execution
421
+ execute(cypher: string, params?: object): QueryResult;
1393
422
 
1394
- // Reflect on learning
1395
- const stats = await agent.reflect();
1396
- const memoryStats = await agent.getStats();
423
+ // Direct API
424
+ createNode(label: string, properties: object): string;
425
+ createRelationship(from: string, to: string, type: string, props?: object): void;
426
+ createHyperedge(nodeIds: string[], type: string, props?: object): string;
1397
427
 
1398
- console.log('\nπŸ“Š Memory Statistics:');
1399
- console.log(` Episodic memories: ${memoryStats.episodicMemorySize}`);
1400
- console.log(` Semantic knowledge: ${memoryStats.semanticMemorySize}`);
1401
- console.log(` Agent ID: ${memoryStats.agentId}`);
428
+ // Traversal
429
+ shortestPath(from: string, to: string): Path | null;
430
+ neighbors(nodeId: string, depth?: number): Node[];
1402
431
  }
1403
-
1404
- main().catch(console.error);
1405
432
  ```
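To make the GraphDB surface concrete, a minimal sketch that mixes the direct API with a parameterized Cypher query. The labels, property names, and relationship type are made up for illustration:

```typescript
import { GraphDB } from 'ruvector'; // assumed export name

const graph = new GraphDB();

// Direct API: create two nodes and connect them.
const alice = graph.createNode('Person', { name: 'Alice' });
const bob = graph.createNode('Person', { name: 'Bob' });
graph.createRelationship(alice, bob, 'KNOWS', { since: 2024 });

// Cypher execution with parameters.
const friends = graph.execute(
  'MATCH (p:Person {name: $name})-[:KNOWS]->(friend) RETURN friend',
  { name: 'Alice' }
);
console.log(friends);

// Traversal helpers.
console.log(graph.shortestPath(alice, bob));
console.log(graph.neighbors(alice, 2));
```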
1406
433
 
1407
- **Expected Output:**
1408
- ```
1409
- 🧠 Memory system initialized for agent: agent-001
1410
-
1411
- ============================================================
1412
- PHASE 1: Learning from experiences
1413
- ============================================================
1414
-
1415
- πŸ’Ύ Stored experience: move_north -> reached room2 (reward: 0.5)
1416
- πŸ’Ύ Stored experience: move_east -> reached room3 (reward: 1.0)
1417
- πŸ’Ύ Stored experience: move_south -> hit wall (reward: -0.5)
1418
- πŸ“š Learned: navigation_strategy
1419
-
1420
- ============================================================
1421
- PHASE 2: Applying memory
1422
- ============================================================
1423
-
1424
- πŸ” Recalling similar experiences...
1425
- πŸ“ Recalled 3 relevant experiences
434
+ ### GNNLayer
1426
435
 
1427
- πŸ“– Recalled experiences:
1428
- 1. Action: move_east | Result: reached room3 | Reward: 1.0 | Similarity: 0.892
1429
- 2. Action: move_north | Result: reached room2 | Reward: 0.5 | Similarity: 0.876
1430
- 3. Action: move_south | Result: hit wall | Reward: -0.5 | Similarity: 0.654
1431
-
1432
- πŸ“š Relevant knowledge:
1433
- 1. navigation_strategy: Moving north then east is efficient for reaching room3 from room1 (confidence: 0.9)
1434
-
1435
- ============================================================
1436
- PHASE 3: Reflection
1437
- ============================================================
436
+ ```typescript
437
+ class GNNLayer {
438
+ constructor(inputDim: number, outputDim: number, numHeads: number);
1438
439
 
1439
- πŸ€” Reflecting on experiences...
1440
- πŸ“Š Total experiences: 3
1441
- πŸ’‘ Analysis complete
440
+ // Inference
441
+ forward(query: number[], neighbors: number[][], weights: number[]): number[];
1442
442
 
1443
- πŸ“Š Memory Statistics:
1444
- Episodic memories: 3
1445
- Semantic knowledge: 1
1446
- Agent ID: agent-001
443
+ // Training
444
+ train(data: TrainingData, config?: TrainingConfig): TrainingMetrics;
445
+ save(path: string): void;
446
+ static load(path: string): GNNLayer;
447
+ }
1447
448
  ```
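A sketch of a single GNNLayer inference call matching the `forward` signature above; the dimensions, toy vectors, and edge weights are arbitrary:

```typescript
import { GNNLayer } from 'ruvector'; // assumed export name

// Project 384-dimensional inputs to 256 dimensions using 4 attention heads.
const layer = new GNNLayer(384, 256, 4);

const rand = () => Array.from({ length: 384 }, () => Math.random());
const query = rand();
const neighbors = [rand(), rand(), rand()];
const weights = [0.5, 0.3, 0.2]; // per-neighbor edge weights

// Aggregate the neighborhood into a refined embedding for the query node.
const refined = layer.forward(query, neighbors, weights);
console.log(refined.length); // 256
```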
1448
449
 
1449
- **Use Cases:**
1450
- - βœ… Reinforcement learning agents
1451
- - βœ… Chatbot conversation history
1452
- - βœ… Game AI that learns from gameplay
1453
- - βœ… Personal assistant memory
1454
- - βœ… Robotic navigation systems
1455
-
1456
- ## πŸ—οΈ API Reference
1457
-
1458
- ### Constructor
450
+ ### Router
1459
451
 
1460
452
  ```typescript
1461
- new VectorDb(options: {
1462
- dimensions: number; // Vector dimensionality (required)
1463
- maxElements?: number; // Max vectors (default: 10000)
1464
- storagePath?: string; // Persistent storage path
1465
- ef_construction?: number; // HNSW construction parameter (default: 200)
1466
- m?: number; // HNSW M parameter (default: 16)
1467
- distanceMetric?: string; // 'cosine', 'euclidean', or 'dot' (default: 'cosine')
1468
- })
1469
- ```
1470
-
1471
- ### Methods
453
+ class Router {
454
+ constructor(config?: RouterConfig);
1472
455
 
1473
- #### insert(entry: VectorEntry): Promise<string>
1474
- Insert a vector into the database.
456
+ // Routing
457
+ route(query: number[], candidates: Candidate[]): RoutingDecision;
458
+ routeBatch(queries: number[][], candidates: Candidate[]): RoutingDecision[];
1475
459
 
1476
- ```javascript
1477
- const id = await db.insert({
1478
- id: 'doc_1',
1479
- vector: new Float32Array([0.1, 0.2, 0.3, ...]),
1480
- metadata: { title: 'Document 1' }
1481
- });
460
+ // Management
461
+ reloadModel(): void;
462
+ circuitBreakerStatus(): 'closed' | 'open' | 'half-open';
463
+ resetCircuitBreaker(): void;
464
+ }
1482
465
  ```
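The routing call itself is demonstrated under Use Cases below; this sketch covers the management surface, in particular checking the circuit breaker before trusting a decision. The candidate shape follows the Use Cases example, and the fallback behaviour is illustrative:

```typescript
import { Router } from 'ruvector'; // assumed export name

const router = new Router();

type Candidate = { id: string; embedding: number[] }; // minimal assumed shape

function routeOrNull(queryEmbedding: number[], candidates: Candidate[]) {
  // If repeated failures have opened the breaker, let the caller fall back
  // to a default target instead of routing blindly.
  if (router.circuitBreakerStatus() === 'open') {
    return null;
  }
  return router.route(queryEmbedding, candidates);
}

// After deploying a retrained FastGRNN model, pick it up without a restart.
router.reloadModel();
```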
1483
466
 
1484
- #### search(query: SearchQuery): Promise<SearchResult[]>
1485
- Search for similar vectors.
467
+ ## Use Cases
1486
468
 
1487
- ```javascript
1488
- const results = await db.search({
1489
- vector: new Float32Array([0.1, 0.2, 0.3, ...]),
1490
- k: 10,
1491
- threshold: 0.7
1492
- });
1493
- ```
1494
-
1495
- #### get(id: string): Promise<VectorEntry | null>
1496
- Retrieve a vector by ID.
469
+ ### Agentic AI / Multi-Agent Systems
1497
470
 
1498
471
  ```javascript
1499
- const entry = await db.get('doc_1');
1500
- if (entry) {
1501
- console.log(entry.vector, entry.metadata);
1502
- }
1503
- ```
1504
-
1505
- #### delete(id: string): Promise<boolean>
1506
- Remove a vector from the database.
472
+ // Route tasks to specialized agents
473
+ const agents = [
474
+ { id: 'researcher', embedding: researchEmb, capabilities: ['search', 'summarize'] },
475
+ { id: 'coder', embedding: codeEmb, capabilities: ['code', 'debug'] },
476
+ { id: 'analyst', embedding: analysisEmb, capabilities: ['data', 'visualize'] }
477
+ ];
1507
478
 
1508
- ```javascript
1509
- const deleted = await db.delete('doc_1');
1510
- console.log(deleted ? 'Deleted' : 'Not found');
479
+ const taskEmb = await embed("Write a Python script to analyze sales data");
480
+ const decision = router.route(taskEmb, agents);
481
+ // Routes to 'coder' agent with high confidence
1511
482
  ```
1512
483
 
1513
- #### len(): Promise<number>
1514
- Get the total number of vectors.
484
+ ### Recommendation Systems
1515
485
 
1516
486
  ```javascript
1517
- const count = await db.len();
1518
- console.log(`Total vectors: ${count}`);
487
+ const recommendations = graph.execute(`
488
+ MATCH (user:User {id: $userId})-[:VIEWED]->(item:Product)
489
+ MATCH (item)-[:SIMILAR_TO]->(rec:Product)
490
+ WHERE NOT (user)-[:VIEWED]->(rec)
491
+ AND vector.similarity(rec.embedding, $userPreference) > 0.7
492
+ RETURN rec
493
+ ORDER BY vector.similarity(rec.embedding, $userPreference) DESC
494
+ LIMIT 10
495
+ `);
1519
496
  ```
1520
497
 
1521
- ## 🎨 Advanced Configuration
1522
-
1523
- ### HNSW Parameters
498
+ ### Semantic Caching
1524
499
 
1525
500
  ```javascript
1526
- const db = new VectorDb({
1527
- dimensions: 384,
1528
- maxElements: 1000000,
1529
- ef_construction: 200, // Higher = better recall, slower build
1530
- m: 16, // Higher = better recall, more memory
1531
- storagePath: './large-db.db'
1532
- });
1533
- ```
1534
-
1535
- **Parameter Guidelines:**
1536
- - `ef_construction`: 100-400 (higher = better recall, slower indexing)
1537
- - `m`: 8-64 (higher = better recall, more memory)
1538
- - Default values work well for most use cases
501
+ const cache = new VectorDB(1536);
1539
502
 
1540
- ### Distance Metrics
503
+ async function cachedLLMCall(prompt) {
504
+ const promptEmb = await embed(prompt);
1541
505
 
1542
- ```javascript
1543
- // Cosine similarity (default, best for normalized vectors)
1544
- const db1 = new VectorDb({
1545
- dimensions: 128,
1546
- distanceMetric: 'cosine'
1547
- });
506
+ // Check semantic cache
507
+ const cached = await cache.search(promptEmb, 1);
508
+ if (cached[0]?.score > 0.95) {
509
+ return cached[0].metadata.response; // Cache hit
510
+ }
1548
511
 
1549
- // Euclidean distance (L2, best for spatial data)
1550
- const db2 = new VectorDb({
1551
- dimensions: 128,
1552
- distanceMetric: 'euclidean'
1553
- });
512
+ // Cache miss - call LLM
513
+ const response = await llm.complete(prompt);
514
+ await cache.insert(generateId(), promptEmb, { prompt, response });
1554
515
 
1555
- // Dot product (best for pre-normalized vectors)
1556
- const db3 = new VectorDb({
1557
- dimensions: 128,
1558
- distanceMetric: 'dot'
1559
- });
516
+ return response;
517
+ }
1560
518
  ```
1561
519
 
1562
- ### Persistence
520
+ ### Document Q&A with Sources
1563
521
 
1564
522
  ```javascript
1565
- // Auto-save to disk
1566
- const persistent = new VectorDb({
1567
- dimensions: 128,
1568
- storagePath: './persistent.db'
1569
- });
523
+ async function qaWithSources(question) {
524
+ const results = await db.search(await embed(question), 5);
1570
525
 
1571
- // In-memory only (faster, but data lost on exit)
1572
- const temporary = new VectorDb({
1573
- dimensions: 128
1574
- // No storagePath = in-memory
1575
- });
1576
- ```
1577
-
1578
- ## πŸ“¦ Platform Support
1579
-
1580
- Automatically installs the correct implementation for:
1581
-
1582
- ### Native (Rust) - Best Performance
1583
- - **Linux**: x64, ARM64 (GNU libc)
1584
- - **macOS**: x64 (Intel), ARM64 (Apple Silicon)
1585
- - **Windows**: x64 (MSVC)
1586
-
1587
- Performance: **<0.5ms latency**, **50K+ ops/sec**
1588
-
1589
- ### WASM Fallback - Universal Compatibility
1590
- - Any platform where native module isn't available
1591
- - Browser environments (experimental)
1592
- - Alpine Linux (musl) and other non-glibc systems
1593
-
1594
- Performance: **10-50ms latency**, **~1K ops/sec**
1595
-
1596
- **Node.js 18+ required** for all platforms.
526
+ const answer = await llm.complete({
527
+ prompt: `Answer based on these sources:\n${results.map(r =>
528
+ `[${r.id}] ${r.metadata.content}`
529
+ ).join('\n')}\n\nQuestion: ${question}`,
530
+ });
1597
531
 
1598
- ## πŸ”§ Building from Source
532
+ return {
533
+ answer,
534
+ sources: results.map(r => ({
535
+ id: r.id,
536
+ title: r.metadata.title,
537
+ relevance: r.score
538
+ }))
539
+ };
540
+ }
541
+ ```
1599
542
 
1600
- If you need to rebuild the native module:
543
+ ## Architecture
544
+
545
+ ```
546
+ ┌────────────────────────────────────────────────────────────────────┐
547
+ │                              ruvector                              │
548
+ │                      (All-in-One npm Package)                      │
549
+ ├────────────────┬────────────────┬────────────────┬─────────────────┤
550
+ │    VectorDB    │    GraphDB     │    GNNLayer    │      Router     │
551
+ │    (Search)    │    (Cypher)    │      (ML)      │  (AI Routing)   │
552
+ ├────────────────┴────────────────┴────────────────┴─────────────────┤
553
+ │                          Rust Core Engine                          │
554
+ │  • HNSW Index     • Cypher Parser     • Attention     • FastGRNN   │
555
+ │  • SIMD Ops       • Hyperedges        • Training      • Uncertainty│
556
+ └────────────────────────────────────────────────────────────────────┘
557
+                                   │
558
+                ┌──────────────────┼──────────────────┐
559
+                │                  │                  │
560
+           ┌────▼────┐        ┌────▼────┐        ┌────▼────┐
561
+           │ Native  │        │  WASM   │        │   FFI   │
562
+           │(napi-rs)│        │(wasm32) │        │   (C)   │
563
+           └─────────┘        └─────────┘        └─────────┘
564
+                │                  │                  │
565
+           ┌────▼────┐        ┌────▼────┐        ┌────▼────┐
566
+           │ Node.js │        │ Browser │        │ Python  │
567
+           │   Bun   │        │  Deno   │        │   Go    │
568
+           └─────────┘        └─────────┘        └─────────┘
569
+ ```
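The Node.js column above covers two backends: the napi-rs native addon, with the WASM build as a portable fallback (see Platform Support below). ruvector resolves this internally; the sketch only illustrates the native-first/fallback pattern at the application level, using the package names from the npm table and an assumed failure mode:

```typescript
// Illustrative only: prefer the full package (native addon), fall back to the
// WASM-only package on runtimes where the native build cannot load.
async function loadVectorBackend() {
  try {
    return await import('ruvector');
  } catch {
    return await import('@ruvector/wasm');
  }
}

loadVectorBackend().then((backend) => {
  console.log('Loaded backend:', Object.keys(backend));
});
```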
570
+
571
+ ## Platform Support
572
+
573
+ | Platform | Backend | Installation |
574
+ |----------|---------|--------------|
575
+ | **Node.js 16+** | Native (napi-rs) | `npm install ruvector` |
576
+ | **Node.js (fallback)** | WASM | Automatic if native fails |
577
+ | **Bun** | Native | `bun add ruvector` |
578
+ | **Deno** | WASM | `import from "npm:ruvector"` |
579
+ | **Browser** | WASM | `npm install @ruvector/wasm` |
580
+ | **Cloudflare Workers** | WASM | `npm install @ruvector/wasm` |
581
+ | **Vercel Edge** | WASM | `npm install @ruvector/wasm` |
582
+
583
+ ## Documentation
584
+
585
+ - [Getting Started Guide](https://github.com/ruvnet/ruvector/blob/main/docs/guide/GETTING_STARTED.md)
586
+ - [Cypher Reference](https://github.com/ruvnet/ruvector/blob/main/docs/api/CYPHER_REFERENCE.md)
587
+ - [GNN Architecture](https://github.com/ruvnet/ruvector/blob/main/docs/gnn-layer-implementation.md)
588
+ - [Performance Tuning](https://github.com/ruvnet/ruvector/blob/main/docs/optimization/PERFORMANCE_TUNING_GUIDE.md)
589
+ - [API Reference](https://github.com/ruvnet/ruvector/tree/main/docs/api)
590
+
591
+ ## Contributing
1601
592
 
1602
593
  ```bash
1603
- # Install Rust toolchain
1604
- curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
1605
-
1606
594
  # Clone repository
1607
595
  git clone https://github.com/ruvnet/ruvector.git
1608
596
  cd ruvector
1609
597
 
1610
- # Build native module
1611
- cd npm/packages/core
1612
- npm run build:napi
1613
-
1614
- # Build wrapper package
1615
- cd ../ruvector
598
+ # Install dependencies
1616
599
  npm install
1617
- npm run build
1618
600
 
1619
601
  # Run tests
1620
602
  npm test
1621
- ```
1622
-
1623
- **Requirements:**
1624
- - Rust 1.77+
1625
- - Node.js 18+
1626
- - Cargo
1627
-
1628
- ## 🌍 Ecosystem
1629
-
1630
- ### Related Packages
1631
603
 
1632
- - **[ruvector-core](https://www.npmjs.com/package/ruvector-core)** - Core native bindings (lower-level API)
1633
- - **[ruvector-wasm](https://www.npmjs.com/package/ruvector-wasm)** - WebAssembly implementation for browsers
1634
- - **[ruvector-cli](https://www.npmjs.com/package/ruvector-cli)** - Standalone CLI tools
1635
-
1636
- ### Platform-Specific Packages (auto-installed)
1637
-
1638
- - **[ruvector-core-linux-x64-gnu](https://www.npmjs.com/package/ruvector-core-linux-x64-gnu)**
1639
- - **[ruvector-core-linux-arm64-gnu](https://www.npmjs.com/package/ruvector-core-linux-arm64-gnu)**
1640
- - **[ruvector-core-darwin-x64](https://www.npmjs.com/package/ruvector-core-darwin-x64)**
1641
- - **[ruvector-core-darwin-arm64](https://www.npmjs.com/package/ruvector-core-darwin-arm64)**
1642
- - **[ruvector-core-win32-x64-msvc](https://www.npmjs.com/package/ruvector-core-win32-x64-msvc)**
1643
-
1644
- ## πŸ› Troubleshooting
1645
-
1646
- ### Native Module Not Loading
1647
-
1648
- If you see "Cannot find module 'ruvector-core-*'":
1649
-
1650
- ```bash
1651
- # Reinstall with optional dependencies
1652
- npm install --include=optional ruvector
1653
-
1654
- # Verify platform
1655
- npx ruvector info
604
+ # Build
605
+ npm run build
1656
606
 
1657
- # Check Node.js version (18+ required)
1658
- node --version
607
+ # Benchmarks
608
+ npm run bench
1659
609
  ```
1660
610
 
1661
- ### WASM Fallback Performance
611
+ See [CONTRIBUTING.md](https://github.com/ruvnet/ruvector/blob/main/docs/development/CONTRIBUTING.md) for guidelines.
1662
612
 
1663
- If you're using WASM fallback and need better performance:
613
+ ## License
1664
614
 
1665
- 1. **Install native toolchain** for your platform
1666
- 2. **Rebuild native module**: `npm rebuild ruvector`
1667
- 3. **Verify native**: `npx ruvector info` should show "native (Rust)"
1668
-
1669
- ### Platform Compatibility
1670
-
1671
- - **Alpine Linux**: Uses WASM fallback (musl not supported)
1672
- - **Windows ARM**: Not yet supported, uses WASM fallback
1673
- - **Node.js < 18**: Not supported, upgrade to Node.js 18+
1674
-
1675
- ## πŸ“š Documentation
1676
-
1677
- - 🏠 [Homepage](https://ruv.io)
1678
- - πŸ“¦ [GitHub Repository](https://github.com/ruvnet/ruvector)
1679
- - πŸ“š [Full Documentation](https://github.com/ruvnet/ruvector/tree/main/docs)
1680
- - πŸš€ [Getting Started Guide](https://github.com/ruvnet/ruvector/blob/main/docs/guide/GETTING_STARTED.md)
1681
- - πŸ“– [API Reference](https://github.com/ruvnet/ruvector/blob/main/docs/api/NODEJS_API.md)
1682
- - 🎯 [Performance Tuning](https://github.com/ruvnet/ruvector/blob/main/docs/optimization/PERFORMANCE_TUNING_GUIDE.md)
1683
- - πŸ› [Issue Tracker](https://github.com/ruvnet/ruvector/issues)
1684
- - πŸ’¬ [Discussions](https://github.com/ruvnet/ruvector/discussions)
1685
-
1686
- ## 🀝 Contributing
1687
-
1688
- We welcome contributions! See [CONTRIBUTING.md](https://github.com/ruvnet/ruvector/blob/main/docs/development/CONTRIBUTING.md) for guidelines.
1689
-
1690
- ### Quick Start
1691
-
1692
- 1. Fork the repository
1693
- 2. Create a feature branch: `git checkout -b feature/amazing-feature`
1694
- 3. Commit changes: `git commit -m 'Add amazing feature'`
1695
- 4. Push to branch: `git push origin feature/amazing-feature`
1696
- 5. Open a Pull Request
1697
-
1698
- ## 🌐 Community & Support
1699
-
1700
- - **GitHub**: [github.com/ruvnet/ruvector](https://github.com/ruvnet/ruvector) - ⭐ Star and follow
1701
- - **Discord**: [Join our community](https://discord.gg/ruvnet) - Chat with developers
1702
- - **Twitter**: [@ruvnet](https://twitter.com/ruvnet) - Follow for updates
1703
- - **Issues**: [Report bugs](https://github.com/ruvnet/ruvector/issues)
1704
-
1705
- ### Enterprise Support
1706
-
1707
- Need custom development or consulting?
1708
-
1709
- πŸ“§ [enterprise@ruv.io](mailto:enterprise@ruv.io)
1710
-
1711
- ## πŸ“œ License
1712
-
1713
- **MIT License** - see [LICENSE](https://github.com/ruvnet/ruvector/blob/main/LICENSE) for details.
1714
-
1715
- Free for commercial and personal use.
1716
-
1717
- ## πŸ™ Acknowledgments
1718
-
1719
- Built with battle-tested technologies:
1720
-
1721
- - **HNSW**: Hierarchical Navigable Small World graphs
1722
- - **SIMD**: Hardware-accelerated vector operations via simsimd
1723
- - **Rust**: Memory-safe, zero-cost abstractions
1724
- - **NAPI-RS**: High-performance Node.js bindings
1725
- - **WebAssembly**: Universal browser compatibility
615
+ MIT License - free for commercial and personal use.
1726
616
 
1727
617
  ---
1728
618
 
1729
619
  <div align="center">
1730
620
 
1731
- **Built with ❤️ by [rUv](https://ruv.io)**
621
+ **Built by [rUv](https://ruv.io)** • [GitHub](https://github.com/ruvnet/ruvector) • [npm](https://npmjs.com/package/ruvector)
1732
622
 
1733
- [![npm](https://img.shields.io/npm/v/ruvector.svg)](https://www.npmjs.com/package/ruvector)
1734
- [![GitHub Stars](https://img.shields.io/github/stars/ruvnet/ruvector?style=social)](https://github.com/ruvnet/ruvector)
1735
- [![Twitter](https://img.shields.io/twitter/follow/ruvnet?style=social)](https://twitter.com/ruvnet)
623
+ *Vector search that gets smarter over time.*
1736
624
 
1737
- **[Get Started](https://github.com/ruvnet/ruvector/blob/main/docs/guide/GETTING_STARTED.md)** β€’ **[Documentation](https://github.com/ruvnet/ruvector/tree/main/docs)** β€’ **[API Reference](https://github.com/ruvnet/ruvector/blob/main/docs/api/NODEJS_API.md)** β€’ **[Contributing](https://github.com/ruvnet/ruvector/blob/main/docs/development/CONTRIBUTING.md)**
625
+ **[⭐ Star on GitHub](https://github.com/ruvnet/ruvector)** if RuVector helps your project!
1738
626
 
1739
627
  </div>