@ruvector/edge-net 0.1.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md ADDED
@@ -0,0 +1,1168 @@
1
+ # @ruvector/edge-net
2
+
3
+ **Collective AI Computing Network - Share, Contribute, Compute Together**
4
+
5
+ A distributed computing platform that enables collective resource sharing for AI workloads. Contributors share idle compute resources, earning participation units (rUv) that can be used to access the network's collective AI computing power.
6
+
7
+ ```
8
+ ┌─────────────────────────────────────────────────────────────────────────────┐
9
+ │ EDGE-NET: COLLECTIVE AI COMPUTING NETWORK │
10
+ ├─────────────────────────────────────────────────────────────────────────────┤
11
+ │ │
12
+ │ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
13
+ │ │ Your │ │ Collective │ │ AI Tasks │ │
14
+ │ │ Browser │◄─────►│ Network │◄─────►│ Completed │ │
15
+ │ │ (Idle CPU) │ P2P │ (1000s) │ │ for You │ │
16
+ │ └─────────────┘ └─────────────┘ └─────────────┘ │
17
+ │ │ │ │ │
18
+ │ ▼ ▼ ▼ │
19
+ │ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
20
+ │ │ Contribute │ │ Earn rUv │ │ Use rUv │ │
21
+ │ │ Compute │ ───► │ Units │ ───► │ for AI │ │
22
+ │ │ When Idle │ │ (Credits) │ │ Workloads │ │
23
+ │ └─────────────┘ └─────────────┘ └─────────────┘ │
24
+ │ │
25
+ │ Vector Search │ Embeddings │ Semantic Match │ Encryption │ Compression │
26
+ │ │
27
+ └─────────────────────────────────────────────────────────────────────────────┘
28
+ ```
29
+
30
+ ## Table of Contents
31
+
32
+ - [What is Edge-Net?](#what-is-edge-net)
33
+ - [Key Features](#key-features)
34
+ - [Quick Start](#quick-start)
35
+ - [How It Works](#how-it-works)
36
+ - [AI Computing Tasks](#ai-computing-tasks)
37
+ - [Pi-Key Identity System](#pi-key-identity-system)
38
+ - [Self-Optimization](#self-optimization)
39
+ - [Tutorials](#tutorials)
40
+ - [API Reference](#api-reference)
41
+ - [Development](#development)
42
+ - [Exotic AI Capabilities](#exotic-ai-capabilities)
43
+ - [Core Architecture & Capabilities](#core-architecture--capabilities)
44
+ - [Self-Learning Hooks & MCP Integration](#self-learning-hooks--mcp-integration)
45
+
46
+ ---
47
+
48
+ ## What is Edge-Net?
49
+
50
+ Edge-net creates a **collective computing network** where participants share idle browser resources to power distributed AI workloads. Think of it as a cooperative where:
51
+
52
+ 1. **You Contribute** - Share unused CPU cycles when browsing
53
+ 2. **You Earn** - Accumulate rUv (Resource Utility Vouchers) based on contribution
54
+ 3. **You Use** - Spend rUv to run AI tasks across the collective network
55
+ 4. **Network Grows** - More participants = more collective computing power
56
+
57
+ ### Why Collective AI Computing?
58
+
59
+ | Traditional AI Computing | Collective Edge-Net |
60
+ |-------------------------|---------------------|
61
+ | Expensive GPU servers | Free idle browser CPUs |
62
+ | Centralized data centers | Distributed global network |
63
+ | Pay-per-use pricing | Contribution-based access |
64
+ | Single point of failure | Resilient P2P mesh |
65
+ | Limited by your hardware | Scale with the collective |
66
+
67
+ ### Core Principles
68
+
69
+ | Principle | Description |
70
+ |-----------|-------------|
71
+ | **Collectivity** | Resources are pooled and shared fairly |
72
+ | **Contribution** | Earn by giving, spend by using |
73
+ | **Self-Sustaining** | Network operates without central control |
74
+ | **Privacy-First** | Pi-Key cryptographic identity system |
75
+ | **Adaptive** | Q-learning security protects the collective |
76
+
77
+ ---
78
+
79
+ ## Key Features
80
+
81
+ ### Collective Resource Sharing
82
+
83
+ | Feature | Benefit |
84
+ |---------|---------|
85
+ | **Idle CPU Utilization** | Use resources that would otherwise be wasted |
86
+ | **Browser-Based** | No installation, runs in any modern browser |
87
+ | **Adjustable Contribution** | Control how much you share (10-50% CPU) |
88
+ | **Battery Aware** | Automatically reduces on battery power |
89
+ | **Fair Distribution** | Work routed based on capability matching |
90
+
91
+ ### AI Computing Capabilities
92
+
93
+ Edge-net provides a complete AI stack that runs entirely in your browser. Each component is designed to be lightweight and fast, and to work without a central server.
94
+
95
+ ```
96
+ ┌─────────────────────────────────────────────────────────────────────────────┐
97
+ │ AI INTELLIGENCE STACK │
98
+ ├─────────────────────────────────────────────────────────────────────────────┤
99
+ │ │
100
+ │ ┌─────────────────────────────────────────────────────────────────────┐ │
101
+ │ │ MicroLoRA Adapter Pool (from ruvLLM) │ │
102
+ │ │ • LRU-managed pool (16 slots) • Rank 1-16 adaptation │ │
103
+ │ │ • <50µs rank-1 forward • 2,236+ ops/sec with batch 32 │ │
104
+ │ │ • 4-bit/8-bit quantization • P2P shareable adapters │ │
105
+ │ └─────────────────────────────────────────────────────────────────────┘ │
106
+ │ │
107
+ │ ┌─────────────────────────────────────────────────────────────────────┐ │
108
+ │ │ SONA - Self-Optimizing Neural Architecture │ │
109
+ │ │ • Instant Loop: Per-request MicroLoRA adaptation │ │
110
+ │ │ • Background Loop: Hourly K-means consolidation │ │
111
+ │ │ • Deep Loop: Weekly EWC++ consolidation (catastrophic forgetting) │ │
112
+ │ └─────────────────────────────────────────────────────────────────────┘ │
113
+ │ │
114
+ │ ┌──────────────────────┐ ┌──────────────────────┐ ┌─────────────────┐ │
115
+ │ │ HNSW Vector Index │ │ Federated Learning │ │ ReasoningBank │ │
116
+ │ │ • 150x faster │ │ • TopK Sparsify 90% │ │ • Trajectories │ │
117
+ │ │ • O(log N) search │ │ • Byzantine tolerant│ │ • Pattern learn │ │
118
+ │ │ • Incremental P2P │ │ • Diff privacy │ │ • 87x energy │ │
119
+ │ └──────────────────────┘ └──────────────────────┘ └─────────────────┘ │
120
+ │ │
121
+ └─────────────────────────────────────────────────────────────────────────────┘
122
+ ```
123
+
124
+ #### Core AI Tasks
125
+
126
+ | Task Type | Use Case | How It Works |
127
+ |-----------|----------|--------------|
128
+ | **Vector Search** | Find similar items | HNSW index with 150x speedup |
129
+ | **Embeddings** | Text understanding | Generate semantic vectors |
130
+ | **Semantic Match** | Intent detection | Classify meaning |
131
+ | **LoRA Inference** | Task adaptation | MicroLoRA <100µs forward |
132
+ | **Pattern Learning** | Self-optimization | ReasoningBank trajectories |
133
+
134
+ ---
135
+
136
+ #### MicroLoRA Adapter System
137
+
138
+ > **What it does:** Lets the network specialize for different tasks without retraining the whole model. Think of it like having 16 expert "hats" the AI can quickly swap between - one for searching, one for encryption, one for routing, etc.
139
+
140
+ Ported from **ruvLLM** with enhancements for distributed compute:
141
+
142
+ | Feature | Specification | Performance |
143
+ |---------|--------------|-------------|
144
+ | **Rank Support** | 1-16 | Rank-1: <50µs, Rank-2: <100µs |
145
+ | **Pool Size** | 16 concurrent adapters | LRU eviction policy |
146
+ | **Quantization** | 4-bit, 8-bit | 75% memory reduction |
147
+ | **Batch Size** | 32 (optimal) | 2,236+ ops/sec |
148
+ | **Task Types** | VectorSearch, Embedding, Inference, Crypto, Routing | Auto-routing |
149
+
150
+ **Why it matters:** Traditional AI models are "one size fits all." MicroLoRA lets each node become a specialist for specific tasks in under 100 microseconds - faster than a blink.
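
For a sense of why a rank-1 forward pass fits in a <50µs budget, here is a minimal sketch of the arithmetic a rank-1 adapter adds on top of a frozen weight matrix. The function and variable names are illustrative, not the crate's API; the point is that the correction costs only two dot products and a scaled vector add.

```javascript
// Illustrative sketch (not the edge-net API): the arithmetic behind a rank-1
// LoRA update. The frozen base computes y = W·x; the adapter adds the
// low-rank correction scale · (Aᵀ·x) · B, which for rank 1 is just two dot
// products and a scaled vector add.
function loraRank1Forward(W, x, A, B, scale) {
  const dOut = W.length;          // W is dOut rows of length dIn
  const y = new Float32Array(dOut);

  // Base projection: y = W·x
  for (let i = 0; i < dOut; i++) {
    let sum = 0;
    for (let j = 0; j < x.length; j++) sum += W[i][j] * x[j];
    y[i] = sum;
  }

  // Rank-1 correction: a = Aᵀ·x (a single scalar), then y += scale·a·B
  let a = 0;
  for (let j = 0; j < x.length; j++) a += A[j] * x[j];
  for (let i = 0; i < dOut; i++) y[i] += scale * a * B[i];

  return y;
}
```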
151
+
152
+ ---
153
+
154
+ #### SONA: Self-Optimizing Neural Architecture
155
+
156
+ > **What it does:** The network teaches itself to get better over time using three learning speeds - instant reactions, hourly improvements, and weekly long-term memory. Like how your brain handles reflexes, daily learning, and permanent memories differently.
157
+
158
+ Three-temporal-loop continuous learning system:
159
+
160
+ | Loop | Interval | Mechanism | Purpose |
161
+ |------|----------|-----------|---------|
162
+ | **Instant** | Per-request | MicroLoRA rank-2 | Immediate adaptation |
163
+ | **Background** | Hourly | K-means clustering | Pattern consolidation |
164
+ | **Deep** | Weekly | EWC++ (λ=2000) | Prevent catastrophic forgetting |
165
+
166
+ **Why it matters:** Most AI systems forget old knowledge when learning new things ("catastrophic forgetting"). SONA's three-loop design lets the network learn continuously without losing what it already knows.
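
A minimal sketch of how the three loops could be wired from application code. The `adaptInstant`, `consolidateHourly`, and `consolidateWeekly` callbacks are hypothetical; only the cadences come from the table above, and the real orchestration happens inside edge-net.

```javascript
// Minimal sketch of the three SONA loops. Only the per-request / hourly /
// weekly cadence comes from the table above; the callback names are assumed.
function wireSonaLoops(node, { adaptInstant, consolidateHourly, consolidateWeekly }) {
  // Instant loop: adapt on every request that comes back with feedback.
  const submit = async (type, payload, maxCost) => {
    const result = await node.submitTask(type, payload, maxCost);
    adaptInstant(type, result);               // MicroLoRA-style per-request tweak
    return result;
  };

  // Background loop: hourly pattern consolidation (K-means in the real system).
  const hourly = setInterval(consolidateHourly, 60 * 60 * 1000);

  // Deep loop: weekly consolidation guarding against catastrophic forgetting.
  const weekly = setInterval(consolidateWeekly, 7 * 24 * 60 * 60 * 1000);

  return { submit, stop: () => { clearInterval(hourly); clearInterval(weekly); } };
}
```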
167
+
168
+ ---
169
+
170
+ #### HNSW Vector Index
171
+
172
+ > **What it does:** Finds similar items incredibly fast by organizing data like a multi-level highway system. Instead of checking every item (like walking door-to-door), it takes smart shortcuts to find what you need 150x faster.
173
+
174
+ | Parameter | Default | Description |
175
+ |-----------|---------|-------------|
176
+ | **M** | 32 | Max connections per node |
177
+ | **M_max_0** | 64 | Max connections at layer 0 |
178
+ | **ef_construction** | 200 | Build-time beam width |
179
+ | **ef_search** | 64 | Search-time beam width |
180
+ | **Performance** | 150x | Speedup vs linear scan |
181
+
182
+ **Why it matters:** When searching millions of vectors, naive search takes seconds. HNSW takes milliseconds - essential for real-time AI responses.
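
The parameters above control a fairly small amount of graph-search code. Below is a hedged sketch of the greedy beam search HNSW runs on each layer (single layer only, plain arrays, no real index structure); `ef` is the beam width that `ef_search` / `ef_construction` tune. This illustrates the technique, not edge-net's implementation.

```javascript
// Greedy layer search, the core of HNSW querying. `graph` maps node id ->
// neighbour ids, `vectors` maps id -> vector, `dist` is any distance function
// (e.g. cosine distance), `ef` is the beam width.
function searchLayer(graph, vectors, dist, query, entryId, ef) {
  const visited = new Set([entryId]);
  const d0 = dist(vectors[entryId], query);
  const candidates = [{ id: entryId, d: d0 }]; // frontier to expand
  const best = [{ id: entryId, d: d0 }];       // top-ef results, sorted ascending

  while (candidates.length > 0) {
    candidates.sort((a, b) => a.d - b.d);
    const current = candidates.shift();
    // Stop once the closest unexpanded candidate is worse than our worst kept result.
    if (best.length >= ef && current.d > best[best.length - 1].d) break;

    for (const neighbour of graph[current.id] || []) {
      if (visited.has(neighbour)) continue;
      visited.add(neighbour);
      const d = dist(vectors[neighbour], query);
      if (best.length < ef || d < best[best.length - 1].d) {
        candidates.push({ id: neighbour, d });
        best.push({ id: neighbour, d });
        best.sort((a, b) => a.d - b.d);
        if (best.length > ef) best.pop();
      }
    }
  }
  return best; // ef approximate nearest neighbours, closest first
}
```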
183
+
184
+ ---
185
+
186
+ #### Federated Learning
187
+
188
+ > **What it does:** Nodes teach each other without sharing their private data. Each node trains on its own data, then shares only the "lessons learned" (gradients) - like students sharing study notes instead of copying each other's homework.
189
+
190
+ P2P gradient gossip without central coordinator:
191
+
192
+ | Feature | Mechanism | Benefit |
193
+ |---------|-----------|---------|
194
+ | **TopK Sparsification** | 90% compression | Only share the most important updates |
195
+ | **Rep-Weighted FedAvg** | Reputation scoring | Trusted nodes have more influence |
196
+ | **Byzantine Tolerance** | Outlier detection, clipping | Ignore malicious or broken nodes |
197
+ | **Differential Privacy** | Noise injection | Mathematically guaranteed privacy |
198
+ | **Gossip Protocol** | Eventually consistent | Works even if some nodes go offline |
199
+
200
+ **Why it matters:** Traditional AI training requires sending all your data to a central server. Federated learning keeps your data local while still benefiting from collective intelligence.
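
As a concrete illustration of the TopK step, the sketch below keeps only the largest-magnitude 10% of a gradient and ships it as (index, value) pairs. Error feedback, reputation weighting, and the privacy noise from the table are omitted; this is not the crate's code.

```javascript
// TopK gradient sparsification: keep the largest-magnitude 10% of entries
// (≈90% compression) and gossip them as sparse (index, value) pairs.
function topKSparsify(gradient, keepRatio = 0.1) {
  const k = Math.max(1, Math.floor(gradient.length * keepRatio));
  const ranked = Array.from(gradient, (value, index) => ({ index, value }))
    .sort((a, b) => Math.abs(b.value) - Math.abs(a.value));
  return ranked.slice(0, k); // sparse update to share with peers
}

// Example: a 1,000-entry gradient shrinks to 100 (index, value) pairs.
const grad = Float32Array.from({ length: 1000 }, () => Math.random() - 0.5);
console.log(topKSparsify(grad).length); // 100
```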
201
+
202
+ ---
203
+
204
+ #### ReasoningBank & Learning Intelligence
205
+
206
+ > **What it does:** The network's "memory system" that remembers what worked and what didn't. Like keeping a journal of successful strategies that any node can learn from.
207
+
208
+ | Component | What It Does | Why It's Fast |
209
+ |-----------|--------------|---------------|
210
+ | **ReasoningBank** | Stores successful task patterns | Semantic search for quick recall |
211
+ | **Pattern Extractor** | Groups similar experiences together | K-means finds common patterns |
212
+ | **Multi-Head Attention** | Decides which node handles each task | Parallel evaluation of options |
213
+ | **Spike-Driven Attention** | Ultra-low-power decision making | 87x more energy efficient |
214
+
215
+ **Why it matters:** Without memory, the network would repeat the same mistakes. ReasoningBank lets nodes learn from each other's successes and failures across the entire collective.
216
+
217
+ ### Pi-Key Identity System
218
+
219
+ Ultra-compact cryptographic identity using mathematical constants:
220
+
221
+ | Key Type | Size | Purpose |
222
+ |----------|------|---------|
223
+ | **π (Pi-Key)** | 40 bytes | Your permanent identity |
224
+ | **e (Session)** | 34 bytes | Temporary encrypted sessions |
225
+ | **φ (Genesis)** | 21 bytes | Network origin markers |
226
+
227
+ ### Self-Optimizing Network
228
+
229
+ - **Automatic Task Routing** - Work goes to best-suited nodes
230
+ - **Topology Optimization** - Network self-organizes for efficiency
231
+ - **Q-Learning Security** - Learns to defend against threats
232
+ - **Economic Balance** - Self-sustaining resource economy
233
+
234
+ ---
235
+
236
+ ## Quick Start
237
+
238
+ ### 1. Add to Your Website
239
+
240
+ ```html
241
+ <script type="module">
242
+ import init, { EdgeNetNode, EdgeNetConfig } from '@ruvector/edge-net';
243
+
244
+ async function joinCollective() {
245
+ await init();
246
+
247
+ // Join the collective with your site ID
248
+ const node = new EdgeNetConfig('my-website')
249
+ .cpuLimit(0.3) // Contribute 30% CPU when idle
250
+ .memoryLimit(256 * 1024 * 1024) // 256MB max
251
+ .respectBattery(true) // Reduce on battery
252
+ .build();
253
+
254
+ // Start contributing to the collective
255
+ node.start();
256
+
257
+ // Monitor your participation
258
+ setInterval(() => {
259
+ console.log(`Contributed: ${node.ruvBalance()} rUv`);
260
+ console.log(`Tasks completed: ${node.getStats().tasks_completed}`);
261
+ }, 10000);
262
+ }
263
+
264
+ joinCollective();
265
+ </script>
266
+ ```
267
+
268
+ ### 2. Use the Collective's AI Power
269
+
270
+ ```javascript
271
+ // Submit an AI task to the collective
272
+ const result = await node.submitTask('vector_search', {
273
+ query: embeddings,
274
+ k: 10,
275
+ index: 'shared-knowledge-base'
276
+ }, 5); // Spend up to 5 rUv
277
+
278
+ console.log('Similar items:', result);
279
+ ```
280
+
281
+ ### 3. Monitor Your Contribution
282
+
283
+ ```javascript
284
+ // Check your standing in the collective
285
+ const stats = node.getStats();
286
+ console.log(`
287
+ rUv Earned: ${stats.ruv_earned}
288
+ rUv Spent: ${stats.ruv_spent}
289
+ Net Balance: ${stats.ruv_earned - stats.ruv_spent}
290
+ Tasks Completed: ${stats.tasks_completed}
291
+ Reputation: ${(stats.reputation * 100).toFixed(1)}%
292
+ `);
293
+ ```
294
+
295
+ ---
296
+
297
+ ## How It Works
298
+
299
+ ### The Contribution Cycle
300
+
301
+ ```
302
+ ┌─────────────────────────────────────────────────────────────────────────────┐
303
+ │ CONTRIBUTION CYCLE │
304
+ ├─────────────────────────────────────────────────────────────────────────────┤
305
+ │ │
306
+ │ 1. CONTRIBUTE 2. EARN 3. USE │
307
+ │ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
308
+ │ │ Browser │ │ rUv │ │ AI Tasks │ │
309
+ │ │ detects │ ───► │ credited │ ───► │ submitted │ │
310
+ │ │ idle time │ │ to you │ │ to network │ │
311
+ │ └─────────────┘ └─────────────┘ └─────────────┘ │
312
+ │ │ │ │ │
313
+ │ ▼ ▼ ▼ │
314
+ │ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
315
+ │ │ Process │ │ 10x boost │ │ Results │ │
316
+ │ │ incoming │ │ for early │ │ returned │ │
317
+ │ │ tasks │ │ adopters │ │ to you │ │
318
+ │ └─────────────┘ └─────────────┘ └─────────────┘ │
319
+ │ │
320
+ └─────────────────────────────────────────────────────────────────────────────┘
321
+ ```
322
+
323
+ ### Network Growth Phases
324
+
325
+ The collective grows through natural phases:
326
+
327
+ | Phase | Size | Your Benefit |
328
+ |-------|------|--------------|
329
+ | **Genesis** | 0-10K nodes | 10x rUv multiplier (early adopter bonus) |
330
+ | **Growth** | 10K-50K | Multiplier decreases, network strengthens |
331
+ | **Maturation** | 50K-100K | Stable economy, high reliability |
332
+ | **Independence** | 100K+ | Self-sustaining, maximum collective power |
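
As a rough illustration of the Genesis-phase bonus above: only the 10x figure comes from the table, while the base reward of 1 rUv per task and the linear decay toward 1x by 50K nodes are invented numbers purely to make the arithmetic concrete.

```javascript
// Rough arithmetic only - the 10x Genesis bonus is from the table above, the
// base reward and the decay schedule are illustrative assumptions.
function estimateReward(baseRuvPerTask, networkSize) {
  const multiplier = networkSize <= 10_000
    ? 10
    : Math.max(1, 10 - 9 * (networkSize - 10_000) / 40_000);
  return baseRuvPerTask * multiplier;
}

console.log(estimateReward(1, 5_000));  // 10   (Genesis phase)
console.log(estimateReward(1, 30_000)); // 5.5  (Growth phase, decaying)
console.log(estimateReward(1, 80_000)); // 1    (Maturation and beyond)
```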
333
+
334
+ ### Fair Resource Allocation
335
+
336
+ ```javascript
337
+ // The network automatically optimizes task distribution
338
+ const health = JSON.parse(node.getEconomicHealth());
339
+
340
+ console.log(`
341
+ Resource Velocity: ${health.velocity} // How fast resources circulate
342
+ Utilization: ${health.utilization} // Network capacity used
343
+ Growth Rate: ${health.growth} // Network expansion
344
+ Stability: ${health.stability} // Economic equilibrium
345
+ `);
346
+ ```
347
+
348
+ ---
349
+
350
+ ## AI Computing Tasks
351
+
352
+ ### Vector Search (Distributed Similarity)
353
+
354
+ Find similar items across the collective's distributed index:
355
+
356
+ ```javascript
357
+ // Search for similar documents
358
+ const similar = await node.submitTask('vector_search', {
359
+ query: [0.1, 0.2, 0.3, ...], // Your query vector
360
+ k: 10, // Top 10 results
361
+ index: 'shared-docs' // Distributed index name
362
+ }, 3); // Max 3 rUv
363
+
364
+ // Results from across the network
365
+ similar.forEach(item => {
366
+ console.log(`Score: ${item.score}, ID: ${item.id}`);
367
+ });
368
+ ```
369
+
370
+ ### Embedding Generation
371
+
372
+ Generate semantic embeddings using collective compute:
373
+
374
+ ```javascript
375
+ // Generate embeddings for text
376
+ const embeddings = await node.submitTask('embedding', {
377
+ text: 'Your text to embed',
378
+ model: 'sentence-transformer'
379
+ }, 2);
380
+
381
+ console.log('Embedding vector:', embeddings);
382
+ ```
383
+
384
+ ### Semantic Matching
385
+
386
+ Classify intent or meaning:
387
+
388
+ ```javascript
389
+ // Classify text intent
390
+ const intent = await node.submitTask('semantic_match', {
391
+ text: 'I want to cancel my subscription',
392
+ categories: ['billing', 'support', 'sales', 'general']
393
+ }, 1);
394
+
395
+ console.log('Detected intent:', intent.category);
396
+ ```
397
+
398
+ ### Secure Operations
399
+
400
+ Encrypt data across the network:
401
+
402
+ ```javascript
403
+ // Distributed encryption
404
+ const encrypted = await node.submitTask('encryption', {
405
+ data: sensitiveData,
406
+ operation: 'encrypt',
407
+ key_id: 'my-shared-key'
408
+ }, 2);
409
+ ```
410
+
411
+ ---
412
+
413
+ ## Pi-Key Identity System
414
+
415
+ Your identity in the collective uses key sizes derived from mathematical constants (π, e, φ):
416
+
417
+ ### Key Types
418
+
419
+ ```
420
+ ┌─────────────────────────────────────────────────────────────────────────────┐
421
+ │ PI-KEY IDENTITY SYSTEM │
422
+ ├─────────────────────────────────────────────────────────────────────────────┤
423
+ │ │
424
+ │ π Pi-Key (Identity) e Euler-Key (Session) φ Phi-Key (Genesis) │
425
+ │ ┌─────────────────┐ ┌───────────────┐ ┌───────────────┐ │
426
+ │ │ 314 bits │ │ 271 bits │ │ 161 bits │ │
427
+ │ │ = 40 bytes │ │ = 34 bytes │ │ = 21 bytes │ │
428
+ │ │ │ │ │ │ │ │
429
+ │ │ Your unique │ │ Temporary │ │ Origin │ │
430
+ │ │ identity │ │ sessions │ │ markers │ │
431
+ │ │ (permanent) │ │ (encrypted) │ │ (network) │ │
432
+ │ └─────────────────┘ └───────────────┘ └───────────────┘ │
433
+ │ │
434
+ │ Ed25519 Signing AES-256-GCM SHA-256 Derived │
435
+ │ │
436
+ └─────────────────────────────────────────────────────────────────────────────┘
437
+ ```
438
+
439
+ ### Using Pi-Keys
440
+
441
+ ```javascript
442
+ import { PiKey, SessionKey, GenesisKey } from '@ruvector/edge-net';
443
+
444
+ // Create your permanent identity
445
+ const identity = new PiKey();
446
+ console.log(`Your ID: ${identity.getShortId()}`); // π:a1b2c3d4...
447
+
448
+ // Sign data
449
+ const signature = identity.sign(data);
450
+ const valid = identity.verify(data, signature, identity.getPublicKey());
451
+
452
+ // Create encrypted backup
453
+ const backup = identity.createEncryptedBackup('my-password');
454
+
455
+ // Create temporary session
456
+ const session = SessionKey.create(identity, 3600); // 1 hour
457
+ const encrypted = session.encrypt(sensitiveData);
458
+ const decrypted = session.decrypt(encrypted);
459
+ ```
460
+
461
+ ---
462
+
463
+ ## Security Architecture
464
+
465
+ Edge-net implements production-grade cryptographic security:
466
+
467
+ ### Cryptographic Primitives
468
+
469
+ | Component | Algorithm | Purpose |
470
+ |-----------|-----------|---------|
471
+ | **Key Derivation** | Argon2id (64MB, 3 iterations) | Memory-hard password hashing |
472
+ | **Signing** | Ed25519 | Digital signatures (128-bit security) |
473
+ | **Encryption** | AES-256-GCM | Authenticated encryption |
474
+ | **Hashing** | SHA-256 | Content hashing and verification |
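
The encryption and hashing rows correspond to primitives that are also exposed by the standard browser Web Crypto API, which the sketch below uses purely for illustration. This is not edge-net's internal code path; Argon2id in particular has no Web Crypto equivalent and runs inside the WASM module.

```javascript
// SHA-256 and AES-256-GCM via the standard Web Crypto API (illustration only).
async function demoPrimitives(message) {
  const data = new TextEncoder().encode(message);

  // SHA-256 content hash
  const digest = await crypto.subtle.digest('SHA-256', data);

  // AES-256-GCM authenticated encryption with a random 96-bit nonce
  const key = await crypto.subtle.generateKey(
    { name: 'AES-GCM', length: 256 }, true, ['encrypt', 'decrypt']);
  const iv = crypto.getRandomValues(new Uint8Array(12));
  const ciphertext = await crypto.subtle.encrypt({ name: 'AES-GCM', iv }, key, data);
  const roundTrip = await crypto.subtle.decrypt({ name: 'AES-GCM', iv }, key, ciphertext);

  return {
    sha256: new Uint8Array(digest),
    decryptedOk: new TextDecoder().decode(roundTrip) === message, // true
  };
}
```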
475
+
476
+ ### Identity Protection
477
+
478
+ ```rust
479
+ // Password-protected key export with Argon2id + AES-256-GCM
480
+ let encrypted = identity.export_secret_key("strong_password")?;
481
+
482
+ // Secure memory cleanup (zeroize)
483
+ // All sensitive key material is automatically zeroed after use
484
+ ```
485
+
486
+ ### Authority Verification
487
+
488
+ All resolution events require cryptographic proof:
489
+
490
+ ```rust
491
+ // Ed25519 signature verification for authority decisions
492
+ let signature = ScopedAuthority::sign_resolution(&resolution, &context, &signing_key);
493
+ // Signature verified against registered authority public keys
494
+ ```
495
+
496
+ ### Attack Resistance
497
+
498
+ The RAC (RuVector Adversarial Coherence) protocol defends against:
499
+
500
+ | Attack | Defense |
501
+ |--------|---------|
502
+ | **Sybil** | Stake-weighted voting, witness path diversity |
503
+ | **Eclipse** | Context isolation, Merkle divergence detection |
504
+ | **Byzantine** | 1/3 threshold, escalation tracking |
505
+ | **Replay** | Timestamp validation, duplicate detection |
506
+ | **Double-spend** | Conflict detection, quarantine system |
507
+
508
+ ---
509
+
510
+ ## Self-Optimization
511
+
512
+ The network continuously improves itself:
513
+
514
+ ### Automatic Task Routing
515
+
516
+ ```javascript
517
+ // Get optimal peers for your tasks
518
+ const peers = node.getOptimalPeers(5);
519
+
520
+ // Network learns from every interaction
521
+ node.recordTaskRouting('vector_search', 'peer-123', 45, true);
522
+ ```
523
+
524
+ ### Fitness-Based Evolution
525
+
526
+ ```javascript
527
+ // High-performing nodes can replicate their config
528
+ if (node.shouldReplicate()) {
529
+ const optimalConfig = node.getRecommendedConfig();
530
+ // New nodes inherit successful configurations
531
+ }
532
+
533
+ // Track your contribution
534
+ const fitness = node.getNetworkFitness(); // 0.0 - 1.0
535
+ ```
536
+
537
+ ### Q-Learning Security
538
+
539
+ The collective learns to defend itself:
540
+
541
+ ```javascript
542
+ // Run security audit
543
+ const audit = JSON.parse(node.runSecurityAudit());
544
+ console.log(`Security Score: ${audit.security_score}/10`);
545
+
546
+ // Defends against:
547
+ // - DDoS attacks
548
+ // - Sybil attacks
549
+ // - Byzantine behavior
550
+ // - Eclipse attacks
551
+ // - Replay attacks
552
+ ```
553
+
554
+ ---
555
+
556
+ ## Tutorials
557
+
558
+ ### Tutorial 1: Join the Collective
559
+
560
+ ```javascript
561
+ import init, { EdgeNetConfig } from '@ruvector/edge-net';
562
+
563
+ async function joinCollective() {
564
+ await init();
565
+
566
+ // Configure your contribution
567
+ const node = new EdgeNetConfig('my-site')
568
+ .cpuLimit(0.25) // 25% CPU when idle
569
+ .memoryLimit(128 * 1024 * 1024) // 128MB
570
+ .minIdleTime(5000) // Wait 5s of idle
571
+ .respectBattery(true) // Reduce on battery
572
+ .build();
573
+
574
+ // Join the network
575
+ node.start();
576
+
577
+ // Check your status
578
+ console.log('Joined collective!');
579
+ console.log(`Node ID: ${node.nodeId()}`);
580
+ console.log(`Multiplier: ${node.getMultiplier()}x`);
581
+
582
+ return node;
583
+ }
584
+ ```
585
+
586
+ ### Tutorial 2: Contribute and Earn
587
+
588
+ ```javascript
589
+ async function contributeAndEarn(node) {
590
+ // Process tasks from the collective
591
+ let tasksCompleted = 0;
592
+
593
+ while (true) {
594
+ // Check if we should work
595
+ if (node.isIdle()) {
596
+ // Process a task from the network
597
+ const processed = await node.processNextTask();
598
+
599
+ if (processed) {
600
+ tasksCompleted++;
601
+ const stats = node.getStats();
602
+ console.log(`Completed ${tasksCompleted} tasks, earned ${stats.ruv_earned} rUv`);
603
+ }
604
+ }
605
+
606
+ await new Promise(r => setTimeout(r, 1000));
607
+ }
608
+ }
609
+ ```
610
+
611
+ ### Tutorial 3: Use Collective AI Power
612
+
613
+ ```javascript
614
+ async function useCollectiveAI(node) {
615
+ // Check your balance
616
+ const balance = node.ruvBalance();
617
+ console.log(`Available: ${balance} rUv`);
618
+
619
+ // Submit AI tasks
620
+ const tasks = [
621
+ { type: 'vector_search', cost: 3 },
622
+ { type: 'embedding', cost: 2 },
623
+ { type: 'semantic_match', cost: 1 }
624
+ ];
625
+
626
+ for (const task of tasks) {
627
+ if (node.ruvBalance() >= task.cost) { // re-check: the balance changes as tasks run
628
+ console.log(`Running ${task.type}...`);
629
+ const result = await node.submitTask(
630
+ task.type,
631
+ { data: 'sample' },
632
+ task.cost
633
+ );
634
+ console.log(`Result: ${JSON.stringify(result)}`);
635
+ }
636
+ }
637
+ }
638
+ ```
639
+
640
+ ### Tutorial 4: Monitor Network Health
641
+
642
+ ```javascript
643
+ async function monitorHealth(node) {
644
+ setInterval(() => {
645
+ // Your contribution
646
+ const stats = node.getStats();
647
+ console.log(`
648
+ === Your Contribution ===
649
+ Earned: ${stats.ruv_earned} rUv
650
+ Spent: ${stats.ruv_spent} rUv
651
+ Tasks: ${stats.tasks_completed}
652
+ Reputation: ${(stats.reputation * 100).toFixed(1)}%
653
+ `);
654
+
655
+ // Network health
656
+ const health = JSON.parse(node.getEconomicHealth());
657
+ console.log(`
658
+ === Network Health ===
659
+ Velocity: ${health.velocity.toFixed(2)}
660
+ Utilization: ${(health.utilization * 100).toFixed(1)}%
661
+ Stability: ${health.stability.toFixed(2)}
662
+ `);
663
+
664
+ // Check sustainability
665
+ const sustainable = node.isSelfSustaining(10000, 50000);
666
+ console.log(`Self-sustaining: ${sustainable}`);
667
+
668
+ }, 30000);
669
+ }
670
+ ```
671
+
672
+ ---
673
+
674
+ ## API Reference
675
+
676
+ ### Core Methods
677
+
678
+ | Method | Description | Returns |
679
+ |--------|-------------|---------|
680
+ | `new EdgeNetNode(siteId)` | Join the collective | `EdgeNetNode` |
681
+ | `start()` | Begin contributing | `void` |
682
+ | `pause()` / `resume()` | Control contribution | `void` |
683
+ | `ruvBalance()` | Check your credits | `u64` |
684
+ | `submitTask(type, payload, maxCost)` | Use collective compute | `Promise<Result>` |
685
+ | `processNextTask()` | Process work for others | `Promise<bool>` |
686
+
687
+ ### Identity Methods
688
+
689
+ | Method | Description | Returns |
690
+ |--------|-------------|---------|
691
+ | `new PiKey()` | Generate identity | `PiKey` |
692
+ | `getIdentity()` | Get 40-byte identity | `Vec<u8>` |
693
+ | `sign(data)` | Sign data | `Vec<u8>` |
694
+ | `verify(data, sig, pubkey)` | Verify signature | `bool` |
695
+ | `createEncryptedBackup(password)` | Backup identity | `Vec<u8>` |
696
+
697
+ ### Network Methods
698
+
699
+ | Method | Description | Returns |
700
+ |--------|-------------|---------|
701
+ | `getNetworkFitness()` | Your contribution score | `f32` |
702
+ | `getOptimalPeers(count)` | Best nodes for tasks | `Vec<String>` |
703
+ | `getEconomicHealth()` | Network health metrics | `String (JSON)` |
704
+ | `isSelfSustaining(nodes, tasks)` | Check sustainability | `bool` |
705
+
706
+ ---
707
+
708
+ ## Development
709
+
710
+ ### Build
711
+
712
+ ```bash
713
+ cd examples/edge-net
714
+ wasm-pack build --target web --out-dir pkg
715
+ ```
716
+
717
+ ### Test
718
+
719
+ ```bash
720
+ cargo test
721
+ ```
722
+
723
+ ### Run Simulation
724
+
725
+ ```bash
726
+ cd sim
727
+ npm install
728
+ npm run simulate
729
+ ```
730
+
731
+ ---
732
+
733
+ ## Exotic AI Capabilities
734
+
735
+ Edge-net can be enhanced with exotic AI WASM capabilities for advanced P2P coordination, self-learning, and distributed reasoning. Enable these features by building with the appropriate feature flags.
736
+
737
+ ### Available Feature Flags
738
+
739
+ | Feature | Description | Dependencies |
740
+ |---------|-------------|--------------|
741
+ | `exotic` | Time Crystal, NAO, Morphogenetic Networks | ruvector-exotic-wasm |
742
+ | `learning-enhanced` | MicroLoRA, BTSP, HDC, WTA, Global Workspace | ruvector-learning-wasm, ruvector-nervous-system-wasm |
743
+ | `economy-enhanced` | Enhanced CRDT credits | ruvector-economy-wasm |
744
+ | `exotic-full` | All exotic capabilities | All above |
745
+
746
+ ### Time Crystal (P2P Synchronization)
747
+
748
+ Robust distributed coordination using discrete time crystal dynamics:
749
+
750
+ ```javascript
751
+ // Enable time crystal with 10 oscillators
752
+ node.enableTimeCrystal(10);
753
+
754
+ // Check synchronization level (0.0 - 1.0)
755
+ const sync = node.getTimeCrystalSync();
756
+ console.log(`P2P sync: ${(sync * 100).toFixed(1)}%`);
757
+
758
+ // Check if crystal is stable
759
+ if (node.isTimeCrystalStable()) {
760
+ console.log('Network is synchronized!');
761
+ }
762
+ ```
763
+
764
+ ### NAO (Neural Autonomous Organization)
765
+
766
+ Decentralized governance with stake-weighted quadratic voting:
767
+
768
+ ```javascript
769
+ // Enable NAO with 70% quorum requirement
770
+ node.enableNAO(0.7);
771
+
772
+ // Add peer nodes as members
773
+ node.addNAOMember('peer-123', 100);
774
+ node.addNAOMember('peer-456', 50);
775
+
776
+ // Propose and vote on network actions
777
+ const propId = node.proposeNAOAction('Increase task capacity');
778
+ node.voteNAOProposal(propId, 0.9); // Vote with 90% weight
779
+
780
+ // Execute if quorum reached
781
+ if (node.executeNAOProposal(propId)) {
782
+ console.log('Proposal executed!');
783
+ }
784
+ ```
785
+
786
+ ### MicroLoRA (Per-Node Self-Learning)
787
+
788
+ Ultra-fast LoRA adaptation with <100µs latency:
789
+
790
+ ```javascript
791
+ // Enable MicroLoRA with rank-2 adaptation
792
+ node.enableMicroLoRA(2);
793
+
794
+ // Adapt weights based on task feedback
795
+ const gradient = new Float32Array(128);
796
+ node.adaptMicroLoRA('vector_search', gradient);
797
+
798
+ // Apply adaptation to inputs
799
+ const input = new Float32Array(128);
800
+ const adapted = node.applyMicroLoRA('vector_search', input);
801
+ ```
802
+
803
+ ### HDC (Hyperdimensional Computing)
804
+
805
+ 10,000-bit binary hypervectors for distributed reasoning:
806
+
807
+ ```javascript
808
+ // Enable HDC memory
809
+ node.enableHDC();
810
+
811
+ // Store patterns for semantic operations
812
+ node.storeHDCPattern('concept_a');
813
+ node.storeHDCPattern('concept_b');
814
+ ```
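
For intuition about what a 10,000-bit hypervector buys you, here is a small sketch (not the edge-net API) of the two core HDC operations behind pattern storage: XOR binding and majority-vote bundling, plus Hamming similarity for recall.

```javascript
// Binary hypervectors: 10,000 bits packed into a Uint8Array.
const BITS = 10_000;
const BYTES = Math.ceil(BITS / 8);
const randomHv = () => crypto.getRandomValues(new Uint8Array(BYTES));

// Binding (XOR): associates two concepts; the result is dissimilar to both.
const bind = (a, b) => a.map((byte, i) => byte ^ b[i]);

// Bundling (bitwise majority): superimposes vectors into one that stays
// similar to each of its inputs.
function bundle(vectors) {
  const out = new Uint8Array(BYTES);
  for (let bit = 0; bit < BITS; bit++) {
    const byte = bit >> 3, mask = 1 << (bit & 7);
    const ones = vectors.filter(v => v[byte] & mask).length;
    if (ones * 2 > vectors.length) out[byte] |= mask;
  }
  return out;
}

// Hamming similarity: fraction of matching bits (1.0 = identical).
function similarity(a, b) {
  let diff = 0;
  for (let i = 0; i < BYTES; i++) {
    let x = a[i] ^ b[i];
    while (x) { diff += x & 1; x >>= 1; }
  }
  return 1 - diff / BITS;
}

// A bundled "memory" stays recognisably close to its members (~0.75)
// while a fresh random vector sits near 0.5.
const a = randomHv(), b = randomHv(), c = randomHv();
const memory = bundle([a, b, c]);
console.log(similarity(memory, a) > similarity(memory, randomHv())); // true
```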
815
+
816
+ ### WTA (Winner-Take-All)
817
+
818
+ Instant decisions with <1µs latency:
819
+
820
+ ```javascript
821
+ // Enable WTA with 1000 neurons
822
+ node.enableWTA(1000);
823
+ ```
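
Conceptually, WTA reduces to a single argmax over neuron activations, which is why a decision fits comfortably under a microsecond. A toy sketch (not the WASM internals):

```javascript
// Winner-take-all: the neuron with the highest activation fires and
// suppresses the rest - one pass over the activations.
function winnerTakeAll(activations) {
  let winner = 0;
  for (let i = 1; i < activations.length; i++) {
    if (activations[i] > activations[winner]) winner = i;
  }
  return winner; // index of the only neuron allowed to fire
}

console.log(winnerTakeAll([0.1, 0.7, 0.3])); // 1
```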
824
+
825
+ ### BTSP (One-Shot Learning)
826
+
827
+ Immediate pattern association without iterative training:
828
+
829
+ ```javascript
830
+ // Enable BTSP with 128-dim inputs
831
+ node.enableBTSP(128);
832
+
833
+ // One-shot associate a pattern
834
+ const pattern = new Float32Array(128);
835
+ node.oneShotAssociate(pattern, 1.0);
836
+ ```
837
+
838
+ ### Morphogenetic Network
839
+
840
+ Self-organizing network topology through cellular differentiation:
841
+
842
+ ```javascript
843
+ // Enable 100x100 morphogenetic grid
844
+ node.enableMorphogenetic(100);
845
+
846
+ // Network grows automatically
847
+ console.log(`Cells: ${node.getMorphogeneticCellCount()}`);
848
+ ```
849
+
850
+ ### Stepping All Capabilities
851
+
852
+ In your main loop, step all capabilities forward:
853
+
854
+ ```javascript
855
+ function gameLoop(dt) {
856
+ // Step exotic capabilities
857
+ node.stepCapabilities(dt);
858
+
859
+ // Process tasks
860
+ node.processNextTask();
861
+ }
862
+
863
+ setInterval(() => gameLoop(0.016), 16); // 60 FPS
864
+ ```
865
+
866
+ ### Building with Exotic Features
867
+
868
+ ```bash
869
+ # Build with exotic capabilities
870
+ wasm-pack build --target web --release --out-dir pkg -- --features exotic
871
+
872
+ # Build with learning-enhanced capabilities
873
+ wasm-pack build --target web --release --out-dir pkg -- --features learning-enhanced
874
+
875
+ # Build with all exotic capabilities
876
+ wasm-pack build --target web --release --out-dir pkg -- --features exotic-full
877
+ ```
878
+
879
+ ---
880
+
881
+ ## Core Architecture & Capabilities
882
+
883
+ Edge-net is a production-grade distributed AI computing platform with **~36,500 lines of Rust code** and **177 passing tests**.
884
+
885
+ ### Unified Attention Architecture
886
+
887
+ Four attention mechanisms that answer critical questions for distributed AI:
888
+
889
+ ```
890
+ ┌─────────────────────────────────────────────────────────────────────────────┐
891
+ │ UNIFIED ATTENTION ARCHITECTURE │
892
+ ├─────────────────────────────────────────────────────────────────────────────┤
893
+ │ │
894
+ │ ┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐ │
895
+ │ │ Neural Attention│ │ DAG Attention │ │ Graph Attention │ │
896
+ │ │ │ │ │ │ │ │
897
+ │ │ "What words │ │ "What steps │ │ "What relations │ │
898
+ │ │ matter?" │ │ matter?" │ │ matter?" │ │
899
+ │ │ │ │ │ │ │ │
900
+ │ │ • Multi-head │ │ • Topo-sort │ │ • GAT-style │ │
901
+ │ │ • Q/K/V project │ │ • Critical path │ │ • Edge features │ │
902
+ │ │ • Softmax focus │ │ • Parallelism │ │ • Message pass │ │
903
+ │ └─────────────────┘ └─────────────────┘ └─────────────────┘ │
904
+ │ │
905
+ │ ┌─────────────────────────────────────────────────────────────┐ │
906
+ │ │ State Space Model (SSM) │ │
907
+ │ │ │ │
908
+ │ │ "What history still matters?" - O(n) Mamba-style │ │
909
+ │ │ │ │
910
+ │ │ • Selective gating: What to remember vs forget │ │
911
+ │ │ • O(n) complexity: Efficient long-sequence processing │ │
912
+ │ │ • Temporal dynamics: dt, A, B, C, D state transitions │ │
913
+ │ └─────────────────────────────────────────────────────────────┘ │
914
+ │ │
915
+ └─────────────────────────────────────────────────────────────────────────────┘
916
+ ```
917
+
918
+ | Attention Type | Question Answered | Use Case |
919
+ |----------------|-------------------|----------|
920
+ | **Neural** | What words matter? | Semantic focus, importance weighting |
921
+ | **DAG** | What steps matter? | Task scheduling, critical path analysis |
922
+ | **Graph** | What relationships matter? | Network topology, peer connections |
923
+ | **State Space** | What history matters? | Long-term memory, temporal patterns |
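
For reference, the "Neural" row boils down to scaled dot-product attention. The sketch below shows a single head in plain JavaScript; the real crate adds multi-head Q/K/V projections and the DAG, Graph, and SSM variants on top.

```javascript
// Single-head scaled dot-product attention. Q, K, V are arrays of vectors;
// output i is a softmax-weighted mix of the value vectors.
function attention(Q, K, V) {
  const dk = K[0].length;
  return Q.map(q => {
    // Attention scores: q·k / sqrt(d_k)
    const scores = K.map(k => k.reduce((s, kj, j) => s + kj * q[j], 0) / Math.sqrt(dk));
    // Softmax over the scores (max-subtracted for numerical stability)
    const max = Math.max(...scores);
    const exps = scores.map(s => Math.exp(s - max));
    const sum = exps.reduce((acc, e) => acc + e, 0);
    const weights = exps.map(e => e / sum);
    // Weighted sum of value vectors
    return V[0].map((_, j) => weights.reduce((acc, w, i) => acc + w * V[i][j], 0));
  });
}
```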
924
+
925
+ ### AI Intelligence Layer
926
+
927
+ ```
928
+ ┌─────────────────────────────────────────────────────────────────────────────┐
929
+ │ AI Intelligence Layer │
930
+ ├─────────────────────────────────────────────────────────────────────────────┤
931
+ │ ┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐ │
932
+ │ │ HNSW Index │ │ AdapterPool │ │ Federated │ │
933
+ │ │ (memory.rs) │ │ (lora.rs) │ │ (federated.rs) │ │
934
+ │ │ │ │ │ │ │ │
935
+ │ │ • 150x speedup │ │ • LRU eviction │ │ • TopK Sparse │ │
936
+ │ │ • O(log N) │ │ • 16 slots │ │ • Byzantine tol │ │
937
+ │ │ • Cosine dist │ │ • Task routing │ │ • Rep-weighted │ │
938
+ │ └─────────────────┘ └─────────────────┘ └─────────────────┘ │
939
+ │ │
940
+ │ ┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐ │
941
+ │ │ DAG Attention │ │ LoraAdapter │ │ GradientGossip │ │
942
+ │ │ │ │ │ │ │ │
943
+ │ │ • Critical path │ │ • Rank 1-16 │ │ • Error feedback│ │
944
+ │ │ • Topo sort │ │ • SIMD forward │ │ • Diff privacy │ │
945
+ │ │ • Parallelism │ │ • 4/8-bit quant │ │ • Gossipsub │ │
946
+ │ └─────────────────┘ └─────────────────┘ └─────────────────┘ │
947
+ └─────────────────────────────────────────────────────────────────────────────┘
948
+ ```
949
+
950
+ ### Swarm Intelligence
951
+
952
+ | Component | Capability | Description |
953
+ |-----------|------------|-------------|
954
+ | **Entropy Consensus** | Belief convergence | Shannon entropy-based decision making |
955
+ | **Collective Memory** | Pattern sharing | Hippocampal-inspired consolidation and replay |
956
+ | **Stigmergy** | Pheromone trails | Ant colony optimization for task routing |
957
+ | **Consensus Coordinator** | Multi-topic | Parallel consensus on multiple decisions |
958
+
959
+ ### Compute Acceleration
960
+
961
+ ```
962
+ ┌─────────────────────────────────────────────────────────────────────────────┐
963
+ │ COMPUTE ACCELERATION STACK │
964
+ ├─────────────────────────────────────────────────────────────────────────────┤
965
+ │ │
966
+ │ ┌─────────────────────────────────────────────────────────────────────┐ │
967
+ │ │ WebGPU Compute Backend │ │
968
+ │ │ │ │
969
+ │ │ • wgpu-based GPU acceleration (10+ TFLOPS target) │ │
970
+ │ │ • Matrix multiplication pipeline (tiled, cache-friendly) │ │
971
+ │ │ • Attention pipeline (Flash Attention algorithm) │ │
972
+ │ │ • LoRA forward pipeline (<1ms inference) │ │
973
+ │ │ • Staging buffer pool (16MB, zero-copy transfers) │ │
974
+ │ └─────────────────────────────────────────────────────────────────────┘ │
975
+ │ │
976
+ │ ┌─────────────────────────────────────────────────────────────────────┐ │
977
+ │ │ WebWorker Pool │ │
978
+ │ │ │ │
979
+ │ │ +------------------+ │ │
980
+ │ │ | Main Thread | │ │
981
+ │ │ | (Coordinator) | │ │
982
+ │ │ +--------+---------+ │ │
983
+ │ │ | │ │
984
+ │ │ +-----+-----+-----+-----+ │ │
985
+ │ │ | | | | | │ │
986
+ │ │ +--v-+ +-v--+ +--v-+ +--v-+ +--v-+ │ │
987
+ │ │ | W1 | | W2 | | W3 | | W4 | | Wn | (up to 16 workers) │ │
988
+ │ │ +----+ +----+ +----+ +----+ +----+ │ │
989
+ │ │ | | | | | │ │
990
+ │ │ +-----+-----+-----+-----+ │ │
991
+ │ │ | │ │
992
+ │ │ SharedArrayBuffer (when available, zero-copy) │ │
993
+ │ └─────────────────────────────────────────────────────────────────────┘ │
994
+ │ │
995
+ │ ┌────────────────────────┐ ┌────────────────────────┐ │
996
+ │ │ WASM SIMD (simd128) │ │ WebGL Compute │ │
997
+ │ │ • f32x4 vectorized │ │ • Shader fallback │ │
998
+ │ │ • 4x parallel ops │ │ • Universal support │ │
999
+ │ │ • All modern browsers│ │ • Fragment matmul │ │
1000
+ │ └────────────────────────┘ └────────────────────────┘ │
1001
+ │ │
1002
+ └─────────────────────────────────────────────────────────────────────────────┘
1003
+ ```
1004
+
1005
+ | Backend | Availability | Performance | Operations |
1006
+ |---------|-------------|-------------|------------|
1007
+ | **WebGPU** | Chrome 113+, Firefox 120+ | 10+ TFLOPS | Matmul, Attention, LoRA |
1008
+ | **WebWorker Pool** | All browsers | 4-16x CPU cores | Parallel matmul, dot product |
1009
+ | **WASM SIMD** | All modern browsers | 4x vectorized | Cosine distance, softmax |
1010
+ | **WebGL** | Universal fallback | Shader compute | Matrix operations |
1011
+ | **CPU** | Always available | Loop-unrolled | All operations |
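
A runtime has to probe which of these backends exist before choosing one. The sketch below uses only standard browser APIs for the probe; how edge-net actually selects its backend is internal, and WASM SIMD detection (usually done by validating a tiny SIMD module) is omitted here.

```javascript
// Feature-detect the acceleration backends listed above (browser context).
function detectBackends() {
  return {
    webgpu: typeof navigator !== 'undefined' && 'gpu' in navigator,
    workers: typeof Worker !== 'undefined',
    cores: (typeof navigator !== 'undefined' && navigator.hardwareConcurrency) || 1,
    sharedMemory: typeof SharedArrayBuffer !== 'undefined', // needs cross-origin isolation
    webgl: (() => {
      try {
        const canvas = document.createElement('canvas');
        return !!(canvas.getContext('webgl2') || canvas.getContext('webgl'));
      } catch { return false; }
    })(),
  };
}

console.log(detectBackends());
```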
1012
+
1013
+ ### WebGPU Pipelines
1014
+
1015
+ | Pipeline | Purpose | Performance Target |
1016
+ |----------|---------|-------------------|
1017
+ | **Matmul** | Matrix multiplication (tiled) | 10+ TFLOPS |
1018
+ | **Attention** | Flash attention (memory efficient) | 2ms for 4K context |
1019
+ | **LoRA** | Low-rank adapter forward pass | <1ms inference |
1020
+
1021
+ ### WebWorker Operations
1022
+
1023
+ | Operation | Description | Parallelization |
1024
+ |-----------|-------------|-----------------|
1025
+ | **MatmulPartial** | Row-blocked matrix multiply | Rows split across workers |
1026
+ | **DotProductPartial** | Partial vector dot products | Segments split across workers |
1027
+ | **VectorOp** | Element-wise ops (add, mul, relu, sigmoid) | Ranges split across workers |
1028
+ | **Reduce** | Sum, max, min, mean reductions | Hierarchical aggregation |
1029
+
1030
+ ### Work Stealing
1031
+
1032
+ Workers that finish early can steal tasks from busy workers' queues:
1033
+ - **LIFO** for local tasks (cache locality)
1034
+ - **FIFO** for stolen tasks (load balancing)
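
A minimal sketch of that queue discipline (illustrative, not the worker pool's actual code): the owning worker pushes and pops at the tail, while thieves take from the head.

```javascript
// Per-worker deque: owner works LIFO from the tail, thieves steal FIFO from
// the head, so stolen work tends to be the oldest (and usually largest) tasks.
class WorkQueue {
  constructor() { this.tasks = []; }
  push(task) { this.tasks.push(task); }       // owner enqueues at the tail
  popLocal() { return this.tasks.pop(); }     // owner pops LIFO (cache locality)
  steal()    { return this.tasks.shift(); }   // thief takes FIFO from the head
  get size() { return this.tasks.length; }
}

// An idle worker picks the busiest victim and steals one task.
function stealFrom(queues, selfIndex) {
  let victim = -1;
  queues.forEach((q, i) => {
    if (i !== selfIndex && q.size > 1 && (victim < 0 || q.size > queues[victim].size)) victim = i;
  });
  return victim >= 0 ? queues[victim].steal() : undefined;
}
```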
1035
+
1036
+ ### Economics & Reputation
1037
+
1038
+ | Feature | Mechanism | Purpose |
1039
+ |---------|-----------|---------|
1040
+ | **AMM** | Automated Market Maker | Dynamic rUv pricing |
1041
+ | **Reputation** | Stake-weighted scoring | Trust computation |
1042
+ | **Slashing** | Byzantine penalties | Bad actor deterrence |
1043
+ | **Rewards** | Contribution tracking | Fair distribution |
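
To make the AMM row concrete, here is a textbook constant-product (x·y = k) quote function. Whether edge-net uses exactly this curve, and these pool names, is an assumption made only for illustration.

```javascript
// Constant-product AMM sketch: a contributor sells compute capacity into the
// pool and the curve determines how many rUv come out.
function quoteRuvForCompute(pool, computeUnitsIn) {
  const k = pool.compute * pool.ruv;                 // invariant x·y = k
  const newCompute = pool.compute + computeUnitsIn;  // seller adds compute
  const newRuv = k / newCompute;                     // curve fixes the other side
  return pool.ruv - newRuv;                          // rUv paid out
}

const pool = { compute: 10_000, ruv: 50_000 };
console.log(quoteRuvForCompute(pool, 100).toFixed(1)); // ≈495.0 rUv for 100 units
```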
1044
+
1045
+ ### Network Learning
1046
+
1047
+ | Component | Learning Type | Application |
1048
+ |-----------|---------------|-------------|
1049
+ | **RAC** | Adversarial Coherence | Conflict resolution |
1050
+ | **ReasoningBank** | Trajectory learning | Strategy optimization |
1051
+ | **Q-Learning** | Reinforcement | Security adaptation |
1052
+ | **Federated** | Distributed training | Model improvement |
1053
+
1054
+ ---
1055
+
1056
+ ## Self-Learning Hooks & MCP Integration
1057
+
1058
+ Edge-net integrates with Claude Code's hooks system for continuous self-learning.
1059
+
1060
+ ### Learning Scenarios Module
1061
+
1062
+ ```rust
1063
+ use ruvector_edge_net::learning_scenarios::{
1064
+ NeuralAttention, DagAttention, GraphAttention, StateSpaceAttention,
1065
+ AttentionOrchestrator, ErrorLearningTracker, SequenceTracker,
1066
+ get_ruvector_tools, generate_settings_json,
1067
+ };
1068
+
1069
+ // Create unified attention orchestrator
1070
+ let orchestrator = AttentionOrchestrator::new(
1071
+ NeuralAttention::new(128, 4), // 128 dim, 4 heads
1072
+ DagAttention::new(),
1073
+ GraphAttention::new(64, 4), // 64 dim, 4 heads
1074
+ StateSpaceAttention::new(256, 0.95), // 256 dim, 0.95 decay
1075
+ );
1076
+
1077
+ // Get comprehensive attention analysis
1078
+ let analysis = orchestrator.analyze(tokens, &dag, &graph, &history);
1079
+ ```
1080
+
1081
+ ### Error Pattern Learning
1082
+
1083
+ ```rust
1084
+ let mut tracker = ErrorLearningTracker::new();
1085
+
1086
+ // Record errors for learning
1087
+ tracker.record_error(ErrorPattern::TypeMismatch, "expected String", "lib.rs", 42);
1088
+
1089
+ // Get AI-suggested fixes
1090
+ let fixes = tracker.get_suggestions("type mismatch");
1091
+ // ["Use .to_string()", "Use String::from()", ...]
1092
+ ```
1093
+
1094
+ ### MCP Tool Categories
1095
+
1096
+ | Category | Tools | Purpose |
1097
+ |----------|-------|---------|
1098
+ | **VectorDb** | `vector_search`, `vector_store`, `vector_query` | Semantic similarity |
1099
+ | **Learning** | `learn_pattern`, `train_model`, `get_suggestions` | Pattern recognition |
1100
+ | **Memory** | `remember`, `recall`, `forget` | Vector memory |
1101
+ | **Swarm** | `spawn_agent`, `coordinate`, `route_task` | Multi-agent coordination |
1102
+ | **Telemetry** | `track_event`, `get_stats`, `export_metrics` | Usage analytics |
1103
+ | **AgentRouting** | `suggest_agent`, `record_outcome`, `get_routing_table` | Agent selection |
1104
+
1105
+ ### RuVector CLI Commands
1106
+
1107
+ ```bash
1108
+ # Session management
1109
+ ruvector hooks session-start # Start learning session
1110
+ ruvector hooks session-end # Save patterns
1111
+
1112
+ # Intelligence
1113
+ ruvector hooks stats # Show learning stats
1114
+ ruvector hooks route <task> # Get agent suggestion
1115
+ ruvector hooks suggest-context # Context suggestions
1116
+
1117
+ # Memory
1118
+ ruvector hooks remember <content> -t <type> # Store memory
1119
+ ruvector hooks recall <query> # Semantic search
1120
+ ```
1121
+
1122
+ ### Claude Code Hook Events
1123
+
1124
+ | Event | Trigger | Action |
1125
+ |-------|---------|--------|
1126
+ | `PreToolUse` | Before Edit/Bash | Agent routing, risk analysis |
1127
+ | `PostToolUse` | After Edit/Bash | Q-learning update, pattern recording |
1128
+ | `SessionStart` | Conversation begins | Load intelligence |
1129
+ | `Stop` | Conversation ends | Save learning data |
1130
+ | `UserPromptSubmit` | User message | Context suggestions |
1131
+ | `PreCompact` | Before compaction | Preserve context |
1132
+
1133
+ ---
1134
+
1135
+ ## Research Foundation
1136
+
1137
+ Edge-net is built on research in:
1138
+
1139
+ - **Distributed Computing** - P2P resource sharing
1140
+ - **Collective Intelligence** - Emergent optimization
1141
+ - **Game Theory** - Incentive-compatible mechanisms
1142
+ - **Adaptive Security** - Q-learning threat response
1143
+ - **Time Crystals** - Floquet engineering for coordination
1144
+ - **Neuromorphic Computing** - BTSP, HDC, WTA mechanisms
1145
+ - **Decentralized Governance** - Neural Autonomous Organizations
1146
+
1147
+ ---
1148
+
1149
+ ## Disclaimer
1150
+
1151
+ Edge-net is a **research platform** for collective computing. The rUv units are:
1152
+
1153
+ - Resource participation metrics, not currency
1154
+ - Used for balancing contribution and consumption
1155
+ - Not redeemable for money or goods outside the network
1156
+
1157
+ ---
1158
+
1159
+ ## Links
1160
+
1161
+ - [Design Document](./DESIGN.md)
1162
+ - [Technical Report](./docs/FINAL_REPORT.md)
1163
+ - [Simulation Guide](./sim/README.md)
1164
+ - [RuVector GitHub](https://github.com/ruvnet/ruvector)
1165
+
1166
+ ## License
1167
+
1168
+ MIT License