agentic-flow 2.0.1-alpha.13 → 2.0.1-alpha.15
This diff shows the content of publicly released package versions as published to the supported registries. It is provided for informational purposes only and reflects the changes between the versions as they appear in their public registries.
- package/CHANGELOG.md +63 -0
- package/dist/.tsbuildinfo +1 -1
- package/dist/intelligence/EmbeddingService.d.ts +168 -0
- package/dist/intelligence/EmbeddingService.d.ts.map +1 -0
- package/dist/intelligence/EmbeddingService.js +526 -0
- package/dist/intelligence/EmbeddingService.js.map +1 -0
- package/dist/intelligence/embedding-benchmark.d.ts +7 -0
- package/dist/intelligence/embedding-benchmark.d.ts.map +1 -0
- package/dist/intelligence/embedding-benchmark.js +155 -0
- package/dist/intelligence/embedding-benchmark.js.map +1 -0
- package/dist/mcp/fastmcp/tools/hooks/intelligence-bridge.d.ts.map +1 -1
- package/dist/mcp/fastmcp/tools/hooks/intelligence-bridge.js +13 -17
- package/dist/mcp/fastmcp/tools/hooks/intelligence-bridge.js.map +1 -1
- package/package.json +4 -2
package/CHANGELOG.md
CHANGED
@@ -2,6 +2,69 @@

All notable changes to this project will be documented in this file.

## [2.0.1-alpha.15] - 2025-12-31

### Added
- **Parallel Worker Embeddings**: 7 worker threads for parallel ONNX processing
  - Uses ruvector@0.1.61 with parallel worker pool
  - SIMD128 enabled (6x faster single-threaded, 7x parallel)
  - Auto-detection: defaults to ONNX when SIMD available

- **Advanced Embedding Features** (usage sketch after this list):
  - `similarityMatrix(texts)` - NxN pairwise similarity computation
  - `semanticSearch(query, topK)` - Search against pre-built corpus
  - `findDuplicates(texts, threshold)` - Near-duplicate detection
  - `clusterTexts(texts, k)` - K-means semantic clustering
  - `streamEmbed(texts, batchSize)` - Memory-efficient streaming

- **Parallel Use Cases** (from ruvector@0.1.61):

  | Use Case           | Current            | With Workers      | Benefit                  |
  |--------------------|--------------------|-------------------|--------------------------|
  | Q-learning updates | Sequential         | Parallel batch    | Faster learning          |
  | Pattern matching   | 1 file at a time   | 4+ files parallel | 3-4x faster pretrain     |
  | Memory indexing    | Blocking           | Background        | Non-blocking hooks       |
  | Similarity search  | Sequential scan    | Parallel shards   | Faster recall            |
  | Code analysis      | Single AST         | Multi-file AST    | Faster routing           |
  | Git history        | Sequential commits | Parallel commits  | Faster co-edit detection |
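To make the new API surface concrete, the sketch below exercises each of the methods listed above. Only the method names and their parameters come from this changelog; the import path, constructor, async return types, and result shapes are assumptions rather than the package's documented API.

```typescript
// Usage sketch only: method names/parameters are from the changelog entry above;
// the import path, constructor, async-ness, and result shapes are assumptions.
import { EmbeddingService } from 'agentic-flow/dist/intelligence/EmbeddingService.js';

async function main(): Promise<void> {
  const embeddings = new EmbeddingService(); // alpha.15 default backend: 'auto'

  const texts = [
    'add exponential backoff to HTTP retries',
    'retry failed HTTP requests with backoff',
    'update README badges',
  ];

  // NxN pairwise similarity matrix for the three texts.
  const matrix = await embeddings.similarityMatrix(texts);

  // Top-k search; assumes a corpus has already been indexed (not shown here).
  const hits = await embeddings.semanticSearch('http retry behaviour', 2);

  // Near-duplicates above a 0.9 similarity threshold.
  const dupes = await embeddings.findDuplicates(texts, 0.9);

  // K-means clustering into two semantic groups.
  const clusters = await embeddings.clusterTexts(texts, 2);

  // Memory-efficient streaming, assumed to yield batches of embeddings.
  for await (const batch of embeddings.streamEmbed(texts, 2)) {
    console.log('embedded batch of size', batch.length);
  }

  console.log({ matrix, hits, dupes, clusters });
}

main().catch(console.error);
```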
### Changed
- EmbeddingService now uses ruvector@0.1.61 (not ruvector-onnx-embeddings-wasm directly)
- Default backend changed from 'simple' to 'auto' (auto-detects ONNX/SIMD)
- Updated dependency: ruvector ^0.1.61

### Performance (7 workers + SIMD)
- Cold start: ~1.5s (includes model download, worker init)
- Warm embedding: ~100-200ms per text (parallelized)
- Batch embedding: up to 7x faster with parallel workers (worker-pool pattern sketched below)
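As background for the worker-pool figures above, here is a minimal, self-contained illustration of the pattern using Node's built-in `worker_threads`: a fixed pool of seven workers pulls texts from a shared queue and posts vectors back to the main thread. This is not agentic-flow's or ruvector's implementation; the "embedding" computed in the worker is a stand-in for the real ONNX inference.

```typescript
// Worker-pool pattern sketch (illustration only, not the package's code).
// Run the compiled .js (or via a TS-aware runtime such as tsx); requires ESM.
import { Worker, isMainThread, parentPort } from 'node:worker_threads';

const POOL_SIZE = 7; // mirrors the seven worker threads described above

if (isMainThread) {
  const texts = Array.from({ length: 28 }, (_, i) => `document ${i}`);
  const queue = texts.map((text, index) => ({ index, text }));
  const results: number[][] = new Array(texts.length);
  let done = 0;

  for (let w = 0; w < POOL_SIZE; w++) {
    const worker = new Worker(new URL(import.meta.url)); // same file, worker branch below
    const dispatch = () => {
      const task = queue.shift();
      if (task) worker.postMessage(task);
      else void worker.terminate(); // queue drained: let the process exit
    };
    worker.on('message', (msg: { index: number; vector: number[] }) => {
      results[msg.index] = msg.vector;
      if (++done === texts.length) {
        console.log(`embedded ${done} texts across ${POOL_SIZE} workers`);
      }
      dispatch();
    });
    dispatch(); // hand each worker its first task
  }
} else {
  // Worker branch: stand-in "embedding"; a real worker would run the ONNX model here.
  parentPort!.on('message', ({ index, text }: { index: number; text: string }) => {
    const vector = Array.from({ length: 8 }, (_, d) => Math.sin((d + 1) * text.length));
    parentPort!.postMessage({ index, vector });
  });
}
```

The queue-based dispatch keeps every worker busy regardless of per-text latency, which is where a batch speedup approaching the pool size ("up to 7x") comes from.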
## [2.0.1-alpha.14] - 2025-12-31
### Added
- **ONNX Embeddings with SIMD**: Real semantic embeddings via ruvector-onnx-embeddings-wasm
  - SIMD128 enabled for 6x faster embedding generation
  - 100% semantic accuracy (correctly identifies related/unrelated texts)
  - all-MiniLM-L6-v2 model (384 dimensions)
  - Configure with `AGENTIC_FLOW_EMBEDDINGS=onnx`

- **EmbeddingService**: Unified embedding interface
  - Simple backend: ~0.04ms (hash-based, fast but not semantic)
  - ONNX backend: ~400ms with SIMD (true semantic similarity)
  - LRU cache for repeated embeddings
  - Auto-fallback to simple if ONNX fails

- **Embedding Benchmark**: Compare simple vs ONNX embeddings
  - Run with: `node --experimental-wasm-modules dist/intelligence/embedding-benchmark.js`
  - Shows latency, accuracy, and semantic similarity comparisons (cosine similarity sketched after this list)
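For context on the "semantic similarity" these entries refer to: the benchmark compares embedding vectors (here, 384-dimensional all-MiniLM-L6-v2 outputs) with cosine similarity, where related texts score near 1 and unrelated texts near 0. A dependency-free sketch of that comparison (not code taken from the package):

```typescript
// Cosine similarity between two embedding vectors, e.g. 384-dim all-MiniLM-L6-v2 outputs.
// Dependency-free illustration; not code from agentic-flow or ruvector.
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error('embedding dimension mismatch');
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  // Guard against zero vectors so the comparison never divides by zero.
  return dot / (Math.sqrt(normA) * Math.sqrt(normB) || 1);
}

// A score close to 1 means "related", close to 0 means "unrelated" —
// the check behind the "100% semantic accuracy" claim above.
```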
### Changed
- Updated intelligence-bridge.ts to use EmbeddingService
- Added onnxruntime-node and ruvector-onnx-embeddings-wasm dependencies

### Performance (SIMD enabled)
- Cold start: ~1.5s (includes model download)
- Warm embedding: ~400ms per text
- Batch embedding: ~400ms per text (sequential)

## [2.0.1-alpha.13] - 2025-12-31

### Added