agentic-flow 2.0.1-alpha.14 → 2.0.1-alpha.16

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md CHANGED
@@ -2,6 +2,61 @@
 
  All notable changes to this project will be documented in this file.
 
+ ## [2.0.1-alpha.16] - 2025-12-31
+
+ ### Added
+ - **Parallel Intelligence** (ruvector@0.1.62): Full parallel worker integration (usage sketch after this list)
+   - `queueEpisode()` + `flushEpisodeBatch()` - 3-4x faster batch Q-learning
+   - `matchPatternsParallel(files)` - Multi-file parallel pretraining
+   - `indexMemoriesBackground(memories)` - Non-blocking hook indexing
+   - `searchParallel(query, topK)` - Parallel shard similarity search
+   - `analyzeFilesParallel(files)` - Multi-file AST routing
+   - `analyzeCommitsParallel(commits)` - Faster co-edit detection
+
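A minimal TypeScript sketch of the queue-then-flush pattern behind `queueEpisode()` + `flushEpisodeBatch()`, plus a `searchParallel()` call. Only the method names come from this changelog; the `ParallelIntelligence` interface, the `Episode` shape, and all signatures below are assumptions for illustration, not the published ruvector API.

```typescript
// Hypothetical shape of the parallel intelligence API described above;
// method names are from the changelog, signatures are assumed.
interface Episode {
  state: string;
  action: string;
  reward: number;
}

interface ParallelIntelligence {
  queueEpisode(episode: Episode): void;            // buffer in memory, no worker round-trip
  flushEpisodeBatch(): Promise<number>;            // apply all queued Q-learning updates at once
  searchParallel(query: string, topK: number): Promise<Array<{ id: string; score: number }>>;
}

// Queue-then-flush instead of one synchronous update per episode
// (this batching is where the claimed 3-4x speedup would come from).
async function recordEpisodes(intel: ParallelIntelligence, episodes: Episode[]): Promise<void> {
  for (const episode of episodes) {
    intel.queueEpisode(episode);                   // cheap, local
  }
  const applied = await intel.flushEpisodeBatch(); // single parallel batch
  console.log(`applied ${applied} Q-learning updates in one batch`);
}

// Shards are searched concurrently by the worker pool instead of scanned one by one.
async function recall(intel: ParallelIntelligence, query: string): Promise<string[]> {
  const hits = await intel.searchParallel(query, 5);
  return hits.map((hit) => hit.id);
}
```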
+ - **Auto-detection for parallel mode** (detection sketch after this list):
+   - MCP server (`MCP_SERVER=1`): Workers enabled automatically
+   - CLI hooks (`RUVECTOR_CLI=1`): Fast sequential mode
+   - Manual: `RUVECTOR_PARALLEL=1` to force-enable workers
+
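The mode selection above can be read as a small precedence check over environment variables. The variable names come from this changelog; the precedence order, the fallback, and the `detectWorkerMode` helper are assumptions (Node.js typings assumed for `process.env`).

```typescript
type WorkerMode = 'parallel' | 'sequential';

// Illustrative reconstruction of the detection rules listed above.
function detectWorkerMode(env: NodeJS.ProcessEnv = process.env): WorkerMode {
  if (env.RUVECTOR_PARALLEL === '1') return 'parallel'; // manual override: force workers on
  if (env.MCP_SERVER === '1') return 'parallel';        // long-lived MCP server: worker startup pays off
  if (env.RUVECTOR_CLI === '1') return 'sequential';    // short-lived CLI hooks: skip worker startup
  return 'sequential';                                  // assumed fallback when nothing is set
}
```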
+ ### Changed
+ - Updated ruvector dependency: 0.1.61 → 0.1.62
+ - Intelligence bridge now lazy-loads the parallel engine
+
+ ## [2.0.1-alpha.15] - 2025-12-31
+
+ ### Added
+ - **Parallel Worker Embeddings**: 7 worker threads for parallel ONNX processing
+   - Uses ruvector@0.1.61 with a parallel worker pool
+   - SIMD128 enabled (6x faster single-threaded, 7x with parallel workers)
+   - Auto-detection: defaults to ONNX when SIMD is available
+
+ - **Advanced Embedding Features** (usage sketch after this list):
+   - `similarityMatrix(texts)` - NxN pairwise similarity computation
+   - `semanticSearch(query, topK)` - Search against a pre-built corpus
+   - `findDuplicates(texts, threshold)` - Near-duplicate detection
+   - `clusterTexts(texts, k)` - K-means semantic clustering
+   - `streamEmbed(texts, batchSize)` - Memory-efficient streaming
+
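A short sketch of how the helpers above might be called, here for near-duplicate removal. The function names come from this changelog; the `EmbeddingFeatures` stub, the return shapes, and the 0.9 threshold are illustrative assumptions rather than the actual ruvector API.

```typescript
// Stub matching the helper names above; real ruvector signatures and return shapes may differ.
interface EmbeddingFeatures {
  similarityMatrix(texts: string[]): Promise<number[][]>; // NxN cosine scores
  semanticSearch(query: string, topK: number): Promise<Array<{ text: string; score: number }>>;
  findDuplicates(texts: string[], threshold: number): Promise<Array<[number, number]>>; // index pairs
}

// Example: drop near-duplicate notes before indexing them.
// The 0.9 threshold is an arbitrary illustration, not a documented default.
async function deduplicateNotes(svc: EmbeddingFeatures, notes: string[]): Promise<string[]> {
  const pairs = await svc.findDuplicates(notes, 0.9);
  const drop = new Set(pairs.map(([, second]) => second)); // keep the first item of each pair
  return notes.filter((_, i) => !drop.has(i));
}
```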
+ - **Parallel Use Cases** (from ruvector@0.1.61):
+   | Use Case           | Without Workers    | With Workers      | Benefit                  |
+   |--------------------|--------------------|-------------------|--------------------------|
+   | Q-learning updates | Sequential         | Parallel batch    | Faster learning          |
+   | Pattern matching   | 1 file at a time   | 4+ files parallel | 3-4x faster pretraining  |
+   | Memory indexing    | Blocking           | Background        | Non-blocking hooks       |
+   | Similarity search  | Sequential scan    | Parallel shards   | Faster recall            |
+   | Code analysis      | Single AST         | Multi-file AST    | Faster routing           |
+   | Git history        | Sequential commits | Parallel commits  | Faster co-edit detection |
+
+ ### Changed
+ - EmbeddingService now uses ruvector@0.1.61 (instead of depending on ruvector-onnx-embeddings-wasm directly)
+ - Default backend changed from 'simple' to 'auto', which auto-detects ONNX/SIMD (sketch below)
+ - Updated dependency: ruvector ^0.1.61
+
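A hedged sketch of what the 'simple' → 'auto' default change means for callers: 'auto' resolves to ONNX when SIMD is available and otherwise falls back to the simple embedder. The option values come from this changelog; the `resolveBackend` helper and its signature are assumptions, not the EmbeddingService API.

```typescript
type Backend = 'auto' | 'onnx' | 'simple';

// Illustrative resolution of the new 'auto' default: prefer ONNX when WASM SIMD is
// available, otherwise fall back to the simple embedder. The detection input is a
// placeholder; the real service presumably probes SIMD support itself.
function resolveBackend(requested: Backend, simdAvailable: boolean): 'onnx' | 'simple' {
  if (requested === 'auto') {
    return simdAvailable ? 'onnx' : 'simple';
  }
  return requested;
}

// Before alpha.15 the effective default behaved like 'simple'; the new default is:
const backend = resolveBackend('auto', /* simdAvailable */ true); // -> 'onnx'
console.log(backend);
```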
+ ### Performance (7 workers + SIMD)
+ - Cold start: ~1.5s (includes model download and worker init)
+ - Warm embedding: ~100-200ms per text (parallelized)
+ - Batch embedding: Up to 7x faster with parallel workers
+
  ## [2.0.1-alpha.14] - 2025-12-31
 
  ### Added