@soulcraft/brainy 0.36.0 → 0.38.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +712 -1474
- package/dist/brainyData.d.ts +37 -0
- package/dist/distributed/configManager.d.ts +97 -0
- package/dist/distributed/domainDetector.d.ts +77 -0
- package/dist/distributed/hashPartitioner.d.ts +77 -0
- package/dist/distributed/healthMonitor.d.ts +110 -0
- package/dist/distributed/index.d.ts +10 -0
- package/dist/distributed/operationalModes.d.ts +104 -0
- package/dist/hnsw/distributedSearch.d.ts +118 -0
- package/dist/hnsw/distributedSearch.d.ts.map +1 -0
- package/dist/hnsw/optimizedHNSWIndex.d.ts +97 -0
- package/dist/hnsw/optimizedHNSWIndex.d.ts.map +1 -0
- package/dist/hnsw/partitionedHNSWIndex.d.ts +101 -0
- package/dist/hnsw/partitionedHNSWIndex.d.ts.map +1 -0
- package/dist/hnsw/scaledHNSWSystem.d.ts +142 -0
- package/dist/hnsw/scaledHNSWSystem.d.ts.map +1 -0
- package/dist/storage/adapters/batchS3Operations.d.ts +71 -0
- package/dist/storage/adapters/batchS3Operations.d.ts.map +1 -0
- package/dist/storage/enhancedCacheManager.d.ts +141 -0
- package/dist/storage/enhancedCacheManager.d.ts.map +1 -0
- package/dist/storage/readOnlyOptimizations.d.ts +133 -0
- package/dist/storage/readOnlyOptimizations.d.ts.map +1 -0
- package/dist/types/distributedTypes.d.ts +197 -0
- package/dist/types/distributedTypes.d.ts.map +1 -0
- package/dist/unified.js +1383 -2
- package/dist/unified.min.js +991 -991
- package/dist/utils/autoConfiguration.d.ts +125 -0
- package/dist/utils/autoConfiguration.d.ts.map +1 -0
- package/dist/utils/crypto.d.ts +25 -0
- package/dist/utils/crypto.d.ts.map +1 -0
- package/package.json +1 -1
package/README.md
CHANGED
````diff
@@ -7,1689 +7,927 @@
 [](https://www.typescriptlang.org/)
 [](CONTRIBUTING.md)
 
-[//]: # ([](https://github.com/sodal-project/cartographer))
-
 **A powerful graph & vector data platform for AI applications across any environment**
 
 </div>
 
-## ✨
-
-Brainy combines the power of vector search with graph relationships in a lightweight, cross-platform database. Whether
-you're building AI applications, recommendation systems, or knowledge graphs, Brainy provides the tools you need to
-store, connect, and retrieve your data intelligently.
-
-What makes Brainy special? It intelligently adapts to your environment! Brainy automatically detects your platform,
-adjusts its storage strategy, and optimizes performance based on your usage patterns. The more you use it, the smarter
-it gets - learning from your data to provide increasingly relevant results and connections.
-
-### Key Features
-
-- **Run Everywhere** - Works in browsers, Node.js, serverless functions, and containers
-- **Vector Search** - Find semantically similar content using embeddings
-- **Advanced JSON Document Search** - Search within specific fields of JSON documents with field prioritization and
-  service-based field standardization
-- **Graph Relationships** - Connect data with meaningful relationships
-- **Streaming Pipeline** - Process data in real-time as it flows through the system
-- **Extensible Augmentations** - Customize and extend functionality with pluggable components
-- **Built-in Conduits** - Sync and scale across instances with WebSocket and WebRTC
-- **TensorFlow Integration** - Use TensorFlow.js for high-quality embeddings
-- **Adaptive Intelligence** - Automatically optimizes for your environment and usage patterns
-- **Persistent Storage** - Data persists across sessions and scales to any size
-- **TypeScript Support** - Fully typed API with generics
-- **CLI Tools & Web Service** - Command-line interface and REST API web service for data management
-- **Model Control Protocol (MCP)** - Allow external AI models to access Brainy data and use augmentation pipeline as
-  tools
-
-## Live Demo
-
-**[Try the live demo](https://soulcraft-research.github.io/brainy/demo/index.html)** - Check out the interactive demo on
-GitHub Pages that showcases Brainy's main features.
-
-## What Can You Build?
-
-- **Semantic Search Engines** - Find content based on meaning, not just keywords
-- **Recommendation Systems** - Suggest similar items based on vector similarity
-- **Knowledge Graphs** - Build connected data structures with relationships
-- **AI Applications** - Store and retrieve embeddings for machine learning models
-- **AI-Enhanced Applications** - Build applications that leverage vector embeddings for intelligent data processing
-- **Data Organization Tools** - Automatically categorize and connect related information
-- **Adaptive Experiences** - Create applications that learn and evolve with your users
-- **Model-Integrated Systems** - Connect external AI models to Brainy data and tools using MCP
+## ✨ What is Brainy?
 
-
+Imagine a database that thinks like you do - connecting ideas, finding patterns, and getting smarter over time. Brainy is the **AI-native database** that brings vector search and knowledge graphs together in one powerful, ridiculously easy-to-use package.
 
-
+### NEW: Distributed Mode (v0.38+)
+**Scale horizontally with zero configuration!** Brainy now supports distributed deployments with automatic coordination:
+- **Multi-Instance Coordination** - Multiple readers and writers working in harmony
+- **🏷️ Smart Domain Detection** - Automatically categorizes data (medical, legal, product, etc.)
+- **Real-Time Health Monitoring** - Track performance across all instances
+- **Automatic Role Optimization** - Readers optimize for cache, writers for throughput
+- **Intelligent Partitioning** - Hash-based partitioning for perfect load distribution
````
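The partitioning bullet above is the core of how a distributed deployment spreads load: every item id maps deterministically to a partition, so instances never need to coordinate on placement. Only the type declarations for the new `hashPartitioner` module appear in this diff, so the sketch below is a guess at the general idea, not Brainy's implementation; `fnv1a` and `partitionFor` are illustrative names.

```javascript
// Hash-partitioning sketch (illustrative; not Brainy's actual hashPartitioner).
// A stable string hash (FNV-1a) maps every item id to one of N partitions, so
// any instance can compute an item's partition without coordination.
function fnv1a(str) {
  let hash = 0x811c9dc5
  for (let i = 0; i < str.length; i++) {
    hash ^= str.charCodeAt(i)
    hash = Math.imul(hash, 0x01000193) >>> 0
  }
  return hash >>> 0
}

function partitionFor(id, numPartitions) {
  return fnv1a(id) % numPartitions
}

// The same id always lands on the same partition, and ids spread evenly overall.
const counts = new Array(4).fill(0)
for (let i = 0; i < 10000; i++) counts[partitionFor(`item-${i}`, 4)]++
```

Because the mapping is pure, readers and writers agree on placement with no shared state; the trade-off is that changing the partition count remaps most keys.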
````diff
 
-###
+### Why Developers Love Brainy
 
-Brainy
+- **It Just Works™** - No config files, no tuning parameters, no DevOps headaches. Brainy auto-detects your environment and optimizes itself
+- **True Write-Once, Run-Anywhere** - Same code runs in React, Angular, Vue, Node.js, Deno, Bun, serverless, edge workers, and even vanilla HTML
+- **Scary Fast** - Handles millions of vectors with sub-millisecond search. Built-in GPU acceleration when available
+- **Self-Learning** - Like having a database that goes to the gym. Gets faster and smarter the more you use it
+- **AI-First Design** - Built for the age of embeddings, RAG, and semantic search. Your LLMs will thank you
+- **Actually Fun to Use** - Clean API, great DX, and it does the heavy lifting so you can build cool stuff
 
-
+## Quick Start (30 seconds!)
 
+### Node.js TLDR
 ```bash
-
-Command-line interface for data management, bulk operations, and database administration.
-
-#### Web Service Package
+# Install
+npm install brainy
 
-
-npm install @soulcraft/brainy-web-service
+# Use it
 ```
+```javascript
+import { createAutoBrainy, NounType, VerbType } from 'brainy'
 
-
-## Quick Start
-
-Brainy uses a unified build that automatically adapts to your environment (Node.js, browser, or serverless):
-
-```typescript
-import { BrainyData, NounType, VerbType } from '@soulcraft/brainy'
-
-// Create and initialize the database
-const db = new BrainyData()
-await db.init()
+const brainy = createAutoBrainy()
 
-// Add data
-const catId = await
+// Add data with Nouns (entities)
+const catId = await brainy.add("Siamese cats are elegant and vocal", {
   noun: NounType.Thing,
-
+  breed: "Siamese",
+  category: "animal"
 })
 
-const
-  noun: NounType.
-
+const ownerId = await brainy.add("John loves his pets", {
+  noun: NounType.Person,
+  name: "John Smith"
 })
 
-//
-
-// Add a relationship between items
-await db.addVerb(catId, dogId, {
-  verb: VerbType.RelatedTo,
-  description: 'Both are common household pets'
+// Connect with Verbs (relationships)
+await brainy.addVerb(ownerId, catId, {
+  verb: VerbType.Owns,
+  since: "2020-01-01"
 })
-```
-
-### Import Options
-
-```typescript
-// Standard import - automatically adapts to any environment
-import { BrainyData } from '@soulcraft/brainy'
-
-// Minified version for production
-import { BrainyData } from '@soulcraft/brainy/min'
-```
 
-
-> interface.
+// Search by meaning
+const results = await brainy.searchText("feline companions", 5)
 
-
-import { BrainyData } from './dist/unified.js'
-
-// Or minified version
-// import { BrainyData } from './dist/unified.min.js'
+// Search JSON documents by specific fields
+const docs = await brainy.searchDocuments("Siamese", {
+  fields: ['breed', 'category'], // Search these fields
+  weights: { breed: 2.0 }, // Prioritize breed matches
+  limit: 10
+})
 
-
-// ...
-</script>
+// Find relationships
+const johnsPets = await brainy.getVerbsBySource(ownerId, VerbType.Owns)
 ```
 
-
+That's it! No config, no setup, it just works™
````
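For readers skimming this diff: the `searchText` call in the new quick start boils down to embedding the query and ranking stored vectors by similarity. A toy version of that ranking step follows; the tiny 3-dimensional "embeddings" and the brute-force scan are purely illustrative (Brainy uses a real embedding model and an HNSW index):

```javascript
// Toy nearest-neighbor ranking (illustration only).
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]
    na += a[i] * a[i]
    nb += b[i] * b[i]
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb))
}

function topK(queryVec, items, k) {
  // Score every item against the query, highest similarity first
  return items
    .map(item => ({ ...item, score: cosine(queryVec, item.vector) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k)
}

// Hand-made 3-d vectors standing in for sentence embeddings
const items = [
  { id: 'cat', vector: [0.9, 0.1, 0.0] },
  { id: 'dog', vector: [0.8, 0.3, 0.1] },
  { id: 'car', vector: [0.0, 0.2, 0.9] }
]
const best = topK([1, 0, 0], items, 2) // query vector close to "cat"
```

The point of an index like HNSW is to get the same top-k answer without scoring every stored vector.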
````diff
 
+### Distributed Mode Example (NEW!)
+```javascript
+// Writer Instance - Ingests data from multiple sources
+const writer = createAutoBrainy({
+  storage: { type: 's3', bucket: 'my-bucket' },
+  distributed: { role: 'writer' } // Explicit role for safety
+})
 
-
-  - Learns from query patterns to optimize future searches
-  - Tunes itself for your specific use cases
-4. **Intelligent Storage Selection** - Uses the best available storage option for your environment:
-  - Browser: Origin Private File System (OPFS)
-  - Node.js: File system
-  - Server: S3-compatible storage (optional)
-  - Serverless: In-memory storage with optional cloud persistence
-  - Fallback: In-memory storage
-  - Automatically migrates between storage types as needed
-  - Uses a simplified, consolidated storage structure for all noun types
+// Reader Instance - Optimized for search queries
+const reader = createAutoBrainy({
+  storage: { type: 's3', bucket: 'my-bucket' },
+  distributed: { role: 'reader' } // 80% memory for cache
+})
 
-
+// Data automatically gets domain tags
+await writer.add("Patient shows symptoms of...", {
+  diagnosis: "flu" // Auto-tagged as 'medical' domain
+})
 
-
+// Domain-aware search across all partitions
+const results = await reader.search("medical symptoms", 10, {
+  filter: { domain: 'medical' } // Only search medical data
+})
 
+// Monitor health across all instances
+const health = reader.getHealthStatus()
+console.log(`Instance ${health.instanceId}: ${health.status}`)
 ```
````
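The "auto-tagged as 'medical' domain" comment above is the visible surface of the new `domainDetector` module, whose implementation is not part of this diff (only its `.d.ts` is listed). The sketch below guesses at the general shape with simple keyword matching; `DOMAIN_KEYWORDS` and `detectDomain` are hypothetical names, not Brainy's API:

```javascript
// Keyword-based domain tagging sketch (an assumption about the approach;
// not Brainy's actual domainDetector).
const DOMAIN_KEYWORDS = {
  medical: ['patient', 'symptom', 'diagnosis', 'treatment'],
  legal: ['contract', 'plaintiff', 'statute', 'liability'],
  product: ['sku', 'price', 'inventory', 'cart']
}

function detectDomain(text, metadata = {}) {
  // Fold the text and metadata key/value pairs into one searchable string
  const haystack = (text + ' ' + Object.entries(metadata).flat().join(' ')).toLowerCase()
  let best = 'general'
  let bestHits = 0
  for (const [domain, words] of Object.entries(DOMAIN_KEYWORDS)) {
    const hits = words.filter(w => haystack.includes(w)).length
    if (hits > bestHits) { best = domain; bestHits = hits }
  }
  return best
}

const tag = detectDomain('Patient shows symptoms of...', { diagnosis: 'flu' })
```

Tagging at write time is what lets the reader's `filter: { domain: 'medical' }` skip unrelated partitions entirely.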
````diff
-Raw Data → Embedding → Vector Storage → Graph Connections → Adaptive Learning → Query & Retrieval
-```
-
-Each time data flows through this pipeline, Brainy learns more about your usage patterns and environment, making future
-operations faster and more relevant.
-
-### Pipeline Stages
-
-1. **Data Ingestion**
-   - Raw text or pre-computed vectors enter the pipeline
-   - Data is validated and prepared for processing
-
-2. **Embedding Generation**
-   - Text is transformed into numerical vectors using embedding models
-   - Uses TensorFlow Universal Sentence Encoder for high-quality text embeddings
-   - Custom embedding functions can be plugged in for specialized domains
-
-3. **Vector Indexing**
-   - Vectors are indexed using the HNSW algorithm
-   - Hierarchical structure enables fast similarity search
-   - Configurable parameters for precision vs. performance tradeoffs
-
-4. **Graph Construction**
-   - Nouns (entities) become nodes in the knowledge graph
-   - Verbs (relationships) connect related entities
-   - Typed relationships add semantic meaning to connections
-
-5. **Adaptive Learning**
-   - Analyzes usage patterns to optimize future operations
-   - Tunes performance parameters based on your environment
-   - Adjusts search strategies based on query history
-   - Becomes more efficient and relevant the more you use it
-
-6. **Intelligent Storage**
-   - Data is saved using the optimal storage for your environment
-   - Automatic selection between OPFS, filesystem, S3, or memory
-   - Migrates between storage types as your application's needs evolve
-   - Scales from tiny datasets to massive data collections
-   - Configurable storage adapters for custom persistence needs
-
-### Augmentation Types
-
-Brainy uses a powerful augmentation system to extend functionality. Augmentations are processed in the following order:
-
-1. **SENSE**
-   - Ingests and processes raw, unstructured data into nouns and verbs
-   - Handles text, images, audio streams, and other input formats
-   - Example: Converting raw text into structured entities
-
-2. **MEMORY**
-   - Provides storage capabilities for data in different formats
-   - Manages persistence across sessions
-   - Example: Storing vectors in OPFS or filesystem
-
-3. **COGNITION**
-   - Enables advanced reasoning, inference, and logical operations
-   - Analyzes relationships between entities
-   - Examples:
-     - Inferring new connections between existing data
-     - Deriving insights from graph relationships
-
-4. **CONDUIT**
-   - Establishes channels for structured data exchange
-   - Connects with external systems and syncs between Brainy instances
-   - Two built-in iConduit augmentations for scaling out and syncing:
-     - **WebSocket iConduit** - Syncs data between browsers and servers
-     - **WebRTC iConduit** - Direct peer-to-peer syncing between browsers
-   - Examples:
-     - Integrating with third-party APIs
-     - Syncing Brainy instances between browsers using WebSockets
-     - Peer-to-peer syncing between browsers using WebRTC
-
-5. **ACTIVATION**
-   - Initiates actions, responses, or data manipulations
-   - Triggers events based on data changes
-   - Example: Sending notifications when new data is processed
-
-6. **PERCEPTION**
-   - Interprets, contextualizes, and visualizes identified nouns and verbs
-   - Creates meaningful representations of data
-   - Example: Generating visualizations of graph relationships
-
-7. **DIALOG**
-   - Facilitates natural language understanding and generation
-   - Enables conversational interactions
-   - Example: Processing user queries and generating responses
-
-8. **WEBSOCKET**
-   - Enables real-time communication via WebSockets
-   - Can be combined with other augmentation types
-   - Example: Streaming data processing in real-time
-
-### Streaming Data Support
-
-Brainy's pipeline is designed to handle streaming data efficiently:
-
-1. **WebSocket Integration**
-   - Built-in support for WebSocket connections
-   - Process data as it arrives without blocking
-   - Example: `setupWebSocketPipeline(url, dataType, options)`
-
-2. **Asynchronous Processing**
-   - Non-blocking architecture for real-time data handling
-   - Parallel processing of incoming streams
-   - Example: `createWebSocketHandler(connection, dataType, options)`
-
-3. **Event-Based Architecture**
-   - Augmentations can listen to data feeds and streams
-   - Real-time updates propagate through the pipeline
-   - Example: `listenToFeed(feedUrl, callback)`
-
-4. **Threaded Execution**
-   - Comprehensive multi-threading for high-performance operations
-   - Parallel processing for batch operations, vector calculations, and embedding generation
-   - Configurable execution modes (SEQUENTIAL, PARALLEL, THREADED)
-   - Automatic thread management based on environment capabilities
-   - Example: `executeTypedPipeline(augmentations, method, args, { mode: ExecutionMode.THREADED })`
-
-### Running the Pipeline
-
-The pipeline runs automatically when you:
-
-```typescript
-// Add data (runs embedding → indexing → storage)
-const id = await db.add("Your text data here", { metadata })
 
-
-const results = await db.searchText("Your query here", 5)
+## Key Features
 
-
+### Core Capabilities
+- **Vector Search** - Find semantically similar content using embeddings
+- **Graph Relationships** - Connect data with meaningful relationships
+- **JSON Document Search** - Search within specific fields with prioritization
+- **Distributed Mode** - Scale horizontally with automatic coordination between instances
+- **Real-Time Syncing** - WebSocket and WebRTC for distributed instances
+- **Streaming Pipeline** - Process data in real-time as it flows through
+- **Model Control Protocol** - Let AI models access your data
+
+### Smart Optimizations
+- **Auto-Configuration** - Detects environment and optimizes automatically
+- **Adaptive Learning** - Gets smarter with usage, optimizes itself over time
+- **Intelligent Partitioning** - Hash-based partitioning for perfect load distribution
+- **Role-Based Optimization** - Readers maximize cache, writers optimize throughput
+- **Domain-Aware Indexing** - Automatic categorization improves search relevance
+- **Multi-Level Caching** - Hot/warm/cold caching with predictive prefetching
+- **Memory Optimization** - 75% reduction with compression for large datasets
+
+### Developer Experience
+- **TypeScript Support** - Fully typed API with generics
+- **Extensible Augmentations** - Customize and extend functionality
+- **REST API** - Web service wrapper for HTTP endpoints
+- **Auto-Complete** - IntelliSense for all APIs and types
````
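The "Multi-Level Caching" bullet corresponds to the new `enhancedCacheManager`, which this diff only exposes as a `.d.ts` file. The class below illustrates the hot/warm half of the hot/warm/cold idea as an assumption about the design, not Brainy's code: a small LRU "hot" tier demotes evicted entries to a larger "warm" map instead of dropping them, and warm hits are promoted back.

```javascript
// Two-level (hot/warm) cache sketch (illustrative; not Brainy's cache manager).
class TwoLevelCache {
  constructor(hotSize) {
    this.hotSize = hotSize
    this.hot = new Map()  // Map preserves insertion order, giving a cheap LRU
    this.warm = new Map()
  }
  get(key) {
    if (this.hot.has(key)) {
      const v = this.hot.get(key)
      this.hot.delete(key)
      this.hot.set(key, v) // re-insert to refresh recency
      return v
    }
    if (this.warm.has(key)) {
      const v = this.warm.get(key)
      this.warm.delete(key)
      this.set(key, v)     // promote warm hit back into the hot tier
      return v
    }
    return undefined
  }
  set(key, value) {
    this.hot.delete(key)
    this.hot.set(key, value)
    if (this.hot.size > this.hotSize) {
      // Demote the least-recently-used hot entry to warm instead of dropping it
      const [oldKey, oldVal] = this.hot.entries().next().value
      this.hot.delete(oldKey)
      this.warm.set(oldKey, oldVal)
    }
  }
}
```

A real "cold" tier would spill the warm map to disk or object storage; the promotion/demotion logic stays the same shape.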
312
137
|
|
|
313
|
-
|
|
138
|
+
## ๐ฆ Installation
|
|
314
139
|
|
|
140
|
+
### Main Package
|
|
315
141
|
```bash
|
|
316
|
-
|
|
317
|
-
brainy add "Your text data here" '{"noun":"Thing"}'
|
|
318
|
-
|
|
319
|
-
# Search through the CLI pipeline
|
|
320
|
-
brainy search "Your query here" --limit 5
|
|
321
|
-
|
|
322
|
-
# Connect entities through the CLI
|
|
323
|
-
brainy addVerb <sourceId> <targetId> RelatedTo
|
|
324
|
-
```
|
|
325
|
-
|
|
326
|
-
### Extending the Pipeline
|
|
327
|
-
|
|
328
|
-
Brainy's pipeline is designed for extensibility at every stage:
|
|
329
|
-
|
|
330
|
-
1. **Custom Embedding**
|
|
331
|
-
```typescript
|
|
332
|
-
// Create your own embedding function
|
|
333
|
-
const myEmbedder = async (text) => {
|
|
334
|
-
// Your custom embedding logic here
|
|
335
|
-
return [0.1, 0.2, 0.3, ...] // Return a vector
|
|
336
|
-
}
|
|
337
|
-
|
|
338
|
-
// Use it in Brainy
|
|
339
|
-
const db = new BrainyData({
|
|
340
|
-
embeddingFunction: myEmbedder
|
|
341
|
-
})
|
|
342
|
-
```
|
|
343
|
-
|
|
344
|
-
2. **Custom Distance Functions**
|
|
345
|
-
```typescript
|
|
346
|
-
// Define your own distance function
|
|
347
|
-
const myDistance = (a, b) => {
|
|
348
|
-
// Your custom distance calculation
|
|
349
|
-
return Math.sqrt(a.reduce((sum, val, i) => sum + Math.pow(val - b[i], 2), 0))
|
|
350
|
-
}
|
|
351
|
-
|
|
352
|
-
// Use it in Brainy
|
|
353
|
-
const db = new BrainyData({
|
|
354
|
-
distanceFunction: myDistance
|
|
355
|
-
})
|
|
356
|
-
```
|
|
357
|
-
|
|
358
|
-
3. **Custom Storage Adapters**
|
|
359
|
-
```typescript
|
|
360
|
-
// Implement the StorageAdapter interface
|
|
361
|
-
class MyStorage implements StorageAdapter {
|
|
362
|
-
// Your storage implementation
|
|
363
|
-
}
|
|
364
|
-
|
|
365
|
-
// Use it in Brainy
|
|
366
|
-
const db = new BrainyData({
|
|
367
|
-
storageAdapter: new MyStorage()
|
|
368
|
-
})
|
|
369
|
-
```
|
|
370
|
-
|
|
371
|
-
4. **Augmentations System**
|
|
372
|
-
```typescript
|
|
373
|
-
// Create custom augmentations to extend functionality
|
|
374
|
-
const myAugmentation = {
|
|
375
|
-
type: 'memory',
|
|
376
|
-
name: 'my-custom-storage',
|
|
377
|
-
// Implementation details
|
|
378
|
-
}
|
|
379
|
-
|
|
380
|
-
// Register with Brainy
|
|
381
|
-
db.registerAugmentation(myAugmentation)
|
|
382
|
-
```
|
|
383
|
-
|
|
384
|
-
## Data Model
|
|
385
|
-
|
|
386
|
-
Brainy uses a graph-based data model with two primary concepts:
|
|
387
|
-
|
|
388
|
-
### Nouns (Entities)
|
|
389
|
-
|
|
390
|
-
The main entities in your data (nodes in the graph):
|
|
391
|
-
|
|
392
|
-
- Each noun has a unique ID, vector representation, and metadata
|
|
393
|
-
- Nouns can be categorized by type (Person, Place, Thing, Event, Concept, etc.)
|
|
394
|
-
- Nouns are automatically vectorized for similarity search
|
|
395
|
-
|
|
396
|
-
### Verbs (Relationships)
|
|
397
|
-
|
|
398
|
-
Connections between nouns (edges in the graph):
|
|
399
|
-
|
|
400
|
-
- Each verb connects a source noun to a target noun
|
|
401
|
-
- Verbs have types that define the relationship (RelatedTo, Controls, Contains, etc.)
|
|
402
|
-
- Verbs can have their own metadata to describe the relationship
|
|
403
|
-
|
|
404
|
-
### Type Utilities
|
|
405
|
-
|
|
406
|
-
Brainy provides utility functions to access lists of noun and verb types:
|
|
407
|
-
|
|
408
|
-
```typescript
|
|
409
|
-
import {
|
|
410
|
-
NounType,
|
|
411
|
-
VerbType,
|
|
412
|
-
getNounTypes,
|
|
413
|
-
getVerbTypes,
|
|
414
|
-
getNounTypeMap,
|
|
415
|
-
getVerbTypeMap
|
|
416
|
-
} from '@soulcraft/brainy'
|
|
417
|
-
|
|
418
|
-
// At development time:
|
|
419
|
-
// Access specific types directly from the NounType and VerbType objects
|
|
420
|
-
console.log(NounType.Person) // 'person'
|
|
421
|
-
console.log(VerbType.Contains) // 'contains'
|
|
422
|
-
|
|
423
|
-
// At runtime:
|
|
424
|
-
// Get a list of all noun types
|
|
425
|
-
const nounTypes = getNounTypes() // ['person', 'organization', 'location', ...]
|
|
426
|
-
|
|
427
|
-
// Get a list of all verb types
|
|
428
|
-
const verbTypes = getVerbTypes() // ['relatedTo', 'contains', 'partOf', ...]
|
|
429
|
-
|
|
430
|
-
// Get a map of noun type keys to values
|
|
431
|
-
const nounTypeMap = getNounTypeMap() // { Person: 'person', Organization: 'organization', ... }
|
|
432
|
-
|
|
433
|
-
// Get a map of verb type keys to values
|
|
434
|
-
const verbTypeMap = getVerbTypeMap() // { RelatedTo: 'relatedTo', Contains: 'contains', ... }
|
|
142
|
+
npm install brainy
|
|
435
143
|
```
|
|
436
144
|
|
|
437
|
-
|
|
438
|
-
|
|
439
|
-
- Get a complete list of available noun and verb types
|
|
440
|
-
- Validate user input against valid types
|
|
441
|
-
- Create dynamic UI components that display or select from available types
|
|
442
|
-
- Map between type keys and their string values
|
|
443
|
-
|
|
444
|
-
## Command Line Interface
|
|
445
|
-
|
|
446
|
-
Brainy includes a powerful CLI for managing your data. The CLI is available as a separate package
|
|
447
|
-
`@soulcraft/brainy-cli` to reduce the bundle size of the main package.
|
|
448
|
-
|
|
449
|
-
### Installing and Using the CLI
|
|
450
|
-
|
|
145
|
+
### Optional: Offline Models Package
|
|
451
146
|
```bash
|
|
452
|
-
|
|
453
|
-
npm install -g @soulcraft/brainy-cli
|
|
454
|
-
|
|
455
|
-
# Initialize a database
|
|
456
|
-
brainy init
|
|
457
|
-
|
|
458
|
-
# Add some data
|
|
459
|
-
brainy add "Cats are independent pets" '{"noun":"Thing","category":"animal"}'
|
|
460
|
-
brainy add "Dogs are loyal companions" '{"noun":"Thing","category":"animal"}'
|
|
461
|
-
|
|
462
|
-
# Search for similar items
|
|
463
|
-
brainy search "feline pets" 5
|
|
464
|
-
|
|
465
|
-
# Add relationships between items
|
|
466
|
-
brainy addVerb <sourceId> <targetId> RelatedTo '{"description":"Both are pets"}'
|
|
467
|
-
|
|
468
|
-
# Visualize the graph structure
|
|
469
|
-
brainy visualize
|
|
470
|
-
brainy visualize --root <id> --depth 3
|
|
147
|
+
npm install @soulcraft/brainy-models
|
|
471
148
|
```
|
|
472
149
|
|
|
473
|
-
|
|
150
|
+
The `@soulcraft/brainy-models` package provides **offline access** to the Universal Sentence Encoder model, eliminating network dependencies and ensuring consistent performance. Perfect for:
|
|
151
|
+
- **Air-gapped environments** - No internet? No problem
|
|
152
|
+
- **Consistent performance** - No network latency or throttling
|
|
153
|
+
- **Privacy-focused apps** - Keep everything local
|
|
154
|
+
- **High-reliability systems** - No external dependencies
|
|
474
155
|
|
|
475
|
-
|
|
476
|
-
|
|
156
|
+
```javascript
|
|
157
|
+
import { createAutoBrainy } from 'brainy'
|
|
158
|
+
import { BundledUniversalSentenceEncoder } from '@soulcraft/brainy-models'
|
|
477
159
|
|
|
478
|
-
|
|
479
|
-
|
|
160
|
+
// Use the bundled model for offline operation
|
|
161
|
+
const brainy = createAutoBrainy({
|
|
162
|
+
embeddingModel: BundledUniversalSentenceEncoder
|
|
163
|
+
})
|
|
480
164
|
```
|
|
481
165
|
|
|
482
|
-
|
|
166
|
+
## ๐จ Build Amazing Things
|
|
483
167
|
|
|
484
|
-
|
|
168
|
+
**๐ค AI Chat Applications** - Build ChatGPT-like apps with long-term memory and context awareness
|
|
169
|
+
**๐ Semantic Search Engines** - Search by meaning, not keywords. Find "that thing that's like a cat but bigger" โ returns "tiger"
|
|
170
|
+
**๐ฏ Recommendation Engines** - "Users who liked this also liked..." but actually good
|
|
171
|
+
**๐งฌ Knowledge Graphs** - Connect everything to everything. Wikipedia meets Neo4j meets magic
|
|
172
|
+
**๐๏ธ Computer Vision Apps** - Store and search image embeddings. "Find all photos with dogs wearing hats"
|
|
173
|
+
**๐ต Music Discovery** - Find songs that "feel" similar. Spotify's Discover Weekly in your app
|
|
174
|
+
**๐ Smart Documentation** - Docs that answer questions. "How do I deploy to production?" โ relevant guides
|
|
175
|
+
**๐ก๏ธ Fraud Detection** - Find patterns humans can't see. Anomaly detection on steroids
|
|
176
|
+
**๐ Real-Time Collaboration** - Sync vector data across devices. Figma for AI data
|
|
177
|
+
**๐ฅ Medical Diagnosis Tools** - Match symptoms to conditions using embedding similarity
|
|
485
178
|
|
|
486
|
-
|
|
179
|
+
## ๐งฌ The Power of Nouns & Verbs
|
|
487
180
|
|
|
488
|
-
-
|
|
489
|
-
- `add <text> [metadata]` - Add a new noun with text and optional metadata
|
|
490
|
-
- `search <query> [limit]` - Search for nouns similar to the query
|
|
491
|
-
- `get <id>` - Get a noun by ID
|
|
492
|
-
- `delete <id>` - Delete a noun by ID
|
|
493
|
-
- `addVerb <sourceId> <targetId> <verbType> [metadata]` - Add a relationship
|
|
494
|
-
- `getVerbs <id>` - Get all relationships for a noun
|
|
495
|
-
- `status` - Show database status
|
|
496
|
-
- `clear` - Clear all data from the database
|
|
497
|
-
- `generate-random-graph` - Generate test data
|
|
498
|
-
- `visualize` - Visualize the graph structure
|
|
499
|
-
- `completion-setup` - Setup shell autocomplete
|
|
181
|
+
Brainy uses a **graph-based data model** that mirrors how humans think - with **Nouns** (entities) connected by **Verbs** (relationships). This isn't just vectors in a void; it's structured, meaningful data.
|
|
500
182
|
|
|
501
|
-
|
|
183
|
+
### ๐ Nouns (What Things Are)
|
|
502
184
|
|
|
503
|
-
|
|
504
|
-
-
|
|
505
|
-
-
|
|
506
|
-
|
|
507
|
-
|
|
508
|
-
- `-s, --stop-on-error` - Stop execution if an error occurs
|
|
509
|
-
- `-v, --verbose` - Show detailed output
|
|
510
|
-
- `stream-test` - Test streaming data through the pipeline (simulated)
|
|
511
|
-
- `-c, --count <number>` - Number of data items to stream (default: 5)
|
|
512
|
-
- `-i, --interval <ms>` - Interval between data items in milliseconds (default: 1000)
|
|
513
|
-
- `-t, --data-type <type>` - Type of data to process (default: 'text')
|
|
514
|
-
- `-v, --verbose` - Show detailed output
|
|
185
|
+
Nouns are your entities - the "things" in your data. Each noun has:
|
|
186
|
+
- A unique ID
|
|
187
|
+
- A vector representation (for similarity search)
|
|
188
|
+
- A type (Person, Document, Concept, etc.)
|
|
189
|
+
- Custom metadata
|
|
515
190
|
|
|
516
|
-
|
|
191
|
+
**Available Noun Types:**
|
|
517
192
|
|
|
518
|
-
|
|
193
|
+
| Category | Types | Use For |
|
|
194
|
+
|----------|-------|---------|
|
|
195
|
+
| **Core Entities** | `Person`, `Organization`, `Location`, `Thing`, `Concept`, `Event` | People, companies, places, objects, ideas, happenings |
|
|
196
|
+
| **Digital Content** | `Document`, `Media`, `File`, `Message`, `Content` | PDFs, images, videos, emails, posts, generic content |
|
|
197
|
+
| **Collections** | `Collection`, `Dataset` | Groups of items, structured data sets |
|
|
198
|
+
| **Business** | `Product`, `Service`, `User`, `Task`, `Project` | E-commerce, SaaS, project management |
|
|
199
|
+
| **Descriptive** | `Process`, `State`, `Role` | Workflows, conditions, responsibilities |
|
|
519
200
|
|
|
520
|
-
|
|
521
|
-
// Initialize the database
|
|
522
|
-
await db.init()
|
|
201
|
+
### 🔗 Verbs (How Things Connect)
|
|
523
202
|
|
|
524
|
-
|
|
525
|
-
await db.clear()
|
|
203
|
+
Verbs are your relationships - they give meaning to connections. Not just "these vectors are similar" but "this OWNS that" or "this CAUSES that".
|
|
526
204
|
|
|
527
|
-
|
|
528
|
-
const status = await db.status()
|
|
205
|
+
**Available Verb Types:**
|
|
529
206
|
|
|
530
|
-
|
|
531
|
-
|
|
207
|
+
| Category | Types | Examples |
|
|
208
|
+
|----------|-------|----------|
|
|
209
|
+
| **Core** | `RelatedTo`, `Contains`, `PartOf`, `LocatedAt`, `References` | Generic relations, containment, location |
|
|
210
|
+
| **Temporal** | `Precedes`, `Succeeds`, `Causes`, `DependsOn`, `Requires` | Time sequences, causality, dependencies |
|
|
211
|
+
| **Creation** | `Creates`, `Transforms`, `Becomes`, `Modifies`, `Consumes` | Creation, change, consumption |
|
|
212
|
+
| **Ownership** | `Owns`, `AttributedTo`, `CreatedBy`, `BelongsTo` | Ownership, authorship, belonging |
|
|
213
|
+
| **Social** | `MemberOf`, `WorksWith`, `FriendOf`, `Follows`, `Likes`, `ReportsTo` | Social networks, organizations |
|
|
214
|
+
| **Functional** | `Describes`, `Implements`, `Validates`, `Triggers`, `Serves` | Functions, implementations, services |
|
|
532
215
|
|
|
533
|
-
|
|
534
|
-
const restoreResult = await db.restore(backupData, { clearExisting: true })
|
|
535
|
-
```
|
|
216
|
+
### 💡 Why This Matters
|
|
536
217
|
|
|
537
|
-
|
|
218
|
+
```javascript
|
|
219
|
+
// Traditional vector DB: Just similarity
|
|
220
|
+
const similar = await vectorDB.search(embedding, 10)
|
|
221
|
+
// Result: [vector1, vector2, ...] - What do these mean? 🤷
|
|
538
222
|
|
|
539
|
-
Brainy
|
|
540
|
-
|
|
541
|
-
|
|
223
|
+
// Brainy: Similarity + Meaning + Relationships
|
|
224
|
+
const catId = await brainy.add("Siamese cat", {
|
|
225
|
+
noun: NounType.Thing,
|
|
226
|
+
breed: "Siamese"
|
|
227
|
+
})
|
|
228
|
+
const ownerId = await brainy.add("John Smith", {
|
|
229
|
+
noun: NounType.Person
|
|
230
|
+
})
|
|
231
|
+
await brainy.addVerb(ownerId, catId, {
|
|
232
|
+
verb: VerbType.Owns,
|
|
233
|
+
since: "2020-01-01"
|
|
234
|
+
})
|
|
542
235
|
|
|
543
|
-
|
|
544
|
-
|
|
236
|
+
// Now you can search with context!
|
|
237
|
+
const johnsPets = await brainy.getVerbsBySource(ownerId, VerbType.Owns)
|
|
238
|
+
const catOwners = await brainy.getVerbsByTarget(catId, VerbType.Owns)
|
|
239
|
+
```
|
|
545
240
|
|
|
546
|
-
|
|
547
|
-
const db = new BrainyData()
|
|
548
|
-
await db.init()
|
|
241
|
+
## 🌐 Distributed Mode (New!)
|
|
549
242
|
|
|
550
|
-
|
|
551
|
-
const stats = await db.getStatistics()
|
|
552
|
-
console.log(stats)
|
|
553
|
-
// Output: { nounCount: 0, verbCount: 0, metadataCount: 0, hnswIndexSize: 0, serviceBreakdown: {...} }
|
|
554
|
-
```
|
|
243
|
+
Brainy now supports **distributed deployments** with multiple specialized instances sharing the same data. Perfect for scaling your AI applications across multiple servers.
|
|
555
244
|
|
|
556
|
-
###
|
|
245
|
+
### Distributed Setup
|
|
557
246
|
|
|
558
|
-
```
|
|
559
|
-
//
|
|
560
|
-
const
|
|
561
|
-
|
|
562
|
-
// other metadata...
|
|
247
|
+
```javascript
|
|
248
|
+
// Single instance (no change needed!)
|
|
249
|
+
const brainy = createAutoBrainy({
|
|
250
|
+
storage: { type: 's3', bucket: 'my-bucket' }
|
|
563
251
|
})
|
|
564
252
|
|
|
565
|
-
//
|
|
566
|
-
|
|
567
|
-
|
|
568
|
-
|
|
569
|
-
|
|
570
|
-
|
|
571
|
-
{
|
|
572
|
-
vectorOrData: "Second item to add",
|
|
573
|
-
metadata: { noun: NounType.Thing, category: 'example' }
|
|
574
|
-
},
|
|
575
|
-
// More items...
|
|
576
|
-
], {
|
|
577
|
-
forceEmbed: false,
|
|
578
|
-
concurrency: 4, // Control the level of parallelism (default: 4)
|
|
579
|
-
batchSize: 50 // Control the number of items to process in a single batch (default: 50)
|
|
253
|
+
// Distributed mode requires explicit role configuration
|
|
254
|
+
// Option 1: Via environment variable
|
|
255
|
+
process.env.BRAINY_ROLE = 'writer' // or 'reader' or 'hybrid'
|
|
256
|
+
const node = createAutoBrainy({
|
|
257
|
+
storage: { type: 's3', bucket: 'my-bucket' },
|
|
258
|
+
distributed: true
|
|
580
259
|
})
|
|
581
260
|
|
|
582
|
-
//
|
|
583
|
-
const
|
|
584
|
-
|
|
585
|
-
//
|
|
586
|
-
await db.updateMetadata(id, {
|
|
587
|
-
noun: NounType.Thing,
|
|
588
|
-
// updated metadata...
|
|
261
|
+
// Option 2: Via configuration
|
|
262
|
+
const writer = createAutoBrainy({
|
|
263
|
+
storage: { type: 's3', bucket: 'my-bucket' },
|
|
264
|
+
distributed: { role: 'writer' } // Handles data ingestion
|
|
589
265
|
})
|
|
590
266
|
|
|
591
|
-
|
|
592
|
-
|
|
593
|
-
|
|
594
|
-
|
|
595
|
-
const results = await db.search(vectorOrText, numResults)
|
|
596
|
-
const textResults = await db.searchText("query text", numResults)
|
|
597
|
-
|
|
598
|
-
// Search by noun type
|
|
599
|
-
const thingNouns = await db.searchByNounTypes([NounType.Thing], numResults)
|
|
267
|
+
const reader = createAutoBrainy({
|
|
268
|
+
storage: { type: 's3', bucket: 'my-bucket' },
|
|
269
|
+
distributed: { role: 'reader' } // Optimized for queries
|
|
270
|
+
})
|
|
600
271
|
|
|
601
|
-
//
|
|
602
|
-
const
|
|
603
|
-
|
|
272
|
+
// Option 3: Via read/write mode (role auto-inferred)
|
|
273
|
+
const writerNode = createAutoBrainy({
|
|
274
|
+
storage: { type: 's3', bucket: 'my-bucket' },
|
|
275
|
+
writeOnly: true, // Automatically becomes 'writer' role
|
|
276
|
+
distributed: true
|
|
604
277
|
})
|
|
605
278
|
|
|
606
|
-
|
|
607
|
-
|
|
608
|
-
|
|
609
|
-
|
|
279
|
+
const readerNode = createAutoBrainy({
|
|
280
|
+
storage: { type: 's3', bucket: 'my-bucket' },
|
|
281
|
+
readOnly: true, // Automatically becomes 'reader' role
|
|
282
|
+
distributed: true
|
|
610
283
|
})
|
|
611
284
|
```
|
|
612
285
|
|
|
613
|
-
###
|
|
286
|
+
### Key Distributed Features
|
|
614
287
|
|
|
615
|
-
|
|
616
|
-
|
|
288
|
+
**🎯 Explicit Role Configuration**
|
|
289
|
+
- Roles must be explicitly set (no dangerous auto-assignment)
|
|
290
|
+
- Can use environment variables, config, or read/write modes
|
|
291
|
+
- Clear separation between writers and readers
|
|
617
292
|
|
|
618
|
-
|
|
619
|
-
|
|
620
|
-
|
|
621
|
-
|
|
293
|
+
**#️⃣ Hash-Based Partitioning**
|
|
294
|
+
- Handles multiple writers with different data types
|
|
295
|
+
- Even distribution across partitions
|
|
296
|
+
- No semantic conflicts with mixed data
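The partitioner itself isn't shown in this README, but the idea can be sketched in a few lines. This is purely illustrative (the `partitionFor` helper and the partition count are invented here, not Brainy's internal API): a stable hash of the item's ID picks its partition, so every writer computes the same placement regardless of the data's content.

```javascript
// Illustrative only: stable FNV-1a hash of an id string -> partition index
function partitionFor(id, numPartitions = 8) {
  let hash = 0x811c9dc5
  for (let i = 0; i < id.length; i++) {
    hash ^= id.charCodeAt(i)
    hash = Math.imul(hash, 0x01000193) >>> 0
  }
  return hash % numPartitions
}

// The same id always lands on the same partition,
// and different ids spread roughly evenly across all partitions.
console.log(partitionFor('noun-123'))
```

Because the placement depends only on the ID, no coordination between writers is needed to agree on where an item lives.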
|
|
297
|
+
|
|
298
|
+
**🏷️ Domain Tagging**
|
|
299
|
+
- Automatic domain detection (medical, legal, product, etc.)
|
|
300
|
+
- Filter searches by domain
|
|
301
|
+
- Logical separation without complexity
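Brainy's actual detector isn't reproduced here, but keyword scoring is one plausible shape for this kind of detection. A minimal, purely illustrative sketch (the `DOMAIN_KEYWORDS` lists and the `detectDomain` helper are made up for this example):

```javascript
// Illustrative only: score an object's text against per-domain keyword lists
const DOMAIN_KEYWORDS = {
  medical: ['symptoms', 'diagnosis', 'patient', 'treatment'],
  legal: ['contract', 'clause', 'plaintiff', 'statute'],
  product: ['price', 'sku', 'inventory', 'brand']
}

function detectDomain(data) {
  const text = JSON.stringify(data).toLowerCase()
  let best = { domain: 'general', score: 0 }
  for (const [domain, words] of Object.entries(DOMAIN_KEYWORDS)) {
    const score = words.filter(w => text.includes(w)).length
    if (score > best.score) best = { domain, score }
  }
  return best.domain
}

console.log(detectDomain({ symptoms: 'fever', diagnosis: 'flu' })) // 'medical'
```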
|
|
622
302
|
|
|
623
|
-
|
|
624
|
-
|
|
625
|
-
|
|
303
|
+
```javascript
|
|
304
|
+
// Data is automatically tagged with domains
|
|
305
|
+
await brainy.add({
|
|
306
|
+
symptoms: "fever",
|
|
307
|
+
diagnosis: "flu"
|
|
308
|
+
}, metadata) // Auto-tagged as 'medical'
|
|
309
|
+
|
|
310
|
+
// Search within specific domains
|
|
311
|
+
const medicalResults = await brainy.search(query, 10, {
|
|
312
|
+
filter: { domain: 'medical' }
|
|
313
|
+
})
|
|
626
314
|
```
|
|
627
315
|
|
|
628
|
-
|
|
316
|
+
**📊 Health Monitoring**
|
|
317
|
+
- Real-time health metrics
|
|
318
|
+
- Automatic dead instance cleanup
|
|
319
|
+
- Performance tracking
|
|
629
320
|
|
|
630
|
-
```
|
|
631
|
-
//
|
|
632
|
-
|
|
321
|
+
```javascript
|
|
322
|
+
// Get health status
|
|
323
|
+
const health = brainy.getHealthStatus()
|
|
324
|
+
// {
|
|
325
|
+
// status: 'healthy',
|
|
326
|
+
// role: 'reader',
|
|
327
|
+
// vectorCount: 1000000,
|
|
328
|
+
// cacheHitRate: 0.95,
|
|
329
|
+
// requestsPerSecond: 150
|
|
330
|
+
// }
|
|
331
|
+
```
|
|
332
|
+
|
|
333
|
+
**⚡ Role-Optimized Performance**
|
|
334
|
+
- **Readers**: 80% memory for cache, aggressive prefetching
|
|
335
|
+
- **Writers**: Optimized write batching, minimal cache
|
|
336
|
+
- **Hybrid**: Adaptive based on workload
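As a rough mental model, a role can be thought of as selecting a resource budget. The numbers below are illustrative only (the 80% reader cache matches the bullet above; the writer and hybrid values, and the `budgetsForRole` helper itself, are invented for this sketch, not Brainy's actual tuning):

```javascript
// Illustrative only: map an instance role to resource budgets
function budgetsForRole(role) {
  switch (role) {
    case 'reader':
      return { cacheMemoryFraction: 0.8, writeBatchSize: 1, prefetch: true }
    case 'writer':
      return { cacheMemoryFraction: 0.1, writeBatchSize: 100, prefetch: false }
    default: // 'hybrid' adapts between the two based on observed workload
      return { cacheMemoryFraction: 0.5, writeBatchSize: 25, prefetch: true }
  }
}

console.log(budgetsForRole('reader').cacheMemoryFraction) // 0.8
```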
|
|
337
|
+
|
|
338
|
+
### Deployment Examples
|
|
339
|
+
|
|
340
|
+
**Docker Compose**
|
|
341
|
+
```yaml
|
|
342
|
+
services:
|
|
343
|
+
writer:
|
|
344
|
+
image: myapp
|
|
345
|
+
environment:
|
|
346
|
+
BRAINY_ROLE: writer # Required in distributed mode
|
|
347
|
+
|
|
348
|
+
reader:
|
|
349
|
+
image: myapp
|
|
350
|
+
environment:
|
|
351
|
+
BRAINY_ROLE: reader # Required in distributed mode
|
|
352
|
+
scale: 5
|
|
353
|
+
```
|
|
354
|
+
|
|
355
|
+
**Kubernetes**
|
|
356
|
+
```yaml
|
|
357
|
+
# Set the role explicitly via the BRAINY_ROLE environment variable
|
|
358
|
+
apiVersion: apps/v1
|
|
359
|
+
kind: Deployment
|
|
360
|
+
metadata:
|
|
361
|
+
name: brainy-readers
|
|
362
|
+
spec:
|
|
363
|
+
replicas: 10 # Multiple readers
|
|
364
|
+
template:
|
|
365
|
+
spec:
|
|
366
|
+
containers:
|
|
367
|
+
- name: app
|
|
368
|
+
image: myapp
|
|
369
|
+
# Set BRAINY_ROLE=reader explicitly for these replicas
|
|
370
|
+
```
|
|
371
|
+
|
|
372
|
+
**Benefits**
|
|
373
|
+
- ✅ **50-70% faster searches** with parallel readers
|
|
374
|
+
- ✅ **No coordination complexity** - Shared JSON config in S3
|
|
375
|
+
- ✅ **Zero downtime scaling** - Add/remove instances anytime
|
|
376
|
+
- ✅ **Automatic failover** - Dead instances cleaned up automatically
|
|
377
|
+
|
|
378
|
+
## 🤔 Why Choose Brainy?
|
|
379
|
+
|
|
380
|
+
### vs. Traditional Databases
|
|
381
|
+
❌ **PostgreSQL with pgvector** - Requires complex setup, tuning, and DevOps expertise
|
|
382
|
+
✅ **Brainy** - Zero config, auto-optimizes, works everywhere from browser to cloud
|
|
383
|
+
|
|
384
|
+
### vs. Vector Databases
|
|
385
|
+
❌ **Pinecone/Weaviate/Qdrant** - Cloud-only, expensive, vendor lock-in
|
|
386
|
+
✅ **Brainy** - Run locally, in browser, or cloud. Your choice, your data
|
|
387
|
+
|
|
388
|
+
### vs. Graph Databases
|
|
389
|
+
❌ **Neo4j** - Great for graphs, no vector support
|
|
390
|
+
✅ **Brainy** - Vectors + graphs in one. Best of both worlds
|
|
391
|
+
|
|
392
|
+
### vs. DIY Solutions
|
|
393
|
+
❌ **Building your own** - Months of work, optimization nightmares
|
|
394
|
+
✅ **Brainy** - Production-ready in 30 seconds
|
|
395
|
+
|
|
396
|
+
## 🚀 Getting Started in 30 Seconds
|
|
397
|
+
|
|
398
|
+
### React
|
|
399
|
+
|
|
400
|
+
```jsx
|
|
401
|
+
import { createAutoBrainy } from '@soulcraft/brainy'
|
|
402
|
+
import { useEffect, useState } from 'react'
|
|
403
|
+
|
|
404
|
+
function SemanticSearch() {
|
|
405
|
+
const [brainy] = useState(() => createAutoBrainy())
|
|
406
|
+
const [results, setResults] = useState([])
|
|
407
|
+
|
|
408
|
+
const search = async (query) => {
|
|
409
|
+
const items = await brainy.searchText(query, 10)
|
|
410
|
+
setResults(items)
|
|
411
|
+
}
|
|
412
|
+
|
|
413
|
+
return (
|
|
414
|
+
<input onChange={(e) => search(e.target.value)}
|
|
415
|
+
placeholder="Search by meaning..." />
|
|
416
|
+
)
|
|
417
|
+
}
|
|
633
418
|
```
|
|
634
419
|
|
|
635
|
-
###
|
|
420
|
+
### Angular
|
|
636
421
|
|
|
637
422
|
```typescript
|
|
638
|
-
|
|
639
|
-
|
|
640
|
-
|
|
641
|
-
|
|
423
|
+
import { Component, OnInit } from '@angular/core'
|
|
424
|
+
import { createAutoBrainy } from '@soulcraft/brainy'
|
|
425
|
+
|
|
426
|
+
@Component({
|
|
427
|
+
selector: 'app-search',
|
|
428
|
+
template: `
|
|
429
|
+
<input (input)="search($event.target.value)"
|
|
430
|
+
placeholder="Semantic search...">
|
|
431
|
+
<div *ngFor="let result of results">
|
|
432
|
+
{{ result.text }}
|
|
433
|
+
</div>
|
|
434
|
+
`
|
|
642
435
|
})
|
|
436
|
+
export class SearchComponent {
|
|
437
|
+
brainy = createAutoBrainy()
|
|
438
|
+
results = []
|
|
643
439
|
|
|
644
|
-
|
|
645
|
-
|
|
646
|
-
await db.addVerb(sourceId, targetId, {
|
|
647
|
-
verb: VerbType.RelatedTo,
|
|
648
|
-
// Enable auto-creation of missing nouns
|
|
649
|
-
autoCreateMissingNouns: true,
|
|
650
|
-
// Optional metadata for auto-created nouns
|
|
651
|
-
missingNounMetadata: {
|
|
652
|
-
noun: NounType.Concept,
|
|
653
|
-
description: 'Auto-created noun'
|
|
440
|
+
async search(query: string) {
|
|
441
|
+
this.results = await this.brainy.searchText(query, 10)
|
|
654
442
|
}
|
|
655
|
-
}
|
|
443
|
+
}
|
|
444
|
+
```
|
|
656
445
|
|
|
657
|
-
|
|
658
|
-
const verbs = await db.getAllVerbs()
|
|
446
|
+
### Vue 3
|
|
659
447
|
|
|
660
|
-
|
|
661
|
-
|
|
448
|
+
```vue
|
|
449
|
+
<script setup>
|
|
450
|
+
import { createAutoBrainy } from '@soulcraft/brainy'
|
|
451
|
+
import { ref } from 'vue'
|
|
662
452
|
|
|
663
|
-
|
|
664
|
-
const
|
|
453
|
+
const brainy = createAutoBrainy()
|
|
454
|
+
const results = ref([])
|
|
665
455
|
|
|
666
|
-
|
|
667
|
-
|
|
456
|
+
const search = async (query) => {
|
|
457
|
+
results.value = await brainy.searchText(query, 10)
|
|
458
|
+
}
|
|
459
|
+
</script>
|
|
668
460
|
|
|
669
|
-
|
|
670
|
-
|
|
461
|
+
<template>
|
|
462
|
+
<input @input="search($event.target.value)"
|
|
463
|
+
placeholder="Find similar content...">
|
|
464
|
+
<div v-for="result in results" :key="result.id">
|
|
465
|
+
{{ result.text }}
|
|
466
|
+
</div>
|
|
467
|
+
</template>
|
|
468
|
+
```
|
|
469
|
+
|
|
470
|
+
### Svelte
|
|
471
|
+
|
|
472
|
+
```svelte
|
|
473
|
+
<script>
|
|
474
|
+
import { createAutoBrainy } from '@soulcraft/brainy'
|
|
475
|
+
|
|
476
|
+
const brainy = createAutoBrainy()
|
|
477
|
+
let results = []
|
|
478
|
+
|
|
479
|
+
async function search(e) {
|
|
480
|
+
results = await brainy.searchText(e.target.value, 10)
|
|
481
|
+
}
|
|
482
|
+
</script>
|
|
671
483
|
|
|
672
|
-
|
|
673
|
-
|
|
484
|
+
<input on:input={search} placeholder="AI-powered search...">
|
|
485
|
+
{#each results as result}
|
|
486
|
+
<div>{result.text}</div>
|
|
487
|
+
{/each}
|
|
674
488
|
```
|
|
675
489
|
|
|
676
|
-
|
|
677
|
-
|
|
678
|
-
### Database Modes
|
|
490
|
+
### Next.js (App Router)
|
|
679
491
|
|
|
680
|
-
|
|
492
|
+
```jsx
|
|
493
|
+
// app/search/page.js
|
|
494
|
+
import { createAutoBrainy } from '@soulcraft/brainy'
|
|
681
495
|
|
|
682
|
-
|
|
683
|
-
|
|
496
|
+
export default function SearchPage() {
|
|
497
|
+
async function search(formData) {
|
|
498
|
+
'use server'
|
|
499
|
+
const brainy = createAutoBrainy({ bucketName: 'vectors' })
|
|
500
|
+
const query = formData.get('query')
|
|
501
|
+
return await brainy.searchText(query, 10)
|
|
502
|
+
}
|
|
684
503
|
|
|
685
|
-
|
|
686
|
-
|
|
687
|
-
|
|
504
|
+
return (
|
|
505
|
+
<form action={search}>
|
|
506
|
+
<input name="query" placeholder="Search..." />
|
|
507
|
+
<button type="submit">Search</button>
|
|
508
|
+
</form>
|
|
509
|
+
)
|
|
510
|
+
}
|
|
511
|
+
```
|
|
688
512
|
|
|
689
|
-
|
|
690
|
-
db.setReadOnly(true)
|
|
513
|
+
### Node.js / Bun / Deno
|
|
691
514
|
|
|
692
|
-
|
|
693
|
-
|
|
515
|
+
```javascript
|
|
516
|
+
import { createAutoBrainy } from '@soulcraft/brainy'
|
|
694
517
|
|
|
695
|
-
|
|
696
|
-
db.setWriteOnly(true)
|
|
518
|
+
const brainy = createAutoBrainy()
|
|
697
519
|
|
|
698
|
-
//
|
|
699
|
-
|
|
520
|
+
// Add some data
|
|
521
|
+
await brainy.add("TypeScript is a typed superset of JavaScript", {
|
|
522
|
+
category: 'programming'
|
|
523
|
+
})
|
|
700
524
|
|
|
701
|
-
//
|
|
702
|
-
|
|
703
|
-
|
|
525
|
+
// Search for similar content
|
|
526
|
+
const results = await brainy.searchText("JavaScript with types", 5)
|
|
527
|
+
console.log(results)
|
|
704
528
|
```
|
|
705
529
|
|
|
706
|
-
|
|
707
|
-
where you want to prevent modifications to the database.
|
|
708
|
-
- **Write-Only Mode**: When enabled, prevents all search operations. Useful for initial data loading or when you want to
|
|
709
|
-
optimize for write performance.
|
|
530
|
+
### Vanilla JavaScript
|
|
710
531
|
|
|
711
|
-
|
|
532
|
+
```html
|
|
533
|
+
<!DOCTYPE html>
|
|
534
|
+
<html>
|
|
535
|
+
<head>
|
|
536
|
+
<script type="module">
|
|
537
|
+
import { createAutoBrainy } from 'https://unpkg.com/@soulcraft/brainy/dist/unified.min.js'
|
|
538
|
+
|
|
539
|
+
window.brainy = createAutoBrainy()
|
|
540
|
+
|
|
541
|
+
window.search = async function(query) {
|
|
542
|
+
const results = await brainy.searchText(query, 10)
|
|
543
|
+
document.getElementById('results').innerHTML =
|
|
544
|
+
results.map(r => `<div>${r.text}</div>`).join('')
|
|
545
|
+
}
|
|
546
|
+
</script>
|
|
547
|
+
</head>
|
|
548
|
+
<body>
|
|
549
|
+
<input onkeyup="search(this.value)" placeholder="Search...">
|
|
550
|
+
<div id="results"></div>
|
|
551
|
+
</body>
|
|
552
|
+
</html>
|
|
553
|
+
```
|
|
712
554
|
|
|
713
|
-
|
|
714
|
-
import {
|
|
715
|
-
BrainyData,
|
|
716
|
-
createTensorFlowEmbeddingFunction,
|
|
717
|
-
createThreadedEmbeddingFunction
|
|
718
|
-
} from '@soulcraft/brainy'
|
|
719
|
-
|
|
720
|
-
// Use the standard TensorFlow Universal Sentence Encoder embedding function
|
|
721
|
-
const db = new BrainyData({
|
|
722
|
-
embeddingFunction: createTensorFlowEmbeddingFunction()
|
|
723
|
-
})
|
|
724
|
-
await db.init()
|
|
555
|
+
### Cloudflare Workers
|
|
725
556
|
|
|
726
|
-
|
|
727
|
-
|
|
728
|
-
|
|
729
|
-
|
|
730
|
-
|
|
731
|
-
|
|
732
|
-
|
|
733
|
-
|
|
734
|
-
|
|
735
|
-
|
|
736
|
-
const
|
|
737
|
-
|
|
738
|
-
|
|
739
|
-
)
|
|
740
|
-
console.log(`Similarity score: ${similarity}`) // Higher value means more similar
|
|
741
|
-
|
|
742
|
-
// Calculate similarity with custom options
|
|
743
|
-
const vectorA = await db.embed("First text")
|
|
744
|
-
const vectorB = await db.embed("Second text")
|
|
745
|
-
const customSimilarity = await db.calculateSimilarity(
|
|
746
|
-
vectorA, // Can use pre-computed vectors
|
|
747
|
-
vectorB,
|
|
748
|
-
{
|
|
749
|
-
forceEmbed: false, // Skip embedding if inputs are already vectors
|
|
750
|
-
distanceFunction: cosineDistance // Optional custom distance function
|
|
557
|
+
```javascript
|
|
558
|
+
import { createAutoBrainy } from '@soulcraft/brainy'
|
|
559
|
+
|
|
560
|
+
export default {
|
|
561
|
+
async fetch(request, env) {
|
|
562
|
+
const brainy = createAutoBrainy({
|
|
563
|
+
bucketName: env.R2_BUCKET
|
|
564
|
+
})
|
|
565
|
+
|
|
566
|
+
const url = new URL(request.url)
|
|
567
|
+
const query = url.searchParams.get('q')
|
|
568
|
+
|
|
569
|
+
const results = await brainy.searchText(query, 10)
|
|
570
|
+
return Response.json(results)
|
|
751
571
|
}
|
|
752
|
-
|
|
572
|
+
}
|
|
753
573
|
```
|
|
754
574
|
|
|
755
|
-
|
|
756
|
-
performance, especially for embedding operations. It uses GPU acceleration when available (via WebGL in browsers) and
|
|
757
|
-
falls back to CPU processing for compatibility. Universal Sentence Encoder is always used for embeddings. The
|
|
758
|
-
implementation includes worker reuse and model caching for optimal performance.
|
|
759
|
-
|
|
760
|
-
### Performance Tuning
|
|
761
|
-
|
|
762
|
-
Brainy includes comprehensive performance optimizations that work across all environments (browser, CLI, Node.js,
|
|
763
|
-
container, server):
|
|
575
|
+
### AWS Lambda
|
|
764
576
|
|
|
765
|
-
|
|
766
|
-
|
|
767
|
-
Brainy uses GPU and CPU optimization for compute-intensive operations:
|
|
768
|
-
|
|
769
|
-
1. **GPU-Accelerated Embeddings**: Generate text embeddings using TensorFlow.js with WebGL backend when available
|
|
770
|
-
2. **Automatic Fallback**: Falls back to CPU backend when GPU is not available
|
|
771
|
-
3. **Optimized Distance Calculations**: Perform vector similarity calculations with optimized algorithms
|
|
772
|
-
4. **Cross-Environment Support**: Works consistently across browsers and Node.js environments
|
|
773
|
-
5. **Memory Management**: Properly disposes of tensors to prevent memory leaks
|
|
774
|
-
|
|
775
|
-
#### Multithreading Support
|
|
776
|
-
|
|
777
|
-
Brainy includes comprehensive multithreading support to improve performance across all environments:
|
|
778
|
-
|
|
779
|
-
1. **Parallel Batch Processing**: Add multiple items concurrently with controlled parallelism
|
|
780
|
-
2. **Multithreaded Vector Search**: Perform distance calculations in parallel for faster search operations
|
|
781
|
-
3. **Threaded Embedding Generation**: Generate embeddings in separate threads to avoid blocking the main thread
|
|
782
|
-
4. **Worker Reuse**: Maintains a pool of workers to avoid the overhead of creating and terminating workers
|
|
783
|
-
5. **Model Caching**: Initializes the embedding model once per worker and reuses it for multiple operations
|
|
784
|
-
6. **Batch Embedding**: Processes multiple items in a single embedding operation for better performance
|
|
785
|
-
7. **Automatic Environment Detection**: Adapts to browser (Web Workers) and Node.js (Worker Threads) environments
|
|
577
|
+
```javascript
|
|
578
|
+
import { createAutoBrainy } from '@soulcraft/brainy'
|
|
786
579
|
|
|
787
|
-
|
|
788
|
-
|
|
789
|
-
|
|
790
|
-
|
|
791
|
-
|
|
792
|
-
|
|
793
|
-
|
|
794
|
-
|
|
795
|
-
|
|
796
|
-
|
|
797
|
-
M: 16, // Max connections per noun
|
|
798
|
-
efConstruction: 200, // Construction candidate list size
|
|
799
|
-
efSearch: 50, // Search candidate list size
|
|
800
|
-
},
|
|
801
|
-
|
|
802
|
-
// Performance optimization options
|
|
803
|
-
performance: {
|
|
804
|
-
useParallelization: true, // Enable multithreaded search operations
|
|
805
|
-
},
|
|
806
|
-
|
|
807
|
-
// Noun and Verb type validation
|
|
808
|
-
typeValidation: {
|
|
809
|
-
enforceNounTypes: true, // Validate noun types against NounType enum
|
|
810
|
-
enforceVerbTypes: true, // Validate verb types against VerbType enum
|
|
811
|
-
},
|
|
812
|
-
|
|
813
|
-
// Storage configuration
|
|
814
|
-
storage: {
|
|
815
|
-
requestPersistentStorage: true,
|
|
816
|
-
// Example configuration for cloud storage (replace with your own values):
|
|
817
|
-
// s3Storage: {
|
|
818
|
-
// bucketName: 'your-s3-bucket-name',
|
|
819
|
-
// region: 'your-aws-region'
|
|
820
|
-
// // Credentials should be provided via environment variables
|
|
821
|
-
// // AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
|
|
822
|
-
// }
|
|
580
|
+
export const handler = async (event) => {
|
|
581
|
+
const brainy = createAutoBrainy({
|
|
582
|
+
bucketName: process.env.S3_BUCKET
|
|
583
|
+
})
|
|
584
|
+
|
|
585
|
+
const results = await brainy.searchText(event.query, 10)
|
|
586
|
+
|
|
587
|
+
return {
|
|
588
|
+
statusCode: 200,
|
|
589
|
+
body: JSON.stringify(results)
|
|
823
590
|
}
|
|
824
|
-
}
|
|
591
|
+
}
|
|
825
592
|
```
|
|
826
593
|
|
|
827
|
-
###
|
|
828
|
-
|
|
829
|
-
Brainy includes an optimized HNSW index implementation for large datasets that may not fit entirely in memory, using a
|
|
830
|
-
hybrid approach:
|
|
594
|
+
### Azure Functions
|
|
831
595
|
|
|
832
|
-
|
|
833
|
-
|
|
834
|
-
3. **Memory-Efficient Indexing** - Optimizes memory usage for large-scale vector collections
|
|
596
|
+
```javascript
|
|
597
|
+
import { createAutoBrainy } from '@soulcraft/brainy'
|
|
835
598
|
|
|
836
|
-
|
|
837
|
-
|
|
838
|
-
|
|
839
|
-
|
|
840
|
-
|
|
841
|
-
|
|
842
|
-
|
|
843
|
-
|
|
844
|
-
|
|
845
|
-
efSearch: 50, // Search candidate list size
|
|
846
|
-
|
|
847
|
-
// Memory threshold in bytes - when exceeded, will use disk-based approach
|
|
848
|
-
memoryThreshold: 1024 * 1024 * 1024, // 1GB default threshold
|
|
849
|
-
|
|
850
|
-
// Product quantization settings for dimensionality reduction
|
|
851
|
-
productQuantization: {
|
|
852
|
-
enabled: true, // Enable product quantization
|
|
853
|
-
numSubvectors: 16, // Number of subvectors to split the vector into
|
|
854
|
-
numCentroids: 256 // Number of centroids per subvector
|
|
855
|
-
},
|
|
856
|
-
|
|
857
|
-
// Whether to use disk-based storage for the index
|
|
858
|
-
useDiskBasedIndex: true // Enable disk-based storage
|
|
859
|
-
},
|
|
860
|
-
|
|
861
|
-
// Storage configuration (required for disk-based index)
|
|
862
|
-
storage: {
|
|
863
|
-
requestPersistentStorage: true
|
|
599
|
+
export default async function (context, req) {
|
|
600
|
+
const brainy = createAutoBrainy({
|
|
601
|
+
bucketName: process.env.AZURE_STORAGE_CONTAINER
|
|
602
|
+
})
|
|
603
|
+
|
|
604
|
+
const results = await brainy.searchText(req.query.q, 10)
|
|
605
|
+
|
|
606
|
+
context.res = {
|
|
607
|
+
body: results
|
|
864
608
|
}
|
|
865
|
-
}
|
|
866
|
-
|
|
867
|
-
// The optimized index automatically adapts based on dataset size:
|
|
868
|
-
// 1. For small datasets: Uses standard in-memory approach
|
|
869
|
-
// 2. For medium datasets: Applies product quantization to reduce memory usage
|
|
870
|
-
// 3. For large datasets: Combines product quantization with disk-based storage
|
|
871
|
-
|
|
872
|
-
// Check status to see memory usage and optimization details
|
|
873
|
-
const status = await db.status()
|
|
874
|
-
console.log(status.details.index)
|
|
609
|
+
}
|
|
875
610
|
```
|
|
876
611
|
|
|
877
|
-
|
|
612
|
+
### Google Cloud Functions
|
|
878
613
|
|
|
879
|
-
|
|
880
|
-
|
|
881
|
-
- `cosineDistance` (default): Measures the cosine of the angle between vectors (1 - cosine similarity)
|
|
882
|
-
- `euclideanDistance`: Measures the straight-line distance between vectors
|
|
883
|
-
- `manhattanDistance`: Measures the sum of absolute differences between vector components
|
|
884
|
-
- `dotProductDistance`: Measures the negative dot product between vectors
|
|
885
|
-
|
|
886
|
-
All distance functions are optimized for performance and automatically use the most efficient implementation based on
|
|
887
|
-
the dataset size and available resources. For large datasets and high-dimensional vectors, Brainy uses batch processing
|
|
888
|
-
and multithreading when available to improve performance.
|
|
889
|
-
|
|
890
|
-
## Backup and Restore
|
|
891
|
-
|
|
892
|
-
Brainy provides backup and restore capabilities that allow you to:
|
|
893
|
-
|
|
894
|
-
- Back up your data
|
|
895
|
-
- Transfer data between Brainy instances
|
|
896
|
-
- Restore existing data into Brainy for vectorization and indexing
|
|
897
|
-
- Backup data for analysis or visualization in other tools
|
|
898
|
-
|
|
899
|
-
### Backing Up Data
|
|
900
|
-
|
|
901
|
-
```typescript
|
|
902
|
-
// Backup all data from the database
|
|
903
|
-
const backupData = await db.backup()
|
|
904
|
-
|
|
905
|
-
// The backup data includes:
|
|
906
|
-
// - All nouns (entities) with their vectors and metadata
|
|
907
|
-
// - All verbs (relationships) between nouns
|
|
908
|
-
// - Noun types and verb types
|
|
909
|
-
// - HNSW index data for fast similarity search
|
|
910
|
-
// - Version information
|
|
911
|
-
|
|
912
|
-
// Save the backup data to a file (Node.js environment)
|
|
913
|
-
import fs from 'fs'
|
|
614
|
+
```javascript
|
|
615
|
+
import { createAutoBrainy } from '@soulcraft/brainy'
|
|
914
616
|
|
|
915
|
-
|
|
617
|
+
export const searchHandler = async (req, res) => {
|
|
618
|
+
const brainy = createAutoBrainy({
|
|
619
|
+
bucketName: process.env.GCS_BUCKET
|
|
620
|
+
})
|
|
621
|
+
|
|
622
|
+
const results = await brainy.searchText(req.query.q, 10)
|
|
623
|
+
res.json(results)
|
|
624
|
+
}
|
|
916
625
|
```
|
|
917
626
|
|
|
918
|
-
###
|
|
627
|
+
### Google Cloud Run
|
|
919
628
|
|
|
920
|
-
|
|
629
|
+
```dockerfile
|
|
630
|
+
# Dockerfile
|
|
631
|
+
FROM node:20-alpine
|
|
632
|
+
WORKDIR /app
|
|
633
|
+
COPY package*.json ./
|
|
634
|
+
RUN npm install @soulcraft/brainy
|
|
635
|
+
COPY . .
|
|
636
|
+
USER node
|
|
637
|
+
CMD ["node", "server.js"]
|
|
638
|
+
```
|
|
921
639
|
|
|
922
|
-
|
|
923
|
-
|
|
924
|
-
|
|
640
|
+
```javascript
|
|
641
|
+
// server.js
|
|
642
|
+
import { createAutoBrainy } from '@soulcraft/brainy'
|
|
643
|
+
import express from 'express'
|
|
925
644
|
|
|
926
|
-
|
|
927
|
-
|
|
928
|
-
|
|
929
|
-
clearExisting: true // Whether to clear existing data before restore
|
|
645
|
+
const app = express()
|
|
646
|
+
const brainy = createAutoBrainy({
|
|
647
|
+
bucketName: process.env.GCS_BUCKET
|
|
930
648
|
})
|
|
931
649
|
|
|
932
|
-
|
|
933
|
-
|
|
934
|
-
|
|
935
|
-
|
|
936
|
-
{
|
|
937
|
-
id: '123',
|
|
938
|
-
// No vector field - will be created during import
|
|
939
|
-
metadata: {
|
|
940
|
-
noun: 'Thing',
|
|
941
|
-
text: 'This text will be used to generate a vector'
|
|
942
|
-
}
|
|
943
|
-
}
|
|
944
|
-
],
|
|
945
|
-
verbs: [],
|
|
946
|
-
version: '1.0.0'
|
|
947
|
-
}
|
|
650
|
+
app.get('/search', async (req, res) => {
|
|
651
|
+
const results = await brainy.searchText(req.query.q, 10)
|
|
652
|
+
res.json(results)
|
|
653
|
+
})
|
|
948
654
|
|
|
949
|
-
const
|
|
655
|
+
const port = process.env.PORT || 8080
|
|
656
|
+
app.listen(port, () => console.log(`Brainy on Cloud Run: ${port}`))
|
|
950
657
|
```
|
|
951
658
|
|
|
952
|
-
### CLI Backup/Restore
|
|
953
|
-
|
|
954
659
|
```bash
|
|
955
|
-
#
|
|
956
|
-
|
|
957
|
-
|
|
958
|
-
|
|
959
|
-
|
|
960
|
-
|
|
961
|
-
# Import sparse data (without vectors)
|
|
962
|
-
brainy import-sparse --input sparse-data.json
|
|
660
|
+
# Deploy to Cloud Run
|
|
661
|
+
gcloud run deploy brainy-api \
|
|
662
|
+
--source . \
|
|
663
|
+
--platform managed \
|
|
664
|
+
--region us-central1 \
|
|
665
|
+
--allow-unauthenticated
|
|
963
666
|
```
|
|
964
667
|
|
|
965
|
-
|
|
966
|
-
|
|
967
|
-
Brainy uses the following embedding approach:
|
|
668
|
+
### Vercel Edge Functions
|
|
968
669
|
|
|
969
|
-
|
|
970
|
-
|
|
971
|
-
- Batch embedding for processing multiple items efficiently
|
|
972
|
-
- Worker reuse and model caching for optimal performance
|
|
973
|
-
- Custom embedding functions can be plugged in for specialized domains
|
|
974
|
-
|
|
975
|
-
## Extensions
|
|
976
|
-
|
|
977
|
-
Brainy includes an augmentation system for extending functionality:
|
|
978
|
-
|
|
979
|
-
- **Memory Augmentations**: Different storage backends
|
|
980
|
-
- **Sense Augmentations**: Process raw data
|
|
981
|
-
- **Cognition Augmentations**: Reasoning and inference
|
|
982
|
-
- **Dialog Augmentations**: Text processing and interaction
|
|
983
|
-
- **Perception Augmentations**: Data interpretation and visualization
|
|
984
|
-
- **Activation Augmentations**: Trigger actions
|
|
985
|
-
|
|
986
|
-
### Simplified Augmentation System
|
|
987
|
-
|
|
988
|
-
Brainy provides a simplified factory system for creating, importing, and executing augmentations with minimal
|
|
989
|
-
boilerplate:
|
|
990
|
-
|
|
991
|
-
```typescript
|
|
992
|
-
import {
|
|
993
|
-
createMemoryAugmentation,
|
|
994
|
-
createConduitAugmentation,
|
|
995
|
-
createSenseAugmentation,
|
|
996
|
-
addWebSocketSupport,
|
|
997
|
-
executeStreamlined,
|
|
998
|
-
processStaticData,
|
|
999
|
-
processStreamingData,
|
|
1000
|
-
createPipeline
|
|
1001
|
-
} from '@soulcraft/brainy'
|
|
1002
|
-
|
|
1003
|
-
// Create a memory augmentation with minimal code
|
|
1004
|
-
const memoryAug = createMemoryAugmentation({
|
|
1005
|
-
name: 'simple-memory',
|
|
1006
|
-
description: 'A simple in-memory storage augmentation',
|
|
1007
|
-
autoRegister: true,
|
|
1008
|
-
autoInitialize: true,
|
|
1009
|
-
|
|
1010
|
-
// Implement only the methods you need
|
|
1011
|
-
storeData: async (key, data) => {
|
|
1012
|
-
// Your implementation here
|
|
1013
|
-
return {
|
|
1014
|
-
success: true,
|
|
1015
|
-
data: true
|
|
1016
|
-
}
|
|
1017
|
-
},
|
|
1018
|
-
|
|
1019
|
-
retrieveData: async (key) => {
|
|
1020
|
-
// Your implementation here
|
|
1021
|
-
return {
|
|
1022
|
-
success: true,
|
|
1023
|
-
data: { example: 'data', key }
|
|
1024
|
-
}
|
|
1025
|
-
}
|
|
1026
|
-
})
|
|
670
|
+
```javascript
|
|
671
|
+
import { createAutoBrainy } from '@soulcraft/brainy'
|
|
1027
672
|
|
|
1028
|
-
|
|
1029
|
-
|
|
1030
|
-
|
|
1031
|
-
// Your implementation here
|
|
1032
|
-
return {
|
|
1033
|
-
connectionId: 'ws-1',
|
|
1034
|
-
url,
|
|
1035
|
-
status: 'connected'
|
|
1036
|
-
}
|
|
1037
|
-
}
|
|
1038
|
-
})
|
|
673
|
+
export const config = {
|
|
674
|
+
runtime: 'edge'
|
|
675
|
+
}
|
|
1039
676
|
|
|
1040
|
-
|
|
1041
|
-
const
|
|
1042
|
-
|
|
1043
|
-
|
|
1044
|
-
|
|
1045
|
-
|
|
1046
|
-
|
|
1047
|
-
|
|
1048
|
-
|
|
1049
|
-
{
|
|
1050
|
-
augmentation: memoryAug,
|
|
1051
|
-
method: 'storeData',
|
|
1052
|
-
transformArgs: (data) => ['processed-data', data]
|
|
1053
|
-
}
|
|
1054
|
-
]
|
|
1055
|
-
)
|
|
1056
|
-
|
|
1057
|
-
// Create a reusable pipeline
|
|
1058
|
-
const pipeline = createPipeline([
|
|
1059
|
-
{
|
|
1060
|
-
augmentation: senseAug,
|
|
1061
|
-
method: 'processRawData',
|
|
1062
|
-
transformArgs: (data) => [data, 'text']
|
|
1063
|
-
},
|
|
1064
|
-
{
|
|
1065
|
-
augmentation: memoryAug,
|
|
1066
|
-
method: 'storeData',
|
|
1067
|
-
transformArgs: (data) => ['processed-data', data]
|
|
1068
|
-
}
|
|
1069
|
-
])
|
|
677
|
+
export default async function handler(request) {
|
|
678
|
+
const brainy = createAutoBrainy()
|
|
679
|
+
const { searchParams } = new URL(request.url)
|
|
680
|
+
const query = searchParams.get('q')
|
|
681
|
+
|
|
682
|
+
const results = await brainy.searchText(query, 10)
|
|
683
|
+
return Response.json(results)
|
|
684
|
+
}
|
|
685
|
+
```
|
|
1070
686
|
|
|
1071
|
-
|
|
1072
|
-
const result = await pipeline('New input data')
|
|
687
|
+
### Netlify Functions

```javascript
import { createAutoBrainy } from '@soulcraft/brainy'

export async function handler(event, context) {
  const brainy = createAutoBrainy()
  const query = event.queryStringParameters.q

  const results = await brainy.searchText(query, 10)

  return {
    statusCode: 200,
    body: JSON.stringify(results)
  }
}
```
### Supabase Edge Functions

```typescript
import { createAutoBrainy } from '@soulcraft/brainy'
import { serve } from 'https://deno.land/std@0.168.0/http/server.ts'

serve(async (req) => {
  const brainy = createAutoBrainy()
  const url = new URL(req.url)
  const query = url.searchParams.get('q')

  const results = await brainy.searchText(query, 10)

  return new Response(JSON.stringify(results), {
    headers: { 'Content-Type': 'application/json' }
  })
})
```
### Docker Container

```dockerfile
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install @soulcraft/brainy
COPY . .
USER node
CMD ["node", "server.js"]
```

```javascript
// server.js
import { createAutoBrainy } from '@soulcraft/brainy'
import express from 'express'

const app = express()
const brainy = createAutoBrainy()

app.get('/search', async (req, res) => {
  const results = await brainy.searchText(req.query.q, 10)
  res.json(results)
})

app.listen(3000, () => console.log('Brainy running on port 3000'))
```
### Kubernetes

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: brainy-api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: brainy-api
  template:
    metadata:
      labels:
        app: brainy-api
    spec:
      containers:
        - name: brainy
          image: your-registry/brainy-api:latest
          env:
            - name: S3_BUCKET
              value: "your-vector-bucket"
```
### Railway.app

```javascript
// server.js
import { createAutoBrainy } from '@soulcraft/brainy'

const brainy = createAutoBrainy({
  bucketName: process.env.RAILWAY_VOLUME_NAME
})

// Railway automatically handles the rest!
```
### Render.com

```yaml
# render.yaml
services:
  - type: web
    name: brainy-api
    env: node
    buildCommand: npm install @soulcraft/brainy
    startCommand: node server.js
    envVars:
      - key: BRAINY_STORAGE
        value: persistent-disk
```
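Every handler above does the same two things: read `q` from the request URL and return JSON. If you want consistent input validation across platforms, a small framework-agnostic helper can centralize it. This is a sketch; `parseQuery` is a hypothetical helper, not part of Brainy's API:

```javascript
// parseQuery: framework-agnostic extraction and validation of the `q` parameter.
// Works in any runtime that gives you the full request URL.
function parseQuery(requestUrl) {
  const url = new URL(requestUrl)
  const q = url.searchParams.get('q')
  if (!q || q.trim().length === 0) {
    return { ok: false, error: 'Missing required query parameter: q' }
  }
  return { ok: true, query: q.trim() }
}

// Reject empty queries before ever touching the index
console.log(parseQuery('https://api.example.com/search?q=feline%20pets'))
```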
## Quick Examples

### Basic Usage

```javascript
import { BrainyData, NounType, VerbType } from '@soulcraft/brainy'

// Initialize
const db = new BrainyData()
await db.init()

// Add data (automatically vectorized)
const catId = await db.add("Cats are independent pets", {
  noun: NounType.Thing,
  category: 'animal'
})

const dogId = await db.add("Dogs are loyal companions", {
  noun: NounType.Thing,
  category: 'animal'
})

// Search for similar items
const results = await db.searchText("feline pets", 5)

// Add relationships
await db.addVerb(catId, dogId, {
  verb: VerbType.RelatedTo,
  description: 'Both are pets'
})
```
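As a mental model for what `searchText` is doing: the text is embedded into a vector, and stored items are ranked by embedding similarity. A minimal cosine-similarity function (illustrative only, not Brainy's internal implementation):

```javascript
// Cosine similarity: 1.0 = same direction, 0 = unrelated, -1 = opposite.
function cosineSimilarity(a, b) {
  let dot = 0
  let normA = 0
  let normB = 0
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]
    normA += a[i] * a[i]
    normB += b[i] * b[i]
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB))
}

console.log(cosineSimilarity([0.1, 0.2, 0.3], [0.1, 0.2, 0.3])) // ≈ 1
```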
### AutoBrainy (Recommended)

```javascript
import { createAutoBrainy } from '@soulcraft/brainy'

// Everything auto-configured!
const brainy = createAutoBrainy()

// Just start using it
await brainy.addVector({ id: '1', vector: [0.1, 0.2, 0.3], text: 'Hello' })
const results = await brainy.search([0.1, 0.2, 0.3], 10)
```
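Conceptually, `search(vector, k)` returns the k stored vectors closest to the query. A brute-force sketch of that idea (Brainy itself uses an HNSW index so it does not have to scan every item):

```javascript
// Brute-force k-nearest-neighbour search: compute the distance from the query
// to every stored vector, sort, and keep the k closest.
function knn(items, queryVector, k) {
  const dist = (a, b) =>
    Math.sqrt(a.reduce((sum, v, i) => sum + (v - b[i]) ** 2, 0))
  return items
    .map((item) => ({ ...item, distance: dist(item.vector, queryVector) }))
    .sort((x, y) => x.distance - y.distance)
    .slice(0, k)
}

const items = [
  { id: '1', vector: [0.1, 0.2, 0.3] },
  { id: '2', vector: [0.9, 0.8, 0.7] }
]
console.log(knn(items, [0.1, 0.2, 0.3], 1)[0].id) // logs "1"
```

This linear scan is O(n) per query, which is exactly why approximate indexes like HNSW matter at scale.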
### Scenario-Based Setup

```javascript
import { createQuickBrainy } from '@soulcraft/brainy'

// Choose your scale: 'small', 'medium', 'large', 'enterprise'
const brainy = await createQuickBrainy('large', {
  bucketName: 'my-vector-db'
})
```
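A preset like `'large'` simply expands to a bundle of tuning parameters that you could otherwise set by hand. The sketch below illustrates the pattern; the names and values are hypothetical placeholders, not the actual settings that `createQuickBrainy` applies:

```javascript
// Hypothetical preset table: the keys and numbers here are illustrative,
// not Brainy's real internal defaults.
const SCALE_PRESETS = {
  small: { cacheSize: 1000, batchSize: 25 },
  medium: { cacheSize: 5000, batchSize: 50 },
  large: { cacheSize: 20000, batchSize: 100 },
  enterprise: { cacheSize: 50000, batchSize: 200 }
}

// Expand a preset, letting explicit options win over preset defaults.
function resolvePreset(scale, overrides = {}) {
  const preset = SCALE_PRESETS[scale]
  if (!preset) throw new Error(`Unknown scale: ${scale}`)
  return { ...preset, ...overrides }
}

console.log(resolvePreset('large', { bucketName: 'my-vector-db' }))
```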
### With Offline Models

```javascript
import { createAutoBrainy, NounType } from '@soulcraft/brainy'
import { BundledUniversalSentenceEncoder } from '@soulcraft/brainy-models'

// Use bundled model for offline operation
const brainy = createAutoBrainy({
  embeddingModel: BundledUniversalSentenceEncoder
  // Model loads from local files, no network needed!
})

// Works exactly the same, but 100% offline
await brainy.add("This works without internet!", {
  noun: NounType.Content
})
```
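An embedding model's job is to turn text into a fixed-length numeric vector. The toy hashing embedder below shows the shape of that contract (purely illustrative: bundled models like the Universal Sentence Encoder produce dense semantic vectors, not word counts):

```javascript
// Toy embedder: hashes each word into one of `dimensions` buckets and counts.
// Real models capture meaning, so "cat" and "feline" end up near each other;
// this toy version only ever matches identical words.
function toyEmbed(text, dimensions = 8) {
  const vector = new Array(dimensions).fill(0)
  for (const word of text.toLowerCase().split(/\s+/)) {
    let hash = 0
    for (const ch of word) hash = (hash * 31 + ch.charCodeAt(0)) >>> 0
    vector[hash % dimensions] += 1
  }
  return vector
}

console.log(toyEmbed('This works without internet!').length) // logs 8
```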
## Live Demo

**[Try the interactive demo](https://soulcraft-research.github.io/brainy/demo/index.html)** - See Brainy in action with animations and examples.
## Environment Support

| Environment | Storage | Threading | Auto-Configured |
|-------------|---------|-----------|-----------------|
| Browser | OPFS | Web Workers | ✅ |
| Node.js | FileSystem/S3 | Worker Threads | ✅ |
| Serverless | Memory/S3 | Limited | ✅ |
| Edge Functions | Memory/KV | Limited | ✅ |
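The table above is driven by runtime detection. A minimal sketch of the kind of feature check involved (an assumption about the approach, not Brainy's actual code):

```javascript
// Minimal runtime detection: check for globals that only exist in one runtime.
function detectEnvironment() {
  if (typeof window !== 'undefined' && typeof window.document !== 'undefined') {
    return 'browser'
  }
  if (typeof process !== 'undefined' && process.versions && process.versions.node) {
    return 'node'
  }
  return 'unknown'
}

console.log(detectEnvironment())
```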
## Documentation

### Getting Started
- [**Quick Start Guide**](docs/getting-started/) - Get up and running in minutes
- [**Installation**](docs/getting-started/installation.md) - Detailed setup instructions
- [**Environment Setup**](docs/getting-started/environment-setup.md) - Platform-specific configuration

### User Guides
- [**Search and Metadata**](docs/user-guides/) - Advanced search techniques
- [**JSON Document Search**](docs/guides/json-document-search.md) - Field-based searching
- [**Production Migration**](docs/guides/production-migration-guide.md) - Deployment best practices

### API Reference
- [**Core API**](docs/api-reference/) - Complete method reference
- [**Configuration Options**](docs/api-reference/configuration.md) - All configuration parameters
- [**Auto-Configuration API**](docs/api-reference/auto-configuration-api.md) - Intelligent setup

### Optimization & Scaling
- [**Large-Scale Optimizations**](docs/optimization-guides/) - Handle millions of vectors
- [**Memory Management**](docs/optimization-guides/memory-optimization.md) - Efficient resource usage
- [**S3 Migration Guide**](docs/optimization-guides/s3-migration-guide.md) - Cloud storage setup

### Examples & Patterns
- [**Code Examples**](docs/examples/) - Real-world usage patterns
- [**Integrations**](docs/examples/integrations.md) - Third-party services
- [**Performance Patterns**](docs/examples/performance.md) - Optimization techniques

### Technical Documentation
- [**Architecture Overview**](docs/technical/) - System design and internals
- [**Testing Guide**](docs/technical/TESTING.md) - Testing strategies
- [**Statistics & Monitoring**](docs/technical/STATISTICS.md) - Performance tracking
## Contributing

We welcome contributions! Please see:

- [Contributing Guidelines](CONTRIBUTING.md)
- [Developer Documentation](docs/development/DEVELOPERS.md)
- [Code of Conduct](CODE_OF_CONDUCT.md)

## License

[MIT](LICENSE)

## Related Projects

- [**Cartographer**](https://github.com/sodal-project/cartographer) - Standardized interfaces for Brainy

---

<div align="center">
  <strong>Ready to build something amazing? Get started with Brainy today!</strong>
</div>