@openfluke/welvet 0.1.3 → 0.1.5
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +286 -800
- package/dist/index.browser.d.ts +32 -0
- package/dist/index.browser.js +37 -0
- package/dist/index.d.ts +30 -30
- package/dist/index.js +34 -83
- package/dist/loader.browser.d.ts +5 -0
- package/dist/loader.browser.js +25 -0
- package/dist/loader.d.ts +5 -3
- package/dist/loader.js +25 -89
- package/dist/{loom.wasm → main.wasm} +0 -0
- package/dist/types.d.ts +91 -197
- package/dist/types.js +2 -10
- package/dist/wasm_exec.js +568 -658
- package/package.json +4 -2
- package/dist/env.d.ts +0 -3
- package/dist/env.js +0 -3
- package/dist/transformer.d.ts +0 -5
- package/dist/transformer.js +0 -127
package/README.md
CHANGED

@@ -1,889 +1,375 @@

# @openfluke/welvet

Isomorphic TypeScript/JavaScript wrapper for the LOOM WebAssembly neural network framework.

## Features

- 🎉 **NEW: Simple API** - Streamlined functions with cross-platform consistency
- 🚀 **Isomorphic WASM Wrapper** - Works in Node.js and browser with same API
- 🔄 **Mirrors main.go** - Direct 1:1 mapping to WASM exports
- 🎯 **Type-Safe** - Full TypeScript type definitions for all Network methods
- 🤖 **Multi-Agent Networks** - Grid scatter architecture for heterogeneous agents
- 📦 **JSON Configuration** - Build networks from simple JSON configs
- ⚡ **Fast Training** - Optimized training with configurable parameters
- 💾 **Model Persistence** - Save and load trained models as JSON
- ✅ **Cross-Platform Consistency** - Same API as Python, C#, C, WASM

*(Removed in 0.1.5: the previous feature list, which highlighted transformer inference for LLMs such as SmolLM2-135M with streaming generation, a 6.0MB WASM binary, seven CPU layer types (Dense, Conv2D, Multi-Head Attention, LayerNorm, RNN, LSTM, Softmax), registry-based initialization via `CallLayerInit()`, runtime introspection, JSON model serialization, full training support, TypeScript definitions, zero dependencies, the activation-function list, and the CPU-only caveat.)*

## Installation

```bash
npm install @openfluke/welvet
```

*(Removed in 0.1.5: the alternative install commands for Yarn, pnpm, and Bun.)*

## Quick Start

*(Removed in 0.1.5: the transformer quick start built on `createTransformerAPI()`, `loadTokenizer()`, `loadModel()`, and `generateStream()` with the `wasm/inference.html` demo; the "Load Complete Models" walkthrough built on `loom.LoadModelFromString()` with the `wasm/all_layers_test.html` demo; and the manual layer-by-layer configuration built on `initLoom()` and `ActivationType`.)*

### 🎉 NEW: Simple API (Recommended)

The simple API provides streamlined functions with consistent behavior across all platforms:

```typescript
import {
  init,
  createNetworkFromJSON,
  loadLoomNetwork,
} from "@openfluke/welvet";

// Initialize LOOM WASM
await init();

// Create network from JSON config
const config = {
  batch_size: 1,
  grid_rows: 1,
  grid_cols: 3,
  layers_per_cell: 1,
  layers: [
    { type: "dense", input_size: 8, output_size: 16, activation: "relu" },
    {
      type: "parallel",
      combine_mode: "grid_scatter",
      grid_output_rows: 3,
      grid_output_cols: 1,
      grid_output_layers: 1,
      grid_positions: [
        { branch_index: 0, target_row: 0, target_col: 0, target_layer: 0 },
        { branch_index: 1, target_row: 1, target_col: 0, target_layer: 0 },
        { branch_index: 2, target_row: 2, target_col: 0, target_layer: 0 },
      ],
      branches: [
        {
          type: "parallel",
          combine_mode: "add",
          branches: [
            {
              type: "dense",
              input_size: 16,
              output_size: 8,
              activation: "relu",
            },
            {
              type: "dense",
              input_size: 16,
              output_size: 8,
              activation: "gelu",
            },
          ],
        },
        { type: "lstm", input_size: 16, hidden_size: 8, seq_length: 1 },
        { type: "rnn", input_size: 16, hidden_size: 8, seq_length: 1 },
      ],
    },
    { type: "dense", input_size: 24, output_size: 2, activation: "sigmoid" },
  ],
};

const network = createNetworkFromJSON(JSON.stringify(config));

// Training
const batches = [
  { Input: [0.2, 0.2, 0.2, 0.2, 0.8, 0.8, 0.8, 0.8], Target: [1.0, 0.0] },
  { Input: [0.9, 0.9, 0.9, 0.9, 0.1, 0.1, 0.1, 0.1], Target: [0.0, 1.0] },
  { Input: [0.7, 0.7, 0.7, 0.7, 0.3, 0.3, 0.3, 0.3], Target: [0.0, 1.0] },
  { Input: [0.3, 0.3, 0.3, 0.3, 0.7, 0.7, 0.7, 0.7], Target: [1.0, 0.0] },
];

const trainingConfig = {
  Epochs: 800,
  LearningRate: 0.15,
  UseGPU: false,
  PrintEveryBatch: 0,
  GradientClip: 1.0,
  LossType: "mse",
  Verbose: false,
};

const [result] = network.Train(JSON.stringify([batches, trainingConfig]));
console.log("Training complete!");

// Forward pass
const [output] = network.ForwardCPU(
  JSON.stringify([[0.2, 0.2, 0.2, 0.2, 0.8, 0.8, 0.8, 0.8]])
);
console.log("Output:", JSON.parse(output)); // [0.950, 0.050]

// Evaluate network
const inputs = batches.map((b) => b.Input);
const expected = [0, 1, 1, 0];
const [metrics] = network.EvaluateNetwork(JSON.stringify([inputs, expected]));
const metricsData = JSON.parse(metrics);
console.log(
  `Quality: ${metricsData.score}/100, Deviation: ${metricsData.avg_deviation}%`
);

// Save/Load
const [modelJSON] = network.SaveModelToString(JSON.stringify(["my_model"]));
console.log(`Model saved (${modelJSON.length} bytes)`);

// Load model
const loadedNetwork = loadLoomNetwork(modelJSON, "my_model");
const [output2] = loadedNetwork.ForwardCPU(
  JSON.stringify([[0.2, 0.2, 0.2, 0.2, 0.8, 0.8, 0.8, 0.8]])
);
// output2 === output (bit-for-bit identical!)
```

**Simple API Functions:**

- `createNetworkFromJSON(jsonConfig)` - Create network from JSON
- `loadLoomNetwork(jsonString, modelID)` - Load saved model
- `network.ForwardCPU(inputJSON)` - Forward pass
- `network.BackwardCPU(gradientsJSON)` - Backward pass
- `network.Train(paramsJSON)` - Train network
- `network.SaveModelToString(idJSON)` - Save to JSON string
- `network.EvaluateNetwork(paramsJSON)` - Evaluate with metrics
- `network.UpdateWeights(lrJSON)` - Update weights

**Cross-Platform Results:**

- ✅ Same training: 99.5% improvement, 100/100 quality score
- ✅ Same save/load: 0.00 difference in predictions
- ✅ Same evaluation: Identical deviation metrics
- ✅ Same behavior as Python, C#, C, and WASM

See `example/grid-scatter.ts` for a complete working example.
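
The JSON plumbing above (stringify the arguments, destructure the result, parse the payload) is the same for every call, so it can be folded into a small helper. The following is a minimal, illustrative sketch rather than part of the package API; it assumes the simple-API behavior shown in the Quick Start, and the `trainAndScore` name is hypothetical:

```typescript
import { init, createNetworkFromJSON } from "@openfluke/welvet";

interface Batch {
  Input: number[];
  Target: number[];
}

// Create a network from `config`, train it on `batches`, and return the
// parsed evaluation metrics. Mirrors the Quick Start call pattern:
// JSON-string arguments in, a destructurable result out.
async function trainAndScore(
  config: object,
  batches: Batch[],
  expectedClasses: number[]
) {
  await init(); // initialize the WASM module, as in the Quick Start

  const network = createNetworkFromJSON(JSON.stringify(config));

  const trainingConfig = {
    Epochs: 800,
    LearningRate: 0.15,
    LossType: "mse",
    Verbose: false,
  };
  network.Train(JSON.stringify([batches, trainingConfig]));

  const inputs = batches.map((b) => b.Input);
  const [metrics] = network.EvaluateNetwork(
    JSON.stringify([inputs, expectedClasses])
  );
  return JSON.parse(metrics); // e.g. { score, avg_deviation, ... }
}

// Usage with the config and batches from the Quick Start:
// const metrics = await trainAndScore(config, batches, [0, 1, 1, 0]);
```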

The multi-agent variant of the example builds an `agentNetwork` from the same grid-scatter layer configuration and trains it with typed batches:

```typescript
// Train multi-agent network
const batches: TrainingBatch[] = [
  { Input: [0.2, 0.2, 0.2, 0.2, 0.8, 0.8, 0.8, 0.8], Target: [1.0, 0.0] },
  { Input: [0.9, 0.9, 0.9, 0.9, 0.1, 0.1, 0.1, 0.1], Target: [0.0, 1.0] },
];

const config: TrainingConfig = {
  Epochs: 800,
  LearningRate: 0.15,
  LossType: "mse",
  Verbose: false,
};

const result = agentNetwork.Train(JSON.stringify([batches, config]));
```
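
To sanity-check a trained network such as `agentNetwork`, each training input can be pushed back through `ForwardCPU` and compared with its target by argmax. A sketch under the same assumptions as the fragment above; `checkPredictions` is an illustrative name, and the batch type is redeclared locally in case it is not exported:

```typescript
// Minimal structural view of the one method used here; the real Network
// type ships with the package.
interface ForwardNet {
  ForwardCPU(paramsJSON: string): [string];
}

interface TrainingBatch {
  Input: number[];
  Target: number[];
}

// Compare argmax of the network output with argmax of the target for
// every batch, following the ForwardCPU convention shown above.
function checkPredictions(network: ForwardNet, batches: TrainingBatch[]): void {
  for (const batch of batches) {
    const [raw] = network.ForwardCPU(JSON.stringify([batch.Input]));
    const prediction: number[] = JSON.parse(raw);

    const predicted = prediction.indexOf(Math.max(...prediction));
    const expected = batch.Target.indexOf(Math.max(...batch.Target));
    console.log(
      `input=[${batch.Input.join(", ")}] predicted=${predicted} expected=${expected}`
    );
  }
}

// e.g. checkPredictions(agentNetwork, batches);
```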

*(Removed in 0.1.5: the old API reference — `initLoom()` options (`wasmUrl`, `injectGoRuntime`), `NewNetwork()`, registry-based layer creation via `CallLayerInit()` (`InitDenseLayer`, `InitConv2DLayer`, `InitMultiHeadAttentionLayer`, `InitRNNLayer`, and the LSTM equivalent), the forward/backward/update-weights walkthroughs, model persistence via `SaveModelToString()`/`LoadModelFromString()` with cross-platform loading examples in JavaScript, Python, and Go, the transformer API reference (`encode`, `decode`, `generate`, `generateStream`), runtime introspection (`GetMethods()`, `ListMethods()`), and the `ActivationType` enum.)*

## API Reference

### Functions

#### `async init(): Promise<void>`

Initialize LOOM WASM module for Node.js environment.

#### `async initBrowser(): Promise<void>`

Initialize LOOM WASM module for browser environment.
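
For browsers, the flow is the same with `initBrowser()` swapped in. A minimal sketch, assuming the functions are importable from the package root (this release's file list adds `dist/index.browser.js` and `dist/loader.browser.js` for this path):

```typescript
import { initBrowser, createNetworkFromJSON } from "@openfluke/welvet";

// Call once from a module <script> before creating networks.
async function setupInBrowser() {
  await initBrowser(); // loads the packaged WASM in the browser

  const network = createNetworkFromJSON(
    JSON.stringify({
      batch_size: 1,
      grid_rows: 1,
      grid_cols: 1,
      layers_per_cell: 1,
      layers: [
        { type: "dense", input_size: 4, output_size: 2, activation: "sigmoid" },
      ],
    })
  );

  const [output] = network.ForwardCPU(JSON.stringify([[0.1, 0.2, 0.3, 0.4]]));
  console.log("Browser forward pass:", JSON.parse(output));
}

setupInBrowser().catch(console.error);
```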

#### `createNetwork(config: object | string): Network`

Create a new neural network from JSON configuration object or string.

**Note:** This is the only global function exposed by the WASM (mirrors `createLoomNetwork` from main.go). To load a saved model, just pass the saved JSON string to `createNetwork()`.
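
Following that note, a model can be saved and reloaded without `loadLoomNetwork` by handing the saved JSON string straight back to `createNetwork()`. A sketch, assuming `createNetwork` is exported from the package root alongside the simple API and using the calling convention documented under Network Interface below:

```typescript
import { init, createNetwork } from "@openfluke/welvet";

await init();

// Build a tiny network, save it, then reload it from the saved JSON string.
const network = createNetwork({
  batch_size: 1,
  grid_rows: 1,
  grid_cols: 1,
  layers_per_cell: 1,
  layers: [{ type: "dense", input_size: 2, output_size: 1, activation: "tanh" }],
});

// SaveModelToString takes ["modelID"] and returns a JSON array whose first
// element is the serialized model.
const saved = JSON.parse(
  network.SaveModelToString(JSON.stringify(["round-trip-demo"]))
)[0];

// Per the note above, the saved string is itself a valid createNetwork input.
const restored = createNetwork(saved);

const restoredOutput = JSON.parse(
  restored.ForwardCPU(JSON.stringify([[0.5, -0.5]]))
)[0];
console.log("Restored output:", restoredOutput);
```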

### Network Interface

The `Network` object returned by `createNetwork()` has all methods from the Go `nn.Network` type automatically exposed via reflection.

**Important:** All Network methods follow the WASM calling convention:

- Take a single parameter: a JSON string of an array of parameters
- Return a JSON string of an array of results

Example:

```typescript
// Method with no parameters
const info = network.GetNetworkInfo(JSON.stringify([]));
const parsed = JSON.parse(info)[0];

// Method with parameters
const result = network.Train(JSON.stringify([batches, config]));
const data = JSON.parse(result)[0];

// Save model (requires modelID parameter)
const saved = network.SaveModelToString(JSON.stringify(["my-model"]));
const json = JSON.parse(saved)[0];
```
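
Since every method shares this convention, the plumbing can be centralized in one generic helper. A sketch (the `call` helper is illustrative, not part of the package):

```typescript
// Structural view of a WASM-backed network: every method takes one JSON
// string and returns one JSON string, as described above.
type WasmNetwork = Record<string, (paramsJSON: string) => string>;

// Pack the arguments into a JSON array, invoke the method, and return the
// first element of the parsed result.
function call<T>(network: WasmNetwork, method: string, ...args: unknown[]): T {
  const raw = network[method](JSON.stringify(args));
  return JSON.parse(raw)[0] as T;
}

// Usage, mirroring the example above:
// const info = call<Record<string, unknown>>(network, "GetNetworkInfo");
// const savedModel = call<string>(network, "SaveModelToString", "my-model");
```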

#### Available Network Methods

- `ForwardCPU(paramsJSON)` - CPU forward pass: `[inputs]`
- `ForwardGPU(paramsJSON)` - GPU forward pass: `[inputs]`
- `BackwardCPU(paramsJSON)` - CPU backward pass: `[gradients]`
- `BackwardGPU(paramsJSON)` - GPU backward pass: `[gradients]`
- `UpdateWeights(paramsJSON)` - Update weights: `[learningRate]`
- `Train(paramsJSON)` - Train network: `[batches, config]`
- `SaveModelToString(paramsJSON)` - Save model: `["modelID"]`
- `GetWeights(paramsJSON)` - Get layer weights: `[row, col, layer]`
- `SetWeights(paramsJSON)` - Set layer weights: `[row, col, layer, weights]` (see the sketch after this list)
- `GetBiases(paramsJSON)` - Get layer biases: `[row, col, layer]`
- `SetBiases(paramsJSON)` - Set layer biases: `[row, col, layer, biases]`
- `GetActivation(paramsJSON)` - Get activation: `[row, col, layer]`
- `GetLayerType(paramsJSON)` - Get layer type: `[row, col, layer]`
- `GetLayerSizes(paramsJSON)` - Get layer sizes: `[row, col, layer]`
- `GetBatchSize(paramsJSON)` - Get batch size: `[]`
- `GetGridDimensions(paramsJSON)` - Get grid dimensions: `[]`
- `GetNetworkInfo(paramsJSON)` - Get network info: `[]`
- `GetTotalParameters(paramsJSON)` - Get parameter count: `[]`
- `InitializeWeights(paramsJSON)` - Initialize weights: `[]` or `[method]`
- `Clone(paramsJSON)` - Clone network: `[]`
- `GetLastOutput(): string` - Get last forward pass output
- And 10+ more methods...
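
As one worked example of the per-layer accessors in the list above, the sketch below reads a layer's weights, scales them, and writes them back. It assumes the `[row, col, layer]` argument order shown above and a flat weight array; the actual weight layout is not specified in this README, so treat it as illustrative:

```typescript
type WasmNetwork = Record<string, (paramsJSON: string) => string>;

// Scale the weights of the layer at (row, col, layer) by `factor`.
// GetWeights takes [row, col, layer]; SetWeights takes
// [row, col, layer, weights], per the method list above.
function scaleLayerWeights(
  network: WasmNetwork,
  row: number,
  col: number,
  layer: number,
  factor: number
): void {
  // A flat number[] is assumed here for simplicity.
  const weights: number[] = JSON.parse(
    network.GetWeights(JSON.stringify([row, col, layer]))
  )[0];

  const scaled = weights.map((w) => w * factor);
  network.SetWeights(JSON.stringify([row, col, layer, scaled]));
}

// e.g. scaleLayerWeights(network, 0, 0, 0, 0.5);
```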

### Types

#### `NetworkConfig`

```typescript
interface NetworkConfig {
  batch_size: number;
  grid_rows?: number; // Required for grid networks (use 1 for sequential)
  grid_cols?: number; // Required for grid networks (use 1 for sequential)
  layers_per_cell?: number; // Required for grid networks
  layers: LayerConfig[];
}
```

#### `LayerConfig`

```typescript
interface LayerConfig {
  type: string;
  input_size?: number;
  output_size?: number;
  hidden_size?: number;
  seq_length?: number;
  activation?: string;
  combine_mode?: string;
  grid_output_rows?: number;
  grid_output_cols?: number;
  grid_output_layers?: number;
  grid_positions?: GridPosition[];
  branches?: LayerConfig[];
}
```
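
For contrast with the grid-scatter example, a plain sequential network expressed with these types sets the grid dimensions to 1, as the comments in `NetworkConfig` suggest. A sketch, assuming the interfaces are importable (if they are not exported, they can be copied from the definitions above):

```typescript
import { createNetworkFromJSON } from "@openfluke/welvet";
import type { LayerConfig, NetworkConfig } from "@openfluke/welvet";

// A 4 -> 8 -> 2 multilayer perceptron as a single-cell "grid".
const layers: LayerConfig[] = [
  { type: "dense", input_size: 4, output_size: 8, activation: "relu" },
  { type: "dense", input_size: 8, output_size: 2, activation: "softmax" },
];

const mlpConfig: NetworkConfig = {
  batch_size: 1,
  grid_rows: 1,
  grid_cols: 1,
  layers_per_cell: layers.length,
  layers,
};

// mlp can now be trained or run exactly like the grid example above.
const mlp = createNetworkFromJSON(JSON.stringify(mlpConfig));
```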

*(Removed in 0.1.5: the MNIST-style classifier and XOR training walkthroughs, and the Browser Usage section covering CDN/UMD and ES-module loading.)*

#### `TrainingBatch`

```typescript
interface TrainingBatch {
  Input: number[];
  Target: number[];
}
```

#### `TrainingConfig`

```typescript
interface TrainingConfig {
  Epochs: number;
  LearningRate: number;
  LossType?: string;
  Verbose?: boolean;
  UseGPU?: boolean;
  PrintEveryBatch?: number;
  GradientClip?: number;
}
```
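
Putting the two training types together, a small wrapper can supply the Quick Start defaults and accept overrides. This is a sketch; `trainWithDefaults` is an illustrative name, and the interfaces are assumed importable (or copy them from above):

```typescript
import type { TrainingBatch, TrainingConfig } from "@openfluke/welvet";

type WasmNetwork = Record<string, (paramsJSON: string) => string>;

// Train with the defaults used in the Quick Start, allowing overrides.
// Train takes [batches, config], per the method list above.
function trainWithDefaults(
  network: WasmNetwork,
  batches: TrainingBatch[],
  overrides: Partial<TrainingConfig> = {}
): unknown {
  const config: TrainingConfig = {
    Epochs: 800,
    LearningRate: 0.15,
    LossType: "mse",
    Verbose: false,
    ...overrides,
  };

  const result = network.Train(JSON.stringify([batches, config]));
  return JSON.parse(result)[0];
}

// e.g. trainWithDefaults(network, batches, { Epochs: 200, Verbose: true });
```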

*(Removed in 0.1.5: the React, Vue 3, and Svelte integration examples, the Advanced Configuration section (custom WASM location via `wasmUrl`, skipping Go runtime injection), the Performance Tips, and the Troubleshooting section.)*

## Example

See `example/grid-scatter.ts` for a complete multi-agent training demo:

```bash
cd example
bun install
bun run grid-scatter.ts
```

Expected output:

```
🤖 Running Grid Scatter Multi-Agent Training...
✅ Agent network created!
Training for 800 epochs with learning rate 0.150
✅ Training complete!
Training time: 0.47 seconds
Initial Loss: 0.252249
Final Loss: 0.001374
Improvement: 99.46%
Total Epochs: 800
```

*(Removed in 0.1.5: the Related Packages list — the [`welvet`](https://pypi.org/project/welvet/) Python bindings, the [LOOM](https://github.com/openfluke/loom) Go framework, and the legacy [`@openfluke/portal`](https://github.com/openfluke/portal) package.)*

## Layer Types

- `dense` - Fully connected layer
- `lstm` - Long Short-Term Memory layer
- `rnn` - Recurrent Neural Network layer
- `gru` - Gated Recurrent Unit layer
- `cnn` - Convolutional layer
- `parallel` - Parallel branches with combine modes (see the sketch after this list):
  - `add` - Element-wise addition
  - `concat` - Concatenation
  - `multiply` - Element-wise multiplication
  - `grid_scatter` - Multi-agent grid routing
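
As an illustration of the combine modes above, here is a sketch of a two-branch `parallel` block that concatenates a dense branch with an LSTM branch; the field names follow `LayerConfig`, while the sizes are invented for the example:

```typescript
// The dense branch contributes 8 outputs and the LSTM branch 8 hidden
// units, so with combine_mode "concat" the next layer sees 16 inputs.
const concatBlock = {
  type: "parallel",
  combine_mode: "concat",
  branches: [
    { type: "dense", input_size: 16, output_size: 8, activation: "relu" },
    { type: "lstm", input_size: 16, hidden_size: 8, seq_length: 1 },
  ],
};

const config = {
  batch_size: 1,
  grid_rows: 1,
  grid_cols: 1,
  layers_per_cell: 3,
  layers: [
    { type: "dense", input_size: 8, output_size: 16, activation: "relu" },
    concatBlock,
    { type: "dense", input_size: 16, output_size: 2, activation: "sigmoid" },
  ],
};
```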

## Activation Functions

`relu`, `sigmoid`, `tanh`, `softmax`, `gelu`, `swish`, `mish`, `leaky_relu`, `elu`, `selu`

## License

MIT

## Links

- [GitHub](https://github.com/openfluke/loom)
- [WASM Documentation](../wasm/README.md)
- [Go Examples](../examples/)

*(Removed in 0.1.5: the old support links — [Discussions](https://github.com/openfluke/loom/discussions) and [Documentation](https://github.com/openfluke/loom/tree/main/typescript).)*