mini-jstorch 1.1.9 → 1.2.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,130 +1,17 @@
- ## Mini-JSTorch ##
-
- # IMPORTANT! #
-
- This module will not run *well* on a GPU or in a backend environment.
- We are going to *optimize* it so all users can still use this module.
-
- ---
-
- **Mini JSTorch** is a lightweight, high-performance JavaScript library designed for rapid prototyping of neural networks directly in the frontend environment and for low-end devices. Its core purpose is to enable experimentation and learning on devices with limited computing resources, without compromising stability or training reliability.
-
- ---
-
- ## Vision
-
- Our vision is to democratize AI experimentation by providing a tool that is:
-
- - **Accessible**: Runs efficiently on *low-end* laptops or mobile devices.
- - **Stable**: Handles gradient calculations and NaN values robustly.
- - **Educational**: Clear abstractions for neural network layers, activations, and optimizers.
- - **Fast**: Immediate output without lengthy training delays.
-
- *Mini JSTorch* aims to bridge the gap between conceptual AI learning and hands-on experimentation, making it possible to explore neural network behavior interactively, even on minimal hardware.
-
- ---
-
- ## Features
-
- - **Core Layers**
-   - `Linear`: Fully connected layer with weight and bias support.
- - **Activations**
-   - `ReLU`: Rectified Linear Unit activation.
-   - `Sigmoid`: Sigmoid activation function.
- - **Loss Functions**
-   - `CrossEntropyLoss`: Combines softmax and cross-entropy for classification tasks.
- - **Optimizers**
-   - `Adam`: Adaptive optimizer for stable and efficient training.
- - **Utilities**
-   - `zeros(rows, cols)`: Generate a zero matrix.
-   - `randomMatrix(rows, cols, scale)`: Random initialization for weights.
-   - `softmax(x)`: Compute softmax probabilities.
-   - `crossEntropy(pred, target)`: Compute cross-entropy loss.
-   - `dot(a, b)`: Matrix multiplication.
-   - `addMatrices(a, b)`: Element-wise addition of matrices.
- - **Model Container**
-   - `Sequential`: Container to stack layers sequentially with easy forward and backward passes.
+ # MINI JSTORCH #

  ---

- ## Installation

- ```bash
- npm install mini-jstorch
- # Node.js v20+ is recommended for a good experience with this module
- ```
+ **Just empty... this version is only a note; do not use it for your projects' models.**
+ **I JUST WANT TO GIVE A MESSAGE: THE MAJOR UPDATE IS COMING AT 2:22 AM. BE READY FOR IT, GUYS!**

- ---
-
- ## Example Scripts
-
- ```javascript
- import { Sequential, Linear, ReLU, Sigmoid, CrossEntropyLoss, Adam } from './engine/MainEngine.js';
-
- // Construct a model with two hidden layers
- const model = new Sequential([
-   new Linear(2, 4),
-   new ReLU(),
-   new Linear(4, 4),
-   new ReLU(),
-   new Linear(4, 2),
-   new Sigmoid()
- ]);
+ # HUGE THANKS 🎉 #

- // Training data (XOR)
- const inputs = [
-   [0,0], [0,1], [1,0], [1,1]
- ];
- const targets = [
-   [1,0], [0,1], [0,1], [1,0]
- ];
-
- // Optimizer and loss
- const optimizer = new Adam(model.parameters(), 0.01);
- const lossFunc = new CrossEntropyLoss();
-
- // Training loop
- for(let epoch = 1; epoch <= 5000; epoch++) {
-   let totalLoss = 0;
-   for(let i = 0; i < inputs.length; i++) {
-     const output = model.forward([inputs[i]]);
-     const loss = lossFunc.forward(output, [targets[i]]);
-     totalLoss += loss;
-     model.backward(lossFunc.backward());
-     optimizer.step();
-   }
-   if(epoch % 500 === 0) console.log(`Epoch ${epoch}, Loss: ${totalLoss.toFixed(6)}`);
- }
-
- // Predictions
- inputs.forEach(inp => {
-   const pred = model.forward([inp]);
-   console.log(`${inp} -> ${pred[0].map(p => p.toFixed(4))}`);
- });
- ```
- ---
+ I genuinely appreciate everyone who has taken the time to install and try out this module. Every download isn't just a number; it's a real person spending their time to explore, learn, or build something with this module!

- ## Intended Use Cases
-
- - **Experimentation on low-end devices or mobile browsers.**
- - **Learning and teaching foundational neural network concepts.**
- - **Testing small to medium feedforward models in real time.**
- - **Quick prototyping without GPU dependency or complex setup.**
-
- ---
-
- ## Roadmap
-
- - **Browser-based interactive playground.**
- - **Additional activation functions and loss options.**
- - **Real-time visualization of training metrics and loss curves.**
- - **Support for small convolutional networks in frontend environments.**
-
- ---
+ Reaching 400+ weekly downloads might sound small compared to giant frameworks, but for me it's huge. It means this project is actually helping people out there, and that alone makes every late-night coding session worth it.

- ## Facts
+ Thank you for giving this module a chance. Your support and trust are what keep this project moving forward! 🚀

- - **This module is implemented entirely in pure JavaScript.**
- - **The `Dummy` folder contains modules used for development, testing, and debugging before integration into the main engine.**
- - **This module was created by a `single` developer.**
+ # BE READY FOR IT... SET YOUR TIME CLOCK. #
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "mini-jstorch",
-   "version": "1.1.9",
+   "version": "1.2.1",
    "type": "module",
    "description": "A lightweight JavaScript neural network library for rapid frontend AI experimentation on low-resource devices, inspired by PyTorch.",
    "main": "index.js",
@@ -0,0 +1,256 @@
+ // MINI JSTORCH ENGINE
+ // ENGINE VER: 1.2.0
+ // LICENSE: MIT (C) Rizal 2025
+
+ // ---------------------- Utilities ----------------------
+ function zeros(rows, cols) { return Array.from({length:rows},()=>Array(cols).fill(0)); }
+ function ones(rows, cols) { return Array.from({length:rows},()=>Array(cols).fill(1)); }
+ function randomMatrix(rows, cols, scale=0.1){ return Array.from({length:rows},()=>Array.from({length:cols},()=> (Math.random()*2-1)*scale)); }
+ function transpose(matrix){ return matrix[0].map((_,i)=>matrix.map(row=>row[i])); }
+ function addMatrices(a,b){ return a.map((row,i)=>row.map((v,j)=>v+(b[i] && b[i][j]!==undefined?b[i][j]:0))); }
+ function dot(a,b){ const res=zeros(a.length,b[0].length); for(let i=0;i<a.length;i++) for(let j=0;j<b[0].length;j++) for(let k=0;k<a[0].length;k++) res[i][j]+=a[i][k]*b[k][j]; return res; }
+ function softmax(x){ const m=Math.max(...x); const exps=x.map(v=>Math.exp(v-m)); const s=exps.reduce((a,b)=>a+b,0); return exps.map(v=>v/s); }
+ function crossEntropy(pred,target){ const eps=1e-12; return -target.reduce((sum,t,i)=>sum+t*Math.log(pred[i]+eps),0); }
+
+ // ---------------------- Tensor ----------------------
+ export class Tensor {
+   constructor(data){ this.data=data; this.grad=zeros(data.length,data[0].length); }
+   shape(){ return [this.data.length,this.data[0].length]; }
+   add(t){ return t instanceof Tensor?this.data.map((r,i)=>r.map((v,j)=>v+t.data[i][j])):this.data.map(r=>r.map(v=>v+t)); }
+   sub(t){ return t instanceof Tensor?this.data.map((r,i)=>r.map((v,j)=>v-t.data[i][j])):this.data.map(r=>r.map(v=>v-t)); }
+   mul(t){ return t instanceof Tensor?this.data.map((r,i)=>r.map((v,j)=>v*t.data[i][j])):this.data.map(r=>r.map(v=>v*t)); }
+   matmul(t){ if(t instanceof Tensor) return dot(this.data,t.data); else throw new Error("matmul requires Tensor"); }
+   transpose(){ return transpose(this.data); }
+   flatten(){ return this.data.flat(); }
+   static zeros(r,c){ return new Tensor(zeros(r,c)); }
+   static ones(r,c){ return new Tensor(ones(r,c)); }
+   static random(r,c,scale=0.1){ return new Tensor(randomMatrix(r,c,scale)); }
+ }
+
+ // ---------------------- Layers ----------------------
+ export class Linear {
+   constructor(inputDim,outputDim){
+     this.W=randomMatrix(inputDim,outputDim);
+     this.b=Array(outputDim).fill(0);
+     this.gradW=zeros(inputDim,outputDim);
+     this.gradb=Array(outputDim).fill(0);
+     this.x=null;
+   }
+
+   forward(x){
+     this.x=x;
+     const out=dot(x,this.W);
+     return out.map((row,i)=>row.map((v,j)=>v+this.b[j]));
+   }
+
+   backward(grad){
+     for(let i=0;i<this.W.length;i++) for(let j=0;j<this.W[0].length;j++)
+       this.gradW[i][j]=this.x.reduce((sum,row,k)=>sum+row[i]*grad[k][j],0);
+     for(let j=0;j<this.b.length;j++)
+       this.gradb[j]=grad.reduce((sum,row)=>sum+row[j],0);
+
+     const gradInput=zeros(this.x.length,this.W.length);
+     for(let i=0;i<this.x.length;i++)
+       for(let j=0;j<this.W.length;j++)
+         for(let k=0;k<this.W[0].length;k++)
+           gradInput[i][j]+=grad[i][k]*this.W[j][k];
+     return gradInput;
+   }
+
+   parameters(){ return [ {param:this.W,grad:this.gradW}, {param:[this.b],grad:[this.gradb]} ]; }
+ }
+
+ // ---------------------- Conv2D ----------------------
+ export class Conv2D {
+   constructor(inC,outC,kernel,stride=1,padding=0){
+     this.inC=inC; this.outC=outC; this.kernel=kernel;
+     this.stride=stride; this.padding=padding;
+     this.W=Array(outC).fill(0).map(()=>Array(inC).fill(0).map(()=>randomMatrix(kernel,kernel)));
+     this.gradW=Array(outC).fill(0).map(()=>Array(inC).fill(0).map(()=>zeros(kernel,kernel)));
+     this.x=null;
+   }
+
+   pad2D(input,pad){
+     return input.map(channel=>{
+       const rows=channel.length+2*pad;
+       const cols=channel[0].length+2*pad;
+       const out=Array.from({length:rows},()=>Array(cols).fill(0));
+       for(let i=0;i<channel.length;i++) for(let j=0;j<channel[0].length;j++) out[i+pad][j+pad]=channel[i][j];
+       return out;
+     });
+   }
+
+   conv2DSingle(input,kernel){
+     const rows=input.length-kernel.length+1;
+     const cols=input[0].length-kernel[0].length+1;
+     const out=zeros(rows,cols);
+     for(let i=0;i<rows;i++) for(let j=0;j<cols;j++)
+       for(let ki=0;ki<kernel.length;ki++) for(let kj=0;kj<kernel[0].length;kj++)
+         out[i][j]+=input[i+ki][j+kj]*kernel[ki][kj];
+     return out;
+   }
+
+   forward(batch){
+     this.x=batch;
+     return batch.map(sample=>{
+       const channelsOut=[];
+       for(let oc=0;oc<this.outC;oc++){
+         let outChan=zeros(sample[0].length,sample[0][0].length);
+         for(let ic=0;ic<this.inC;ic++){
+           let inputChan=sample[ic];
+           if(this.padding>0) inputChan=this.pad2D([inputChan],this.padding)[0];
+           const conv=this.conv2DSingle(inputChan,this.W[oc][ic]);
+           outChan=addMatrices(outChan,conv);
+         }
+         channelsOut.push(outChan);
+       }
+       return channelsOut;
+     });
+   }
+
+   backward(grad) {
+     const batchSize = this.x.length;
+     const gradInput = this.x.map(sample => sample.map(chan => zeros(chan.length, chan[0].length)));
+     const gradW = this.W.map(oc => oc.map(ic => zeros(this.kernel,this.kernel)));
+
+     for (let b = 0; b < batchSize; b++) {
+       const xPadded = this.pad2D(this.x[b], this.padding);
+       const gradInputPadded = xPadded.map(chan => zeros(chan.length, chan[0].length));
+
+       for (let oc = 0; oc < this.outC; oc++) {
+         for (let ic = 0; ic < this.inC; ic++) {
+           const outGrad = grad[b][oc];
+           const inChan = xPadded[ic];
+
+           // Compute gradW
+           for (let i = 0; i < this.kernel; i++) {
+             for (let j = 0; j < this.kernel; j++) {
+               let sum = 0;
+               for (let y = 0; y < outGrad.length; y++) {
+                 for (let x = 0; x < outGrad[0].length; x++) {
+                   const inY = y * this.stride + i;
+                   const inX = x * this.stride + j;
+                   if (inY < inChan.length && inX < inChan[0].length) {
+                     sum += inChan[inY][inX] * outGrad[y][x];
+                   }
+                 }
+               }
+               gradW[oc][ic][i][j] += sum;
+             }
+           }
+
+           // Compute gradInput
+           const flippedKernel = this.W[oc][ic].map(row => [...row].reverse()).reverse();
+           for (let y = 0; y < outGrad.length; y++) {
+             for (let x = 0; x < outGrad[0].length; x++) {
+               for (let i = 0; i < this.kernel; i++) {
+                 for (let j = 0; j < this.kernel; j++) {
+                   const inY = y * this.stride + i;
+                   const inX = x * this.stride + j;
+                   if (inY < gradInputPadded[ic].length && inX < gradInputPadded[ic][0].length) {
+                     gradInputPadded[ic][inY][inX] += flippedKernel[i][j] * outGrad[y][x];
+                   }
+                 }
+               }
+             }
+           }
+         }
+       }
+
+       // Remove padding from gradInput
+       if (this.padding > 0) {
+         for (let ic = 0; ic < this.inC; ic++) {
+           const padded = gradInputPadded[ic];
+           const cropped = padded.slice(this.padding, padded.length - this.padding)
+             .map(row => row.slice(this.padding, row.length - this.padding));
+           gradInput[b][ic] = cropped;
+         }
+       } else {
+         for (let ic = 0; ic < this.inC; ic++) gradInput[b][ic] = gradInputPadded[ic];
+       }
+     }
+
+     this.gradW = gradW;
+     return gradInput;
+   }
+
+   parameters(){ return this.W.flatMap((w,oc)=>w.map((wc,ic)=>({param:wc,grad:this.gradW[oc][ic]}))); }
+ }
+
+ // ---------------------- Sequential ----------------------
+ export class Sequential {
+   constructor(layers=[]){ this.layers=layers; }
+   forward(x){ return this.layers.reduce((acc,l)=>l.forward(acc), x); }
+   backward(grad){ return this.layers.reduceRight((g,l)=>l.backward(g), grad); }
+   parameters(){ return this.layers.flatMap(l=>l.parameters?l.parameters():[]); }
+ }
+
+ // ---------------------- Activations ----------------------
+ export class ReLU{ constructor(){ this.out=null; } forward(x){ this.out=x.map(r=>r.map(v=>Math.max(0,v))); return this.out; } backward(grad){ return grad.map((r,i)=>r.map((v,j)=>v*(this.out[i][j]>0?1:0))); } }
+ export class Sigmoid{ constructor(){ this.out=null; } forward(x){ const fn=v=>1/(1+Math.exp(-v)); this.out=x.map(r=>r.map(fn)); return this.out; } backward(grad){ return grad.map((r,i)=>r.map((v,j)=>v*this.out[i][j]*(1-this.out[i][j]))); } }
+ export class Tanh{ constructor(){ this.out=null; } forward(x){ this.out=x.map(r=>r.map(v=>Math.tanh(v))); return this.out; } backward(grad){ return grad.map((r,i)=>r.map((v,j)=>v*(1-this.out[i][j]**2))); } }
+ export class LeakyReLU{ constructor(alpha=0.01){ this.alpha=alpha; this.out=null; } forward(x){ this.out=x.map(r=>r.map(v=>v>0?v:v*this.alpha)); return this.out; } backward(grad){ return grad.map((r,i)=>r.map((v,j)=>v*(this.out[i][j]>0?1:this.alpha))); } }
+ // tanh-approximation GELU; backward applies its analytic derivative to the cached input.
+ export class GELU{ constructor(){ this.x=null; } forward(x){ this.x=x; const c=Math.sqrt(2/Math.PI); return x.map(r=>r.map(v=>0.5*v*(1+Math.tanh(c*(v+0.044715*v**3))))); } backward(grad){ const c=Math.sqrt(2/Math.PI); return grad.map((r,i)=>r.map((g,j)=>{ const x=this.x[i][j], t=Math.tanh(c*(x+0.044715*x**3)); return g*(0.5*(1+t)+0.5*x*(1-t*t)*c*(1+3*0.044715*x*x)); })); } }
+
+ // ---------------------- Dropout ----------------------
+ // Entries are zeroed with probability p; the same mask is reapplied in backward.
+ export class Dropout{ constructor(p=0.5){ this.p=p; this.mask=null; } forward(x){ this.mask=x.map(r=>r.map(()=>Math.random()>=this.p?1:0)); return x.map((r,i)=>r.map((v,j)=>v*this.mask[i][j])); } backward(grad){ return grad.map((r,i)=>r.map((v,j)=>v*this.mask[i][j])); } }
+
+ // ---------------------- Losses ----------------------
+ export class MSELoss{ forward(pred,target){ this.pred=pred; this.target=target; const losses=pred.map((row,i)=>row.reduce((sum,v,j)=>sum+(v-target[i][j])**2,0)/row.length); return losses.reduce((a,b)=>a+b,0)/pred.length; } backward(){ return this.pred.map((row,i)=>row.map((v,j)=>2*(v-this.target[i][j])/row.length)); } }
+ export class CrossEntropyLoss{ forward(pred,target){ this.pred=pred; this.target=target; const losses=pred.map((p,i)=>crossEntropy(softmax(p),target[i])); return losses.reduce((a,b)=>a+b,0)/pred.length; } backward(){ return this.pred.map((p,i)=>{ const s=softmax(p); return s.map((v,j)=>(v-this.target[i][j])/this.pred.length); }); } }
+
+ // ---------------------- Optimizers ----------------------
+ export class Adam{
+   constructor(params,lr=0.001,b1=0.9,b2=0.999,eps=1e-8){
+     this.params=params; this.lr=lr; this.beta1=b1; this.beta2=b2; this.eps=eps;
+     this.m=params.map(p=>zeros(p.param.length,p.param[0].length||1));
+     this.v=params.map(p=>zeros(p.param.length,p.param[0].length||1));
+     this.t=0;
+   }
+   step(){
+     this.t++;
+     this.params.forEach((p,idx)=>{
+       for(let i=0;i<p.param.length;i++)
+         for(let j=0;j<(p.param[0].length||1);j++){
+           const g=p.grad[i][j];
+           this.m[idx][i][j]=this.beta1*this.m[idx][i][j]+(1-this.beta1)*g;
+           this.v[idx][i][j]=this.beta2*this.v[idx][i][j]+(1-this.beta2)*g*g;
+           const mHat=this.m[idx][i][j]/(1-Math.pow(this.beta1,this.t));
+           const vHat=this.v[idx][i][j]/(1-Math.pow(this.beta2,this.t));
+           p.param[i][j]-=this.lr*mHat/(Math.sqrt(vHat)+this.eps);
+         }
+     });
+   }
+ }
+
+ export class SGD{ constructor(params,lr=0.01){ this.params=params; this.lr=lr; } step(){ this.params.forEach(p=>{ for(let i=0;i<p.param.length;i++) for(let j=0;j<(p.param[0].length||1);j++) p.param[i][j]-=this.lr*p.grad[i][j]; }); } }
+
+ // ---------------------- Model Save/Load ----------------------
+ export function saveModel(model){
+   if(!(model instanceof Sequential)) throw new Error("saveModel supports only Sequential");
+   const weights=model.layers.map(layer=>({weights:layer.W||null,biases:layer.b||null}));
+   return JSON.stringify(weights);
+ }
+
+ export function loadModel(model,json){
+   if(!(model instanceof Sequential)) throw new Error("loadModel supports only Sequential");
+   const weights=JSON.parse(json);
+   model.layers.forEach((layer,i)=>{
+     if(layer.W && weights[i].weights) layer.W=weights[i].weights;
+     if(layer.b && weights[i].biases) layer.b=weights[i].biases;
+   });
+ }
+
+ // ---------------------- Advanced Utils ----------------------
+ export function flattenBatch(batch){ return batch.flat(2); }
+ export function stack(tensors){ return tensors.map(t=>t.data); }
+ export function eye(n){ return Array.from({length:n},(_,i)=>Array.from({length:n},(_,j)=>i===j?1:0)); }
+ export function concat(a,b,axis=0){ if(axis===0) return [...a,...b]; if(axis===1) return a.map((row,i)=>[...row,...b[i]]); throw new Error("concat supports axis 0 or 1"); }
+ export function reshape(tensor, rows, cols) {
+   const flat = tensor.data.flat(); // flatten first
+   if(flat.length !== rows*cols) throw new Error("reshape size mismatch");
+   const out = Array.from({length: rows}, (_, i) =>
+     flat.slice(i*cols, i*cols + cols)
+   );
+   return out;
+ }
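Taken together, the exports above support the same workflow as the old README's example. The following is a minimal usage sketch written against the code in this hunk, not a script shipped with the package; the `./engine/MainEngine.js` import path is assumed from the old README, and the model shape and hyperparameters are illustrative:

```javascript
// Minimal training sketch for the 1.2.x engine API shown above.
import { Sequential, Linear, ReLU, CrossEntropyLoss, Adam, saveModel, loadModel } from './engine/MainEngine.js';

// CrossEntropyLoss applies softmax internally, so the last layer emits raw logits.
const model = new Sequential([
  new Linear(2, 4),
  new ReLU(),
  new Linear(4, 2),
]);

const inputs  = [[0, 0], [0, 1], [1, 0], [1, 1]]; // XOR inputs
const targets = [[1, 0], [0, 1], [0, 1], [1, 0]]; // one-hot labels

const lossFunc  = new CrossEntropyLoss();
const optimizer = new Adam(model.parameters(), 0.01);

for (let epoch = 1; epoch <= 2000; epoch++) {
  const output = model.forward(inputs);             // whole batch as a 2D array
  const loss   = lossFunc.forward(output, targets); // mean loss over the batch
  model.backward(lossFunc.backward());              // Linear overwrites gradW/gradb each pass, so no explicit zeroing step
  optimizer.step();
  if (epoch % 500 === 0) console.log(`Epoch ${epoch}, Loss: ${loss.toFixed(6)}`);
}

// Round-trip the Linear weights through JSON.
const snapshot = saveModel(model);
loadModel(model, snapshot);
```

Note that `Conv2D` works on `[batch][channel][row][col]` nested arrays rather than the 2D matrices used here, and `saveModel`/`loadModel` only serialize layers that expose `W`/`b`, such as `Linear`.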
@@ -1,5 +1,5 @@
  **File Unavailable**
- This file has been restricted by the developer.
+ Files inside this folder have been restricted by the developer.

  It is intended solely for experimental purposes in future versions, including optimizations and bug fixes before full integration into the Main Engine.

@@ -0,0 +1 @@
+ console.log("MAJOR UPDATE AT 2:22 AM, BE READY GUYS! This version is empty because it only carries a major-update note.")
@@ -0,0 +1,12 @@
+ // You can delete these files; they are not important for the engine runtime.
+
+ e=run=[cpu[runtime]]
+ e.set.runtime('beta')
+ e.rnt()
+ e.set()
+ e.register('vanilla',expe='Experiments.js',main='MainEngine.js')
+ l=e.prog('asm')
+ r=l.gv=[0xCAFEBABE]
+ eng=e.load(register,r,'vanilla')
+ eng.boot(e,r,'dp')
+ eng.load()
package/CHANGELOG.md DELETED
@@ -1,32 +0,0 @@
- ## Changelog ##
-
- **Official Release:** Monday, August 18, 2025
- **Version:** 1.1.9
-
- All notable changes to *Mini-JSTorch* will be documented in this file.
-
- ---
-
- ## [1.1.9] - Monday, August 18, 2025
-
- ### Added
- - Complete overhaul of the core engine for stability and full feature support.
- - Adam optimizer fully integrated with batch gradient handling.
- - NaN-safe operations for forward/backward passes and loss calculations.
- - Sequential container with robust parameter tracking.
- - Utility functions enhanced for numeric safety and matrix operations.
- - Ready-to-use example scripts with multiple hidden layers for frontend experimentation.
- - Full support for lightweight devices, optimized for speed and low memory usage.
-
- ### Fixed
- - Bugs in ReLU and gradient propagation causing NaN loss values.
- - Backward propagation errors for batch matrices corrected.
- - Critical error during engine startup.
-
- ### Changed
- - Layer and activation API standardized for consistency.
- - CrossEntropyLoss now safely handles edge cases in probability computations.
- - System improvements to make the experience more stable and lighter.
- - Minor numeric stability improvements.
-
- ---
@@ -1,148 +0,0 @@
- // ================================
- // MINI JS AI ENGINE v1
- // ================================
-
- // Basic tensor ops
- const zeros = n => Array(n).fill(0);
- const randn = () => (Math.random() * 2 - 1) * 0.1;
-
- const dot = (a,b) => a.map(row=>b[0].map((_,j)=>row.reduce((sum,v,k)=>sum+v*b[k][j],0)));
- const add = (a,b) => a.map((row,i)=>row.map((v,j)=>v+b[i][j]));
- const sub = (a,b) => a.map((row,i)=>row.map((v,j)=>v-b[i][j]));
- const mulScalar = (a,s) => a.map(row=>row.map(v=>v*s));
- const transpose = m => m[0].map((_,i)=>m.map(row=>row[i]));
-
- // Activations
- const Activations = {
-   relu: x => x.map(v=>Math.max(0,v)),
-   linear: x => x,
-   leakyRelu: (x, alpha=0.01) => x.map(v=>v>0?v:alpha*v)
- };
- const dActivations = {
-   relu: x => x.map(v=>v>0?1:0),
-   linear: x => x.map(_=>1),
-   leakyRelu: (x, alpha=0.01) => x.map(v=>v>0?1:alpha)
- };
-
- // Dense layer with manual grad & auto-grad
- class Dense {
-   constructor(inputSize, outputSize, activation='linear'){
-     this.inputSize = inputSize;
-     this.outputSize = outputSize;
-     this.activation = activation;
-     this.W = Array.from({length:inputSize},()=>Array.from({length:outputSize},()=>randn()*Math.sqrt(2/inputSize)));
-     this.b = Array(outputSize).fill(0);
-
-     // Adam variables
-     this.mW = mulScalar(this.W,0);
-     this.vW = mulScalar(this.W,0);
-     this.mb = Array(outputSize).fill(0);
-     this.vb = Array(outputSize).fill(0);
-     this.lastInput = null;
-     this.lastOutput = null;
-   }
-
-   forward(X){
-     this.lastInput = X;
-     let output = dot(X,this.W);
-     output = output.map((row,i)=>row.map((v,j)=>v+this.b[j]));
-     this.lastOutput = output.map(row => Activations[this.activation](row));
-     return this.lastOutput;
-   }
-
-   backward(dLoss, lr=0.001, beta1=0.9, beta2=0.999, t=1){
-     const flatOut = this.lastOutput.flat();
-     const actGrad = dActivations[this.activation](flatOut);
-     const dOut = dLoss.flat().map((v,i)=>v*actGrad[i]);
-
-     const gradW = Array.from({length:this.inputSize},()=>Array(this.outputSize).fill(0));
-     const gradB = Array(this.outputSize).fill(0);
-
-     for(let k=0;k<this.lastInput.length;k++){
-       for(let i=0;i<this.inputSize;i++){
-         for(let j=0;j<this.outputSize;j++){
-           gradW[i][j] += this.lastInput[k][i]*dOut[j]/this.lastInput.length;
-         }
-       }
-     }
-     for(let j=0;j<this.outputSize;j++) gradB[j] = dOut[j]/this.lastInput.length;
-
-     // Adam update with bias correction
-     for(let i=0;i<this.inputSize;i++){
-       for(let j=0;j<this.outputSize;j++){
-         this.mW[i][j] = beta1*this.mW[i][j]+(1-beta1)*gradW[i][j];
-         this.vW[i][j] = beta2*this.vW[i][j]+(1-beta2)*gradW[i][j]*gradW[i][j];
-         const mHat = this.mW[i][j]/(1-Math.pow(beta1,t));
-         const vHat = this.vW[i][j]/(1-Math.pow(beta2,t));
-         this.W[i][j] -= lr*mHat/(Math.sqrt(vHat)+1e-8);
-       }
-     }
-
-     for(let j=0;j<this.outputSize;j++){
-       this.mb[j] = beta1*this.mb[j]+(1-beta1)*gradB[j];
-       this.vb[j] = beta2*this.vb[j]+(1-beta2)*gradB[j]*gradB[j];
-       const mHat = this.mb[j]/(1-Math.pow(beta1,t));
-       const vHat = this.vb[j]/(1-Math.pow(beta2,t));
-       this.b[j] -= lr*mHat/(Math.sqrt(vHat)+1e-8);
-     }
-
-     // Return dLoss for next layer (manual grad)
-     const dNext = Array(this.lastInput.length).fill(0).map(_=>Array(this.inputSize).fill(0));
-     for(let i=0;i<this.inputSize;i++){
-       for(let j=0;j<this.outputSize;j++){
-         for(let k=0;k<this.lastInput.length;k++){
-           dNext[k][i] += dOut[j]*this.W[i][j];
-         }
-       }
-     }
-     return dNext;
-   }
- }
-
- // Sequential model
- class Seq {
-   constructor(layers){
-     this.layers = layers;
-     this.logs = [];
-   }
-
-   forward(X){
-     return this.layers.reduce((inp,layer)=>layer.forward(inp), X);
-   }
-
-   train(X,y,epochs=200, lr=0.001){
-     for(let epoch=1;epoch<=epochs;epoch++){
-       const yPred = this.forward(X);
-       const dLoss = yPred.map((row,i)=>row.map((v,j)=>2*(v - y[i][j])/y.length));
-
-       let grad = dLoss;
-       for(let l=this.layers.length-1;l>=0;l--){
-         grad = this.layers[l].backward(grad, lr, 0.9, 0.999, epoch);
-       }
-
-       const loss = yPred.map((row,i)=>row.map((v,j)=>Math.pow(v - y[i][j],2))).flat().reduce((a,b)=>a+b,0)/y.length;
-       if(epoch%50===0) console.log(`Epoch ${epoch}, Loss ${loss.toFixed(4)}, Pred sample ${yPred[0][0].toFixed(2)}`);
-     }
-   }
-
-   predict(X){
-     return this.forward(X);
-   }
- }
-
- // ================================
- // EXAMPLE USAGE
- // ================================
-
- const model = new Seq([
-   new Dense(2,16,'relu'),
-   new Dense(16,12,'relu'),
-   new Dense(12,1,'linear')
- ]);
-
- const X = [[1,2],[2,3],[3,4]];
- const y = [[3],[5],[7]];
-
- model.train(X,y,200,0.01);
-
- console.log('Prediction [7,8]:', model.predict([[7,8]]));
@@ -1,8 +0,0 @@
- --start[line]--
- e=run=[cpu[core]]
- e.cg()
- e.register('vanilla')
- l=e.load('asm')
- r=l.gv=[0xCAFEBABE]
- e.load(register,r,'vanilla')
- --end[line]--