mini-jstorch 1.2.1 β†’ 1.3.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/MODULE.md ADDED
@@ -0,0 +1,46 @@
1
+ ## MODULE STATS ##
2
+
3
+ New Files that automatically will All notable changes status state to *Mini-JSTorch* will be documented in this file.
4
+
5
+ # MSG
6
+
7
+ btw. This files is actually would be modified and note all changes system state automatically without manual i type with myself.
8
+
9
+ ---
10
+
11
+ **OFFICIAL RELEASE:** 2025-Monday-August-23 time: 2:22 AM (estimated time release)
12
+ **VERSION:** 1.3.0
13
+ **LICENSE:** MIT Β© 2025
14
+ **AUTHOR:** Rizal
15
+ **MODULE NAME:** mini-jstorch
16
+ **MODULE DESC:** A lightweight JavaScript neural network library for rapid frontend AI experimentation on low-resource devices Inspired by PyTorch.
17
+ **MODULE TYPE:** module
18
+ **ENGINE VERSIONS:** 1.2.0
19
+ **UPDATE TITLE:** `MAJOR` update.
20
+ **ADDED FILES/FOLDER:** {
21
+ "tests" //folder
22
+ "tests/proj1.js" //files
23
+ "tests/tests.js" //files [npmignore detected]
24
+ "tests/proj2.js" //files
25
+ "tests/proj3.js" //files
26
+ "N/A" //N/A [N/A]
27
+ }
28
+
29
+ ---
30
+
31
+ **MODIFIED FILES:** {
32
+ "src" //folder
33
+ "src/startup.cpu" //files
34
+ "src/MainEngine.js" //files
35
+ "tests/tests.js" //files [npmignore detected]
36
+ "tests" //folder
37
+ "src/Dummy/exp.js" //files [npmignore detected]
38
+ "package.json" //files
39
+ "src/EngState.json" //files [npmignore detected]
40
+ "src/state.txt" //files [npmignore detected]
41
+ "README.md" //files
42
+ ".npmignore" //files [npmignore detected]
43
+ "N/A" //N/A [N/A]
44
+ }
45
+
46
+ ---
package/README.md CHANGED
@@ -1,17 +1,136 @@
1
- # MINI JSTORCH #
1
+ # Mini-JSTorch Major Update
2
2
 
3
- ---
4
-
5
-
6
- **Just Empty.. this versions only an note do not use it for your projects models.**
7
- **I ONLY WANNA GIVE A MESSAGES IF THE MAJOR UPDATE WOULD BE COMING AT TIME 2:22 AM JUST BE READY FOR IT GUYS!**
3
+ # MAJOR UPDATE IS RELEASED NOW!! [got little late]
8
4
 
9
- # HUGE THANKS πŸŽ‰ #
5
+ ---
10
6
 
7
+ ## πŸ™ Big Thanks πŸ™
11
8
  I genuinely appreciate everyone who has taken the time to install and try out this module. Every download isn’t just a number it’s a real person spending their time to explore, learn, or build something with this module!
12
9
 
13
10
  Reaching 400+ weekly downloads might sound small compared to giant frameworks, but for me it’s huge. It means this project is actually helping people out there, and that alone makes every late-night coding session worth it.
14
11
 
15
12
  Thank you for giving this module a chance. Your support and trust are what keep this project moving forward! πŸš€
16
13
 
17
- # BE READY FOR IT... SET YOUR TIME CLOCK. #
14
+ ---
15
+
16
+ ## Overview
17
+
18
+ Mini-JSTorch is a lightweight, high-performance JavaScript library for building neural networks that runs efficiently in both frontend and backend environments, including low-end devices. The library enables experimentation and learning in AI without compromising stability, accuracy, or training reliability.
19
+
20
+ This release, **version 1.2.3**, is a **major update** that introduces full feature coverage, including convolutional layers, advanced activations, dropout, tensor broadcasting, and enhanced utilities. The engine remains pure JavaScript, lightweight, and compatible with both Node.js and browser environments.
21
+
22
+ ---
23
+
24
+ ## Major Updates in 1.2.3
25
+
26
+ - Full **Conv2D support** with forward and backward operations.
27
+ - **Tensor operations** now support broadcasting and reshaping.
28
+ - Added new activations: `LeakyReLU`, `GELU`.
29
+ - Optimizers: `Adam` and `SGD` fully integrated with gradient updates.
30
+ - Dropout layer added for regularization.
31
+ - Advanced utilities for tensor manipulation: `flatten`, `stack`, `concat`, `eye`, `reshape`.
32
+ - End-to-end training and prediction workflow now fully tested.
33
+ - Save and load model functionality included for seamless persistence.
34
+ - Optimized performance for both frontend and backend usage.
35
+ - Maintained backward compatibility for previous core layers and activations.
36
+ - folder tests files template that you can use it before make your models.
37
+
38
+ ---
39
+
40
+ ## Features
41
+
42
+ - **Core Layers**: Linear, Conv2D
43
+ - **Activations**: ReLU, Sigmoid, Tanh, LeakyReLU, GELU
44
+ - **Loss Functions**: MSELoss, CrossEntropyLoss
45
+ - **Optimizers**: Adam, SGD
46
+ - **Utilities**: zeros, randomMatrix, softmax, crossEntropy, dot, addMatrices, reshape, stack, flatten, eye, concat
47
+ - **Model Container**: Sequential (for stacking layers with forward/backward passes)
48
+
49
+ ---
50
+
51
+ ## Installation
52
+
53
+ ```bash
54
+ npm install mini-jstorch
55
+ # Node.js v20+ recommended for best performance
56
+ ```
57
+
58
+ ---
59
+
60
+ ## Quick Start Example
61
+
62
+ ```javascript
63
+ import { Sequential, Linear, ReLU, Sigmoid, CrossEntropyLoss, Adam } from './src/MainEngine.js';
64
+
65
+ // Build model
66
+ const model = new Sequential([
67
+ new Linear(2,4),
68
+ new ReLU(),
69
+ new Linear(4,2),
70
+ new Sigmoid()
71
+ ]);
72
+
73
+ // Sample XOR dataset
74
+ const X = [
75
+ [0,0], [0,1], [1,0], [1,1]
76
+ ];
77
+ const Y = [
78
+ [1,0], [0,1], [0,1], [1,0]
79
+ ];
80
+
81
+ // Loss & optimizer
82
+ const lossFn = new CrossEntropyLoss();
83
+ const optimizer = new Adam(model.parameters(), 0.1);
84
+
85
+ // Training loop
86
+ for (let epoch = 1; epoch <= 100; epoch++) {
87
+ const pred = model.forward(X);
88
+ const loss = lossFn.forward(pred, Y);
89
+ const gradLoss = lossFn.backward();
90
+ model.backward(gradLoss);
91
+ optimizer.step();
92
+ if (epoch % 20 === 0) console.log(`Epoch ${epoch}, Loss: ${loss.toFixed(4)}`);
93
+ }
94
+
95
+ // Prediction
96
+ const predTest = model.forward(X);
97
+ predTest.forEach((p,i) => {
98
+ const predictedClass = p.indexOf(Math.max(...p));
99
+ console.log(`Input: ${X[i]}, Predicted class: ${predictedClass}, Raw output: ${p.map(v => v.toFixed(3))}`);
100
+ });
101
+ ```
102
+
103
+ ---
104
+
105
+ ## Save & Load Models
106
+
107
+ ```javascript
108
+ import { saveModel, loadModel } from 'mini-jstorch';
109
+
110
+ const json = saveModel(model);
111
+ const model2 = new Sequential([...]); // same architecture
112
+ loadModel(model2, json);
113
+ ```
114
+
115
+ ---
116
+
117
+ ## Intended Use Cases
118
+ - Rapid prototyping of neural networks in frontend and backend.
119
+ - Learning and teaching foundational neural network concepts.
120
+ - Experimentation on low-end devices or mobile browsers.
121
+ - Lightweight AI projects without GPU dependency.
122
+
123
+ ---
124
+
125
+ # License
126
+
127
+ **MIT Β© 2025 Rizal**
128
+
129
+ ---
130
+
131
+ ## Facts
132
+
133
+ - **This module is implemented entirely in pure JavaScript.**
134
+ - **The `Dummy` folder contains modules used for development, testing, and debugging before integration into the main engine.**
135
+ - **files startup.cpu is actually an some random files lol.**
136
+ - **This module was created by a `single` developer.**
package/package.json CHANGED
@@ -1,13 +1,16 @@
1
1
  {
2
2
  "name": "mini-jstorch",
3
- "version": "1.2.1",
3
+ "version": "1.3.0",
4
4
  "type": "module",
5
5
  "description": "A lightweight JavaScript neural network library for rapid frontend AI experimentation on low-resource devices Inspired by PyTorch.",
6
6
  "main": "index.js",
7
7
  "keywords": [
8
8
  "neural-network",
9
9
  "javascript",
10
+ "lightweight-torch",
10
11
  "lightweight",
12
+ "small",
13
+ "javascript-torch",
11
14
  "ai",
12
15
  "jstorch",
13
16
  "pytorch",
package/src/MainEngine.js CHANGED
@@ -1 +1,255 @@
1
- console.log("MAJOR UPDATE IN 2:22 AM BE READY GUYS! this was empty because this versions only for an major note.")
1
+ // MINI JSTORCH ENGINE - MAJOR ULTRA OPTIMIZED 1.2.3
2
+ // LICENSE: MIT (C) Rizal 2025
3
+
4
+ // ---------------------- Utilities ----------------------
5
+ function zeros(rows, cols) { return Array.from({length:rows},()=>Array(cols).fill(0)); }
6
+ function ones(rows, cols) { return Array.from({length:rows},()=>Array(cols).fill(1)); }
7
+ function randomMatrix(rows, cols, scale=0.1){ return Array.from({length:rows},()=>Array.from({length:cols},()=> (Math.random()*2-1)*scale)); }
8
+ function transpose(matrix){ return matrix[0].map((_,i)=>matrix.map(row=>row[i])); }
9
+ function addMatrices(a,b){ return a.map((row,i)=>row.map((v,j)=>v+(b[i] && b[i][j]!==undefined?b[i][j]:0))); }
10
+ function dot(a,b){ const res=zeros(a.length,b[0].length); for(let i=0;i<a.length;i++) for(let j=0;j<b[0].length;j++) for(let k=0;k<a[0].length;k++) res[i][j]+=a[i][k]*b[k][j]; return res; }
11
+ function softmax(x){ const m=Math.max(...x); const exps=x.map(v=>Math.exp(v-m)); const s=exps.reduce((a,b)=>a+b,0); return exps.map(v=>v/s); }
12
+ function crossEntropy(pred,target){ const eps=1e-12; return -target.reduce((sum,t,i)=>sum+t*Math.log(pred[i]+eps),0); }
13
+
14
+ // ---------------------- Tensor ----------------------
15
+ export class Tensor {
16
+ constructor(data){ this.data=data; this.grad=zeros(data.length,data[0].length); }
17
+ shape(){ return [this.data.length,this.data[0].length]; }
18
+ add(t){ return t instanceof Tensor?this.data.map((r,i)=>r.map((v,j)=>v+t.data[i][j])):this.data.map(r=>r.map(v=>v+t)); }
19
+ sub(t){ return t instanceof Tensor?this.data.map((r,i)=>r.map((v,j)=>v-t.data[i][j])):this.data.map(r=>r.map(v=>v-t)); }
20
+ mul(t){ return t instanceof Tensor?this.data.map((r,i)=>r.map((v,j)=>v*t.data[i][j])):this.data.map(r=>r.map(v=>v*t)); }
21
+ matmul(t){ if(t instanceof Tensor) return dot(this.data,t.data); else throw new Error("matmul requires Tensor"); }
22
+ transpose(){ return transpose(this.data); }
23
+ flatten(){ return this.data.flat(); }
24
+ static zeros(r,c){ return new Tensor(zeros(r,c)); }
25
+ static ones(r,c){ return new Tensor(ones(r,c)); }
26
+ static random(r,c,scale=0.1){ return new Tensor(randomMatrix(r,c,scale)); }
27
+ }
28
+
29
+ // ---------------------- Layers ----------------------
30
+ export class Linear {
31
+ constructor(inputDim,outputDim){
32
+ this.W=randomMatrix(inputDim,outputDim);
33
+ this.b=Array(outputDim).fill(0);
34
+ this.gradW=zeros(inputDim,outputDim);
35
+ this.gradb=Array(outputDim).fill(0);
36
+ this.x=null;
37
+ }
38
+
39
+ forward(x){
40
+ this.x=x;
41
+ const out=dot(x,this.W);
42
+ return out.map((row,i)=>row.map((v,j)=>v+this.b[j]));
43
+ }
44
+
45
+ backward(grad){
46
+ for(let i=0;i<this.W.length;i++) for(let j=0;j<this.W[0].length;j++)
47
+ this.gradW[i][j]=this.x.reduce((sum,row,k)=>sum+row[i]*grad[k][j],0);
48
+ for(let j=0;j<this.b.length;j++)
49
+ this.gradb[j]=grad.reduce((sum,row)=>sum+row[j],0);
50
+
51
+ const gradInput=zeros(this.x.length,this.W.length);
52
+ for(let i=0;i<this.x.length;i++)
53
+ for(let j=0;j<this.W.length;j++)
54
+ for(let k=0;k<this.W[0].length;k++)
55
+ gradInput[i][j]+=grad[i][k]*this.W[j][k];
56
+ return gradInput;
57
+ }
58
+
59
+ parameters(){ return [ {param:this.W,grad:this.gradW}, {param:[this.b],grad:[this.gradb]} ]; }
60
+ }
61
+
62
+ // ---------------------- Conv2D ----------------------
63
+ export class Conv2D {
64
+ constructor(inC,outC,kernel,stride=1,padding=0){
65
+ this.inC=inC; this.outC=outC; this.kernel=kernel;
66
+ this.stride=stride; this.padding=padding;
67
+ this.W=Array(outC).fill(0).map(()=>Array(inC).fill(0).map(()=>randomMatrix(kernel,kernel)));
68
+ this.gradW=Array(outC).fill(0).map(()=>Array(inC).fill(0).map(()=>zeros(kernel,kernel)));
69
+ this.x=null;
70
+ }
71
+
72
+ pad2D(input,pad){
73
+ return input.map(channel=>{
74
+ const rows=channel.length+2*pad;
75
+ const cols=channel[0].length+2*pad;
76
+ const out=Array.from({length:rows},()=>Array(cols).fill(0));
77
+ for(let i=0;i<channel.length;i++) for(let j=0;j<channel[0].length;j++) out[i+pad][j+pad]=channel[i][j];
78
+ return out;
79
+ });
80
+ }
81
+
82
+ conv2DSingle(input,kernel){
83
+ const rows=input.length-kernel.length+1;
84
+ const cols=input[0].length-kernel[0].length+1;
85
+ const out=zeros(rows,cols);
86
+ for(let i=0;i<rows;i++) for(let j=0;j<cols;j++)
87
+ for(let ki=0;ki<kernel.length;ki++) for(let kj=0;kj<kernel[0].length;kj++)
88
+ out[i][j]+=input[i+ki][j+kj]*kernel[ki][kj];
89
+ return out;
90
+ }
91
+
92
+ forward(batch){
93
+ this.x=batch;
94
+ return batch.map(sample=>{
95
+ const channelsOut=[];
96
+ for(let oc=0;oc<this.outC;oc++){
97
+ let outChan=zeros(sample[0].length,sample[0][0].length);
98
+ for(let ic=0;ic<this.inC;ic++){
99
+ let inputChan=sample[ic];
100
+ if(this.padding>0) inputChan=this.pad2D([inputChan],this.padding)[0];
101
+ const conv=this.conv2DSingle(inputChan,this.W[oc][ic]);
102
+ outChan=addMatrices(outChan,conv);
103
+ }
104
+ channelsOut.push(outChan);
105
+ }
106
+ return channelsOut;
107
+ });
108
+ }
109
+
110
+ backward(grad) {
111
+ const batchSize = this.x.length;
112
+ const gradInput = this.x.map(sample => sample.map(chan => zeros(chan.length, chan[0].length)));
113
+ const gradW = this.W.map(oc => oc.map(ic => zeros(this.kernel,this.kernel)));
114
+
115
+ for (let b = 0; b < batchSize; b++) {
116
+ const xPadded = this.pad2D(this.x[b], this.padding);
117
+ const gradInputPadded = xPadded.map(chan => zeros(chan.length, chan[0].length));
118
+
119
+ for (let oc = 0; oc < this.outC; oc++) {
120
+ for (let ic = 0; ic < this.inC; ic++) {
121
+ const outGrad = grad[b][oc];
122
+ const inChan = xPadded[ic];
123
+
124
+ // Compute gradW
125
+ for (let i = 0; i < this.kernel; i++) {
126
+ for (let j = 0; j < this.kernel; j++) {
127
+ let sum = 0;
128
+ for (let y = 0; y < outGrad.length; y++) {
129
+ for (let x = 0; x < outGrad[0].length; x++) {
130
+ const inY = y * this.stride + i;
131
+ const inX = x * this.stride + j;
132
+ if (inY < inChan.length && inX < inChan[0].length) {
133
+ sum += inChan[inY][inX] * outGrad[y][x];
134
+ }
135
+ }
136
+ }
137
+ gradW[oc][ic][i][j] += sum;
138
+ }
139
+ }
140
+
141
+ // Compute gradInput
142
+ const flippedKernel = this.W[oc][ic].map(row => [...row].reverse()).reverse();
143
+ for (let y = 0; y < outGrad.length; y++) {
144
+ for (let x = 0; x < outGrad[0].length; x++) {
145
+ for (let i = 0; i < this.kernel; i++) {
146
+ for (let j = 0; j < this.kernel; j++) {
147
+ const inY = y * this.stride + i;
148
+ const inX = x * this.stride + j;
149
+ if (inY < gradInputPadded[ic].length && inX < gradInputPadded[ic][0].length) {
150
+ gradInputPadded[ic][inY][inX] += flippedKernel[i][j] * outGrad[y][x];
151
+ }
152
+ }
153
+ }
154
+ }
155
+ }
156
+ }
157
+ }
158
+
159
+ // Remove padding from gradInput
160
+ if (this.padding > 0) {
161
+ for (let ic = 0; ic < this.inC; ic++) {
162
+ const padded = gradInputPadded[ic];
163
+ const cropped = padded.slice(this.padding, padded.length - this.padding)
164
+ .map(row => row.slice(this.padding, row.length - this.padding));
165
+ gradInput[b][ic] = cropped;
166
+ }
167
+ } else {
168
+ for (let ic = 0; ic < this.inC; ic++) gradInput[b][ic] = gradInputPadded[ic];
169
+ }
170
+ }
171
+
172
+ this.gradW = gradW;
173
+ return gradInput;
174
+ }
175
+
176
+ parameters(){ return this.W.flatMap((w,oc)=>w.map((wc,ic)=>({param:wc,grad:this.gradW[oc][ic]}))); }
177
+ }
178
+
179
+ // ---------------------- Sequential ----------------------
180
+ export class Sequential {
181
+ constructor(layers=[]){ this.layers=layers; }
182
+ forward(x){ return this.layers.reduce((acc,l)=>l.forward(acc), x); }
183
+ backward(grad){ return this.layers.reduceRight((g,l)=>l.backward(g), grad); }
184
+ parameters(){ return this.layers.flatMap(l=>l.parameters?l.parameters():[]); }
185
+ }
186
+
187
+ // ---------------------- Activations ----------------------
188
+ export class ReLU{ constructor(){ this.out=null; } forward(x){ this.out=x.map(r=>r.map(v=>Math.max(0,v))); return this.out; } backward(grad){ return grad.map((r,i)=>r.map((v,j)=>v*(this.out[i][j]>0?1:0))); } }
189
+ export class Sigmoid{ constructor(){ this.out=null; } forward(x){ const fn=v=>1/(1+Math.exp(-v)); this.out=x.map(r=>r.map(fn)); return this.out; } backward(grad){ return grad.map((r,i)=>r.map((v,j)=>v*this.out[i][j]*(1-this.out[i][j]))); } }
190
+ export class Tanh{ constructor(){ this.out=null; } forward(x){ this.out=x.map(r=>r.map(v=>Math.tanh(v))); return this.out; } backward(grad){ return grad.map((r,i)=>r.map((v,j)=>v*(1-this.out[i][j]**2))); } }
191
+ export class LeakyReLU{ constructor(alpha=0.01){ this.alpha=alpha; this.out=null; } forward(x){ this.out=x.map(r=>r.map(v=>v>0?v:v*this.alpha)); return this.out; } backward(grad){ return grad.map((r,i)=>r.map((v,j)=>v*(this.out[i][j]>0?1:this.alpha))); } }
192
+ export class GELU{ constructor(){ this.out=null; } forward(x){ const fn=v=>0.5*v*(1+Math.tanh(Math.sqrt(2/Math.PI)*(v+0.044715*v**3))); this.out=x.map(r=>r.map(fn)); return this.out; } backward(grad){ return grad.map((r,i)=>r.map(v=>v*1)); } }
193
+
194
+ // ---------------------- Dropout ----------------------
195
+ export class Dropout{ constructor(p=0.5){ this.p=p; } forward(x){ return x.map(r=>r.map(v=>v*Math.random()>=this.p?v:0)); } backward(grad){ return grad.map(r=>r.map(v=>v*(1-this.p))); } }
196
+
197
+ // ---------------------- Losses ----------------------
198
+ export class MSELoss{ forward(pred,target){ this.pred=pred; this.target=target; const losses=pred.map((row,i)=>row.reduce((sum,v,j)=>sum+(v-target[i][j])**2,0)/row.length); return losses.reduce((a,b)=>a+b,0)/pred.length; } backward(){ return this.pred.map((row,i)=>row.map((v,j)=>2*(v-this.target[i][j])/row.length)); } }
199
+ export class CrossEntropyLoss{ forward(pred,target){ this.pred=pred; this.target=target; const losses=pred.map((p,i)=>crossEntropy(softmax(p),target[i])); return losses.reduce((a,b)=>a+b,0)/pred.length; } backward(){ return this.pred.map((p,i)=>{ const s=softmax(p); return s.map((v,j)=>(v-this.target[i][j])/this.pred.length); }); } }
200
+
201
+ // ---------------------- Optimizers ----------------------
202
+ export class Adam{
203
+ constructor(params,lr=0.001,b1=0.9,b2=0.999,eps=1e-8){
204
+ this.params=params; this.lr=lr; this.beta1=b1; this.beta2=b2; this.eps=eps;
205
+ this.m=params.map(p=>zeros(p.param.length,p.param[0].length||1));
206
+ this.v=params.map(p=>zeros(p.param.length,p.param[0].length||1));
207
+ this.t=0;
208
+ }
209
+ step(){
210
+ this.t++;
211
+ this.params.forEach((p,idx)=>{
212
+ for(let i=0;i<p.param.length;i++)
213
+ for(let j=0;j<(p.param[0].length||1);j++){
214
+ const g=p.grad[i][j];
215
+ this.m[idx][i][j]=this.beta1*this.m[idx][i][j]+(1-this.beta1)*g;
216
+ this.v[idx][i][j]=this.beta2*this.v[idx][i][j]+(1-this.beta2)*g*g;
217
+ const mHat=this.m[idx][i][j]/(1-Math.pow(this.beta1,this.t));
218
+ const vHat=this.v[idx][i][j]/(1-Math.pow(this.beta2,this.t));
219
+ p.param[i][j]-=this.lr*mHat/(Math.sqrt(vHat)+this.eps);
220
+ }
221
+ });
222
+ }
223
+ }
224
+
225
+ export class SGD{ constructor(params,lr=0.01){ this.params=params; this.lr=lr; } step(){ this.params.forEach(p=>{ for(let i=0;i<p.param.length;i++) for(let j=0;j<(p.param[0].length||1);j++) p.param[i][j]-=this.lr*p.grad[i][j]; }); } }
226
+
227
+ // ---------------------- Model Save/Load ----------------------
228
+ export function saveModel(model){
229
+ if(!(model instanceof Sequential)) throw new Error("saveModel supports only Sequential");
230
+ const weights=model.layers.map(layer=>({weights:layer.W||null,biases:layer.b||null}));
231
+ return JSON.stringify(weights);
232
+ }
233
+
234
+ export function loadModel(model,json){
235
+ if(!(model instanceof Sequential)) throw new Error("loadModel supports only Sequential");
236
+ const weights=JSON.parse(json);
237
+ model.layers.forEach((layer,i)=>{
238
+ if(layer.W && weights[i].weights) layer.W=weights[i].weights;
239
+ if(layer.b && weights[i].biases) layer.b=weights[i].biases;
240
+ });
241
+ }
242
+
243
+ // ---------------------- Advanced Utils ----------------------
244
+ export function flattenBatch(batch){ return batch.flat(2); }
245
+ export function stack(tensors){ return tensors.map(t=>t.data); }
246
+ export function eye(n){ return Array.from({length:n},(_,i)=>Array.from({length:n},(_,j)=>i===j?1:0)); }
247
+ export function concat(a,b,axis=0){ /* concat along axis */ if(axis===0) return [...a,...b]; if(axis===1) return a.map((row,i)=>[...row,...b[i]]); }
248
+ export function reshape(tensor, rows, cols) {
249
+ let flat = tensor.data.flat(); // flatten dulu
250
+ if(flat.length < rows*cols) throw new Error("reshape size mismatch");
251
+ const out = Array.from({length: rows}, (_, i) =>
252
+ flat.slice(i*cols, i*cols + cols)
253
+ );
254
+ return out;
255
+ }
package/tests/proj1.js ADDED
@@ -0,0 +1,85 @@
1
+ // TEST JSTORCH WHOLE SYSTEMS AT ONCE
2
+ import { Tensor, Linear, Sequential, ReLU, Sigmoid, Tanh, LeakyReLU, GELU, Dropout, Conv2D, MSELoss, CrossEntropyLoss, Adam, SGD, saveModel, loadModel, flattenBatch, reshape, stack, concat, eye } from '../src/MainEngine.js';
3
+
4
+ // ---------------------- Linear Test ----------------------
5
+ console.log("=== Linear Test ===");
6
+ const lin = new Linear(3,2);
7
+ const linInput = [[1,2,3],[4,5,6]];
8
+ const linOut = lin.forward(linInput);
9
+ console.log("Linear forward:", linOut);
10
+ const linGrad = [[0.1,0.2],[0.3,0.4]];
11
+ const linBack = lin.backward(linGrad);
12
+ console.log("Linear backward gradInput:", linBack);
13
+
14
+ // ---------------------- Sequential + Activations Test ----------------------
15
+ console.log("\n=== Sequential + Activations Test ===");
16
+ const model = new Sequential([new Linear(2,2), new ReLU(), new Linear(2,1), new Sigmoid()]);
17
+ const seqInput = [[0.5,1.0],[1.5,2.0]];
18
+ const seqOut = model.forward(seqInput);
19
+ console.log("Sequential forward:", seqOut);
20
+ const seqGrad = [[0.1],[0.2]];
21
+ const seqBack = model.backward(seqGrad);
22
+ console.log("Sequential backward gradInput:", seqBack);
23
+
24
+ // ---------------------- Conv2D Test ----------------------
25
+ console.log("\n=== Conv2D Test ===");
26
+ const conv = new Conv2D(1,1,3);
27
+ const convInput = [[[ [1,2,3],[4,5,6],[7,8,9] ]]]; // batch=1, inC=1, HxW=3x3
28
+ const convOut = conv.forward(convInput);
29
+ console.log("Conv2D forward:", convOut);
30
+
31
+ // Conv2D backward test
32
+ const convGrad = [[[ [0.1,0.2,0.1],[0.2,0.3,0.2],[0.1,0.2,0.1] ]]];
33
+ const convBack = conv.backward(convGrad);
34
+ console.log("Conv2D backward gradInput:", convBack);
35
+
36
+ // ---------------------- Tensor & Broadcast Test ----------------------
37
+ console.log("\n=== Tensor & Broadcast Test ===");
38
+ const a = Tensor.random(2,3);
39
+ const b = Tensor.ones(2,3);
40
+ const sum = a.add(b);
41
+ console.log("Tensor add broadcast:", sum);
42
+
43
+ // ---------------------- Loss + Optimizer Test ----------------------
44
+ console.log("\n=== Loss + Optimizer Test ===");
45
+ const lossModel = new Sequential([new Linear(2,2)]);
46
+ const pred = lossModel.forward([[1,2]]);
47
+ const target = [[0,1]];
48
+ const ceLoss = new CrossEntropyLoss();
49
+ const lval = ceLoss.forward(pred,target);
50
+ console.log("CrossEntropyLoss value:", lval);
51
+
52
+ const gradLoss = ceLoss.backward();
53
+ lossModel.backward(gradLoss);
54
+
55
+ const opt = new Adam(lossModel.parameters());
56
+ opt.step();
57
+ console.log("Updated parameters after Adam:", lossModel.parameters());
58
+
59
+ // ---------------------- Dropout Test ----------------------
60
+ console.log("\n=== Dropout Test ===");
61
+ const drop = new Dropout(0.5);
62
+ const dropInput = [[1,2],[3,4]];
63
+ const dropOut = drop.forward(dropInput);
64
+ console.log("Dropout forward:", dropOut);
65
+ const dropBack = drop.backward([[0.1,0.2],[0.3,0.4]]);
66
+ console.log("Dropout backward:", dropBack);
67
+
68
+ // ---------------------- Save / Load Model Test ----------------------
69
+ console.log("\n=== Save / Load Model Test ===");
70
+ const modelSave = new Sequential([new Linear(2,2)]);
71
+ const json = saveModel(modelSave);
72
+ console.log("Saved model JSON:", json);
73
+ const modelLoad = new Sequential([new Linear(2,2)]);
74
+ loadModel(modelLoad,json);
75
+ console.log("Loaded model parameters:", modelLoad.parameters());
76
+
77
+ // ---------------------- Advanced Utils Test ----------------------
78
+ console.log("\n=== Advanced Utils Test ===");
79
+ const batch = [[[1,2],[3,4]],[[5,6],[7,8]]];
80
+ console.log("Flatten batch:", flattenBatch(batch));
81
+ console.log("Eye 3:", eye(3));
82
+ console.log("Reshape:", reshape({data:[[1,2,3,4]]},2,2));
83
+ console.log("Stack:", stack([Tensor.ones(2,2), Tensor.zeros(2,2)]));
84
+ console.log("Concat axis0:", concat([[1,2],[3,4]], [[5,6],[7,8]], 0));
85
+ console.log("Concat axis1:", concat([[1,2],[3,4]], [[5,6],[7,8]], 1));
package/tests/proj2.js ADDED
@@ -0,0 +1,129 @@
1
+ /**
2
+ * Mini-JSTorch Next Word Prediction Test (Self-contained Softmax)
3
+ * - Softmax function defined in this file
4
+ * - Beam search prediction
5
+ * - Large vocab & diverse sentences
6
+ * - Sequence length 2
7
+ * - Training loop 2000 epochs
8
+ */
9
+
10
+ import { Sequential, Linear, ReLU, CrossEntropyLoss, Adam } from "../src/MainEngine.js";
11
+
12
+ // ------------------------
13
+ // Softmax Function
14
+ // ------------------------
15
+ function softmaxVector(logits) {
16
+ const maxVal = Math.max(...logits);
17
+ const exps = logits.map(v => Math.exp(v - maxVal));
18
+ const sumExps = exps.reduce((a,b)=>a+b, 0);
19
+ return exps.map(v => v/sumExps);
20
+ }
21
+
22
+ // ------------------------
23
+ // Vocabulary & Tokenization
24
+ // ------------------------
25
+ const vocab = [
26
+ "i","you","he","she","we","they",
27
+ "like","love","hate","pizza","coding","game","movie","music","coffee","tea",
28
+ "run","walk","play","read","book","eat","drink","watch","listen","reads",
29
+ "drink","drinks"
30
+ ];
31
+
32
+ const word2idx = Object.fromEntries(vocab.map((w,i)=>[w,i]));
33
+ const idx2word = Object.fromEntries(vocab.map((w,i)=>[i,w]));
34
+
35
+ function oneHot(index, vocabSize) {
36
+ return Array.from({length:vocabSize}, (_,i)=>i===index?1:0);
37
+ }
38
+
39
+ // ------------------------
40
+ // Dataset (sequence length 2)
41
+ // ------------------------
42
+ const sentences = [
43
+ ["i","like","pizza"], ["i","like","coding"], ["i","love","music"],
44
+ ["i","read","book"], ["i","watch","movie"], ["you","like","pizza"],
45
+ ["you","love","coffee"], ["you","read","book"], ["you","play","game"],
46
+ ["he","hate","coffee"], ["he","like","music"], ["he","play","game"],
47
+ ["she","love","tea"], ["she","read","book"], ["she","watch","movie"],
48
+ ["we","play","game"], ["we","read","book"], ["we","love","coffee"],
49
+ ["they","eat","pizza"], ["they","play","game"], ["they","listen","music"],
50
+ ["i","drink","coffee"], ["he","drink","tea"], ["she","drink","coffee"],
51
+ ["we","drink","tea"], ["they","drink","coffee"], ["i","play","game"],
52
+ ["you","watch","movie"], ["he","read","book"], ["she","listen","music"],
53
+ ["he","reads","book"], ["you","read","book"], ["they","read","book"],
54
+ ["they","watch","movie"], ["we","listen","music"], ["we","watch","movie"],
55
+ ["we","reads","book"],["we","drinks","coffee"],["we","love","you"],
56
+ ["i","read","book"], ["i","love","you"]
57
+ ];
58
+
59
+ // Convert to input-output pairs
60
+ const X = [], Y = [];
61
+ const seqLength = 2;
62
+ for(const s of sentences){
63
+ for(let i=0;i<=s.length-seqLength;i++){
64
+ const inpSeq = s.slice(i,i+seqLength).map(w=>oneHot(word2idx[w],vocab.length)).flat();
65
+ const outWord = s[i+seqLength] ? oneHot(word2idx[s[i+seqLength]], vocab.length)
66
+ : oneHot(word2idx[s[i+seqLength-1]], vocab.length);
67
+ X.push(inpSeq);
68
+ Y.push(outWord);
69
+ }
70
+ }
71
+
72
+ // ------------------------
73
+ // Model Definition
74
+ // ------------------------
75
+ const model = new Sequential([
76
+ new Linear(vocab.length*seqLength, 128),
77
+ new ReLU(),
78
+ new Linear(128, vocab.length)
79
+ ]);
80
+
81
+ const lossFn = new CrossEntropyLoss();
82
+ const optimizer = new Adam(model.parameters(), 0.01);
83
+
84
+ // ------------------------
85
+ // Training Loop
86
+ // ------------------------
87
+ for(let epoch=1; epoch<=1600; epoch++){
88
+ const pred = model.forward(X);
89
+ const loss = lossFn.forward(pred, Y);
90
+ const grad = lossFn.backward();
91
+ model.backward(grad);
92
+ optimizer.step();
93
+
94
+ if(epoch % 500 === 0) console.log(`Epoch ${epoch}, Loss: ${loss.toFixed(4)}`);
95
+ }
96
+
97
+ // ------------------------
98
+ // Beam Search Prediction
99
+ // ------------------------
100
+ function beamSearch(inputWords, beamWidth=2, predLength=3){
101
+ let sequences = [{words:[...inputWords], score:1}];
102
+ for(let step=0; step<predLength; step++){
103
+ const allCandidates = [];
104
+ for(const seq of sequences){
105
+ const inp = seq.words.slice(-seqLength).map(w=>oneHot(word2idx[w],vocab.length)).flat();
106
+ const out = model.forward([inp])[0];
107
+ const probs = softmaxVector(out); // softmax applied here
108
+ const top = probs.map((v,i)=>[i,v]).sort((a,b)=>b[1]-a[1]).slice(0,beamWidth);
109
+ for(const [idx,score] of top){
110
+ allCandidates.push({words:[...seq.words, idx2word[idx]], score:seq.score*score});
111
+ }
112
+ }
113
+ sequences = allCandidates.sort((a,b)=>b.score-a.score).slice(0,beamWidth);
114
+ }
115
+ return sequences.map(s=>({sequence:s.words, score:s.score.toFixed(3)}));
116
+ }
117
+
118
+ // ------------------------
119
+ // Test Predictions
120
+ // ------------------------
121
+ const testInputs = [
122
+ ["i","like"], ["you","love"], ["they","play"], ["he","hate"], ["she","reads"]
123
+ ];
124
+
125
+ for(const inp of testInputs){
126
+ const results = beamSearch(inp, 2, 3); // beam width 2, predict next 3 words
127
+ console.log(`Input: ${inp.join(" ")}`);
128
+ results.forEach(r=>console.log(` Sequence: ${r.sequence.join(" ")}, Score: ${r.score}`));
129
+ }
package/tests/proj3.js ADDED
@@ -0,0 +1,76 @@
1
+ // JSTORCH TESTS TEMPLATE FILES
2
+ // CIRCLE CLASSIFICATION TESTS FILES
3
+ import { Sequential, Linear, ReLU, Sigmoid, CrossEntropyLoss, Adam } from '../src/MainEngine.js';
4
+
5
+ // === Generate Circle Dataset ===
6
+ function generateCircleData(n) {
7
+ const X = [], Y = [];
8
+ for (let i = 0; i < n; i++) {
9
+ const x = Math.random() * 2 - 1; // -1 to 1
10
+ const y = Math.random() * 2 - 1;
11
+ const label = (x*x + y*y < 0.5*0.5) ? [1,0] : [0,1]; // inside circle radius 0.5
12
+ X.push([x, y]);
13
+ Y.push(label);
14
+ }
15
+ return { X, Y };
16
+ }
17
+
18
+ const { X, Y } = generateCircleData(300);
19
+
20
+ // === Build Model (bigger hidden layers) ===
21
+ const model = new Sequential([
22
+ new Linear(2, 16),
23
+ new ReLU(),
24
+ new Linear(16, 8),
25
+ new ReLU(),
26
+ new Linear(8, 2),
27
+ new Sigmoid()
28
+ ]);
29
+
30
+ // Loss & Optimizer
31
+ const lossFn = new CrossEntropyLoss();
32
+ const optimizer = new Adam(model.parameters(), 0.01); // smaller LR for not causing the model get stuck.
33
+
34
+ // === Training ===
35
+ console.log("=== Circle Classification Training (Fixed) ===");
36
+ const start = Date.now();
37
+ for (let epoch = 1; epoch <= 2000; epoch++) {
38
+ const pred = model.forward(X);
39
+ const loss = lossFn.forward(pred, Y);
40
+ const gradLoss = lossFn.backward();
41
+ model.backward(gradLoss);
42
+ optimizer.step();
43
+
44
+ if (epoch % 500 === 0) {
45
+ console.log(`Epoch ${epoch}, Loss: ${loss.toFixed(4)}`);
46
+ }
47
+ }
48
+ console.log(`Training Time: ${((Date.now()-start)/1000).toFixed(3)}s`); // FOR real time monitoring while training if this module is literally lightweight and does not take a long time to train.
49
+
50
+ // === Predictions ===
51
+ console.log("\n=== Predictions ===");
52
+ const testInputs = [
53
+ [0,0],
54
+ [0.7,0.7],
55
+ [0.2,0.2],
56
+ [0.9,0.1],
57
+ [-0.5,-0.5],
58
+ [0.6,-0.2]
59
+ ];
60
+ testInputs.forEach(inp => {
61
+ const out = model.forward([inp])[0];
62
+ const predClass = out.indexOf(Math.max(...out));
63
+ console.log(
64
+ `Input: ${inp}, Pred: ${predClass}, Raw: ${out.map(v => v.toFixed(3))}`
65
+ );
66
+ });
67
+
68
+ /**
69
+ * === How to Run On Your VS Code Projects ===
70
+ * 1. Make sure Node.js (v20+ recommended) is installed on your system.
71
+ * 2. Open this file in VS Code (or any editor).
72
+ * 3. Open the terminal in the project root folder.
73
+ * 4. Run: node tests/proj3.js
74
+ *
75
+ * You should see the training logs and prediction outputs directly.
76
+ */
@@ -1,256 +0,0 @@
1
- // MINI JSTORCH ENGINE
2
- // ENGINE VER: 1.2.0
3
- // LICENSE: MIT (C) Rizal 2025
4
-
5
- // ---------------------- Utilities ----------------------
6
- function zeros(rows, cols) { return Array.from({length:rows},()=>Array(cols).fill(0)); }
7
- function ones(rows, cols) { return Array.from({length:rows},()=>Array(cols).fill(1)); }
8
- function randomMatrix(rows, cols, scale=0.1){ return Array.from({length:rows},()=>Array.from({length:cols},()=> (Math.random()*2-1)*scale)); }
9
- function transpose(matrix){ return matrix[0].map((_,i)=>matrix.map(row=>row[i])); }
10
- function addMatrices(a,b){ return a.map((row,i)=>row.map((v,j)=>v+(b[i] && b[i][j]!==undefined?b[i][j]:0))); }
11
- function dot(a,b){ const res=zeros(a.length,b[0].length); for(let i=0;i<a.length;i++) for(let j=0;j<b[0].length;j++) for(let k=0;k<a[0].length;k++) res[i][j]+=a[i][k]*b[k][j]; return res; }
12
- function softmax(x){ const m=Math.max(...x); const exps=x.map(v=>Math.exp(v-m)); const s=exps.reduce((a,b)=>a+b,0); return exps.map(v=>v/s); }
13
- function crossEntropy(pred,target){ const eps=1e-12; return -target.reduce((sum,t,i)=>sum+t*Math.log(pred[i]+eps),0); }
14
-
15
- // ---------------------- Tensor ----------------------
16
- export class Tensor {
17
- constructor(data){ this.data=data; this.grad=zeros(data.length,data[0].length); }
18
- shape(){ return [this.data.length,this.data[0].length]; }
19
- add(t){ return t instanceof Tensor?this.data.map((r,i)=>r.map((v,j)=>v+t.data[i][j])):this.data.map(r=>r.map(v=>v+t)); }
20
- sub(t){ return t instanceof Tensor?this.data.map((r,i)=>r.map((v,j)=>v-t.data[i][j])):this.data.map(r=>r.map(v=>v-t)); }
21
- mul(t){ return t instanceof Tensor?this.data.map((r,i)=>r.map((v,j)=>v*t.data[i][j])):this.data.map(r=>r.map(v=>v*t)); }
22
- matmul(t){ if(t instanceof Tensor) return dot(this.data,t.data); else throw new Error("matmul requires Tensor"); }
23
- transpose(){ return transpose(this.data); }
24
- flatten(){ return this.data.flat(); }
25
- static zeros(r,c){ return new Tensor(zeros(r,c)); }
26
- static ones(r,c){ return new Tensor(ones(r,c)); }
27
- static random(r,c,scale=0.1){ return new Tensor(randomMatrix(r,c,scale)); }
28
- }
29
-
30
- // ---------------------- Layers ----------------------
31
- export class Linear {
32
- constructor(inputDim,outputDim){
33
- this.W=randomMatrix(inputDim,outputDim);
34
- this.b=Array(outputDim).fill(0);
35
- this.gradW=zeros(inputDim,outputDim);
36
- this.gradb=Array(outputDim).fill(0);
37
- this.x=null;
38
- }
39
-
40
- forward(x){
41
- this.x=x;
42
- const out=dot(x,this.W);
43
- return out.map((row,i)=>row.map((v,j)=>v+this.b[j]));
44
- }
45
-
46
- backward(grad){
47
- for(let i=0;i<this.W.length;i++) for(let j=0;j<this.W[0].length;j++)
48
- this.gradW[i][j]=this.x.reduce((sum,row,k)=>sum+row[i]*grad[k][j],0);
49
- for(let j=0;j<this.b.length;j++)
50
- this.gradb[j]=grad.reduce((sum,row)=>sum+row[j],0);
51
-
52
- const gradInput=zeros(this.x.length,this.W.length);
53
- for(let i=0;i<this.x.length;i++)
54
- for(let j=0;j<this.W.length;j++)
55
- for(let k=0;k<this.W[0].length;k++)
56
- gradInput[i][j]+=grad[i][k]*this.W[j][k];
57
- return gradInput;
58
- }
59
-
60
- parameters(){ return [ {param:this.W,grad:this.gradW}, {param:[this.b],grad:[this.gradb]} ]; }
61
- }
62
-
63
- // ---------------------- Conv2D ----------------------
64
- export class Conv2D {
65
- constructor(inC,outC,kernel,stride=1,padding=0){
66
- this.inC=inC; this.outC=outC; this.kernel=kernel;
67
- this.stride=stride; this.padding=padding;
68
- this.W=Array(outC).fill(0).map(()=>Array(inC).fill(0).map(()=>randomMatrix(kernel,kernel)));
69
- this.gradW=Array(outC).fill(0).map(()=>Array(inC).fill(0).map(()=>zeros(kernel,kernel)));
70
- this.x=null;
71
- }
72
-
73
- pad2D(input,pad){
74
- return input.map(channel=>{
75
- const rows=channel.length+2*pad;
76
- const cols=channel[0].length+2*pad;
77
- const out=Array.from({length:rows},()=>Array(cols).fill(0));
78
- for(let i=0;i<channel.length;i++) for(let j=0;j<channel[0].length;j++) out[i+pad][j+pad]=channel[i][j];
79
- return out;
80
- });
81
- }
82
-
83
- conv2DSingle(input,kernel){
84
- const rows=input.length-kernel.length+1;
85
- const cols=input[0].length-kernel[0].length+1;
86
- const out=zeros(rows,cols);
87
- for(let i=0;i<rows;i++) for(let j=0;j<cols;j++)
88
- for(let ki=0;ki<kernel.length;ki++) for(let kj=0;kj<kernel[0].length;kj++)
89
- out[i][j]+=input[i+ki][j+kj]*kernel[ki][kj];
90
- return out;
91
- }
92
-
93
- forward(batch){
94
- this.x=batch;
95
- return batch.map(sample=>{
96
- const channelsOut=[];
97
- for(let oc=0;oc<this.outC;oc++){
98
- let outChan=zeros(sample[0].length,sample[0][0].length);
99
- for(let ic=0;ic<this.inC;ic++){
100
- let inputChan=sample[ic];
101
- if(this.padding>0) inputChan=this.pad2D([inputChan],this.padding)[0];
102
- const conv=this.conv2DSingle(inputChan,this.W[oc][ic]);
103
- outChan=addMatrices(outChan,conv);
104
- }
105
- channelsOut.push(outChan);
106
- }
107
- return channelsOut;
108
- });
109
- }
110
-
111
- backward(grad) {
112
- const batchSize = this.x.length;
113
- const gradInput = this.x.map(sample => sample.map(chan => zeros(chan.length, chan[0].length)));
114
- const gradW = this.W.map(oc => oc.map(ic => zeros(this.kernel,this.kernel)));
115
-
116
- for (let b = 0; b < batchSize; b++) {
117
- const xPadded = this.pad2D(this.x[b], this.padding);
118
- const gradInputPadded = xPadded.map(chan => zeros(chan.length, chan[0].length));
119
-
120
- for (let oc = 0; oc < this.outC; oc++) {
121
- for (let ic = 0; ic < this.inC; ic++) {
122
- const outGrad = grad[b][oc];
123
- const inChan = xPadded[ic];
124
-
125
- // Compute gradW
126
- for (let i = 0; i < this.kernel; i++) {
127
- for (let j = 0; j < this.kernel; j++) {
128
- let sum = 0;
129
- for (let y = 0; y < outGrad.length; y++) {
130
- for (let x = 0; x < outGrad[0].length; x++) {
131
- const inY = y * this.stride + i;
132
- const inX = x * this.stride + j;
133
- if (inY < inChan.length && inX < inChan[0].length) {
134
- sum += inChan[inY][inX] * outGrad[y][x];
135
- }
136
- }
137
- }
138
- gradW[oc][ic][i][j] += sum;
139
- }
140
- }
141
-
142
- // Compute gradInput
143
- const flippedKernel = this.W[oc][ic].map(row => [...row].reverse()).reverse();
144
- for (let y = 0; y < outGrad.length; y++) {
145
- for (let x = 0; x < outGrad[0].length; x++) {
146
- for (let i = 0; i < this.kernel; i++) {
147
- for (let j = 0; j < this.kernel; j++) {
148
- const inY = y * this.stride + i;
149
- const inX = x * this.stride + j;
150
- if (inY < gradInputPadded[ic].length && inX < gradInputPadded[ic][0].length) {
151
- gradInputPadded[ic][inY][inX] += flippedKernel[i][j] * outGrad[y][x];
152
- }
153
- }
154
- }
155
- }
156
- }
157
- }
158
- }
159
-
160
- // Remove padding from gradInput
161
- if (this.padding > 0) {
162
- for (let ic = 0; ic < this.inC; ic++) {
163
- const padded = gradInputPadded[ic];
164
- const cropped = padded.slice(this.padding, padded.length - this.padding)
165
- .map(row => row.slice(this.padding, row.length - this.padding));
166
- gradInput[b][ic] = cropped;
167
- }
168
- } else {
169
- for (let ic = 0; ic < this.inC; ic++) gradInput[b][ic] = gradInputPadded[ic];
170
- }
171
- }
172
-
173
- this.gradW = gradW;
174
- return gradInput;
175
- }
176
-
177
- parameters(){ return this.W.flatMap((w,oc)=>w.map((wc,ic)=>({param:wc,grad:this.gradW[oc][ic]}))); }
178
- }
179
-
180
- // ---------------------- Sequential ----------------------
181
- export class Sequential {
182
- constructor(layers=[]){ this.layers=layers; }
183
- forward(x){ return this.layers.reduce((acc,l)=>l.forward(acc), x); }
184
- backward(grad){ return this.layers.reduceRight((g,l)=>l.backward(g), grad); }
185
- parameters(){ return this.layers.flatMap(l=>l.parameters?l.parameters():[]); }
186
- }
187
-
188
- // ---------------------- Activations ----------------------
189
- export class ReLU{ constructor(){ this.out=null; } forward(x){ this.out=x.map(r=>r.map(v=>Math.max(0,v))); return this.out; } backward(grad){ return grad.map((r,i)=>r.map((v,j)=>v*(this.out[i][j]>0?1:0))); } }
190
- export class Sigmoid{ constructor(){ this.out=null; } forward(x){ const fn=v=>1/(1+Math.exp(-v)); this.out=x.map(r=>r.map(fn)); return this.out; } backward(grad){ return grad.map((r,i)=>r.map((v,j)=>v*this.out[i][j]*(1-this.out[i][j]))); } }
191
- export class Tanh{ constructor(){ this.out=null; } forward(x){ this.out=x.map(r=>r.map(v=>Math.tanh(v))); return this.out; } backward(grad){ return grad.map((r,i)=>r.map((v,j)=>v*(1-this.out[i][j]**2))); } }
192
- export class LeakyReLU{ constructor(alpha=0.01){ this.alpha=alpha; this.out=null; } forward(x){ this.out=x.map(r=>r.map(v=>v>0?v:v*this.alpha)); return this.out; } backward(grad){ return grad.map((r,i)=>r.map((v,j)=>v*(this.out[i][j]>0?1:this.alpha))); } }
193
- export class GELU{ constructor(){ this.out=null; } forward(x){ const fn=v=>0.5*v*(1+Math.tanh(Math.sqrt(2/Math.PI)*(v+0.044715*v**3))); this.out=x.map(r=>r.map(fn)); return this.out; } backward(grad){ return grad.map((r,i)=>r.map(v=>v*1)); } }
194
-
195
- // ---------------------- Dropout ----------------------
196
- export class Dropout{ constructor(p=0.5){ this.p=p; } forward(x){ return x.map(r=>r.map(v=>v*Math.random()>=this.p?v:0)); } backward(grad){ return grad.map(r=>r.map(v=>v*(1-this.p))); } }
197
-
198
- // ---------------------- Losses ----------------------
199
- export class MSELoss{ forward(pred,target){ this.pred=pred; this.target=target; const losses=pred.map((row,i)=>row.reduce((sum,v,j)=>sum+(v-target[i][j])**2,0)/row.length); return losses.reduce((a,b)=>a+b,0)/pred.length; } backward(){ return this.pred.map((row,i)=>row.map((v,j)=>2*(v-this.target[i][j])/row.length)); } }
200
- export class CrossEntropyLoss{ forward(pred,target){ this.pred=pred; this.target=target; const losses=pred.map((p,i)=>crossEntropy(softmax(p),target[i])); return losses.reduce((a,b)=>a+b,0)/pred.length; } backward(){ return this.pred.map((p,i)=>{ const s=softmax(p); return s.map((v,j)=>(v-this.target[i][j])/this.pred.length); }); } }
201
-
202
- // ---------------------- Optimizers ----------------------
203
- export class Adam{
204
- constructor(params,lr=0.001,b1=0.9,b2=0.999,eps=1e-8){
205
- this.params=params; this.lr=lr; this.beta1=b1; this.beta2=b2; this.eps=eps;
206
- this.m=params.map(p=>zeros(p.param.length,p.param[0].length||1));
207
- this.v=params.map(p=>zeros(p.param.length,p.param[0].length||1));
208
- this.t=0;
209
- }
210
- step(){
211
- this.t++;
212
- this.params.forEach((p,idx)=>{
213
- for(let i=0;i<p.param.length;i++)
214
- for(let j=0;j<(p.param[0].length||1);j++){
215
- const g=p.grad[i][j];
216
- this.m[idx][i][j]=this.beta1*this.m[idx][i][j]+(1-this.beta1)*g;
217
- this.v[idx][i][j]=this.beta2*this.v[idx][i][j]+(1-this.beta2)*g*g;
218
- const mHat=this.m[idx][i][j]/(1-Math.pow(this.beta1,this.t));
219
- const vHat=this.v[idx][i][j]/(1-Math.pow(this.beta2,this.t));
220
- p.param[i][j]-=this.lr*mHat/(Math.sqrt(vHat)+this.eps);
221
- }
222
- });
223
- }
224
- }
225
-
226
- export class SGD{ constructor(params,lr=0.01){ this.params=params; this.lr=lr; } step(){ this.params.forEach(p=>{ for(let i=0;i<p.param.length;i++) for(let j=0;j<(p.param[0].length||1);j++) p.param[i][j]-=this.lr*p.grad[i][j]; }); } }
227
-
228
- // ---------------------- Model Save/Load ----------------------
229
- export function saveModel(model){
230
- if(!(model instanceof Sequential)) throw new Error("saveModel supports only Sequential");
231
- const weights=model.layers.map(layer=>({weights:layer.W||null,biases:layer.b||null}));
232
- return JSON.stringify(weights);
233
- }
234
-
235
- export function loadModel(model,json){
236
- if(!(model instanceof Sequential)) throw new Error("loadModel supports only Sequential");
237
- const weights=JSON.parse(json);
238
- model.layers.forEach((layer,i)=>{
239
- if(layer.W && weights[i].weights) layer.W=weights[i].weights;
240
- if(layer.b && weights[i].biases) layer.b=weights[i].biases;
241
- });
242
- }
243
-
244
- // ---------------------- Advanced Utils ----------------------
245
- export function flattenBatch(batch){ return batch.flat(2); }
246
- export function stack(tensors){ return tensors.map(t=>t.data); }
247
- export function eye(n){ return Array.from({length:n},(_,i)=>Array.from({length:n},(_,j)=>i===j?1:0)); }
248
- export function concat(a,b,axis=0){ /* concat along axis */ if(axis===0) return [...a,...b]; if(axis===1) return a.map((row,i)=>[...row,...b[i]]); }
249
- export function reshape(tensor, rows, cols) {
250
- let flat = tensor.data.flat(); // flatten dulu
251
- if(flat.length < rows*cols) throw new Error("reshape size mismatch");
252
- const out = Array.from({length: rows}, (_, i) =>
253
- flat.slice(i*cols, i*cols + cols)
254
- );
255
- return out;
256
- }