mini-jstorch 1.3.0 β†’ 1.4.2

This diff shows the content of publicly available package versions as released to their public registry, and is provided for informational purposes only.
package/README.md CHANGED
@@ -1,50 +1,42 @@
- # Mini-JSTorch Major Update
+ # Mini-JSTorch
 
- # MAJOR UPDATE IS RELEASED NOW!! [got little late]
-
- ---
-
- ## 🙏 Big Thanks 🙏
- I genuinely appreciate everyone who has taken the time to install and try out this module. Every download isn’t just a number it’s a real person spending their time to explore, learn, or build something with this module!
-
- Reaching 400+ weekly downloads might sound small compared to giant frameworks, but for me it’s huge. It means this project is actually helping people out there, and that alone makes every late-night coding session worth it.
-
- Thank you for giving this module a chance. Your support and trust are what keep this project moving forward! 🚀
-
- ---
+ A lightweight JavaScript neural network library for rapid frontend AI experimentation on low-resource devices, inspired by PyTorch.
 
  ## Overview
 
- Mini-JSTorch is a lightweight, high-performance JavaScript library for building neural networks that runs efficiently in both frontend and backend environments, including low-end devices. The library enables experimentation and learning in AI without compromising stability, accuracy, or training reliability.
+ Mini-JSTorch is a high-performance, minimalist JavaScript library for building neural networks. It runs efficiently in both frontend and backend environments, including low-end devices. The library enables quick experimentation and learning in AI without compromising stability, accuracy, or training reliability.
 
- This release, **version 1.2.3**, is a **major update** that introduces full feature coverage, including convolutional layers, advanced activations, dropout, tensor broadcasting, and enhanced utilities. The engine remains pure JavaScript, lightweight, and compatible with both Node.js and browser environments.
+ This release, **version 1.4.2**, introduces **learning rate schedulers**, improved testing/demo templates, and other minor enhancements.
 
  ---
 
- ## Major Updates in 1.2.3
-
- - Full **Conv2D support** with forward and backward operations.
- - **Tensor operations** now support broadcasting and reshaping.
- - Added new activations: `LeakyReLU`, `GELU`.
- - Optimizers: `Adam` and `SGD` fully integrated with gradient updates.
- - Dropout layer added for regularization.
- - Advanced utilities for tensor manipulation: `flatten`, `stack`, `concat`, `eye`, `reshape`.
- - End-to-end training and prediction workflow now fully tested.
- - Save and load model functionality included for seamless persistence.
- - Optimized performance for both frontend and backend usage.
- - Maintained backward compatibility for previous core layers and activations.
- - folder tests files template that you can use it before make your models.
+ ## Feature Highlights
+
+ - **Learning Rate Schedulers:** New `StepLR` and `LambdaLR` for dynamic optimizer learning rate adjustment.
+ - **Full Conv2D support:** Forward and backward operations for convolutional layers.
+ - **Tensor operations:** Broadcasting, reshaping, and reduction utilities.
+ - **Advanced Activations:** Includes `LeakyReLU`, `GELU`, `Mish`, `SiLU`, `ELU`, and more.
+ - **Optimizers:** `Adam` and `SGD` with gradient updates.
+ - **Dropout Layer:** For regularization during training.
+ - **BatchNorm2D:** For stable training in convolutional models.
+ - **Tensor Manipulation:** Utilities like `flatten`, `stack`, `concat`, `eye`, `reshape`.
+ - **Model Save & Load:** Easy persistence and restore of models.
+ - **Test/Demo Templates:** The `tests/` folder provides ready-to-run examples for model building and feature usage.
+ - **Performance Optimized:** Suitable for both frontend and backend usage.
+ - **Backward Compatibility:** Maintained for core layers and activations.
 
  ---
 
- ## Features
+ ## Core Features
 
- - **Core Layers**: Linear, Conv2D
- - **Activations**: ReLU, Sigmoid, Tanh, LeakyReLU, GELU
- - **Loss Functions**: MSELoss, CrossEntropyLoss
- - **Optimizers**: Adam, SGD
- - **Utilities**: zeros, randomMatrix, softmax, crossEntropy, dot, addMatrices, reshape, stack, flatten, eye, concat
- - **Model Container**: Sequential (for stacking layers with forward/backward passes)
+ - **Layers:** Linear, Conv2D
+ - **Activations:** ReLU, Sigmoid, Tanh, LeakyReLU, GELU, Mish, SiLU, ELU
+ - **Loss Functions:** MSELoss, CrossEntropyLoss
+ - **Optimizers:** Adam, SGD
+ - **Schedulers:** StepLR, LambdaLR
+ - **Regularization:** Dropout, BatchNorm2D
+ - **Utilities:** zeros, randomMatrix, softmax, crossEntropy, dot, addMatrices, reshape, stack, flatten, eye, concat
+ - **Model Container:** Sequential (for stacking layers with forward/backward passes)
 
  ---
 
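The newer activations listed above plug into `Sequential` exactly like the original ones. A minimal sketch (layer sizes are arbitrary; importing from the package root works because `index.js` re-exports everything from `src/MainEngine.js`, as shown further down in this diff):

```javascript
import { Sequential, Linear, Mish, SiLU } from 'mini-jstorch';

// Activations are layers: they can be mixed freely inside Sequential,
// and each operates on plain 2D arrays (one row per sample).
const net = new Sequential([
  new Linear(2, 4),
  new Mish(),
  new Linear(4, 2),
  new SiLU()
]);

console.log(net.forward([[0.5, -1.0]])); // one sample with two features
```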
@@ -60,7 +52,7 @@ npm install mini-jstorch
  ## Quick Start Example
 
  ```javascript
- import { Sequential, Linear, ReLU, Sigmoid, CrossEntropyLoss, Adam } from './src/MainEngine.js';
+ import { Sequential, Linear, ReLU, Sigmoid, CrossEntropyLoss, Adam, StepLR } from 'mini-jstorch';
 
  // Build model
  const model = new Sequential([
@@ -81,6 +73,7 @@ const Y = [
  // Loss & optimizer
  const lossFn = new CrossEntropyLoss();
  const optimizer = new Adam(model.parameters(), 0.1);
+ const scheduler = new StepLR(optimizer, 20, 0.5); // Halve LR every 20 epochs
 
  // Training loop
  for (let epoch = 1; epoch <= 100; epoch++) {
@@ -89,7 +82,8 @@ for (let epoch = 1; epoch <= 100; epoch++) {
  const gradLoss = lossFn.backward();
  model.backward(gradLoss);
  optimizer.step();
- if (epoch % 20 === 0) console.log(`Epoch ${epoch}, Loss: ${loss.toFixed(4)}`);
+ scheduler.step();
+ if (epoch % 20 === 0) console.log(`Epoch ${epoch}, Loss: ${loss.toFixed(4)}, LR: ${optimizer.lr.toFixed(4)}`);
  }
 
  // Prediction
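`LambdaLR` is the other scheduler shipped in this release; it recomputes the learning rate from the base LR and an arbitrary function of the epoch. A sketch of swapping it into the loop above (the decay function is just an example):

```javascript
import { LambdaLR } from 'mini-jstorch';

// lr = base_lr * lambda(epoch); here the LR decays as 1 / (1 + 0.05 * epoch).
const scheduler = new LambdaLR(optimizer, epoch => 1 / (1 + 0.05 * epoch));
// Call scheduler.step() once per epoch, exactly as with StepLR above.
```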
@@ -105,7 +99,7 @@ predTest.forEach((p,i) => {
  ## Save & Load Models
 
  ```javascript
- import { saveModel, loadModel } from 'mini-jstorch';
+ import { saveModel, loadModel, Sequential } from 'mini-jstorch';
 
  const json = saveModel(model);
  const model2 = new Sequential([...]); // same architecture
@@ -114,7 +108,22 @@ loadModel(model2, json);
 
  ---
 
+ ## Demos & Testing
+
+ Check the `tests/` directory for ready-to-run demos:
+ - **tests/MakeModel.js:** Build and run a simple neural network.
+ - **tests/scheduler.js:** Experiment with learning rate schedulers.
+ - Add your own scripts for quick prototyping!
+
+ ```bash
+ node tests/MakeModel.js
+ node tests/scheduler.js
+ ```
+
+ ---
+
  ## Intended Use Cases
+
  - Rapid prototyping of neural networks in frontend and backend.
  - Learning and teaching foundational neural network concepts.
  - Experimentation on low-end devices or mobile browsers.
@@ -125,12 +134,3 @@ loadModel(model2, json);
  # License
 
  **MIT © 2025 Rizal**
-
- ---
-
- ## Facts
-
- - **This module is implemented entirely in pure JavaScript.**
- - **The `Dummy` folder contains modules used for development, testing, and debugging before integration into the main engine.**
- - **files startup.cpu is actually an some random files lol.**
- - **This module was created by a `single` developer.**
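Since `loadModel` accepts exactly what `saveModel` returns (see the Save & Load hunk above), persisting a model is plain I/O. A Node-side sketch; the file name is arbitrary, and the `JSON.stringify`/`JSON.parse` round-trip preserves whatever type `saveModel` produces:

```javascript
import { writeFileSync, readFileSync } from 'node:fs';
import { saveModel, loadModel, Sequential, Linear } from 'mini-jstorch';

const model = new Sequential([new Linear(2, 2)]);

// Persist the snapshot (works whether saveModel returns a string or an object).
writeFileSync('model.json', JSON.stringify(saveModel(model)));

// Restore into a freshly built model with the same architecture.
const restored = new Sequential([new Linear(2, 2)]);
loadModel(restored, JSON.parse(readFileSync('model.json', 'utf8')));
```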
package/index.js CHANGED
@@ -1,6 +1,2 @@
- // Entry point of the library, export main classes and functions [DEPRECATED]
- export { Seq } from './models/seq.js';
- export { Dense } from './layers/dense.js';
- export * as act from './act/linear.js';
- export { SGD } from './optim/sgd.js';
- export { train } from './train/loop.js';
+ // package root
+ export * from "./src/MainEngine.js";
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "mini-jstorch",
-   "version": "1.3.0",
+   "version": "1.4.2",
    "type": "module",
    "description": "A lightweight JavaScript neural network library for rapid frontend AI experimentation on low-resource devices Inspired by PyTorch.",
    "main": "index.js",
package/src/MainEngine.js CHANGED
@@ -1,5 +1,7 @@
- // MINI JSTORCH ENGINE - MAJOR ULTRA OPTIMIZED 1.2.3
- // LICENSE: MIT (C) Rizal 2025
+
+ // OFFICIAL MINI-JSTORCH ENGINE
+ // LICENSED UNDER MIT LICENSE
+ // MIT (C) Rizal 2025
 
  // ---------------------- Utilities ----------------------
  function zeros(rows, cols) { return Array.from({length:rows},()=>Array(cols).fill(0)); }
@@ -222,8 +224,350 @@ export class Adam{
  }
  }
 
+ // ---------------------- Learning Rate Schedulers ----------------------
+ export class StepLR {
+   constructor(optimizer, step_size, gamma=1.0) {
+     this.optimizer = optimizer;
+     this.step_size = step_size;
+     this.gamma = gamma;
+     this.last_epoch = 0;
+     this.base_lr = optimizer.lr;
+   }
+
+   step() {
+     this.last_epoch += 1;
+     if (this.last_epoch % this.step_size === 0) {
+       this.optimizer.lr *= this.gamma;
+     }
+   }
+
+   get_lr() {
+     return this.optimizer.lr;
+   }
+ }
+
+ export class LambdaLR {
+   constructor(optimizer, lr_lambda) {
+     this.optimizer = optimizer;
+     this.lr_lambda = lr_lambda;
+     this.last_epoch = 0;
+     this.base_lr = optimizer.lr;
+   }
+
+   step() {
+     this.last_epoch += 1;
+     this.optimizer.lr = this.base_lr * this.lr_lambda(this.last_epoch);
+   }
+
+   get_lr() {
+     return this.optimizer.lr;
+   }
+ }
+
+ // ---------------------- ELU Activation ----------------------
+ export class ELU {
+   constructor(alpha=1.0) {
+     this.alpha = alpha;
+     this.out = null;
+   }
+
+   forward(x) {
+     this.out = x.map(row =>
+       row.map(v => v > 0 ? v : this.alpha * (Math.exp(v) - 1))
+     );
+     return this.out;
+   }
+
+   backward(grad) {
+     // ELU'(x) is 1 for x > 0 and alpha*e^x for x <= 0; since the cached output
+     // equals alpha*(e^x - 1) on that branch, alpha*e^x is recoverable as out + alpha.
+     return grad.map((row, i) =>
+       row.map((v, j) =>
+         v * (this.out[i][j] > 0 ? 1 : this.out[i][j] + this.alpha)
+       )
+     );
+   }
+ }
+
+ // ---------------------- Mish Activation ----------------------
+ export class Mish {
+   constructor() {
+     this.x = null;
+   }
+
+   forward(x) {
+     this.x = x;
+     return x.map(row =>
+       row.map(v => {
+         // Mish(x) = x * tanh(softplus(x)) = x * tanh(ln(1 + e^x))
+         const softplus = Math.log(1 + Math.exp(v));
+         return v * Math.tanh(softplus);
+       })
+     );
+   }
+
+   backward(grad) {
+     return grad.map((row, i) =>
+       row.map((v, j) => {
+         const x_val = this.x[i][j];
+
+         // Closed-form gradient of Mish:
+         // Mish'(x) = e^x * (4(x+1) + 4e^2x + e^3x + e^x(4x+6)) / (2e^x + e^2x + 2)^2
+
+         const exp_x = Math.exp(x_val);
+         const exp_2x = Math.exp(2 * x_val);
+         const exp_3x = Math.exp(3 * x_val);
+
+         const numerator = 4 * (x_val + 1) + 4 * exp_2x + exp_3x + exp_x * (4 * x_val + 6);
+         const denominator = Math.pow(2 * exp_x + exp_2x + 2, 2);
+
+         const mish_grad = exp_x * (numerator / denominator);
+         return v * mish_grad;
+       })
+     );
+   }
+ }
+
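+ // Sanity-check sketch (illustrative, not engine code): the closed form above can
+ // be verified against a finite difference at any point, e.g.
+ //   const m = new Mish(); m.forward([[0.5]]);
+ //   const analytic = m.backward([[1]])[0][0];
+ //   const f = v => v * Math.tanh(Math.log(1 + Math.exp(v))), h = 1e-5;
+ //   const numeric = (f(0.5 + h) - f(0.5 - h)) / (2 * h); // should match analytic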
+ // ---------------------- SiLU Activation ----------------------
+ export class SiLU {
+   constructor() {
+     this.x = null;
+   }
+
+   forward(x) {
+     this.x = x;
+     return x.map(row =>
+       row.map(v => v / (1 + Math.exp(-v))) // x * sigmoid(x)
+     );
+   }
+
+   backward(grad) {
+     return grad.map((row, i) =>
+       row.map((v, j) => {
+         const x_val = this.x[i][j];
+         const sigmoid = 1 / (1 + Math.exp(-x_val));
+         return v * (sigmoid * (1 + x_val * (1 - sigmoid)));
+       })
+     );
+   }
+ }
+
  export class SGD{ constructor(params,lr=0.01){ this.params=params; this.lr=lr; } step(){ this.params.forEach(p=>{ for(let i=0;i<p.param.length;i++) for(let j=0;j<(p.param[0].length||1);j++) p.param[i][j]-=this.lr*p.grad[i][j]; }); } }
 
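+ // Note on the { param, grad } convention used by SGD and Adam above: each entry
+ // holds equally-shaped matrices, updated in place. A tiny illustrative check:
+ //   const p = { param: [[1, 2]], grad: [[0.1, 0.2]] };
+ //   new SGD([p], 0.5).step(); // p.param is now [[0.95, 1.9]]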
+ // ---------------------- BatchNorm2D ----------------------
+ export class BatchNorm2d {
+   constructor(numFeatures, eps=1e-5, momentum=0.1, affine=true) {
+     this.numFeatures = numFeatures;
+     this.eps = eps;
+     this.momentum = momentum;
+     this.affine = affine;
+
+     // Parameters
+     if (affine) {
+       this.weight = Array(numFeatures).fill(1);
+       this.bias = Array(numFeatures).fill(0);
+       this.gradWeight = Array(numFeatures).fill(0);
+       this.gradBias = Array(numFeatures).fill(0);
+     }
+
+     // Running statistics
+     this.runningMean = Array(numFeatures).fill(0);
+     this.runningVar = Array(numFeatures).fill(1);
+
+     // Training state
+     this.training = true;
+     this.x = null;
+     this.xCentered = null;
+     this.std = null;
+   }
+
+   forward(x) {
+     // x shape: [batch, channels, height, width]
+     this.x = x;
+     const batchSize = x.length;
+     const channels = x[0].length;
+
+     if (this.training) {
+       // Calculate mean per channel
+       const means = Array(channels).fill(0);
+       for (let b = 0; b < batchSize; b++) {
+         for (let c = 0; c < channels; c++) {
+           const channelData = x[b][c];
+           let sum = 0;
+           for (let i = 0; i < channelData.length; i++) {
+             for (let j = 0; j < channelData[0].length; j++) {
+               sum += channelData[i][j];
+             }
+           }
+           means[c] += sum / (channelData.length * channelData[0].length);
+         }
+       }
+       means.forEach((_, c) => means[c] /= batchSize);
+
+       // Calculate variance per channel
+       const variances = Array(channels).fill(0);
+       for (let b = 0; b < batchSize; b++) {
+         for (let c = 0; c < channels; c++) {
+           const channelData = x[b][c];
+           let sum = 0;
+           for (let i = 0; i < channelData.length; i++) {
+             for (let j = 0; j < channelData[0].length; j++) {
+               sum += Math.pow(channelData[i][j] - means[c], 2);
+             }
+           }
+           variances[c] += sum / (channelData.length * channelData[0].length);
+         }
+       }
+       variances.forEach((_, c) => variances[c] /= batchSize);
+
+       // Update running statistics
+       for (let c = 0; c < channels; c++) {
+         this.runningMean[c] = this.momentum * means[c] + (1 - this.momentum) * this.runningMean[c];
+         this.runningVar[c] = this.momentum * variances[c] + (1 - this.momentum) * this.runningVar[c];
+       }
+
+       // Normalize
+       this.xCentered = [];
+       this.std = Array(channels).fill(0).map(() => []);
+
+       const output = [];
+       for (let b = 0; b < batchSize; b++) {
+         const batchOut = [];
+         const batchCentered = []; // per-sample centered values, cached for backward
+         for (let c = 0; c < channels; c++) {
+           const channelData = x[b][c];
+           const channelOut = zeros(channelData.length, channelData[0].length);
+           const channelCentered = zeros(channelData.length, channelData[0].length);
+           const channelStd = Math.sqrt(variances[c] + this.eps);
+           this.std[c].push(channelStd);
+
+           for (let i = 0; i < channelData.length; i++) {
+             for (let j = 0; j < channelData[0].length; j++) {
+               channelCentered[i][j] = channelData[i][j] - means[c];
+               channelOut[i][j] = channelCentered[i][j] / channelStd;
+
+               // Apply affine transformation if enabled
+               if (this.affine) {
+                 channelOut[i][j] = channelOut[i][j] * this.weight[c] + this.bias[c];
+               }
+             }
+           }
+
+           batchOut.push(channelOut);
+           batchCentered.push(channelCentered);
+         }
+         output.push(batchOut);
+         this.xCentered.push(batchCentered); // indexed [b][c], matching backward()
+       }
+
+       return output;
+     } else {
+       // Inference mode - use running statistics
+       const output = [];
+       for (let b = 0; b < batchSize; b++) {
+         const batchOut = [];
+         for (let c = 0; c < channels; c++) {
+           const channelData = x[b][c];
+           const channelOut = zeros(channelData.length, channelData[0].length);
+           const channelStd = Math.sqrt(this.runningVar[c] + this.eps);
+
+           for (let i = 0; i < channelData.length; i++) {
+             for (let j = 0; j < channelData[0].length; j++) {
+               channelOut[i][j] = (channelData[i][j] - this.runningMean[c]) / channelStd;
+
+               // Apply affine transformation if enabled
+               if (this.affine) {
+                 channelOut[i][j] = channelOut[i][j] * this.weight[c] + this.bias[c];
+               }
+             }
+           }
+
+           batchOut.push(channelOut);
+         }
+         output.push(batchOut);
+       }
+
+       return output;
+     }
+   }
+
+   backward(gradOutput) {
+     if (!this.training) {
+       throw new Error("Backward should only be called in training mode");
+     }
+
+     const batchSize = gradOutput.length;
+     const channels = gradOutput[0].length;
+
+     // Initialize gradients
+     const gradInput = this.x.map(batch =>
+       batch.map(channel =>
+         zeros(channel.length, channel[0].length)
+       )
+     );
+
+     if (this.affine) {
+       this.gradWeight.fill(0);
+       this.gradBias.fill(0);
+     }
+
+     for (let c = 0; c < channels; c++) {
+       let sumGradWeight = 0;
+       let sumGradBias = 0;
+
+       for (let b = 0; b < batchSize; b++) {
+         const channelGrad = gradOutput[b][c];
+
+         // Calculate gradients for bias and weight
+         if (this.affine) {
+           for (let i = 0; i < channelGrad.length; i++) {
+             for (let j = 0; j < channelGrad[0].length; j++) {
+               sumGradBias += channelGrad[i][j];
+               sumGradWeight += channelGrad[i][j] * (this.xCentered[b][c][i][j] / this.std[c][b]);
+             }
+           }
+         }
+
+         // Calculate gradient for input. Note: this is a simplified gradient that
+         // treats the batch mean and variance as constants, so only the affine
+         // weight and the 1/std factor are propagated.
+         const stdInv = 1 / this.std[c][b];
+
+         for (let i = 0; i < channelGrad.length; i++) {
+           for (let j = 0; j < channelGrad[0].length; j++) {
+             let grad = channelGrad[i][j];
+
+             if (this.affine) {
+               grad *= this.weight[c];
+             }
+
+             grad *= stdInv;
+             gradInput[b][c][i][j] = grad;
+           }
+         }
+       }
+
+       if (this.affine) {
+         this.gradWeight[c] = sumGradWeight / batchSize;
+         this.gradBias[c] = sumGradBias / batchSize;
+       }
+     }
+
+     return gradInput;
+   }
+
+   parameters() {
+     if (!this.affine) return [];
+     return [
+       { param: [this.weight], grad: [this.gradWeight] },
+       { param: [this.bias], grad: [this.gradBias] }
+     ];
+   }
+
+   train() { this.training = true; }
+   eval() { this.training = false; }
+ }
+
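+ // Usage sketch (illustrative): BatchNorm2d consumes [batch][channel][h][w] arrays,
+ // as documented in forward() above.
+ //   const bn = new BatchNorm2d(1);
+ //   const y = bn.forward([[[[1, 2], [3, 4]]]]); // training mode: batch statistics
+ //   bn.eval();                                  // switch to running statistics
+ //   const z = bn.forward([[[[1, 2], [3, 4]]]]);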
  // ---------------------- Model Save/Load ----------------------
  export function saveModel(model){
  if(!(model instanceof Sequential)) throw new Error("saveModel supports only Sequential");
@@ -246,10 +590,10 @@ export function stack(tensors){ return tensors.map(t=>t.data); }
  export function eye(n){ return Array.from({length:n},(_,i)=>Array.from({length:n},(_,j)=>i===j?1:0)); }
  export function concat(a,b,axis=0){ /* concat along axis */ if(axis===0) return [...a,...b]; if(axis===1) return a.map((row,i)=>[...row,...b[i]]); }
  export function reshape(tensor, rows, cols) {
- let flat = tensor.data.flat(); // flatten dulu
+ let flat = tensor.data.flat(); // flatten
  if(flat.length < rows*cols) throw new Error("reshape size mismatch");
  const out = Array.from({length: rows}, (_, i) =>
    flat.slice(i*cols, i*cols + cols)
  );
  return out;
- }
+ }
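The hunk above also shows the shape conventions of the tensor utilities: `reshape` expects a `{ data }` wrapper, while `eye` and `concat` operate on raw nested arrays. A quick sketch (values arbitrary):

```javascript
import { reshape, eye, concat } from 'mini-jstorch';

console.log(reshape({ data: [[1, 2, 3, 4]] }, 2, 2)); // [[1, 2], [3, 4]]
console.log(eye(2));                                  // [[1, 0], [0, 1]]
console.log(concat([[1, 2]], [[3, 4]], 0));           // along rows: [[1, 2], [3, 4]]
console.log(concat([[1, 2]], [[3, 4]], 1));           // along columns: [[1, 2, 3, 4]]
```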
package/src/startup.cpu CHANGED
@@ -1,12 +1,15 @@
  // you can delete this files this files are not important for the engine runtime.
 
  e=run=[cpu[runtime]]
- e.set.runtime('beta')
- e.rnt()
- e.set()
- e.register('vanilla',expe='Experiments.js',main='MainEngine.js')
- l=e.prog('asm')
+ devices=e.getdata[devices[5]]
+ env.set.runtime('beta')
+ env.rnt()
+ env.set()
+ env.register('vanilla',expe='Experiments.js',main='MainEngine.js',tgver=latest)
+ resources=e.find(tag='resources')
+ resources.ld(env)
+ l=env.prog('asm')
  r=l.gv=[0xCAFEBABE]
- eng=e.load(register,r,'vanilla')
- eng.boot(e,r,'dp')
- eng.load()
+ eng=env.load(register,r,'vanilla')
+ eng.boot(env,r,'dp')
+ eng.load(resources,runtime,devices)
package/tests/MakeModel.js ADDED
@@ -0,0 +1,38 @@
+ // Example: Build and run a simple neural network model using mini-jstorch
+
+ import { Sequential, Linear, ReLU, MSELoss, SGD, Tensor } from "../src/MainEngine.js";
+
+ // Create dummy input and target data
+ const input = new Tensor([[0.5, -1.0], [1.5, 2.0]]); // shape: [2,2]
+ const target = new Tensor([[1.0, 0.0], [0.0, 1.0]]); // shape: [2,2]
+
+ // Build a simple model: Linear -> ReLU -> Linear
+ const model = new Sequential([
+   new Linear(2, 4),
+   new ReLU(),
+   new Linear(4, 2)
+ ]);
+
+ const criterion = new MSELoss();
+ const optimizer = new SGD(model.parameters(), 0.01);
+
+ // Forward pass
+ const output = model.forward(input.data);
+ console.log("Model output:", output);
+
+ // Compute loss
+ const loss = criterion.forward(output, target.data);
+ console.log("Loss:", loss);
+
+ // Backward pass
+ const grad = criterion.backward();
+ model.backward(grad);
+
+ // Optimizer step
+ optimizer.step();
+ console.log("Parameters updated!");
+
+ // Run again to show change
+ const output2 = model.forward(input.data);
+ const loss2 = criterion.forward(output2, target.data);
+ console.log("New Loss:", loss2);
package/tests/scheduler.js ADDED
@@ -0,0 +1,23 @@
+ // Example: Test learning rate schedulers (StepLR and LambdaLR) with mini-jstorch optimizers
+
+ import { SGD, StepLR, LambdaLR, Tensor } from "../src/MainEngine.js";
+
+ const param = { param: [[1, 2], [3, 4]], grad: [[0, 0], [0, 0]] };
+ const optimizer = new SGD([param], 0.1);
+
+ // --- Test StepLR ---
+ console.log("Testing StepLR...");
+ const stepScheduler = new StepLR(optimizer, 3, 0.5);
+ for (let epoch = 1; epoch <= 10; epoch++) {
+   stepScheduler.step();
+   console.log(`Epoch ${epoch}: LR = ${optimizer.lr.toFixed(4)}`);
+ }
+
+ // --- Test LambdaLR ---
+ console.log("\nTesting LambdaLR...");
+ optimizer.lr = 0.1; // Reset LR
+ const lambdaScheduler = new LambdaLR(optimizer, epoch => 1.0 / (1 + epoch));
+ for (let epoch = 1; epoch <= 5; epoch++) {
+   lambdaScheduler.step();
+   console.log(`Epoch ${epoch}: LR = ${optimizer.lr.toFixed(4)}`);
+ }
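Worked by hand, the expected trace: StepLR starts at 0.1000 and halves after epochs 3, 6, and 9, so it prints 0.1000, 0.1000, 0.0500, 0.0500, 0.0500, 0.0250, 0.0250, 0.0250, 0.0125, 0.0125. LambdaLR rescales its base LR of 0.1 by 1/(1+epoch), printing 0.0500, 0.0333, 0.0250, 0.0200, 0.0167.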
package/MODULE.md DELETED
@@ -1,46 +0,0 @@
- ## MODULE STATS ##
-
- New Files that automatically will All notable changes status state to *Mini-JSTorch* will be documented in this file.
-
- # MSG
-
- btw. This files is actually would be modified and note all changes system state automatically without manual i type with myself.
-
- ---
-
- **OFFICIAL RELEASE:** 2025-Monday-August-23 time: 2:22 AM (estimated time release)
- **VERSION:** 1.3.0
- **LICENSE:** MIT © 2025
- **AUTHOR:** Rizal
- **MODULE NAME:** mini-jstorch
- **MODULE DESC:** A lightweight JavaScript neural network library for rapid frontend AI experimentation on low-resource devices Inspired by PyTorch.
- **MODULE TYPE:** module
- **ENGINE VERSIONS:** 1.2.0
- **UPDATE TITLE:** `MAJOR` update.
- **ADDED FILES/FOLDER:** {
-   "tests" //folder
-   "tests/proj1.js" //files
-   "tests/tests.js" //files [npmignore detected]
-   "tests/proj2.js" //files
-   "tests/proj3.js" //files
-   "N/A" //N/A [N/A]
- }
-
- ---
-
- **MODIFIED FILES:** {
-   "src" //folder
-   "src/startup.cpu" //files
-   "src/MainEngine.js" //files
-   "tests/tests.js" //files [npmignore detected]
-   "tests" //folder
-   "src/Dummy/exp.js" //files [npmignore detected]
-   "package.json" //files
-   "src/EngState.json" //files [npmignore detected]
-   "src/state.txt" //files [npmignore detected]
-   "README.md" //files
-   ".npmignore" //files [npmignore detected]
-   "N/A" //N/A [N/A]
- }
-
- ---
package/tests/proj1.js DELETED
@@ -1,85 +0,0 @@
- // TEST JSTORCH WHOLE SYSTEMS AT ONCE
- import { Tensor, Linear, Sequential, ReLU, Sigmoid, Tanh, LeakyReLU, GELU, Dropout, Conv2D, MSELoss, CrossEntropyLoss, Adam, SGD, saveModel, loadModel, flattenBatch, reshape, stack, concat, eye } from '../src/MainEngine.js';
-
- // ---------------------- Linear Test ----------------------
- console.log("=== Linear Test ===");
- const lin = new Linear(3,2);
- const linInput = [[1,2,3],[4,5,6]];
- const linOut = lin.forward(linInput);
- console.log("Linear forward:", linOut);
- const linGrad = [[0.1,0.2],[0.3,0.4]];
- const linBack = lin.backward(linGrad);
- console.log("Linear backward gradInput:", linBack);
-
- // ---------------------- Sequential + Activations Test ----------------------
- console.log("\n=== Sequential + Activations Test ===");
- const model = new Sequential([new Linear(2,2), new ReLU(), new Linear(2,1), new Sigmoid()]);
- const seqInput = [[0.5,1.0],[1.5,2.0]];
- const seqOut = model.forward(seqInput);
- console.log("Sequential forward:", seqOut);
- const seqGrad = [[0.1],[0.2]];
- const seqBack = model.backward(seqGrad);
- console.log("Sequential backward gradInput:", seqBack);
-
- // ---------------------- Conv2D Test ----------------------
- console.log("\n=== Conv2D Test ===");
- const conv = new Conv2D(1,1,3);
- const convInput = [[[ [1,2,3],[4,5,6],[7,8,9] ]]]; // batch=1, inC=1, HxW=3x3
- const convOut = conv.forward(convInput);
- console.log("Conv2D forward:", convOut);
-
- // Conv2D backward test
- const convGrad = [[[ [0.1,0.2,0.1],[0.2,0.3,0.2],[0.1,0.2,0.1] ]]];
- const convBack = conv.backward(convGrad);
- console.log("Conv2D backward gradInput:", convBack);
-
- // ---------------------- Tensor & Broadcast Test ----------------------
- console.log("\n=== Tensor & Broadcast Test ===");
- const a = Tensor.random(2,3);
- const b = Tensor.ones(2,3);
- const sum = a.add(b);
- console.log("Tensor add broadcast:", sum);
-
- // ---------------------- Loss + Optimizer Test ----------------------
- console.log("\n=== Loss + Optimizer Test ===");
- const lossModel = new Sequential([new Linear(2,2)]);
- const pred = lossModel.forward([[1,2]]);
- const target = [[0,1]];
- const ceLoss = new CrossEntropyLoss();
- const lval = ceLoss.forward(pred,target);
- console.log("CrossEntropyLoss value:", lval);
-
- const gradLoss = ceLoss.backward();
- lossModel.backward(gradLoss);
-
- const opt = new Adam(lossModel.parameters());
- opt.step();
- console.log("Updated parameters after Adam:", lossModel.parameters());
-
- // ---------------------- Dropout Test ----------------------
- console.log("\n=== Dropout Test ===");
- const drop = new Dropout(0.5);
- const dropInput = [[1,2],[3,4]];
- const dropOut = drop.forward(dropInput);
- console.log("Dropout forward:", dropOut);
- const dropBack = drop.backward([[0.1,0.2],[0.3,0.4]]);
- console.log("Dropout backward:", dropBack);
-
- // ---------------------- Save / Load Model Test ----------------------
- console.log("\n=== Save / Load Model Test ===");
- const modelSave = new Sequential([new Linear(2,2)]);
- const json = saveModel(modelSave);
- console.log("Saved model JSON:", json);
- const modelLoad = new Sequential([new Linear(2,2)]);
- loadModel(modelLoad,json);
- console.log("Loaded model parameters:", modelLoad.parameters());
-
- // ---------------------- Advanced Utils Test ----------------------
- console.log("\n=== Advanced Utils Test ===");
- const batch = [[[1,2],[3,4]],[[5,6],[7,8]]];
- console.log("Flatten batch:", flattenBatch(batch));
- console.log("Eye 3:", eye(3));
- console.log("Reshape:", reshape({data:[[1,2,3,4]]},2,2));
- console.log("Stack:", stack([Tensor.ones(2,2), Tensor.zeros(2,2)]));
- console.log("Concat axis0:", concat([[1,2],[3,4]], [[5,6],[7,8]], 0));
- console.log("Concat axis1:", concat([[1,2],[3,4]], [[5,6],[7,8]], 1));
package/tests/proj2.js DELETED
@@ -1,129 +0,0 @@
- /**
-  * Mini-JSTorch Next Word Prediction Test (Self-contained Softmax)
-  * - Softmax function defined in this file
-  * - Beam search prediction
-  * - Large vocab & diverse sentences
-  * - Sequence length 2
-  * - Training loop 2000 epochs
-  */
-
- import { Sequential, Linear, ReLU, CrossEntropyLoss, Adam } from "../src/MainEngine.js";
-
- // ------------------------
- // Softmax Function
- // ------------------------
- function softmaxVector(logits) {
-   const maxVal = Math.max(...logits);
-   const exps = logits.map(v => Math.exp(v - maxVal));
-   const sumExps = exps.reduce((a,b)=>a+b, 0);
-   return exps.map(v => v/sumExps);
- }
-
- // ------------------------
- // Vocabulary & Tokenization
- // ------------------------
- const vocab = [
-   "i","you","he","she","we","they",
-   "like","love","hate","pizza","coding","game","movie","music","coffee","tea",
-   "run","walk","play","read","book","eat","drink","watch","listen","reads",
-   "drink","drinks"
- ];
-
- const word2idx = Object.fromEntries(vocab.map((w,i)=>[w,i]));
- const idx2word = Object.fromEntries(vocab.map((w,i)=>[i,w]));
-
- function oneHot(index, vocabSize) {
-   return Array.from({length:vocabSize}, (_,i)=>i===index?1:0);
- }
-
- // ------------------------
- // Dataset (sequence length 2)
- // ------------------------
- const sentences = [
-   ["i","like","pizza"], ["i","like","coding"], ["i","love","music"],
-   ["i","read","book"], ["i","watch","movie"], ["you","like","pizza"],
-   ["you","love","coffee"], ["you","read","book"], ["you","play","game"],
-   ["he","hate","coffee"], ["he","like","music"], ["he","play","game"],
-   ["she","love","tea"], ["she","read","book"], ["she","watch","movie"],
-   ["we","play","game"], ["we","read","book"], ["we","love","coffee"],
-   ["they","eat","pizza"], ["they","play","game"], ["they","listen","music"],
-   ["i","drink","coffee"], ["he","drink","tea"], ["she","drink","coffee"],
-   ["we","drink","tea"], ["they","drink","coffee"], ["i","play","game"],
-   ["you","watch","movie"], ["he","read","book"], ["she","listen","music"],
-   ["he","reads","book"], ["you","read","book"], ["they","read","book"],
-   ["they","watch","movie"], ["we","listen","music"], ["we","watch","movie"],
-   ["we","reads","book"],["we","drinks","coffee"],["we","love","you"],
-   ["i","read","book"], ["i","love","you"]
- ];
-
- // Convert to input-output pairs
- const X = [], Y = [];
- const seqLength = 2;
- for(const s of sentences){
-   for(let i=0;i<=s.length-seqLength;i++){
-     const inpSeq = s.slice(i,i+seqLength).map(w=>oneHot(word2idx[w],vocab.length)).flat();
-     const outWord = s[i+seqLength] ? oneHot(word2idx[s[i+seqLength]], vocab.length)
-       : oneHot(word2idx[s[i+seqLength-1]], vocab.length);
-     X.push(inpSeq);
-     Y.push(outWord);
-   }
- }
-
- // ------------------------
- // Model Definition
- // ------------------------
- const model = new Sequential([
-   new Linear(vocab.length*seqLength, 128),
-   new ReLU(),
-   new Linear(128, vocab.length)
- ]);
-
- const lossFn = new CrossEntropyLoss();
- const optimizer = new Adam(model.parameters(), 0.01);
-
- // ------------------------
- // Training Loop
- // ------------------------
- for(let epoch=1; epoch<=1600; epoch++){
-   const pred = model.forward(X);
-   const loss = lossFn.forward(pred, Y);
-   const grad = lossFn.backward();
-   model.backward(grad);
-   optimizer.step();
-
-   if(epoch % 500 === 0) console.log(`Epoch ${epoch}, Loss: ${loss.toFixed(4)}`);
- }
-
- // ------------------------
- // Beam Search Prediction
- // ------------------------
- function beamSearch(inputWords, beamWidth=2, predLength=3){
-   let sequences = [{words:[...inputWords], score:1}];
-   for(let step=0; step<predLength; step++){
-     const allCandidates = [];
-     for(const seq of sequences){
-       const inp = seq.words.slice(-seqLength).map(w=>oneHot(word2idx[w],vocab.length)).flat();
-       const out = model.forward([inp])[0];
-       const probs = softmaxVector(out); // softmax applied here
-       const top = probs.map((v,i)=>[i,v]).sort((a,b)=>b[1]-a[1]).slice(0,beamWidth);
-       for(const [idx,score] of top){
-         allCandidates.push({words:[...seq.words, idx2word[idx]], score:seq.score*score});
-       }
-     }
-     sequences = allCandidates.sort((a,b)=>b.score-a.score).slice(0,beamWidth);
-   }
-   return sequences.map(s=>({sequence:s.words, score:s.score.toFixed(3)}));
- }
-
- // ------------------------
- // Test Predictions
- // ------------------------
- const testInputs = [
-   ["i","like"], ["you","love"], ["they","play"], ["he","hate"], ["she","reads"]
- ];
-
- for(const inp of testInputs){
-   const results = beamSearch(inp, 2, 3); // beam width 2, predict next 3 words
-   console.log(`Input: ${inp.join(" ")}`);
-   results.forEach(r=>console.log(`  Sequence: ${r.sequence.join(" ")}, Score: ${r.score}`));
- }
package/tests/proj3.js DELETED
@@ -1,76 +0,0 @@
- // JSTORCH TESTS TEMPLATE FILES
- // CIRCLE CLASSIFICATION TESTS FILES
- import { Sequential, Linear, ReLU, Sigmoid, CrossEntropyLoss, Adam } from '../src/MainEngine.js';
-
- // === Generate Circle Dataset ===
- function generateCircleData(n) {
-   const X = [], Y = [];
-   for (let i = 0; i < n; i++) {
-     const x = Math.random() * 2 - 1; // -1 to 1
-     const y = Math.random() * 2 - 1;
-     const label = (x*x + y*y < 0.5*0.5) ? [1,0] : [0,1]; // inside circle radius 0.5
-     X.push([x, y]);
-     Y.push(label);
-   }
-   return { X, Y };
- }
-
- const { X, Y } = generateCircleData(300);
-
- // === Build Model (bigger hidden layers) ===
- const model = new Sequential([
-   new Linear(2, 16),
-   new ReLU(),
-   new Linear(16, 8),
-   new ReLU(),
-   new Linear(8, 2),
-   new Sigmoid()
- ]);
-
- // Loss & Optimizer
- const lossFn = new CrossEntropyLoss();
- const optimizer = new Adam(model.parameters(), 0.01); // smaller LR for not causing the model get stuck.
-
- // === Training ===
- console.log("=== Circle Classification Training (Fixed) ===");
- const start = Date.now();
- for (let epoch = 1; epoch <= 2000; epoch++) {
-   const pred = model.forward(X);
-   const loss = lossFn.forward(pred, Y);
-   const gradLoss = lossFn.backward();
-   model.backward(gradLoss);
-   optimizer.step();
-
-   if (epoch % 500 === 0) {
-     console.log(`Epoch ${epoch}, Loss: ${loss.toFixed(4)}`);
-   }
- }
- console.log(`Training Time: ${((Date.now()-start)/1000).toFixed(3)}s`); // FOR real time monitoring while training if this module is literally lightweight and does not take a long time to train.
-
- // === Predictions ===
- console.log("\n=== Predictions ===");
- const testInputs = [
-   [0,0],
-   [0.7,0.7],
-   [0.2,0.2],
-   [0.9,0.1],
-   [-0.5,-0.5],
-   [0.6,-0.2]
- ];
- testInputs.forEach(inp => {
-   const out = model.forward([inp])[0];
-   const predClass = out.indexOf(Math.max(...out));
-   console.log(
-     `Input: ${inp}, Pred: ${predClass}, Raw: ${out.map(v => v.toFixed(3))}`
-   );
- });
-
- /**
-  * === How to Run On Your VS Code Projects ===
-  * 1. Make sure Node.js (v20+ recommended) is installed on your system.
-  * 2. Open this file in VS Code (or any editor).
-  * 3. Open the terminal in the project root folder.
-  * 4. Run: node tests/proj3.js
-  *
-  * You should see the training logs and prediction outputs directly.
-  */