mini-jstorch 1.4.5 → 1.6.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -4,36 +4,28 @@ A lightweight JavaScript neural network library for rapid frontend AI experiment
 
  ## Overview
 
- Mini-JSTorch is a high-performance, minimalist JavaScript library for building neural networks. It runs efficiently in both frontend and backend environments, including low-end devices. The library enables quick experimentation and learning in AI without compromising stability, accuracy, or training reliability.
+ Mini-JSTorch is a high-performance, minimalist JavaScript library for building neural networks. It runs efficiently in frontend environments, including low-end devices. The library enables quick experimentation and learning in AI without compromising stability, accuracy, or training reliability.
 
- This release, **version 1.4.5**, We make `Matrix Utils` now can be used in others Files.
+ This release, **version 1.6.0**, adds the **LION** optimizer and the **ReduceLROnPlateau** scheduler, along with enhanced stability and improved architecture compatibility.
 
  ---
 
- ## Feature Highlights
-
- - **Learning Rate Schedulers:** New `StepLR` and `LambdaLR` for dynamic optimizer learning rate adjustment.
- - **Full Conv2D support:** Forward and backward operations for convolutional layers.
- - **Tensor operations:** Broadcasting, reshaping, and reduction utilities.
- - **Advanced Activations:** Includes `LeakyReLU`, `GELU`, `Mish`, `SiLU`, `ELU`, and more.
- - **Optimizers:** `Adam` and `SGD` with gradient updates.
- - **Dropout Layer:** For regularization during training.
- - **BatchNorm2D:** For stable training in convolutional models.
- - **Tensor Manipulation:** Utilities like `flatten`, `stack`, `concat`, `eye`, `reshape`.
- - **Model Save & Load:** Easy persistence and restore of models.
- - **Test/Demo Templates:** The `tests/` folder provides ready-to-run examples for model building and feature usage.
- - **Performance Optimized:** Suitable for both frontend and backend usage.
- - **Backward Compatibility:** Maintained for core layers and activations.
+ ## New Feature Highlights
+
+ - **LION Optimizer:** State-of-the-art optimizer with superior stability and convergence (see the sketch after this hunk).
+ - **ReduceLROnPlateau Scheduler:** Adaptive learning rate scheduling based on loss plateaus.
+ - **Enhanced Stability:** Gradient clipping, better weight initialization, and NaN prevention.
+
 
  ---
 
  ## Core Features
 
- - **Layers:** Linear, Conv2D
+ - **Layers:** Linear, Flatten, Conv2D
  - **Activations:** ReLU, Sigmoid, Tanh, LeakyReLU, GELU, Mish, SiLU, ELU
  - **Loss Functions:** MSELoss, CrossEntropyLoss
- - **Optimizers:** Adam, SGD
- - **Schedulers:** StepLR, LambdaLR
+ - **Optimizers:** Adam, SGD, **LION**
+ - **Schedulers:** StepLR, LambdaLR, **ReduceLROnPlateau**
  - **Regularization:** Dropout, BatchNorm2D
  - **Utilities:** zeros, randomMatrix, softmax, crossEntropy, dot, addMatrices, reshape, stack, flatten, eye, concat
  - **Model Container:** Sequential (for stacking layers with forward/backward passes)
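
The two headline additions are meant to be used together in an ordinary training loop. Below is a minimal sketch of that pairing; the `LION` constructor is assumed to take `(parameters, lr)` like `Adam`, and the `ReduceLROnPlateau` options follow the `demo/scheduler.js` file added later in this diff:

```javascript
// Hedged sketch: LION's (parameters, lr) signature is assumed to match Adam's.
import { Sequential, Linear, ReLU, MSELoss, LION, ReduceLROnPlateau } from './src/jstorch.js';

const model = new Sequential([new Linear(2, 8), new ReLU(), new Linear(8, 1)]);
const criterion = new MSELoss();
const optimizer = new LION(model.parameters(), 0.001);

// Options as in demo/scheduler.js: halve the LR once the loss stops
// improving for 3 consecutive epochs, but never drop below 1e-4.
const scheduler = new ReduceLROnPlateau(optimizer, { patience: 3, factor: 0.5, min_lr: 1e-4 });

const X = [[0, 0], [0, 1], [1, 0], [1, 1]];
const y = [[0], [1], [1], [0]];

for (let epoch = 0; epoch < 200; epoch++) {
  const loss = criterion.forward(model.forward(X), y);
  model.backward(criterion.backward());
  optimizer.step();
  scheduler.step(loss); // unlike StepLR, the plateau scheduler needs the current loss
}
```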
@@ -52,7 +44,7 @@ npm install mini-jstorch
  ## Quick Start Example
 
  ```javascript
- import { Sequential, Linear, ReLU, Sigmoid, CrossEntropyLoss, Adam, StepLR } from 'mini-jstorch';
+ import { Sequential, Linear, ReLU, Sigmoid, CrossEntropyLoss, Adam, StepLR } from './src/jstorch.js';
 
  // Build model
  const model = new Sequential([
@@ -99,7 +91,7 @@ predTest.forEach((p,i) => {
  ## Save & Load Models
 
  ```javascript
- import { saveModel, loadModel, Sequential } from './src/MainEngine.js';
+ import { saveModel, loadModel, Sequential } from './src/jstorch.js';
 
  const json = saveModel(model);
  const model2 = new Sequential([...]); // same architecture
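
The example is cut off here by the diff context (the next hunk header picks it up at `loadModel(model2, json);`). For reference, a complete round trip built from these calls might look like the sketch below; persisting via Node's `fs` is an illustrative assumption, and if `saveModel` returns an object rather than a string, wrap it with `JSON.stringify`/`JSON.parse`:

```javascript
// Hedged sketch of a full save/load round trip.
import { writeFileSync, readFileSync } from 'node:fs';
import { saveModel, loadModel, Sequential, Linear, ReLU } from './src/jstorch.js';

const model = new Sequential([new Linear(2, 4), new ReLU(), new Linear(4, 1)]);

// Assumed: saveModel returns a JSON string of the weights.
writeFileSync('model.json', saveModel(model));

// Restoring requires rebuilding the same architecture first.
const model2 = new Sequential([new Linear(2, 4), new ReLU(), new Linear(4, 1)]);
loadModel(model2, readFileSync('model.json', 'utf8'));
```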
@@ -110,21 +102,23 @@ loadModel(model2, json);
 
  ## Demos & Testing
 
- Check the `tests/` directory for ready-to-run demos:
- - **tests/MakeModel.js:** Build and run a simple neural network.
- - **tests/scheduler.js:** Experiment with learning rate schedulers.
+ Check the `demo/` directory for ready-to-run demos:
+ - **demo/MakeModel.js:** Build and run a simple neural network.
+ - **demo/scheduler.js:** Experiment with learning rate schedulers.
+ - **demo/fu_fun.js:** Exercise all user-facing `fu_` tensor functions.
  - Add your own scripts for quick prototyping!
 
  ```bash
- node tests/MakeModel.js
- node tests/scheduler.js
+ node demo/MakeModel.js
+ node demo/scheduler.js
+ node demo/fu_fun.js
  ```
 
  ---
 
  ## Intended Use Cases
 
- - Rapid prototyping of neural networks in frontend and backend.
+ - Rapid prototyping of neural networks in the frontend.
  - Learning and teaching foundational neural network concepts.
  - Experimentation on low-end devices or mobile browsers.
  - Lightweight AI projects without GPU dependency.
@@ -135,22 +129,4 @@ node tests/scheduler.js
 
  `MIT License`
 
- **Copyright (c) 2025 rizal-editors**
-
- Permission is hereby granted, free of charge, to any person obtaining a copy
- of this software and associated documentation files (the "Software"), to deal
- in the Software without restriction, including without limitation the rights
- to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
- copies of the Software, and to permit persons to whom the Software is
- furnished to do so, subject to the following conditions:
-
- The above copyright notice and this permission notice shall be included in all
- copies or substantial portions of the Software.
-
- THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
- AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
- OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
- SOFTWARE.
+ **Copyright (c) 2025 rizal-editors**
package/demo/MakeModel.js ADDED
@@ -0,0 +1,36 @@
+ import { Sequential, Linear, ReLU, MSELoss, Adam, StepLR, Tanh } from '../src/jstorch.js';
+
+ const model = new Sequential([
+   new Linear(2, 16),
+   new Tanh(),
+   new Linear(16, 8),
+   new ReLU(),
+   new Linear(8, 1)
+ ]);
+
+ const X = [[0,0], [0,1], [1,0], [1,1]];
+ const y = [[0], [1], [1], [0]];
+
+ const criterion = new MSELoss();
+ const optimizer = new Adam(model.parameters(), 0.1);
+ const scheduler = new StepLR(optimizer, 25, 0.5); // LR * 0.5 every 25 epochs
+
+ console.log("Training Progress:");
+ for (let epoch = 0; epoch < 1000; epoch++) {
+   const pred = model.forward(X);
+   const loss = criterion.forward(pred, y);
+   const grad = criterion.backward();
+   model.backward(grad);
+   optimizer.step();
+   scheduler.step();
+
+   if (epoch % 100 === 0) {
+     console.log(`Epoch ${epoch}: Loss = ${loss.toFixed(6)}, LR = ${optimizer.lr.toFixed(6)}`);
+   }
+ }
+
+ console.log("\nFinal Predictions:");
+ const predictions = model.forward(X);
+ predictions.forEach((pred, i) => {
+   console.log(`Input: ${X[i]} -> ${pred[0].toFixed(4)} (target: ${y[i][0]})`);
+ });
package/demo/fu_fun.js ADDED
@@ -0,0 +1,72 @@
+
+ import {
+   fu_tensor, fu_add, fu_mul, fu_matmul, fu_sum, fu_mean,
+   fu_relu, fu_sigmoid, fu_tanh, fu_softmax, fu_flatten, fu_reshape
+ } from '../src/jstorch.js';
+
+ function testAllFuFunctions() {
+   console.log("TESTING ALL FU_FUNCTIONS\n");
+
+   // Test 1: fu_tensor
+   console.log("1. fu_tensor");
+   const t1 = fu_tensor([[1, 2], [3, 4]]);
+   console.log("", t1.data);
+
+   // Test 2: fu_add
+   console.log("\n2. fu_add");
+   const a = fu_tensor([[1, 2]]);
+   const b = fu_tensor([[3, 4]]);
+   const c = fu_add(a, b);
+   console.log("", a.data, "+", b.data, "=", c.data);
+
+   // Test 3: fu_mul
+   console.log("\n3. fu_mul");
+   const d = fu_mul(a, b);
+   console.log("", a.data, "*", b.data, "=", d.data);
+
+   // Test 4: fu_matmul
+   console.log("\n4. fu_matmul");
+   const e = fu_tensor([[1, 2]]);
+   const f = fu_tensor([[3], [4]]);
+   const g = fu_matmul(e, f);
+   console.log("matmul =", g.data);
+
+   // Test 5: fu_sum & fu_mean
+   console.log("\n5. fu_sum & fu_mean");
+   const h = fu_tensor([[1, 2], [3, 4]]);
+   const sum = fu_sum(h);
+   const mean = fu_mean(h);
+   console.log("sum =", sum.data, "mean =", mean.data);
+
+   // Test 6: fu_relu
+   console.log("\n6. fu_relu");
+   const i = fu_tensor([[-1, 0], [1, 2]]);
+   const relu = fu_relu(i);
+   console.log("relu =", relu.data);
+
+   // Test 7: fu_sigmoid
+   console.log("\n7. fu_sigmoid");
+   const sigmoid = fu_sigmoid(i);
+   console.log("sigmoid =", sigmoid.data);
+
+   // Test 8: fu_tanh
+   console.log("\n8. fu_tanh");
+   const tanh = fu_tanh(i);
+   console.log("tanh =", tanh.data);
+
+   // Test 9: fu_softmax
+   console.log("\n9. fu_softmax");
+   const j = fu_tensor([[1, 2, 3]]);
+   const softmax = fu_softmax(j);
+   console.log("softmax =", softmax.data);
+
+   // Test 10: fu_flatten & fu_reshape
+   console.log("\n10. fu_flatten & fu_reshape");
+   const k = fu_tensor([[1, 2], [3, 4]]);
+   const flat = fu_flatten(k);
+   const reshaped = fu_reshape(flat, 1, 4);
+   console.log("flatten =", flat.data);
+   console.log("reshape =", reshaped.data);
+ }
+
+ testAllFuFunctions();
package/demo/scheduler.js ADDED
@@ -0,0 +1,69 @@
+ // Example: Test ALL learning rate schedulers with mini-jstorch optimizers
+
+ import { SGD, StepLR, LambdaLR, ReduceLROnPlateau, Tensor } from "../src/jstorch.js";
+
+ const param = { param: [[1, 2], [3, 4]], grad: [[0, 0], [0, 0]] };
+ const optimizer = new SGD([param], 0.1);
+
+ // --- Test StepLR ---
+ console.log("Testing StepLR...");
+ const stepScheduler = new StepLR(optimizer, 3, 0.5);
+ for (let epoch = 1; epoch <= 10; epoch++) {
+   stepScheduler.step();
+   console.log(`Epoch ${epoch}: LR = ${optimizer.lr.toFixed(4)}`);
+ }
+
+ // --- Test LambdaLR ---
+ console.log("\nTesting LambdaLR...");
+ optimizer.lr = 0.1; // Reset LR
+ const lambdaScheduler = new LambdaLR(optimizer, epoch => 1.0 / (1 + epoch));
+ for (let epoch = 1; epoch <= 5; epoch++) {
+   lambdaScheduler.step();
+   console.log(`Epoch ${epoch}: LR = ${optimizer.lr.toFixed(4)}`);
+ }
+
+ // --- Test ReduceLROnPlateau ---
+ console.log("\nTesting ReduceLROnPlateau...");
+ optimizer.lr = 0.1; // Reset LR
+ const plateauScheduler = new ReduceLROnPlateau(optimizer, {
+   patience: 2,
+   factor: 0.5,
+   min_lr: 0.01,
+   verbose: true
+ });
+
+ // Simulate training with plateauing loss
+ const losses = [0.9, 0.8, 0.7, 0.69, 0.68, 0.68, 0.68, 0.67, 0.67, 0.67];
+ console.log("Simulated training with plateauing loss:");
+ for (let epoch = 0; epoch < losses.length; epoch++) {
+   plateauScheduler.step(losses[epoch]);
+   console.log(`Epoch ${epoch + 1}: Loss = ${losses[epoch].toFixed(3)}, LR = ${optimizer.lr.toFixed(4)}, Wait = ${plateauScheduler.wait}`);
+ }
+
+ // --- Test ReduceLROnPlateau with Cooldown ---
+ console.log("\nTesting ReduceLROnPlateau with Cooldown...");
+ optimizer.lr = 0.1; // Reset LR
+ const plateauWithCooldown = new ReduceLROnPlateau(optimizer, {
+   patience: 2,
+   factor: 0.5,
+   min_lr: 0.01,
+   cooldown: 2,
+   verbose: true
+ });
+
+ // Simulate training with multiple plateaus
+ const losses2 = [0.9, 0.9, 0.9, 0.9, 0.8, 0.8, 0.8, 0.8, 0.7, 0.7];
+ console.log("Simulated training with cooldown:");
+ for (let epoch = 0; epoch < losses2.length; epoch++) {
+   plateauWithCooldown.step(losses2[epoch]);
+   console.log(`Epoch ${epoch + 1}: Loss = ${losses2[epoch].toFixed(3)}, LR = ${optimizer.lr.toFixed(4)}, Wait = ${plateauWithCooldown.wait}, Cooldown = ${plateauWithCooldown.cooldown_counter}`);
+ }
+
+ // --- Summary ---
+ console.log("\nSCHEDULER SUMMARY:");
+ console.log(`StepLR: ${stepScheduler.last_epoch} epochs processed`);
+ console.log(`LambdaLR: ${lambdaScheduler.last_epoch} epochs processed`);
+ console.log(`ReduceLROnPlateau: ${plateauScheduler.num_reductions} LR reductions`);
+ console.log(`ReduceLROnPlateau with Cooldown: ${plateauWithCooldown.num_reductions} LR reductions`);
+
+ console.log("\nAll schedulers tested successfully!");
package/index.js CHANGED
@@ -1,2 +1,2 @@
1
1
  // package root
2
- export * from "./src/MainEngine.js";
2
+ export * from "./src/jstorch.js";
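
Because the package root still re-exports the whole engine, consumers who installed from npm should presumably be unaffected by the internal rename from `MainEngine.js` to `jstorch.js`:

```javascript
// Package-name imports resolve through index.js, so they keep working:
import { Sequential, Linear, LION } from 'mini-jstorch';
```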
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "mini-jstorch",
-   "version": "1.4.5",
+   "version": "1.6.0",
    "type": "module",
    "description": "A lightweight JavaScript neural network library for rapid frontend AI experimentation on low-resource devices, inspired by PyTorch.",
    "main": "index.js",