mini-jstorch 1.7.1 → 1.8.1

package/README.md CHANGED
@@ -1,100 +1,282 @@
1
- # Mini-JSTorch
1
+ # Mini-JSTorch
2
2
 
3
- A lightweight JavaScript neural network library for rapid frontend AI experimentation on low-resource devices, inspired by PyTorch.
4
3
 
5
- ## Overview
4
+ Mini-JSTorch is a lightweight, dependency-free JavaScript neural network library designed for education, experimentation, and small-scale models.
5
+ It runs in Node.js and modern browsers, with a simple API inspired by PyTorch-style workflows.
6
6
 
7
- Mini-JSTorch is a high-performance, minimalist JavaScript library for building neural networks. It runs efficiently in Frontend environments, including low-end devices. The library enables quick experimentation and learning in AI without compromising stability, accuracy, or training reliability.
7
+ This project prioritizes clarity, numerical correctness, and accessibility over raw performance or large-scale production use.
8
+
9
+ Version `1.8.1` introduces **SoftmaxCrossEntropyLoss** and **BCEWithLogitsLoss**.
10
+
11
+ ---
12
+
13
+ # Overview
14
+
15
+ **Mini-JSTorch provides a minimal neural network engine implemented entirely in plain JavaScript.**
16
+
17
+ *It is intended for:*
18
+
19
+ - learning how neural networks work internally
20
+ - experimenting with small models
21
+ - running simple training loops in the browser
22
+ - environments where large frameworks are unnecessary or unavailable
23
+
24
+ **Mini-JSTorch is not a replacement for PyTorch, TensorFlow, or TensorFlow.js.**
25
+
26
+ It is intentionally scoped to remain small, readable, and easy to debug.
8
27
 
9
- This release, **1.7.1** allows users to access the JST package from the `browser global scope` or `via HTML` (Sorry I was forgot this feature for a long time).
10
28
  ---
11
29
 
12
- ## New Features Highlights
30
+ # Key Characteristics
13
31
 
14
- - **Softmax Layer:** Professional classification output with proper gradient computation
15
- - **Tokenizer:** Lightweight text preprocessing for NLP tasks
16
- - **AdamW Optimizer:** Modern optimizer with decoupled weight decay
32
+ - Zero dependencies
33
+ - ESM-first (`type: module`)
34
+ - Works in Node.js and browser environments
35
+ - Explicit, manual forward and backward passes
36
+ - Focused on 2D training logic (`[batch][features]`)
37
+ - Designed for educational and experimental use
17
38
 
18
39
  ---
19
40
 
20
- ## Core Features
41
+ # Browser Support
42
+
43
+ Mini-JSTorch can now be used directly in browsers:
44
+
45
+ - via ESM imports
46
+ - via CDN / `<script>` with a global `JST` object
47
+
48
+ This makes it suitable for:
49
+
50
+ - demos
51
+ - learning environments
52
+ - lightweight frontend experiments
53
+
54
+ Here is an example of building a simple model with Mini-JSTorch.
55
+ In the browser:
56
+
57
+ ```html
58
+ <!DOCTYPE html>
59
+ <html>
60
+ <body style="font-family:monospace; padding:20px;">
61
+ <h3>mini-jstorch XOR Demo</h3>
62
+ <div id="status">Initializing...</div>
63
+ <pre id="log" style="background:#eee; padding:10px;"></pre>
64
+ <div id="res"></div>
65
+
66
+ <script type="module">
67
+ import { Sequential, Linear, ReLU, MSELoss, Adam, StepLR, Tanh } from 'https://unpkg.com/mini-jstorch@1.8.1/index.js';
68
+
69
+ async function train() {
70
+ const statusEl = document.getElementById('status');
71
+ const logEl = document.getElementById('log');
72
+
73
+ try {
74
+ const model = new Sequential([
75
+ new Linear(2, 16), new Tanh(),
76
+ new Linear(16, 8), new ReLU(),
77
+ new Linear(8, 1)
78
+ ]);
79
+
80
+ const X = [[0,0], [0,1], [1,0], [1,1]];
81
+ const y = [[0], [1], [1], [0]];
82
+ const criterion = new MSELoss();
83
+ const optimizer = new Adam(model.parameters(), 0.1);
84
+ const scheduler = new StepLR(optimizer, 25, 0.5);
85
+
86
+ for (let epoch = 0; epoch <= 1000; epoch++) {
87
+ const loss = criterion.forward(model.forward(X), y);
88
+ model.backward(criterion.backward());
89
+ optimizer.step();
90
+ scheduler.step();
91
+
92
+ if (epoch % 200 === 0) {
93
+ logEl.textContent += `Epoch ${epoch} | Loss: ${loss.toFixed(6)}\n`;
94
+ statusEl.textContent = `Training: ${epoch}/1000`;
95
+ await new Promise(r => setTimeout(r, 1));
96
+ }
97
+ }
98
+
99
+ statusEl.textContent = '✅ Done';
100
+ const preds = model.forward(X);
101
+ document.getElementById('res').innerHTML = `<h4>Results:</h4>` +
102
+ X.map((input, i) => `[${input}] -> <b>${preds[i][0].toFixed(4)}</b> (Target: ${y[i][0]})`).join('<br>');
103
+
104
+ } catch (e) {
105
+ statusEl.textContent = '❌ Error: ' + e.message;
106
+ }
107
+ }
108
+ train();
109
+ </script>
110
+ </body>
111
+ </html>
112
+ ```
113
+
114
+ ---
115
+
116
+ # Core Features
117
+
118
+ ## Layers
21
119
 
22
- - **Layers:** Linear, Flatten, Conv2D
23
- - **Activations:** ReLU, Sigmoid, Tanh, LeakyReLU, GELU, Mish, SiLU, ELU
24
- - **Loss Functions:** MSELoss, CrossEntropyLoss
25
- - **Optimizers:** Adam, SGD, LION, AdamW
26
- - **Schedulers:** StepLR, LambdaLR, ReduceLROnPlateau
27
- - **Regularization:** Dropout, BatchNorm2D
28
- - **Utilities:** zeros, randomMatrix, softmax, crossEntropy, dot, addMatrices, reshape, stack, flatten, eye, concat
29
- - **Model Container:** Sequential (for stacking layers with forward/backward passes)
120
+ - Linear
121
+ - Flatten
122
+ - Conv2D (*experimental*)
30
123
 
31
- # Others
124
+ ## Activations
32
125
 
33
- - **Tokenizer**
34
- - **Softmax Layer**
126
+ - ReLU
127
+ - Sigmoid
128
+ - Tanh
129
+ - LeakyReLU
130
+ - GELU
131
+ - Mish
132
+ - SiLU
133
+ - ELU
134
+
135
+ ## Loss Functions
136
+
137
+ - MSELoss
138
+ - CrossEntropyLoss (*legacy*)
139
+ - SoftmaxCrossEntropyLoss (**recommended**)
140
+ - BCEWithLogitsLoss (**recommended**)
141
+
142
+ ## Optimizers
143
+
144
+ - SGD
145
+ - Adam
146
+ - AdamW
147
+ - Lion
148
+
149
+ ## Learning Rate Schedulers
150
+
151
+ - StepLR
152
+ - LambdaLR
153
+ - ReduceLROnPlateau
154
+
+ ## Regularization
155
+ - Dropout (*basic*, *educational*)
156
+ - BatchNorm2D (*experimental*)
157
+
158
+ ## Utilities
159
+
160
+ - zeros
161
+ - randomMatrix
162
+ - dot
163
+ - addMatrices
164
+ - reshape
165
+ - stack
166
+ - flatten
167
+ - concat
168
+ - softmax
169
+ - crossEntropy
170
+
171
+ ## Model Container
172
+
173
+ - Sequential
35
174
 
36
175
  ---
37
176
 
38
- ## Installation
177
+ # Installation
39
178
 
179
+ ## Node.js
40
180
  ```bash
41
181
  npm install mini-jstorch
42
- # Node.js v20+ recommended for best performance
182
+ ```
183
+ Node.js v18+ or any modern browser with ES module support is recommended.
184
+
185
+ ## Git
186
+ ```bash
187
+ git clone https://github.com/Rizal-HID11/mini-jstorch-github
43
188
  ```
44
189
 
45
190
  ---
46
191
 
47
- ## Quick Start Example
192
+ # Quick Start (Recommended Loss)
193
+
194
+ ## Multi-class Classification (SoftmaxCrossEntropyLoss)
48
195
 
49
196
  ```javascript
50
- import { Sequential, Linear, ReLU, Sigmoid, CrossEntropyLoss, Adam, StepLR } from './src/jstorch.js';
197
+ import {
198
+ Sequential,
199
+ Linear,
200
+ ReLU,
201
+ SoftmaxCrossEntropyLoss,
202
+ Adam
203
+ } from "./src/jstorch.js";
51
204
 
52
- // Build model
53
205
  const model = new Sequential([
54
- new Linear(2,4),
206
+ new Linear(2, 4),
55
207
  new ReLU(),
56
- new Linear(4,2),
57
- new Sigmoid()
208
+ new Linear(4, 2) // logits output
58
209
  ]);
59
210
 
60
- // Sample XOR dataset
61
211
  const X = [
62
212
  [0,0], [0,1], [1,0], [1,1]
63
213
  ];
214
+
64
215
  const Y = [
65
216
  [1,0], [0,1], [0,1], [1,0]
66
217
  ];
67
218
 
68
- // Loss & optimizer
69
- const lossFn = new CrossEntropyLoss();
219
+ const lossFn = new SoftmaxCrossEntropyLoss();
70
220
  const optimizer = new Adam(model.parameters(), 0.1);
71
- const scheduler = new StepLR(optimizer, 20, 0.5); // Halve LR every 20 epochs
72
-
73
- // Training loop
74
- for (let epoch = 1; epoch <= 100; epoch++) {
75
- const pred = model.forward(X);
76
- const loss = lossFn.forward(pred, Y);
77
- const gradLoss = lossFn.backward();
78
- model.backward(gradLoss);
221
+
222
+ for (let epoch = 1; epoch <= 300; epoch++) {
223
+ const logits = model.forward(X);
224
+ const loss = lossFn.forward(logits, Y);
225
+ const grad = lossFn.backward();
226
+ model.backward(grad);
79
227
  optimizer.step();
80
- scheduler.step();
81
- if (epoch % 20 === 0) console.log(`Epoch ${epoch}, Loss: ${loss.toFixed(4)}, LR: ${optimizer.lr.toFixed(4)}`);
228
+
229
+ if (epoch % 50 === 0) {
230
+ console.log(`Epoch ${epoch}, Loss: ${loss.toFixed(4)}`);
231
+ }
82
232
  }
233
+ ```
234
+ Do not combine `SoftmaxCrossEntropyLoss` with a `Softmax` layer; the loss already applies softmax internally.
235
+
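A standalone sketch (plain JavaScript, no library imports) of the stable softmax the loss computes internally shows why an extra `Softmax` layer is harmful: applying softmax twice flattens the distribution, so confident predictions are lost.

```javascript
// Standalone sketch: the numerically stable softmax that
// SoftmaxCrossEntropyLoss applies internally to its logits.
function softmax(row) {
  const max = Math.max(...row);                 // subtract the max for stability
  const exps = row.map(v => Math.exp(v - max));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map(v => v / sum);
}

const logits = [4, 1, 0];
const probs = softmax(logits);            // what the loss computes from raw logits
const doubled = softmax(softmax(logits)); // effect of adding an extra Softmax layer

// The doubled distribution is much flatter than the single one.
console.log(probs[0].toFixed(3), doubled[0].toFixed(3));
```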
236
+ ## Binary Classification (BCEWithLogitsLoss)
237
+
238
+ ```javascript
239
+ import {
240
+ Sequential,
241
+ Linear,
242
+ ReLU,
243
+ BCEWithLogitsLoss,
244
+ Adam
245
+ } from "./src/jstorch.js";
83
246
 
84
- // Prediction
85
- const predTest = model.forward(X);
86
- predTest.forEach((p,i) => {
87
- const predictedClass = p.indexOf(Math.max(...p));
88
- console.log(`Input: ${X[i]}, Predicted class: ${predictedClass}, Raw output: ${p.map(v => v.toFixed(3))}`);
89
- });
247
+ const model = new Sequential([
248
+ new Linear(2, 4),
249
+ new ReLU(),
250
+ new Linear(4, 1) // logit
251
+ ]);
252
+
253
+ const X = [
254
+ [0,0], [0,1], [1,0], [1,1]
255
+ ];
256
+
257
+ const Y = [
258
+ [0], [1], [1], [0]
259
+ ];
260
+
261
+ const lossFn = new BCEWithLogitsLoss();
262
+ const optimizer = new Adam(model.parameters(), 0.1);
263
+
264
+ for (let epoch = 1; epoch <= 300; epoch++) {
265
+ const logits = model.forward(X);
266
+ const loss = lossFn.forward(logits, Y);
267
+ const grad = lossFn.backward();
268
+ model.backward(grad);
269
+ optimizer.step();
270
+ }
90
271
  ```
272
+ Do not combine `BCEWithLogitsLoss` with a `Sigmoid` layer; the loss already applies the sigmoid internally.
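The stable formulation the loss uses (see `src/jstorch.js`) can be sketched standalone: it agrees with the naive sigmoid-based form for moderate logits, but stays finite where the naive form overflows.

```javascript
// Standalone sketch of the stable BCE-with-logits formula used internally:
//   max(x, 0) - x*y + log(1 + exp(-|x|))
function stableBCE(x, y) {
  return Math.max(x, 0) - x * y + Math.log(1 + Math.exp(-Math.abs(x)));
}

// Naive form: sigmoid first, then -(y*log(s) + (1-y)*log(1-s)).
function naiveBCE(x, y) {
  const s = 1 / (1 + Math.exp(-x));
  return -(y * Math.log(s) + (1 - y) * Math.log(1 - s));
}

console.log(stableBCE(2, 1), naiveBCE(2, 1)); // agree for moderate logits
console.log(stableBCE(1000, 0));              // finite; the naive form is not
```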
91
273
 
92
274
  ---
93
275
 
94
- ## Save & Load Models
276
+ # Save & Load Models
95
277
 
96
278
  ```javascript
97
- import { saveModel, loadModel, Sequential } from '.jstorch.js';
279
+ import { saveModel, loadModel, Sequential } from "mini-jstorch";
98
280
 
99
281
  const json = saveModel(model);
100
282
  const model2 = new Sequential([...]); // same architecture
@@ -103,13 +285,12 @@ loadModel(model2, json);
103
285
 
104
286
  ---
105
287
 
106
- ## Demos & Testing
288
+ # Demos
107
289
 
108
- Check the `demo/` directory for ready-to-run demos:
109
- - **demo/MakeModel.js:** Build and run a simple neural network.
110
- - **demo/scheduler.js:** Experiment with learning rate schedulers.
111
- - **demo/fu_fun.js:** Test all user-friendly (fu or For Users/Friendly Users) functions
112
- - Add your own scripts for quick prototyping!
290
+ See the `demo/` directory for runnable examples:
291
+ - `demo/MakeModel.js` simple training loop
292
+ - `demo/scheduler.js` learning rate schedulers
293
+ - `demo/fu_fun.js` utility functions
113
294
 
114
295
  ```bash
115
296
  node demo/MakeModel.js
@@ -119,17 +300,30 @@ node demo/fu_fun.js
119
300
 
120
301
  ---
121
302
 
122
- ## Intended Use Cases
303
+ # Design Notes & Limitations
123
304
 
124
- - Rapid prototyping of neural networks in frontend.
125
- - Learning and teaching foundational neural network concepts.
126
- - Experimentation on low-end devices or mobile browsers.
127
- - Lightweight AI projects without GPU dependency.
305
+ - Training logic is 2D-first: `[batch][features]`
306
+ - Higher-dimensional data is reshaped internally by specific layers (e.g. Conv2D, Flatten)
307
+ - No automatic broadcasting or autograd graph
308
+ - Some components (Conv2D, BatchNorm2D, Dropout) are educational / experimental
309
+ - Not intended for large-scale or production ML workloads
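The 2D-first convention above can be pictured with plain arrays (a minimal illustration, not library code):

```javascript
// Sketch: the data layout mini-jstorch training code assumes.
// A "tensor" is a plain 2D array: outer index = sample, inner index = feature.
const batch = [
  [0.0, 1.0],  // sample 0 (2 features)
  [1.0, 0.0],  // sample 1
  [1.0, 1.0],  // sample 2
];

const batchSize = batch.length;       // number of samples
const numFeatures = batch[0].length;  // features per sample
console.log(`shape: [${batchSize}][${numFeatures}]`);
```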
128
310
 
129
311
  ---
130
312
 
313
+ # Intended Use Cases
314
+
315
+ - Learning how neural networks work internally
316
+ - Teaching ML fundamentals
317
+ - Small experiments in Node.js or the browser
318
+ - Lightweight AI demos without GPU or large frameworks
319
+
320
+ ---
321
+
131
322
  # License
132
323
 
133
- `MIT License`
324
+ MIT License
325
+
326
+ Copyright (c) 2024
327
+ rizal-editors
134
328
 
135
- **Copyright (c) 2024 rizal-editors**
329
+ ---
package/index.js CHANGED
@@ -1,7 +1,6 @@
1
1
  // package root
2
- // ugh, i forgot provide JST can use in browser global scope...
3
2
 
4
- // now we provided JST use in browser global scope
3
+ // provide JST in browser global scope
5
4
  import * as JST from './src/jstorch.js';
6
5
 
7
6
  if (typeof window !== 'undefined') {
package/package.json CHANGED
@@ -1,29 +1,19 @@
1
1
  {
2
2
  "name": "mini-jstorch",
3
- "version": "1.7.1",
3
+ "version": "1.8.1",
4
4
  "type": "module",
5
5
  "description": "A lightweight JavaScript neural network library for learning AI concepts and rapid Frontend experimentation. PyTorch-inspired, zero dependencies, perfect for educational use.",
6
6
  "main": "index.js",
7
7
  "keywords": [
8
- "neural-network",
9
- "JST",
10
- "javascript",
11
- "lightweight-torch",
12
- "lightweight",
13
- "small-torch",
8
+ "lightweight-ml",
14
9
  "javascript-torch",
15
- "jstorch",
16
10
  "front-end-torch",
17
- "machine-learning",
18
11
  "tiny-ml",
19
- "frontend-nn",
20
- "frontend-ai",
21
- "mini-neural-network"
12
+ "mini-neural-network",
13
+ "mini-ml-library",
14
+ "mini-js-ml",
15
+ "educational-ml"
22
16
  ],
23
17
  "author": "Rizal",
24
- "license": "MIT",
25
- "repository": {
26
- "type": "git",
27
- "url": "https://github.com/rizal-editors/mini-jstorch.git"
28
- }
18
+ "license": "MIT"
29
19
  }
package/src/jstorch.js CHANGED
@@ -1,9 +1,8 @@
1
1
  /*!
2
- * Project: mini-jstorch
3
2
  * File: jstorch.js
4
- * Author: Rizal-editors
3
+ * Author: rizal-editors
5
4
  * License: MIT
6
- * Copyright (C) 2025 Rizal-editors
5
+ * Copyright (C) 2025 rizal-editors
7
6
  *
8
7
  * Permission is hereby granted, free of charge, to any person obtaining a copy
9
8
  * of this software and associated documentation files (the "Software"), to deal
@@ -30,7 +29,7 @@
30
29
  // See the Documentation for more details.
31
30
  // --------------------------------------------------------------
32
31
 
33
- // ---------------------- DONOT USE THESE (ENGINE INTERNALS) ----------------------
32
+ // ---------------------- DO NOT USE THESE (ENGINE INTERNALS); ERRORS/BUGS ARE EXPECTED ----------------------
34
33
  export function zeros(rows, cols) {
35
34
  return Array.from({length:rows},()=>Array(cols).fill(0));
36
35
  }
@@ -80,7 +79,7 @@ export function crossEntropy(pred,target){
80
79
  return -target.reduce((sum,t,i)=>sum+t*Math.log(pred[i]+eps),0);
81
80
  }
82
81
 
83
- // ---------------------- USERS FRIENDLY UTILS (USE THIS!) ----------------
82
+ // ---------------------- USERS FRIENDLY UTILS (USE THIS FOR YOUR UTILS!) ----------------
84
83
  export function fu_tensor(data, requiresGrad = false) {
85
84
  if (!Array.isArray(data) || !Array.isArray(data[0])) {
86
85
  throw new Error("fu_tensor: Data must be 2D array");
@@ -751,6 +750,69 @@ export class Dropout{ constructor(p=0.5){ this.p=p; } forward(x){ return x.map(r
751
750
  export class MSELoss{ forward(pred,target){ this.pred=pred; this.target=target; const losses=pred.map((row,i)=>row.reduce((sum,v,j)=>sum+(v-target[i][j])**2,0)/row.length); return losses.reduce((a,b)=>a+b,0)/pred.length; } backward(){ return this.pred.map((row,i)=>row.map((v,j)=>2*(v-this.target[i][j])/row.length)); } }
752
751
  export class CrossEntropyLoss{ forward(pred,target){ this.pred=pred; this.target=target; const losses=pred.map((p,i)=>crossEntropy(softmax(p),target[i])); return losses.reduce((a,b)=>a+b,0)/pred.length; } backward(){ return this.pred.map((p,i)=>{ const s=softmax(p); return s.map((v,j)=>(v-this.target[i][j])/this.pred.length); }); } }
753
752
 
753
+ export class SoftmaxCrossEntropyLoss {
754
+ forward(logits, targets) {
755
+ this.targets = targets;
756
+ const batch = logits.length;
757
+
758
+ // stable softmax
759
+ this.probs = logits.map(row => {
760
+ const max = Math.max(...row);
761
+ const exps = row.map(v => Math.exp(v - max));
762
+ const sum = exps.reduce((a,b)=>a+b, 0);
763
+ return exps.map(v => v / sum);
764
+ });
765
+
766
+ let loss = 0;
767
+ for (let i = 0; i < batch; i++) {
768
+ for (let j = 0; j < this.probs[i].length; j++) {
769
+ if (targets[i][j] === 1) {
770
+ loss -= Math.log(this.probs[i][j] + 1e-12);
771
+ }
772
+ }
773
+ }
774
+
775
+ return loss / batch;
776
+ }
777
+
778
+ backward() {
779
+ const batch = this.targets.length;
780
+ return this.probs.map((row,i) =>
781
+ row.map((p,j) => (p - this.targets[i][j]) / batch)
782
+ );
783
+ }
784
+ }
785
+
786
+ export class BCEWithLogitsLoss {
787
+ forward(logits, targets) {
788
+ this.logits = logits;
789
+ this.targets = targets;
790
+ const batch = logits.length;
791
+ let loss = 0;
792
+
793
+ for (let i = 0; i < batch; i++) {
794
+ for (let j = 0; j < logits[i].length; j++) {
795
+ const x = logits[i][j];
796
+ const y = targets[i][j];
797
+ // stable BCE
798
+ loss += Math.max(x, 0) - x*y + Math.log(1 + Math.exp(-Math.abs(x)));
799
+ }
800
+ }
801
+
802
+ return loss / batch;
803
+ }
804
+
805
+ backward() {
806
+ const batch = this.logits.length;
807
+ return this.logits.map((row,i) =>
808
+ row.map((x,j) => {
809
+ const sigmoid = 1 / (1 + Math.exp(-x));
810
+ return (sigmoid - this.targets[i][j]) / batch;
811
+ })
812
+ );
813
+ }
814
+ }
815
+
754
816
  // ---------------------- Optimizers ----------------------
755
817
  export class Adam{
756
818
  constructor(params, lr = 0.001, b1 = 0.9, b2 = 0.999, eps = 1e-8, max_grad_norm = 1.0){