mini-jstorch 1.6.0 → 1.7.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/Docs/About.md ADDED
@@ -0,0 +1,83 @@
+ # Mini-JSTorch — Technical Information
+
+ ---
+
+ ## General Information
+
+ - **Project Name:** Mini-JSTorch
+ - **Internal Name:** JST (JSTorch)
+
+ > Note:
+ > Early versions of Mini-JSTorch do not strictly follow semantic versioning conventions
+ > (e.g. `0.0.1` for patches, `0.1.0` for minor releases, `1.0.0` for major releases).
+ > This inconsistency reflects the early learning and experimental phase of the project.
+
+ ---
+
+ ## 1. Engine Architecture Limitations (JSTorch Core)
+
+ This section outlines the known structural weaknesses of the JSTorch engine.
+ Although the architecture may appear sophisticated, it is currently fragile and tightly coupled.
+
+ ### Identified Limitations
+
+ - **High dependency on Utilities**
+   Every core class depends directly on the Utilities module defined at the top of the
+   `jstorch.js` file. This creates strong coupling across the engine.
+
+ - **Limited Tensor dimensionality**
+   Tensor implementations currently support only two dimensions. Extending support to
+   higher-dimensional tensors would require significant architectural changes due to the
+   existing complexity.
+
+ - **Uneven class complexity**
+   New or recently modified classes often become significantly more complex than older
+   ones, leading to inconsistency in maintainability and internal design balance.
+
+ ---
+
+ ## 2. Rationale Behind the `fu_` Utilities
+
+ This section explains why the `fu_` utilities were introduced despite the existence of internal Utilities.
+
+ ### Issues with Internal Utilities
+
+ - The Utilities defined at the beginning of `jstorch.js` are **internal engine helpers**,
+   not intended for direct user interaction.
+
+ - These Utilities are heavily reused across multiple core classes. Any modification to a
+   utility function may trigger **cascading (domino) errors** throughout the engine due to
+   tight dependencies.
+
+ - Some utility functions intentionally diverge from standard or expected formulas.
+   For example:
+   - Expected formula:
+     `Param1 - Param4 * Param3`
+   - Internal Utilities implementation:
+     `Param1 - Param2 * Param3 + Param4`
+
+ This behavior exists because internal Utilities are optimized for class-level computations,
+ not for user-facing correctness or predictability.
+
+ ### Purpose of `fu_` Utilities
+
+ The `fu_` utilities were designed to improve the **user experience** by providing:
+
+ - Predictable and correct computational behavior
+ - User-friendly and stable helper functions
+ - Isolation from internal engine changes
+ - Reduced risk of incorrect outputs and dependency-based cascading errors
+
+ In short, `fu_` exists to ensure safety, clarity, and consistency for end users of
+ Mini-JSTorch, as the sketch below illustrates.
+
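+ As an illustrative sketch (assuming only what `jstorch.js` exports; not part of the
+ original docs), the user-facing `fu_tensor` helper shows the intended contrast: it
+ validates input up front and fails with a clear message instead of relying on
+ engine-internal conventions:
+
+ ```js
+ import { fu_tensor } from "mini-jstorch";
+
+ const t = fu_tensor([[1, 2], [3, 4]]); // OK: data is a 2D array
+ // fu_tensor([1, 2, 3]);               // throws Error("fu_tensor: Data must be 2D array")
+ ```
+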
+ ---
+
+ ## 3. SJK (Shortcut JST Keywords) Reference
+
+ This section lists commonly used abbreviations and keywords within the Mini-JSTorch ecosystem.
+
+ **Format:** `"KEYWORD" : "Full Name / Meaning"`
+
+ - `"JST"` : JSTorch
+ - `"fu"` : For User / User-Friendly
+ - `"fun"` : Function
+ - `"Dummy"` : Experimental
+ - `"Exp"` : Restricted experimental entity
+ - `"msg"` : Message, comment, warning, announcement
+ - `"donot"` : Do not / Don't
+
+ ---
package/Docs/Structure.md ADDED
@@ -0,0 +1,129 @@
+ # Project File Structure #
+
+ This document describes the directory and file structure of the **mini-JSTorch** package.
+ It provides an overview of how the project is organized and the purpose of each major component.
+
+ ---
+
+ ## Repository Overview
+
+ ```text
+ mini-jstorch/
+ ├── demo/
+ │   ├── fu_fun.js
+ │   ├── MakeModel.js
+ │   └── scheduler.js
+ ├── Docs/
+ │   ├── About.md
+ │   └── Structure.md
+ ├── src/
+ │   ├── jstorch.js
+ │   └── Dummy/
+ │       └── msg/
+ ├── index.js
+ ├── package.json
+ └── README.md
+ ```
+
+ ---
+
+ ## Directory Descriptions
+
+ `/demo`
+
+ - Contains demonstration and testing files.
+   - Used for unit testing, quick system checks, and example usage
+   - Intended for users who prefer practical examples over reading full API documentation
+   - Allows testing features without writing extensive manual code
+
+ `/Docs`
+
+ - Contains detailed documentation related to the mini-JSTorch package.
+   - Provides deeper explanations of internal design and usage
+   - Intended for contributors and advanced users
+
+ `/src`
+
+ - Contains the source code of the JSTorch engine.
+   - Houses all core logic and internal implementations
+   - Modifications in this directory directly affect engine behavior
+
+ `/src/Dummy`
+
+ - Experimental and restricted directory.
+   - Used for experimental purposes and future development
+   - Files inside this directory may be unstable or incomplete
+   - Not intended for public or production use
+
+ `/src/Dummy/msg`
+
+ - Contains warning or message files.
+   - Indicates that files within the `Dummy` directory are restricted
+   - Serves as a notification mechanism for experimental or future-update-related content
+
+ ---
+
+ ## File Descriptions
+
+ `/demo/fu_fun.js`
+
+ - Purpose: Tests all user-facing (`fu_`) functions
+ - Notes: Focuses on friendly and predictable helper utilities
+
+ `/demo/MakeModel.js`
+
+ - Purpose: Demonstrates creation of a simple model
+ - Notes: Uses the `StepLR` scheduler as part of the example workflow
+
+ `/demo/scheduler.js`
+
+ - Purpose: Tests scheduler-related functionality
+ - Notes: Intended to validate learning rate scheduling behavior
+
+ `/Docs/About.md`
+
+ - Purpose: Contains additional information about the mini-JSTorch package
+ - Notes: May include background, design decisions, or non-API-related explanations
+
+ `/Docs/Structure.md`
+
+ - Purpose: Documents the repository file and folder structure
+ - Notes: This file
+
+ `/src/jstorch.js`
+
+ - Purpose: Core engine implementation
+ - Notes:
+   - Contains all JSTorch engine logic and functions
+   - Central file of the entire package
+   - Changes here have wide-ranging effects
+
+ `index.js`
+
+ - Purpose: Package entry point
+ - Notes: Exposes public APIs and connects internal modules
+
+ `package.json`
+
+ - Purpose: Project configuration and metadata
+ - Notes: Defines dependencies, scripts, and package information
+
+ `README.md`
+
+ - Purpose: Main documentation entry
+ - Notes: Provides overview, installation instructions, and basic usage
+
+ **Notes**
+
+ - Experimental files may change or be restricted without notice
+ - Users are encouraged to rely on public APIs and documented utilities
+ - Internal structures are subject to refactoring as the project evolves
+
+ ---
package/README.md CHANGED
@@ -4,18 +4,17 @@ A lightweight JavaScript neural network library for rapid frontend AI experiment
 
  ## Overview
 
- Mini-JSTorch is a high-performance, minimalist JavaScript library for building neural networks. It runs efficiently in frontend environments, including low-end devices. The library enables quick experimentation and learning in AI without compromising stability, accuracy, or training reliability.
+ Mini-JSTorch is a high-performance, minimalist JavaScript library for building neural networks. It runs efficiently in Frontend environments, including low-end devices. The library enables quick experimentation and learning in AI without compromising stability, accuracy, or training reliability.
 
- This release, **version 1.6.0:** Adds **LION** Optimizer, Adds **ReduceLROnPlateau** Scheduler, enhanced stability, and improved architecture compatibility.
+ This release, **version 1.7.0**, introduces the **Softmax Layer**, **Tokenizer**, and **AdamW Optimizer**, along with enhanced NLP capabilities.
 
  ---
 
  ## New Features Highlights
 
- - **LIONS Optimizer:** State-of-the-art optimizer with superior stability and convergence
- - **ReduceLROnPlateau Scheduler:** Adaptive learning rate scheduling based on loss plateaus
- - **Enhanced Stability:** Gradient clipping, better weight initialization, and NaN prevention
-
+ - **Softmax Layer:** Professional classification output with proper gradient computation
+ - **Tokenizer:** Lightweight text preprocessing for NLP tasks
+ - **AdamW Optimizer:** Modern optimizer with decoupled weight decay
 
  ---
 
@@ -24,12 +23,17 @@ This release, **version 1.6.0:** Adds **LION** Optimizer, Adds **ReduceLROnPlate
  - **Layers:** Linear, Flatten, Conv2D
  - **Activations:** ReLU, Sigmoid, Tanh, LeakyReLU, GELU, Mish, SiLU, ELU
  - **Loss Functions:** MSELoss, CrossEntropyLoss
- - **Optimizers:** Adam, SGD, **LION**
- - **Schedulers:** StepLR, LambdaLR, **ReduceLROnPlateau**
+ - **Optimizers:** Adam, SGD, LION, **AdamW**
+ - **Schedulers:** StepLR, LambdaLR, ReduceLROnPlateau
  - **Regularization:** Dropout, BatchNorm2D
  - **Utilities:** zeros, randomMatrix, softmax, crossEntropy, dot, addMatrices, reshape, stack, flatten, eye, concat
  - **Model Container:** Sequential (for stacking layers with forward/backward passes)
 
+ ### Others
+
+ - **Tokenizer**
+ - **Softmax Layer**
+
  ---
 
  ## Installation
@@ -105,7 +109,7 @@ loadModel(model2, json);
  Check the `demo/` directory for ready-to-run demos:
  - **demo/MakeModel.js:** Build and run a simple neural network.
  - **demo/scheduler.js:** Experiment with learning rate schedulers.
- - **demo/fu_fun.js:** Tests All fu_functions (for users).
+ - **demo/fu_fun.js:** Test all user-friendly (`fu_`, "For User / User-Friendly") functions.
  - Add your own scripts for quick prototyping!
 
  ```bash
@@ -129,4 +133,4 @@ node demo/fu_fun.js
 
  `MIT License`
 
- **Copyright (c) 2025 rizal-editors**
+ **Copyright (c) 2024 rizal-editors**
package/demo/scheduler.js CHANGED
@@ -1,4 +1,4 @@
- // Example: Test ALL learning rate schedulers with mini-jstorch optimizers
+ // Example: Test ALL learning rate schedulers in mini-jstorch with mini-jstorch optimizers
 
  import { SGD, StepLR, LambdaLR, ReduceLROnPlateau, Tensor } from "../src/jstorch.js";
 
package/index.js CHANGED
@@ -1,2 +1,3 @@
  // package root
+ // * = all exports from src/jstorch.js
  export * from "./src/jstorch.js";
package/package.json CHANGED
@@ -1,22 +1,24 @@
  {
    "name": "mini-jstorch",
-   "version": "1.6.0",
+   "version": "1.7.0",
    "type": "module",
-   "description": "A lightweight JavaScript neural network library for rapid frontend AI experimentation on low-resource devices Inspired by PyTorch.",
+   "description": "A lightweight JavaScript neural network library for learning AI concepts and rapid Frontend experimentation. PyTorch-inspired, zero dependencies, perfect for educational use.",
    "main": "index.js",
    "keywords": [
      "neural-network",
      "javascript",
      "lightweight-torch",
      "lightweight",
-     "small",
+     "small-torch",
      "javascript-torch",
-     "ai",
+     "ai-model",
      "jstorch",
      "pytorch",
      "front-end",
      "machine-learning",
-     "mini"
+     "tiny-ml",
+     "frontend-ai",
+     "mini-neural-network"
    ],
    "author": "Rizal",
    "license": "MIT",
package/src/jstorch.js CHANGED
@@ -1,6 +1,6 @@
  /*!
   * Project: mini-jstorch
-  * File: MainEngine.js
+  * File: jstorch.js
   * Author: Rizal-editors
   * License: MIT
   * Copyright (C) 2025 Rizal-editors
@@ -24,8 +24,7 @@
   * SOFTWARE.
   */
 
-
- // ---------------------- Utilities ----------------------
+ // ---------------------- DONOT USE THESE (ENGINE INTERNALS) ----------------------
  export function zeros(rows, cols) {
    return Array.from({length:rows},()=>Array(cols).fill(0));
  }
@@ -75,7 +74,7 @@ export function crossEntropy(pred,target){
    return -target.reduce((sum,t,i)=>sum+t*Math.log(pred[i]+eps),0);
  }
 
- // ---------------------- USERS FRIENDLY UTILS ----------------
+ // ---------------------- USERS FRIENDLY UTILS (USE THIS!) ----------------
  export function fu_tensor(data, requiresGrad = false) {
    if (!Array.isArray(data) || !Array.isArray(data[0])) {
      throw new Error("fu_tensor: Data must be 2D array");
@@ -592,6 +591,148 @@ export class ReLU{
  }
  }
 
+ // ---------------------- Softmax ----------------------
+ export class Softmax {
+   constructor(dim = -1) {
+     this.dim = dim;
+     this.output = null;
+     this.input = null;
+   }
+
+   forward(x) {
+     this.input = x;
+
+     // x: [batch_size, num_classes]
+     this.output = x.map(row => {
+       const maxVal = Math.max(...row); // subtract the row max for numerical stability
+       const exps = row.map(v => Math.exp(v - maxVal));
+       const sumExps = exps.reduce((a, b) => a + b, 0);
+       return exps.map(v => v / sumExps);
+     });
+     return this.output;
+   }
+
+   backward(grad) {
+     // grad: [batch_size, num_classes] - gradient from next layer
+     const batchSize = grad.length;
+     const numClasses = grad[0].length;
+
+     const gradInput = zeros(batchSize, numClasses);
+
+     for (let i = 0; i < batchSize; i++) {
+       const s = this.output[i]; // Softmax output for this sample
+       const gradOut = grad[i];  // Gradient from loss
+
+       // Compute Jacobian matrix: J_jk = s_j * (δ_jk - s_k)
+       for (let j = 0; j < numClasses; j++) {
+         let sum = 0;
+         for (let k = 0; k < numClasses; k++) {
+           // J[j][k] = s[j] * ((j === k ? 1 : 0) - s[k])
+           const jacobian = s[j] * ((j === k ? 1 : 0) - s[k]);
+           sum += jacobian * gradOut[k];
+         }
+         gradInput[i][j] = sum;
+       }
+     }
+
+     return gradInput;
+   }
+
+   parameters() {
+     return []; // Softmax has no trainable parameters
+   }
+ }
+
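+ // Usage sketch (illustrative only, not part of the shipped file): inputs are
+ // 2D [batch, classes] arrays, and backward applies the full Jacobian-vector product.
+ //   const sm = new Softmax();
+ //   const probs = sm.forward([[2.0, 1.0, 0.1]]); // each row sums to 1
+ //   const gradIn = sm.backward([[1, 0, 0]]);
+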
+ // ---------------------- Tokenizer ----------------------
+ export class Tokenizer {
+   constructor(vocabSize = 2000){
+     this.vocabSize = vocabSize;
+     this.wordToIndex = new Map();
+     this.indexToWord = new Map();
+     this.fitted = false;
+   }
+
+   fit(texts){
+     const wordCounts = new Map();
+
+     // Count word frequencies from all texts
+     texts.forEach(text => {
+       const words = this._preprocess(text);
+       words.forEach(word => {
+         wordCounts.set(word, (wordCounts.get(word) || 0) + 1);
+       });
+     });
+
+     // Sort by frequency (descending) and take the top words
+     const sortedWords = [...wordCounts.entries()]
+       .sort((a, b) => b[1] - a[1])
+       .slice(0, this.vocabSize - 1); // Reserve 1 slot for the unknown token
+
+     // Build vocabulary
+     this.wordToIndex.clear();
+     this.indexToWord.clear();
+
+     // Add <UNK> token
+     this.wordToIndex.set('<UNK>', 0);
+     this.indexToWord.set(0, '<UNK>');
+
+     // Add most frequent words
+     sortedWords.forEach(([word], index) => {
+       this.wordToIndex.set(word, index + 1);
+       this.indexToWord.set(index + 1, word);
+     });
+
+     this.fitted = true;
+     return this;
+   }
+
+   tokenize(text){
+     if (!this.fitted) throw new Error("Tokenizer not fitted. Call fit() first.");
+
+     const words = this._preprocess(text);
+     return words.map(word => this.wordToIndex.get(word) || 0);
+   }
+
+   tokenizeBatch(texts, maxLength=null){
+     if (!this.fitted) throw new Error("Tokenizer not fitted. Call fit() first.");
+
+     return texts.map(text => {
+       const tokens = this.tokenize(text);
+
+       if (maxLength !== null){
+         // Pad or truncate to maxLength
+         if (tokens.length > maxLength){
+           return tokens.slice(0, maxLength);
+         } else {
+           return [...tokens, ...Array(maxLength - tokens.length).fill(0)];
+         }
+       }
+
+       return tokens;
+     });
+   }
+
+   detokenize(tokens){
+     return tokens.map(token => this.indexToWord.get(token) || '<UNK>').join(' ');
+   }
+
+   detokenizeBatch(tokenBatches){
+     return tokenBatches.map(tokens => this.detokenize(tokens));
+   }
+
+   getVocabSize(){
+     return this.wordToIndex.size;
+   }
+
+   _preprocess(text) {
+     return text.toLowerCase()
+       .replace(/[^\w\s]/g, ' ')          // Remove punctuation
+       .split(/\s+/)                      // Split on whitespace
+       .filter(word => word.length > 0);  // Drop empty strings
+   }
+ }
+
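+ // Usage sketch (illustrative only, not part of the shipped file):
+ //   const tok = new Tokenizer(100);
+ //   tok.fit(["the cat sat", "the dog sat"]);
+ //   tok.tokenize("the cat");            // ids depend on frequency order, e.g. [1, 4]
+ //   tok.tokenizeBatch(["the cat"], 4);  // padded with 0 (<UNK>) to length 4
+ //   tok.detokenize(tok.tokenize("the cat")); // "the cat"
+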
+ // I'm too lazy to break lines here, so everything stays in one line
  export class Sigmoid{ constructor(){ this.out=null; } forward(x){ const fn=v=>1/(1+Math.exp(-v)); this.out=x.map(r=>r.map(fn)); return this.out; } backward(grad){ return grad.map((r,i)=>r.map((v,j)=>v*this.out[i][j]*(1-this.out[i][j]))); } }
  export class Tanh{ constructor(){ this.out=null; } forward(x){ this.out=x.map(r=>r.map(v=>Math.tanh(v))); return this.out; } backward(grad){ return grad.map((r,i)=>r.map((v,j)=>v*(1-this.out[i][j]**2))); } }
  export class LeakyReLU{ constructor(alpha=0.01){ this.alpha=alpha; this.out=null; } forward(x){ this.out=x.map(r=>r.map(v=>v>0?v:v*this.alpha)); return this.out; } backward(grad){ return grad.map((r,i)=>r.map((v,j)=>v*(this.out[i][j]>0?1:this.alpha))); } }
@@ -664,6 +805,70 @@ export class Adam{
  }
  }
 
+ // ---------------------- AdamW Optimizer ----------------------
+ export class AdamW {
+   constructor(params, options = {}) {
+     const {
+       lr = 0.001,
+       beta1 = 0.9,
+       beta2 = 0.999,
+       eps = 1e-8,
+       weight_decay = 0.01,
+       max_grad_norm = 1.0
+     } = options;
+
+     this.params = params;
+     this.lr = lr;
+     this.beta1 = beta1;
+     this.beta2 = beta2;
+     this.eps = eps;
+     this.weight_decay = weight_decay;
+     this.max_grad_norm = max_grad_norm;
+
+     this.m = params.map(p => zeros(p.param.length, p.param[0].length || 1));
+     this.v = params.map(p => zeros(p.param.length, p.param[0].length || 1));
+     this.t = 0;
+   }
+
+   step() {
+     this.t++;
+     this.params.forEach((p, idx) => {
+       // Gradient clipping (same as Adam)
+       let grad_norm_sq = 0;
+       for (let i = 0; i < p.param.length; i++) {
+         for (let j = 0; j < (p.param[0].length || 1); j++) {
+           const grad_val = p.grad[i] && p.grad[i][j] !== undefined ? p.grad[i][j] : 0;
+           grad_norm_sq += grad_val * grad_val;
+         }
+       }
+       const grad_norm = Math.sqrt(grad_norm_sq);
+       const clip_scale = grad_norm > this.max_grad_norm ? this.max_grad_norm / grad_norm : 1.0;
+
+       // AdamW update: weight decay applied separately
+       for (let i = 0; i < p.param.length; i++) {
+         for (let j = 0; j < (p.param[0].length || 1); j++) {
+           if (p.grad[i] && p.grad[i][j] !== undefined) {
+             const g = p.grad[i][j] * clip_scale;
+
+             // Adam moments
+             this.m[idx][i][j] = this.beta1 * this.m[idx][i][j] + (1 - this.beta1) * g;
+             this.v[idx][i][j] = this.beta2 * this.v[idx][i][j] + (1 - this.beta2) * g * g;
+
+             const mHat = this.m[idx][i][j] / (1 - Math.pow(this.beta1, this.t));
+             const vHat = this.v[idx][i][j] / (1 - Math.pow(this.beta2, this.t));
+
+             // AdamW key difference: weight decay applied to weights, not gradients
+             p.param[i][j] -= this.lr * (
+               mHat / (Math.sqrt(vHat) + this.eps) +
+               this.weight_decay * p.param[i][j] // Decoupled weight decay
+             );
+           }
+         }
+       }
+     });
+   }
+ }
+
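+ // Usage sketch (illustrative only, not part of the shipped file); params entries
+ // use the same { param, grad } 2D-matrix shape the other optimizers expect:
+ //   const W = { param: [[0.5, -0.2]], grad: [[0.1, 0.3]] };
+ //   const opt = new AdamW([W], { lr: 1e-3, weight_decay: 0.01 });
+ //   opt.step(); // W.param -= lr * (mHat / (sqrt(vHat) + eps) + weight_decay * W.param)
+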
  export class SGD{
    constructor(params, lr = 0.01, max_grad_norm = 1.0) {
      this.params = params;