mini-jstorch 1.1.7 → 1.2.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md ADDED
@@ -0,0 +1,32 @@
+ ## Changelog ##
+
+ **Official Release:** Monday, August 18, 2025
+ **Version:** 1.1.9
+
+ All notable changes to *Mini-JSTorch* will be documented in this file.
+
+ ---
+
+ ## [1.1.9] - 2025-08-18
+
+ ### Added
+ - Complete overhaul of the core engine for stability and full feature support.
+ - Adam optimizer fully integrated with batch gradient handling.
+ - NaN-safe operations for forward/backward passes and loss calculations.
+ - Sequential container with robust parameter tracking.
+ - Utility functions hardened for numeric safety and matrix operations.
+ - Ready-to-use example scripts with multiple hidden layers for frontend experimentation.
+ - Full support for lightweight devices, optimized for speed and low memory usage.
+
+ ### Fixed
+ - Bugs in ReLU and gradient propagation that caused NaN loss values.
+ - Backward-propagation errors for batch matrices.
+ - Critical error during engine startup.
+
+ ### Changed
+ - Layer and activation API standardized for consistency.
+ - CrossEntropyLoss now safely handles edge cases in probability computations.
+ - System improvements to make the experience more stable and lighter on resources.
+ - Minor numeric stability improvements.
+
+ ---
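For reference, the `Adam` class added in the engine diff below implements the standard bias-corrected Adam update; with gradient $g_t$, learning rate $\alpha$, and the defaults $\beta_1=0.9$, $\beta_2=0.999$, $\epsilon=10^{-8}$ used in the code:

$$m_t = \beta_1 m_{t-1} + (1-\beta_1)\,g_t, \qquad v_t = \beta_2 v_{t-1} + (1-\beta_2)\,g_t^2$$

$$\hat m_t = \frac{m_t}{1-\beta_1^t}, \qquad \hat v_t = \frac{v_t}{1-\beta_2^t}, \qquad \theta_t = \theta_{t-1} - \alpha\,\frac{\hat m_t}{\sqrt{\hat v_t}+\epsilon}$$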
package/README.md CHANGED
@@ -1,17 +1,51 @@
- ## INTRODUCING MINI-JSTORCH ##
+ ## Mini-JSTorch ##
 
- **Mini-JSTorch** is a lightweight JavaScript Module Packages where it was inspired by **PyTorch** and this Module was designed to be used at any devices and any like frontend or backend. This JSTorch finally at their versions and not *BETA* again.
- This Module only Friendly at Frontend may he not run well while running at GPU or Backend We're gonna still update it so it still stable at backend and GPU.
+ ## NOTE: sorry for the disruption. In version 1.1.9 the full update was mistakenly shipped in the experiment files instead of MainEngine; this release places it in MainEngine. ##
 
+ # IMPORTANT!
+
+ This module will not run *well* on a GPU or in a backend environment.
+ We will keep *optimizing* it so that all users can still use this module.
+
+ ---
+
+ **Mini JSTorch** is a lightweight, high-performance JavaScript library designed for rapid prototyping of neural networks directly in the frontend environment and for low-end devices. Its core purpose is to enable experimentation and learning on devices with limited computing resources, without compromising stability or training reliability.
 
 ---
 
- ## Key Features
+ ## Vision
 
- - **Custom Model Creation:** Configure layers, neurons, learning rate, and more.
- - **Manual & Auto Gradient:** Toggle gradient computation mode per experiment.
- - **Sequence Handling:** Supports input sequences for NLP or time series tasks.
- - **Lightweight & Frontend-Friendly:** Pure JS implementation, no GPU dependency.
+ Our vision is to democratize AI experimentation by providing a tool that is:
+
+ - **Accessible**: Runs efficiently on *low-end* laptops or mobile devices.
+ - **Stable**: Handles gradient calculations and NaN values robustly.
+ - **Educational**: Clear abstractions for neural network layers, activations, and optimizers.
+ - **Fast**: Immediate output without lengthy training delays.
+
+ *Mini JSTorch* aims to bridge the gap between conceptual AI learning and hands-on experimentation, making it possible to explore neural network behavior interactively, even on minimal hardware.
+
+ ---
+
+ ## Features
+
+ - **Core Layers**
+   - `Linear`: Fully connected layer with weight and bias support.
+ - **Activations**
+   - `ReLU`: Rectified Linear Unit activation.
+   - `Sigmoid`: Sigmoid activation function.
+ - **Loss Functions**
+   - `CrossEntropyLoss`: Combines softmax and cross-entropy for classification tasks.
+ - **Optimizers**
+   - `Adam`: Adaptive optimizer for stable and efficient training.
+ - **Utilities**
+   - `zeros(rows, cols)`: Generate a zero matrix.
+   - `randomMatrix(rows, cols, scale)`: Random initialization for weights.
+   - `softmax(x)`: Compute softmax probabilities.
+   - `crossEntropy(pred, target)`: Compute cross-entropy loss.
+   - `dot(a, b)`: Matrix multiplication.
+   - `addMatrices(a, b)`: Element-wise addition of matrices.
+ - **Model Container**
+   - `Sequential`: Container to stack layers sequentially with easy forward and backward passes.
 
 ---
 
@@ -19,16 +53,81 @@ This Module only Friendly at Frontend may he not run well while running at GPU o
 
 ```bash
 npm install mini-jstorch
+ # Node.js v20+ is recommended for the best experience with this module
 ```
 
- ## Patch v1.1.7
+ ---
+
+ ## Example Scripts
+
+ ```javascript
+ import { Sequential, Linear, ReLU, Sigmoid, CrossEntropyLoss, Adam } from './engine/MainEngine.js';
+
+ // Construct a model with two hidden layers
+ const model = new Sequential([
+   new Linear(2, 4),
+   new ReLU(),
+   new Linear(4, 4),
+   new ReLU(),
+   new Linear(4, 2),
+   new Sigmoid()
+ ]);
+
+ // Training data (XOR)
+ const inputs = [
+   [0,0], [0,1], [1,0], [1,1]
+ ];
+ const targets = [
+   [1,0], [0,1], [0,1], [1,0]
+ ];
+
+ // Optimizer and loss
+ const optimizer = new Adam(model.parameters(), 0.01);
+ const lossFunc = new CrossEntropyLoss();
+
+ // Training loop
+ for(let epoch = 1; epoch <= 5000; epoch++) {
+   let totalLoss = 0;
+   for(let i = 0; i < inputs.length; i++) {
+     const output = model.forward([inputs[i]]);
+     const loss = lossFunc.forward(output, [targets[i]]);
+     totalLoss += loss;
+     // gradients are overwritten on each backward pass, so no explicit zeroing is needed
+     model.backward(lossFunc.backward());
+     optimizer.step();
+   }
+   if(epoch % 500 === 0) console.log(`Epoch ${epoch}, Loss: ${totalLoss.toFixed(6)}`);
+ }
+
+ // Predictions (forward returns a batch, so take the first row)
+ inputs.forEach(inp => {
+   const pred = model.forward([inp]);
+   console.log(`${inp} -> ${pred[0].map(p => p.toFixed(4))}`);
+ });
+ ```
+ ---
+
+ ## Intended Use Cases
+
+ - **Experimentation on low-end devices or mobile browsers.**
+ - **Learning and teaching foundational neural network concepts.**
+ - **Testing small to medium feedforward models in real time.**
+ - **Quick prototyping without GPU dependency or complex setup.**
+
+ ---
+
+ ## Roadmap
 
- - *ONLY* Adding a README.md Because i'm forgot btw.
+ - **Browser-based interactive playground.**
+ - **Additional activation functions and loss options.**
+ - **Real-time visualization of training metrics and loss curves.**
+ - **Support for small convolutional networks in frontend environments.**
 
 ---
 
- ## Notes
+ ## Facts
 
- - **One More. This Module are would not Run *WELL* at GPU or Backend**
- - **The Features from the Module are Nearly little near same system Feature Engine like *PyTorch***
- - **If you confusing about file 'startup.cpu' because the syntax are not similar to a program Language it was probably used for booting the Engine**
+ - **This module is implemented entirely in pure JavaScript.**
+ - **The `Dummy` folder contains modules used for development, testing, and debugging before integration into the main engine.**
+ - **This module was created by a `single` developer.**
+ - **The 23rd is going to be `CRAZY`.**
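The Features list in the README above names every public export; as a quick orientation, here is a minimal sketch of how the utility functions compose. It assumes the import path used by the README's own example (`./engine/MainEngine.js`, evidently the file whose diff follows) and the shapes shown in that file:

```javascript
import { zeros, randomMatrix, dot, addMatrices, softmax, crossEntropy } from './engine/MainEngine.js';

const W = randomMatrix(2, 3, 0.1);                   // 2x3 weights, values in [-0.1, 0.1]
const x = [[1, 0]];                                  // batch of one 2-feature sample

const logits = addMatrices(dot(x, W), zeros(1, 3));  // 1x3 matrix product (plus a no-op add)
const probs = softmax(logits[0]);                    // class probabilities summing to 1
const loss = crossEntropy(probs, [1, 0, 0]);         // NaN-safe via the internal 1e-12 epsilon

console.log(probs, loss.toFixed(4));
```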
package/engine/MainEngine.js CHANGED
@@ -1,148 +1,209 @@
- // ================================
- // MINI JS AI ENGINE v1
- // ================================
 
- // Basic tensor ops
- const zeros = n => Array(n).fill(0);
- const randn = () => (Math.random() * 2 - 1) * 0.1;
+ // MAINENGINE FILE [everything packed into one file]
+ // CURRENT VERSION: 0.0.4
+ // AUTHOR: Rizal
+ // LICENSE: MIT
 
- const dot = (a,b) => a.map(row=>b[0].map((_,j)=>row.reduce((sum,v,k)=>sum+v*b[k][j],0)));
- const add = (a,b) => a.map((row,i)=>row.map((v,j)=>v+b[i][j]));
- const sub = (a,b) => a.map((row,i)=>row.map((v,j)=>v-b[i][j]));
- const mulScalar = (a,s) => a.map(row=>row.map(v=>v*s));
- const transpose = m => m[0].map((_,i)=>m.map(row=>row[i]));
+ // Utils
+ export function zeros(rows, cols) {
+   return Array.from({length: rows}, () => Array(cols).fill(0));
+ }
 
- // Activations
- const Activations = {
-   relu: x => x.map(v=>Math.max(0,v)),
-   linear: x => x,
-   leakyRelu: (x, alpha=0.01) => x.map(v=>v>0?v:alpha*v)
- };
- const dActivations = {
-   relu: x => x.map(v=>v>0?1:0),
-   linear: x => x.map(_=>1),
-   leakyRelu: (x, alpha=0.01) => x.map(v=>v>0?1:alpha)
- };
-
- // Dense layer with manual grad & auto-grad
- class Dense {
-   constructor(inputSize, outputSize, activation='linear'){
-     this.inputSize = inputSize;
-     this.outputSize = outputSize;
-     this.activation = activation;
-     this.W = Array.from({length:inputSize},()=>Array.from({length:outputSize},()=>randn()*Math.sqrt(2/inputSize)));
-     this.b = Array(outputSize).fill(0);
-
-     // Adam variables
-     this.mW = mulScalar(this.W,0);
-     this.vW = mulScalar(this.W,0);
-     this.mb = Array(outputSize).fill(0);
-     this.vb = Array(outputSize).fill(0);
-     this.lastInput = null;
-     this.lastOutput = null;
-   }
-
-   forward(X){
-     this.lastInput = X;
-     let output = dot(X,this.W);
-     output = output.map((row,i)=>row.map((v,j)=>v+this.b[j]));
-     this.lastOutput = output.map(row => Activations[this.activation](row));
-     return this.lastOutput;
-   }
-
-   backward(dLoss, lr=0.001, beta1=0.9, beta2=0.999, t=1){
-     const flatOut = this.lastOutput.flat();
-     const actGrad = dActivations[this.activation](flatOut);
-     const dOut = dLoss.flat().map((v,i)=>v*actGrad[i]);
-
-     const gradW = Array.from({length:this.inputSize},()=>Array(this.outputSize).fill(0));
-     const gradB = Array(this.outputSize).fill(0);
-
-     for(let k=0;k<this.lastInput.length;k++){
-       for(let i=0;i<this.inputSize;i++){
-         for(let j=0;j<this.outputSize;j++){
-           gradW[i][j] += this.lastInput[k][i]*dOut[j]/this.lastInput.length;
-         }
-       }
-     }
-     for(let j=0;j<this.outputSize;j++) gradB[j] = dOut[j]/this.lastInput.length;
-
-     // Adam update with bias correction
-     for(let i=0;i<this.inputSize;i++){
-       for(let j=0;j<this.outputSize;j++){
-         this.mW[i][j] = beta1*this.mW[i][j]+(1-beta1)*gradW[i][j];
-         this.vW[i][j] = beta2*this.vW[i][j]+(1-beta2)*gradW[i][j]*gradW[i][j];
-         const mHat = this.mW[i][j]/(1-Math.pow(beta1,t));
-         const vHat = this.vW[i][j]/(1-Math.pow(beta2,t));
-         this.W[i][j] -= lr*mHat/(Math.sqrt(vHat)+1e-8);
-       }
-     }
+ export function randomMatrix(rows, cols, scale=0.1) {
+   return Array.from({length: rows}, () =>
+     Array.from({length: cols}, () => (Math.random()*2-1)*scale)
+   );
+ }
 
-     for(let j=0;j<this.outputSize;j++){
-       this.mb[j] = beta1*this.mb[j]+(1-beta1)*gradB[j];
-       this.vb[j] = beta2*this.vb[j]+(1-beta2)*gradB[j]*gradB[j];
-       const mHat = this.mb[j]/(1-Math.pow(beta1,t));
-       const vHat = this.vb[j]/(1-Math.pow(beta2,t));
-       this.b[j] -= lr*mHat/(Math.sqrt(vHat)+1e-8);
-     }
+ export function softmax(x) {
+   const maxVal = Math.max(...x);
+   const exps = x.map(v => Math.exp(v - maxVal));
+   const sumExps = exps.reduce((a,b)=>a+b, 0);
+   return exps.map(v => v / sumExps);
+ }
 
-     // Return dLoss for next layer (manual grad)
-     const dNext = Array(this.lastInput.length).fill(0).map(_=>Array(this.inputSize).fill(0));
-     for(let i=0;i<this.inputSize;i++){
-       for(let j=0;j<this.outputSize;j++){
-         for(let k=0;k<this.lastInput.length;k++){
-           dNext[k][i] += dOut[j]*this.W[i][j];
-         }
-       }
+ export function crossEntropy(pred, target) {
+   const eps = 1e-12;
+   return -target.reduce((sum, t, i) => sum + t * Math.log(pred[i] + eps), 0);
+ }
+
+ export function addMatrices(a, b) {
+   return a.map((row, i) =>
+     row.map((v, j) => v + (b[i] && b[i][j] !== undefined ? b[i][j] : 0))
+   );
+ }
+
+ export function dot(a, b) {
+   const result = zeros(a.length, b[0].length);
+   for (let i=0;i<a.length;i++) {
+     for (let j=0;j<b[0].length;j++) {
+       let sum=0;
+       for (let k=0;k<a[0].length;k++) sum += a[i][k]*b[k][j];
+       result[i][j]=sum;
     }
-     return dNext;
   }
+   return result;
 }
 
- // Sequential model
- class Seq {
-   constructor(layers){
-     this.layers = layers;
-     this.logs = [];
+ // Layers
+ export class Linear {
+   constructor(inputDim, outputDim) {
+     this.W = randomMatrix(inputDim, outputDim);
+     this.b = Array(outputDim).fill(0);
+     this.gradW = zeros(inputDim, outputDim);
+     this.gradb = Array(outputDim).fill(0);
+     this.x = null;
   }
 
-   forward(X){
-     return this.layers.reduce((inp,layer)=>layer.forward(inp), X);
+   forward(x) {
+     this.x = x;
+     const out = dot(x, this.W);
+     return out.map((row,i) => row.map((v,j)=>v+this.b[j]));
   }
 
-   train(X,y,epochs=200, lr=0.001){
-     for(let epoch=1;epoch<=epochs;epoch++){
-       const yPred = this.forward(X);
-       const dLoss = yPred.map((row,i)=>row.map((v,j)=>2*(v - y[i][j])/y.length));
+   backward(grad) {
+     // grad shape: batch x outputDim
+     for (let i=0;i<this.W.length;i++)
+       for (let j=0;j<this.W[0].length;j++)
+         this.gradW[i][j] = this.x.reduce((sum,row,k)=>sum+row[i]*grad[k][j],0);
+
+     for (let j=0;j<this.b.length;j++)
+       this.gradb[j] = grad.reduce((sum,row)=>sum+row[j],0);
+
+     // propagate to input
+     const gradInput = zeros(this.x.length, this.W.length);
+     for (let i=0;i<this.x.length;i++)
+       for (let j=0;j<this.W.length;j++)
+         for (let k=0;k<this.W[0].length;k++)
+           gradInput[i][j]+=grad[i][k]*this.W[j][k];
+     return gradInput;
+   }
 
-       let grad = dLoss;
-       for(let l=this.layers.length-1;l>=0;l--){
-         grad = this.layers[l].backward(grad, lr, 0.9, 0.999, epoch);
-       }
+   parameters() {
+     return [
+       {param: this.W, grad: this.gradW},
+       {param: [this.b], grad: [this.gradb]} // wrap b in an array for consistency
+     ];
+   }
+ }
 
-       const loss = yPred.map((row,i)=>row.map((v,j)=>Math.pow(v - y[i][j],2))).flat().reduce((a,b)=>a+b,0)/y.length;
-       if(epoch%50===0) console.log(`Epoch ${epoch}, Loss ${loss.toFixed(4)}, Pred sample ${yPred[0][0].toFixed(2)}`);
-     }
+ // Activations
+ export class ReLU {
+   constructor() { this.out=null; }
+   forward(x) {
+     this.out = Array.isArray(x[0]) ? x.map(r=>r.map(v=>Math.max(0,v))) : x.map(v=>Math.max(0,v));
+     return this.out;
+   }
+   backward(grad) {
+     return Array.isArray(grad[0])
+       ? grad.map((r,i)=>r.map((v,j)=>v*(this.out[i][j]>0?1:0)))
+       : grad.map((v,i)=>v*(this.out[i]>0?1:0));
+   }
+ }
+
+ export class Sigmoid {
+   constructor() { this.out=null; }
+   forward(x) {
+     const sigmoidFn = v=>1/(1+Math.exp(-v));
+     this.out = Array.isArray(x[0]) ? x.map(r=>r.map(sigmoidFn)) : x.map(sigmoidFn);
+     return this.out;
+   }
+   backward(grad) {
+     return Array.isArray(grad[0])
+       ? grad.map((r,i)=>r.map((v,j)=>v*this.out[i][j]*(1-this.out[i][j])))
+       : grad.map((v,i)=>v*this.out[i]*(1-this.out[i]));
+   }
+ }
+
+ // Loss wrapper
+ export class CrossEntropyLoss {
+   forward(pred, target) {
+     // pred: batch x classes, target: batch x classes
+     const losses = pred.map((p,i)=>crossEntropy(softmax(p), target[i]));
+     this.pred = pred;
+     this.target = target;
+     return losses.reduce((a,b)=>a+b,0)/pred.length;
   }
 
-   predict(X){
-     return this.forward(X);
+   backward() {
+     // gradient of softmax + CE: softmax(pred) - target, averaged over the batch
+     const grad = [];
+     for (let i=0;i<this.pred.length;i++) {
+       const s = softmax(this.pred[i]);
+       grad.push(s.map((v,j)=>v-this.target[i][j]));
+     }
+     return grad.map(r=>r.map(v=>v/this.pred.length));
   }
 }
 
- // ================================
- // EXAMPLE USAGE
- // ================================
+ // Optimizer: Adam
+ export class Adam {
+   constructor(params, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8) {
+     this.params = params;
+     this.lr = lr;
+     this.beta1 = beta1;
+     this.beta2 = beta2;
+     this.eps = eps;
+     this.m = params.map(p=>zeros(p.param.length, p.param[0].length || 1));
+     this.v = params.map(p=>zeros(p.param.length, p.param[0].length || 1));
+     this.t=0;
+   }
 
- const model = new Seq([
-   new Dense(2,16,'relu'),
-   new Dense(16,12,'relu'),
-   new Dense(12,1,'linear')
- ]);
+   step() {
+     this.t++;
+     this.params.forEach((p,idx)=>{
+       for (let i=0;i<p.param.length;i++){
+         for (let j=0;j<(p.param[0].length||1);j++){
+           const g = p.grad[i][j];
+           this.m[idx][i][j] = this.beta1*this.m[idx][i][j] + (1-this.beta1)*g;
+           this.v[idx][i][j] = this.beta2*this.v[idx][i][j] + (1-this.beta2)*g*g;
+           const mHat = this.m[idx][i][j]/(1-Math.pow(this.beta1,this.t));
+           const vHat = this.v[idx][i][j]/(1-Math.pow(this.beta2,this.t));
+           p.param[i][j]-= this.lr*mHat/(Math.sqrt(vHat)+this.eps);
+         }
+       }
+     });
+   }
+ }
 
- const X = [[1,2],[2,3],[3,4]];
- const y = [[3],[5],[7]];
+ // Sequential container
+ export class Sequential {
+   constructor(layers=[]) { this.layers=layers; }
+   forward(x) {
+     return this.layers.reduce((input, layer)=>layer.forward(input), x);
+   }
+   backward(grad) {
+     return this.layers.reduceRight((g,layer)=>layer.backward(g), grad);
+   }
+   parameters() {
+     return this.layers.flatMap(l=>l.parameters ? l.parameters() : []);
+   }
+ }
 
- model.train(X,y,200,0.01);
+ // Appended at the end of the main mini-jstorch file
+ export function saveModel(model) {
+   if (!(model instanceof Sequential)) {
+     throw new Error("saveModel only supports Sequential models");
+   }
+   const weights = model.layers.map(layer => ({
+     weights: layer.W ? layer.W : null,
+     biases: layer.b ? layer.b : null
+   }));
+   return JSON.stringify(weights);
+ }
+
+ export function loadModel(model, json) {
+   if (!(model instanceof Sequential)) {
+     throw new Error("loadModel only supports Sequential models");
+   }
+   const weights = JSON.parse(json);
+   model.layers.forEach((layer, i) => {
+     if (layer.W && weights[i].weights) {
+       layer.W = weights[i].weights;
+     }
+     if (layer.b && weights[i].biases) {
+       layer.b = weights[i].biases;
+     }
+   });
+ }
 
- console.log('Prediction [7,8]:', model.predict([[7,8]]));
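The `saveModel`/`loadModel` helpers above are not yet covered by the README. A minimal round-trip sketch, assuming the same engine import path as the README example; note that only `W` and `b` are serialized, so the architecture must be rebuilt before loading:

```javascript
import { Sequential, Linear, ReLU, saveModel, loadModel } from './engine/MainEngine.js';

const model = new Sequential([new Linear(2, 4), new ReLU(), new Linear(4, 2)]);
const json = saveModel(model); // layers without W/b (e.g. ReLU) are stored as nulls

// Rebuild the same architecture, then restore the weights into it
const restored = new Sequential([new Linear(2, 4), new ReLU(), new Linear(4, 2)]);
loadModel(restored, json);

console.log(restored.forward([[0, 1]])); // matches model.forward([[0, 1]])
```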
package/index.js CHANGED
@@ -1,4 +1,4 @@
- // Entry point of the library, export main classes and functions
+ // Entry point of the library, export main classes and functions [DEPRECATED]
  export { Seq } from './models/seq.js';
  export { Dense } from './layers/dense.js';
  export * as act from './act/linear.js';
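Note that `index.js` still re-exports only the legacy `Seq`/`Dense` API, so the rewritten engine's classes presumably have to be imported from the engine file directly until the entry point is updated. The deep path below is an assumption based on the README example, not a documented export:

```javascript
// Legacy entry point (marked [DEPRECATED] above)
import { Seq, Dense } from 'mini-jstorch';

// New engine API (assumed deep import; adjust to the actual file location)
import { Sequential, Linear, Adam } from 'mini-jstorch/engine/MainEngine.js';
```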
package/package.json CHANGED
@@ -1,16 +1,18 @@
  {
    "name": "mini-jstorch",
-   "version": "1.1.7",
+   "version": "1.2.0",
    "type": "module",
-   "description": "A lightweight JavaScript neural network framework for browser & Node.js, inspired by PyTorch.",
+   "description": "A lightweight JavaScript neural network library for rapid frontend AI experimentation on low-resource devices, inspired by PyTorch.",
    "main": "index.js",
    "keywords": [
      "neural-network",
      "javascript",
      "lightweight",
      "ai",
+     "jstorch",
+     "pytorch",
+     "front-end",
      "machine-learning",
-     "browser",
      "mini"
    ],
    "author": "Rizal",