xy-scale 1.0.2 → 1.0.5
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +59 -31
- package/dist/xy-scale.min.js +1 -1
- package/index.js +2 -2
- package/package.json +3 -2
- package/src/datasets.js +41 -41
- package/test/test.js +64 -0
package/README.md
CHANGED

The README was updated for the renamed API: `parseTrainingDataset` → `parseTrainingXY`, `parseProductionDataset` → `parseProductionX`, and the `parseLabels` helper docs now refer to `yCallbackFunc`. Updated content:

This repository provides utilities for scaling and preparing datasets in JavaScript, with a primary focus on data preprocessing for machine learning applications. The main functionality includes scaling numerical and categorical data and splitting datasets into training and testing sets.

The primary functions, `parseTrainingXY` and `parseProductionX`, offer a flexible and modular approach to data handling, allowing users to define custom scaling approaches, weighting of X, and specific parsing rules for X and Y.

---

## Main Functions

### 1. `parseTrainingXY`

This function prepares a dataset for supervised learning by parsing, scaling, and splitting it into training and testing subsets. It includes configurable options for feature weighting and scaling approaches.

#### Parameters:
- `arrObj` (Array of Objects): Input data array containing all X and Y.
- `trainingSplit` (Number, optional): Defines the training dataset size (default `0.8`).
- `weights` (Object, optional): Feature weights for scaling.
- `yCallbackFunc` (Function): Custom function to parse Y for each object.
- `xCallbackFunc` (Function): Custom function to parse X for each object.
- `forceScaling` (String, optional): Forces a specific scaling approach for each feature.

#### Features:
- **Y and X Parsing**: Custom parsing for Y and X based on user-defined functions.
- **Configurable Scaling and Splitting**: Scales X and Y independently and splits data into training and testing sets.

#### Scaling Approaches:
- **Normalization**: Scales values to a range of `[0, 1]`.
- **Standardization**: Scales values to have a mean of `0` and a standard deviation of `1`.
- **Automatic Selection (Default)**: If `forceScaling = null`, the function automatically selects between `'normalization'` and `'standardization'` for each feature.
  - **Normalization** is chosen for X with lower variance (small difference between mean and standard deviation), scaling values to a `[0, 1]` range.
  - **Standardization** is applied when higher variance is detected (large difference between mean and standard deviation), centering values with a mean of `0` and a standard deviation of `1`.

This adaptive scaling approach ensures the most effective transformation is applied based on each feature's statistical properties.
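The automatic selection described above can be sketched as follows. This is a simplified reading of the bundled build (the function name is ours), where the deciding statistic appears to be the per-feature sample standard deviation, with a threshold of `1`:

```javascript
// Simplified sketch of the automatic scaling selection (name is ours; the
// bundled build appears to use: sample standard deviation below 1 ->
// 'normalization', otherwise 'standardization'). Assumes at least 2 values.
const chooseScaling = (values, forceScaling = null) => {
  if (forceScaling === 'normalization' || forceScaling === 'standardization') {
    return forceScaling;
  }
  const n = values.length;
  const mean = values.reduce((a, b) => a + b, 0) / n;
  const variance = values.reduce((a, b) => a + (b - mean) ** 2, 0) / (n - 1);
  return Math.sqrt(variance) < 1 ? 'normalization' : 'standardization';
};

// Tightly clustered feature -> normalization; widely spread -> standardization.
console.log(chooseScaling([0.1, 0.2, 0.15]));  // 'normalization'
console.log(chooseScaling([10, 200, 3000]));   // 'standardization'
```

Passing a `forceScaling` value overrides the heuristic for every feature, matching the `forceScaling` parameter described above.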
#### Returns:
- `trainX`, `trainY`, `testX`, `testY`: Scaled feature and label arrays for training and testing sets.
- `trainXConfig`, `trainYConfig`: Scaling configuration for X and Y.
- `trainXKeyNames`, `trainLabelKeyNames`: Key names reflecting feature weights.

### 2. `parseProductionX`

Designed for production environments, this function parses and scales feature data for unseen production datasets. Like `parseTrainingXY`, it includes options for feature weighting and scaling.

#### Parameters:
- `arrObj` (Array of Objects): Input data array for production.
- `weights` (Object, optional): Feature weights for scaling.
- `xCallbackFunc` (Function): Custom function to parse X for each object.
- `forceScaling` (String, optional): Forces a specific scaling approach for each feature.

#### Returns:
- `x`: Scaled feature array for production data.
- `xConfig`: Scaling configuration for production data.
- `xKeyNames`: Key names reflecting feature weights.

## Helper Callback Functions for Custom Data Parsing

### `xCallbackFunc`

The `xCallbackFunc` function is used to extract specific feature values from each row of data, defining what the model will use as input. By selecting relevant fields in the dataset, `xCallbackFunc` ensures only the necessary values are included in the model's feature set, allowing for streamlined preprocessing and improved model performance.

### `yCallbackFunc`

The `yCallbackFunc` function defines the target output (or Y) that the machine learning model will learn to predict. This function typically creates Y by comparing each row of data with a future data point, which is especially useful in time-series data for predictive tasks. In our example, `yCallbackFunc` generates Y based on changes between the current and next rows, which can help the model learn to predict directional trends.

---
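The "key names reflecting feature weights" returned above follow a simple rule that can be read out of the bundled build: a key with weight `w` occupies `w` columns in the scaled output, so its name is repeated `w` times. A minimal sketch (function name is ours; weights are assumed to be positive integers):

```javascript
// Sketch of how feature weights appear to expand into key names: a key with
// weight w is repeated w times, so its scaled value fills w output columns.
// Keys absent from `weights` default to weight 1.
const expandKeyNames = (keys, weights = {}) =>
  keys.flatMap(key => {
    const w = weights[key] ?? 1;
    if (w <= 0) throw new Error(`Weight for key "${key}" must be positive.`);
    return Array(w).fill(key);
  });

console.log(expandKeyNames(['open', 'high'], { open: 2 }));
// -> ['open', 'open', 'high']
```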
1. **Parsing and Splitting a Training Dataset:**

```javascript
import { parseTrainingXY } from './scale.js';

const myArray = [
    { open: 135.23, high: 137.45, low: 134.56, sma_200: 125.34, sma_100: 130.56 },
    { open: 136.45, high: 138.67, low: 135.67, sma_200: 126.78, sma_100: 131.45 },
    { open: 137.89, high: 139.34, low: 136.34, sma_200: 127.56, sma_100: 132.78 }
];

const xCallbackFunc = ({ objRow, index }) => {
    const curr = objRow[index];
    const { open, high, low, sma_200, sma_100 } = curr;

    return {
        open,
        high,
        low,
        sma_200,
        sma_100
    };
};

const yCallbackFunc = ({ objRow, index }) => {
    const curr = objRow[index];
    const next = objRow[index + 1];

    if (typeof next === 'undefined') return null;

    return {
        label_1: next.open > curr.open, // Label indicating if the next open price is higher than the current
        label_2: next.high > curr.high, // Label indicating if the next high price is higher than the current
        label_3: next.low > curr.low, // Label indicating if the next low price is higher than the current
        label_4: next.sma_200 > curr.sma_200, // Label indicating if the next 200-day SMA is higher than the current
        label_5: next.sma_100 > curr.sma_100 // Label indicating if the next 100-day SMA is higher than the current
    };
};

const trainingData = parseTrainingXY({
    arrObj: myArray,
    trainingSplit: 0.75,
    weights: { open: 1, high: 1, low: 1, sma_200: 1, sma_100: 1 },
    yCallbackFunc,
    xCallbackFunc,
    forceScaling: 'normalization'
});
```
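For the three-row `myArray` above, the resulting split sizes can be worked out by hand (`pairs` and `splitIndex` are our names for the intermediate quantities):

```javascript
// yCallbackFunc returns null for the last row (there is no "next" row), and
// only rows where both X and Y parse survive, so 3 input rows yield 2 pairs.
// The train/test boundary is then Math.floor(pairs * trainingSplit).
const pairs = 3 - 1;                         // 2 usable (X, Y) pairs
const splitIndex = Math.floor(pairs * 0.75); // trainingSplit: 0.75 -> 1

console.log({ train: splitIndex, test: pairs - splitIndex }); // { train: 1, test: 1 }
```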
2. **Parsing a Production Dataset:**

```javascript
import { parseProductionX } from './scale.js';

const xCallbackFunc = ({ objRow, index }) => {
    const curr = objRow[index];
    const { open, high, low, sma_200, sma_100 } = curr;

    return {
        open,
        high,
        low,
        sma_200,
        sma_100
    };
};

const productionData = parseProductionX({
    arrObj: productionArray,
    weights: { open: 2, high: 1, low: 1, sma_200: 1, sma_100: 1 },
    xCallbackFunc,
    forceScaling: null
});
```

---

### Upcoming Feature: Optional Precision Handling with Big.js and BigNumber.js

In the next release, we are introducing an optional **precision** feature to enhance decimal precision in financial and scientific datasets. It will allow users to integrate **Big.js** or **BigNumber.js** seamlessly into their data processing workflow via a new `precision` property on the parameters of `parseTrainingXY` and `parseProductionX`.

#### How Precision Handling Will Work

With the new `precision` property, users can pass either the Big.js or the BigNumber.js constructor as a callback to handle high-precision decimal calculations. This makes the integration fully optional, allowing flexibility based on the precision requirements of the dataset. When `precision` is set, the toolkit will use the specified library for all numeric computations, ensuring high precision and minimizing rounding errors.

1. **Future Example Usage:**

```javascript
import Big from 'big.js';
import BigNumber from 'bignumber.js';
import { parseTrainingXY, parseProductionX } from './scale.js';

const trainingData = parseTrainingXY({
    arrObj: myArray,
    trainingSplit: 0.75,
    weights: { open: 1, high: 1, low: 1, sma_200: 1, sma_100: 1 },
    yCallbackFunc,
    xCallbackFunc,
    precision: Big, // Big or BigNumber callbacks for high-precision calculations
    forceScaling: 'normalization'
});
```

---

## Technical Details

- **Error Handling**: Validates scaling approach values and ensures positive feature weights.
package/dist/xy-scale.min.js
CHANGED

The bundled build was regenerated for the renamed exports (`parseTrainingXY`, `parseProductionX`). New minified content (a single line):

```javascript
var XY_Scale;(()=>{"use strict";var e={d:(t,r)=>{for(var n in r)e.o(r,n)&&!e.o(t,n)&&Object.defineProperty(t,n,{enumerable:!0,get:r[n]})},o:(e,t)=>Object.prototype.hasOwnProperty.call(e,t),r:e=>{"undefined"!=typeof Symbol&&Symbol.toStringTag&&Object.defineProperty(e,Symbol.toStringTag,{value:"Module"}),Object.defineProperty(e,"__esModule",{value:!0})}},t={};e.r(t),e.d(t,{parseProductionX:()=>o,parseTrainingXY:()=>n});const r=({arrObj:e,weights:t={},forceScaling:r=null})=>{if(null!==r&&"normalization"!==r&&"standardization"!==r)throw Error('forceScalling should be null, "normalization" or "standardization"');const n=e.length;if(0===n)return{scaledOutput:[],scaledConfig:{},keyNames:[]};const o=Object.keys(e[0]),a=o.map((e=>{if(t.hasOwnProperty(e)){const r=t[e];if(r<=0)throw new Error(`Weight for key "${e}" must be positive.`);return r}return 1})),s=a.reduce(((e,t)=>e+t),0),i=new Array(s);let l=0;for(let e=0;e<o.length;e++){const t=o[e],r=a[e];for(let e=0;e<r;e++)i[l++]=t}const c={},u={},f={},d={},g={},p={},y={},b={};for(const t of o){const r=e[0][t];c[t]=typeof r,"string"===c[t]&&(y[t]={}),u[t]=1/0,f[t]=-1/0,d[t]=0,g[t]=0,b[t]=0}for(const t of e)for(const e of o){let r=t[e];if("string"===c[e]){const n=y[e];n.hasOwnProperty(r)||(n[r]=Object.keys(n).length),r=n[r],t[e]=r}r<u[e]&&(u[e]=r),r>f[e]&&(f[e]=r),b[e]++;const n=r-d[e];d[e]+=n/b[e],g[e]+=n*(r-d[e])}const h={};for(const e of o)h[e]=b[e]>1?Math.sqrt(g[e]/(b[e]-1)):0,p[e]="normalization"===r||"standardization"===r?r:h[e]<1?"normalization":"standardization";const m=new Array(n);for(let t=0;t<n;t++){const r=e[t],n=new Array(s);let i=0;for(let e=0;e<o.length;e++){const t=o[e],s=r[t],l=u[t],c=f[t],g=d[t],y=h[t];let b;b="normalization"===p[t]?c!==l?(s-l)/(c-l):0:0!==y?(s-g)/y:0;const m=a[e];for(let e=0;e<m;e++)n[i++]=b}m[t]=n}return{scaledOutput:m,scaledConfig:{min:u,max:f,std:h,mean:d,approach:p,inputTypes:c,uniqueStringIndexes:y},scaledKeyNames:i}},n=({arrObj:e,trainingSplit:t=.8,weights:n={},yCallbackFunc:o,xCallbackFunc:a,forceScaling:s})=>{const i=[],l=[];for(let t=0;t<e.length;t++){const r=a({objRow:e,index:t}),n=o({objRow:e,index:t});r&&n&&(i.push(r),l.push(n))}const{scaledOutput:c,scaledConfig:u,scaledKeyNames:f}=r({arrObj:i,weights:n,forceScaling:s}),{scaledOutput:d,scaledConfig:g,scaledKeyNames:p}=r({arrObj:l,weights:n,forceScaling:s}),y=Math.floor(c.length*t);return{trainX:c.slice(0,y),trainY:d.slice(0,y),testX:c.slice(y),testY:d.slice(y),trainXConfig:u,trainXKeyNames:f,trainYConfig:g,trainLabelKeyNames:p}},o=({arrObj:e,weights:t={},xCallbackFunc:n,forceScaling:o})=>{const a=[];for(let t=0;t<e.length;t++){const r=n({objRow:e,index:t});r&&a.push(r)}const{scaledOutput:s,scaledConfig:i,scaledKeyNames:l}=r({arrObj:a,weights:t,forceScaling:o});return{x:s,xConfig:i,xKeyNames:l}};XY_Scale=t})();
```
package/index.js
CHANGED

Updated to re-export the renamed functions:

```javascript
import { parseTrainingXY, parseProductionX } from "./src/datasets.js"

export { parseTrainingXY, parseProductionX }
```
package/package.json
CHANGED
package/src/datasets.js
CHANGED

Rewritten for the renamed API (`parseTrainingXY`, `parseProductionX`). New content:

```javascript
import { scaleArrayObj } from "./scale.js";

export const parseTrainingXY = ({ arrObj, trainingSplit = 0.8, weights = {}, yCallbackFunc, xCallbackFunc, forceScaling }) => {
    const X = [];
    const Y = [];

    for (let x = 0; x < arrObj.length; x++) {
        const parsedX = xCallbackFunc({ objRow: arrObj, index: x });
        const parsedY = yCallbackFunc({ objRow: arrObj, index: x });

        if (parsedX && parsedY) {
            X.push(parsedX)
            Y.push(parsedY)
        }
    }

    // Scale X and Y, if applicable
    const {
        scaledOutput: scaledX,
        scaledConfig: trainXConfig,
        scaledKeyNames: trainXKeyNames
    } = scaleArrayObj({arrObj: X, weights, forceScaling})

    const {
        scaledOutput: scaledY,
        scaledConfig: trainYConfig,
        scaledKeyNames: trainYKeyNames
    } = scaleArrayObj({arrObj: Y, weights, forceScaling})

    const splitIndex = Math.floor(scaledX.length * trainingSplit)

    // Split into training and testing sets
    return {
        trainX: scaledX.slice(0, splitIndex),
        trainY: scaledY.slice(0, splitIndex),
        testX: scaledX.slice(splitIndex),
        testY: scaledY.slice(splitIndex),

        trainXConfig,
        trainXKeyNames,
        trainYConfig,
        trainYKeyNames
    };
};

export const parseProductionX = ({ arrObj, weights = {}, xCallbackFunc, forceScaling }) => {
    const X = [];

    for (let x = 0; x < arrObj.length; x++) {
        const parsedX = xCallbackFunc({ objRow: arrObj, index: x })

        if (parsedX) {
            X.push(parsedX)
        }
    }

    // Scale X, if applicable
    const {
        scaledOutput: scaledX,
        scaledConfig: xConfig,
        scaledKeyNames: xKeyNames
    } = scaleArrayObj({arrObj: X, weights, forceScaling})

    return {
        x: scaledX,
        xConfig,
        xKeyNames
    }
};
```
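`scaleArrayObj` itself lives in `./src/scale.js` and is not part of this diff. The following stand-in, inferred from the call sites above, illustrates only the contract (`scaledOutput`, `scaledConfig`, `scaledKeyNames`) using min-max normalization; the real implementation also handles standardization, string encoding, and feature weights:

```javascript
// Minimal stand-in for the scaleArrayObj contract that datasets.js relies on.
// A sketch only: weights are accepted but ignored here, and only min-max
// normalization is implemented.
const scaleArrayObjSketch = ({ arrObj, weights = {}, forceScaling = null }) => {
  if (arrObj.length === 0) return { scaledOutput: [], scaledConfig: {}, scaledKeyNames: [] };
  const keys = Object.keys(arrObj[0]);
  const min = {}, max = {};
  for (const k of keys) {
    const col = arrObj.map(row => row[k]);
    min[k] = Math.min(...col);
    max[k] = Math.max(...col);
  }
  // Min-max normalization per column; constant columns map to 0.
  const scaledOutput = arrObj.map(row =>
    keys.map(k => (max[k] === min[k] ? 0 : (row[k] - min[k]) / (max[k] - min[k])))
  );
  return {
    scaledOutput,
    scaledConfig: { min, max, approach: 'normalization' },
    scaledKeyNames: keys
  };
};

const { scaledOutput } = scaleArrayObjSketch({ arrObj: [{ a: 0 }, { a: 10 }] });
console.log(scaledOutput); // [[0], [1]]
```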
package/test/test.js
ADDED

```javascript
import { parseTrainingXY, parseProductionX } from "../src/datasets.js";

const test = () => {

    const myArray = [
        { open: 135.23, high: 137.45, low: 134.56, sma_200: 125.34, sma_100: 130.56 },
        { open: 136.45, high: 138.67, low: 135.67, sma_200: 126.78, sma_100: 131.45 },
        { open: 137.89, high: 139.34, low: 136.34, sma_200: 127.56, sma_100: 132.78 }
    ];

    const xCallbackFunc = ({ objRow, index }) => {
        const curr = objRow[index];
        const { open, high, low, sma_200, sma_100 } = curr;

        return {
            open,
            high,
            low,
            sma_200,
            sma_100
        };
    };

    const yCallbackFunc = ({ objRow, index }) => {
        const curr = objRow[index];
        const next = objRow[index + 1];

        if (typeof next === 'undefined') return null;

        return {
            label_1: next.open > curr.open, // Label indicating if the next open price is higher than the current
            label_2: next.high > curr.high, // Label indicating if the next high price is higher than the current
            label_3: next.low > curr.low, // Label indicating if the next low price is higher than the current
            label_4: next.sma_200 > curr.sma_200, // Label indicating if the next 200-day SMA is higher than the current
            label_5: next.sma_100 > curr.sma_100 // Label indicating if the next 100-day SMA is higher than the current
        };
    };

    const trainingData = parseTrainingXY({
        arrObj: myArray,
        trainingSplit: 0.75,
        weights: { open: 1, high: 1, low: 1, sma_200: 1, sma_100: 1 },
        yCallbackFunc,
        xCallbackFunc,
        forceScaling: 'normalization'
    });

    //console.log(trainingData)

    const productionData = parseProductionX({
        arrObj: myArray,
        weights: { open: 2, high: 1, low: 1, sma_200: 1, sma_100: 1 },
        xCallbackFunc,
        forceScaling: null
    })

    console.log(productionData)

}

test()
```