@winm2m/inferential-stats-js 0.1.4 → 0.1.5

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (2)
  1. package/README.md +58 -58
  2. package/package.json +1 -1
package/README.md CHANGED
@@ -84,9 +84,9 @@ Computes a frequency distribution for a categorical variable, including absolute

  **Relative frequency:**

- ![formula](https://latex.codecogs.com/svg.image?f_i=\frac{n_i}{N})
+ ![formula](https://latex.codecogs.com/svg.image?f_i%3D%5Cfrac%7Bn_i%7D%7BN%7D)

- where $n_i$ is the count of category $i$ and $N$ is the total number of observations. Cumulative percentage is the running sum of $f_i \times 100$.
+ where ![formula](https://latex.codecogs.com/svg.image?n_i) is the count of category ![formula](https://latex.codecogs.com/svg.image?i) and ![formula](https://latex.codecogs.com/svg.image?N) is the total number of observations. Cumulative percentage is the running sum of ![formula](https://latex.codecogs.com/svg.image?f_i%5Ctimes%20100).

  ---

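The relative-frequency and cumulative-percentage definitions in this hunk can be sketched in plain Python (a minimal illustration; `frequency_table` is a hypothetical helper name, not part of this package's API):

```python
from collections import Counter

def frequency_table(values):
    """Absolute count n_i, relative frequency f_i = n_i / N, and
    cumulative percentage (running sum of f_i * 100) per category."""
    counts = Counter(values)
    n = len(values)
    table, cum = {}, 0.0
    for cat, ni in counts.most_common():
        fi = ni / n
        cum += fi * 100
        table[cat] = {"count": ni, "rel": fi, "cum_pct": cum}
    return table

t = frequency_table(["a", "a", "b", "c"])
```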
@@ -98,19 +98,19 @@ Produces summary statistics for one or more numeric variables: count, mean, stan

  **Arithmetic mean:**

- ![formula](https://latex.codecogs.com/svg.image?\bar{x}=\frac{1}{N}\sum_{i=1}^{N}x_i)
+ ![formula](https://latex.codecogs.com/svg.image?%5Cbar%7Bx%7D%3D%5Cfrac%7B1%7D%7BN%7D%5Csum_%7Bi%3D1%7D%5E%7BN%7Dx_i)

  **Sample standard deviation (Bessel-corrected):**

- ![formula](https://latex.codecogs.com/svg.image?s=\sqrt{\frac{1}{N-1}\sum_{i=1}^{N}(x_i-\bar{x})^2})
+ ![formula](https://latex.codecogs.com/svg.image?s%3D%5Csqrt%7B%5Cfrac%7B1%7D%7BN-1%7D%5Csum_%7Bi%3D1%7D%5E%7BN%7D(x_i-%5Cbar%7Bx%7D)%5E2%7D)

  **Skewness (Fisher):**

- ![formula](https://latex.codecogs.com/svg.image?g_1=\frac{m_3}{m_2^{3/2}},\quad m_k=\frac{1}{N}\sum_{i=1}^{N}(x_i-\bar{x})^k)
+ ![formula](https://latex.codecogs.com/svg.image?g_1%3D%5Cfrac%7Bm_3%7D%7Bm_2%5E%7B3%2F2%7D%7D%2C%5Cquad%20m_k%3D%5Cfrac%7B1%7D%7BN%7D%5Csum_%7Bi%3D1%7D%5E%7BN%7D(x_i-%5Cbar%7Bx%7D)%5Ek)

  **Excess kurtosis (Fisher):**

- ![formula](https://latex.codecogs.com/svg.image?g_2=\frac{m_4}{m_2^2}-3)
+ ![formula](https://latex.codecogs.com/svg.image?g_2%3D%5Cfrac%7Bm_4%7D%7Bm_2%5E2%7D-3)

  ---

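The four descriptive formulas in this hunk (mean, Bessel-corrected s, Fisher g1, Fisher excess g2) can be checked with a small pure-Python sketch (illustrative only, not the package's implementation):

```python
import math

def describe(xs):
    """Mean, sample SD, Fisher skewness g1, and Fisher excess kurtosis g2."""
    n = len(xs)
    mean = sum(xs) / n
    # Sample SD uses N-1 (Bessel correction)
    s = math.sqrt(sum((x - mean) ** 2 for x in xs) / (n - 1))

    def m(k):  # central moment m_k = (1/N) * sum (x_i - mean)^k
        return sum((x - mean) ** k for x in xs) / n

    g1 = m(3) / m(2) ** 1.5
    g2 = m(4) / m(2) ** 2 - 3
    return mean, s, g1, g2

mean, s, g1, g2 = describe([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
```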
@@ -122,15 +122,15 @@ Cross-tabulates two categorical variables and tests for independence using Pears

  **Pearson's Chi-square statistic:**

- ![formula](https://latex.codecogs.com/svg.image?\chi^2=\sum\frac{(O_{ij}-E_{ij})^2}{E_{ij}})
+ ![formula](https://latex.codecogs.com/svg.image?%5Cchi%5E2%3D%5Csum%5Cfrac%7B(O_%7Bij%7D-E_%7Bij%7D)%5E2%7D%7BE_%7Bij%7D%7D)

- where $O_{ij}$ is the observed frequency in cell $(i, j)$ and $E_{ij} = \frac{R_i \cdot C_j}{N}$ is the expected frequency under independence.
+ where ![formula](https://latex.codecogs.com/svg.image?O_%7Bij%7D) is the observed frequency in cell (![formula](https://latex.codecogs.com/svg.image?i%2C%20j)) and ![formula](https://latex.codecogs.com/svg.image?E_%7Bij%7D%3D%5Cfrac%7BR_i%5Ccdot%20C_j%7D%7BN%7D) is the expected frequency under independence.

  **Cramér's V:**

- ![formula](https://latex.codecogs.com/svg.image?V=\sqrt{\frac{\chi^2}{N\cdot(k-1)}})
+ ![formula](https://latex.codecogs.com/svg.image?V%3D%5Csqrt%7B%5Cfrac%7B%5Cchi%5E2%7D%7BN%5Ccdot(k-1)%7D%7D)

- where $k = \min(\text{rows}, \text{cols})$.
+ where ![formula](https://latex.codecogs.com/svg.image?k%3D%5Cmin(%5Ctext%7Brows%7D%2C%5Ctext%7Bcols%7D)).

  ---

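A minimal sketch of the chi-square and Cramér's V computation above, assuming the contingency table is a list of row lists (hypothetical helper, not the package's API):

```python
import math

def chi_square(observed):
    """Pearson chi-square and Cramér's V for an r x c contingency table."""
    rows = [sum(r) for r in observed]
    cols = [sum(c) for c in zip(*observed)]
    n = sum(rows)
    chi2 = 0.0
    for i, row in enumerate(observed):
        for j, o in enumerate(row):
            e = rows[i] * cols[j] / n  # expected count under independence
            chi2 += (o - e) ** 2 / e
    k = min(len(rows), len(cols))
    v = math.sqrt(chi2 / (n * (k - 1)))
    return chi2, v

chi2, v = chi_square([[10, 20], [20, 10]])
```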
@@ -144,15 +144,15 @@ Compares the means of a numeric variable between two independent groups. Automat

  **T-statistic (equal variance assumed):**

- ![formula](https://latex.codecogs.com/svg.image?t=\frac{\bar{X}_1-\bar{X}_2}{S_p\sqrt{\frac{1}{n_1}+\frac{1}{n_2}}})
+ ![formula](https://latex.codecogs.com/svg.image?t%3D%5Cfrac%7B%5Cbar%7BX%7D_1-%5Cbar%7BX%7D_2%7D%7BS_p%5Csqrt%7B%5Cfrac%7B1%7D%7Bn_1%7D%2B%5Cfrac%7B1%7D%7Bn_2%7D%7D%7D)

  **Pooled standard deviation:**

- ![formula](https://latex.codecogs.com/svg.image?S_p=\sqrt{\frac{(n_1-1)s_1^2+(n_2-1)s_2^2}{n_1+n_2-2}})
+ ![formula](https://latex.codecogs.com/svg.image?S_p%3D%5Csqrt%7B%5Cfrac%7B(n_1-1)s_1%5E2%2B(n_2-1)s_2%5E2%7D%7Bn_1%2Bn_2-2%7D%7D)

- **Degrees of freedom:** $df = n_1 + n_2 - 2$
+ **Degrees of freedom:** ![formula](https://latex.codecogs.com/svg.image?df%3Dn_1%2Bn_2-2)

- When Levene's test is significant ($p < 0.05$), Welch's t-test is recommended, which uses the Welch–Satterthwaite approximation for degrees of freedom.
+ When Levene's test is significant (![formula](https://latex.codecogs.com/svg.image?p%3C0.05)), Welch's t-test is recommended, which uses the Welch–Satterthwaite approximation for degrees of freedom.

  ---

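The equal-variance t-statistic and pooled SD above can be sketched directly from the formulas (illustrative pure-Python helper; the Welch variant is not shown):

```python
import math

def pooled_t(x1, x2):
    """Independent-samples t with pooled SD; returns (t, df)."""
    n1, n2 = len(x1), len(x2)
    m1, m2 = sum(x1) / n1, sum(x2) / n2
    v1 = sum((x - m1) ** 2 for x in x1) / (n1 - 1)  # sample variances
    v2 = sum((x - m2) ** 2 for x in x2) / (n2 - 1)
    sp = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    t = (m1 - m2) / (sp * math.sqrt(1 / n1 + 1 / n2))
    return t, n1 + n2 - 2

t_stat, df = pooled_t([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
```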
@@ -164,11 +164,11 @@ Tests whether the mean difference between two paired measurements is significant

  **T-statistic:**

- ![formula](https://latex.codecogs.com/svg.image?t=\frac{\bar{D}}{S_D/\sqrt{n}})
+ ![formula](https://latex.codecogs.com/svg.image?t%3D%5Cfrac%7B%5Cbar%7BD%7D%7D%7BS_D%2F%5Csqrt%7Bn%7D%7D)

- where $\bar{D} = \frac{1}{n}\sum_{i=1}^{n}(X_{1i} - X_{2i})$ is the mean difference and $S_D$ is the standard deviation of the differences.
+ where ![formula](https://latex.codecogs.com/svg.image?%5Cbar%7BD%7D%3D%5Cfrac%7B1%7D%7Bn%7D%5Csum_%7Bi%3D1%7D%5E%7Bn%7D(X_%7B1i%7D-X_%7B2i%7D)) is the mean difference and ![formula](https://latex.codecogs.com/svg.image?S_D) is the standard deviation of the differences.

- **Degrees of freedom:** $df = n - 1$
+ **Degrees of freedom:** ![formula](https://latex.codecogs.com/svg.image?df%3Dn-1)

  ---

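The paired t-test above reduces to a one-sample t-test on the differences, as this sketch shows (illustrative helper name):

```python
import math

def paired_t(x1, x2):
    """Paired-samples t on the differences; returns (t, df)."""
    n = len(x1)
    d = [a - b for a, b in zip(x1, x2)]
    dbar = sum(d) / n
    # Sample SD of the differences (N-1 denominator)
    sd = math.sqrt(sum((di - dbar) ** 2 for di in d) / (n - 1))
    return dbar / (sd / math.sqrt(n)), n - 1

t_stat, df = paired_t([3, 4, 5, 6], [1, 2, 3, 2])
```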
@@ -180,23 +180,23 @@ Tests whether the means of a numeric variable differ significantly across three

  **F-statistic:**

- ![formula](https://latex.codecogs.com/svg.image?F=\frac{MS_{between}}{MS_{within}})
+ ![formula](https://latex.codecogs.com/svg.image?F%3D%5Cfrac%7BMS_%7Bbetween%7D%7D%7BMS_%7Bwithin%7D%7D)

  **Sum of Squares Between Groups:**

- ![formula](https://latex.codecogs.com/svg.image?SS_{between}=\sum_{j=1}^{k}n_j(\bar{X}_j-\bar{X})^2)
+ ![formula](https://latex.codecogs.com/svg.image?SS_%7Bbetween%7D%3D%5Csum_%7Bj%3D1%7D%5E%7Bk%7Dn_j(%5Cbar%7BX%7D_j-%5Cbar%7BX%7D)%5E2)

  **Sum of Squares Within Groups:**

- ![formula](https://latex.codecogs.com/svg.image?SS_{within}=\sum_{j=1}^{k}\sum_{i=1}^{n_j}(X_{ij}-\bar{X}_j)^2)
+ ![formula](https://latex.codecogs.com/svg.image?SS_%7Bwithin%7D%3D%5Csum_%7Bj%3D1%7D%5E%7Bk%7D%5Csum_%7Bi%3D1%7D%5E%7Bn_j%7D(X_%7Bij%7D-%5Cbar%7BX%7D_j)%5E2)

  **Mean Squares:**

- ![formula](https://latex.codecogs.com/svg.image?MS_{between}=\frac{SS_{between}}{k-1},\quad MS_{within}=\frac{SS_{within}}{N-k})
+ ![formula](https://latex.codecogs.com/svg.image?MS_%7Bbetween%7D%3D%5Cfrac%7BSS_%7Bbetween%7D%7D%7Bk-1%7D%2C%5Cquad%20MS_%7Bwithin%7D%3D%5Cfrac%7BSS_%7Bwithin%7D%7D%7BN-k%7D)

  **Effect size (Eta-squared):**

- ![formula](https://latex.codecogs.com/svg.image?\eta^2=\frac{SS_{between}}{SS_{total}})
+ ![formula](https://latex.codecogs.com/svg.image?%5Ceta%5E2%3D%5Cfrac%7BSS_%7Bbetween%7D%7D%7BSS_%7Btotal%7D%7D)

  ---

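The decomposition above (SS between, SS within, mean squares, F, eta-squared) can be traced with a compact sketch; here SS_total is computed as SS_between + SS_within, which holds exactly for this decomposition (illustrative helper, not the package's code):

```python
def one_way_anova(groups):
    """One-way ANOVA F statistic and eta-squared for lists of groups."""
    n = sum(len(g) for g in groups)
    k = len(groups)
    grand = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    f = (ss_between / (k - 1)) / (ss_within / (n - k))
    eta2 = ss_between / (ss_between + ss_within)  # SS_total = SS_b + SS_w
    return f, eta2

f, eta2 = one_way_anova([[1, 2, 3], [2, 3, 4], [5, 6, 7]])
```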
@@ -208,9 +208,9 @@ Performs pairwise comparisons of group means following a significant ANOVA resul
208
208
 
209
209
  **Studentized range statistic:**
210
210
 
211
- ![formula](https://latex.codecogs.com/svg.image?q=\frac{\bar{X}_i-\bar{X}_j}{\sqrt{MS_W/n}})
211
+ ![formula](https://latex.codecogs.com/svg.image?q%3D%5Cfrac%7B%5Cbar%7BX%7D_i-%5Cbar%7BX%7D_j%7D%7B%5Csqrt%7BMS_W%2Fn%7D%7D)
212
212
 
213
- where $MS_W$ is the within-group mean square from the ANOVA and $n$ is the harmonic mean of group sizes. The critical $q$ value is obtained from the Studentized Range distribution with $k$ groups and $N - k$ degrees of freedom.
213
+ where ![formula](https://latex.codecogs.com/svg.image?MS_W) is the within-group mean square from the ANOVA and ![formula](https://latex.codecogs.com/svg.image?n) is the harmonic mean of group sizes. The critical ![formula](https://latex.codecogs.com/svg.image?q) value is obtained from the Studentized Range distribution with ![formula](https://latex.codecogs.com/svg.image?k) groups and ![formula](https://latex.codecogs.com/svg.image?N-k) degrees of freedom.
214
214
 
215
215
  ---
216
216
 
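Only the q statistic itself is easy to sketch; looking up the critical value from the Studentized Range distribution requires special tables or a stats library and is not shown (hypothetical helper, harmonic mean of group sizes as described above):

```python
import math

def tukey_q(mean_i, mean_j, ms_w, sizes):
    """Studentized range statistic q using the harmonic mean of group sizes."""
    n_h = len(sizes) / sum(1 / n for n in sizes)  # harmonic mean
    return abs(mean_i - mean_j) / math.sqrt(ms_w / n_h)

q = tukey_q(6.0, 2.0, 1.0, [3, 3, 3])
```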
@@ -218,43 +218,43 @@ where $MS_W$ is the within-group mean square from the ANOVA and $n$ is the harmo

  #### Linear Regression (OLS)

- Fits an Ordinary Least Squares regression model with one or more independent variables. Reports regression coefficients, standard errors, t-statistics, p-values, confidence intervals, $R^2$, adjusted $R^2$, F-test, and the Durbin-Watson statistic for autocorrelation detection.
+ Fits an Ordinary Least Squares regression model with one or more independent variables. Reports regression coefficients, standard errors, t-statistics, p-values, confidence intervals, ![formula](https://latex.codecogs.com/svg.image?R%5E2), adjusted ![formula](https://latex.codecogs.com/svg.image?R%5E2), F-test, and the Durbin-Watson statistic for autocorrelation detection.

  **Python implementation:** `statsmodels.api.OLS`

  **Model:**

- ![formula](https://latex.codecogs.com/svg.image?Y=\beta_0+\beta_1X_1+\cdots+\beta_pX_p+\epsilon)
+ ![formula](https://latex.codecogs.com/svg.image?Y%3D%5Cbeta_0%2B%5Cbeta_1X_1%2B%5Ccdots%2B%5Cbeta_pX_p%2B%5Cepsilon)

- where $\epsilon \sim N(0, \sigma^2)$.
+ where ![formula](https://latex.codecogs.com/svg.image?%5Cepsilon%5Csim%20N(0%2C%5Csigma%5E2)).

  **OLS estimator:**

- ![formula](https://latex.codecogs.com/svg.image?\hat{\beta}=(X^TX)^{-1}X^TY)
+ ![formula](https://latex.codecogs.com/svg.image?%5Chat%7B%5Cbeta%7D%3D(X%5ETX)%5E%7B-1%7DX%5ETY)

  **Coefficient of determination:**

- ![formula](https://latex.codecogs.com/svg.image?R^2=1-\frac{SS_{res}}{SS_{tot}})
+ ![formula](https://latex.codecogs.com/svg.image?R%5E2%3D1-%5Cfrac%7BSS_%7Bres%7D%7D%7BSS_%7Btot%7D%7D)

- where $SS_{res} = \sum(Y_i - \hat{Y}_i)^2$ and $SS_{tot} = \sum(Y_i - \bar{Y})^2$.
+ where ![formula](https://latex.codecogs.com/svg.image?SS_%7Bres%7D%3D%5Csum(Y_i-%5Chat%7BY%7D_i)%5E2) and ![formula](https://latex.codecogs.com/svg.image?SS_%7Btot%7D%3D%5Csum(Y_i-%5Cbar%7BY%7D)%5E2).

  ---

  #### Binary Logistic Regression

- Models the probability of a binary outcome as a function of one or more independent variables. Reports coefficients (log-odds), odds ratios, z-statistics, p-values, pseudo-$R^2$, AIC, and BIC.
+ Models the probability of a binary outcome as a function of one or more independent variables. Reports coefficients (log-odds), odds ratios, z-statistics, p-values, pseudo-![formula](https://latex.codecogs.com/svg.image?R%5E2), AIC, and BIC.

  **Python implementation:** `statsmodels.discrete.discrete_model.Logit`

  **Logit link function:**

- ![formula](https://latex.codecogs.com/svg.image?\ln\left(\frac{p}{1-p}\right)=\beta_0+\beta_1X_1+\cdots+\beta_pX_p)
+ ![formula](https://latex.codecogs.com/svg.image?%5Cln%5Cleft(%5Cfrac%7Bp%7D%7B1-p%7D%5Cright)%3D%5Cbeta_0%2B%5Cbeta_1X_1%2B%5Ccdots%2B%5Cbeta_pX_p)

  **Predicted probability:**

- ![formula](https://latex.codecogs.com/svg.image?P(Y=1|X)=\frac{1}{1+e^{-(\beta_0+\beta_1X_1+\cdots+\beta_pX_p)}})
+ ![formula](https://latex.codecogs.com/svg.image?P(Y%3D1%7CX)%3D%5Cfrac%7B1%7D%7B1%2Be%5E%7B-(%5Cbeta_0%2B%5Cbeta_1X_1%2B%5Ccdots%2B%5Cbeta_pX_p)%7D%7D)

- Coefficients are estimated by Maximum Likelihood Estimation (MLE). The odds ratio for predictor $j$ is $e^{\beta_j}$.
+ Coefficients are estimated by Maximum Likelihood Estimation (MLE). The odds ratio for predictor j is ![formula](https://latex.codecogs.com/svg.image?e%5E%7B%5Cbeta_j%7D).

  ---

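For the one-predictor case, the matrix estimator in the OLS hunk above collapses to the familiar closed-form slope and intercept, which this sketch uses along with the R-squared definition (illustrative helper, not `statsmodels.api.OLS` itself):

```python
def ols_simple(x, y):
    """Closed-form OLS for a single predictor; equivalent to
    beta_hat = (X^T X)^{-1} X^T Y with X = [1, x]. Returns (b0, b1, R^2)."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    b1 = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
          / sum((xi - xbar) ** 2 for xi in x))
    b0 = ybar - b1 * xbar
    ss_res = sum((yi - (b0 + b1 * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return b0, b1, 1 - ss_res / ss_tot

b0, b1, r2 = ols_simple([1, 2, 3, 4], [2, 4, 6, 8])
```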
@@ -264,15 +264,15 @@ Extends binary logistic regression to outcomes with more than two unordered cate

  **Python implementation:** `sklearn.linear_model.LogisticRegression(multi_class='multinomial')`

- **Log-odds relative to reference category $K$:**
+ **Log-odds relative to reference category ![formula](https://latex.codecogs.com/svg.image?K):**

- ![formula](https://latex.codecogs.com/svg.image?\ln\left(\frac{P(Y=k)}{P(Y=K)}\right)=\beta_{k0}+\beta_{k1}X_1+\cdots+\beta_{kp}X_p)
+ ![formula](https://latex.codecogs.com/svg.image?%5Cln%5Cleft(%5Cfrac%7BP(Y%3Dk)%7D%7BP(Y%3DK)%7D%5Cright)%3D%5Cbeta_%7Bk0%7D%2B%5Cbeta_%7Bk1%7DX_1%2B%5Ccdots%2B%5Cbeta_%7Bkp%7DX_p)

- for each category $k \neq K$.
+ for each category ![formula](https://latex.codecogs.com/svg.image?k%5Cneq%20K).

  **Predicted probability via softmax:**

- ![formula](https://latex.codecogs.com/svg.image?P(Y=k|X)=\frac{e^{\beta_{k0}+\beta_{k1}X_1+\cdots+\beta_{kp}X_p}}{\sum_{j=1}^{K}e^{\beta_{j0}+\beta_{j1}X_1+\cdots+\beta_{jp}X_p}})
+ ![formula](https://latex.codecogs.com/svg.image?P(Y%3Dk%7CX)%3D%5Cfrac%7Be%5E%7B%5Cbeta_%7Bk0%7D%2B%5Cbeta_%7Bk1%7DX_1%2B%5Ccdots%2B%5Cbeta_%7Bkp%7DX_p%7D%7D%7B%5Csum_%7Bj%3D1%7D%5E%7BK%7De%5E%7B%5Cbeta_%7Bj0%7D%2B%5Cbeta_%7Bj1%7DX_1%2B%5Ccdots%2B%5Cbeta_%7Bjp%7DX_p%7D%7D)

  ---

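The softmax mapping in this hunk, from per-class linear scores to class probabilities, is a one-liner in Python (illustrative only; in practice one subtracts the max score first for numerical stability):

```python
import math

def softmax_probs(scores):
    """P(Y=k|X) from per-class scores beta_k0 + beta_k . x."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

p = softmax_probs([0.0, math.log(3.0)])  # odds 1 : 3
```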
@@ -280,15 +280,15 @@ for each category $k \neq K$.

  #### K-Means Clustering

- Partitions observations into $K$ clusters by iteratively assigning points to the nearest centroid and updating centroids until convergence.
+ Partitions observations into ![formula](https://latex.codecogs.com/svg.image?K) clusters by iteratively assigning points to the nearest centroid and updating centroids until convergence.

  **Python implementation:** `sklearn.cluster.KMeans`

  **Objective function (inertia):**

- ![formula](https://latex.codecogs.com/svg.image?J=\sum_{j=1}^{K}\sum_{i\in C_j}\|x_i-\mu_j\|^2)
+ ![formula](https://latex.codecogs.com/svg.image?J%3D%5Csum_%7Bj%3D1%7D%5E%7BK%7D%5Csum_%7Bi%5Cin%20C_j%7D%5C%7Cx_i-%5Cmu_j%5C%7C%5E2)

- where $C_j$ is the set of observations in cluster $j$ and $\mu_j$ is the centroid. The algorithm minimizes $J$ using Lloyd's algorithm (Expectation-Maximization style).
+ where ![formula](https://latex.codecogs.com/svg.image?C_j) is the set of observations in cluster j and ![formula](https://latex.codecogs.com/svg.image?%5Cmu_j) is the centroid. The algorithm minimizes J using Lloyd's algorithm (Expectation-Maximization style).

  ---

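The inertia objective J above is just a sum of squared Euclidean distances from each point to its assigned centroid; a sketch given points, centroids, and cluster labels (illustrative, not `sklearn.cluster.KMeans`):

```python
def inertia(points, centroids, labels):
    """J = total squared Euclidean distance of each point to its centroid."""
    return sum(
        sum((a - b) ** 2 for a, b in zip(p, centroids[l]))
        for p, l in zip(points, labels)
    )

j = inertia([(0, 0), (2, 0), (10, 0)], [(1, 0), (10, 0)], [0, 0, 1])
```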
@@ -300,9 +300,9 @@ Builds a hierarchy of clusters using a bottom-up approach. Supports Ward, comple

  **Ward's minimum variance method** (default):

- ![formula](https://latex.codecogs.com/svg.image?\Delta(A,B)=\frac{n_A n_B}{n_A+n_B}\|\bar{x}_A-\bar{x}_B\|^2)
+ ![formula](https://latex.codecogs.com/svg.image?%5CDelta(A%2CB)%3D%5Cfrac%7Bn_A%20n_B%7D%7Bn_A%2Bn_B%7D%5C%7C%5Cbar%7Bx%7D_A-%5Cbar%7Bx%7D_B%5C%7C%5E2)

- At each step, the pair of clusters $(A, B)$ that produces the smallest increase in total within-cluster variance is merged. Ward's method tends to produce compact, equally sized clusters.
+ At each step, the pair of clusters (A, B) that produces the smallest increase in total within-cluster variance is merged. Ward's method tends to produce compact, equally sized clusters.

  ---

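The Ward merge cost above can be sketched for one-dimensional clusters for brevity (illustrative helper; real linkage works on arbitrary-dimension points):

```python
def ward_delta(a, b):
    """Increase in within-cluster variance from merging 1-D clusters a, b:
    Delta(A, B) = n_A * n_B / (n_A + n_B) * (mean_A - mean_B)^2."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    return na * nb / (na + nb) * (ma - mb) ** 2

d = ward_delta([0.0, 2.0], [4.0, 6.0])
```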
@@ -316,15 +316,15 @@ Discovers latent factors underlying a set of observed variables. Supports varima

  **Factor model:**

- ![formula](https://latex.codecogs.com/svg.image?X=\Lambda F+\epsilon)
+ ![formula](https://latex.codecogs.com/svg.image?X%3D%5CLambda%20F%2B%5Cepsilon)

- where $X$ is the observed variable vector, $\Lambda$ is the matrix of factor loadings, $F$ is the vector of latent factors, and $\epsilon$ is the unique variance.
+ where ![formula](https://latex.codecogs.com/svg.image?X) is the observed variable vector, ![formula](https://latex.codecogs.com/svg.image?%5CLambda) is the matrix of factor loadings, ![formula](https://latex.codecogs.com/svg.image?F) is the vector of latent factors, and ![formula](https://latex.codecogs.com/svg.image?%5Cepsilon) is the unique variance.

  **Kaiser-Meyer-Olkin (KMO) measure:**

- ![formula](https://latex.codecogs.com/svg.image?KMO=\frac{\sum\sum_{i\neq j} r_{ij}^2}{\sum\sum_{i\neq j} r_{ij}^2+\sum\sum_{i\neq j} u_{ij}^2})
+ ![formula](https://latex.codecogs.com/svg.image?KMO%3D%5Cfrac%7B%5Csum%5Csum_%7Bi%5Cneq%20j%7D%20r_%7Bij%7D%5E2%7D%7B%5Csum%5Csum_%7Bi%5Cneq%20j%7D%20r_%7Bij%7D%5E2%2B%5Csum%5Csum_%7Bi%5Cneq%20j%7D%20u_%7Bij%7D%5E2%7D)

- where $r_{ij}$ are elements of the correlation matrix and $u_{ij}$ are elements of the partial correlation matrix. KMO values above 0.6 are generally considered acceptable for factor analysis.
+ where ![formula](https://latex.codecogs.com/svg.image?r_%7Bij%7D) are elements of the correlation matrix and ![formula](https://latex.codecogs.com/svg.image?u_%7Bij%7D) are elements of the partial correlation matrix. KMO values above 0.6 are generally considered acceptable for factor analysis.

  ---

@@ -334,15 +334,15 @@ Finds orthogonal components that maximize variance in the data. Reports componen

  **Python implementation:** `sklearn.decomposition.PCA`

- **Objective:** Find the weight vector $w$ that maximizes projected variance:
+ **Objective:** Find the weight vector ![formula](https://latex.codecogs.com/svg.image?w) that maximizes projected variance:

- ![formula](https://latex.codecogs.com/svg.image?\text{Var}(Xw)\to\max\quad\text{subject to}\quad\|w\|=1)
+ ![formula](https://latex.codecogs.com/svg.image?%5Ctext%7BVar%7D(Xw)%5Cto%5Cmax%5Cquad%5Ctext%7Bsubject%20to%7D%5Cquad%5C%7Cw%5C%7C%3D1)

- This is equivalent to finding the eigenvectors of the covariance matrix $\Sigma = \frac{1}{N-1}X^TX$. The eigenvalues $\lambda_1 \geq \lambda_2 \geq \cdots$ represent the variance explained by each component.
+ This is equivalent to finding the eigenvectors of the covariance matrix ![formula](https://latex.codecogs.com/svg.image?%5CSigma%3D%5Cfrac%7B1%7D%7BN-1%7DX%5ETX). The eigenvalues ![formula](https://latex.codecogs.com/svg.image?%5Clambda_1%5Cgeq%5Clambda_2%5Cgeq%5Ccdots) represent the variance explained by each component.

  **Explained variance ratio:**

- ![formula](https://latex.codecogs.com/svg.image?\text{EVR}_k=\frac{\lambda_k}{\sum_{i=1}^{p}\lambda_i})
+ ![formula](https://latex.codecogs.com/svg.image?%5Ctext%7BEVR%7D_k%3D%5Cfrac%7B%5Clambda_k%7D%7B%5Csum_%7Bi%3D1%7D%5E%7Bp%7D%5Clambda_i%7D)

  ---

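Given the eigenvalues of the covariance matrix, the explained-variance ratio above is a simple normalization (illustrative sketch; computing the eigenvalues themselves is left to a linear-algebra library):

```python
def explained_variance_ratio(eigenvalues):
    """EVR_k = lambda_k / sum of all eigenvalues."""
    total = sum(eigenvalues)
    return [lam / total for lam in eigenvalues]

evr = explained_variance_ratio([3.0, 1.0])
```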
@@ -354,9 +354,9 @@ Projects high-dimensional data into a lower-dimensional space (typically 2D) whi

  **Stress function (Kruskal's Stress-1):**

- ![formula](https://latex.codecogs.com/svg.image?\sigma=\sqrt{\frac{\sum_{i<j}(d_{ij}-\delta_{ij})^2}{\sum_{i<j}d_{ij}^2}})
+ ![formula](https://latex.codecogs.com/svg.image?%5Csigma%3D%5Csqrt%7B%5Cfrac%7B%5Csum_%7Bi%3Cj%7D(d_%7Bij%7D-%5Cdelta_%7Bij%7D)%5E2%7D%7B%5Csum_%7Bi%3Cj%7Dd_%7Bij%7D%5E2%7D%7D)

- where $d_{ij}$ is the distance in the reduced space and $\delta_{ij}$ is the original distance (or a monotonic transformation for non-metric MDS). A stress value below 0.1 is generally considered a good fit.
+ where ![formula](https://latex.codecogs.com/svg.image?d_%7Bij%7D) is the distance in the reduced space and ![formula](https://latex.codecogs.com/svg.image?%5Cdelta_%7Bij%7D) is the original distance (or a monotonic transformation for non-metric MDS). A stress value below 0.1 is generally considered a good fit.

  ---

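Stress-1 as defined above, sketched over flat lists of the upper-triangle (i < j) distances rather than full matrices (illustrative helper):

```python
import math

def stress1(d, delta):
    """Kruskal's Stress-1: d = reduced-space distances, delta = original
    distances, both flattened over pairs i < j in the same order."""
    num = sum((di - de) ** 2 for di, de in zip(d, delta))
    den = sum(di ** 2 for di in d)
    return math.sqrt(num / den)

s0 = stress1([1.0, 2.0, 2.0], [1.0, 2.0, 2.0])  # perfect preservation
```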
@@ -370,17 +370,17 @@ Measures the internal consistency (reliability) of a set of scale items. Reports

  **Cronbach's alpha (raw):**

- ![formula](https://latex.codecogs.com/svg.image?\alpha=\frac{K}{K-1}\left(1-\frac{\sum_{i=1}^{K}\sigma_{Y_i}^2}{\sigma_X^2}\right))
+ ![formula](https://latex.codecogs.com/svg.image?%5Calpha%3D%5Cfrac%7BK%7D%7BK-1%7D%5Cleft(1-%5Cfrac%7B%5Csum_%7Bi%3D1%7D%5E%7BK%7D%5Csigma_%7BY_i%7D%5E2%7D%7B%5Csigma_X%5E2%7D%5Cright))

- where $K$ is the number of items, $\sigma_{Y_i}^2$ is the variance of item $i$, and $\sigma_X^2$ is the variance of the total score.
+ where ![formula](https://latex.codecogs.com/svg.image?K) is the number of items, ![formula](https://latex.codecogs.com/svg.image?%5Csigma_%7BY_i%7D%5E2) is the variance of item i, and ![formula](https://latex.codecogs.com/svg.image?%5Csigma_X%5E2) is the variance of the total score.

  **Standardized alpha (based on mean inter-item correlation):**

- ![formula](https://latex.codecogs.com/svg.image?\alpha_{std}=\frac{K\bar{r}}{1+(K-1)\bar{r}})
+ ![formula](https://latex.codecogs.com/svg.image?%5Calpha_%7Bstd%7D%3D%5Cfrac%7BK%5Cbar%7Br%7D%7D%7B1%2B(K-1)%5Cbar%7Br%7D%7D)

- where $\bar{r}$ is the mean of all pairwise Pearson correlations among items.
+ where ![formula](https://latex.codecogs.com/svg.image?%5Cbar%7Br%7D) is the mean of all pairwise Pearson correlations among items.

- | $\alpha$ Range | Interpretation |
+ | Alpha Range | Interpretation |
  |---|---|
  | ≥ 0.9 | Excellent |
  | 0.8 – 0.9 | Good |
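The raw-alpha formula in the README hunk above can be sketched directly; the variance ratio is the same whether population (N) or sample (N-1) variances are used, as long as the choice is consistent (illustrative helper, not the package's code):

```python
def cronbach_alpha(items):
    """Raw Cronbach's alpha. items: K lists, each one item's scores
    across the same respondents."""
    k = len(items)

    def var(xs):  # population variance; the N vs N-1 factor cancels in the ratio
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(col) for col in zip(*items)]  # per-respondent total score
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

alpha = cronbach_alpha([[1, 2, 3], [1, 2, 3]])  # perfectly correlated items
```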
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "@winm2m/inferential-stats-js",
-   "version": "0.1.4",
+   "version": "0.1.5",
    "description": "A headless JavaScript SDK for advanced statistical analysis in the browser using WebAssembly (Pyodide). Performs SPSS-level inferential statistics entirely client-side with no backend required.",
    "author": "Youngjune Kwon <yjkwon@winm2m.com>",
    "license": "MIT",