nano-benchmark 1.0.8 → 1.0.10

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,20 +1,19 @@
  # nano-benchmark [![NPM version][npm-img]][npm-url]

- [npm-img]: https://img.shields.io/npm/v/nano-benchmark.svg
- [npm-url]: https://npmjs.org/package/nano-benchmark
+ [npm-img]: https://img.shields.io/npm/v/nano-benchmark.svg
+ [npm-url]: https://npmjs.org/package/nano-benchmark

- `nano-benchmark` provides command-line utilities for benchmarking code and related statistical modules.
+ `nano-benchmark` provides command-line utilities for micro-benchmarking code
+ with nonparametric statistics and significance testing.

  Two utilities are available:

- * `nano-watch` — provides statistics in a streaming mode continuously running your code,
- watching memory usage and updating the output.
- * `nano-bench` — runs benchmark tests on your code, calculating statistics and
- statistical significance, and presenting them in a tabular format.
+ - `nano-watch` — continuously benchmarks a single function, showing live statistics
+ and memory usage.
+ - `nano-bench` — benchmarks and compares multiple functions, calculating confidence
+ intervals and statistical significance.

- The utilities are mostly used to measure performance of your code and compare it with other variants.
- It is geared toward benchmarking and performance tuning of small fast snippets of code, e.g.,
- used in tight loops.
+ Designed for performance tuning of small, fast code snippets used in tight loops.

  ## Visual samples

@@ -29,19 +28,14 @@ used in tight loops.
  ## Installation

  ```bash
- npm install --save nano-benchmark
+ npm install nano-benchmark
  ```

  ### Deno and Bun support

  Both [deno](https://deno.land/) and [bun](https://bun.sh/) are supported.

- If you want to run the benchmark in Deno, Bun, etc. you can specify `self` as the `file` argument
- or the `--self` option.
- In this case the utility will print out its file name to `stdout` and exit. It allows running
- the utility with alternative JavaScript interpreters.
-
- Examples with `bash`:
+ Use `--self` to get the script path for running with alternative interpreters:

  ```bash
  npx nano-bench benchmark.js
@@ -51,9 +45,8 @@ deno run -A `npx nano-bench --self` benchmark.js
  node `npx nano-bench --self` benchmark.js
  ```

- Don't forget to specify the appropriate permissions for Deno to run the benchmark scripts:
- `--allow-read` (required) and `--allow-hrtime` (optional but recommended). Or consider using
- `-A` or `--allow-all` to allow all permissions (used it only in safe environments!).
+ For Deno, `--allow-read` is required and `--allow-hrtime` is recommended.
+ Use `-A` for convenience in safe environments.

  ## Documentation

@@ -64,10 +57,9 @@ your `package.json` file or from the command line by prefixing them with `npx`,
  Utilities are self-documented — run them with `--help` flag to learn about arguments.

- Both utilities import a module to benchmark using its (default) export.
- `nano-bench` assumes that it is an object with functional properties,
- which should be benchmarked and compared. `nano-watch` can use the same file format
- as `nano-bench` or it can use a single function.
+ Both utilities import a module and benchmark its (default) export.
+ `nano-bench` expects an object whose properties are the functions to compare.
+ `nano-watch` accepts the same format or a single function.

  Example of a module for `nano-bench` called `bench-strings-concat.js`:

@@ -97,7 +89,7 @@ export default {
  };
  ```

- The way to use it:
+ Usage:

  ```bash
  npx nano-bench bench-strings-concat.js
@@ -106,18 +98,32 @@ npx nano-watch bench-strings-concat.js backticks

  See [wiki](https://github.com/uhop/nano-bench/wiki) for more details.

+ ## AI agents and contributing
+
+ If you are an AI agent or an AI-assisted developer working on this project, read
+ [AGENTS.md](./AGENTS.md) first — it contains the project rules and conventions.
+
+ Other useful files:
+
+ - [ARCHITECTURE.md](./ARCHITECTURE.md) — module map, dependency graph, how benchmarking works.
+ - [CONTRIBUTING.md](./CONTRIBUTING.md) — development workflow and coding conventions.
+ - [llms.txt](./llms.txt) — project summary for LLMs.
+ - [llms-full.txt](./llms-full.txt) — detailed CLI reference for LLMs.
+
  ## License

  BSD 3-Clause License

  ## Release history

- * 1.0.8: *Updated dependencies.*
- * 1.0.7: *Updated dependencies.*
- * 1.0.6: *Updated dependencies.*
- * 1.0.5: *Updated dependencies.*
- * 1.0.4: *Updated dependencies + added more tests.*
- * 1.0.3: *Updated dependencies.*
- * 1.0.2: *Added the `--self` option.*
- * 1.0.1: *Added "self" argument to utilities so it can be used with Deno, Bun, etc.*
- * 1.0.0: *Initial release.*
+ - 1.0.10: _Added Prettier lint scripts, GitHub issue templates, Copilot instructions, and Windsurf workflows._
+ - 1.0.9: _Updated dependencies._
+ - 1.0.8: _Updated dependencies._
+ - 1.0.7: _Updated dependencies._
+ - 1.0.6: _Updated dependencies._
+ - 1.0.5: _Updated dependencies._
+ - 1.0.4: _Updated dependencies + added more tests._
+ - 1.0.3: _Updated dependencies._
+ - 1.0.2: _Added the `--self` option._
+ - 1.0.1: _Added "self" argument to utilities so it can be used with Deno, Bun, etc._
+ - 1.0.0: _Initial release._
package/bin/nano-bench.js CHANGED
@@ -299,8 +299,6 @@ if (results.length > 1) {
  if (significance) {
  const sortedStats = stats.slice().sort((a, b) => a.median - b.median),
  tableData = [[' ', bold('#'), bold('name')]];
- let rabbitIndex = -1,
- turtleIndex = -1;
  for (let i = 0; i < names.length; ++i) {
  tableData[0].push({value: bold(formatInteger(i + 1)), align: 'c'});
  const row = [null, formatInteger(i + 1), bold(names[i])],
package/llms-full.txt ADDED
@@ -0,0 +1,208 @@
+ # nano-benchmark
+
+ > Command-line utilities for micro-benchmarking JavaScript code with nonparametric statistics and significance testing.
+
+ - NPM: https://npmjs.org/package/nano-benchmark
+ - GitHub: https://github.com/uhop/nano-bench
+ - Wiki: https://github.com/uhop/nano-bench/wiki
+ - License: BSD-3-Clause
+ - Runtime: Node.js 20+, Bun, Deno
+ - Module system: ESM only (`"type": "module"`)
+
+ ## Installation
+
+ ```bash
+ npm install nano-benchmark
+ ```
+
+ ## CLI tool: nano-bench
+
+ Benchmarks multiple functions, compares them with bootstrap confidence intervals and significance tests, outputs a styled table.
+
+ ### Usage
+
+ ```
+ nano-bench [options] <file>
+ ```
+
+ ### Arguments
+
+ - `file` — JavaScript module to benchmark. If `"self"`, prints its own file path and exits.
+
+ ### Options
+
+ - `-m, --ms <ms>` — measurement time in milliseconds per sample (default: 50). The tool auto-discovers the batch size where one call takes at least this long.
+ - `-i, --iterations <iterations>` — fixed iteration count per sample (overrides `--ms`).
+ - `--min-iterations <n>` — minimum iterations per sample (default: 1).
+ - `-s, --samples <samples>` — number of samples to collect (default: 100).
+ - `-b, --bootstrap <bootstrap>` — number of bootstrap resamples for CI estimation (default: 1000).
+ - `-a, --alpha <alpha>` — significance level for confidence interval and tests (default: 0.05 = 95% CI).
+ - `-p, --parallel` — collect samples in parallel (useful for async benchmarks).
+ - `-e, --export <name>` — name of the export to use from the file (default: `"default"`).
+ - `--self` — print the script's file path to stdout and exit (for Deno/Bun usage).
+
+ ### Output
+
+ A styled table with columns:
+
+ | Column | Description |
+ |--------|-------------|
+ | name | Function name |
+ | median | Median execution time |
+ | + | Upper bound of confidence interval (median to high) |
+ | − | Lower bound of confidence interval (median to low) |
+ | op/s | Operations per second (1000 / median) |
+ | batch | Iterations per sample (batch size) |
+
+ If differences are statistically significant, a significance matrix is printed showing pairwise comparisons with percentage or ratio differences. The fastest function is marked with 🐇 and the slowest with 🐢.
+
+ ### How it works
+
+ 1. **Find level**: auto-discovers batch size `n` where `fn(n)` takes ≥ `--ms` milliseconds.
+ 2. **Collect samples**: runs `fn(n)` `--samples` times, collecting timing data.
+ 3. **Bootstrap CI**: uses bootstrap resampling to estimate median and confidence interval.
+ 4. **Significance test**: Mann-Whitney U (2 functions) or Kruskal-Wallis with post-hoc tests (3+ functions).
+
+ ### Example
+
+ ```bash
+ npx nano-bench bench/bench-string-concat.js
+ npx nano-bench -s 200 -b 2000 -a 0.01 bench/bench-string-concat.js
+ ```
+
+ ---
+
+ ## CLI tool: nano-watch
+
+ Continuously benchmarks a single function in streaming mode, showing live statistics and memory usage. Runs indefinitely until stopped with Ctrl+C (or until `--iterations` is reached).
+
+ ### Usage
+
+ ```
+ nano-watch [options] <file> [method]
+ ```
+
+ ### Arguments
+
+ - `file` — JavaScript module to benchmark. If `"self"`, prints its own file path and exits.
+ - `method` — optional method name if the export is an object of functions (same format as nano-bench).
+
+ ### Options
+
+ - `-m, --ms <ms>` — milliseconds per measurement iteration (default: 500).
+ - `-i, --iterations <number>` — number of iterations to run (default: Infinity).
+ - `-e, --export <name>` — name of the export to use from the file (default: `"default"`).
+ - `--self` — print the script's file path to stdout and exit.
+
+ ### Output
+
+ A live-updating styled table with:
+
+ | Row | Columns |
+ |-----|---------|
+ | Stats | #, time, mean, stdDev, median, skewness, kurtosis |
+ | op/s | operations per second for time, mean, median |
+ | memory | heapUsed, heapTotal, rss |
+
+ All statistics are computed using online/streaming algorithms (constant memory):
+ - **StatCounter**: streaming mean, variance, skewness, kurtosis (Welford's algorithm).
+ - **MedianCounter**: approximate streaming median (median-of-medians).
+
+ ### Example
+
+ ```bash
+ npx nano-watch bench/watch-sample.js
+ npx nano-watch bench/bench-string-concat.js backticks
+ npx nano-watch -i 50 bench/watch-sample.js
+ ```
+
+ ---
+
+ ## Benchmark file format
+
+ Both tools import a JavaScript module. The module should export (default or named) either:
+
+ ### Object of functions (for nano-bench, or nano-watch with method argument)
+
+ ```js
+ export default {
+ variant1: n => {
+ const a = 'a', b = 'b';
+ for (let i = 0; i < n; ++i) {
+ const x = a + '-' + b;
+ }
+ },
+ variant2: n => {
+ const a = 'a', b = 'b';
+ for (let i = 0; i < n; ++i) {
+ const x = `${a}-${b}`;
+ }
+ }
+ };
+ ```
+
+ ### Single function (for nano-watch without method argument)
+
+ ```js
+ export default n => {
+ const a = 'a', b = 'b';
+ for (let i = 0; i < n; ++i) {
+ const x = a + '-' + b;
+ }
+ };
+ ```
+
+ ### Key design principle
+
+ Each function takes `n` (iteration count) and runs the measured code in a `for` loop. This amortizes function-call overhead over `n` iterations, which is critical for micro-benchmarks where the measured code is faster than the overhead of calling a function.
+
+ The batch size `n` is either specified via `--iterations` or auto-discovered by the tool so that one call takes at least `--ms` milliseconds.
+
+ ### Async functions
+
+ Benchmark functions can return a Promise (be async). The tools detect thenables and measure the time until resolution.
+
+ ---
+
+ ## Deno and Bun support
+
+ Use `--self` to get the script path, then run with the alternative interpreter:
+
+ ```bash
+ # nano-bench
+ bun `npx nano-bench --self` benchmark.js
+ deno run -A `npx nano-bench --self` benchmark.js
+
+ # nano-watch
+ bun `npx nano-watch --self` benchmark.js methodName
+ deno run -A `npx nano-watch --self` benchmark.js methodName
+ ```
+
+ For Deno, `--allow-read` is required and `--allow-hrtime` is recommended. Use `-A` for convenience in safe environments.
+
+ ---
+
+ ## Statistical methods
+
+ ### Bootstrap resampling
+
+ Used by nano-bench to estimate confidence intervals. Resamples the collected timing data `--bootstrap` times, computing the median of each resample, then takes the mean of those medians for the final estimate.
+
+ ### Mann-Whitney U test
+
+ Nonparametric two-sample test used when comparing exactly 2 functions. Tests whether the two timing distributions are significantly different at the given `--alpha` level. Does not assume normal distribution.
+
+ ### Kruskal-Wallis test
+
+ Nonparametric k-sample test used when comparing 3+ functions. Uses beta approximation for the critical value. If significant, performs post-hoc pairwise comparisons to identify which specific pairs differ.
+
+ ### Kolmogorov-Smirnov test
+
+ Two-sample distribution comparison test. Available in the internal API (`src/significance/kstest.js`).
+
+ ### Online/streaming algorithms
+
+ Used by nano-watch for indefinite monitoring with constant memory:
+
+ - **StatCounter** — Welford's online algorithm for streaming mean, variance (M2), skewness (M3), and kurtosis (M4). Numerically stable single-pass computation.
+ - **MedianCounter** — approximate streaming median using a hierarchical median-of-three structure. Provides O(1) memory approximate median without storing all values.
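The llms-full.txt above attributes StatCounter's streaming statistics to Welford's online algorithm. A minimal sketch, covering mean and sample variance only (the class name and API here are hypothetical, not the package's StatCounter):

```javascript
// Minimal sketch of Welford's online algorithm: single-pass, constant-memory
// mean and variance. Hypothetical names; not the package's StatCounter API.
class RunningStats {
  constructor() {
    this.n = 0; // number of observations
    this.mean = 0; // running mean
    this.m2 = 0; // running sum of squared deviations (M2)
  }
  push(x) {
    ++this.n;
    const delta = x - this.mean;
    this.mean += delta / this.n;
    this.m2 += delta * (x - this.mean); // numerically stable update
  }
  get variance() {
    // sample variance (n - 1 denominator)
    return this.n > 1 ? this.m2 / (this.n - 1) : 0;
  }
}
```

Extending this to skewness (M3) and kurtosis (M4) follows the same single-pass update pattern.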
package/llms.txt ADDED
@@ -0,0 +1,80 @@
+ # nano-benchmark
+
+ > Command-line utilities for micro-benchmarking JavaScript code with nonparametric statistics and significance testing.
+
+ ## Overview
+
+ nano-benchmark provides two CLI tools for benchmarking code. It uses nonparametric statistics (bootstrap resampling, quantile-based confidence intervals) and rank-based significance tests (Mann-Whitney U, Kruskal-Wallis) to produce statistically rigorous results. Designed for micro-benchmarks where function-call overhead matters.
+
+ - NPM: https://npmjs.org/package/nano-benchmark
+ - GitHub: https://github.com/uhop/nano-bench
+ - Wiki: https://github.com/uhop/nano-bench/wiki
+ - License: BSD-3-Clause
+ - Runtime: Node.js 20+, Bun, Deno
+ - Module system: ESM only (`"type": "module"`)
+
+ ## Installation
+
+ ```bash
+ npm install nano-benchmark
+ ```
+
+ ## CLI tools
+
+ ### nano-bench
+
+ Benchmarks multiple functions, compares them with bootstrap confidence intervals and significance tests, outputs a styled table.
+
+ ```bash
+ npx nano-bench benchmark.js
+ npx nano-bench -s 200 -b 2000 -a 0.01 benchmark.js
+ ```
+
+ Options: `--ms` (measurement time, default 50), `--iterations` (overrides --ms), `--samples` (default 100), `--bootstrap` (default 1000), `--alpha` (significance level, default 0.05), `--parallel`, `--export` (default "default"), `--self`.
+
+ ### nano-watch
+
+ Continuously benchmarks a single function in streaming mode, showing live stats (mean, stdDev, median, skewness, kurtosis, ops/sec) and memory usage.
+
+ ```bash
+ npx nano-watch benchmark.js
+ npx nano-watch benchmark.js methodName
+ ```
+
+ Options: `--ms` (measurement time, default 500), `--iterations` (default Infinity), `--export` (default "default"), `--self`.
+
+ ## Benchmark file format
+
+ Both tools import a module. `nano-bench` expects an object of functions; `nano-watch` can use a single function or an object with a method name argument. Each function takes `n` (iteration count) and runs the measured code in a loop:
+
+ ```js
+ export default {
+ variant1: n => {
+ for (let i = 0; i < n; ++i) {
+ // measured code
+ }
+ },
+ variant2: n => {
+ for (let i = 0; i < n; ++i) {
+ // measured code
+ }
+ }
+ };
+ ```
+
+ ## Deno and Bun support
+
+ Use `--self` to get the script path for running with alternative interpreters:
+
+ ```bash
+ bun `npx nano-bench --self` benchmark.js
+ deno run -A `npx nano-bench --self` benchmark.js
+ ```
+
+ ## Statistical methods
+
+ - **Bootstrap resampling** for confidence intervals (median, percentiles).
+ - **Mann-Whitney U test** for comparing two samples.
+ - **Kruskal-Wallis test** with post-hoc pairwise tests for comparing 3+ samples.
+ - **Kolmogorov-Smirnov test** for two-sample distribution comparison.
+ - **Online algorithms** (streaming mean, variance, skewness, kurtosis, approximate median) for continuous monitoring.
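The bootstrap bullet above can be illustrated with a percentile-bootstrap confidence interval for the median. This is a hypothetical sketch, assuming plain resampling with replacement; the package's actual estimator may differ (the docs mention averaging resample medians):

```javascript
// Hypothetical sketch of a percentile-bootstrap CI for the median.
const median = xs => {
  const s = xs.slice().sort((a, b) => a - b),
    mid = s.length >> 1;
  return s.length % 2 ? s[mid] : (s[mid - 1] + s[mid]) / 2;
};

const bootstrapMedianCI = (data, resamples = 1000, alpha = 0.05) => {
  const medians = [];
  for (let r = 0; r < resamples; ++r) {
    // resample with replacement, same size as the original data
    const sample = Array.from(data, () => data[(Math.random() * data.length) | 0]);
    medians.push(median(sample));
  }
  medians.sort((a, b) => a - b);
  // take the alpha/2 and 1 - alpha/2 quantiles of the resample medians
  return {
    lo: medians[Math.floor((alpha / 2) * resamples)],
    hi: medians[Math.ceil((1 - alpha / 2) * resamples) - 1]
  };
};
```

With `alpha = 0.05` the returned interval is a 95% CI estimate for the median.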
package/package.json CHANGED
@@ -1,7 +1,7 @@
  {
  "name": "nano-benchmark",
- "version": "1.0.8",
- "description": "Small utilities to benchmark code with Node.",
+ "version": "1.0.10",
+ "description": "CLI micro-benchmarking with nonparametric statistics and significance testing.",
  "type": "module",
  "main": "src/index.js",
  "exports": {
@@ -17,8 +17,13 @@
  "test:bun": "tape6-bun --flags FO",
  "test:deno": "tape6-deno --flags FO",
  "test:proc": "tape6-proc --flags FO",
- "test:proc:bun": "bun run `npx tape6-proc --self` --flags FO",
- "test:proc:deno": "deno run -A `npx tape6-proc --self` --flags FO -r -A"
+ "test:proc:bun": "bun run `tape6-proc --self` --flags FO",
+ "test:proc:deno": "deno run -A `tape6-proc --self` --flags FO -r -A",
+ "test:seq": "tape6-seq --flags FO",
+ "test:seq:bun": "bun run `tape6-seq --self` --flags FO",
+ "test:seq:deno": "deno run -A `tape6-seq --self` --flags FO",
+ "lint": "prettier --check .",
+ "lint:fix": "prettier --write ."
  },
  "repository": {
  "type": "git",
@@ -26,8 +31,16 @@
  },
  "keywords": [
  "benchmark",
+ "micro-benchmark",
  "performance",
- "statistics"
+ "profiling",
+ "statistics",
+ "significance",
+ "bootstrap",
+ "mann-whitney",
+ "kruskal-wallis",
+ "cli",
+ "compare"
  ],
  "author": "Eugene Lazutkin <eugene.lazutkin@gmail.com> (https://www.lazutkin.com/)",
  "license": "BSD-3-Clause",
@@ -40,15 +53,19 @@
  },
  "homepage": "https://github.com/uhop/nano-bench#readme",
  "files": [
- "src"
+ "src",
+ "bin",
+ "llms.txt",
+ "llms-full.txt"
  ],
  "devDependencies": {
- "tape-six": "^1.5.1",
- "tape-six-proc": "^1.2.1"
+ "prettier": "^3.8.1",
+ "tape-six": "^1.7.2",
+ "tape-six-proc": "^1.2.3"
  },
  "dependencies": {
- "commander": "^14.0.2",
- "console-toolkit": "^1.2.8"
+ "commander": "^14.0.3",
+ "console-toolkit": "^1.2.11"
  },
  "tape6": {
  "tests": [
@@ -22,7 +22,10 @@ const compare = async (inputs, options = {}, report) => {
  report?.('calculating-significance', {stats, options});
  let results;
  if (keys.length > 2) {
- results = kwtest(stats.map(stat => stat.data), options.alpha);
+ results = kwtest(
+ stats.map(stat => stat.data),
+ options.alpha
+ );
  } else {
  results = mwtest(stats[0].data, stats[1].data, options.alpha);
  }
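The hunk above only reflows a `kwtest(...)` call next to the two-sample `mwtest` branch. For orientation, here is a self-contained sketch of the Mann-Whitney U statistic, using midranks for ties and a normal approximation without tie correction; this is a hypothetical illustration, not the package's `mwtest`:

```javascript
// Hypothetical Mann-Whitney U sketch: midranks for ties, normal approximation
// (no tie correction in sigma). Not the package's mwtest implementation.
const mannWhitneyU = (a, b) => {
  // pool both samples, remembering which sample each value came from
  const pooled = [...a.map(v => [v, 0]), ...b.map(v => [v, 1])].sort((x, y) => x[0] - y[0]);
  const ranks = new Array(pooled.length);
  for (let i = 0; i < pooled.length; ) {
    let j = i;
    while (j < pooled.length && pooled[j][0] === pooled[i][0]) ++j;
    const avg = (i + j + 1) / 2; // midrank: average of 1-based ranks i+1 .. j
    for (let k = i; k < j; ++k) ranks[k] = avg;
    i = j;
  }
  let r1 = 0; // rank sum of the first sample
  for (let i = 0; i < pooled.length; ++i) if (pooled[i][1] === 0) r1 += ranks[i];
  const n1 = a.length,
    n2 = b.length,
    u1 = n1 * n2 + (n1 * (n1 + 1)) / 2 - r1,
    u = Math.min(u1, n1 * n2 - u1), // smaller of U1 and U2
    mu = (n1 * n2) / 2,
    sigma = Math.sqrt((n1 * n2 * (n1 + n2 + 1)) / 12);
  return {u, z: (u - mu) / sigma};
};
```

Compare `z` against the normal critical value for the chosen alpha to decide significance.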
package/src/index.js ADDED
@@ -0,0 +1,27 @@
+ export {
+ mean,
+ variance,
+ stdDev,
+ skewness,
+ kurtosis,
+ excessKurtosis,
+ bootstrap,
+ getWeightedValue
+ } from './stats.js';
+ export {median} from './median.js';
+ export {StatCounter, streamStats} from './stream-stats.js';
+ export {MedianCounter, streamMedian} from './stream-median.js';
+ export {
+ findLevel,
+ benchmark,
+ benchmarkSeries,
+ benchmarkSeriesPar,
+ measure,
+ measurePar,
+ Stats,
+ wrapper
+ } from './bench/runner.js';
+ export {default as compare} from './bench/compare.js';
+ export {default as mwtest} from './significance/mwtest.js';
+ export {default as kwtest} from './significance/kwtest.js';
+ export {default as kstest} from './significance/kstest.js';
@@ -37,7 +37,7 @@ export const rankData = groups => {
  for (let i = 0; i < t.length; ++i) {
  const x = t[i].rank - avgRank;
  denominator += x * x;
- S2 = t[i].rank * t[i].rank - avgRankC;
+ S2 += t[i].rank * t[i].rank - avgRankC;
  }

  S2 /= N - 1;
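The `S2 = …` to `S2 += …` fix above matters because S2 must accumulate across the loop: it collects the sum of squared ranks minus a constant correction, which divided by N − 1 gives the variance of the pooled ranks. A standalone sketch of the quantity the corrected loop computes (hypothetical function; the names mirror the diff):

```javascript
// Sketch of the rank variance the corrected line accumulates:
// S² = (Σ rᵢ² − N·r̄²) / (N − 1), where r̄ = (N + 1) / 2.
const rankVariance = ranks => {
  const N = ranks.length,
    avgRank = (N + 1) / 2,
    avgRankC = avgRank * avgRank; // subtracting this N times removes N·r̄²
  let S2 = 0;
  for (const r of ranks) S2 += r * r - avgRankC; // accumulate: `+=`, not `=`
  return S2 / (N - 1);
};
```

For untied ranks 1…N this reduces to N(N + 1)/12, the classic rank variance used by the Kruskal-Wallis statistic.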
@@ -1,8 +1,5 @@
  import erf from './erf.js';

- const LIMIT = 1000;
- const EPSILON = 1e-30;
-
  const SQRT_2 = Math.sqrt(2),
  SQRT_2_PI = Math.sqrt(2 * Math.PI);

package/src/stats/rank.js CHANGED
@@ -33,7 +33,8 @@ export const rank = groups => {
  }
  i = ahead;
  }
- const avgRank = (N + 1) / 2, avgGroupRank = groupRank.map((rank, i) => rank / groups[i].length);
+ const avgRank = (N + 1) / 2,
+ avgGroupRank = groupRank.map((rank, i) => rank / groups[i].length);

  return {ranked: t, N, k, avgRank, groupRank, avgGroupRank, groups};
  };
@@ -2,7 +2,7 @@ import {zCdf, zPdf} from './z.js';
  import ppf from './ppf.js';

  // percent point function
- const zPpf = (z) => {
+ const zPpf = z => {
  // find the lower bound
  let x = -6,
  p = zCdf(x);
package/src/stats/z.js CHANGED
@@ -3,5 +3,5 @@ import erf from './erf.js';
  const SQRT_2 = Math.sqrt(2),
  SQRT_2_PI = Math.sqrt(2 * Math.PI);

- export const zCdf = (z) => 0.5 * (1 + erf((z) / SQRT_2));
- export const zPdf = (z) => Math.exp(-0.5 * z * z) / SQRT_2_PI;
+ export const zCdf = z => 0.5 * (1 + erf(z / SQRT_2));
+ export const zPdf = z => Math.exp(-0.5 * z * z) / SQRT_2_PI;
package/src/utils/rk.js CHANGED
@@ -24,7 +24,7 @@ export const rk23 = (fn, {a = 0, b = 1, tolerance = 1e-6, initialValue = 0} = {}

  if (error < maxError) {
  ts.push((t += h));
- us.push(u = uNew);
+ us.push((u = uNew));
  s1 = s4;
  }