testaro 4.0.1 → 4.1.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +82 -50
- package/commands.js +0 -7
- package/job.js +92 -0
- package/package.json +1 -1
- package/{index.js → run.js} +5 -30
- package/samples/batches/weborgs.json +16 -0
- package/scoring/correlation.js +0 -76
- package/scoring/correlations.json +0 -327
- package/scoring/data.json +0 -26021
- package/scoring/dupCounts.js +0 -39
- package/scoring/dupCounts.json +0 -112
- package/scoring/duplications.json +0 -253
- package/scoring/issues.json +0 -304
- package/scoring/packageData.js +0 -171
- package/scoring/packageIssues.js +0 -34
- package/scoring/rulesetData.json +0 -15
package/README.md
CHANGED

@@ -6,9 +6,7 @@ Federated accessibility test automation
 
 Testaro is a collection of collections of web accessibility tests.
 
-The purpose of Testaro is to provide programmatic access to over
-
-Running Testaro requires telling it which operations (including tests) to perform and which URLs to perform them on, and giving Testaro an object to put its output into.
+The purpose of Testaro is to provide programmatic access to over 800 accessibility tests defined in several test packages and in Testaro itself.
 
 ## System requirements
 
@@ -72,7 +70,7 @@ Once you have done that, you can install Testaro as you would install any `npm` 
 
 ## Payment
 
-All of the tests that Testaro can perform are free of cost, except those in the WAVE
+All of the tests that Testaro can perform are free of cost, except those in the Tenon and WAVE packages. The owner of each of those packages gives new registrants a free allowance of credits before payment for use of the package’s API becomes necessary. The required environment variables for authentication and payment are described below under “Environment variables”.
 
 ## Specification
 
@@ -82,7 +80,7 @@ To use Testaro, you must specify what it should do. You do this with a script an
 
 ### Introduction
 
-
+To use Testaro, you provide a **script** to it. The script contains **commands**. Testaro __runs__ the script, i.e. performs the commands in it and writes a report of the results.
 
 A script is a JSON file with the properties:
 
@@ -163,7 +161,6 @@ The subsequent commands can tell Testaro to perform any of:
 - navigations (browser launches, visits to URLs, waits for page conditions, etc.)
 - alterations (changes to the page)
 - tests (whether in dependency packages or defined within Testaro)
-- scoring (aggregating test results into total scores)
 - branching (continuing from a command other than the next one)
 
 ##### Moves
@@ -272,37 +269,6 @@ In case you want to perform more than one `tenon` test, you can do so. Just give
 
 Tenon recommends giving it a public URL rather than giving it the content of a page, if possible. So, it is best to give the `withNewContent` property of the `tenonRequest` command the value `true`, unless the page is not public.
 
-##### Scoring
-
-An example of a **scoring** command is:
-
-```json
-{
-  "type": "score",
-  "which": "asp09",
-  "what": "5 packages and 16 custom tests, with duplication discounts"
-}
-```
-
-In this case, Testaro executes the procedure specified in the `asp09` score proc (in the `procs/score` directory) to compute a total score for the script or (if there is a batch) the host. The proc is a JavaScript module whose `scorer` function returns an object containing a total score and the itemized scores that yield the total.
-
-The `scorer` function inspects the script report to find the required data, applies specific weights and formulas to yield the itemized scores, and combines the itemized scores to yield the total score.
-
-The data for scores can include not only test results, but also log statistics. Testaro includes in each report the properties:
-- `logCount`: how many log items the browser generated
-- `logSize`: how large the log items were in the aggregate, in characters
-- `prohibitedCount`: how many log items contain (case-insensitively) `403` and `status`, or `prohibited`
-- `visitTimeoutCount`: how many times an attempt to visit a URL timed out
-- `visitRejectionCount`: how many times a URL visit got an HTTP status other than 200 or 304
-
-Those log statistics can provide data for a log-based test defined in a score proc.
-
-A good score proc takes account of duplications between test packages: two or more packages that discover the same accessibility defects. Score procs can apply discounts to reflect duplications between test packages, so that, if two or more packages discover the same defect, the defect will not be overweighted.
-
-The procedures in the `scoring` directory have produced data there that score procs can use for the calibration of discounts.
-
-Some documents are implemented in such a way that some tests are prevented from being conducted on them. When that occurs, the score proc can **infer** a score for that test.
-
 ##### Branching
 
 An example of a **branching** command is:
@@ -399,15 +365,35 @@ A typical use for an `expect` property is checking the correctness of a Testaro 
 
 ## Batches
 
-
+You may wish to have Testaro perform the same sequence of tests on multiple web pages. In that case, you can create a _batch_, with the following structure:
+
+```javascript
+{
+  what: 'Web leaders',
+  hosts: [
+    {
+      id: 'w3c',
+      which: 'https://www.w3.org/',
+      what: 'W3C'
+    },
+    {
+      id: 'wikimedia',
+      which: 'https://www.wikimedia.org/',
+      what: 'Wikimedia'
+    }
+  ]
+}
+```
 
-
+With a batch, you can execute a single statement to run a script multiple times, once per host. On each run, Testaro takes one host from the batch and substitutes it for the host specified in each `url` command of the script. Testaro thereby creates and sequentially runs multiple scripts.
 
 ## Execution
 
 ### Invocation
 
-
+There are two methods for using Testaro.
+
+#### Low-level
+
+Create a report object like this:
 
 ```javascript
 const report = {
@@ -418,20 +404,46 @@ const report = {
 };
 ```
 
-Replace `{…}` with a script object, like the example script shown above.
+Replace `{…}` with a script object, like the example script shown above. The low-level method does not allow the use of batches.
+
+Then execute the `run` module with the `report` object as an argument.
+- Another Node.js package that has Testaro as a dependency can execute `require('testaro').run(report)`.
+- In a command environment with the Testaro project directory as the current directory, you can execute `node run report`.
+
+Either statement will make Testaro run the script and populate the `log` and `acts` arrays of the `report` object. When Testaro finishes, the `log` and `acts` properties will contain the results.
+
+You or a dependent package can then save or further process the `report` object as desired.
+
+#### High-level
+
+Make sure that you have defined these environment variables, with absolute or relative paths to directories as their values:
+- `SCRIPTDIR`
+- `BATCHDIR`
+- `REPORTDIR`
+
+Relative paths must be relative to the Testaro project directory. For example, if the script directory is `scripts` in a `testing` directory that is a sibling of the Testaro directory, then `SCRIPTDIR` must have the value `../testing/scripts`.
+
+Also ensure that Testaro can read all those directories and write to `REPORTDIR`.
+
+Place a script into `SCRIPTDIR` and, optionally, a batch into `BATCHDIR`. Each should be named `idValue.json`, where `idValue` is replaced with the value of its `id` property. That value must consist of only lower-case ASCII letters and digits.
 
-Then execute the statement `
+Then execute the statement `node job scriptID` or `node job scriptID batchID`, replacing `scriptID` and `batchID` with the `id` values of the script and the batch, respectively.
 
-
+The `job` module will call the `run` module on the script or, if there is a batch, will create multiple scripts, one per host, and sequentially call the `run` module on each script. The results will be saved in report files in the `REPORTDIR` directory.
 
-
+If there is no batch, the report file will be named with a unique timestamp, suffixed with a `.json` extension. If there is a batch, the base of each file’s name will be the same timestamp, suffixed with `-hostID`, where `hostID` is the value of the `id` property of the host object in the batch file. For example, if you execute `node job script01 wikis`, you might get these report files deposited into `REPORTDIR`:
+- `enp45j-wikipedia.json`
+- `enp45j-wiktionary.json`
+- `enp45j-wikidata.json`
 
 ### Environment variables
 
-
+As mentioned above, using the high-level method to run Testaro jobs requires the `SCRIPTDIR`, `BATCHDIR`, and `REPORTDIR` environment variables.
 
 If a `tenon` test is included in the script, environment variables named `TESTARO_TENON_USER` and `TESTARO_TENON_PASSWORD` must exist, with your Tenon username and password, respectively, as their values.
 
+If a `wave` test is included in the script, an environment variable named `TESTARO_WAVE_KEY` must exist, with your WAVE API key as its value.
+
 The `text` command can interpolate the value of an environment variable into text that it enters on a page, as documented in the `commands.js` file.
 
 Before executing a Testaro script, you can optionally also set the environment variables `TESTARO_DEBUG` (to `'true'` or anything else) and/or `TESTARO_WAITS` (to a non-negative integer). The effects of these variables are described in the `index.js` file.
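Pulling the scattered environment-variable requirements together, a `.env` file for the high-level method might look like the following sketch. All values are placeholders; only the variable names come from the README above.

```shell
# Illustrative .env for the high-level method. Paths are relative to the
# Testaro project directory; every value here is a placeholder.
SCRIPTDIR=../testing/scripts
BATCHDIR=../testing/batches
REPORTDIR=../testing/reports
TESTARO_TENON_USER=your-tenon-username
TESTARO_TENON_PASSWORD=your-tenon-password
TESTARO_WAVE_KEY=your-wave-api-key
```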
@@ -440,9 +452,15 @@ You may store these environment variables in an untracked `.env` file if you wis
 
 ## Validation
 
+### Samples
+
+The `samples` directory contains scripts and a batch that you can use to test Testaro with the high-level method, by giving `SCRIPTDIR` the value `'samples/scripts'` and `BATCHDIR` the value `'samples/batches'`. To do this, you must also define `REPORTDIR`.
+
+### Validators
+
 _Executors_ for Testaro validation are located in the `validation` directory.
 
-A basic executor is the `test.js` file. It
+A basic executor is the `test.js` file. It uses the low-level method to run Testaro with the `simple.js` sample script and outputs the log and the acts to the standard output.
 
 The other executors are commonJS JavaScript modules that run Testaro and report whether the results are correct.
 
@@ -450,6 +468,8 @@ The other executors are:
 - `app.js`: Reports whether Testaro runs correctly with a script.
 - `tests.js`: Runs Testaro with each custom test and reports whether the results are correct.
 
+There are no executors for validating the test packages.
+
 To execute any executor `xyz.js`, call it with the statement `node validation/executors/xyz`. The results will appear in the standard output.
 
 The `tests.js` executor makes use of the scripts in the `validation/tests/scripts` directory, and they, in turn, run tests on HTML files in the `validation/tests/targets` directory.
@@ -460,7 +480,7 @@ You can define additional Testaro commands and functionality. Contributions are 
 
 ## Accessibility principles
 
-The rationales motivating the Testaro-defined tests
+The rationales motivating the Testaro-defined tests can be found in comments within the files of those tests, in the `tests` directory. Unavoidably, each test is opinionated. Testaro itself, however, can accommodate other tests representing different opinions. Testaro is intended to be neutral with respect to questions such as the criteria for accessibility, the severities of accessibility issues, whether accessibility is binary or graded, and the distinction between usability and accessibility.
 
 ## Testing challenges
 
@@ -472,11 +492,23 @@ The Playwright “Receives Events” actionability check does **not** check whet
 
 ### Test-package duplication
 
-Test packages sometimes do redundant testing, in that two or more packages test for the same issues
+Test packages sometimes do redundant testing, in that two or more packages test for the same issues, although such duplications are not necessarily perfect. This fact creates three problems:
+- One cannot be confident in excluding some tests of some packages on the assumption that they perfectly duplicate tests of other packages.
+- The Testaro report from a script documents each package’s results separately, so a single defect may be documented in multiple locations within the report, making consumption of the report inefficient.
+- An effort to aggregate the results into a single score may distort the scores by inflating the weights of defects that happen to be discovered by multiple packages.
+
+The tests provided with Testaro do not exclude any apparently duplicative tests from packages.
+
+To deal with the above problems, you can:
+- revise package `test` commands to exclude tests that you consider duplicative
+- create derivative reports that organize results by defect types rather than by package
+- take duplication into account when defining scoring rules
+
+Some measures of these kinds are included in the scoring and reporting features of the Testilo package.
 
 ## Repository exclusions
 
-The files in the `temp` directory are presumed ephemeral and are not tracked by `git`.
+The files in the `temp` directory are presumed ephemeral and are not tracked by `git`.
 
 ## Related packages
 
@@ -486,7 +518,7 @@ Testaro is derived from [Autotest](https://github.com/jrpool/autotest).
 
 Testaro omits some functionalities of Autotest, such as:
 - tests producing results intended to be human-inspected
--
+- scoring
 - file operations for score aggregation, report revision, and HTML reports
 - a web user interface
package/commands.js
CHANGED

@@ -89,13 +89,6 @@ exports.commands = {
       what: [false, 'string', 'hasLength', 'comment']
     }
   ],
-  score: [
-    'Compute and report a score',
-    {
-      which: [true, 'string', 'hasLength', 'score-proc name'],
-      what: [false, 'string', 'hasLength', 'comment']
-    }
-  ],
   select: [
     'Select a select option',
     {
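The removed `score` entry has the same shape as the remaining command definitions: each property maps to an array that apparently encodes `[required, type, validator, description]`. A hypothetical checker illustrating that reading (an assumption inferred from this hunk, not Testaro's actual validation code):

```javascript
// Hypothetical sketch of checking a value against a commands.js-style
// tuple [required, type, validator, description]. The tuple meaning is
// inferred from the diff, not taken from Testaro's source.
const checkProperty = (value, [required, type, validator]) => {
  // A missing property is acceptable only if it is not required.
  if (value === undefined) {
    return !required;
  }
  // The value must have the declared primitive type.
  if (typeof value !== type) {
    return false;
  }
  // The only validator visible in this hunk is 'hasLength'.
  if (validator === 'hasLength') {
    return value.length > 0;
  }
  return true;
};

console.log(checkProperty('asp09', [true, 'string', 'hasLength', 'score-proc name'])); // → true
console.log(checkProperty(undefined, [true, 'string', 'hasLength', 'score-proc name'])); // → false
```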
package/job.js
ADDED

@@ -0,0 +1,92 @@
+/*
+  job.js
+  Manages jobs.
+*/
+
+// ########## IMPORTS
+
+// Module to keep secrets.
+require('dotenv').config();
+// Module to read and write files.
+const fs = require('fs/promises');
+const { handleRequest } = require('./run');
+
+// ########## CONSTANTS
+const scriptDir = process.env.SCRIPTDIR;
+const batchDir = process.env.BATCHDIR;
+const reportDir = process.env.REPORTDIR;
+
+// ########## FUNCTIONS
+
+// Converts a script to a batch-based array of scripts.
+const batchify = (script, batch, timeStamp) => {
+  const {hosts} = batch;
+  const specs = hosts.map(host => {
+    const newScript = JSON.parse(JSON.stringify(script));
+    newScript.commands.forEach(command => {
+      if (command.type === 'url') {
+        command.which = host.which;
+        command.what = host.what;
+      }
+    });
+    const spec = {
+      id: `${timeStamp}-${host.id}`,
+      script: newScript
+    };
+    return spec;
+  });
+  return specs;
+};
+// Runs a no-batch script.
+const runHost = async (id, script) => {
+  const report = {
+    id,
+    log: [],
+    script,
+    acts: []
+  };
+  await require('./run').handleRequest(report);
+  const reportJSON = JSON.stringify(report, null, 2);
+  await fs.writeFile(`${reportDir}/${id}.json`, reportJSON);
+};
+// Runs a job.
+exports.handleRequest = async (scriptID, batchID) => {
+  if (scriptID) {
+    try {
+      const scriptJSON = await fs.readFile(`${scriptDir}/${scriptID}.json`, 'utf8');
+      const script = JSON.parse(scriptJSON);
+      // Identify the start time and a timestamp.
+      const timeStamp = Math.floor((Date.now() - Date.UTC(2022, 1)) / 2000).toString(36);
+      // If there is a batch:
+      let batch = null;
+      if (batchID) {
+        // Convert the script to a batch-based set of scripts.
+        const batchJSON = await fs.readFile(`${batchDir}/${batchID}.json`, 'utf8');
+        batch = JSON.parse(batchJSON);
+        const specs = batchify(script, batch, timeStamp);
+        // For each script:
+        while (specs.length) {
+          const spec = specs.shift();
+          const {id, script} = spec;
+          // Run it and save the result with a host-suffixed ID.
+          await runHost(id, script);
+        }
+      }
+      // Otherwise, i.e. if there is no batch:
+      else {
+        // Run the script and save the result with a timestamp ID.
+        await runHost(timeStamp, script);
+      }
+    }
+    catch(error) {
+      console.log(`ERROR: ${error.message}\n${error.stack}`);
+    }
+  }
+  else {
+    console.log('ERROR: no script specified');
+  }
+};
+
+// ########## OPERATION
+
+handleRequest(process.argv[2], process.argv[3]);
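The timestamp expression in `job.js` encodes the number of 2-second intervals elapsed since 2022-02-01 UTC (`Date.UTC(2022, 1)`; the month argument is zero-based) in base 36, which yields short report-file names like `enp45j`. A standalone sketch of the scheme, parameterized by the current time so it can be checked against a fixed date:

```javascript
// Sketch of the job.js timestamp scheme: 2-second intervals elapsed
// since 2022-02-01 UTC, rendered in base 36 for compact file names.
const timeStamp = ms => Math.floor((ms - Date.UTC(2022, 1)) / 2000).toString(36);

// 2022-03-01 UTC is 28 days, i.e. 1209600 two-second intervals, later.
console.log(timeStamp(Date.UTC(2022, 2))); // → 'pxc0'
```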
package/package.json
CHANGED
package/{index.js → run.js}
RENAMED

@@ -7,6 +7,7 @@
 require('dotenv').config();
 // Requirements for commands.
 const {commands} = require('./commands');
+const { handleRequest } = require('./job');
 // ########## CONSTANTS
 // Set DEBUG environment variable to 'true' to add debugging features.
 const debug = process.env.TESTARO_DEBUG === 'true';
@@ -613,17 +614,6 @@ const doActs = async (report, actIndex, page) => {
       // Identify its only page as current.
       page = browserContext.pages()[0];
     }
-    // Otherwise, if it is a score:
-    else if (act.type === 'score') {
-      // Compute and report the score.
-      try {
-        const {scorer} = require(`./procs/score/${act.which}`);
-        act.result = scorer(report.acts);
-      }
-      catch (error) {
-        act.error = `ERROR: ${error.message}\n${error.stack}`;
-      }
-    }
     // Otherwise, if a current page exists:
     else if (page) {
       // If the command is a url:
@@ -1200,25 +1190,6 @@ const doScript = async (report) => {
   report.prohibitedCount = prohibitedCount;
   report.visitTimeoutCount = visitTimeoutCount;
   report.visitRejectionCount = visitRejectionCount;
-  // If logs are to be scored, do so.
-  const scoreTables = report.acts.filter(act => act.type === 'score');
-  if (scoreTables.length) {
-    const scoreTable = scoreTables[0];
-    const {result} = scoreTable;
-    if (result) {
-      const {logWeights, scores} = result;
-      if (logWeights && scores) {
-        scores.log = Math.floor(
-          logWeights.count * logCount
-          + logWeights.size * logSize
-          + logWeights.prohibited * prohibitedCount
-          + logWeights.visitTimeout * visitTimeoutCount
-          + logWeights.visitRejection * visitRejectionCount
-        );
-        scores.total += scores.log;
-      }
-    }
-  }
   // Add the end time and duration to the report.
   const endTime = new Date();
   report.endTime = endTime.toISOString().slice(0, 19);
@@ -1285,3 +1256,7 @@ exports.handleRequest = async report => {
     console.log('ERROR: options missing or invalid');
   }
 };
+
+// ########## OPERATION
+
+handleRequest(process.argv[2]);
package/scoring/correlation.js
DELETED

@@ -1,76 +0,0 @@
-/*
-  correlation
-  Compiles a list of the correlations between distinct-package issue types and creates a file,
-  correlations.json, containing the list.
-*/
-const fs = require('fs');
-const compile = () => {
-  const issuesJSON = fs.readFileSync(`${__dirname}/package/issues.json`, 'utf8');
-  const issues = JSON.parse(issuesJSON);
-  const dataJSON = fs.readFileSync(`${__dirname}/package/data.json`, 'utf8');
-  const reportData = JSON.parse(dataJSON);
-  const reports = Object.values(reportData);
-  // Initialize the list.
-  const data = {
-    aatt_alfa: {},
-    aatt_axe: {},
-    aatt_ibm: {},
-    aatt_wave: {},
-    alfa_axe: {},
-    alfa_ibm: {},
-    alfa_wave: {},
-    axe_ibm: {},
-    axe_wave: {},
-    ibm_wave: {}
-  };
-  // For each pair of packages:
-  const packagePairs = Object.keys(data);
-  packagePairs.forEach(packagePair => {
-    console.log(`=== Starting package pair ${packagePair}`);
-    const packages = packagePair.split('_');
-    // Identify the reports containing results from both packages.
-    const pairReports = reports.filter(report => report[packages[0]] && report[packages[1]]);
-    // For each pair of issues:
-    issues[packages[0]].forEach(issueA => {
-      issues[packages[1]].forEach(issueB => {
-        // Initialize an array of score pairs.
-        const scorePairs = [];
-        // For each applicable report:
-        pairReports.forEach(report => {
-          // Add the scores for the issues to the array of score pairs.
-          const scorePair = [report[packages[0]][issueA] || 0, report[packages[1]][issueB] || 0];
-          scorePairs.push(scorePair);
-        });
-        // Get the correlation between the issues.
-        const aSum = scorePairs.reduce((sum, current) => sum + current[0], 0);
-        const bSum = scorePairs.reduce((sum, current) => sum + current[1], 0);
-        const abSum = scorePairs.reduce((sum, current) => sum + current[0] * current[1], 0);
-        const aSqSum = scorePairs.reduce((sum, current) => sum + current[0] ** 2, 0);
-        const bSqSum = scorePairs.reduce((sum, current) => sum + current[1] ** 2, 0);
-        const n = scorePairs.length;
-        const correlation
-          = (abSum - aSum * bSum / n) / n
-          / (Math.sqrt(aSqSum / n - (aSum / n) ** 2) * Math.sqrt(bSqSum / n - (bSum / n) ** 2));
-        // If the correlation is large enough:
-        if (correlation > 0.7) {
-          const roundedCorr = correlation.toFixed(2);
-          // Record it and the count of non-zero scores.
-          const nonZero = scorePairs.reduce(
-            (count, current) => count + current.filter(score => score).length, 0
-          );
-          const corrPlusNZ = `${roundedCorr} (${nonZero})`;
-          if (data[packagePair][issueA]) {
-            data[packagePair][issueA][issueB] = corrPlusNZ;
-          }
-          else {
-            data[packagePair][issueA] = {[issueB]: corrPlusNZ};
-          }
-        }
-      });
-    });
-  });
-  return data;
-};
-fs.writeFileSync(
-  `${__dirname}/package/correlations.json`, JSON.stringify(compile(), null, 2)
-);
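The deleted script computes a Pearson correlation coefficient from running sums over the score pairs. A minimal standalone sketch of the same formula (`pearson` is an illustrative name, not part of the package):

```javascript
// Pearson correlation from running sums, as in the deleted
// correlation.js: covariance divided by the product of the
// two standard deviations.
const pearson = pairs => {
  const n = pairs.length;
  const aSum = pairs.reduce((s, [a]) => s + a, 0);
  const bSum = pairs.reduce((s, [, b]) => s + b, 0);
  const abSum = pairs.reduce((s, [a, b]) => s + a * b, 0);
  const aSqSum = pairs.reduce((s, [a]) => s + a ** 2, 0);
  const bSqSum = pairs.reduce((s, [, b]) => s + b ** 2, 0);
  return (abSum - aSum * bSum / n) / n
    / (Math.sqrt(aSqSum / n - (aSum / n) ** 2)
      * Math.sqrt(bSqSum / n - (bSum / n) ** 2));
};

// Perfectly correlated score pairs yield a coefficient of 1.
console.log(pearson([[1, 2], [2, 4], [3, 6]])); // ≈ 1
```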