testaro 4.0.1 → 4.1.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -6,9 +6,7 @@ Federated accessibility test automation

  Testaro is a collection of collections of web accessibility tests.

- The purpose of Testaro is to provide programmatic access to over 600 accessibility tests defined in several test packages and in Testaro itself.
-
- Running Testaro requires telling it which operations (including tests) to perform and which URLs to perform them on, and giving Testaro an object to put its output into.
+ The purpose of Testaro is to provide programmatic access to over 800 accessibility tests defined in several test packages and in Testaro itself.

  ## System requirements

@@ -72,7 +70,7 @@ Once you have done that, you can install Testaro as you would install any `npm`

  ## Payment

- All of the tests that Testaro can perform are free of cost, except those in the WAVE package. WebAIM requires an API key for those tests. If you wish to have Testaro perform the WAVE tests, you will need to have a WAVE API key. Visit the URL above in order to obtain your key. It costs 1 to 3 credits to perform the WAVE tests on one URL. WebAIM gives you 100 credits without cost, before you need to begin paying.
+ All of the tests that Testaro can perform are free of cost, except those in the Tenon and WAVE packages. The owner of each of those packages gives new registrants a free allowance of credits before it becomes necessary to pay for use of the API of the package. The required environment variables for authentication and payment are described below under “Environment variables”.

  ## Specification

@@ -82,7 +80,7 @@ To use Testaro, you must specify what it should do. You do this with a script an

  ### Introduction

- When you run Testaro, you provide a **script** to it. The script contains **commands**. Testaro performs those commands.
+ To use Testaro, you provide a **script** to it. The script contains **commands**. Testaro **runs** the script, i.e. performs the commands in it and writes a report of the results.

  A script is a JSON file with the properties:

@@ -163,7 +161,6 @@ The subsequent commands can tell Testaro to perform any of:
  - navigations (browser launches, visits to URLs, waits for page conditions, etc.)
  - alterations (changes to the page)
  - tests (whether in dependency packages or defined within Testaro)
- - scoring (aggregating test results into total scores)
  - branching (continuing from a command other than the next one)

  ##### Moves
@@ -272,37 +269,6 @@ In case you want to perform more than one `tenon` test, you can do so. Just give

  Tenon recommends giving it a public URL rather than giving it the content of a page, if possible. So, it is best to give the `withNewContent` property of the `tenonRequest` command the value `true`, unless the page is not public.

- ##### Scoring
-
- An example of a **scoring** command is:
-
- ```json
- {
-   "type": "score",
-   "which": "asp09",
-   "what": "5 packages and 16 custom tests, with duplication discounts"
- }
- ```
-
- In this case, Testaro executes the procedure specified in the `asp09` score proc (in the `procs/score` directory) to compute a total score for the script or (if there is a batch) the host. The proc is a JavaScript module whose `scorer` function returns an object containing a total score and the itemized scores that yield the total.
-
- The `scorer` function inspects the script report to find the required data, applies specific weights and formulas to yield the itemized scores, and combines the itemized scores to yield the total score.
-
- The data for scores can include not only test results, but also log statistics. Testaro includes in each report the properties:
- - `logCount`: how many log items the browser generated
- - `logSize`: how large the log items were in the aggregate, in characters
- - `prohibitedCount`: how many log items contain (case-insensitively) `403` and `status`, or `prohibited`
- - `visitTimeoutCount`: how many times an attempt to visit a URL timed out
- - `visitRejectionCount`: how many times a URL visit got an HTTP status other than 200 or 304
-
- Those log statistics can provide data for a log-based test defined in a score proc.
-
- A good score proc takes account of duplications between test packages: two or more packages that discover the same accessibility defects. Score procs can apply discounts to reflect duplications between test packages, so that, if two or more packages discover the same defect, the defect will not be overweighted.
-
- The procedures in the `scoring` directory have produced data there that score procs can use for the calibration of discounts.
-
- Some documents are implemented in such a way that some tests are prevented from being conducted on them. When that occurs, the score proc can **infer** a score for that test.
-

  ##### Branching

  An example of a **branching** command is:
@@ -399,15 +365,35 @@ A typical use for an `expect` property is checking the correctness of a Testaro

  ## Batches

- In some cases you may wish to repeatedly run Testaro with the same script, changing only its `url` commands. The purpose would be to perform the same set of tests on multiple web pages. Such a use would apply only to scripts whose `url` commands are all identical, not to a script that moves from one host to another.
+ You may wish to have Testaro perform the same sequence of tests on multiple web pages. In that case, you can create a _batch_, with the following structure:

- Testaro does not support such batch processing, but Testilo does. See its `README.md` file for instructions.
+ ```javascript
+ {
+   what: 'Web leaders',
+   hosts: [
+     {
+       id: 'w3c',
+       which: 'https://www.w3.org/',
+       what: 'W3C'
+     },
+     {
+       id: 'wikimedia',
+       which: 'https://www.wikimedia.org/',
+       what: 'Wikimedia'
+     }
+   ]
+ }
+ ```
+
+ With a batch, you can execute a single statement to run a script multiple times, once per host. On each run, Testaro takes one of the hosts in the batch and substitutes it for each host specified in a `url` command of the script. Testaro thereby creates and sequentially runs multiple scripts.

  ## Execution

  ### Invocation

- To run Testaro, create a report object like this:
+ There are two methods for using Testaro.
+
+ #### Low-level
+
+ Create a report object like this:

  ```javascript
  const report = {
@@ -418,20 +404,46 @@ const report = {
  };
  ```

- Replace `{…}` with a script object, like the example script shown above.
+ Replace `{…}` with a script object, like the example script shown above. The low-level method does not allow the use of batches.
+
+ Then execute the `run` module with the `report` object as an argument.
+ - Another Node.js package that has Testaro as a dependency can execute `require('testaro').run(report)`.
+ - In a command environment with the Testaro project directory as the current directory, you can execute `node run report`.
+
+ Either statement will make Testaro run the script, populating the `log` and `acts` arrays of the `report` object. When Testaro finishes, those arrays will contain the results.
+
+ You or a dependent package can then save or further process the `report` object as desired.
+
+ #### High-level
+
+ Make sure that you have defined these environment variables, with absolute or relative paths to directories as their values:
+ - `SCRIPTDIR`
+ - `BATCHDIR`
+ - `REPORTDIR`
+
+ Relative paths must be relative to the Testaro project directory. For example, if the script directory is `scripts` in a `testing` directory that is a sibling of the Testaro directory, then `SCRIPTDIR` must have the value `../testing/scripts`.
+
+ Also ensure that Testaro can read all those directories and write to `REPORTDIR`.

- Then execute the statement `require('testaro').handleRequest(report)`. That statement will run Testaro.
+ Place a script into `SCRIPTDIR` and, optionally, a batch into `BATCHDIR`. Each should be named `idValue.json`, where `idValue` is replaced with the value of its `id` property. That value must consist of only lower-case ASCII letters and digits.

- While it runs, Testaro will populate the `log` and `acts` arrays of the report object. When Testaro finishes, the `log` and `acts` properties will contain its results.
+ Then execute the statement `node job scriptID` or `node job scriptID batchID`, replacing `scriptID` and `batchID` with the `id` values of the script and the batch, respectively.

- Another way to run Testaro is to use Testilo, which can handle batches and saves results to files. Testilo prepopulates the report object with an `id` property consisting of a timestamp and, if a batch is used, the host ID. If Testaro finds a non-empty `id` property in the `report` object, Testaro leaves it unchanged; if not, Testaro creates an `id` property with a timestamp value.
+ The `job` module will call the `run` module on the script or, if there is a batch, will create multiple scripts, one per host, and sequentially call the `run` module on each script. The results will be saved in report files in the `REPORTDIR` directory.
+
+ If there is no batch, the report file will be named with a unique timestamp, suffixed with a `.json` extension. If there is a batch, then the base of each file’s name will be the same timestamp, suffixed with `-hostID`, where `hostID` is the value of the `id` property of the `host` object in the batch file. For example, if you execute `node job script01 wikis`, you might get these report files deposited into `REPORTDIR`:
+ - `enp45j-wikipedia.json`
+ - `enp45j-wiktionary.json`
+ - `enp45j-wikidata.json`

  ### Environment variables

- If a `wave` test is included in the script, an environment variable named `TESTARO_WAVE_KEY` must exist, with your WAVE API key as its value.
+ As mentioned above, using the high-level method to run Testaro jobs requires the `SCRIPTDIR`, `BATCHDIR`, and `REPORTDIR` environment variables.

  If a `tenon` test is included in the script, environment variables named `TESTARO_TENON_USER` and `TESTARO_TENON_PASSWORD` must exist, with your Tenon username and password, respectively, as their values.

+ If a `wave` test is included in the script, an environment variable named `TESTARO_WAVE_KEY` must exist, with your WAVE API key as its value.
+
  The `text` command can interpolate the value of an environment variable into text that it enters on a page, as documented in the `commands.js` file.

  Before executing a Testaro script, you can optionally also set the environment variables `TESTARO_DEBUG` (to `'true'` or anything else) and/or `TESTARO_WAITS` (to a non-negative integer). The effects of these variables are described in the `index.js` file.
@@ -440,19 +452,25 @@ You may store these environment variables in an untracked `.env` file if you wis

  ## Validation

- _Executors_ for Testaro validation are located in the `validation` directory.
+ ### Samples
+
+ The `samples` directory contains scripts and a batch that you can use to test Testaro with the high-level method, by giving `SCRIPTDIR` the value `'samples/scripts'` and `BATCHDIR` the value `'samples/batches'`. To do this, you must also define `REPORTDIR`. Then execute `node job simple` or `node job simple weborgs` to run the `simple` script alone or with the `weborgs` batch.

- A basic executor is the `test.js` file. It runs Testaro with a simple sample script and outputs the log and the acts.
+ ### Validators

- The other executors are commonJS JavaScript modules that run Testaro and report whether the results are correct.
+ Testaro can be validated with the _executors_ located in the `validation/executors` directory. Executors are modules that run Testaro with the low-level method and write the results to the standard output.

- The other executors are:
- - `app.js`: Reports whether Testaro runs correctly with a script.
- - `tests.js`: Runs Testaro with each custom test and reports whether the results are correct.
+ The executors are:

- To execute any executor `xyz.js`, call it with the statement `node validation/executors/xyz`. The results will appear in the standard output.
+ - `app`: reports whether Testaro runs correctly with a script
+ - `test`: runs the `simple` sample script
+ - `tests`: makes Testaro perform each custom test and reports whether the results are correct

- The `tests.js` executor makes use of the scripts in the `validation/tests/scripts` directory, and they, in turn, run tests on HTML files in the `validation/tests/targets` directory.
+ There are no executors for validating the test packages.
+
+ To execute any executor `xyz`, call it with the statement `node validation/executors/xyz`.
+
+ The `tests` executor makes use of the scripts in the `validation/tests/scripts` directory, and they, in turn, run tests on HTML files in the `validation/tests/targets` directory.

  ## Contribution

@@ -460,7 +478,7 @@ You can define additional Testaro commands and functionality. Contributions are

  ## Accessibility principles

- The rationales motivating the Testaro-defined tests and scoring procs can be found in comments within the files of those tests and procs, in the `tests` and `procs/score` directories. Unavoidably, each test is opinionated. Testaro itself, however, can accommodate other tests representing different opinions. Testaro is intended to be neutral with respect to questions such as the criteria for accessibility, the severities of accessibility issues, whether accessibility is binary or graded, and the distinction between usability and accessibility.
+ The rationales motivating the Testaro-defined tests can be found in comments within the files of those tests, in the `tests` directory. Unavoidably, each test is opinionated. Testaro itself, however, can accommodate other tests representing different opinions. Testaro is intended to be neutral with respect to questions such as the criteria for accessibility, the severities of accessibility issues, whether accessibility is binary or graded, and the distinction between usability and accessibility.

  ## Testing challenges

@@ -472,11 +490,23 @@ The Playwright “Receives Events” actionability check does **not** check whet

  ### Test-package duplication

- Test packages sometimes do redundant testing, in that two or more packages test for the same issues. But such duplications are not necessarily perfect. Therefore, the scoring procs currently defined by Testaro do not select a single package to test for a single issue. Instead, they allow all packages to test for all the issues they can test for, but decrease the weights placed on issues that multiple packages test for. The more packages test for an issue, the smaller the weight placed on each package’s finding of that issue.
+ Test packages sometimes do redundant testing, in that two or more packages test for the same issues, although such duplications are not necessarily perfect. This fact creates three problems:
+ - One cannot be confident in excluding some tests of some packages on the assumption that they perfectly duplicate tests of other packages.
+ - The Testaro report from a script documents each package’s results separately, so a single defect may be documented in multiple locations within the report, making the consumption of the report inefficient.
+ - An effort to aggregate the results into a single score may distort the scores by inflating the weights of defects that happen to be discovered by multiple packages.
+
+ The tests provided with Testaro do not exclude any apparently duplicative tests from packages.
+
+ To deal with the above problems, you can:
+ - revise package `test` commands to exclude tests that you consider duplicative
+ - create derivative reports that organize results by defect types rather than by package
+ - take duplication into account when defining scoring rules
+
+ Some measures of these kinds are included in the scoring and reporting features of the Testilo package.

  ## Repository exclusions

- The files in the `temp` directory are presumed ephemeral and are not tracked by `git`. When tests require temporary files to be written, Testaro writes them there.
+ The files in the `temp` directory are presumed ephemeral and are not tracked by `git`.

  ## Related packages

@@ -486,7 +516,7 @@ Testaro is derived from [Autotest](https://github.com/jrpool/autotest).

  Testaro omits some functionalities of Autotest, such as:
  - tests producing results intended to be human-inspected
- - previous versions of scoring algorithms
+ - scoring
  - file operations for score aggregation, report revision, and HTML reports
  - a web user interface

package/commands.js CHANGED
@@ -89,13 +89,6 @@ exports.commands = {
        what: [false, 'string', 'hasLength', 'comment']
      }
    ],
-   score: [
-     'Compute and report a score',
-     {
-       which: [true, 'string', 'hasLength', 'score-proc name'],
-       what: [false, 'string', 'hasLength', 'comment']
-     }
-   ],
    select: [
      'Select a select option',
      {
package/job.js ADDED
@@ -0,0 +1,92 @@
+ /*
+   job.js
+   Manages jobs.
+ */
+
+ // ########## IMPORTS
+
+ // Module to keep secrets.
+ require('dotenv').config();
+ // Module to read and write files.
+ const fs = require('fs/promises');
+ const {handleRequest} = require('./run');
+
+ // ########## CONSTANTS
+ const scriptDir = process.env.SCRIPTDIR;
+ const batchDir = process.env.BATCHDIR;
+ const reportDir = process.env.REPORTDIR;
+
+ // ########## FUNCTIONS
+
+ // Converts a script to a batch-based array of scripts.
+ const batchify = (script, batch, timeStamp) => {
+   const {hosts} = batch;
+   const specs = hosts.map(host => {
+     const newScript = JSON.parse(JSON.stringify(script));
+     newScript.commands.forEach(command => {
+       if (command.type === 'url') {
+         command.which = host.which;
+         command.what = host.what;
+       }
+     });
+     const spec = {
+       id: `${timeStamp}-${host.id}`,
+       script: newScript
+     };
+     return spec;
+   });
+   return specs;
+ };
+ // Runs a no-batch script.
+ const runHost = async (id, script) => {
+   const report = {
+     id,
+     log: [],
+     script,
+     acts: []
+   };
+   await handleRequest(report);
+   const reportJSON = JSON.stringify(report, null, 2);
+   await fs.writeFile(`${reportDir}/${id}.json`, reportJSON);
+ };
+ // Runs a job.
+ exports.job = async (scriptID, batchID) => {
+   if (scriptID) {
+     try {
+       const scriptJSON = await fs.readFile(`${scriptDir}/${scriptID}.json`, 'utf8');
+       const script = JSON.parse(scriptJSON);
+       // Identify the start time and a timestamp.
+       const timeStamp = Math.floor((Date.now() - Date.UTC(2022, 1)) / 2000).toString(36);
+       // If there is a batch:
+       let batch = null;
+       if (batchID) {
+         // Convert the script to a batch-based set of scripts.
+         const batchJSON = await fs.readFile(`${batchDir}/${batchID}.json`, 'utf8');
+         batch = JSON.parse(batchJSON);
+         const specs = batchify(script, batch, timeStamp);
+         // For each script:
+         while (specs.length) {
+           const spec = specs.shift();
+           const {id, script} = spec;
+           // Run it and save the result with a host-suffixed ID.
+           await runHost(id, script);
+         }
+       }
+       // Otherwise, i.e. if there is no batch:
+       else {
+         // Run the script and save the result with a timestamp ID.
+         await runHost(timeStamp, script);
+       }
+     }
+     catch (error) {
+       console.log(`ERROR: ${error.message}\n${error.stack}`);
+     }
+   }
+   else {
+     console.log('ERROR: no script specified');
+   }
+ };
+
+ // ########## OPERATION
+
+ exports.job(process.argv[2], process.argv[3]);
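The report IDs that `job.js` generates come from its one-line timestamp expression. A self-contained sketch of that expression, extracted for illustration (the `timeStamp` name is illustrative, not part of Testaro's API):

```javascript
// Mirrors the timestamp expression in job.js: the count of 2-second
// intervals elapsed since the start of February 2022 (Date.UTC month
// index 1), rendered in base 36. Yields compact, chronologically
// sortable IDs such as 'enp45j'.
const timeStamp = ms => Math.floor((ms - Date.UTC(2022, 1)) / 2000).toString(36);
console.log(timeStamp(Date.now()));
```

Because the epoch is recent and the unit is 2 seconds, the IDs stay short for years, and two jobs get distinct IDs as long as they start more than 2 seconds apart.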
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "testaro",
-   "version": "4.0.1",
+   "version": "4.1.2",
    "description": "Automation of accessibility testing",
    "main": "index.js",
    "scripts": {
@@ -613,17 +613,6 @@ const doActs = async (report, actIndex, page) => {
        // Identify its only page as current.
        page = browserContext.pages()[0];
      }
-     // Otherwise, if it is a score:
-     else if (act.type === 'score') {
-       // Compute and report the score.
-       try {
-         const {scorer} = require(`./procs/score/${act.which}`);
-         act.result = scorer(report.acts);
-       }
-       catch (error) {
-         act.error = `ERROR: ${error.message}\n${error.stack}`;
-       }
-     }
      // Otherwise, if a current page exists:
      else if (page) {
        // If the command is a url:
@@ -1200,25 +1189,6 @@ const doScript = async (report) => {
    report.prohibitedCount = prohibitedCount;
    report.visitTimeoutCount = visitTimeoutCount;
    report.visitRejectionCount = visitRejectionCount;
-   // If logs are to be scored, do so.
-   const scoreTables = report.acts.filter(act => act.type === 'score');
-   if (scoreTables.length) {
-     const scoreTable = scoreTables[0];
-     const {result} = scoreTable;
-     if (result) {
-       const {logWeights, scores} = result;
-       if (logWeights && scores) {
-         scores.log = Math.floor(
-           logWeights.count * logCount
-           + logWeights.size * logSize
-           + logWeights.prohibited * prohibitedCount
-           + logWeights.visitTimeout * visitTimeoutCount
-           + logWeights.visitRejection * visitRejectionCount
-         );
-         scores.total += scores.log;
-       }
-     }
-   }
    // Add the end time and duration to the report.
    const endTime = new Date();
    report.endTime = endTime.toISOString().slice(0, 19);
@@ -0,0 +1,16 @@
+ {
+   "id": "weborgs",
+   "what": "Web organizations",
+   "hosts": [
+     {
+       "id": "mozilla",
+       "which": "https://www.mozilla.org/en-US/",
+       "what": "Mozilla"
+     },
+     {
+       "id": "w3c",
+       "which": "https://www.w3.org/",
+       "what": "W3C"
+     }
+   ]
+ }
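A batch such as `weborgs` above is consumed by the `batchify` function in `job.js`. The following self-contained sketch demonstrates that substitution step; the one-command `script` and one-host `batch` literals are illustrative inputs, not real Testaro files:

```javascript
// Each batch host replaces the target of every `url` command in a deep
// copy of the script, and the copy gets a host-suffixed ID, as in job.js.
const batchify = (script, batch, timeStamp) => batch.hosts.map(host => {
  const newScript = JSON.parse(JSON.stringify(script));
  newScript.commands.forEach(command => {
    if (command.type === 'url') {
      command.which = host.which;
      command.what = host.what;
    }
  });
  return {id: `${timeStamp}-${host.id}`, script: newScript};
});

// Illustrative inputs.
const script = {commands: [{type: 'url', which: 'https://example.com/', what: 'example'}]};
const batch = {hosts: [{id: 'mozilla', which: 'https://www.mozilla.org/en-US/', what: 'Mozilla'}]};
console.log(JSON.stringify(batchify(script, batch, 'enp45j'), null, 2));
```

The deep copy via `JSON.parse(JSON.stringify(...))` matters: each host must get its own script object, with the original left unmodified.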
@@ -1,4 +1,5 @@
  {
+   "id": "simple",
    "what": "Test example.com with bulk",
    "strict": true,
    "commands": [
@@ -1,4 +1,5 @@
  {
+   "id": "tenon",
    "what": "Test Wikipedia with tenon",
    "strict": true,
    "commands": [
@@ -3,6 +3,7 @@

  const report = {
    script: {
+     id: 'script0',
      what: 'Sample Testaro executor with 1 test',
      strict: true,
      commands: [
@@ -31,7 +32,7 @@ const report = {
    log: [],
    acts: []
  };
- const {handleRequest} = require(`${__dirname}/../../index`);
+ const {handleRequest} = require(`${__dirname}/../../run`);
  handleRequest(report)
    .then(
      () => {
@@ -2,7 +2,7 @@
  // Test executor for tenon sample script.

  const fs = require('fs');
- const {handleRequest} = require('../../index');
+ const {handleRequest} = require('../../run');
  const scriptJSON = fs.readFileSync('samples/scripts/tenon.json', 'utf8');
  const script = JSON.parse(scriptJSON);
  const report = {
@@ -2,8 +2,8 @@
  // Test executor.

  const fs = require('fs');
- const {handleRequest} = require('../../index');
- const scriptJSON = fs.readFileSync('samples/scripts/simple.json', 'utf8');
+ const {handleRequest} = require(`${__dirname}/../../run`);
+ const scriptJSON = fs.readFileSync(`${__dirname}/../../samples/scripts/simple.json`, 'utf8');
  const script = JSON.parse(scriptJSON);
  const report = {
    id: '',
@@ -11,8 +11,38 @@ const report = {
    log: [],
    acts: []
  };
- (async () => {
-   await handleRequest(report);
-   console.log(`Report log:\n${JSON.stringify(report.log, null, 2)}\n`);
-   console.log(`Report acts:\n${JSON.stringify(report.acts, null, 2)}`);
- })();
+ handleRequest(report)
+   .then(
+     () => {
+       const {log, acts} = report;
+       if (
+         log.length === 2
+         && log[1].event === 'endTime'
+         && /^\d{4}-.+$/.test(log[0].value)
+         && log[1].value >= log[0].value
+       ) {
+         console.log('Success: Log has been correctly populated');
+       }
+       else {
+         console.log('Failure: Log empty or invalid');
+         console.log(JSON.stringify(log, null, 2));
+       }
+       if (
+         acts.length === 3
+         && acts[0]
+         && acts[0].type === 'launch'
+         && acts[2].result
+         && acts[2].result.visibleElements
+         && typeof acts[2].result.visibleElements === 'number'
+       ) {
+         console.log('Success: Acts have been correctly populated');
+       }
+       else {
+         console.log('Failure: Acts empty or invalid');
+         console.log(JSON.stringify(acts, null, 2));
+       }
+     },
+     rejection => {
+       console.log(`Failure: ${rejection}`);
+     }
+   );
@@ -2,7 +2,7 @@
  // Validator for Testaro tests.

  const fs = require('fs').promises;
- const {handleRequest} = require(`${__dirname}/../../index`);
+ const {handleRequest} = require(`${__dirname}/../../run`);
  const validateTests = async () => {
    const totals = {
      attempts: 0,
@@ -1,76 +0,0 @@
- /*
-   correlation
-   Compiles a list of the correlations between distinct-package issue types and creates a file,
-   correlations.json, containing the list.
- */
- const fs = require('fs');
- const compile = () => {
-   const issuesJSON = fs.readFileSync(`${__dirname}/package/issues.json`, 'utf8');
-   const issues = JSON.parse(issuesJSON);
-   const dataJSON = fs.readFileSync(`${__dirname}/package/data.json`, 'utf8');
-   const reportData = JSON.parse(dataJSON);
-   const reports = Object.values(reportData);
-   // Initialize the list.
-   const data = {
-     aatt_alfa: {},
-     aatt_axe: {},
-     aatt_ibm: {},
-     aatt_wave: {},
-     alfa_axe: {},
-     alfa_ibm: {},
-     alfa_wave: {},
-     axe_ibm: {},
-     axe_wave: {},
-     ibm_wave: {}
-   };
-   // For each pair of packages:
-   const packagePairs = Object.keys(data);
-   packagePairs.forEach(packagePair => {
-     console.log(`=== Starting package pair ${packagePair}`);
-     const packages = packagePair.split('_');
-     // Identify the reports containing results from both packages.
-     const pairReports = reports.filter(report => report[packages[0]] && report[packages[1]]);
-     // For each pair of issues:
-     issues[packages[0]].forEach(issueA => {
-       issues[packages[1]].forEach(issueB => {
-         // Initialize an array of score pairs.
-         const scorePairs = [];
-         // For each applicable report:
-         pairReports.forEach(report => {
-           // Add the scores for the issues to the array of score pairs.
-           const scorePair = [report[packages[0]][issueA] || 0, report[packages[1]][issueB] || 0];
-           scorePairs.push(scorePair);
-         });
-         // Get the correlation between the issues.
-         const aSum = scorePairs.reduce((sum, current) => sum + current[0], 0);
-         const bSum = scorePairs.reduce((sum, current) => sum + current[1], 0);
-         const abSum = scorePairs.reduce((sum, current) => sum + current[0] * current[1], 0);
-         const aSqSum = scorePairs.reduce((sum, current) => sum + current[0] ** 2, 0);
-         const bSqSum = scorePairs.reduce((sum, current) => sum + current[1] ** 2, 0);
-         const n = scorePairs.length;
-         const correlation
-           = (abSum - aSum * bSum / n) / n
-           / (Math.sqrt(aSqSum / n - (aSum / n) ** 2) * Math.sqrt(bSqSum / n - (bSum / n) ** 2));
-         // If the correlation is large enough:
-         if (correlation > 0.7) {
-           const roundedCorr = correlation.toFixed(2);
-           // Record it and the count of non-zero scores.
-           const nonZero = scorePairs.reduce(
-             (count, current) => count + current.filter(score => score).length, 0
-           );
-           const corrPlusNZ = `${roundedCorr} (${nonZero})`;
-           if (data[packagePair][issueA]) {
-             data[packagePair][issueA][issueB] = corrPlusNZ;
-           }
-           else {
-             data[packagePair][issueA] = {[issueB]: corrPlusNZ};
-           }
-         }
-       });
-     });
-   });
-   return data;
- };
- fs.writeFileSync(
-   `${__dirname}/package/correlations.json`, JSON.stringify(compile(), null, 2)
- );
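The deleted `correlation.js` above applies the standard Pearson product-moment formula to arrays of score pairs. A self-contained sketch of that computation, extracted from the sums in the deleted code (the `pearson` name is illustrative):

```javascript
// Pearson correlation of an array of [a, b] score pairs, using the same
// sum-based formula as the deleted correlation.js.
const pearson = pairs => {
  const n = pairs.length;
  const aSum = pairs.reduce((sum, [a]) => sum + a, 0);
  const bSum = pairs.reduce((sum, [, b]) => sum + b, 0);
  const abSum = pairs.reduce((sum, [a, b]) => sum + a * b, 0);
  const aSqSum = pairs.reduce((sum, [a]) => sum + a ** 2, 0);
  const bSqSum = pairs.reduce((sum, [, b]) => sum + b ** 2, 0);
  return (abSum - aSum * bSum / n) / n
    / (Math.sqrt(aSqSum / n - (aSum / n) ** 2) * Math.sqrt(bSqSum / n - (bSum / n) ** 2));
};

// Perfectly correlated scores yield 1; perfectly anti-correlated, -1.
console.log(pearson([[1, 2], [2, 4], [3, 6]]));
console.log(pearson([[1, 3], [2, 2], [3, 1]]));
```

In `correlation.js`, pairs whose correlation exceeded 0.7 were recorded as candidate duplications between packages, data that score procs could use to calibrate discounts.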