testaro 3.0.1 → 4.1.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -6,19 +6,7 @@ Federated accessibility test automation
6
6
 
7
7
  Testaro is a collection of collections of web accessibility tests.
8
8
 
9
- The purpose of Testaro is to provide programmatic access to over 600 accessibility tests defined in several test packages and in Testaro itself.
10
-
11
- Running Testaro requires telling it which operations (including tests) to perform and which URLs to perform them on, and giving Testaro an object to put its output into.
12
-
13
- ## Origin
14
-
15
- Work on the custom tests in this package began in 2017, and work on the multi-package federation that Testaro implements began in early 2018. These two aspects were combined into the [Autotest](https://github.com/jrpool/autotest) package in early 2021 and into this more limited-purpose package, Testaro, in January 2022.
16
-
17
- Testaro omits some functionalities of Autotest, such as:
18
- - tests producing results intended to be human-inspected
19
- - previous versions of scoring algorithms
20
- - file operations for score aggregation, report revision, and HTML reports
21
- - a web user interface
9
+ The purpose of Testaro is to provide programmatic access to over 800 accessibility tests defined in several test packages and in Testaro itself.
22
10
 
23
11
  ## System requirements
24
12
 
@@ -35,6 +23,7 @@ Testaro includes some of its own accessibility tests. In addition, it performs t
35
23
  - [alfa](https://alfa.siteimprove.com/) (Siteimprove alfa)
36
24
  - [Automated Accessibility Testing Tool](https://www.npmjs.com/package/aatt) (Paypal AATT, running HTML CodeSniffer)
37
25
  - [axe-playwright](https://www.npmjs.com/package/axe-playwright) (Deque Axe-core)
26
+ - [Tenon](https://tenon.io/documentation/what-tenon-tests.php)
38
27
  - [WAVE API](https://wave.webaim.org/api/) (WebAIM WAVE)
39
28
 
40
29
  As of this version, the counts of tests in the packages referenced above were:
@@ -42,14 +31,11 @@ As of this version, the counts of tests in the packages referenced above were:
42
31
  - Alfa: 103
43
32
  - Axe-core: 138
44
33
  - Equal Access: 163
34
+ - Tenon: 180
45
35
  - WAVE: 110
46
36
  - subtotal: 612
47
37
  - Testaro tests: 16
48
- - grand total: 628
49
-
50
- ## Related packages
51
-
52
- [Testilo](https://www.npmjs.com/package/testilo) is an application that facilitates the use of Testaro.
38
+ - grand total: 808
53
39
 
54
40
  ## Code organization
55
41
 
@@ -84,7 +70,7 @@ Once you have done that, you can install Testaro as you would install any `npm`
84
70
 
85
71
  ## Payment
86
72
 
87
- All of the tests that Testaro can perform are free of cost, except those in the WAVE package. WebAIM requires an API key for those tests. If you wish to have Testaro perform the WAVE tests, you will need to have a WAVE API key. Visit the URL above in order to obtain your key. It costs 1 to 3 credits to perform the WAVE tests on one URL. WebAIM gives you 100 credits without cost, before you need to begin paying.
73
+ All of the tests that Testaro can perform are free of cost, except those in the Tenon and WAVE packages. The owner of each of those packages gives new registrants a free allowance of credits before it becomes necessary to pay for use of the API of the package. The required environment variables for authentication and payment are described below under “Environment variables”.
88
74
 
89
75
  ## Specification
90
76
 
@@ -94,7 +80,7 @@ To use Testaro, you must specify what it should do. You do this with a script an
94
80
 
95
81
  ### Introduction
96
82
 
97
- When you run Testaro, you provide a **script** to it. The script contains **commands**. Testaro performs those commands.
83
+ To use Testaro, you provide a **script** to it. The script contains **commands**. Testaro **runs** the script, i.e. performs the commands in it and writes a report of the results.
98
84
 
99
85
  A script is a JSON file with the properties:
100
86
 
@@ -175,7 +161,6 @@ The subsequent commands can tell Testaro to perform any of:
175
161
  - navigations (browser launches, visits to URLs, waits for page conditions, etc.)
176
162
  - alterations (changes to the page)
177
163
  - tests (whether in dependency packages or defined within Testaro)
178
- - scoring (aggregating test results into total scores)
179
164
  - branching (continuing from a command other than the next one)
180
165
 
181
166
  ##### Moves
@@ -266,36 +251,23 @@ An example of a **Testaro-defined** test is:
266
251
 
267
252
  In this case, Testaro runs the `motion` test with the specified parameters.
268
253
 
269
- ##### Scoring
270
-
271
- An example of a **scoring** command is:
254
+ ###### Tenon
272
255
 
273
- ```json
274
- {
275
- "type": "score",
276
- "which": "asp09",
277
- "what": "5 packages and 16 custom tests, with duplication discounts"
278
- }
279
- ```
256
+ The `tenon` test requires two commands:
257
+ - A command of type `tenonRequest`.
258
+ - A command of type `test` with `tenon` as the value of `which`.
280
259
 
281
- In this case, Testaro executes the procedure specified in the `asp09` score proc (in the `procs/score` directory) to compute a total score for the script or (if there is a batch) the host. The proc is a JavaScript module whose `scorer` function returns an object containing a total score and the itemized scores that yield the total.
260
+ The reason for this is that the Tenon API operates asynchronously. You ask it to perform a test, and it puts your request into a queue. To learn whether Tenon has completed your test, you make a status request. You can continue making status requests until Tenon replies that your test has been completed. Then you submit a request for the test result, and Tenon replies with the result. (As of May 2022, status requests were observed to misreport still-running tests as completed. The `tenon` test works around that.)
282
261
 
283
- The `scorer` function inspects the script report to find the required data, applies specific weights and formulas to yield the itemized scores, and combines the itemized scores to yield the total score.
262
+ Tenon says that tests are typically completed in 3 to 6 seconds but that the latency can be longer, depending on demand.
284
263
 
285
- The data for scores can include not only test results, but also log statistics. Testaro includes in each report the properties:
286
- - `logCount`: how many log items the browser generated
287
- - `logSize`: how large the log items were in the aggregate, in characters
288
- - `prohibitedCount`: how many log items contain (case-insensitively) `403` and `status`, or `prohibited`
289
- - `visitTimeoutCount`: how many times an attempt to visit a URL timed out
290
- - `visitRejectionCount`: how many times a URL visit got an HTTP status other than 200 or 304
264
+ Therefore, you can include a `tenonRequest` command early in your script, and a `tenon` test late in your script. Tenon will move your request through its queue while Testaro is processing your script. When Testaro reaches your `tenon` test command, Tenon will most likely have completed your test. If not, the `tenon` test will wait and then make a second request before giving up.
291
265
 
292
- Those log statistics can provide data for a log-based test defined in a score proc.
266
+ Thus, a `tenon` test does not actually perform any test; it merely collects the result. The page that was active when the `tenonRequest` command was performed is the one that Tenon tests.
293
267
 
294
- A good score proc takes account of duplications between test packages: two or more packages that discover the same accessibility defects. Score procs can apply discounts to reflect duplications between test packages, so that, if two or more packages discover the same defect, the defect will not be overweighted.
268
+ You can perform more than one `tenon` test in a script. Just give each pair of commands a distinct `id` property, so that each `tenon` test command will request the correct result.
295
269
 
296
- The procedures in the `scoring` directory have produced data there that score procs can use for the calibration of discounts.
297
-
298
- Some documents are implemented in such a way that some tests are prevented from being conducted on them. When that occurs, the score proc can **infer** a score for that test.
270
+ Tenon recommends giving it a public URL rather than giving it the content of a page, if possible. So, it is best to give the `withNewContent` property of the `tenonRequest` command the value `true`, unless the page is not public.
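As a sketch based on the command definitions in `commands.js` (the `id` and `what` values here are illustrative, not taken from a real script), such a pair of commands might look like this:

```json
[
  {
    "type": "tenonRequest",
    "id": "a",
    "withNewContent": true,
    "what": "request a Tenon test of the current page"
  },
  {
    "type": "test",
    "which": "tenon",
    "id": "a",
    "what": "collect the Tenon result"
  }
]
```

The matching `id` values are what let the later `tenon` test collect the result of the earlier request.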
299
271
 
300
272
  ##### Branching
301
273
 
@@ -393,15 +365,35 @@ A typical use for an `expect` property is checking the correctness of a Testaro
393
365
 
394
366
  ## Batches
395
367
 
396
- In some cases you may wish to repeatedly run Testaro with the same script, changing only its `url` commands. The purpose would be to perform the same set of tests on multiple web pages. Such a use would apply only to scripts whose `url` commands are all identical, not to a script that moves from one host to another.
368
+ You may wish to have Testaro perform the same sequence of tests on multiple web pages. In that case, you can create a _batch_, with the following structure:
397
369
 
398
- Testaro does not support such batch processing, but Testilo does. See its `README.md` file for instructions.
370
+ ```javascript
+ {
+   what: 'Web leaders',
+   hosts: [
+     {
+       id: 'w3c',
+       which: 'https://www.w3.org/',
+       what: 'W3C'
+     },
+     {
+       id: 'wikimedia',
+       which: 'https://www.wikimedia.org/',
+       what: 'Wikimedia'
+     }
+   ]
+ }
+ ```
385
+
386
+ With a batch, you can execute a single statement to run a script multiple times, one per host. On each call, Testaro takes one of the hosts in the batch and substitutes it for each host specified in a `url` command of the script. Testaro thereby creates and sequentially runs multiple scripts.
399
387
 
400
388
  ## Execution
401
389
 
402
390
  ### Invocation
403
391
 
404
- To run Testaro, create a report object like this:
392
+ There are two methods for using Testaro.
393
+
394
+ #### Low-level
395
+
396
+ Create a report object like this:
405
397
 
406
398
  ```javascript
407
399
  const report = {
@@ -412,25 +404,63 @@ const report = {
412
404
  };
413
405
  ```
414
406
 
415
- Replace `{…}` with a script object, like the example script shown above.
407
+ Replace `{…}` with a script object, like the example script shown above. The low-level method does not allow the use of batches.
408
+
409
+ Then execute the `run` module with the `report` object as an argument.
410
+ - Another Node.js package that has Testaro as a dependency can execute `require('testaro').run(report)`.
411
+ - In a command environment with the Testaro project directory as the current directory, you can execute `node run report`.
412
+
413
+ Either statement makes Testaro run the script, populating the `log` and `acts` arrays of the `report` object. When Testaro finishes, those arrays contain the results.
414
+
415
+ You or a dependent package can then save or further process the `report` object as desired.
416
+
417
+ #### High-level
418
+
419
+ Make sure that you have defined these environment variables, with absolute or relative paths to directories as their values:
420
+ - `SCRIPTDIR`
421
+ - `BATCHDIR`
422
+ - `REPORTDIR`
423
+
424
+ Relative paths must be relative to the Testaro project directory. For example, if the script directory is `scripts` in a `testing` directory that is a sibling of the Testaro directory, then `SCRIPTDIR` must have the value `../testing/scripts`.
416
425
 
417
- Then execute the statement `require('testaro').handleRequest(report)`. That statement will run Testaro.
426
+ Also ensure that Testaro can read all those directories and write to `REPORTDIR`.
418
427
 
419
- While it runs, Testaro will populate the `log` and `acts` arrays of the report object. When Testaro finishes, the `log` and `acts` properties will contain its results.
428
+ Place a script into `SCRIPTDIR` and, optionally, a batch into `BATCHDIR`. Each should be named `idValue.json`, where `idValue` is replaced with the value of its `id` property. That value must consist of only lower-case ASCII letters and digits.
420
429
 
421
- Another way to run Testaro is to use Testilo, which can handle batches and saves results to files. Testilo prepopulates the report object with an `id` property consisting of a timestamp and, if a batch is used, the host ID. If Testaro finds a non-empty `id` property in the `report` object, Testaro leaves it unchanged; if not, Testaro creates an `id` property with a timestamp value.
430
+ Then execute the statement `node job scriptID` or `node job scriptID batchID`, replacing `scriptID` and `batchID` with the `id` values of the script and the batch, respectively.
431
+
432
+ The `job` module will call the `run` module on the script, or, if there is a batch, will create multiple scripts, one per host, and sequentially call the `run` module on each script. The results will be saved in report files in the `REPORTDIR` directory.
433
+
434
+ If there is no batch, the report file will be named with a unique timestamp, suffixed with a `.json` extension. If there is a batch, then the base of each file’s name will be the same timestamp, suffixed with `-hostID`, where `hostID` is the value of the `id` property of the `host` object in the batch file. For example, if you execute `node job script01 wikis`, you might get these report files deposited into `REPORTDIR`:
435
+ - `enp45j-wikipedia.json`
436
+ - `enp45j-wiktionary.json`
437
+ - `enp45j-wikidata.json`
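Putting the high-level method together, a hypothetical session (the directory paths are illustrative; `script01` and `wikis` are the script and batch IDs from the example above) might look like:

```shell
# Directories for the high-level method; adjust paths to your setup.
export SCRIPTDIR=../testing/scripts
export BATCHDIR=../testing/batches
export REPORTDIR=../testing/reports

# Run the script with id script01 against the batch with id wikis.
node job script01 wikis
```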
422
438
 
423
439
  ### Environment variables
424
440
 
441
+ As mentioned above, using the high-level method to run Testaro jobs requires `SCRIPTDIR`, `BATCHDIR`, and `REPORTDIR` environment variables.
442
+
443
+ If a `tenon` test is included in the script, environment variables named `TESTARO_TENON_USER` and `TESTARO_TENON_PASSWORD` must exist, with your Tenon username and password, respectively, as their values.
444
+
425
445
  If a `wave` test is included in the script, an environment variable named `TESTARO_WAVE_KEY` must exist, with your WAVE API key as its value.
426
446
 
447
+ The `text` command can interpolate the value of an environment variable into text that it enters on a page, as documented in the `commands.js` file.
448
+
427
449
  Before executing a Testaro script, you can optionally also set the environment variables `TESTARO_DEBUG` (to `'true'` or anything else) and/or `TESTARO_WAITS` (to a non-negative integer). The effects of these variables are described in the `index.js` file.
428
450
 
451
+ You may store these environment variables in an untracked `.env` file if you wish, and Testaro will recognize them.
452
+
429
453
  ## Validation
430
454
 
455
+ ### Samples
456
+
457
+ The `samples` directory contains scripts and a batch that you can use to test Testaro with the high-level method, by giving `SCRIPTDIR` the value `'samples/scripts'` and `BATCHDIR` the value `'samples/batches'`. To do this, you must also define `REPORTDIR`.
458
+
459
+ ### Validators
460
+
431
461
  _Executors_ for Testaro validation are located in the `validation` directory.
432
462
 
433
- A basic executor is the `test.js` file. It runs Testaro with a simple sample script and outputs the log and the acts.
463
+ A basic executor is the `test.js` file. It uses the low-level method to run Testaro with the `simple.js` sample script and outputs the log and the acts to the standard output.
434
464
 
435
465
  The other executors are CommonJS JavaScript modules that run Testaro and report whether the results are correct.
436
466
 
@@ -438,6 +468,8 @@ The other executors are:
438
468
  - `app.js`: Reports whether Testaro runs correctly with a script.
439
469
  - `tests.js`: Runs Testaro with each custom test and reports whether the results are correct.
440
470
 
471
+ There are no executors for validating the test packages.
472
+
441
473
  To execute any executor `xyz.js`, call it with the statement `node validation/executors/xyz`. The results will appear in the standard output.
442
474
 
443
475
  The `tests.js` executor makes use of the scripts in the `validation/tests/scripts` directory, and they, in turn, run tests on HTML files in the `validation/tests/targets` directory.
@@ -448,22 +480,7 @@ You can define additional Testaro commands and functionality. Contributions are
448
480
 
449
481
  ## Accessibility principles
450
482
 
451
- The rationales motivating the Testaro-defined tests and scoring procs can be found in comments within the files of those tests and procs, in the `tests` and `procs/score` directories. Unavoidably, each test is opinionated. Testaro itself, however, can accommodate other tests representing different opinions. Testaro is intended to be neutral with respect to questions such as the criteria for accessibility, the severities of accessibility issues, whether accessibility is binary or graded, and the distinction between usability and accessibility.
452
-
453
- ### Future work
454
-
455
- Further development is contemplated, is taking place, or is welcomed, on:
456
- - addition of Tenon to the set of packages
457
- - links with href="#"
458
- - links and buttons styled non-distinguishably
459
- - first focused element not first focusable element in DOM
460
- - never-visible skip links
461
- - buttons with no text content
462
- - modal dialogs
463
- - autocomplete attributes
464
- - inclusion of other test packages, such as:
465
- - FAE (https://github.com/opena11y/evaluation-library)
466
- - Tenon
483
+ The rationales motivating the Testaro-defined tests can be found in comments within the files of those tests, in the `tests` directory. Unavoidably, each test is opinionated. Testaro itself, however, can accommodate other tests representing different opinions. Testaro is intended to be neutral with respect to questions such as the criteria for accessibility, the severities of accessibility issues, whether accessibility is binary or graded, and the distinction between usability and accessibility.
467
484
 
468
485
  ## Testing challenges
469
486
 
@@ -475,22 +492,74 @@ The Playwright “Receives Events” actionability check does **not** check whet
475
492
 
476
493
  ### Test-package duplication
477
494
 
478
- Test packages sometimes do redundant testing, in that two or more packages test for the same issues. But such duplications are not necessarily perfect. Therefore, the scoring procs currently defined by Testaro do not select a single package to test for a single issue. Instead, they allow all packages to test for all the issues they can test for, but decrease the weights placed on issues that multiple packages test for. The more packages test for an issue, the smaller the weight placed on each package’s finding of that issue.
495
+ Test packages sometimes do redundant testing, in that two or more packages test for the same issues, although such duplications are not necessarily perfect. This fact creates three problems:
496
+ - One cannot be confident in excluding some tests of some packages on the assumption that they perfectly duplicate tests of other packages.
497
+ - The Testaro report from a script documents each package’s results separately, so a single defect may be documented in multiple locations within the report, making the consumption of the report inefficient.
498
+ - An effort to aggregate the results into a single score may distort the scores by inflating the weights of defects that happen to be discovered by multiple packages.
499
+
500
+ The tests provided with Testaro do not exclude any apparently duplicative tests from packages.
501
+
502
+ To deal with the above problems, you can:
503
+ - revise package `test` commands to exclude tests that you consider duplicative
504
+ - create derivative reports that organize results by defect types rather than by package
505
+ - take duplication into account when defining scoring rules
506
+
507
+ Some measures of these kinds are included in the scoring and reporting features of the Testilo package.
479
508
 
480
509
  ## Repository exclusions
481
510
 
482
- The files in the `temp` directory are presumed ephemeral and are not tracked by `git`. When tests require temporary files to be written, Testaro writes them there.
511
+ The files in the `temp` directory are presumed ephemeral and are not tracked by `git`.
483
512
 
484
- ## Origin
513
+ ## Related packages
485
514
 
486
- Testaro is derived from [Autotest](https://github.com/jrpool/autotest), which in turn is derived from accessibility test investigations beginning in 2018.
515
+ [Testilo](https://www.npmjs.com/package/testilo) is an application that facilitates the use of Testaro.
516
+
517
+ Testaro is derived from [Autotest](https://github.com/jrpool/autotest).
487
518
 
488
519
  Testaro omits some functionalities of Autotest, such as:
489
520
  - tests producing results intended to be human-inspected
490
- - previous versions of scoring algorithms
521
+ - scoring
491
522
  - file operations for score aggregation, report revision, and HTML reports
492
523
  - a web user interface
493
524
 
525
+ ## Origin
526
+
527
+ Work on the custom tests in this package began in 2017, and work on the multi-package federation that Testaro implements began in early 2018. These two aspects were combined into the [Autotest](https://github.com/jrpool/autotest) package in early 2021 and into this more narrowly focused package, Testaro, in January 2022.
528
+
494
529
  ## Etymology
495
530
 
496
531
  “Testaro” means “collection of tests” in Esperanto.
532
+
533
+ ## Future work
534
+
535
+ ### Improvements
536
+
537
+ Further development is contemplated, is taking place, or is welcomed, on:
538
+ - addition of Tenon to the set of packages
539
+ - links with href="#"
540
+ - links and buttons styled non-distinguishably
541
+ - first focused element not first focusable element in DOM
542
+ - never-visible skip links
543
+ - buttons with no text content
544
+ - modal dialogs
545
+ - autocomplete attributes
546
+ - inclusion of other test packages, such as:
547
+ - FAE (https://github.com/opena11y/evaluation-library)
548
+ - Tenon
549
+
550
+ ## Corrections
551
+
552
+ Issues found or reported with the current version that need diagnosis and correction include:
553
+
554
+ ### hover
555
+
556
+ There seem to be two problems with the `hover` test:
557
+ - The score for unhoverability is documented as 2 times the count of unhoverables, but is reported as only 1 times that count.
558
+ - The list of unhoverables in the report is empty.
559
+ Observed after inquiry by Tobias Christian Jensen of Siteimprove on 2022-05-09.
560
+
561
+ ### axe
562
+
563
+ Configuration to include best practices and experimental tests.
564
+
565
+ Investigation of tags, including `wcag2a`, `wcag2aa`, `wcag21a`, `wcag21aa`, `best-practice`, `wcag***`, `ACT`, and `cat.*`.
package/commands.js CHANGED
@@ -89,13 +89,6 @@ exports.commands = {
89
89
  what: [false, 'string', 'hasLength', 'comment']
90
90
  }
91
91
  ],
92
- score: [
93
- 'Compute and report a score',
94
- {
95
- which: [true, 'string', 'hasLength', 'score-proc name'],
96
- what: [false, 'string', 'hasLength', 'comment']
97
- }
98
- ],
99
92
  select: [
100
93
  'Select a select option',
101
94
  {
@@ -118,6 +111,14 @@ exports.commands = {
118
111
  what: [false, 'string', 'hasLength', 'comment']
119
112
  }
120
113
  ],
114
+ tenonRequest: [
115
+ 'Request a Tenon test',
116
+ {
117
+ id: [true, 'string', 'hasLength', 'ID for this test instance'],
118
+ withNewContent: [true, 'boolean', '', 'true: use a URL; false: use page content'],
119
+ what: [false, 'string', 'hasLength', 'comment']
120
+ }
121
+ ],
121
122
  text: [
122
123
  'Enter text into a text input, optionally with 1 placeholder for an all-caps literal environment variable',
123
124
  {
@@ -233,6 +234,12 @@ exports.commands = {
233
234
  withItems: [true, 'boolean']
234
235
  }
235
236
  ],
237
+ tenon: [
238
+ 'Perform a Tenon test',
239
+ {
240
+ id: [true, 'string', 'hasLength', 'ID of the requested test instance']
241
+ }
242
+ ],
236
243
  wave: [
237
244
  'Perform a WebAIM WAVE test',
238
245
  {
package/job.js ADDED
@@ -0,0 +1,92 @@
1
+ /*
2
+ job.js
3
+ Manages jobs.
4
+ */
5
+
6
+ // ########## IMPORTS
7
+
8
+ // Module to keep secrets.
9
+ require('dotenv').config();
10
+ // Module to read and write files.
11
+ const fs = require('fs/promises');
12
+ const { handleRequest } = require('./run');
13
+
14
+ // ########## CONSTANTS
15
+ const scriptDir = process.env.SCRIPTDIR;
16
+ const batchDir = process.env.BATCHDIR;
17
+ const reportDir = process.env.REPORTDIR;
18
+
19
+ // ########## FUNCTIONS
20
+
21
+ // Converts a script to a batch-based array of scripts.
22
+ const batchify = (script, batch, timeStamp) => {
23
+ const {hosts} = batch;
24
+ const specs = hosts.map(host => {
25
+ const newScript = JSON.parse(JSON.stringify(script));
26
+ newScript.commands.forEach(command => {
27
+ if (command.type === 'url') {
28
+ command.which = host.which;
29
+ command.what = host.what;
30
+ }
31
+ });
32
+ const spec = {
33
+ id: `${timeStamp}-${host.id}`,
34
+ script: newScript
35
+ };
36
+ return spec;
37
+ });
38
+ return specs;
39
+ };
40
+ // Runs a no-batch script.
41
+ const runHost = async (id, script) => {
42
+ const report = {
43
+ id,
44
+ log: [],
45
+ script,
46
+ acts: []
47
+ };
48
+ await handleRequest(report);
49
+ const reportJSON = JSON.stringify(report, null, 2);
50
+ await fs.writeFile(`${reportDir}/${id}.json`, reportJSON);
51
+ };
52
+ // Runs a job.
53
+ exports.handleRequest = async (scriptID, batchID) => {
54
+ if (scriptID) {
55
+ try {
56
+ const scriptJSON = await fs.readFile(`${scriptDir}/${scriptID}.json`, 'utf8');
57
+ const script = JSON.parse(scriptJSON);
58
+ // Create a timestamp to identify this run.
59
+ const timeStamp = Math.floor((Date.now() - Date.UTC(2022, 1)) / 2000).toString(36);
60
+ // If there is a batch:
61
+ let batch = null;
62
+ if (batchID) {
63
+ // Convert the script to a batch-based set of scripts.
64
+ const batchJSON = await fs.readFile(`${batchDir}/${batchID}.json`, 'utf8');
65
+ batch = JSON.parse(batchJSON);
66
+ const specs = batchify(script, batch, timeStamp);
67
+ // For each script:
68
+ while (specs.length) {
69
+ const spec = specs.shift();
70
+ const {id, script} = spec;
71
+ // Run it and save the result with a host-suffixed ID.
72
+ await runHost(id, script);
73
+ }
74
+ }
75
+ // Otherwise, i.e. if there is no batch:
76
+ else {
77
+ // Run the script and save the result with a timestamp ID.
78
+ await runHost(timeStamp, script);
79
+ }
80
+ }
81
+ catch(error) {
82
+ console.log(`ERROR: ${error.message}\n${error.stack}`);
83
+ }
84
+ }
85
+ else {
86
+ console.log('ERROR: no script specified');
87
+ }
88
+ };
89
+
90
+ // ########## OPERATION
91
+
92
+ exports.handleRequest(process.argv[2], process.argv[3]);
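The report IDs produced by `job.js` come from the `timeStamp` expression above: since `Date.UTC(2022, 1)` is February 2022 (JavaScript months are zero-indexed), the ID is a base-36 count of 2-second intervals elapsed since then. A standalone sketch of the scheme:

```javascript
// Base-36 timestamp: 2-second intervals elapsed since February 2022 (UTC),
// mirroring the timeStamp expression in job.js.
const stamp = (nowMs = Date.now()) =>
  Math.floor((nowMs - Date.UTC(2022, 1)) / 2000).toString(36);

// Current timestamp: a short string such as 'enp45j'.
console.log(stamp());
```

Because the epoch is recent and the unit is 2 seconds, the resulting IDs stay short while remaining unique across sequential runs.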
package/package.json CHANGED
@@ -1,6 +1,6 @@
1
1
  {
2
2
  "name": "testaro",
3
- "version": "3.0.1",
3
+ "version": "4.1.0",
4
4
  "description": "Automation of accessibility testing",
5
5
  "main": "index.js",
6
6
  "scripts": {
@@ -30,6 +30,7 @@
30
30
  "aatt": "*",
31
31
  "accessibility-checker": "*",
32
32
  "axe-playwright": "*",
33
+ "dotenv": "*",
33
34
  "pixelmatch": "*",
34
35
  "playwright": "*"
35
36
  },