testilo 44.2.4 → 44.4.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +171 -52
- package/package.json +1 -1
- package/procs/score/tsp.js +112 -95
package/README.md
CHANGED
````diff
@@ -1,4 +1,5 @@
 # testilo
+
 Utilities for ensemble web accessibility testing
 
 ## Introduction
@@ -52,13 +53,14 @@ The use of these environment variables is explained below.
 
 ## Job preparation
 
-###
+### Job preparation introduction
 
 Testaro executes _jobs_. In a job, Testaro performs _acts_ (tests and other operations) on _targets_ (typically, web pages). The Testaro `README.md` file specifies the requirements for a job.
 
 You can create a job for Testaro directly, without using Testilo.
 
 Testilo can, however, make job preparation more efficient in these scenarios:
+
 - You want to perform a battery of tests on multiple targets.
 - You want to test targets only for particular issues, using whichever tools happen to have tests for those issues.
 
@@ -144,11 +146,11 @@ A batch is a JavaScript object. It can be converted to JSON and stored in a file
 
 If you have a target list, the `batch` module of Testilo can convert it to a simple batch. The batch will contain, for each target, only one act group, named `main`, containing no acts.
 
-####
+#### Batch invocation
 
 There are two ways to use the `batch` module.
 
-#####
+##### Batch invocation by a module
 
 A module can invoke `batch()` in this way:
 
@@ -169,7 +171,7 @@ The `batch()` function of the `batch` module generates a batch and returns it as
 
 The invoking module can further dispose of the batch as needed.
 
-#####
+##### Batch invocation by a user
 
 A user can invoke `batch()` in this way:
 
@@ -251,6 +253,7 @@ Here is a script:
 ```
 
 A script has several properties that specify facts about the jobs to be created. They include:
+
 - `id`: an ID. A script can be converted from a JavaScript object to JSON and saved in a file in the `SPECDIR` directory, where it will be named by its ID (e.g., if the ID is `ts99`, the file name will be `ts99.json`). Each script needs an `id` with a unique value composed of alphanumeric ASCII characters.
 - `what`: a description of the script.
 - `strict`: `true` if Testaro is to abort jobs when a target redirects a request to a URL differing substantially from the one specified. If `false` Testaro is to allow redirection. All differences are considered substantial unless the URLs differ only in the presence and absence of a trailing slash.
@@ -274,18 +277,20 @@ As shown in this example, it is possible for any particular placeholder to overr
 
 You can use the `script()` function of the `script` module to simplify the creation of scripts.
 
-####
+#### Script creation without options
 
 In its simplest form, `script()` requires 3 string arguments:
+
 1. An ID for the script
 1. A description of the script
 1. A device ID
 
 Called in this way, `script()` produces a script that tells Testaro to perform the tests for all of the evaluation rules defined by all of the tools integrated by Testaro. In this case, the script launches a new Chromium browser before performing the tests of each tool.
 
-####
+#### Script creation with options
 
 If you want a more focused script, you can add an additional option argument to `script()`. The option argument lets you restrict the rules to be tested for. You may choose between restrictions of two types:
+
 - Tools
 - Issues
 
@@ -318,11 +323,11 @@ If you specify issue options, the script will prescribe the tests for all evalua
 
 For example, one issue in the `tic43.js` file is `mainNot1`. Four rules are classified as belonging to that issue: rule `main_element_only_one` of the `aslint` tool and 3 more rules defined by 3 other tools. You can also create custom classifications and save them in a `score` subdirectory of the `FUNCTIONDIR` directory.
 
-####
+#### Script invocation
 
 There are two ways to use the `script` module.
 
-#####
+##### Script invocation by a module
 
 A module can invoke `script()` in one of these ways:
 
@@ -341,7 +346,7 @@ In this example, the script will have `'monthly'` as its ID, `'landmarks'` as it
 
 The invoking module can further modify and use the script (`scriptObj`) as needed.
 
-#####
+##### Script invocation by a user
 
 A user can invoke `script()` by executing one of these statements in the Testilo project directory:
 
@@ -363,9 +368,10 @@ The `call` module will retrieve the named classification, if any.
 The `script` module will create a script.
 The `call` module will save the script as a JSON file in the `scripts` subdirectory of the `SPECDIR` directory, using the `id` value as the base of the file name.
 
-####
+#### Script properties
 
 When the `script` module creates a script for you, it does not ask you for all of the property values that the script may require. Instead, it chooses these default values:
+
 - `strict`: `false`
 - `isolate`: `true`
 - `standard`: `'only'`
@@ -393,11 +399,11 @@ After you invoke `script`, you can edit the script that it creates to revise any
 
 Testilo merges batches with scripts, producing Testaro jobs, by means of the `merge` module.
 
-####
+#### Merge invocation
 
 There are two ways to use the `merge` module.
 
-#####
+##### Merge invocation by a module
 
 A module can invoke `merge()` in this way:
 
@@ -413,7 +419,7 @@ The first two arguments are a script and a batch obtained from files or from pri
 
 The `merge()` function returns an array of jobs, one job per target in the batch. The invoking module can further dispose of the jobs as needed.
 
-#####
+##### Merge invocation by a user
 
 A user can invoke `merge()` in this way:
 
@@ -426,6 +432,7 @@ node call merge scriptID batchID executionTimeStamp todoDir
 ```
 
 In this statement, replace:
+
 - `scriptID` with the ID (which is also the base of the file name) of the script.
 - `batchID` with the ID (which is also the base of the file name) of the batch.
 - `executionTimeStamp` with a time stamp in format `yymmddThhMM` representing the UTC date and time before which the jobs are not to be executed, or `''` if it is now.
@@ -435,9 +442,10 @@ The `call` module will retrieve the named script and batch from their respective
 The `merge` module will create an array of jobs.
 The `call` module will save the jobs as JSON files in the `todo` or `pending` subdirectory of the `JOBDIR` directory.
 
-####
+#### Merge output
 
 A Testaro job produced by `merge` will be identical to the script from which it was derived (see the example above), except that:
+
 - The `id` property of the job will be revised to uniquely identify the job.
 - The originally empty properties will be populated, as in this example:
 
@@ -458,19 +466,21 @@ A Testaro job produced by `merge` will be identical to the script from which it
 …
 ```
 
-####
+#### Merge validation
 
 To test the `merge` module, in the project directory you can execute the statement `node validation/merge/validate`. If `merge` is valid, all logging statements will begin with “Success” and none will begin with “ERROR”.
 
 ## Report enhancement
 
-###
+### Report enhancement introduction
 
 Testaro executes jobs and produces reports of test results. A report is identical to a job (see the example above), except that:
+
 - The acts contain additional data recorded by Testaro to describe the results of the performance of the acts. Acts of type `test` have additional data describing test results (successes, failures, and details).
 - Testaro also adds a `jobData` property, describing information not specific to any particular act.
 
 Thus, a report produced by Testaro contains these properties:
+
 - `id`
 - `what`
 - `strict`
@@ -485,6 +495,7 @@ Thus, a report produced by Testaro contains these properties:
 - `jobData`
 
 Testilo can enhance such a report by:
+
 - adding scores
 - creating digests
 - creating difgests
@@ -498,6 +509,7 @@ Testilo can enhance such a report by:
 The `score` module of Testilo performs computations on test results and adds a `score` property to a report.
 
 The `score()` function of the `score` module takes two arguments:
+
 - a scoring function
 - a report object
 
@@ -553,15 +565,11 @@ The `quality` property is usually 1, but if the test of the rule is known to be
 
 Some issue objects (such as `flash` in `tic40.js`) have a `max` property, equal to the maximum possible count of instances. That property allows a scorer to ascribe a greater weight to an instance of that issue.
 
-
-
-A scorer adds a `score` property to the report that it scores.
-
-#### Invocation
+#### Score invocation
 
 There are two ways to invoke the `score` module.
 
-#####
+##### Score invocation by a module
 
 A module can invoke `score()` in this way:
 
@@ -578,7 +586,7 @@ The second argument to `score()` is a report object. It may have been read from
 
 The invoking module can further dispose of the scored report as needed.
 
-#####
+##### Score invocation by a user
 
 A user can invoke `score()` in this way:
 
@@ -588,12 +596,110 @@ node call score tsp99 240922
 ```
 
 When a user invokes `score()` in this example, the `call` module:
+
 - gets the scoring module `tsp99` from its JSON file `tsp99.json` in the `score` subdirectory of the `FUNCTIONDIR` directory.
 - gets all reports, or if the third argument to `call()` exists the reports whose file names begin with `'240922'`, from the `raw` subdirectory of the `REPORTDIR` directory.
 - adds score data to each report.
 - writes each scored report in JSON format to the `scored` subdirectory of the `REPORTDIR` directory.
 
-####
+#### Scorer output
+
+The scorer module contains a `scorer` function. The function takes a report as its only argument and modifies the report in place by adding a `score` property to it. You can create any `scorer` function and use it to create a `score` property that can have any structure that your intended subsequent analysis will require.
+
+Testilo provides a scorer module with a `scorer` function in `procs/score/tsp.js`. If you wish, you may reference that function when you call the `score` function. If you do, the `score` property added to the report will have this structure:
+
+```javascript
+score: {
+  scoreProcID: 'tsp',
+  weights: { // Weights determining the contributions of facts to the total score.
+    severities: [1, 2, 3, 4],
+    tool: 0.1,
+    element: 2,
+    log: {
+      logCount: 0.1,
+      logSize: 0.002,
+      errorLogCount: 0.2,
+      errorLogSize: 0.004,
+      prohibitedCount: 3,
+      visitRejectionCount: 2
+    },
+    latency: 2,
+    prevention: 300,
+    testaroRulePrevention: 30,
+    maxInstanceCount: 30
+  },
+  normalLatency: 22,
+  summary: {
+    total: 0, // Total score, the sum of the following subscores.
+    issueCount: 0,
+    issue: 0,
+    solo: 0,
+    tool: 0,
+    element: 0,
+    prevention: 0,
+    log: 0,
+    latency: 0
+  },
+  details: {
+    severity: { // Counts of violations by the severities assigned by their tools.
+      total: [0, 0, 0, 0],
+      byTool: {
+        toolA: [0, 0, 0, 0]
+      }
+    },
+    prevention: { // Subscores due to pages preventing tools from performing tests.
+      toolB: 300,
+      testaro: 90
+    },
+    issue: {
+      issueA: { // Details on violations of rules classified as belonging to issue A.
+        summary: 'Summary of issue A',
+        wcag: '1.1.1',
+        score: 0,
+        maxCount: 0, // Count of violations after discounting for inferred duplication.
+        weight: 4,
+        countLimit: 30, // Adjuster if the count of violations per page is inherently limited.
+        instanceCounts: {
+          toolC: 0
+        },
+        tools: {
+          toolC: {
+            ruleC0: {
+              quality: 1, // Estimated quality of the test for the rule (0 to 1).
+              what: 'Description of rule C0',
+              violations: {
+                countTotal: 0,
+                descriptions: [
+                  'Description 0 of violation',
+                  'Description 1 of violation'
+                ]
+              }
+            }
+          }
+        }
+      }
+    },
+    solo: { // Rules violated but not classified as belonging to any issue.
+      toolA: {
+        ruleA0: 1 // Rule and count of violations.
+      }
+    },
+    tool: { // Subscores due to tools reporting violations of their rules.
+      toolA: 0
+    },
+    element: { // Xpaths of elements reported by sets of tools as violating rules in issues.
+      issueA: {
+        'toolA + toolB': [
+          '/html/body/div[2]',
+          '/html/body/div[2]/p[1]'
+        ]
+      }
+    }
+  }
+}
+```
+
+#### Score validation
 
 To test the `score` module, in the project directory you can execute the statement `node validation/score/validate`. If `score` is valid, all logging statements will begin with “Success” and none will begin with “ERROR”.
 
@@ -606,6 +712,7 @@ Any scored report is based on a set of tests of a set of tools. Suppose you want
 A typical use case is your desire to examine results for only one or only some of the tools that were used for a report. All the needed information is in the report, so it is not necessary to create, perform, and await a new job and report. You want a new report whose standard results and score data are what a new job would have produced.
 
 The `rescore()` function of the `rescore` module takes four arguments:
+
 - a scoring function
 - a report object
 - a restriction type (`'tools'` or `'issues'`)
@@ -614,6 +721,7 @@ The `rescore()` function of the `rescore` module takes four arguments:
 Then the `rescore()` function copies the report, removes the no-longer-relevant acts, removes the no-longer-relevant instances from and revises the totals of the `standardResult` properties, replaces the `score` property with a new one, and returns the revised report.
 
 The new report is not identical to the report that a new job would have produced, because:
+
 - Any original (non-standardized) results and data that survive in the new report are not revised.
 - Any scores arising from causes other than test results, such as latency or browser warnings, are not revised.
 - The `score` property object now includes a `rescore` property that identifies the original report ID (in case it is later changed), the date and time of the rescoring, the restriction type, and an array of the tool or issue IDs included by the restriction.
@@ -647,6 +755,7 @@ node call rescore tsp99 240922 tools axe nuVal
 ```
 
 When a user invokes `rescore()` in this example, the `call` module:
+
 - gets the scoring module `tsp99` from its JSON file `tsp99.json` in the `score` subdirectory of the `FUNCTIONDIR` directory.
 - gets all reports, or if the third argument to `call()` is nonempty the reports whose file names begin with `'240922'`, from the `scored` subdirectory of the `REPORTDIR` directory.
 - defines an ID suffix.
@@ -655,17 +764,18 @@ When a user invokes `rescore()` in this example, the `call` module:
 - appends the ID suffix to the ID of each report.
 - writes each rescored report in JSON format to the `scored` subdirectory of the `REPORTDIR` directory.
 
-####
+#### Rescore validation
 
 To test the `rescore` module, in the project directory you can execute the statement `node validation/rescore/validate`. If `rescore` is valid, all logging statements will begin with “Success” and none will begin with “ERROR”.
 
 ### Digesting
 
-####
+#### Digesting introduction
 
 Reports from Testaro are JavaScript objects. When represented as JSON, they are human-readable, but not human-friendly. They are basically designed for machine tractability. This is equally true for reports that have been scored by Testilo. But Testilo can _digest_ a scored report, converting it to a human-oriented HTML document, or _digest_.
 
 The `digest` module digests a scored report. Its `digest()` function takes two arguments:
+
 - a digester (a digesting function)
 - a scored report object
 
@@ -673,11 +783,11 @@ The digester populates an HTML digest template. A copy of the template, with its
 
 The included templates format placeholders with leading and trailing underscore pairs (such as `__issueCount__`).
 
-####
+#### Digest invocation
 
 There are two ways to use the `digest` module.
 
-#####
+##### Digest invocation by a module
 
 A module can invoke `digest()` in this way:
 
@@ -696,7 +806,7 @@ The second argument to `digest()` is a scored report object. It may have been re
 
 The `digest()` function returns a promise resolved with a digest. The invoking module can further dispose of the digest as needed.
 
-#####
+##### Digest invocation by a user
 
 A user can invoke `digest()` in this way:
 
@@ -706,6 +816,7 @@ node call digest tdp99 241105
 ```
 
 When a user invokes `digest()` in this example, the `call` module:
+
 - gets the template and the digesting module from subdirectory `tdp99` in the `digest` subdirectory of the `FUNCTIONDIR` directory.
 - gets all reports, or if the third argument to `call()` exists all reports whose file names begin with `'241105'`, from the `scored` subdirectory of the `REPORTDIR` directory.
 - digests each report.
@@ -718,7 +829,7 @@ The digests created by `digest()` are HTML files, and they expect a `style.css`
 
 ### Difgesting
 
-####
+#### Difgesting introduction
 
 A _difgest_ is a digest that compares two reports. They can be reports of different targets, or reports of the same target from two different times or under two different conditions.
 
@@ -732,11 +843,11 @@ The `difgest` module difgests two scored reports. Its `difgest()` function takes
 
 The difgest template and module operate like the digest ones.
 
-####
+#### Difgest invocation
 
 There are two ways to use the `difgest` module.
 
-#####
+##### Difgest invocation by a module
 
 A module can invoke `difgest()` in this way:
 
@@ -756,7 +867,7 @@ The difgest will include links to the two digests, which, in turn, contain links
 
 `difgest()` returns a difgest. The invoking module can further dispose of the difgest as needed.
 
-#####
+##### Difgest invocation by a user
 
 A user can invoke `difgest()` in this way:
 
@@ -765,6 +876,7 @@ node call difgest tfp99 20141215T1200-x7-3 20141215T1200-x7-4
 ```
 
 When a user invokes `difgest` in this example, the `call` module:
+
 - gets the template and the difgesting module from subdirectory `tfp99` in the `difgest` subdirectory of the `FUNCTIONDIR` directory.
 - gets reports `20141215T1200-x7-3` and `20141215T1200-x7-4` from the `scored` subdirectory of the `REPORTDIR` directory.
 - writes the difgested report to the `difgested` subdirectory of the `REPORTDIR` directory.
@@ -773,7 +885,7 @@ Difgests include links to the digests of the two reports. The destinations of th
 
 Difgests expect a `style.css` file to exist in their directory, as digests do.
 
-####
+#### Difgest validation
 
 To test the `digest` module, in the project directory you can execute the statement `node validation/digest/validate`. If `digest` is valid, all logging statements will begin with “Success” and none will begin with “ERROR”.
 
@@ -783,11 +895,11 @@ The `summarize` module of Testilo can summarize a scored report. The summary is
 
 Report summaries make some operations more efficient by allowing other modules to get needed data from summaries instead of from reports. The size of a summary tends to be about 0.01% of the size of a report.
 
-####
+#### Summarization invocation
 
 The `summarize` module summarizes one report when invoked by a module, but the `call` module invoked by a user can call `summarize` multiple times to summarize multiple reports and combine those summaries into a file.
 
-#####
+##### Summarization invocation by a module
 
 A module can invoke `summarize()` in this way:
 
@@ -800,7 +912,7 @@ const summary = summarize(report);
 
 The `report` argument is a scored report. The `summary` constant is an object. The module can further dispose of `summary` as needed.
 
-#####
+##### Summarization invocation by a user
 
 A user can invoke `summarize()` in either of these two ways:
 
@@ -810,6 +922,7 @@ node call summarize 'company divisions' 2411
 ```
 
 When a user invokes `summarize` in this example, the `call` module:
+
 - gets all the reports in the `scored` subdirectory of the `REPORTDIR` directory, or (if the third argument is present) all those whose file names begin with `2411`.
 - creates a summary of each report.
 - combines the summaries into an array.
@@ -821,6 +934,7 @@ When a user invokes `summarize` in this example, the `call` module:
 If you use Testilo to perform a battery of tests on multiple targets, you may want a single report that compares the total scores received by the targets. Testilo can produce such a _comparison_.
 
 The `compare` module compares the scores in a summary report. The `compare()` function of the `compare` module takes two arguments:
+
 - a comparison function
 - a summary report
 
@@ -828,11 +942,11 @@ The comparison function defines the rules for generating an HTML file comparing
 
 The built-in comparison functions compare all of the scores in the summary report. Thus, if the summary report contains multiple scores for the same target, based on tests performed at various times, those scores will all appear in the comparison, labeled identically with the `what` description of the target. If you want only one score per target to appear, you can create a new summary report that includes only one summary per target in its `summaries` array.
 
-####
+#### Comparison invocation
 
 There are two ways to use the `compare` module.
 
-#####
+##### Comparison invocation by a module
 
 A module can invoke `compare()` in this way:
 
@@ -847,7 +961,7 @@ compare(id, comparer, summaryReport)
 
 The first argument to `compare()` is an ID that will be named in the comparison. The second argument is a comparison function. In this example, it been obtained from a file in the Testilo package, but it could be custom-made. The third argument is a summary report. The `compare()` function returns a comparison. The invoking module can further dispose of the comparison as needed.
 
-#####
+##### Comparison invocation by a user
 
 A user can invoke `compare()` in this way:
 
@@ -856,6 +970,7 @@ node call compare 'state legislators' tcp99 240813
 ```
 
 When a user invokes `compare` in this example, the `call` module:
+
 - gets the comparison module from subdirectory `tcp99` of the subdirectory `compare` in the `FUNCTIONDIR` directory.
 - gets the last summary report whose file name begins with `'240813'` from the `summarized` subdirectory of the `REPORTDIR` directory.
 - creates an ID for the comparison.
@@ -864,7 +979,7 @@ When a user invokes `compare` in this example, the `call` module:
 
 The comparative report created by `compare` is an HTML file, and it expects a `style.css` file to exist in its directory. The `reports/comparative/style.css` file in Testilo is an appropriate stylesheet to be copied into the directory where comparative reports are written.
 
-####
+#### Comparison validation
 
 To test the `compare` module, in the project directory you can execute the statement `node validation/compare/validate`. If `compare` is valid, all logging statements will begin with “Success” and none will begin with “ERROR”.
 
@@ -874,9 +989,9 @@ The `track` module of Testilo selects, organizes, and presents data from summari
 
 A typical use case for tracking is monitoring, i.e. periodic auditing of one or more web pages.
 
-####
+#### Tracking invocation
 
-#####
+##### Tracking invocation by a module
 
 A module can invoke `track()` in this way:
 
@@ -890,7 +1005,7 @@ const [reportID, 'main competitors', trackReport] = track(tracker, summaryReport
 
 The `track()` function returns, as an array, an ID and an HTML tracking report that shows data for all of the results in the summary report and identifies “main competitors” as its subject. The invoking module can further dispose of the tracking report as needed.
 
-#####
+##### Tracking invocation by a user
 
 A user can invoke `track()` in one of these ways:
 
@@ -902,6 +1017,7 @@ node call track ttp99a 'main competitors' 241016 'ABC Foundation'
 ```
 
 When a user invokes `track()` in this example, the `call` module:
+
 - gets the summary report from the last file in the `summarized` subdirectory of the `REPORTDIR` directory, or if the third argument to `call()` exists and is not empty the last one whose name begins with `'241016'`.
 - selects the summarized data for all results in the summary report, or if the fourth argument to `call()` exists from all results whose `target.what` property has the value `'ABC Foundation'`.
 - uses tracker `ttp99a` to create a tracking report that identifies “main competitors” as its subject.
@@ -919,18 +1035,19 @@ If you use Testaro to perform all the tests of all the tools on multiple targets
 The `credit` module tabulates the contribution of each tool to the discovery of issue instances in a collection of scored reports. Its `credit()` function takes two arguments: a report description and an array of `score` properties of scored reports.
 
 The function produces a credit report containing four sections:
+
 - `counts`: for each issue, how many instances each tool reported
 - `onlies`: for each issue of which only 1 tool reported instances, which tool it was
 - `mosts`: for each issue of which at least 2 tools reported instances, which tool(s) reported the maximum instance count
 - `tools`: for each tool, two sections:
-
-
+  - `onlies`: a list of the issues that only the tool reported instances of
+  - `mosts`: a list of the issues for which the instance count of the tool was not surpassed by that of any other tool
 
-#####
+##### Tool crediting invocation
 
 There are two ways to use the `credit` module.
 
-######
+###### Tool crediting by a module
 
 A module can invoke `credit()` in this way:
 
@@ -942,7 +1059,7 @@ const creditReport = credit('June 2025', reportScores);
 
 The first argument to `credit()` is a description to be included in the credit report. The second argument is an array of `score` properties of scored report objects. The `credit()` function returns a credit report. The invoking module can further dispose of the credit report as needed.
 
-######
+###### Tool crediting by a user
 
 A user can invoke `credit()` in one of these ways:
 
@@ -952,6 +1069,7 @@ node call credit legislators 241106
 ```
 
 When a user invokes `credit` in this example, the `call` module:
+
 - gets all reports, or if the third argument to `call()` exists all reports whose file names begin with `'241106'`, in the `scored` subdirectory of the `REPORTDIR` directory.
 - gets the `score` properties of those reports.
 - creates an ID for the credit report.
@@ -963,11 +1081,11 @@ The `issues` module tabulates total issue scores. Its `issues()` function takes
 
 The function produces an issue report, an object with issue properties, whose values are the totals of the scores of the respective issues.
 
-#####
+##### Issue scoring invocation
 
 There are two ways to use the `credit` module.
 
-######
+###### Issue scoring by a module
 
 A module can invoke `issues()` in this way:
 
@@ -979,7 +1097,7 @@ const issuesReport = issues('legislators', reportScores);
 
 The arguments to `issues()` are a report description and an array of `score` properties of scored report objects. The `issues()` function returns an issues report. The invoking module can further dispose of the issues report as needed.
 
-######
+###### Issue scoring by a user
 
 A user can invoke `issues()` in one of these ways:
 
@@ -989,6 +1107,7 @@ node call issues legislators 241106
 ```
 
 When a user invokes `issues` in this example, the `call` module:
+
 - gets all reports, or if the third argument to `call()` exists all reports whose file names begin with `'241106'`, in the `scored` subdirectory of the `REPORTDIR` directory.
 - gets the `score` properties of those reports.
 - creates an ID for the issues report.
````
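The new “Scorer output” section above documents the object that the `tsp` scorer attaches to a report. As a rough sketch (not code from the package), the following shows how a module placed in the Testilo project directory might apply a scorer of that kind to one saved report; the report file names and directory layout are hypothetical placeholders.

```javascript
// Sketch: applying a scorer function of the kind documented above to one report.
// Assumes a module placed in the Testilo project directory; file names are hypothetical.
const fs = require('fs/promises');
const {scorer} = require('./procs/score/tsp');

const main = async () => {
  // Read a raw Testaro report.
  const reportJSON = await fs.readFile('reports/raw/240922T0900-abc-1.json', 'utf8');
  const report = JSON.parse(reportJSON);
  // The scorer adds a score property to the report in place.
  await scorer(report);
  // Inspect the summary subscores and the per-issue details described above.
  console.log(report.score.summary);
  console.log(Object.keys(report.score.details.issue));
  // Save the scored report.
  await fs.writeFile('reports/scored/240922T0900-abc-1.json', JSON.stringify(report, null, 2));
};

main();
```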
package/package.json
CHANGED
package/procs/score/tsp.js
CHANGED
````diff
@@ -24,7 +24,7 @@
 tsp
 Testilo score proc
 
-Computes
+Computes score data and adds them to a Testaro report.
 */
 
 // IMPORTS
@@ -41,6 +41,7 @@ const scoreProcID = 'tsp';
 // How much is added to the page score by each component.
 
 // 1. Issue
+
 // Each issue.
 const issueCountWeight = 10;
 /*
@@ -52,10 +53,8 @@ const issueCountWeight = 10;
 const maxWeight = 30;
 
 // 2. Tool
-
-
-through 3.
-*/
+
+// Severity: amount the ordinal severity of each violation adds to the raw tool score.
 const severityWeights = [1, 2, 3, 4];
 // Final: multiplier of the raw tool score to obtain the final tool score.
 const toolWeight = 0.1;
@@ -91,21 +90,21 @@ const latencyWeight = 2;
 
 // Initialize a directory of issue-classified tool rules.
 const issueIndex = {};
-// Initialize an array of
-const
-// For each issue:
-Object.keys(issues).forEach(
+// Initialize an array of variable rule IDs.
+const variableRuleIDs = [];
+// For each classified issue:
+Object.keys(issues).forEach(issueID => {
 // For each tool with rules belonging to that issue:
-Object.keys(issues[
+Object.keys(issues[issueID].tools).forEach(toolID => {
 // For each of those rules:
-Object.keys(issues[
-issueIndex[
-// Add
-issueIndex[
-// If it
-if (issues[
-// Add
-
+Object.keys(issues[issueID].tools[toolID]).forEach(ruleID => {
+issueIndex[toolID] ??= {};
+// Add its ID to the directory of tool rule IDs.
+issueIndex[toolID][ruleID] = issueID;
+// If it has a variable ID:
+if (issues[issueID].tools[toolID][ruleID].variable) {
+// Add its ID to the array of variable rule IDs.
+variableRuleIDs.push(ruleID);
 }
 })
 });
@@ -115,8 +114,8 @@ Object.keys(issues).forEach(issueName => {
 
 // Scores a report.
 exports.scorer = report => {
-// If there are any acts in the report:
 const {acts} = report;
+// If there are any acts in the report:
 if (Array.isArray(acts) && acts.length) {
 const testActs = acts.filter(act => act.type === 'test');
 const testTools = new Set(testActs.map(act => act.which));
@@ -159,7 +158,7 @@ exports.scorer = report => {
 element: {}
 }
 };
-// Initialize the
+// Initialize the job and issue-specific sets of path-identified elements.
 const pathIDs = new Set();
 const issuePaths = {};
 const {summary, details} = score;
@@ -195,93 +194,80 @@ exports.scorer = report => {
 const {ordinalSeverity, pathID, ruleID, what} = instance;
 const count = instance.count || 1;
 let canonicalRuleID = ruleID;
-// If the rule
+// If the rule is not classified:
 if (! issueIndex[which][ruleID]) {
-// Convert
-canonicalRuleID =
+// Convert its ID to the variable rule ID that it matches, if any.
+canonicalRuleID = variableRuleIDs.find(pattern => {
 const patternRE = new RegExp(pattern);
 return patternRE.test(ruleID);
 });
 }
-// If the
+// If the rule is classified:
 if (canonicalRuleID) {
 // Get the issue of the rule.
-const
+const issueID = issueIndex[which][canonicalRuleID];
 // If the issue is non-ignorable:
-if (
+if (issueID !== 'ignorable') {
 // Initialize the issue details if necessary.
-details.issue[
-summary: issues[
-wcag: issues[
+details.issue[issueID] ??= {
+summary: issues[issueID].summary,
+wcag: issues[issueID].wcag || '',
 score: 0,
 maxCount: 0,
-weight: issues[
-countLimit: issues[
+weight: issues[issueID].weight,
+countLimit: issues[issueID].max,
 instanceCounts: {},
 tools: {}
 };
-const issueDetails = details.issue[
+const issueDetails = details.issue[issueID];
 if (! issueDetails.countLimit) {
 delete issueDetails.countLimit;
 }
 issueDetails.tools[which] ??= {};
 issueDetails.instanceCounts[which] ??= 0;
-// Add
+// Add the instance count to the tool instance count.
 issueDetails.instanceCounts[which] += count;
-
-
-
-
-
-
-
-
-
-
-
-
-
-.
-.complaints
+const ruleData = issues[issueID].tools[which][canonicalRuleID];
+// Initialize the the issue details for the rule if necessary.
+issueDetails.tools[which][canonicalRuleID] ??= {
+quality: ruleData.quality,
+what: ruleData.what,
+violations: {
+countTotal: 0,
+descriptions: new Set()
+}
+};
+const ruleDetails = issueDetails.tools[which][canonicalRuleID];
+// Add the instance count to the rule instance count.
+ruleDetails
+.violations
 .countTotal += count || 1;
-
-
-
-
-
-.texts
-.includes(what)
-) {
-details
-.issue[issueName]
-.tools[which][canonicalRuleID]
-.complaints
-.texts
-.push(what);
-}
-issuePaths[issueName] ??= new Set();
+// Ensure that the violation description is among the violation descriptions.
+ruleDetails
+.violations
+.descriptions
+.add(what);
 // If the element has a path ID:
 if (pathID) {
-
-issuePaths[
+issuePaths[issueID] ??= {};
+issuePaths[issueID][pathID] ??= new Set();
+// Ensure that the tool is among those reporting the issue for the element.
+issuePaths[issueID][pathID].add(which);
 }
 }
 }
-// Otherwise, i.e. if the rule
+// Otherwise, i.e. if the rule is not classified:
 else {
 // Add the instance to the solo details of the score data.
-
-
-}
-if (! details.solo[which][ruleID]) {
-details.solo[which][ruleID] = 0;
-}
+details.solo[which] ??= {};
+details.solo[which][ruleID] ??= 0;
 details.solo[which][ruleID] += (count || 1) * (ordinalSeverity + 1);
 // Report this.
-console.log(`ERROR:
+console.log(`ERROR: Unclassified rule of ${which}: ${ruleID}`);
 }
-//
+// If the element has a path ID:
 if (pathID) {
+// Ensure it is among the job path IDs.
 pathIDs.add(pathID);
 }
 });
@@ -292,31 +278,36 @@ exports.scorer = report => {
 details.prevention[which] = preventionWeight;
 }
 });
-// For each non-ignorable issue with any
-Object.keys(details.issue).forEach(
-const
-// For each tool with any
-Object.keys(
+// For each non-ignorable issue with any instances:
+Object.keys(details.issue).forEach(issueID => {
+const issueDetails = details.issue[issueID];
+// For each tool with any instances in the issue:
+Object.keys(issueDetails.tools).forEach(toolID => {
 // Get the sum of the quality-weighted counts of its issue rules.
 let weightedCount = 0;
-Object.values(
-weightedCount += ruleData.quality * ruleData.
+Object.values(issueDetails.tools[toolID]).forEach(ruleData => {
+weightedCount += ruleData.quality * ruleData.violations.countTotal;
+});
+// Update the maximum count for the issue if necessary.
+issueDetails.maxCount = Math.max(issueDetails.maxCount, weightedCount);
+// Convert the set of violation descriptions to an array.
+Object.keys(issueDetails.tools[toolID]).forEach(ruleID => {
+issueDetails.tools[toolID][ruleID].violations.descriptions = Array
+.from(issueDetails.tools[toolID][ruleID].violations.descriptions)
+.sort();
 });
-// If the sum exceeds the existing maximum weighted count for the issue:
-if (weightedCount > issueData.maxCount) {
-// Change the maximum count for the issue to the sum.
-issueData.maxCount = weightedCount;
-}
 });
 // Get the score for the issue, including any addition for the instance count limit.
-const maxAddition =
-
-
-
+const maxAddition = issueDetails.countLimit ? maxWeight / issueDetails.countLimit : 0;
+issueDetails.score = Math.round(
+issueDetails.weight * issueDetails.maxCount * (1 + maxAddition)
+);
+// For each tool that has any rule in the issue:
+Object.keys(issues[issueID].tools).forEach(toolID => {
 // If the tool was in the job and has no instances of the issue:
-if (testTools.has(
+if (testTools.has(toolID) && ! issueDetails.instanceCounts[toolID]) {
 // Report its instance count as 0.
-
+issueDetails.instanceCounts[toolID] = 0;
 }
 });
 });
@@ -329,9 +320,35 @@ exports.scorer = report => {
 });
 return severityTotals;
 }, details.severity.total);
-
+const elementDetails = details.element;
+// For each issue:
 Object.keys(issuePaths).forEach(issueID => {
-
+elementDetails[issueID] ??= {};
+const issueElementDetails = elementDetails[issueID];
+// For each element reported as exhibiting it:
+Object.keys(issuePaths[issueID]).forEach(pathID => {
+// Convert the set of tools reporting it to a string.
+const toolList = Array.from(issuePaths[issueID][pathID]).sort().join(' + ');
+issueElementDetails[toolList] ??= new Set();
+// Classify the XPath by the set of tools reporting its element for the issue.
+issueElementDetails[toolList].add(pathID);
+});
+// Convert the set of XPaths to an array.
+Object.keys(issueElementDetails).forEach(toolList => {
+issueElementDetails[toolList] = Array.from(elementDetails[issueID][toolList]).sort();
+});
+// Sort the tool lists by their tool counts and then alphabetically.
+const toolLists = Object.keys(elementDetails[issueID]);
+toolLists.sort((a, b) => {
+const aToolCount = a.replace(/[^+]/g, '').length;
+const bToolCount = b.replace(/[^+]/g, '').length;
+if (aToolCount === bToolCount) {
+return a.localeCompare(b);
+}
+else {
+return bToolCount - aToolCount;
+};
+});
 });
 // Add the summary issue-count total to the score.
 summary.issueCount = Object.keys(details.issue).length * issueCountWeight;
````