reffy 4.0.5 → 5.2.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,12 +1,11 @@
1
1
  # Reffy
2
2
 
3
+ <img align="right" width="256" height="256" src="images/reffy-512.png" alt="Reffy, represented as a brave little worm with a construction helmet, ready to crawl specs">
4
+
3
5
  Reffy is a **Web spec crawler** tool. It is notably used to update [Webref](https://github.com/w3c/webref#webref) every 6 hours.
4
6
 
5
7
  The code features a generic crawler that can fetch Web specifications and generate machine-readable extracts out of them. Created extracts include lists of CSS properties, definitions, IDL, links and references contained in the specification.
6
8
 
7
- The code also currently includes a set of individual tools to study extracts and create human-readable reports (such as the [crawl report in Webref](https://w3c.github.io/webref/ed/)). Please note the ongoing plan to move this part out of Reffy into a dedicated companion analysis tool (see [issue #747](https://github.com/w3c/reffy/issues/747)).
8
-
9
-
10
9
  ## How to use
11
10
 
12
11
  ### Pre-requisites
@@ -107,69 +106,7 @@ The **crawl results merger** merges a new JSON crawl report into a reference one
107
106
 
108
107
  ### Analysis tools
109
108
 
110
- **Note:** Plan is to move analysis tools out of Reffy's codebase into a dedicated companion analysis tool (see [issue #747](https://github.com/w3c/reffy/issues/747)).
111
-
112
- #### Study tool
113
-
114
- **Reffy's report study tool** takes the machine-readable report generated by the crawler, and creates a study report of *potential* anomalies found in the report. The study report can then easily be converted to a human-readable Markdown report. Reported potential anomalies are:
115
-
116
- 1. specs that do not seem to reference any other spec normatively;
117
- 2. specs that define WebIDL terms but do not normatively reference the WebIDL spec;
118
- 3. specs that contain invalid WebIDL terms definitions;
119
- 4. specs that use obsolete WebIDL constructs (e.g. `[]` instead of `FrozenArray`);
120
- 5. specs that define WebIDL terms that are *also* defined in another spec;
121
- 6. specs that use WebIDL terms defined in another spec without referencing that spec normatively;
122
- 7. specs that use WebIDL terms for which the crawler could not find any definition in any of the specs it studied;
123
- 8. specs that link to another spec but do not include a reference to that other spec;
124
- 9. specs that link to another spec inconsistently in the body of the document and in the list of references (e.g. because the body of the document references the Editor's draft while the reference is to the latest published version).
125
-
126
- For instance:
127
-
128
- ```bash
129
- node src/cli/study-crawl.js reports/ed/crawl.json > reports/ed/study.json
130
- ```
131
-
132
- #### Markdown report generator
133
-
134
- The **markdown report generator** produces a human-readable report in Markdown format out of the report returned by the study step, or directly out of the results of the crawling step. To run the generator:
135
-
136
- ```bash
137
- node src/cli/generate-report.js reports/ed/study.json [perspec|dep]
138
- ```
139
-
140
- By default, the tool generates a report per anomaly; pass `perspec` to create a report per specification, or `dep` to generate a dependencies report. You will probably want to redirect the output to a file, e.g. using `node src/cli/generate-report.js reports/ed/study.json > reports/ed/index.md`.
141
-
142
- The markdown report generator may also produce diff reports, e.g.:
143
-
144
- ```bash
145
- node src/cli/generate-report.js reports/ed/study.json diff https://w3c.github.io/webref/ed/study.json
146
- ```
147
-
148
- #### Spec checker
149
-
150
- The **spec checker** takes the URL of a spec, a reference crawl report and the name of the study report to create as inputs. It crawls and studies the given spec against the reference crawl report. Essentially, it applies the **crawler**, the **merger** and the **study** tool in order to produce the anomalies report for the given spec. Note that multiple specs can be checked at once, provided their URLs are passed as a comma-separated list without spaces. To run the spec checker: `node src/cli/check-specs.js [url] [reference crawl report] [study report to create]`
151
-
152
- For instance:
153
-
154
- ```bash
155
- node src/cli/check-specs.js https://www.w3.org/TR/webstorage/ reports/ed/crawl.json reports/study-webstorage.json
156
- ```
157
-
158
- #### Crawl and study all at once
159
-
160
- **Note:** You will need to install [Pandoc](http://pandoc.org/) for HTML report generation to succeed.
161
-
162
- To crawl all specs, generate a crawl report and an anomaly report, follow these steps:
163
-
164
- 1. To produce a report using Editor's Drafts, run `npm run ed`.
165
- 2. To produce a report using latest published versions in `/TR/`, run `npm run tr`.
166
-
167
- These commands run the `src/cli/crawl-and-study.js` script. Under the hood, this script runs the following tools in turn:
168
- 1. **Crawler**: crawls all specs with [Reffy](#launch-reffy)
169
- 2. **Analysis**: Runs the [study tool](#study-tool)
170
- 3. **Markdown report generation**: Runs the [markdown report generator](#markdown-report-generator)
171
- 4. **Conversion to HTML**: Runs `pandoc` to prepare an HTML report with expandable sections out of the Markdown report per specification. Typically runs `pandoc reports/ed/index.md -f markdown -t html5 --section-divs -s --template report-template.html -o reports/ed/index.html`
172
- 5. **Diff with latest published version of the crawl report**: Compares a crawl analysis with the latest published crawl analysis and produces a human-readable diff in Markdown format with the [markdown report generator](#markdown-report-generator)
109
+ Starting with Reffy v5, the analysis tools previously bundled with Reffy to study extracts and create human-readable reports of potential spec anomalies have migrated to a companion tool named [Strudy](https://github.com/w3c/strudy). The resulting reports are published in a separate [w3c/webref-analysis](https://github.com/w3c/webref-analysis) repository.
173
110
 
174
111
 
175
112
  ### WebIDL terms explorer
package/index.js CHANGED
@@ -1,4 +1,7 @@
1
1
  module.exports = {
2
2
  parseIdl: require("./src/cli/parse-webidl").parse,
3
- crawlSpecs: require("./src/lib/specs-crawler").crawlList
3
+ crawlSpecs: require("./src/lib/specs-crawler").crawlList,
4
+ expandCrawlResult: require("./src/lib/util").expandCrawlResult,
5
+ mergeCrawlResults: require("./src/lib/util").mergeCrawlResults,
6
+ isLatestLevelThatPasses: require("./src/lib/util").isLatestLevelThatPasses
4
7
  };
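The `index.js` change above exposes three additional utility functions to programmatic consumers of the package. Reffy's own implementation of `isLatestLevelThatPasses` is not shown in this diff; the sketch below is a hypothetical illustration of what a function with that name and signature plausibly does (given a spec, the full spec list, and a predicate, report whether the spec is the highest level in its series for which the predicate holds), with mock spec objects that only carry the fields the sketch needs:

```javascript
// Hypothetical sketch, NOT Reffy's actual implementation: illustrates the
// likely semantics of isLatestLevelThatPasses(spec, list, predicate).
function isLatestLevelThatPasses(spec, list, predicate) {
  if (!predicate(spec)) {
    return false;
  }
  // The spec is "the latest level that passes" unless a higher level
  // in the same series also passes the predicate.
  return !list.some(s =>
    s.series === spec.series &&
    s.level > spec.level &&
    predicate(s));
}

// Mock spec objects (field names are assumptions for this illustration)
const specs = [
  { shortname: 'css-grid-1', series: 'css-grid', level: 1, hasIdl: false },
  { shortname: 'css-grid-2', series: 'css-grid', level: 2, hasIdl: false }
];

console.log(isLatestLevelThatPasses(specs[0], specs, s => !s.hasIdl)); // false: level 2 also passes
console.log(isLatestLevelThatPasses(specs[1], specs, s => !s.hasIdl)); // true
```

Consult Reffy's `src/lib/util.js` for the authoritative behavior; the real function may use different spec-object fields.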
package/package.json CHANGED
@@ -1,6 +1,6 @@
1
1
  {
2
2
  "name": "reffy",
3
- "version": "4.0.5",
3
+ "version": "5.2.1",
4
4
  "description": "W3C/WHATWG spec dependencies exploration companion. Features a short set of tools to study spec references as well as WebIDL term definitions and references found in W3C specifications.",
5
5
  "repository": {
6
6
  "type": "git",
@@ -32,40 +32,22 @@
32
32
  "bin": "./reffy.js",
33
33
  "dependencies": {
34
34
  "abortcontroller-polyfill": "1.7.3",
35
- "browser-specs": "2.15.1",
36
- "commander": "8.2.0",
35
+ "browser-specs": "2.18.0",
36
+ "commander": "8.3.0",
37
37
  "fetch-filecache-for-crawling": "4.0.2",
38
- "node-pandoc": "0.3.0",
39
- "puppeteer": "10.4.0",
38
+ "puppeteer": "11.0.0",
40
39
  "semver": "^7.3.5",
41
- "webidl2": "24.1.2"
40
+ "webidl2": "24.2.0"
42
41
  },
43
42
  "devDependencies": {
44
43
  "chai": "4.3.4",
45
- "mocha": "9.1.2",
46
- "nock": "13.1.3",
47
- "respec": "26.14.0",
44
+ "mocha": "9.1.3",
45
+ "nock": "13.2.1",
46
+ "respec": "28.0.6",
48
47
  "respec-hljs": "2.1.1",
49
- "rollup": "2.58.0"
48
+ "rollup": "2.60.2"
50
49
  },
51
50
  "scripts": {
52
- "all": "node src/cli/crawl-and-study.js run ed all && node src/cli/crawl-and-study.js run tr all",
53
- "diff": "node src/cli/crawl-and-study.js run ed diff && node src/cli/crawl-and-study.js run tr diff",
54
- "diffnew": "node src/cli/crawl-and-study.js run ed diffnew && node src/cli/crawl-and-study.js run tr diffnew",
55
- "tr": "node src/cli/crawl-and-study.js run tr all",
56
- "tr-crawl": "node src/cli/crawl-and-study.js run tr crawl",
57
- "tr-study": "node src/cli/crawl-and-study.js run tr study",
58
- "tr-markdown": "node src/cli/crawl-and-study.js run tr markdown",
59
- "tr-html": "node src/cli/crawl-and-study.js run tr html",
60
- "tr-diff": "node src/cli/crawl-and-study.js run tr diff",
61
- "tr-diffnew": "node src/cli/crawl-and-study.js run tr diffnew",
62
- "ed": "node src/cli/crawl-and-study.js run ed all",
63
- "ed-crawl": "node --max-old-space-size=8192 src/cli/crawl-and-study.js run ed crawl",
64
- "ed-study": "node src/cli/crawl-and-study.js run ed study",
65
- "ed-markdown": "node src/cli/crawl-and-study.js run ed markdown",
66
- "ed-html": "node src/cli/crawl-and-study.js run ed html",
67
- "ed-diff": "node src/cli/crawl-and-study.js run ed diff",
68
- "ed-diffnew": "node src/cli/crawl-and-study.js run ed diffnew",
69
51
  "test": "mocha --recursive tests/"
70
52
  }
71
53
  }
package/reffy.js CHANGED
File without changes
package/src/cli/parse-webidl.js CHANGED
@@ -350,9 +350,7 @@ function parseType(idltype, idlReport, contextName) {
350
350
  return;
351
351
  }
352
352
  var wellKnownTypes = ["undefined", "any", "boolean", "byte", "octet", "short", "unsigned short", "long", "unsigned long", "long long", "unsigned long long", "float", "unrestricted float", "double", "unrestricted double", "DOMString", "ByteString", "USVString", "object",
353
- "RegExp", "Error", "DOMException", "ArrayBuffer", "DataView", "Int8Array", "Int16Array", "Int32Array", "Uint8Array", "Uint16Array", "Uint32Array", "Uint8ClampedArray", "Float32Array", "Float64Array",
354
- "BigUint64Array", "BigInt64Array",
355
- "ArrayBufferView", "BufferSource", "DOMTimeStamp", "Function", "VoidFunction"];
353
+ "RegExp", "Error", "DOMException"];
356
354
  if (wellKnownTypes.indexOf(idltype.idlType) === -1) {
357
355
  addDependency(idltype.idlType, idlReport.idlNames, idlReport.externalDependencies);
358
356
  if (contextName) {
@@ -74,9 +74,9 @@ nock("https://api.specref.org")
74
74
 
75
75
  nock("https://www.w3.org")
76
76
  .persist()
77
- .get("/scripts/TR/2016/fixup.js").reply(200, '')
78
- .get("/StyleSheets/TR/2016/logos/W3C").reply(200, '')
79
- .get("/StyleSheets/TR/2016/base.css").reply(200, '')
77
+ .get("/scripts/TR/2021/fixup.js").reply(200, '')
78
+ .get("/StyleSheets/TR/2021/logos/W3C").reply(200, '')
79
+ .get("/StyleSheets/TR/2021/base.css").reply(200, '')
80
80
  .get("/Tools/respec/respec-highlight").replyWithFile(200, path.join(modulesFolder, "respec-hljs", "dist", "respec-highlight.js"), {"Content-Type": "application/js"})
81
81
  .get("/Tools/respec/respec-w3c").replyWithFile(200, path.join(modulesFolder, "respec", "builds", "respec-w3c.js"), {"Content-Type": "application/js"});
82
82
 
package/src/lib/util.js CHANGED
@@ -525,8 +525,8 @@ async function processSpecification(spec, processFunction, args, options) {
525
525
  if (counter > 60) {
526
526
  throw new Error('Respec generation took too long');
527
527
  }
528
- if (window.document.respecIsReady) {
529
- await window.document.respecIsReady;
528
+ if (window.document.respec?.ready) {
529
+ await window.document.respec.ready;
530
530
  }
531
531
  else if (usesRespec) {
532
532
  await sleep(1000);
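The `util.js` hunk above swaps `window.document.respecIsReady` for `window.document.respec?.ready`, tracking a change in the property ReSpec exposes. The optional chaining (`?.`) matters: it evaluates to `undefined` instead of throwing when `document.respec` is absent, so non-ReSpec documents fall through to the polling branch. A minimal self-contained sketch of that pattern, using mock `window` objects rather than a real browser:

```javascript
// Sketch of the readiness check above, with mock window objects.
// Optional chaining lets one code path serve documents with and without ReSpec.
async function waitForRespec(window) {
  if (window.document.respec?.ready) {
    // Newer ReSpec exposes a promise that resolves once generation completes
    await window.document.respec.ready;
    return 'respec done';
  }
  return 'no respec';
}

const withRespec = { document: { respec: { ready: Promise.resolve() } } };
const withoutRespec = { document: {} };

waitForRespec(withRespec).then(console.log);    // "respec done"
waitForRespec(withoutRespec).then(console.log); // "no respec"
```

In Reffy itself the fallback branch sleeps and retries rather than returning, since ReSpec may simply not have initialized yet.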
package/src/cli/check-specs.js DELETED
@@ -1,148 +0,0 @@
1
- #!/usr/bin/env node
2
- /**
3
- * The spec checker crawls a spec (or a list of specs) and creates an anomalies
4
- * report for it (or for them). The analysis is made against a knowledge base
5
- * that must also be provided as input under the form of a reference crawl
6
- * report.
7
- *
8
- * Essentially, the spec checker runs the [spec crawler]{@link module:crawler}
9
- * on the given spec(s), applies the [crawl results merger]{@link module:merger}
10
- * to update the reference knowledge with the newly crawled results and run the
11
- * [crawl study]{@link module:study} tool to produce the anomalies report.
12
- *
13
- * The spec checker can be called directly through:
14
- *
15
- * `node check-specs.js [url] [ref crawl report] [study report] [option]`
16
- *
17
- * where `url` is the URL of the spec to check, or a comma-separated value list
18
- * (without spaces) of URLs, `ref crawl report` is the local name of the
19
- * reference crawl report file to use as knowledge base, `study report` is the
20
- * name the of the anomalies report file to create (JSON file), and `option`
21
- * gives the crawl options (see the spec crawler for details).
22
- *
23
- * @module checker
24
- */
25
-
26
- const fs = require('fs');
27
- const path = require('path');
28
- const browserSpecs = require('browser-specs');
29
- const requireFromWorkingDirectory = require('../lib/util').requireFromWorkingDirectory;
30
- const expandCrawlResult = require('../lib/util').expandCrawlResult;
31
- const crawlList = require('../lib/specs-crawler').crawlList;
32
- const mergeCrawlResults = require('./merge-crawl-results').mergeCrawlResults;
33
- const studyCrawl = require('./study-crawl').studyCrawl;
34
-
35
-
36
- /**
37
- * Shortcut that returns a property extractor iterator
38
- */
39
- const prop = p => x => x[p];
40
-
41
-
42
- /**
43
- * Crawl one or more specs and study them against a reference crawl report.
44
- *
45
- * The reference crawl report acts as the knowledge database. Knowledge about
46
- * the specs given as parameter is automatically replaced by the knowledge
47
- * obtained by crawling these specs.
48
- *
49
- * @function
50
- * @param {Array(Object)} speclist The list of specs to check. Each spec should
51
- * have a "url" and/or an "html" property.
52
- * @param {Object} refCrawl The reference crawl report against which the specs
53
- * should be checked
54
- * @param {Object} options Crawl options
55
- * @return {Promise} The promise to get the study report for the requested list
56
- * of specs
57
- */
58
- async function checkSpecs(speclist, refCrawl, options) {
59
- const specs = speclist.map(spec => (typeof spec === 'string') ?
60
- browserSpecs.find(s => s.url === spec || s.shortname === spec) :
61
- spec)
62
- .filter(spec => !!spec);
63
-
64
- const crawl = await crawlList(specs, options);
65
- const report = {
66
- type: 'crawl',
67
- title: 'Anomalies in spec: ' + specs.map(prop('url')).join(', '),
68
- description: 'Study of anomalies in the given spec against a reference crawl report',
69
- date: (new Date()).toJSON(),
70
- options: options,
71
- stats: {
72
- crawled: crawl.length,
73
- errors: crawl.filter(spec => !!spec.error).length
74
- },
75
- results: crawl
76
- };
77
- const mergedReport = await mergeCrawlResults(report, refCrawl);
78
- const study = await studyCrawl(mergedReport, { include: specs });
79
- return study;
80
- }
81
-
82
-
83
- /**
84
- * Crawl the given spec and study it against a reference crawl report.
85
- *
86
- * Shortcut for the checkSpecs method when there is only one spec to check.
87
- *
88
- * @function
89
- * @param {Object} spec The spec to check. It should have a "url" and/or an
90
- * "html" property.
91
- * @param {Object} refCrawl The reference crawl report against which the spec
92
- * should be checked
93
- * @param {Object} options Crawl options
94
- * @return {Promise} The promise to get the study report for the requested spec
95
- */
96
- function checkSpec(spec, refCrawl, options) {
97
- return checkSpecs([spec], refCrawl, options);
98
- }
99
-
100
-
101
- /**************************************************
102
- Export methods for use as module
103
- **************************************************/
104
- module.exports.checkSpecs = checkSpecs;
105
- module.exports.checkSpec = checkSpec;
106
-
107
-
108
- /**************************************************
109
- Code run if the code is run as a stand-alone module
110
- **************************************************/
111
- if (require.main === module) {
112
- const specUrls = (process.argv[2] ? process.argv[2].split(',') : []);
113
- const refCrawlPath = process.argv[3];
114
- const resPath = process.argv[4];
115
- const crawlOptions = { publishedVersion: (process.argv[5] === 'tr') };
116
-
117
- if (specUrls.length === 0) {
118
- console.error('URL(s) of the specification(s) to check must be passed as first parameter');
119
- process.exit(2);
120
- }
121
- if (!refCrawlPath) {
122
- console.error('A reference crawl results must be passed as second parameter');
123
- process.exit(2);
124
- }
125
- if (!resPath) {
126
- console.error('Result file to create must be passed as third parameter');
127
- process.exit(3);
128
- }
129
-
130
- let refCrawl;
131
- try {
132
- refCrawl = requireFromWorkingDirectory(refCrawlPath);
133
- refCrawl = expandCrawlResult(refCrawl, path.dirname(refCrawlPath));
134
- } catch(e) {
135
- console.error("Impossible to read " + refCrawlPath + ": " + e);
136
- process.exit(3);
137
- }
138
-
139
- checkSpecs(specUrls, refCrawl, crawlOptions)
140
- .then(study => new Promise((resolve, reject) =>
141
- fs.writeFile(resPath, JSON.stringify(study, null, 2),
142
- err => { if (err) return reject(err); resolve();})))
143
- .then(_ => console.log('Finished'))
144
- .catch(err => {
145
- console.error(err);
146
- process.exit(64);
147
- });
148
- }
package/src/cli/crawl-and-study.js DELETED
@@ -1,212 +0,0 @@
1
- #!/usr/bin/env node
2
- /**
3
- * Reffy's command line interface that you can use to crawl and study spec
4
- * references. The tool runs the crawler, then the study tools to create the
5
- * full reports that typically show up under w3c/webref.
6
- *
7
- * Tool can be called directly through:
8
- *
9
- * `node crawl-and-study.js [command]`
10
- *
11
- * Run `node crawl-and-study.js -h` for help
12
- *
13
- * @module crawler
14
- */
15
-
16
- const program = require('commander');
17
- const version = require('../../package.json').version;
18
- const fs = require('fs');
19
- const path = require('path');
20
- const crawlSpecs = require('../lib/specs-crawler').crawlSpecs;
21
- const studyCrawl = require('./study-crawl').studyCrawl;
22
- const generateReport = require('./generate-report').generateReport;
23
- const pandoc = require('node-pandoc');
24
-
25
-
26
- // List of possible perspectives and associated parameters
27
- // Note the "ed" perspective produces reports under "whatwg" for backward
28
- // compatibility reason.
29
- const perspectives = {
30
- 'ed': {
31
- description: 'Crawls the latest Editor\'s Drafts',
32
- refStudy: 'https://w3c.github.io/webref/ed/study.json'
33
- },
34
- 'tr': {
35
- description: 'Crawls the latest published versions of specifications in /TR/ space instead of the latest Editor\'s Drafts',
36
- publishedVersion: true,
37
- refStudy: 'https://w3c.github.io/webref/tr/study.json'
38
- }
39
- };
40
-
41
- // List of possible actions for each perspective
42
- const possibleActions = {
43
- 'all': 'crawl specs, study report and generate markdown, HTML and diff reports. Default action',
44
- 'crawl': 'crawl specs and generate a machine-readable report with facts about each spec',
45
- 'study': 'parse the machine-readable report generated by the crawler, and create a study report of potential anomalies found in the report',
46
- 'markdown': 'produce a human-readable report in Markdown format out of the anomalies report returned by the study action',
47
- 'html': 'produce an HTML report out of the Markdown report generated by the markdown action',
48
- 'diff': 'compare the anomalies report with the latest published anomalies report and generate diff report',
49
- 'diffnew': 'compare the anomalies report with the latest published anomalies report and generate diff report that only contains new anomalies'
50
- };
51
-
52
- let command = null;
53
- program
54
- .version(version)
55
- .option('-d, --debug', 'run crawl in debug mode (single process, one spec at a time)');
56
-
57
- program
58
- .command('run <perspective> [action]')
59
- .description('run a new crawl and study from the given perspective')
60
- .option('-d, --debug', 'run crawl in debug mode (single process, one spec at a time)')
61
- .action(async (perspective, action, cmdObj) => {
62
- command = 'run';
63
- if (!(perspective in perspectives)) {
64
- return program.help();
65
- }
66
- if (action && !(action in possibleActions)) {
67
- return program.help();
68
- }
69
-
70
- let debug = cmdObj.debug || program.debug;
71
- let publishedVersion = perspectives[perspective].publishedVersion;
72
- let refStudy = perspectives[perspective].refStudy;
73
- let reportFolder = perspectives[perspective].reportFolder ||
74
- 'reports/' + perspective;
75
- let crawlReport = path.join(reportFolder, 'index.json');
76
- let studyReport = path.join(reportFolder, 'study.json');
77
-
78
- let promise = Promise.resolve();
79
- let actions = (!action || (action === 'all')) ?
80
- ['crawl', 'study', 'markdown', 'html', 'diff', 'diffnew'] :
81
- [action];
82
-
83
- actions.forEach(action => {
84
- switch (action) {
85
- case 'crawl':
86
- promise = promise
87
- .then(_ => crawlSpecs(
88
- { publishedVersion, debug, output: reportFolder }));
89
- break;
90
-
91
- case 'study':
92
- const options = {};
93
- if (perspective === 'ed') {
94
- const trFolder = perspectives.tr.reportFolder || 'reports/tr';
95
- const trReport = path.join(trFolder, 'index.json');
96
- if (fs.existsSync(trReport)) {
97
- options.trResults = trReport;
98
- }
99
- }
100
- promise = promise
101
- .then(_ => studyCrawl(crawlReport, options))
102
- .then(results => {
103
- fs.writeFileSync(path.join(reportFolder, 'study.json'),
104
- JSON.stringify(results, null, 2));
105
- });
106
- break;
107
-
108
- case 'markdown':
109
- promise = promise
110
- .then(_ => generateReport(studyReport, { perSpec: true }))
111
- .then(report => fs.writeFileSync(path.join(reportFolder, 'index.md'), report))
112
- .then(_ => generateReport(studyReport, { perSpec: false }))
113
- .then(report => fs.writeFileSync(path.join(reportFolder, 'perissue.md'), report));
114
- break;
115
-
116
- case 'html':
117
- promise = promise
118
- .then(_ => new Promise((resolve, reject) => {
119
- let args = [
120
- '-f', 'markdown', '-t', 'html5', '--section-divs', '-s',
121
- '--template', path.join(__dirname, '..', 'templates', 'report-template.html'),
122
- '-o', path.join(reportFolder, 'index.html')
123
- ];
124
- pandoc(path.join(reportFolder, 'index.md'), args,
125
- (err => {
126
- if (err) {
127
- return reject(err);
128
- }
129
- args = [
130
- '-f', 'markdown', '-t', 'html5', '--section-divs', '-s',
131
- '--template', path.join(__dirname, '..', 'templates', 'report-perissue-template.html'),
132
- '-o', path.join(reportFolder, 'perissue.html')];
133
- pandoc(path.join(reportFolder, 'perissue.md'), args,
134
- (err => {
135
- if (err) {
136
- return reject(err);
137
- }
138
- return resolve();
139
- }));
140
- }));
141
- }));
142
- break;
143
-
144
- case 'diff':
145
- promise = promise
146
- .then(_ => generateReport(studyReport, {
147
- diffReport: true,
148
- refStudyFile: refStudy
149
- }))
150
- .then(report => fs.writeFileSync(path.join(reportFolder, 'diff.md'), report));
151
- break;
152
-
153
- case 'diffnew':
154
- promise = promise
155
- .then(_ => generateReport(studyReport, {
156
- diffReport: true,
157
- refStudyFile: refStudy,
158
- onlyNew: true
159
- }))
160
- .then(report => fs.writeFileSync(path.join(reportFolder, 'diffnew.md'), report));
161
- break;
162
- }
163
- });
164
-
165
- return promise;
166
- });
167
-
168
- program.on('--help', function() {
169
- console.log('');
170
- console.log(' Possible perspectives:');
171
- console.log('');
172
- Object.keys(perspectives).forEach(perspective => {
173
- console.log(' ' + perspective + ': ' + perspectives[perspective].description);
174
- });
175
- console.log('');
176
-
177
- console.log(' Possible actions:');
178
- console.log('');
179
- Object.keys(possibleActions).forEach(action => {
180
- console.log(' ' + action + ': ' + possibleActions[action]);
181
- });
182
- console.log('');
183
-
184
- console.log(' Possible options:');
185
- console.log('');
186
- console.log(' -d, --debug: run crawl in debug mode (single process, one spec at a time)');
187
- console.log('');
188
- });
189
-
190
- program.on('command:*', function () {
191
- console.error('Invalid command: %s.\n', program.args.join(' '));
192
- program.outputHelp();
193
- process.exit(1);
194
- });
195
-
196
- if (!process.argv.slice(2).length) {
197
- console.error('Cannot run program without arguments.\n');
198
- program.outputHelp();
199
- process.exit(1);
200
- }
201
-
202
- program
203
- .parseAsync(process.argv)
204
- .then(_ => {
205
- console.log('-- THE END -- ');
206
- process.exit(0);
207
- })
208
- .catch(err => {
209
- console.error('-- ERROR CAUGHT --');
210
- console.error(err);
211
- process.exit(1);
212
- });