testeranto 0.128.0 → 0.129.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -16,13 +16,16 @@ example test report: [chromapdx.github.io/kokomoBay](https://chromapdx.github.io
 
  example repo: [kokomo bay](https://github.com/ChromaPDX/kokomoBay)
 
+ example repo V2: [testeranto-starter](https://github.com/adamwong246/testeranto-starter)
+
  ## What is testeranto?
 
  - Testeranto produces test results which can be fed to Aider.ai to automatically fix failing tests.
  - Testeranto tests are specified in a strongly-typed gherkin-like syntax. Rather than testing your code directly, Testeranto requires you to wrap your code with a semantic interface based on TS type signatures.
  - Testeranto can be run in the frontend or the backend, or both.
  - Testeranto can be used to test anything that can be bundled with esbuild.
- - Testeranto is _reasonably_ efficient. It is less performant than other similar js testing libraries.
+ - Testeranto connects "features" to "tests". This allows the AI to read feature documentation from external systems, like Jira.
+ - Testeranto generates test results as a static website which can easily be deployed to GitHub Pages.
 
  ## tech of note
 
@@ -31,16 +34,255 @@ example repo: [kokomo bay](https://github.com/ChromaPDX/kokomoBay)
  - puppeteer - provides access to both node and chrome runtimes
  - esbuild - used to quickly generate test bundles
  - aider - AI to automatically fix broken tests
+ - eslint - runs upon the input files to generate a file of static analysis errors
+ - tsc - runs upon the input files to generate a file of type errors
+ - markdown - used to record feature files
+
+ ## scripts
+
+ `yarn t-init`: start up a new testeranto project
+
+ `yarn t-build <someTest> <once|dev>`: build the "someTest" project once, or continuously
+
+ `yarn t-run <someTest> <once|dev>`: run the "someTest" project once, or continuously
+
+ `yarn t-report`: run the report server
+
+ `yarn t-aider PATH_TO_PROMPT_FILE`: execute a generated prompt file to fix broken tests
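+
+ A typical first session, assuming a hypothetical test named `myTest` is defined in your project, might look like:
+
+ ```
+ yarn t-init
+ yarn t-build myTest once
+ yarn t-run myTest once
+ yarn t-report
+ ```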
 
- ## Do's and Don't
+ ## AI
 
- When writing your test, be careful when using platform specific features, like "fs" on node, or "window" in the browser. If you need to write to a file, or to log information, use the `utils`. Instead of platform specific libraries, like node's "assert", use a cross-platform alternative like "chai".
+ Testeranto generates a "prompt" alongside test results. This prompt is passed to aider as input.
+
+ ```
+ // input src files which can be edited by aider
+ /add test/node.ts
+
+ // test report files that inform aider but should not be edited
+ /read testeranto/reports/allTests/node/test/node/tests.json
+ /read testeranto/reports/allTests/test/node/node/lint_errors.json
+ /read testeranto/reports/allTests/test/node/node/type_errors.txt
+
+ // a list of features which can inform aider
+ /load testeranto/reports/allTests/node/test/node/featurePrompt.txt
+
+ // tell the AI what to do
+ /code Fix the failing tests described in testeranto/reports/allTests/node/test/node/tests.json. Correct any type signature errors described in the file testeranto/reports/allTests/test/node/node/type_errors.txt. Implement any method which throws "Function not implemented". Resolve the lint errors described in testeranto/reports/allTests/test/node/node/lint_errors.json.
+ ```
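+
+ The generated prompt can then be handed to aider via the `t-aider` script. The exact path of the generated prompt file depends on the test; the path below is illustrative only:
+
+ ```
+ yarn t-aider testeranto/reports/allTests/node/test/node/prompt.txt
+ ```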
+
+ ## "Features"
+
+ Testeranto connects "features" to tests. A feature may be a simple string, but it can also take the form of a local markdown file or a remote URL pointing to an external feature-tracking system, such as a Jira ticket or a GitHub issue. These features are used to inform the AI context.
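+
+ As a minimal sketch, a feature list mixing the three forms might look like this (the markdown path is hypothetical; the URL is the one used in the specification example below):
+
+ ```ts
+ const features = [
+   "rectangles can be resized", // a simple string
+   "./features/rectangle.md", // a local markdown file (hypothetical path)
+   "https://api.github.com/repos/adamwong246/testeranto/issues/8", // a remote URL to a feature-tracking system
+ ];
+ ```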
 
  ## Platforms
 
- Testeranto runs tests in multiple runtimes. You can run the same test (more or less) in multiple contexts. Depending on your test subject, not all may be applicable
+ Testeranto runs tests in multiple runtimes. You can run the same test (more or less) in multiple contexts, but depending on your test subject, not all may be applicable. For instance, if you are testing a node http server, you can't use the web runtime. If your code references `document` or `window`, you must use the web runtime. And if you wish to capture console.logs in a node context, you should use the `pure` runtime.
 
  1. Node - the test is run in node v8 via fork.
- 2. Web - the test is run in chrome, in a page. If you code relies upon `window` or `document`, you should use this style.
+ 2. Web - the test is run in chrome, in a page.
  3. Pure - the test is dynamically imported into the main thread. It will not have access to IO (console.log, etc) but it is more performant.
- 4. WebWorker - the test is tested in a web worker, in the browser, but not in a page.
+
+ ## Concepts
+
+ A testeranto test takes some piece of javascript as input, wraps it in a testing apparatus, and then executes that test on the given platform. You must provide this apparatus in the following form:
+
+ ```ts
+ export default async <I extends IT, O extends OT, M>(
+
+   // the thing that is being tested
+   input: I["iinput"],
+
+   testSpecification: ITestSpecification<I, O>,
+   testImplementation: ITestImplementation<I, O, M>,
+   testInterface: Partial<IWebTestInterface<I>>,
+   testResourceRequirement: ITTestResourceRequest = defaultTestResourceRequirement
+ ) => {
+
+   // or WebTesteranto<I, O, M> or PureTesteranto<I, O, M>
+   return new NodeTesteranto<I, O, M>(
+     input,
+     testSpecification,
+     testImplementation,
+     testResourceRequirement,
+     testInterface
+   );
+ };
+ ```
+
+ Practically speaking, for each thing you test, you will need to implement 3 types (`I`, `O`, and optionally `M`) and 4 objects (the specification, the implementation, the interface, and the resource request).
+
+ ### type I
+
+ This type describes the shape of the BDD test.
+
+ ```ts
+ export type I = Ibdd_in<
+   null,
+   null,
+   Rectangle,
+   Rectangle,
+   Rectangle,
+   (...x) => (rectangle: Rectangle, utils: IPM) => Rectangle,
+   (rectangle: Rectangle, utils: IPM) => Rectangle
+ >;
+ ```
+
+ ### type O
+
+ This type describes the shape of the "interface".
+
+ ```ts
+ export type O = Ibdd_out<
+   // Suite
+   {
+     Default: [string];
+   },
+   // "Givens" are initial states
+   {
+     Default: [];
+     WidthOfOneAndHeightOfOne: [];
+     WidthAndHeightOf: [number, number];
+   },
+   // "Whens" are steps which change the state of the test subject
+   {
+     HeightIsPubliclySetTo: [number];
+     WidthIsPubliclySetTo: [number];
+     setWidth: [number];
+     setHeight: [number];
+   },
+   // "Thens" are steps which make assertions about the test subject
+   {
+     AreaPlusCircumference: [number];
+     circumference: [number];
+     getWidth: [number];
+     getHeight: [number];
+     area: [number];
+     prototype: [];
+   },
+   // "Checks" are similar to "Givens"
+   {
+     Default: [];
+     WidthOfOneAndHeightOfOne: [];
+     WidthAndHeightOf: [number, number];
+   }
+ >;
+ ```
+
+ ### type M (optional)
+
+ This type describes modifications to the shape of the "specification". It can be used to make your BDD tests DRYer, but it is not necessary.
+
+ ```ts
+ export type M = {
+   givens: {
+     [K in keyof O["givens"]]: (...Iw: O["givens"][K]) => Rectangle;
+   };
+   whens: {
+     [K in keyof O["whens"]]: (
+       ...Iw: O["whens"][K]
+     ) => (rectangle: Rectangle, utils: PM) => Rectangle;
+   };
+   thens: {
+     [K in keyof O["thens"]]: (
+       ...Iw: O["thens"][K]
+     ) => (rectangle: Rectangle, utils: PM) => Rectangle;
+   };
+ };
+ ```
+
+ ### the "specification" aka ITestSpecification<I, O>
+
+ The test specification is the BDD test logic. The specification implements the BDD directives "Given", "When", and "Then".
+
+ ```ts
+ export const RectangleTesterantoBaseTestSpecification: ITestSpecification<
+   I,
+   O
+ > = (Suite, Given, When, Then, Check) => {
+   return [
+     Suite.Default(
+       "Testing the Rectangle class",
+       {
+         // A "given" is a strict BDD test. It starts with an initial state, then executes the "whens" which update the test subject, and then executes the "thens" as assertions.
+         test0: Given.Default(
+           // a list of features
+           ["https://api.github.com/repos/adamwong246/testeranto/issues/8"],
+           // a list of "whens"
+           [When.setWidth(4), When.setHeight(19)],
+           // a list of "thens"
+           [Then.getWidth(4), Then.getHeight(19)]
+         ),
+       },
+
+       [
+         // a "check" is a less strict style of test. Instead of lists of whens and thens, you get a function callback.
+         Check.Default("imperative style?!", [], async (rectangle) => {
+           Then.getWidth(2).thenCB(rectangle);
+           Then.getHeight(2).thenCB(rectangle);
+           When.setHeight(22).whenCB(rectangle);
+           Then.getHeight(232).thenCB(rectangle);
+         }),
+       ]
+     ),
+   ];
+ };
+ ```
+
+ ### the "interface" aka testInterface: Partial<IWebTestInterface<I>>
+
+ The test interface is code which is NOT BDD steps. The interface implements "before all", "after all", "before each", and "after each", all of which are optional.
+
+ ```ts
+ export const RectangleTesterantoBaseInterface: IPartialInterface<I> = {
+   beforeEach: async (subject, i) => {
+     return i();
+   },
+   andWhen: async function (s, whenCB, tr, utils) {
+     return whenCB(s)(s, utils);
+   },
+   butThen: async (s, t, tr, pm) => {
+     return t(s, pm);
+   },
+ };
+ ```
+
+ ### the "test resource requirement" aka ITTestResourceRequest (optional)
+
+ The test resource requirement describes things that the test needs to run, namely network ports. It is optional, but you should add this argument if your test relies upon network ports.
+
+ ```ts
+ // TODO add example of test resource requirement
+ ```
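+
+ Until an official example lands, here is a hypothetical sketch of such a request; the actual fields of `ITTestResourceRequest` may differ, so consult the testeranto source:
+
+ ```ts
+ // Hypothetical shape: ask the runner to reserve two free network ports for this test.
+ const testResourceRequirement = {
+   ports: 2,
+ };
+ ```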
+
+ ## Sidecars (COMING SOON)
+
+ Alongside your test, you can include a number of "sidecars": other bundled javascript assets upon which your test depends. For example, suppose you have an app with a frontend and a backend. You could run a react test in the web runtime and include the node http server as a sidecar.
+
+ ## `eslint` and `tsc`
+
+ Alongside the bdd tests, testeranto runs eslint and tsc upon the input files to generate a list of static analysis errors and a list of type errors, respectively.
+
+ ## Subprojects
+
+ Testeranto has a core repo, but there are also subprojects which implement tests by type and by technology.
+
+ ### testeranto-solidity
+
+ Test a solidity contract. Also included is an example of deploying a contract to a ganache server.
+
+ ### testeranto-reduxtoolkit
+
+ Test a redux store.
+
+ ### testeranto-http
+
+ Test a node http server.
+
+ ### testeranto-react (COMING SOON)
+
+ Test a react component. You can choose from a variety of component styles (jsx functions, class components, etc.) and you can test with `react`, `react-dom`, or `react-test-renderer`.
+
+ ### testeranto-express (COMING SOON)
+
+ ### testeranto-xstate (COMING SOON)
@@ -73,7 +73,7 @@ async function fileHash(filePath, algorithm = "md5") {
  }
  const statusMessagePretty = (failures, test) => {
  if (failures === 0) {
- console.log(ansi_colors_1.default.green(ansi_colors_1.default.inverse(`> ${test} completed successfully`)));
+ console.log(ansi_colors_1.default.green(ansi_colors_1.default.inverse(`> ${test} completed successfully?!?`)));
  }
  else {
  console.log(ansi_colors_1.default.red(ansi_colors_1.default.inverse(`> ${test} failed ${failures} times`)));
@@ -507,30 +507,28 @@ ${addableFiles
  stdio: ["pipe", "pipe", "pipe", "ipc"],
  // silent: true
  });
- // const child = spawn(
- // "node",
- // ["inspect", builtfile, testResources, "--trace-warnings"],
- // {
- // stdio: ["pipe", "pipe", "pipe", "ipc"],
- // env: {
- // // NODE_INSPECT_RESUME_ON_START: "1",
- // },
- // // silent: true
- // }
- // );
- // console.log(
- // "spawning",
- // "node",
- // ["inspect", builtfile, testResources, "--trace-warnings"],
- // {
- // NODE_INSPECT_RESUME_ON_START: "1",
- // }
- // );
- const p = destFolder + "/pipe";
+ const p = destFolder + "/tpipe";
+ // exec(`lsof`, (ec, out, err) => {
+ // console.log(ec, out, err);
+ // });
+ // if (fs.existsSync(p)) {
+ // fs.rmSync(p);
+ // }
  const errFile = `${reportDest}/error.txt`;
  if (fs_1.default.existsSync(errFile)) {
  fs_1.default.rmSync(errFile);
  }
+ // server.on("error", (e) => {
+ // if (e.code === "EADDRINUSE") {
+ // console.error(e);
+ // process.exit(-1);
+ // // console.error("Address in use, retrying...");
+ // // setTimeout(() => {
+ // // server.close();
+ // // server.listen(p);
+ // // }, 1000);
+ // }
+ // });
  server.listen(p, () => {
  var _a, _b;
  (_a = child.stderr) === null || _a === void 0 ? void 0 : _a.on("data", (data) => {
@@ -540,6 +538,11 @@ ${addableFiles
  oStream.write(`stdout data ${data}`);
  });
  child.on("close", (code) => {
+ console.log("close");
+ console.log("deleting", p);
+ if (fs_1.default.existsSync(p)) {
+ fs_1.default.rmSync(p);
+ }
  oStream.close();
  server.close();
  if (code === null) {
@@ -554,19 +557,23 @@ ${addableFiles
  this.bddTestIsNowDone(src, code);
  statusMessagePretty(code, src);
  }
- if (fs_1.default.existsSync(p)) {
- fs_1.default.rmSync(p);
- }
  haltReturns = true;
  });
  child.on("exit", (code) => {
+ console.log("exit");
+ console.log("deleting", p);
+ if (fs_1.default.existsSync(p)) {
+ fs_1.default.rmSync(p);
+ }
  haltReturns = true;
  });
  child.on("error", (e) => {
- haltReturns = true;
+ console.log("error");
+ console.log("deleting", p);
  if (fs_1.default.existsSync(p)) {
  fs_1.default.rmSync(p);
  }
+ haltReturns = true;
  console.log(ansi_colors_1.default.red(ansi_colors_1.default.inverse(`${src} errored with: ${e.name}. Check ${errFile}for more info`)));
  this.writeFileSync(`${reportDest}/error.txt`, e.toString(), src);
  this.bddTestIsNowDone(src, -1);