@checksum-ai/runtime 1.0.29 → 1.0.31
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +55 -1
- package/checksum-root/README.md +55 -1
- package/checksum-root/checksum-tests.example.yml +55 -0
- package/checksum-root/checksum.config.ts +13 -27
- package/checksum-root/example.checksum.spec.ts +18 -0
- package/checksum-root/login.ts +23 -10
- package/checksum-root/playwright.config.ts +41 -6
- package/cli.js +3 -3
- package/package.json +1 -1
- package/test-run-monitor.js +7 -7
package/README.md
CHANGED
@@ -1 +1,55 @@
- Checksum
+ # Checksum.ai Tests
+
+ ## Quick Start
+
+ 1. Run `npm i @checksum-ai/runtime`
+ 2. Navigate to the directory where you want to add Checksum tests and run `npm run checksum init`
+ 3. Run `npx playwright install --with-deps` to install Playwright dependencies.
+ 4. Update `checksum.config.ts`. At the very least you need to add:
+    1. apiKey
+    2. baseURL
+    3. username
+    4. password
+ 5. Update `login.ts` with your login function using Playwright (see below)
+ 6. Run `npm run checksum test` to run the example test and make sure login is successful
+ 7. If you haven't already done so, go to [app.checksum.ai](https://app.checksum.ai), finish the configuration, and generate a test. Then wait for the PR to be created and approve it.
+
+ ## Login Function
+
+ 1. This function runs at the beginning of each test.
+ 2. We recommend using a consistent seeded user for each test. For example, before each test, call a webhook that creates a user, seeds it with data, and returns the username and password. Doing so keeps tests reliable and allows running tests in parallel. If you do use a webhook, make sure to configure it [in your project](https://app.checksum.ai/#/settings/wizard) as well so test generation runs in the same context.
+ 3. After logging in, assert that the login was successful. Playwright waits for assertions to pass, so adding an assertion ensures that the page is ready for interaction before returning.
+ 4. If you'd like to reuse authentication state between tests, follow the Playwright guide at https://playwright.dev/docs/auth. Then, at the beginning of the login function, check whether the user is already authenticated and, if so, return.
+
+ ## Checksum AI Magic
+
+ The tests Checksum generates are Playwright tests. However, when executed using the Checksum CLI with an API key, Checksum extends Playwright's functionality to improve test reliability and automatically maintain tests.
+
+ ### Autonomous Test Agent
+
+ Checksum runs your Playwright tests regularly, but we added a few extra features to make tests more reliable. All of the features can be turned on/off through `checksum.config.ts`.
+
+ **Smart Selectors**
+ When a test is generated, Checksum stores extensive metadata for every action (see the test-data folder). When a classic selector fails, we use the metadata to fix it. For example, if a test identifies an element by its ID, but the ID changed, Checksum looks at hundreds of other data points (e.g. element class, text, parents) to find the element. To connect an action to its metadata, we use the `checksumSelector("<id>")` method. Do not change the IDs.
+
+ **Checksum AI**
+ If Smart Selectors fail as well, Checksum can use our custom-trained model to completely regenerate the failed section. In that case, the model might add, remove, or take different actions to complete the same goals. The model will not change the assertions; the assumption is that as long as the assertions pass, the model has fixed the test. The `.checksumAI("<natural language description of the test>")` method is used to instruct the model on how to fix the test.
+
+ You can edit the description as needed to help inform our model. You can also add steps with only Checksum AI descriptions so our model will generate the Playwright code. For example, adding `await page.checksumAI("Click on 'New Task' button")` without the actual action will have our model generate the Playwright code for this action. You can even author full tests this way.
+
+ ### Run Modes
+
+ Checksum has three run modes:
+
+ 1. Normal - tests are run using the Autonomous Test Agent as defined in the config file.
+ 2. Heal - if the Autonomous Test Agent corrects a test, we create a new test file with the fix. By default the test file will be created locally, but you can also have the Agent open a PR to your GitHub repo by setting `autoHealPRs` to true.
+ 3. Refactor (wip) - the Checksum Autonomous Test Agent will run the test and, for each action, regenerate a regular Playwright selector, a Smart Selector, and a Checksum AI description.
+
+ ### Mock Data
+
+ When Checksum generates a test, we record all of the backend responses so you can run the test with exactly the same backend context. This is useful when debugging a test, or when running it for the first time, especially if your testing DB/context is different from the one used for test generation. If your backend response format changes, the mocked data might not work as expected anymore.
+
+ ### CLI Commands (Needs to be updated)
+
+ 1. `init` - initialize the Checksum directory and configs
+ 2. `test` - run Checksum tests. Accepts all [Playwright command line flags](https://playwright.dev/docs/test-cli). To override `checksum.config.ts` you can pass full or partial JSON as a string, e.g. `--checksum-config='{"baseURL": "https://example.com"}'`
package/checksum-root/README.md
CHANGED
@@ -1 +1,55 @@
(identical to the package/README.md changes above)
package/checksum-root/checksum-tests.example.yml
ADDED
@@ -0,0 +1,55 @@
+ #################################
+ # This file has two example GitHub workflows to run Checksum tests
+ # 1. Runs Checksum tests on every push or PR to main/master
+ # 2. Runs the tests using a Docker container
+
+ name: Checksum Tests
+ on:
+   push:
+     branches: [ main, master ]
+   pull_request:
+     branches: [ main, master ]
+ jobs:
+   test:
+     timeout-minutes: 120
+     runs-on: ubuntu-latest
+     steps:
+       - uses: actions/checkout@v3
+       - uses: actions/setup-node@v3
+         with:
+           node-version: 18
+       # Installing deps, which should include the Checksum runtime
+       - name: Install dependencies
+         run: npm ci
+       - name: Install Playwright Browsers
+         run: npx playwright install --with-deps
+       # Run tests
+       - name: Run Checksum tests
+         run: npm run checksum test
+
+ name: Checksum Tests with Docker
+ on:
+   push:
+     branches: [ main, master ]
+   pull_request:
+     branches: [ main, master ]
+ jobs:
+   playwright:
+     name: 'Checksum Tests with Docker'
+     runs-on: ubuntu-latest
+     container:
+       image: mcr.microsoft.com/playwright:v1.40.0-jammy
+     steps:
+       - uses: actions/checkout@v3
+       - uses: actions/setup-node@v3
+         with:
+           node-version: 18
+       - name: Install dependencies
+         run: npm ci
+       - name: Run Checksum tests
+         run: npm run checksum test
+         env:
+           HOME: /root
package/checksum-root/checksum.config.ts
CHANGED
@@ -2,55 +2,41 @@ import { RunMode, getChecksumConfig } from "@checksum-ai/runtime";

  export default getChecksumConfig({
    /**
-    * Checksum
-    * normal - tests run normally
-    * heal - checksum will attempt to heal tests that failed using fallback
-    * refactor - checksum will attempt to refactor and improve your tests
+    * Checksum run mode. See README for more info.
     */
    runMode: RunMode.Normal,

    /**
-    * Insert here your Checksum API key
+    * Insert your Checksum API key here. You can find it at https://app.checksum.ai/#/settings/
     */
    apiKey: "<API key>",

    /**
-    * This is the base URL of the tested app
+    * The base URL of the tested app, e.g. https://example.com. URLs in the tests will be relative to the base URL.
     */
    baseURL: "<base URL>",

-   /**
-    * Insert the account's username that will be used
-    * to login into your testing environment
-    */
-   username: "<username>",
-
-   /**
-    * Insert the account's password that will be used
-    * to login into your testing environment
-    */
-   password: "<password>",
-
    options: {
      /**
-      * Whether to use Checksum Smart Selector when
+      * Whether to use Checksum Smart Selectors when an action fails (see README)
       */
      useChecksumSelectors: true,
      /**
-      * Whether to use Checksum AI when
+      * Whether to use Checksum AI when an action fails (see README)
       */
      useChecksumAI: true,
      /**
-      * Whether to use mock API data when running your tests
+      * Whether to use mock API data when running your tests (see README)
       */
      useMockData: false,
      /**
+      * Whether to upload HTML test reports to app.checksum.ai so they can be viewed through the UI. Only relevant if the Playwright reporter config is set to HTML.
+      * Reports are saved locally either way (according to the Playwright config) and can be viewed using the CLI command show-report.
+      */
+     hostReports: !!process.env.CI,
+     /**
+      * Whether to create a PR with healed tests. Only relevant in Heal mode.
       */
+     autoHealPRs: !!process.env.CI,
    },
  });
package/checksum-root/example.checksum.spec.ts
ADDED
@@ -0,0 +1,18 @@
+ /* Checksum.ai autogenerated test */
+ import { test as base, expect } from "@playwright/test";
+ import { init, IChecksumPage } from "@checksum-ai/runtime";
+
+ const { test, defineChecksumTest, login } = init(base);
+
+ test.describe("Taskboard", () => {
+   test.beforeEach(async ({ page }: { page: IChecksumPage }) => {
+     await login(page);
+   });
+
+   test(
+     defineChecksumTest("Navigate to home page", "GPzdp"),
+     async ({ page }) => {
+       await page.goto("/");
+     }
+   );
+ });
package/checksum-root/login.ts
CHANGED
@@ -1,19 +1,32 @@
  import { ChecksumConfig, IChecksumPage } from "@checksum-ai/runtime";
+ import { expect, request } from "@playwright/test";

- /**
-  * Login method
-  */
  export default async function login(
    page: IChecksumPage,
    config: ChecksumConfig
  ) {
    /**
-    *
-    *
-    *
-    *
-    * await page.getByPlaceholder("Password...").fill(config.password);
-    * await page.getByText("Continue").click();
-    * await page.waitForURL("/main");
+    * This code provides examples of how to write functions for different login scenarios.
+    * See README for more details.
+    *
+    * Example with Seed Function:
     */
+   const apiContext = await request.newContext();
+   const response = await apiContext.get("https://example.com/createseed");
+   const { username, password } = await response.json();
+   await page.goto("/login");
+   await page.getByPlaceholder("Email...").fill(username);
+   await page.getByPlaceholder("Password...").fill(password);
+   await page.getByText("Login").click();
+   await expect(page.getByText("Login Successful")).toBeVisible();
+
+   /**
+    * Example with Default Username and Password:
+    * This example demonstrates how to log in to a page using a predefined username and password from a config file.
+    */
+   await page.goto("/login");
+   await page.getByPlaceholder("Email...").fill(config.username);
+   await page.getByPlaceholder("Password...").fill(config.password);
+   await page.getByText("Login").click();
+   await expect(page.getByText("Login Successful")).toBeVisible();
  }
package/checksum-root/playwright.config.ts
CHANGED
@@ -1,27 +1,62 @@
  import { defineConfig, devices } from "@playwright/test";

+ /**
+  * Read environment variables from file.
+  * https://github.com/motdotla/dotenv
+  */
+ require("dotenv").config();
+
+ /**
+  * See https://playwright.dev/docs/test-configuration.
+  */
  export default defineConfig({
-   timeout: 120000,
-   testMatch: [/.*.[.]checksum.spec.ts/],
    testDir: "..",
+   /* Set test timeout to 10 minutes (relatively long) as Checksum implements its own timeout mechanism */
+   timeout: 1000 * 50 * 10,
    /* Run tests in files in parallel */
    fullyParallel: false,
+   /* Fail the build on CI if you accidentally left test.only in the source code. */
+   forbidOnly: !!process.env.CI,
+   /* Retry on CI only */
+   retries: process.env.CI ? 2 : 0,
+   /* Opt out of parallel tests on CI. */
+   workers: process.env.CI ? 1 : 1,
    /* Reporter to use. See https://playwright.dev/docs/test-reporters */
-   reporter:
+   reporter: process.env.CI
+     ? [["html", { open: "never", outputFolder: "test-results" }], ["line"]]
+     : "html",
    /* Shared settings for all the projects below. See https://playwright.dev/docs/api/class-testoptions. */
    use: {
+     /* Base URL to use in actions like `await page.goto('/')`. */
+     // baseURL: 'http://127.0.0.1:3000',
+
      /* Collect trace when retrying the failed test. See https://playwright.dev/docs/trace-viewer */
      trace: "on",
      video: "on",
+     screenshot: "on",
+     locale: "en-US",
+     timezoneId: "America/Los_Angeles",
+     permissions: ["clipboard-read"],
+     actionTimeout: undefined, // Keep action timeout undefined as Checksum implements its own timeout
+   },
+   expect: {
+     toHaveScreenshot: { maxDiffPixelRatio: 0.05, maxDiffPixels: 200 },
    },

    /* Configure projects for major browsers */
    projects: [
      {
        name: "chromium",
+       testMatch: /^(?!.*refactored).*spec.*/,
+       use: {
+         ...devices["Desktop Chrome"],
+       },
      },
    ],
+   /* Run your local dev server before starting the tests */
+   // webServer: {
+   //   command: 'npm run start',
+   //   url: 'http://127.0.0.1:3000',
+   //   reuseExistingServer: !process.env.CI,
+   // },
  });
package/cli.js
CHANGED
@@ -1,4 +1,4 @@
- var f=Object.create;var g=Object.defineProperty;var C=Object.getOwnPropertyDescriptor;var w=Object.getOwnPropertyNames;var y=Object.getPrototypeOf,P=Object.prototype.hasOwnProperty;var u=(n,t)=>g(n,"name",{value:t,configurable:!0});var k=(n,t,e,s)=>{if(t&&typeof t=="object"||typeof t=="function")for(let o of w(t))!P.call(n,o)&&o!==e&&g(n,o,{get:()=>t[o],enumerable:!(s=C(t,o))||s.enumerable});return n};var R=(n,t,e)=>(e=n!=null?f(y(n)):{},k(t||!n||!n.__esModule?g(e,"default",{value:n,enumerable:!0}):e,n));var i=require("fs"),l=R(require("child_process")),r=require("path");var
+ var f=Object.create;var g=Object.defineProperty;var C=Object.getOwnPropertyDescriptor;var w=Object.getOwnPropertyNames;var y=Object.getPrototypeOf,P=Object.prototype.hasOwnProperty;var u=(n,t)=>g(n,"name",{value:t,configurable:!0});var k=(n,t,e,s)=>{if(t&&typeof t=="object"||typeof t=="function")for(let o of w(t))!P.call(n,o)&&o!==e&&g(n,o,{get:()=>t[o],enumerable:!(s=C(t,o))||s.enumerable});return n};var R=(n,t,e)=>(e=n!=null?f(y(n)):{},k(t||!n||!n.__esModule?g(e,"default",{value:n,enumerable:!0}):e,n));var i=require("fs"),l=R(require("child_process")),r=require("path");var m="checksum";var h=class{constructor(){this.TEST_RUN_MONITOR_PATH=(0,r.join)(__dirname,"test-run-monitor.js");this.CHECKSUM_API_URL="https://api.checksum.ai";this.CHECKSUM_APP_URL="https://app.checksum.ai";this.didFail=!1;this.isolatedMode=!1;this.completeIndicators={upload:!1,tests:!1,report:!1};this.guardReturn=async(t,e=1e3,s="action hang guard timed out")=>{let o="guard-timed-out",a=u(async()=>(await this.awaitSleep(e+1e3),o),"guard"),c=await Promise.race([t,a()]);if(typeof c=="string"&&c===o)throw new Error(s);return c};this.awaitSleep=t=>new Promise(e=>setTimeout(e,t))}async execute(){switch(process.argv.find(t=>t==="--help"||t==="-h")&&(await this.printHelp(process.argv[2]),process.exit(0)),process.argv[2]){case"init":this.install();break;case"test":await this.test(process.argv.slice(3));break;case"show-report":this.showReport(process.argv.slice(3));break;default:await this.printHelp()}process.exit(0)}async execCmd(t){let e=await l.spawn(t,{shell:!0,stdio:"inherit"});return new Promise((o,a)=>{e.on("exit",c=>{c===0?o(!0):a(new Error(`Checsum failed execution with code: ${c} `))})})}async getCmdOutput(t){return new Promise(function(e,s){l.exec(t,(o,a,c)=>{if(o){s(`Error executing command: ${o.message}`);return}e(a)})})}async printHelp(t){switch(t){default:console.log(`
  Checksum CLI
  Usage: checksum [command] [options]

@@ -10,10 +10,10 @@ show-report [options] [report] show HTML report
  `);break;case"test":try{let e="npx playwright test --help",s=(await this.getCmdOutput(e)).replace(/npx playwright/g,"yarn checksum").split(`
  `);s.splice(5,0," --checksum-config=<config> Checksum configuration in JSON format").join(`
  `),console.log(s.join(`
- `))}catch(e){console.log("Error",e.message)}break;case"show-report":try{let e="npx playwright show-report --help",s=(await this.getCmdOutput(e)).replace(/npx playwright/g,"yarn checksum");console.log(s)}catch(e){console.log("Error",e.message)}break}}async showReport(t){let e=`npx playwright show-report ${t.join(" ")}`;try{await this.execCmd(e)}catch(s){console.log("Error showing report",s.message)}}async test(t){this.processChecksumArguments(t),this.setChecksumConfig(),await this.getSession();let e;try{e=await this.guardReturn(this.startTestRunMonitor(this.testSession),1e4,"test run monitor timeout")}catch{console.log("Error starting test run monitor. Test results will not be available on checksum.")}this.buildVolatileConfig();let s=`${e?`CHECKSUM_UPLOAD_AGENT_PORT=${e} `:""} npx playwright test --config ${this.getPlaywrightConfigFile()} ${t.join(" ")}`;await this.patchPlaywright();try{await this.execCmd(s),console.log("Tests execution finished")}catch(o){this.didFail=!0,console.log("Error during test",o.message)}finally{let o=this.getPlaywrightReportPath();(0,i.existsSync)(o)?this.testRunMonitorProcess.stdin.write(`cli:report=${o}`):console.log(`Could not find report file at ${o}`),await this.patchPlaywright(!0),this.completeIndicators.tests=!0,await this.handleCompleteMessage()}}async patchPlaywright(t=!1){let e=`bash ${(0,r.join)(__dirname,"scripts/patch.sh")}${t?" off":""}`;try{await this.execCmd(e)}catch(s){console.log("Error patching playwright",s.message)}}getPlaywrightReportPath(){var o,a;let t=(0,r.join)(process.cwd(),"playwright-report"),e=require(this.getPlaywrightConfigFile()),{reporter:s}=e;return s instanceof Array&&s.length>1&&((o=s[1])!=null&&o.outputFolder)&&(t=(a=s[1])==null?void 0:a.outputFolder),process.env.PLAYWRIGHT_HTML_REPORT&&(t=process.env.PLAYWRIGHT_HTML_REPORT),(0,r.join)(t,"index.html")}getPlaywrightConfigFile(){return(0,r.join)(this.getRootDirPath(),"playwright.config.ts")}startTestRunMonitor(t){return new Promise(e=>{console.log("Starting test run monitor"),this.testRunMonitorProcess=l.spawn("node",[this.TEST_RUN_MONITOR_PATH,JSON.stringify({sessionId:t,checksumApiURL:this.CHECKSUM_API_URL,apiKey:this.config.apiKey}),...this.
+ `))}catch(e){console.log("Error",e.message)}break;case"show-report":try{let e="npx playwright show-report --help",s=(await this.getCmdOutput(e)).replace(/npx playwright/g,"yarn checksum");console.log(s)}catch(e){console.log("Error",e.message)}break}}async showReport(t){let e=`npx playwright show-report ${t.join(" ")}`;try{await this.execCmd(e)}catch(s){console.log("Error showing report",s.message)}}async test(t){this.processChecksumArguments(t),this.setChecksumConfig(),await this.getSession();let e;try{e=await this.guardReturn(this.startTestRunMonitor(this.testSession),1e4,"test run monitor timeout")}catch{console.log("Error starting test run monitor. Test results will not be available on checksum.")}this.buildVolatileConfig();let s=`${e?`CHECKSUM_UPLOAD_AGENT_PORT=${e} `:""}${this.config.options.hostReports&&!this.isolatedMode?" PW_TEST_HTML_REPORT_OPEN=never":""} npx playwright test --config ${this.getPlaywrightConfigFile()} ${t.join(" ")}`;await this.patchPlaywright();try{await this.execCmd(s),console.log("Tests execution finished")}catch(o){this.didFail=!0,console.log("Error during test",o.message)}finally{let o=this.getPlaywrightReportPath();(0,i.existsSync)(o)?this.testRunMonitorProcess.stdin.write(`cli:report=${o}`):console.log(`Could not find report file at ${o}`),await this.patchPlaywright(!0),this.completeIndicators.tests=!0,await this.handleCompleteMessage()}}async patchPlaywright(t=!1){let e=`bash ${(0,r.join)(__dirname,"scripts/patch.sh")}${t?" off":""}`;try{await this.execCmd(e)}catch(s){console.log("Error patching playwright",s.message)}}getPlaywrightReportPath(){var o,a;let t=(0,r.join)(process.cwd(),"playwright-report"),e=require(this.getPlaywrightConfigFile()),{reporter:s}=e;return s instanceof Array&&s.length>1&&((o=s[1])!=null&&o.outputFolder)&&(t=(a=s[1])==null?void 0:a.outputFolder),process.env.PLAYWRIGHT_HTML_REPORT&&(t=process.env.PLAYWRIGHT_HTML_REPORT),(0,r.join)(t,"index.html")}getPlaywrightConfigFile(){return(0,r.join)(this.getRootDirPath(),"playwright.config.ts")}startTestRunMonitor(t){return new Promise(e=>{console.log("Starting test run monitor"),this.testRunMonitorProcess=l.spawn("node",[this.TEST_RUN_MONITOR_PATH,JSON.stringify({sessionId:t,checksumApiURL:this.CHECKSUM_API_URL,apiKey:this.config.apiKey}),...this.isolatedMode?["isolated"]:[]]),this.testRunMonitorProcess.stdout.on("data",s=>{var p,d;let o=s.toString().trim();if(!o.startsWith("monitor")){(d=(p=this.config)==null?void 0:p.options)!=null&&d.printLogs&&console.log(`Message from test run monitor: ${o}`);return}let[a,c]=o.substring(o.indexOf(":")+1).split("=");a==="port"?e(c):this.handleTestRunMonitorMessage(a,c)}),this.testRunMonitorProcess.on("exit",(s,o)=>{console.log(`test run monitor process exited with code ${s} and signal ${o}`)}),this.testRunMonitorProcess.on("error",s=>{console.error(`Error starting test run monitor: ${s.message}`)})})}async handleTestRunMonitorMessage(t,e){switch(t){case"complete":this.isolatedMode||console.log("Test artifacts uploaded successfully"),this.sendUploadsComplete().then(()=>{this.completeIndicators.upload=!0});break;case"report-uploaded":{if(this.isolatedMode){this.completeIndicators.report=!0;break}console.log("Report generated and uploaded to checksum");let s={};try{s=JSON.parse(e)}catch(o){console.log("Error parsing stats",o.message)}await this.sendTestRunEnd(s),this.completeIndicators.report=!0,console.log(`*******************
  * Checksum report URL: ${this.CHECKSUM_APP_URL}/#/test-runs/${this.testSession}
  *******************`);break}default:console.warn(`Unhandled test run monitor message: ${t}=${e}`)}}async handleCompleteMessage(){for(;;)Object.keys(this.completeIndicators).find(t=>!this.completeIndicators[t])?await this.awaitSleep(1e3):(console.log("Checksum test run complete"),this.shutdown(this.didFail?1:0))}shutdown(t=0){this.cleanup(),process.exit(t)}buildVolatileConfig(){if(!this.volatileChecksumConfig)return;let t=this.getVolatileConfigPath(),e=`
  import { RunMode, getChecksumConfig } from "@checksum-ai/runtime";

  export default getChecksumConfig(${JSON.stringify(this.config,null,2)});
- `;(0,i.writeFileSync)(t,e)}cleanup(){this.deleteVolatileConfig(),this.testRunMonitorProcess.stdin.write("cli:shutdown"),this.testRunMonitorProcess.kill()}async getSession(){try{if(this.
+ `;(0,i.writeFileSync)(t,e)}cleanup(){this.deleteVolatileConfig(),this.testRunMonitorProcess.stdin.write("cli:shutdown"),this.testRunMonitorProcess.kill()}async getSession(){try{if(!this.config.options.hostReports){this.setIsolatedMode();return}let t=this.config.apiKey;(!t||t==="<API key>")&&(console.error("No API key found in checksum config"),this.shutdown(1));let e=JSON.stringify(await this.getEnvInfo()),s=await fetch(`${this.CHECKSUM_API_URL}/client-api/test-runs`,{method:"POST",headers:{Accept:"application/json","Content-Type":"application/json",ChecksumAppCode:t},body:e});this.testSession=(await s.json()).uuid}catch{console.log("Error getting checksum test session, will run in isolation mode"),this.setIsolatedMode()}}setIsolatedMode(){this.isolatedMode=!0,this.testSession="isolated-session"}async sendTestRunEnd(t){if(!this.isolatedMode)try{let e=JSON.stringify({...t,endedAt:Date.now()});await this.updateTestRun(`${this.CHECKSUM_API_URL}/client-api/test-runs/${this.testSession}`,"PATCH",e)}catch(e){return console.log("Error sending test run end",e.message),null}}async sendUploadsComplete(){if(!this.isolatedMode)try{await this.updateTestRun(`${this.CHECKSUM_API_URL}/client-api/test-runs/${this.testSession}/uploads-completed`,"PATCH")}catch(t){console.log("Error sending test run uploads complete",t.message)}}async updateTestRun(t,e,s=void 0){return(await fetch(t,{method:e,headers:{Accept:"application/json","Content-Type":"application/json",ChecksumAppCode:this.config.apiKey},body:s})).json()}async getEnvInfo(){let t={commitHash:"",branch:"branch",environment:process.env.CI?"CI":"local",name:"name",startedAt:Date.now()};try{t.commitHash=(await this.getCmdOutput("git rev-parse HEAD")).toString().trim()}catch(e){console.log("Error getting git hash",e.message)}try{t.branch=(await this.getCmdOutput("git rev-parse --abbrev-ref HEAD")).toString().trim()}catch(e){console.log("Error getting branch name",e.message)}return t}getVolatileConfigPath(){return(0,r.join)(this.getRootDirPath(),"checksum.config.tmp.ts")}deleteVolatileConfig(){let t=this.getVolatileConfigPath();(0,i.existsSync)(t)&&(0,i.rmSync)(t)}setChecksumConfig(){this.config={...require((0,r.join)(this.getRootDirPath(),"checksum.config.ts")).default||{},...this.volatileChecksumConfig||{}},this.config.apiURL&&(this.CHECKSUM_API_URL=this.config.apiURL)}processChecksumArguments(t){this.deleteVolatileConfig();for(let e of t)if(e.startsWith("--checksum-config"))try{this.volatileChecksumConfig=JSON.parse(e.split("=")[1]),t=t.filter(s=>s!==e)}catch(s){console.log("Error parsing checksum config",s.message),this.volatileChecksumConfig=void 0}}install(){console.log("Creating Checksum directory and necessary files to run your tests");let t=this.getRootDirPath();if((0,i.existsSync)(this.getRootDirPath())||(0,i.mkdirSync)(t),!(0,i.existsSync)(this.getChecksumRootOrigin()))throw new Error("Could not find checksum root directory, please install @checksum-ai/runtime package");["checksum.config.ts","playwright.config.ts","login.ts","README.md"].forEach(e=>{(0,i.copyFileSync)((0,r.join)(this.getChecksumRootOrigin(),e),(0,r.join)(t,e))}),(0,i.mkdirSync)((0,r.join)(t,"tests"),{recursive:!0}),["esra","har","trace","log"].forEach(e=>{(0,i.mkdirSync)((0,r.join)(t,"test-data",e),{recursive:!0})})}getRootDirPath(){return(0,r.join)(process.cwd(),m)}getChecksumRootOrigin(){return(0,r.join)(process.cwd(),"node_modules","@checksum-ai","runtime","checksum-root")}};u(h,"ChecksumCLI");(async()=>await new h().execute())();