@checksum-ai/runtime 1.0.40 → 1.0.41

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/LICENSE CHANGED
@@ -1 +1,3 @@
- Checksum license
+ The code provided in this package is a proprietary code owned by Checksum AI INC. Checksum retains all right, title and interest in to the code. The Customer shall maintain the confidentiality of the Software and such related information, technical data, and know-how.
+
+ Customer will not copy, merge, publish, distribute, publicly display, transfer, sublicense, and/or sell the Software and will not, directly or indirectly: reverse engineer, decompile, disassemble or otherwise attempt to discover the source code, object code or underlying structure, ideas, know-how or algorithms relevant to the Services or any software, documentation or data related to the Services (“Software”); modify, translate, or create derivative works based on the Services or any Software (except to the extent expressly permitted by Company or authorized within the Services); use the Services or any Software for timesharing or service bureau purposes or otherwise for the benefit of a third; or remove any proprietary notices or labels. Checksum hereby grants Customer a non-exclusive, non-transferable, non-sublicensable license to use such Software during the Term only in connection with the Services.
package/README.md CHANGED
@@ -2,54 +2,64 @@
 
  ## Quick Start
 
- 1. Run `npm i @checksum-ai/runtime`
- 2. Navigate to the directory you want to add Checksum tests and run `npm run checksum init`
+ 1. Run `npm install -D checksumai`.
+ 2. Navigate to the directory where you want to add Checksum tests and run `npx checksumai init`.
  3. Run `npx playwright install --with-deps` to install Playwright dependencies.
- 4. Update `checksum.config.ts`. At the very least you need to add:
- 1. apiKey
- 2. baseURL
- 3. username
- 4. password
- 5. Update `login.ts` with your login function using Playwright (see bellow)
- 6. Run `npm run checksum test` to run the example test and make sure login is successful
- 7. If you haven't already done so, go to [app.checksum.ai](https://app.checksum.ai) finish the configuration and generate a test. Then wait for the PR to be created and approve it.
+ 4. Edit `checksum.config.ts` to include necessary configurations such as:
+ - `apiKey`
+ - `baseURL`
+ - `username`
+ - `password`
+ 5. Update `login.ts` with your login function using Playwright. See the Login Function section below for guidance.
+ 6. Run `npx checksumai test` to execute the example test and verify successful login.
+ 7. If you haven't already, visit [app.checksum.ai](https://app.checksum.ai) to complete the configuration and generate a test. Then, wait for the pull request (PR) to be created and approve it.
 
  ## Login Function
 
- 1. This function will be run at the beginning of each test.
- 2. We recommend using a consistent seeded user for each test. For example, before each test, call a webhook that creates a user, seeds it with data and returns the username and password. Doing so will keep tests reliable and allow running tests in parallel. If you do use a webhook, make sure to configure it [in your project](https://app.checksum.ai/#/settings/wizard) as well so test generation runs in the same context.
- 3. After login-in, assert that the login was successful. Playwright waits for assertions to be correct, so adding an assertion assures that the page is ready fo interaction before returning.
- 4. If you'd like to reuse authentication state between tests, follow Playwright guide https://playwright.dev/docs/auth. Then, check at the beginning of the login function if user is already authenticated and if so return.
+ 1. This function is executed at the start of each test.
+ 2. We recommend using a consistent seeded user for every test. For example, before each test, call a webhook that creates a user, seeds it with data, and returns the username and password. This approach ensures test reliability and allows parallel test execution. Configure this webhook [in your project](https://app.checksum.ai/#/settings/wizard) for consistent test generation.
+ 3. After logging in, assert that the login was successful. Playwright waits for assertions to be correct, ensuring that the page is ready for interaction before proceeding.
+ 4. To reuse authentication states between tests, refer to the Playwright guide on [authentication](https://playwright.dev/docs/auth). At the start of the login function, check if the user is already authenticated and return if so.
 
  ## Checksum AI Magic
 
- The tests Checksum generates are Playwright tests. However, when executed using Checksum CLI with an API key, Checksum extends Playwright functionality to improve test reliability and automatically maintain tests.
+ The tests generated by Checksum are based on Playwright. When executed using the Checksum CLI with an API key, Checksum enhances Playwright's functionality, improving test reliability and automatically maintaining tests.
 
  ### Autonomous Test Agent
 
  Checksum runs your Playwright tests regularly, but we added a few extra features to make tests more reliable. All of the features can be turned on/off through `checksum.config.ts`
 
  **Smart Selectors**
- when the test is generated, Checksum stores vast metadata for every action (see test-data folder). When a classic selector fails, we use the metadata to fix it. For example, if a test identifies an element by its ID, but the ID changed, Checksum looks at hundreds of other data points (eg element class, text, parents) to find the element. To connect an action to its metadata, we use the `checksumSelector("<id>")` method. Do not change the IDs.
+ When generating tests, Checksum stores extensive metadata for each action (see the `test-data` folder). If a traditional selector fails, this metadata is used for correction. For example, if a test identifies an element by its ID but the ID changes, Checksum utilizes other data points (e.g., element class, text, parents) to locate the element. Use the `checksumSelector("<id>")` method to link an action to its metadata. Do not alter the IDs.
 
  **Checksum AI**
- If Smart Selectors fail as well, Checksum can use our custom-trained model to completely regenerate the failed section. In that case, the model might add, remove of take different actions to complete the same goals. The model will not change the assertions and the assumption is that as long as the assertions pass, the model has fixed the test. `.checksumAI("<natural language description of the test>")` method is used to instruct the model on how to fix the test.
+ If Smart Selectors also fail, Checksum's custom-trained model can regenerate the failed section of the test. In such cases, the model might add, remove, or alter actions to achieve the same objectives, without changing the assertions. The assumption is that as long as the assertions pass, the model has successfully fixed the test. Use the `.checksumAI("<natural language description of the test>")` method to guide the model in fixing the test.
 
- You can edit the description as needed to help inform our model. You can also add steps with only ChecksumAI descriptions so our model will generate the Playwright code. For example, adding `await page.checksumAI("Click on 'New Task' button")` without the actual action will have our model generate the Playwright code for this action. You can even author full tests this way.
+ You can modify the description as needed for our model. Additionally, you can include steps with only ChecksumAI descriptions, prompting our model to generate the Playwright code. For example, `await page.checksumAI("Click on 'New Task' button")` without the actual action will have our model generate the necessary Playwright code. You can even author entire tests in this manner.
 
  ### Run Modes
 
- Checksum has three run modes:
+ Checksum offers three run modes:
 
- 1. Normal - tests are run using the Autonomous Test Agent as defined in the config file.
- 2. Heal - If the Autonomous Test Agent corrects a test, we create a new test file with the fix. By default the test file will be created locally, but you can also have the Agent open a PR to your github repo by setting `autoHealPRs` to true
- 3. Refactor (wip) - Checksum Autonomous Test Agent will run the test and for each action, regenerate a regular Playwright selector, a Smart Selector and a Checksum AI description.
+ 1. **Normal** - Tests are executed using the Autonomous Test Agent as defined in the config file.
+ 2. **Heal** - If the Autonomous Test Agent corrects a test, a new test file with the fix is created. By default, this file is created locally, but you can also configure the Agent to open a PR to your GitHub repository by setting `autoHealPRs` to true.
+ 3. **Refactor (Work in Progress)** - The Autonomous Test Agent runs the test and, for each action, regenerates a regular Playwright selector, a Smart Selector, and a Checksum AI description.
 
  ### Mock Data
 
- When Checksum generates the test, we record all of the Backend responses so you can run the tests with exactly the same Backend context. Its useful when debugging a test, or when running it for the first time, especially if your testing DB/context is different then the one used for test generation. If your Backend response format, changes the Mocked data might not work as expected anymore.
+ When generating tests, Checksum records all backend responses, allowing tests to run in the same backend context. This is particularly useful for debugging or initial test runs, especially if your testing database/context differs from that used for test generation. Note that if your backend response format changes, the mocked data may not work as expected.
 
- ### CLI Commands (Needs to be updated)
+ ### CLI Commands
 
- 1. `init` - initialize Checksum directory and configs
- 2. `test` - Run Checksum tests. Accepts all [Playwright command line flags](https://playwright.dev/docs/test-cli). To override the`checksum.config.ts` you can pass full or partial json as a string. E.g. `--checksum-config='{"baseURL" = "https://example.com"}'`
+ 1. `init` - Initialize the Checksum directory and configurations.
+ 2. `test` - Run Checksum tests. Accepts all [Playwright command line flags](https://playwright.dev/docs/test-cli). To override `checksum.config.ts`, pass full or partial JSON as a string, e.g., `--checksum-config='{"baseURL": "https://example.com"}'`.
+
+ ## Running with GitHub Actions
+
+ See the example file `github-actions.example.yml`.
+
+ ## Troubleshooting
+
+ **Q: I'm seeing various exceptions when I run "npx checksumai test", even before the test starts.**
+
+ A: If you had a pre-installed version of Playwright, it might not be compatible with Checksum. Remove both Playwright and Checksum libraries, delete the relevant folder from `node_modules`, and run `npm install -D checksumai`.
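The partial-JSON override described in the CLI Commands entry above can be pictured as a merge over the base config. A minimal sketch of that idea, assuming a simple shallow merge in which command-line keys win (the CLI's exact merge semantics are not documented here; field names come from the Quick Start list):

```typescript
// Base values for the fields the README says checksum.config.ts needs
// (placeholder values, as in the generated config template).
const baseConfig = {
  apiKey: "<api key>",
  baseURL: "<base URL>",
  username: "<username>",
  password: "<password>",
};

// The string as it would arrive from:
//   npx checksumai test --checksum-config='{"baseURL": "https://example.com"}'
// Note it must be valid JSON ("baseURL": ...), not "baseURL" = ...
const cliOverride = '{"baseURL": "https://example.com"}';

// Shallow merge: keys supplied on the command line win, everything
// else keeps its config-file value.
const effectiveConfig = { ...baseConfig, ...JSON.parse(cliOverride) };
// effectiveConfig.baseURL is now "https://example.com";
// apiKey, username, and password are unchanged.
```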
package/checksum.config.ts CHANGED
@@ -16,6 +16,18 @@ export default getChecksumConfig({
   */
   baseURL: "<base URL>",
 
+ /**
+ * Insert the account's username that will be used
+ * to login into your testing environment
+ */
+ username: "<username>",
+
+ /**
+ * Insert the account's password that will be used
+ * to login into your testing environment
+ */
+ password: "<password>",
+
  options: {
   /**
   * Whether to use Checksum Smart Selector when an action fails (see README)
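The `username` and `password` fields added in the hunk above are plain strings in the config file. To avoid committing real credentials, they could instead be read from environment variables; a minimal sketch, where the `CHECKSUM_USERNAME`/`CHECKSUM_PASSWORD` variable names are hypothetical and not part of the package:

```typescript
// Resolve a config value from the environment, falling back to the
// template placeholder when the variable is unset.
function fromEnv(name: string, fallback: string): string {
  return process.env[name] ?? fallback;
}

// Hypothetical variable names, for illustration only.
const credentials = {
  username: fromEnv("CHECKSUM_USERNAME", "<username>"),
  password: fromEnv("CHECKSUM_PASSWORD", "<password>"),
};
```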
package/github-actions.example.yml CHANGED
@@ -1,5 +1,5 @@
  #################################
- # This file has two example Github workflows to run Checksum tests
+ # This file has two example Github Action workflows to run Checksum tests
  # 1. Runs Checksum tests on every push or PR to main/master
  # 2. Runs the test using Docker container
 
package/login.ts CHANGED
@@ -11,22 +11,22 @@ export default async function login(
   *
   * Example with Seed Function:
   */
- const apiContext = await request.newContext();
- const response = await apiContext.get("https://example.com/createseed");
- const { username, password } = await response.json();
- await page.goto("/login");
- await page.getByPlaceholder("Email...").fill(process.env.username);
- await page.getByPlaceholder("Password...").fill(process.env.password);
- await page.getByText("Login").click();
- await expect(page.getByText("Login Successful")).toBeVisible();
+ // const apiContext = await request.newContext();
+ // const response = await apiContext.get("https://example.com/createseed");
+ // const { username, password } = await response.json();
+ // await page.goto("/login");
+ // await page.getByPlaceholder("Email...").fill(process.env.username);
+ // await page.getByPlaceholder("Password...").fill(process.env.password);
+ // await page.getByText("Login").click();
+ // await expect(page.getByText("Login Successful")).toBeVisible();
 
  /**
   * Example with Default Username and Password:
   * This example demonstrates how to log in to a page using a predefined username and password from a config file.
   */
- await page.goto("/login");
- await page.getByPlaceholder("Email...").fill(config.username);
- await page.getByPlaceholder("Password...").fill(config.password);
- await page.getByText("Login").click();
- await expect(page.getByText("Login Successful")).toBeVisible();
+ // await page.goto("/login");
+ // await page.getByPlaceholder("Email...").fill(config.username);
+ // await page.getByPlaceholder("Password...").fill(config.password);
+ // await page.getByText("Login").click();
+ // await expect(page.getByText("Login Successful")).toBeVisible();
  }
package/playwright.config.ts CHANGED
@@ -47,7 +47,7 @@ export default defineConfig({
  projects: [
    {
      name: "chromium",
- testMatch: /^(?!.*refactored).*spec.*/,
+ testMatch: /checksum.spec/,
      use: {
        ...devices["Desktop Chrome"],
      },
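The `testMatch` change above narrows which files Playwright discovers: the old pattern picked up any "spec" file not containing "refactored", while the new one only matches Checksum-generated specs. A quick sketch of the difference (the file names are made up for illustration):

```typescript
// Old pattern: any file name containing "spec" but not "refactored".
const oldPattern = /^(?!.*refactored).*spec.*/;
// New pattern: only file names containing "checksum.spec"
// (the unescaped "." matches any character, which is harmless here).
const newPattern = /checksum.spec/;

oldPattern.test("login.spec.ts");            // true  - plain spec files ran before
oldPattern.test("login.refactored.spec.ts"); // false - healed copies were excluded
newPattern.test("login.spec.ts");            // false - plain spec files no longer match
newPattern.test("example.checksum.spec.ts"); // true  - only Checksum specs run now
```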
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "@checksum-ai/runtime",
- "version": "1.0.40",
+ "version": "1.0.41",
    "description": "Checksum.ai test runtime",
    "main": "index.js",
    "scripts": {