testdriverai 5.2.2 → 5.3.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.github/workflows/test-install.yml +1 -1
- package/README.md +5 -11
- package/agent.js +135 -99
- package/docs/30x30.mdx +84 -0
- package/docs/action/browser.mdx +129 -0
- package/docs/action/os.mdx +157 -0
- package/docs/action/output.mdx +98 -0
- package/docs/action/performance.mdx +71 -0
- package/docs/action/prerun.mdx +80 -0
- package/docs/action/secrets.mdx +103 -0
- package/docs/action/setup.mdx +115 -0
- package/docs/bugs/jira.mdx +208 -0
- package/docs/cli/overview.mdx +65 -0
- package/docs/commands/assert.mdx +31 -0
- package/docs/commands/exec.mdx +42 -0
- package/docs/commands/focus-application.mdx +29 -0
- package/docs/commands/hover-image.mdx +32 -0
- package/docs/commands/hover-text.mdx +37 -0
- package/docs/commands/if.mdx +43 -0
- package/docs/commands/match-image.mdx +41 -0
- package/docs/commands/press-keys.mdx +30 -0
- package/docs/commands/run.mdx +30 -0
- package/docs/commands/scroll-until-image.mdx +33 -0
- package/docs/commands/scroll-until-text.mdx +37 -0
- package/docs/commands/scroll.mdx +33 -0
- package/docs/commands/type.mdx +29 -0
- package/docs/commands/wait-for-image.mdx +31 -0
- package/docs/commands/wait-for-text.mdx +35 -0
- package/docs/commands/wait.mdx +30 -0
- package/docs/docs.json +226 -0
- package/docs/exporting/playwright.mdx +159 -0
- package/docs/features/auto-healing.mdx +124 -0
- package/docs/features/cross-platform.mdx +106 -0
- package/docs/features/generation.mdx +180 -0
- package/docs/features/github.mdx +161 -0
- package/docs/features/parallel-testing.mdx +130 -0
- package/docs/features/reusable-snippets.mdx +124 -0
- package/docs/features/selectorless.mdx +62 -0
- package/docs/features/visual-assertions.mdx +123 -0
- package/docs/getting-started/ci.mdx +196 -0
- package/docs/getting-started/generating.mdx +210 -0
- package/docs/getting-started/running.mdx +67 -0
- package/docs/getting-started/setup.mdx +133 -0
- package/docs/getting-started/writing.mdx +99 -0
- package/docs/guide/assertions.mdx +195 -0
- package/docs/guide/authentication.mdx +150 -0
- package/docs/guide/code.mdx +169 -0
- package/docs/guide/locating.mdx +136 -0
- package/docs/guide/setup-teardown.mdx +161 -0
- package/docs/guide/variables.mdx +218 -0
- package/docs/guide/waiting.mdx +199 -0
- package/docs/importing/csv.mdx +196 -0
- package/docs/importing/gherkin.mdx +142 -0
- package/docs/importing/jira.mdx +172 -0
- package/docs/importing/testrail.mdx +161 -0
- package/docs/integrations/electron.mdx +152 -0
- package/docs/integrations/netlify.mdx +98 -0
- package/docs/integrations/vercel.mdx +177 -0
- package/docs/interactive/assert.mdx +51 -0
- package/docs/interactive/generate.mdx +41 -0
- package/docs/interactive/run.mdx +36 -0
- package/docs/interactive/save.mdx +53 -0
- package/docs/interactive/undo.mdx +47 -0
- package/docs/issues.mdx +9 -0
- package/docs/overview/comparison.mdx +82 -0
- package/docs/overview/faq.mdx +122 -0
- package/docs/overview/quickstart.mdx +66 -0
- package/docs/overview/what-is-testdriver.mdx +73 -0
- package/docs/quickstart.mdx +66 -0
- package/docs/reference/commands/scroll.mdx +0 -0
- package/docs/reference/interactive/assert.mdx +0 -0
- package/docs/security/action.mdx +62 -0
- package/docs/security/agent.mdx +62 -0
- package/docs/security/dashboard.mdx +0 -0
- package/docs/security/platform.mdx +54 -0
- package/docs/tutorials/advanced-test.mdx +79 -0
- package/docs/tutorials/basic-test.mdx +41 -0
- package/electron/icon.png +0 -0
- package/electron/overlay.html +7 -3
- package/electron/overlay.js +75 -15
- package/electron/tray-buffered.png +0 -0
- package/electron/tray.png +0 -0
- package/index.js +75 -34
- package/lib/commander.js +22 -1
- package/lib/commands.js +87 -19
- package/lib/config.js +10 -1
- package/lib/focus-application.js +30 -23
- package/lib/generator.js +58 -7
- package/lib/init.js +48 -19
- package/lib/ipc.js +50 -0
- package/lib/logger.js +19 -6
- package/lib/overlay.js +82 -36
- package/lib/parser.js +9 -7
- package/lib/resources/prerun.yaml +17 -0
- package/lib/sandbox.js +2 -3
- package/lib/sdk.js +0 -2
- package/lib/session.js +3 -1
- package/lib/speak.js +0 -2
- package/lib/subimage/opencv.js +0 -4
- package/lib/system.js +56 -39
- package/lib/upload-secrets.js +65 -0
- package/lib/validation.js +175 -0
- package/package.json +2 -1
- package/postinstall.js +0 -24
- package/lib/websockets.js +0 -85
- package/test.md +0 -8
- package/test.yml +0 -18
@@ -0,0 +1,98 @@
+---
+title: "Netlify"
+---
+
+# Testing Netlify Deployments with TestDriver.ai GitHub Action
+
+This guide explains how to integrate **TestDriver.ai** with **Netlify deployments** using the **GitHub Actions workflow**. By combining these tools, you can automatically test your Netlify preview deployments or production builds to ensure they meet your quality standards before merging or releasing.
+
+---
+
+## Workflow Overview
+
+1. **Trigger Netlify Deployment**: Use Netlify's GitHub integration to deploy your application on every pull request or push to the main branch.
+2. **Run Tests on the Deployment URL**: Use the TestDriver.ai GitHub Action to test the deployed application using the Netlify deployment URL.
+3. **Report Results**: View test results in the GitHub Actions dashboard or as comments on the pull request.
+
+---
+
+## Prerequisites
+
+1. **Netlify GitHub Integration**: Ensure your repository is connected to Netlify for automatic deployments.
+2. **TestDriver.ai API Key**: Store your API key as a GitHub secret (e.g., `TESTDRIVER_API_KEY`).
+3. **Netlify Deployment URL**: Use the `DEPLOY_URL` environment variable provided by Netlify to access the deployment.
+
+---
+
+## GitHub Actions Workflow
+
+Here’s a complete workflow to test Netlify deployments with TestDriver.ai:
+
+### Workflow File: `.github/workflows/test-netlify.yml`
+
+```yaml
+name: Test Netlify Deployment with TestDriver.ai
+
+on:
+  pull_request:
+  push:
+    branches:
+      - main
+  workflow_dispatch:
+
+jobs:
+  test-netlify:
+    name: Test Netlify Deployment
+    runs-on: ubuntu-latest
+    steps:
+      - name: Check out repository
+        uses: actions/checkout@v2
+
+      - name: Wait for Netlify Deployment
+        id: netlify
+        run: |
+          echo "Waiting for Netlify deployment..."
+          echo "Deployment URL: $DEPLOY_URL"
+          if [ -z "$DEPLOY_URL" ]; then
+            echo "Netlify deployment URL not found. Exiting."
+            exit 1
+          fi
+
+      - name: Run Tests with TestDriver.ai
+        uses: testdriverai/action@main
+        with:
+          key: ${{ secrets.TESTDRIVER_API_KEY }}
+          prompt: |
+            1. Open the deployment URL: $DEPLOY_URL
+            2. Verify the homepage loads correctly
+            3. Click the "Sign Up" button
+            4. Fill out the registration form
+            5. Submit the form and verify the success message
+        env:
+          DEPLOY_URL: $DEPLOY_URL
+          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+          FORCE_COLOR: "3"
+```
+
+---
+
+## Workflow Steps Explained
+
+### 1. **Wait for Netlify Deployment**
+Netlify automatically sets the `DEPLOY_URL` environment variable for each deployment. This step ensures the deployment URL is available before running tests.
+
+```yaml
+- name: Wait for Netlify Deployment
+  id: netlify
+  run: |
+    echo "Waiting for Netlify deployment..."
+    echo "Deployment URL: $DEPLOY_URL"
+    if [ -z "$DEPLOY_URL" ]; then
+      echo "Netlify deployment URL not found. Exiting."
+      exit 1
+    fi
+```
+
+---
+
+### 2. **Run Tests with TestDriver.ai**
+The TestDriver.ai GitHub Action runs tests on the deployed application using the deployment URL. The `prompt` field specifies the test steps to execute.
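The Netlify guide's second step refers to the final block of the workflow file above; it is excerpted here for reference (no inputs or variables beyond those already shown in the guide are assumed):

```yaml
# Excerpt of the TestDriver.ai step from the workflow above.
- name: Run Tests with TestDriver.ai
  uses: testdriverai/action@main
  with:
    key: ${{ secrets.TESTDRIVER_API_KEY }}
    prompt: |
      1. Open the deployment URL: $DEPLOY_URL
      2. Verify the homepage loads correctly
      3. Click the "Sign Up" button
      4. Fill out the registration form
      5. Submit the form and verify the success message
  env:
    DEPLOY_URL: $DEPLOY_URL
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
    FORCE_COLOR: "3"
```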
@@ -0,0 +1,177 @@
+---
+title: "Vercel"
+---
+
+# Testing Vercel Deployments with TestDriver.ai GitHub Action
+
+This guide explains how to integrate **TestDriver.ai** with **Vercel deployments** using the **GitHub Actions workflow**. By combining these tools, you can automatically test your Vercel preview deployments or production builds to ensure they meet your quality standards before merging or releasing.
+
+---
+
+## Workflow Overview
+
+1. **Trigger Vercel Deployment**: Use Vercel's GitHub integration to deploy your application on every pull request or push to the main branch.
+2. **Run Tests on the Deployment URL**: Use the TestDriver.ai GitHub Action to test the deployed application using the Vercel deployment URL.
+3. **Report Results**: View test results in the GitHub Actions dashboard or as comments on the pull request.
+
+---
+
+## Prerequisites
+
+1. **Vercel GitHub Integration**: Ensure your repository is connected to Vercel for automatic deployments.
+2. **TestDriver.ai API Key**: Store your API key as a GitHub secret (e.g., `TESTDRIVER_API_KEY`).
+3. **Vercel Deployment URL**: Use the `VERCEL_URL` environment variable provided by Vercel to access the deployment.
+
+---
+
+## GitHub Actions Workflow
+
+Here’s a complete workflow to test Vercel deployments with TestDriver.ai:
+
+### Workflow File: `.github/workflows/test-vercel.yml`
+
+```yaml
+name: Test Vercel Deployment with TestDriver.ai
+
+on:
+  pull_request:
+  push:
+    branches:
+      - main
+  workflow_dispatch:
+
+jobs:
+  test-vercel:
+    name: Test Vercel Deployment
+    runs-on: ubuntu-latest
+    steps:
+      - name: Check out repository
+        uses: actions/checkout@v2
+
+      - name: Wait for Vercel Deployment
+        id: vercel
+        uses: amondnet/vercel-action@v20
+        with:
+          vercel-token: ${{ secrets.VERCEL_TOKEN }}
+          github-token: ${{ secrets.GITHUB_TOKEN }}
+          vercel-args: '--prod' # Optional: Use '--prod' for production builds
+        env:
+          VERCEL_ORG_ID: ${{ secrets.VERCEL_ORG_ID }}
+          VERCEL_PROJECT_ID: ${{ secrets.VERCEL_PROJECT_ID }}
+
+      - name: Run Tests with TestDriver.ai
+        uses: testdriverai/action@main
+        with:
+          key: ${{ secrets.TESTDRIVER_API_KEY }}
+          prompt: |
+            1. Open the deployment URL: ${{ steps.vercel.outputs.url }}
+            2. Verify the homepage loads correctly
+            3. Click the "Sign Up" button
+            4. Fill out the registration form
+            5. Submit the form and verify the success message
+        env:
+          DEPLOYMENT_URL: ${{ steps.vercel.outputs.url }}
+          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+          FORCE_COLOR: "3"
+```
+
+---
+
+## Workflow Steps Explained
+
+### 1. **Wait for Vercel Deployment**
+The `amondnet/vercel-action` waits for the Vercel deployment to complete and retrieves the deployment URL. This URL is stored in the `steps.vercel.outputs.url` variable.
+
+```yaml
+- name: Wait for Vercel Deployment
+  id: vercel
+  uses: amondnet/vercel-action@v20
+  with:
+    vercel-token: ${{ secrets.VERCEL_TOKEN }}
+    github-token: ${{ secrets.GITHUB_TOKEN }}
+    vercel-args: '--prod'
+  env:
+    VERCEL_ORG_ID: ${{ secrets.VERCEL_ORG_ID }}
+    VERCEL_PROJECT_ID: ${{ secrets.VERCEL_PROJECT_ID }}
+```
+
+---
+
+### 2. **Run Tests with TestDriver.ai**
+The TestDriver.ai GitHub Action runs tests on the deployed application using the deployment URL. The `prompt` field specifies the test steps to execute.
+
+```yaml
+- name: Run Tests with TestDriver.ai
+  uses: testdriverai/action@main
+  with:
+    key: ${{ secrets.TESTDRIVER_API_KEY }}
+    prompt: |
+      1. Open the deployment URL: ${{ steps.vercel.outputs.url }}
+      2. Verify the homepage loads correctly
+      3. Click the "Sign Up" button
+      4. Fill out the registration form
+      5. Submit the form and verify the success message
+  env:
+    DEPLOYMENT_URL: ${{ steps.vercel.outputs.url }}
+    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+    FORCE_COLOR: "3"
+```
+
+---
+
+## Example TestDriver.ai Prompt
+
+The `prompt` field in the TestDriver.ai action specifies the steps to test the Vercel deployment. For example:
+
+```yaml
+prompt: |
+  1. Open the deployment URL: ${{ steps.vercel.outputs.url }}
+  2. Verify the homepage loads correctly
+  3. Click the "Sign Up" button
+  4. Fill out the registration form
+  5. Submit the form and verify the success message
+```
+
+---
+
+## Secrets Configuration
+
+Add the following secrets to your GitHub repository:
+1. **`TESTDRIVER_API_KEY`**: Your TestDriver.ai API key.
+2. **`VERCEL_TOKEN`**: Your Vercel API token.
+3. **`VERCEL_ORG_ID`**: Your Vercel organization ID.
+4. **`VERCEL_PROJECT_ID`**: Your Vercel project ID.
+
+---
+
+## Benefits of This Workflow
+
+1. **Automated Deployment Testing**: Automatically test every Vercel deployment, including preview and production builds.
+2. **Early Issue Detection**: Catch issues in pull requests before merging.
+3. **Detailed Feedback**: View test results directly in the GitHub Actions dashboard.
+4. **Seamless Integration**: Combine Vercel's deployment capabilities with TestDriver.ai's testing power.
+
+---
+
+## Example Output
+
+### GitHub Actions Dashboard:
+- ✅ **Test Vercel Deployment**: All tests passed.
+- ❌ **Test Vercel Deployment**: 1 test failed. View logs for details.
+
+### TestDriver.ai Logs:
+- **Step 1**: Opened the deployment URL.
+- **Step 2**: Verified the homepage loaded correctly.
+- **Step 3**: Clicked the "Sign Up" button.
+- **Step 4**: Filled out the registration form.
+- **Step 5**: Failed to verify the success message.
+
+---
+
+By integrating TestDriver.ai with Vercel deployments, you can ensure that every deployment is thoroughly tested, reducing the risk of bugs reaching production and improving the overall quality of your application.
@@ -0,0 +1,51 @@
+---
+title: "/assert"
+---
+
+# Command: `/assert`
+
+## Description
+The `/assert` command ensures that a specific condition is true within your test. This is useful for verifying that tasks were completed successfully, just as a user would observe.
+
+## Usage
+```bash
+/assert <criteria>
+```
+
+## Behavior
+- The `/assert` command generates an assertion based on the specified criteria.
+- TestDriver.ai takes a screenshot and uses it to verify that the condition described in the `expect` field is true.
+- If the condition is not met, the test will fail and exit immediately.
+
+## Example Usage
+
+### Basic Assertion
+```bash
+/assert No error message is displayed
+```
+This generates the following command:
+```yaml
+- command: assert
+  expect: There is no error message
+```
+
+### Asynchronous Assertion
+To speed up tests, use `async: true` to allow the test to continue without waiting for the assertion to pass:
+```yaml
+- command: assert
+  expect: There is no error message
+  async: true
+```
+
+## Protips
+- Use assertions sparingly to avoid slowing down your tests.
+- Combine `async: true` with assertions to improve test performance while still validating critical conditions.
+- Ensure the `expect` field clearly describes the condition to avoid ambiguity.
+
+## Gotchas
+- If the condition in `expect` is not met, the test will fail and exit immediately.
+- Overusing assertions can make tests slower and harder to maintain.
+
+## Notes
+- The `/assert` command is ideal for validating key checkpoints in your test workflow.
+- Use this command to ensure that critical tasks, such as error-free execution or successful navigation, are completed as expected.
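As a sketch of where generated assertions land in a saved test plan: the step layout below follows the `version`/`steps`/`commands` format shown by the `/save` command elsewhere in these docs, while the prompt text and `expect` strings are hypothetical examples, not output from the tool.

```yaml
# Hypothetical test plan fragment combining a click with two assertions.
version: 4.0.0
steps:
  - prompt: log in and check for errors
    commands:
      - command: hover-text
        text: Log in
        action: click
      # Blocking assertion: the test fails and exits here if it is not met.
      - command: assert
        expect: The account dashboard is visible
      # Non-blocking assertion: the test continues while this is verified.
      - command: assert
        expect: There is no error message
        async: true
```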
@@ -0,0 +1,41 @@
+---
+title: "/generate"
+---
+
+# Command: `/generate`
+
+## Description
+The `/generate` command is an experimental feature that instructs TestDriver.ai to create its own exploratory prompts. This command is used in the "Generate a Test Suite" demo and is designed to help automate the creation of exploratory test cases.
+
+## Usage
+```bash
+/generate
+```
+
+## Behavior
+- The `/generate` command analyzes the display and generates exploratory prompts for testing.
+- Each exploratory test is saved as a simple Markdown file containing a list of steps.
+- TestDriver.ai generates 10 Markdown files every time the `/generate` command is called.
+- The generated files are saved as `./testdriver/generate/*.md`.
+
+## Example Output
+Here’s an example of a generated test file (`test-search-function.md`):
+```markdown
+1. Click on the search icon.
+2. Type "real-time chat" into the search bar.
+3. Assert that search results are relevant and displayed.
+```
+
+## Protips
+- Use `/generate` to quickly create exploratory tests for regression testing or feature validation.
+- Combine `/generate` with the GitHub action to generate and run regression tests in parallel.
+- Review the generated Markdown files and convert them into YAML test scripts for reuse.
+
+## Gotchas
+- The `/generate` command is experimental and may not always produce perfect test cases. Review and refine the generated tests as needed.
+- Ensure the `./testdriver/generate/` directory exists and has write permissions.
+
+## Notes
+- The `/generate` command is ideal for automating the creation of exploratory test cases and generating regression tests.
+- Generated tests can be merged into a regression test suite for continuous testing and validation.
@@ -0,0 +1,36 @@
+---
+title: "/run"
+---
+
+## Description
+The `/run` command is used to execute a test plan from a specified file. This command performs each step defined in the test plan and outputs the results.
+
+## Usage
+```bash
+/run <file>
+```
+
+## Example Usage
+```bash
+testdriverai
+> /run helloworld.yml
+```
+This command runs the `helloworld.yml` test plan, executing each command in the file sequentially.
+
+## Behavior
+- TestDriver will execute the test plan, performing each command as defined in the file.
+- If the test completes successfully, the program will exit with code `0`.
+- If any failures occur during the test, the program will output the errors and exit with code `1`.
+
+## Protips
+- Ensure the test file path is correct and accessible before running the command.
+- Use descriptive filenames for your test plans to make them easier to identify.
+- Combine `/run` with debugging tools to troubleshoot failing tests.
+
+## Gotchas
+- This command will exit the program upon execution, so ensure all necessary setup is complete before running it.
+- Any errors in the test plan (e.g., invalid commands or missing arguments) will cause the test to fail.
+
+## Notes
+- The `/run` command is ideal for executing pre-created test plans in an interactive session.
+- Use this command to validate and debug your test plans during development.
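A minimal sketch of what a test plan like `helloworld.yml` might contain, assuming the `version`/`steps`/`commands` layout shown by the `/save` command elsewhere in these docs; the step contents here are hypothetical, not taken from the package:

```yaml
# helloworld.yml (hypothetical contents)
version: 4.0.0
steps:
  - prompt: search google for hello world
    commands:
      - command: focus-application
        name: Google Chrome
      - command: hover-text
        text: Search Google or type a URL
        description: main google search
        action: click
      - command: type
        text: hello world
      - command: press-keys
        keys:
          - enter
      - command: assert
        expect: Search results for "hello world" are displayed
```

Running `/run helloworld.yml` executes these commands in order, exiting `0` on success or `1` on failure as described above.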
@@ -0,0 +1,53 @@
+---
+title: "/save"
+---
+
+## Description
+The `/save` command saves the current state of the test script to a file. This command generates a YAML file containing the history of executed commands and tasks, allowing you to reuse or modify the test script later.
+
+## Usage
+```bash
+/save
+```
+
+## Example Usage
+```bash
+testdriverai
+> /save
+
+saving...
+
+Current test script:
+
+version: 4.0.0
+steps:
+  - prompt: navigate to fiber.google.com
+    commands:
+      - command: focus-application
+        name: Google Chrome
+      - command: hover-text
+        text: Search Google or type a URL
+        description: main google search
+        action: click
+      - command: type
+        text: fiber.google.com
+      - command: press-keys
+        keys:
+          - enter
+```
+
+## Behavior
+- The `/save` command generates a YAML file with the current test script, including all executed steps and commands.
+- The file can be used as a reusable test plan for future executions.
+
+## Protips
+- Use `/save` frequently during interactive sessions to preserve your progress and avoid losing work.
+- Combine `/save` with `/run` to quickly test and iterate on your scripts.
+
+## Gotchas
+- Ensure you have write permissions in the directory where the file will be saved.
+- The saved script reflects the current state of the session. Any unexecuted commands will not be included.
+
+## Notes
+- The `/save` command is ideal for creating reusable test scripts from interactive sessions.
+- Use this command to document and share your test workflows with your team.
@@ -0,0 +1,47 @@
+---
+title: "/undo"
+---
+
+## Description
+The `/undo` command removes the last generated command or step from the current test script. This is useful for quickly correcting mistakes or removing unintended actions during an interactive session.
+
+## Usage
+```bash
+/undo
+```
+
+## Behavior
+- The `/undo` command removes the most recently added command or step from the test script.
+- You can call `/undo` multiple times to remove multiple commands or steps.
+
+## Example Usage
+
+### Before `/undo`
+```yaml
+- step:
+  - command: scroll-until-text
+    text: Add to cart
+- step:
+  - command: hover-text
+    text: Add to cart
+    action: click
+```
+
+### After `/undo`
+```yaml
+- step:
+  - command: scroll-until-text
+    text: Add to cart
+```
+
+## Protips
+- Use `/undo` immediately after generating a command to quickly fix mistakes or refine your test script.
+- Combine `/undo` with `/save` to iteratively build and refine your test scripts.
+
+## Gotchas
+- The `/undo` command only removes the last generated command or step. If you need to undo multiple actions, call `/undo` repeatedly.
+- Once a session is closed, you cannot undo previously saved commands.
+
+## Notes
+- The `/undo` command is ideal for interactive sessions where you are experimenting with test generation.
+- This command helps maintain a clean and accurate test script by allowing quick corrections.
package/docs/issues.mdx ADDED
@@ -0,0 +1,82 @@
+
+### Application Support
+
+TestDriver operates a full desktop environment, so it can run any application.
+
+| Application       | TestDriver | Playwright | Selenium |
+|:------------------|:-----------|:-----------|:---------|
+| Web Apps          | ✅         | ✅         | ✅       |
+| Desktop Apps      | ✅         |            |          |
+| Chrome Extensions | ✅         |            |          |
+
+### Testing Features
+
+TestDriver is AI first.
+
+| Feature               | TestDriver | Playwright | Selenium |
+|:----------------------|:-----------|:-----------|:---------|
+| Test Generation       | ✅         |            |          |
+| Adaptive Testing      | ✅         |            |          |
+| Visual Assertions     | ✅         |            |          |
+| Self Healing          | ✅         |            |          |
+| Application Switching | ✅         |            |          |
+| GitHub Actions        | ✅         | ✅         |          |
+| Team Dashboard        | ✅         |            |          |
+| Team Collaboration    | ✅         |            |          |
+
+### Test Coverage
+
+TestDriver has more coverage than selector-based frameworks.
+
+| Feature            | TestDriver | Playwright | Selenium |
+|:-------------------|:-----------|:-----------|:---------|
+| Browser Viewport   | ✅         | ✅         | ✅       |
+| Browser App        | ✅         |            |          |
+| Operating System   | ✅         |            |          |
+| PDFs               | ✅         |            |          |
+| File System        | ✅         |            |          |
+| Push Notifications | ✅         |            |          |
+| Image Content      | ✅         |            |          |
+| Video Content      | ✅         |            |          |
+| `<iframe>`         | ✅         |            |          |
+| `<canvas>`         | ✅         |            |          |
+| `<video>`          | ✅         |            |          |
+
+### Debugging Features
+
+Debugging features are powered by [Dashcam.io](https://dashcam.io).
+
+| Feature            | TestDriver | Playwright | Selenium |
+|:-------------------|:-----------|:-----------|:---------|
+| AI Summary         | ✅         |            |          |
+| Video Replay       | ✅         | ✅         |          |
+| Browser Logs       | ✅         | ✅         |          |
+| Desktop Logs       | ✅         |            |          |
+| Network Requests   | ✅         | ✅         |          |
+| Team Dashboard     | ✅         |            |          |
+| Team Collaboration | ✅         |            |          |
+
+### Web Browser Support
+
+TestDriver is browser agnostic and supports any version of any browser.
+
+| Feature | TestDriver | Playwright | Selenium |
+|:--------|:-----------|:-----------|:---------|
+| Chrome  | ✅         | ✅         | ✅       |
+| Firefox | ✅         | ✅         | ✅       |
+| Webkit  | ✅         | ✅         | ✅       |
+| IE      | ✅         |            | ✅       |
+| Edge    | ✅         | ✅         | ✅       |
+| Opera   | ✅         |            | ✅       |
+| Safari  | ✅         |            | ✅       |
+
+### Operating System Support
+
+TestDriver currently supports Mac and Windows!
+
+| Feature | TestDriver | Playwright | Selenium |
+|:--------|:-----------|:-----------|:---------|
+| Windows | ✅         | ✅         | ✅       |
+| Mac     | ✅         | ✅         | ✅       |
+| Linux   |            | ✅         | ✅       |