testdriverai 6.0.16-canary.f0eefe0.0 → 6.0.16
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/agent/events.js +11 -1
- package/agent/index.js +9 -13
- package/agent/lib/censorship.js +70 -0
- package/debugger/index.html +2 -0
- package/docs/_scripts/link-replacer.js +164 -0
- package/docs/account/dashboard.mdx +5 -2
- package/docs/account/pricing.mdx +12 -5
- package/docs/account/projects.mdx +3 -2
- package/docs/account/team.mdx +7 -3
- package/docs/apps/chrome-extensions.mdx +8 -2
- package/docs/apps/desktop-apps.mdx +5 -5
- package/docs/apps/mobile-apps.mdx +7 -2
- package/docs/cli/overview.mdx +6 -6
- package/docs/commands/assert.mdx +9 -6
- package/docs/commands/hover-image.mdx +9 -6
- package/docs/commands/if.mdx +11 -6
- package/docs/commands/match-image.mdx +13 -6
- package/docs/commands/remember.mdx +10 -5
- package/docs/commands/run.mdx +8 -2
- package/docs/commands/type.mdx +11 -5
- package/docs/commands/wait-for-image.mdx +9 -3
- package/docs/commands/wait.mdx +9 -4
- package/docs/features/generation.mdx +1 -1
- package/docs/features/reusable-snippets.mdx +8 -5
- package/docs/features/selectorless.mdx +13 -6
- package/docs/features/visual-assertions.mdx +53 -39
- package/docs/getting-started/editing.mdx +1 -1
- package/docs/guide/assertions.mdx +12 -17
- package/docs/guide/code.mdx +2 -2
- package/docs/guide/lifecycle.mdx +3 -3
- package/docs/guide/locating.mdx +13 -6
- package/docs/guide/waiting.mdx +41 -32
- package/docs/interactive/assert.mdx +11 -0
- package/docs/interactive/generate.mdx +11 -0
- package/docs/interactive/run.mdx +8 -0
- package/docs/interactive/save.mdx +7 -0
- package/docs/interactive/undo.mdx +11 -3
- package/docs/overview/comparison.mdx +50 -49
- package/docs/overview/faq.mdx +40 -2
- package/docs/overview/performance.mdx +25 -25
- package/docs/overview/what-is-testdriver.mdx +24 -22
- package/docs/scenarios/ai-chatbot.mdx +6 -5
- package/docs/scenarios/cookie-banner.mdx +4 -1
- package/docs/scenarios/file-upload.mdx +7 -4
- package/docs/scenarios/form-filling.mdx +4 -1
- package/docs/scenarios/pdf-generation.mdx +5 -2
- package/docs/scenarios/spell-check.mdx +3 -3
- package/docs/security/agent.mdx +14 -2
- package/docs/security/platform.mdx +17 -5
- package/docs/snippets/calendar-link.mdx +4 -1
- package/docs/snippets/comments.mdx +1 -1
- package/docs/snippets/gitignore-warning.mdx +6 -3
- package/docs/snippets/lifecycle-warning.mdx +6 -1
- package/docs/snippets/tests/assert-replay.mdx +5 -1
- package/docs/snippets/tests/exec-js-replay.mdx +5 -1
- package/docs/snippets/tests/exec-shell-replay.mdx +5 -1
- package/docs/snippets/tests/hover-image-replay.mdx +5 -1
- package/docs/snippets/tests/hover-image-yaml.mdx +2 -2
- package/docs/snippets/tests/hover-text-replay.mdx +5 -1
- package/docs/snippets/tests/hover-text-with-description-replay.mdx +5 -2
- package/docs/snippets/tests/hover-text-with-description-yaml.mdx +2 -2
- package/docs/snippets/tests/match-image-replay.mdx +5 -1
- package/docs/snippets/tests/match-image-yaml.mdx +2 -2
- package/docs/snippets/tests/press-keys-replay.mdx +5 -1
- package/docs/snippets/tests/prompt-replay.mdx +5 -1
- package/docs/snippets/tests/prompt-yaml.mdx +1 -1
- package/docs/snippets/tests/remember-replay.mdx +5 -1
- package/docs/snippets/tests/scroll-replay.mdx +5 -1
- package/docs/snippets/tests/scroll-until-text-replay.mdx +5 -1
- package/docs/snippets/tests/type-repeated-replay.mdx +5 -1
- package/docs/snippets/tests/type-replay.mdx +5 -1
- package/docs/snippets/yml-warning.mdx +4 -1
- package/docs/styles.css +6 -0
- package/interfaces/logger.js +1 -19
- package/package.json +6 -2
- package/styles/Microsoft/HeadingAcronyms.yml +1 -1
- package/styles/Microsoft/Headings.yml +1 -1
- package/styles/Microsoft/Quotes.yml +1 -1
- package/styles/Microsoft/Semicolon.yml +1 -1
- package/styles/Microsoft/SentenceLength.yml +0 -1
- package/styles/Microsoft/Spacing.yml +2 -2
- package/styles/Microsoft/Units.yml +4 -4
- package/styles/Microsoft/Vocab.yml +1 -1
- package/styles/alex/Ablist.yml +58 -29
- package/styles/alex/Gendered.yml +4 -2
- package/styles/alex/Press.yml +2 -1
- package/styles/alex/ProfanityLikely.yml +1284 -1284
- package/styles/alex/ProfanityMaybe.yml +1 -1
- package/styles/alex/ProfanityUnlikely.yml +1 -1
- package/styles/alex/Race.yml +4 -2
- package/styles/alex/Suicide.yml +4 -2
- package/styles/alex/meta.json +1 -1
- package/styles/proselint/AnimalLabels.yml +39 -39
- package/styles/proselint/DenizenLabels.yml +44 -44
- package/styles/proselint/GenderBias.yml +37 -37
- package/styles/proselint/GroupTerms.yml +31 -31
- package/styles/proselint/Hedging.yml +1 -1
- package/styles/proselint/Hyperbole.yml +1 -1
- package/styles/proselint/LGBTTerms.yml +8 -8
- package/styles/proselint/Needless.yml +5 -5
- package/styles/proselint/RASSyndrome.yml +3 -3
- package/styles/proselint/Typography.yml +1 -1
- package/styles/proselint/Uncomparables.yml +5 -5
- package/styles/proselint/meta.json +1 -3
- package/testdriver/acceptance/remember.yaml +1 -1
- package/testdriver/behavior/secrets.yaml +7 -0
- /package/{upload-docs-to-openai.js → docs/_scripts/upload-docs-to-openai.js} +0 -0
package/docs/guide/assertions.mdx
CHANGED

@@ -7,7 +7,7 @@ icon: "square-check"
 
 # Guide: Using assertions in TestDriver
 
-Assertions in TestDriver allow you to validate that your application behaves as expected during a test. By using the `assert` command and visual assertions, you can ensure that specific conditions are met, such as verifying the presence of text, images, or UI elements on the screen.
+Assertions in TestDriver allow you to validate that your application behaves as expected during a test. By using the [`assert`](/commands/assert) command and visual assertions, you can ensure that specific conditions are met, such as verifying the presence of text, images, or UI elements on the screen.
 
 ---
 
@@ -16,15 +16,16 @@ Assertions in TestDriver allow you to validate that your application behaves as
 Assertions are checks that validate whether a specific condition is true. If the condition isn't met, the test will fail, providing feedback on what went wrong.
 
 ### Types of assertions in TestDriver:
+
 1. **Text Assertions**: Verify that specific text is visible on the screen.
 2. **Visual Assertions**: Validate the presence of images, icons, or UI elements.
 3. **Custom Assertions**: Use descriptive conditions to check for specific outcomes.
 
 ---
 
-## How to use the `assert` command
+## How to use the [`assert`](/commands/assert) command
 
-The `assert` command is used to validate conditions during a test. It checks whether the specified expectation is true.
+The [`assert`](/commands/assert) command is used to validate conditions during a test. It checks whether the specified expectation is true.
 
 ### Syntax:
 
@@ -32,10 +33,8 @@ The `assert` command is used to validate conditions during a test. It checks whe
 - command: assert
   expect: <condition to check>
   async: <true|false> # Optional, defaults to false
-
 ```
 
-
 - **`expect`**: The condition to validate (for example, "The login form is displayed").
 - **`async`**: (Optional) If set to `true`, the test will continue running without waiting for the assertion to pass.
 
@@ -48,10 +47,8 @@ The `assert` command is used to validate conditions during a test. It checks whe
 ```yaml
 - command: assert
   expect: The login form is displayed
-
 ```
 
-
 This assertion checks if the login form is visible on the screen.
 
 ---
 
@@ -64,10 +61,8 @@ This assertion checks if the login form is visible on the screen.
 - command: assert
   expect: The success message is displayed
   async: true
-
 ```
 
-
 This assertion runs asynchronously, allowing the test to continue without waiting for the success message to appear.
 
 ---
 
@@ -84,10 +79,8 @@ Visual assertions validate the presence of images, icons, or UI elements on the
 - command: hover-image
   description: Company logo in the top-left corner
   action: hover
-
 ```
 
-
 This command hovers over the company logo to ensure it's present on the screen.
 
 ---
 
@@ -101,10 +94,8 @@ This command hovers over the company logo to ensure it's present on the screen.
   text: Submit
   description: Blue button with the text 'Submit' at the bottom of the form
   action: hover
-
 ```
 
-
 This command hovers over the "Submit" button to confirm its presence.
 
 ---
 
@@ -155,18 +146,18 @@ steps:
   commands:
     - command: assert
       expect: The dashboard is displayed
-
 ```
 
-
 ---
 
 ## Debugging assertions
 
 1. **Review Error Messages**:
+
    - If an assertion fails, TestDriver provides detailed error messages to help identify the issue.
 
 2. **Use Visual Feedback**:
+
    - Leverage screenshots and visual feedback to verify the state of the application during the assertion.
 
 3. **Refine Descriptions**:
 
@@ -177,15 +168,19 @@ steps:
 ## Best practices for assertions
 
 1. **Be Specific**:
+
    - Use clear and concise conditions for assertions (for example, "The login form is displayed").
 
 2. **Use Visual Assertions for Non-Text Elements**:
-   - Validate images, icons, and other UI elements using `hover-image` or `hover-text`.
+
+   - Validate images, icons, and other UI elements using [`hover-image`](/commands/hover-image) or [`hover-text`](/commands/hover-text).
 
 3. **Combine Assertions with Navigation**:
+
    - Place assertions after navigation or interaction steps to validate the application's state.
 
 4. **Leverage Async Assertions**:
+
    - Use `async: true` for non-blocking checks, especially for dynamic content.
 
 5. **Test Incrementally**:
 
@@ -193,4 +188,4 @@ steps:
 
 ---
 
-By using the `assert` command and visual assertions effectively, you can create robust and reliable tests that ensure your application behaves as expected.
+By using the [`assert`](/commands/assert) command and visual assertions effectively, you can create robust and reliable tests that ensure your application behaves as expected.
package/docs/guide/code.mdx
CHANGED

@@ -5,7 +5,7 @@ description: "Learn how to integrate custom Node.js scripts into your TestDriver
 icon: "code"
 ---
 
-TestDriver allows you to execute custom **Node.js** scripts and shell scripts within your test workflows using the `exec` command. This feature, introduced in version `5.1.0`, enables you to integrate custom logic, such as generating one-time passwords (OTPs), hitting APIs, or performing other dynamic operations, directly into your tests.
+TestDriver allows you to execute custom **Node.js** scripts and shell scripts within your test workflows using the [`exec`](/commands/exec) command. This feature, introduced in version `5.1.0`, enables you to integrate custom logic, such as generating one-time passwords (OTPs), hitting APIs, or performing other dynamic operations, directly into your tests.
 
 ## Key features
 
@@ -50,7 +50,7 @@ steps:
 
 ## Additional details
 
-- The `exec` command now takes a `lang` argument and supports different operating systems (`linux`, `mac`, and `windows`).
+- The [`exec`](/commands/exec) command now takes a `lang` argument and supports different operating systems (`linux`, `mac`, and `windows`).
 - Supported `lang` values are `js` or `shell`:
   - `js` code is executed in a Node.js VM on the host machine (your computer).
   - `shell` code is executed in the shell on the runner.
package/docs/guide/lifecycle.mdx
CHANGED

@@ -159,8 +159,8 @@ Each lifecycle file should have a specific purpose:
 
 ### Use appropriate commands
 
-- Use `exec` with `lang: pwsh` for system commands
-- Use TestDriver commands (`wait-for-text`, `hover-text`, etc.) for UI interactions
+- Use [`exec`](/commands/exec) with `lang: pwsh` for system commands
+- Use TestDriver commands ([`wait-for-text`](/commands/wait-for-text), [`hover-text`](/commands/hover-text), etc.) for UI interactions
 - Include appropriate timeouts for reliability
 
 ### Error handling
 
@@ -224,7 +224,7 @@ When TestDriver runs, lifecycle files execute in this order:
 
 ### Commands failing
 
-- Check PowerShell syntax for `exec` commands
+- Check PowerShell syntax for [`exec`](/commands/exec) commands
 - Verify file paths exist on the target system
 - Add error handling and logging
 - Use appropriate timeouts for operations
package/docs/guide/locating.mdx
CHANGED

@@ -12,14 +12,17 @@ TestDriver uses **visual understanding** to locate elements on the screen. Unlik
 ## Key principles for locating elements
 
 1. **Describe the Element Visually**:
+
    - Focus on the **appearance** of the element, not its behavior or function.
    - Mention unique visual traits such as **text**, **color**, **size**, **position**, or **icon**.
 
 2. **Avoid Behavioral Descriptions**:
+
    - Don't describe what the element does (for example, "button that submits the form").
    - Instead, describe how it looks (for example, "blue button with the text 'Submit' in the bottom-right corner").
 
 3. **Use Unique Identifiers**:
+
    - If the element has visible text, include it in the description.
    - For icons or images, describe their shape, color, or associated label.
 
@@ -32,22 +35,27 @@ TestDriver uses **visual understanding** to locate elements on the screen. Unlik
 ## Examples of effective descriptions
 
 ### 1. Buttons
+
 - **Good**: "Blue button with the text 'Sign In' in the top-right corner."
 - **Bad**: "Button that logs the user in."
 
 ### 2. Links
+
 - **Good**: "Hyperlink with the text 'Learn More' in the footer."
 - **Bad**: "Link that navigates to the About page."
 
 ### 3. Icons
+
 - **Good**: "Magnifying glass icon next to the search bar."
 - **Bad**: "Search icon that opens the search feature."
 
 ### 4. Input fields
+
 - **Good**: "White input box labeled 'Email Address' above the password field."
 - **Bad**: "Field where the user enters their email."
 
 ### 5. Images
+
 - **Good**: "Company logo in the top-left corner, a blue circle with white text."
 - **Bad**: "Logo that redirects to the homepage."
 
@@ -64,11 +72,8 @@ TestDriver uses **visual understanding** to locate elements on the screen. Unlik
   text: Sign In
   description: Blue button with the text 'Sign In' in the top-right corner
   action: click
-
-
 ```
 
-
 ---
 
 ### Example: Locating an icon
 
@@ -92,20 +97,19 @@ TestDriver uses **visual understanding** to locate elements on the screen. Unlik
   text: Learn More
   description: Hyperlink with the text 'Learn More' in the footer
   action: click
-
-
 ```
 
-
 ---
 
 ## Debugging element detection
 
 1. **Run the Test**:
+
    - Execute the test using TestDriver.
    - TestDriver will draw **yellow boxes** around detected elements.
 
 2. **Verify the Correct Element**:
+
    - Ensure the yellow box highlights the intended element.
    - If the wrong element is highlighted, refine your description.
 
@@ -118,12 +122,15 @@ TestDriver uses **visual understanding** to locate elements on the screen. Unlik
 ## Best practices
 
 1. **Be Specific**:
+
    - Include as many unique visual traits as possible to differentiate the element from others.
 
 2. **Test Incrementally**:
+
    - Run tests after adding or modifying commands to verify element detection.
 
 3. **Use Visual Feedback**:
+
    - Leverage the yellow boxes drawn by TestDriver to confirm the correct element is being targeted.
 
 4. **Avoid Overloading Descriptions**:
package/docs/guide/waiting.mdx
CHANGED

@@ -7,44 +7,45 @@ icon: "clock"
 
 # Waiting in TestDriver: Ensuring stability and reducing flakiness
 
-Waiting is a critical feature in TestDriver that ensures tests are stable and reliable, even in dynamic or slow-loading environments.
+Waiting is a critical feature in TestDriver that ensures tests are stable and reliable, even in dynamic or slow-loading environments.
 
 - Automatically adjusts to varying load times, reducing the need for hardcoded delays.
 - Ensures that tests wait for dynamic elements to appear, reducing false negatives caused by incomplete rendering.
 - By waiting for the screen and network to stabilize, `redraw` minimizes the risk of interacting with incomplete or incorrect elements.
 
-
 ## Summary of waiting features
 
-| **Feature**
-
-| **`redraw`(automatic)**
-
-
+| **Feature**                                      | **Description**                                                                |
+| ------------------------------------------------ | ------------------------------------------------------------------------------ |
+| **`redraw`(automatic)**                          | TestDriver automatically waits for UI changes and network activity to resolve. |
+| **[`wait-for-text`](/commands/wait-for-text)**   | Waits for specific text to appear on the screen.                               |
+| **[`wait-for-image`](/commands/wait-for-image)** | Waits for a specific image or visual element to appear on the screen.          |
 
 ## Key waiting features in TestDriver
 
 1. **Automatic Waiting with `redraw`**:
-   - TestDriver automatically waits for the machine before moving to the next step.
-   - This includes waiting for:
-     - UI changes to complete.
-     - Network activity to stabilize (for example, API calls).
-   - Reduces the need for manual waits, making tests faster and less prone to flakiness.
 
-
-
--
+   - TestDriver automatically waits for the machine before moving to the next step.
+   - This includes waiting for:
+     - UI changes to complete.
+     - Network activity to stabilize (for example, API calls).
+   - Reduces the need for manual waits, making tests faster and less prone to flakiness.
+
+2. **[`wait-for-text`](/commands/wait-for-text) Command**:
+
+   - Waits for specific text to appear on the screen.
+   - Useful for validating dynamic content or ensuring that a page has fully loaded before proceeding.
 
-3.
-   - Waits for a specific image or visual element to appear on the screen.
-   - Ideal for verifying the presence of icons, logos, or other graphical elements.
+3. **[`wait-for-image`](/commands/wait-for-image) Command**:
 
+   - Waits for a specific image or visual element to appear on the screen.
+   - Ideal for verifying the presence of icons, logos, or other graphical elements.
 
 ## Commands for explicit waiting
 
-### 1.
+### 1. **[`wait-for-text`](/commands/wait-for-text)**
 
-The `wait-for-text` command pauses the test until the specified text appears on the screen. This is particularly useful for dynamic content that takes time to load.
+The [`wait-for-text`](/commands/wait-for-text) command pauses the test until the specified text appears on the screen. This is particularly useful for dynamic content that takes time to load.
 
 #### Syntax
 
@@ -64,9 +65,9 @@ The `wait-for-text` command pauses the test until the specified text appears on
 
 In this example, the test waits up to 10 seconds for the text "Welcome, Test User!" to appear.
 
-###
+### **[`wait-for-image`](/commands/wait-for-image)**
 
-The `wait-for-image` command pauses the test until the specified image or visual element appears on the screen.
+The [`wait-for-image`](/commands/wait-for-image) command pauses the test until the specified image or visual element appears on the screen.
 
 #### Syntax:
 
@@ -91,31 +92,39 @@ In this example, the test waits up to 8 seconds for the company logo to appear i
 TestDriver's **`redraw` function** is a built-in mechanism that automatically waits for the screen to stabilize before proceeding to the next step. This includes:
 
 1. **UI Changes**:
-
-
+
+   - Waits for animations, transitions, or DOM updates to complete.
+   - Ensures that the screen is fully rendered before interacting with elements.
 
 2. **Network Stabilization**:
-
-
+
+   - Waits for network activity (for example, API calls, AJAX requests) to finish.
+   - Ensures that dynamic content is fully loaded before proceeding.
 
 3. **Screen Stabilization**:
-
+
+   - Continuously monitors the screen for changes and only moves forward when the screen is stable.
 
 ---
 
 ## Best practices for waiting
 
 2. **Leverage Automatic Waiting**:
-
+
+   - Rely on TestDriver's `redraw` function to handle most waiting scenarios automatically.
 
 1. **Use Explicit Waiting for Dynamic Elements**:
-
+
+   - Use [`wait-for-text`](/commands/wait-for-text) or [`wait-for-image`](/commands/wait-for-image) for elements that take time to load.
 
 3. **Avoid Hardcoded Delays**:
-
+
+   - Replace hardcoded `sleep` or [`wait`](/commands/wait) commands with dynamic waiting commands to improve test reliability.
 
 4. **Set Appropriate Timeouts**:
-
+
+   - Use reasonable timeouts for explicit waiting commands to balance reliability and test execution time.
 
 5. **Test Incrementally**:
-
+
+   - Add waiting commands step-by-step to ensure each part of the workflow is stable.
package/docs/interactive/assert.mdx
CHANGED

@@ -8,14 +8,17 @@ icon: "clipboard-check"
 # Command: `/assert`
 
 ## Description
+
 The `/assert` command ensures that a specific condition is true within your test. This is useful for verifying that tasks were completed successfully, just as a user would observe.
 
 ## Usage
+
 ```bash
 /assert <criteria>
 ```
 
 ## Behavior
+
 - The `/assert` command generates an assertion based on the specified criteria.
 - TestDriver takes a screenshot and uses it to verify that the condition described in the `expect` field is true.
 - If the condition isn't met, the test will fail and exit immediately.
 
@@ -23,17 +26,22 @@ The `/assert` command ensures that a specific condition is true within your test
 ## Example usage
 
 ### Basic assertion
+
 ```bash
 /assert No error message is displayed
 ```
+
 This generates the following command:
+
 ```yaml
 - command: assert
   expect: There is no error message
 ```
 
 ### Asynchronous assertion
+
 To speed up tests, use `async: true` to allow the test to continue without waiting for the assertion to pass:
+
 ```yaml
 - command: assert
   expect: There is no error message
 
@@ -41,14 +49,17 @@ To speed up tests, use `async: true` to allow the test to continue without waiti
 ```
 
 ## Protips
+
 - Use assertions sparingly to avoid slowing down your tests.
 - Combine `async: true` with assertions to improve test performance while still validating critical conditions.
 - Ensure the `expect` field clearly describes the condition to avoid ambiguity.
 
 ## Gotchas
+
 - If the condition in `expect` isn't met, the test will fail and exit immediately.
 - Overusing assertions can make tests slower and harder to maintain.
 
 ## Notes
+
 - The `/assert` command is ideal for validating key checkpoints in your test workflow.
 - Use this command to ensure that critical tasks, such as error-free execution or successful navigation, are completed as expected.
package/docs/interactive/generate.mdx
CHANGED

@@ -8,21 +8,26 @@ icon: "wand-sparkles"
 # Command: `/generate`
 
 ## Description
+
 The `/generate` command is an experimental feature that instructs TestDriver to create its own exploratory prompts. This command is used in the "Generate a Test Suite" demo and is designed to help automate the creation of exploratory test cases.
 
 ## Usage
+
 ```bash
 /generate
 ```
 
 ## Behavior
+
 - The `/generate` command analyzes the display and generates exploratory prompts for testing.
 - Each exploratory test is saved as a simple Markdown file containing a list of steps.
 - TestDriver generates 10 Markdown files every time the `/generate` command is called.
 - The generated files are stored in the `./testdriver/generate/*.md` directory.
 
 ## Example output
+
 Here's an example of a generated test file (`test-search-function.md`):
+
 ```markdown
 1. Click on the search icon.
 2. Type "real-time chat" into the search bar.
 
@@ -30,15 +35,21 @@ Here's an example of a generated test file (`test-search-function.md`):
 ```
 
 ## Protips
+
 - Use `/generate` to quickly create exploratory tests for regression testing or feature validation.
 - Combine `/generate` with the GitHub Action to generate and run regression tests in parallel.
 - Review the generated Markdown files and convert them into YAML test scripts for reuse.
 
 ## Gotchas
+
 - The `/generate` command is experimental and may not always produce perfect test cases. Review and refine the generated tests as needed.
 - Ensure the `./testdriver/generate/` directory exists and has write permissions.
 
 ## Notes
+
 - The `/generate` command is ideal for automating the creation of exploratory test cases and generating regression tests.
 - Generated tests can be merged into a regression test suite for continuous testing and validation.
+
+```
+
 ```
package/docs/interactive/run.mdx
CHANGED

@@ -6,34 +6,42 @@ icon: "play"
 ---
 
 ## Description
+
 The `/run` command is used to execute a test file from a specified file. This command performs each step defined in the test file and outputs the results.
 
 ## Usage
+
 ```bash
 /run <file>
 ```
 
 ## Example usage
+
 ```bash
 testdriverai
 > /run helloworld.yaml
 ```
+
 This command runs the `helloworld.yaml` test file, executing each command in the file sequentially.
 
 ## Behavior
+
 - TestDriver will execute the test file, performing each command as defined in the file.
 - If the test completes successfully, the program will exit with code `0`.
 - If any failures occur during the test, the program will output the errors and exit with code `1`.
 
 ## Protips
+
 - Ensure the test file path is correct and accessible before running the command.
 - Use descriptive filenames for your test files to make them easier to identify.
 - Combine `/run` with debugging tools to troubleshoot failing tests.
 
 ## Gotchas
+
 - This command will exit the program upon execution, so ensure all necessary setup is complete before running it.
 - Any errors in the test file (for example, invalid commands or missing arguments) will cause the test to fail.
 
 ## Notes
+
 - The `/run` command is ideal for executing pre-created test files in an interactive session.
 - Use this command to validate and debug your test files during development.
package/docs/interactive/save.mdx
CHANGED

@@ -6,14 +6,17 @@ icon: "floppy-disk"
 ---
 
 ## Description
+
 The `/save` command saves the current state of the test script to a file. This command generates a YAML file containing the history of executed commands and tasks, allowing you to reuse or modify the test script later.
 
 ## Usage
+
 ```bash
 /save
 ```
 
 ## Example usage
+
 ```bash
 testdriverai
 > /save
 
@@ -40,17 +43,21 @@ testdriverai
 ```
 
 ## Behavior
+
 - The `/save` command generates a YAML file with the current test script, including all executed steps and commands.
 - The file can be used as a reusable test file for future executions.
 
 ## Protips
+
 - Use `/save` frequently during interactive sessions to preserve your progress and avoid losing work.
 - Combine `/save` with `/run` to quickly test and iterate on your scripts.
 
 ## Gotchas
+
 - Ensure you have write permissions in the directory where the file will be saved.
 - The saved script reflects the current state of the session. Any unexecuted commands won't be included.
 
 ## Notes
+
 - The `/save` command is ideal for creating reusable test scripts from interactive sessions.
 - Use this command to document and share your test workflows with your team.
package/docs/interactive/undo.mdx
CHANGED

@@ -6,45 +6,53 @@ icon: "arrow-rotate-left"
 ---
 
 ## Description
+
 The `/undo` command removes the last generated command or step from the current test script. This is useful for quickly correcting mistakes or removing unintended actions during an interactive session.
 
 ## Usage
+
 ```bash
 /undo
 ```
 
 ## Behavior
+
 - The `/undo` command removes the most recently added command or step from the test script.
 - You can call `/undo` multiple times to remove multiple commands or steps.
 
 ## Example usage
 
 ### Before `/undo`
+
 ```yaml
-- step:
+- step:
   - command: scroll-until-text
     text: Add to cart
-- step:
+- step:
   - command: hover-text
     text: Add to cart
     action: click
 ```
 
 ### After `/undo`
+
 ```yaml
-- step:
+- step:
   - command: scroll-until-text
     text: Add to cart
 ```
 
 ## Protips
+
 - Use `/undo` immediately after generating a command to quickly fix mistakes or refine your test script.
 - Combine `/undo` with `/save` to iteratively build and refine your test scripts.
 
 ## Gotchas
+
 - The `/undo` command only removes the last generated command or step. If you need to undo multiple actions, call `/undo` repeatedly.
 - Once a session is closed, you can't undo previously saved commands.
 
 ## Notes
+
 - The `/undo` command is ideal for interactive sessions where you are experimenting with test generation.
 - This command helps maintain a clean and accurate test script by allowing quick corrections.