testdriverai 5.6.3 → 5.6.5

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (98)
  1. package/agent.js +17 -15
  2. package/docs/account/dashboard.mdx +3 -6
  3. package/docs/account/enterprise.mdx +16 -15
  4. package/docs/account/pricing.mdx +4 -4
  5. package/docs/account/projects.mdx +2 -2
  6. package/docs/account/team.mdx +3 -3
  7. package/docs/action/browser.mdx +9 -9
  8. package/docs/action/os.mdx +12 -12
  9. package/docs/action/output.mdx +10 -10
  10. package/docs/action/performance.mdx +11 -11
  11. package/docs/{guide/lifecycle.mdx → action/pre-post-scripts.mdx} +87 -69
  12. package/docs/action/secrets.mdx +11 -11
  13. package/docs/action/setup.mdx +8 -8
  14. package/docs/apps/chrome-extensions.mdx +9 -9
  15. package/docs/apps/desktop-apps.mdx +6 -5
  16. package/docs/apps/mobile-apps.mdx +3 -3
  17. package/docs/apps/static-websites.mdx +2 -3
  18. package/docs/bugs/jira.mdx +16 -16
  19. package/docs/cli/overview.mdx +8 -8
  20. package/docs/commands/exec.mdx +152 -10
  21. package/docs/commands/focus-application.mdx +1 -1
  22. package/docs/commands/hover-image.mdx +1 -1
  23. package/docs/commands/hover-text.mdx +2 -2
  24. package/docs/commands/if.mdx +1 -1
  25. package/docs/commands/issues.mdx +8 -6
  26. package/docs/commands/match-image.mdx +2 -2
  27. package/docs/commands/press-keys.mdx +19 -2
  28. package/docs/commands/run.mdx +1 -1
  29. package/docs/commands/scroll-until-image.mdx +1 -1
  30. package/docs/commands/scroll-until-text.mdx +2 -2
  31. package/docs/commands/scroll.mdx +2 -2
  32. package/docs/commands/type.mdx +1 -1
  33. package/docs/commands/wait-for-image.mdx +1 -1
  34. package/docs/commands/wait-for-text.mdx +1 -1
  35. package/docs/commands/wait.mdx +1 -1
  36. package/docs/docs.json +9 -9
  37. package/docs/exporting/playwright.mdx +16 -16
  38. package/docs/features/auto-healing.mdx +5 -5
  39. package/docs/features/cross-platform.mdx +4 -4
  40. package/docs/features/generation.mdx +7 -4
  41. package/docs/features/github.mdx +21 -21
  42. package/docs/features/parallel-testing.mdx +9 -9
  43. package/docs/features/reusable-snippets.mdx +7 -7
  44. package/docs/features/selectorless.mdx +4 -4
  45. package/docs/features/visual-assertions.mdx +26 -23
  46. package/docs/getting-started/ci.mdx +19 -19
  47. package/docs/getting-started/editing.mdx +12 -8
  48. package/docs/getting-started/generating.mdx +4 -5
  49. package/docs/getting-started/running.mdx +3 -3
  50. package/docs/getting-started/setup.mdx +9 -9
  51. package/docs/getting-started/vscode.mdx +63 -85
  52. package/docs/getting-started/writing.mdx +11 -7
  53. package/docs/guide/assertions.mdx +21 -21
  54. package/docs/guide/authentication.mdx +11 -11
  55. package/docs/guide/code.mdx +6 -6
  56. package/docs/guide/environment-variables.mdx +9 -9
  57. package/docs/guide/lifecycle-prerun.mdx +191 -0
  58. package/docs/guide/locating.mdx +15 -18
  59. package/docs/guide/variables.mdx +8 -8
  60. package/docs/guide/waiting.mdx +10 -10
  61. package/docs/importing/csv.mdx +23 -24
  62. package/docs/importing/gherkin.mdx +20 -20
  63. package/docs/importing/jira.mdx +22 -23
  64. package/docs/importing/testrail.mdx +17 -17
  65. package/docs/integrations/electron.mdx +7 -7
  66. package/docs/integrations/netlify.mdx +7 -7
  67. package/docs/integrations/vercel.mdx +13 -13
  68. package/docs/interactive/assert.mdx +3 -3
  69. package/docs/interactive/dry.mdx +7 -7
  70. package/docs/interactive/explore.mdx +9 -9
  71. package/docs/interactive/generate.mdx +2 -2
  72. package/docs/interactive/run.mdx +10 -10
  73. package/docs/interactive/save.mdx +3 -3
  74. package/docs/interactive/undo.mdx +1 -1
  75. package/docs/overview/comparison.mdx +6 -6
  76. package/docs/overview/faq.mdx +2 -2
  77. package/docs/overview/quickstart.mdx +1 -1
  78. package/docs/overview/upgrading.mdx +8 -4
  79. package/docs/overview/what-is-testdriver.mdx +6 -6
  80. package/docs/quickstart.mdx +1 -1
  81. package/docs/scenarios/ai-chatbot.mdx +3 -3
  82. package/docs/scenarios/cookie-banner.mdx +2 -2
  83. package/docs/scenarios/file-upload.mdx +2 -2
  84. package/docs/scenarios/form-filling.mdx +3 -3
  85. package/docs/scenarios/log-in.mdx +3 -3
  86. package/docs/scenarios/pdf-generation.mdx +3 -3
  87. package/docs/scenarios/spell-check.mdx +2 -2
  88. package/docs/security/action.mdx +6 -6
  89. package/docs/security/agent.mdx +10 -10
  90. package/docs/security/platform.mdx +10 -8
  91. package/docs/snippets/test-prereqs.mdx +1 -1
  92. package/docs/snippets/yml-warning.mdx +1 -0
  93. package/docs/tutorials/advanced-test.mdx +9 -9
  94. package/docs/tutorials/basic-test.mdx +3 -3
  95. package/package.json +1 -1
  96. package/styles/config/vocabularies/Docs/accept.txt +32 -4
  97. package/testdriver/spotify.yaml +5 -0
  98. package/docs/action/prerun.mdx +0 -137
@@ -0,0 +1,191 @@
1
+ ---
2
+ title: "Prerun Scripts"
3
+ sidebarTitle: "Prerun Scripts"
4
+ description: "Learn how to customize and set up your TestDriver environment to optimize your CI/CD pipeline."
5
+ icon: "terminal"
6
+ ---
7
+
8
+ ## Prerun configuration
9
+
10
+ ## The default `prerun` script
11
+ A default prerun script is generated automatically when you set up your TestDriver environment by running:
12
+ ```bash
13
+ testdriverai init
14
+ ```
15
+
16
+ This creates the following directory structure:
17
+ ```bash
18
+ testdriver
19
+ ├── lifecycle
20
+ │ └── prerun.yaml
21
+ ```
22
+
23
+ This script includes essential commands to ensure your VM is ready for testing. You can customize this script to suit your specific needs, but it's important to understand that it serves as a baseline for your test environment. It looks like this:
24
+
25
+
26
+ ```yaml ./lifecycle/prerun.yaml
27
+ version: 5.1.1
28
+ steps:
29
+ - prompt: launch chrome
30
+ commands:
31
+ - command: exec
32
+ lang: shell
33
+ linux: |
34
+ jumpapp google-chrome --disable-fre --no-default-browser-check --no-first-run "${TD_WEBSITE}" &
35
+ exit
36
+ mac: |
37
+ open -na "Google Chrome" --args --disable-fre --no-default-browser-check --no-first-run --disable-features=PasswordManagerEnabled "${TD_WEBSITE}" &
38
+ exit
39
+ windows: |
40
+ Start-Process "C:/Program Files/Google/Chrome/Application/chrome.exe" -ArgumentList "--start-maximized", "${TD_WEBSITE}"
41
+ exit
42
+ - command: wait-for-text
43
+ text: "Google Chrome"
44
+ timeout: 30000
45
+ ```
46
+ Note that the default script launches the Chrome browser. If you want to use a different browser, modify the `prompt` and `commands` sections accordingly.
47
+
48
+ <Warning>
49
+ The `wait-for-text` command is crucial for ensuring that the browser is fully loaded before proceeding with the test steps. If you're testing a web application, you may want to adjust the `text` parameter to match the expected app name, title, or content of the page.
50
+ </Warning>
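+
+ For example, if your prerun navigates to your own web app, you can wait on text you know appears once the page has loaded. This snippet is illustrative only; the "Acme Dashboard" title is a hypothetical placeholder for your app's actual visible text:
+
+ ```yaml
+ - command: wait-for-text
+   text: "Acme Dashboard"
+   timeout: 30000
+ ```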
51
+
52
+ If a file is found at `./lifecycle/prerun.yaml`, it's executed _before_ the test begins. _This happens even if you have a pre- or post-test script defined in your GitHub Actions workflow._ This file works like any other TestDriver file and is commonly used to perform tasks such as opening a browser, navigating to a specific page, or resetting the application state.
53
+
54
+ ### Prerun diagram
55
+ ```mermaid
56
+ graph TD
57
+ A{Is prerun available at ./lifecycle/prerun.yaml and a testfile present in ./somedirectory?}
58
+ A -->|Yes| B[Run ./lifecycle/prerun.yaml]
59
+ B --> C[Run ./somedirectory/testfile.yaml]
60
+ A -->|No| D{Is ./somedirectory/testfile.yaml present?}
61
+ D -->|Yes| C
62
+ ```
63
+ ---
64
+
65
+ ## Examples
66
+ The `prerun.yaml` file can be combined with the [`exec`](/commands/exec) command to open the Chrome browser and navigate to a page, as in the example above. This ensures that the test environment is properly set up before the test starts. _Note that the prerun is a TestDriver test file like any other, combining commands to complete a task_.
67
+
68
+ ---
69
+
70
+ ### Opening a different browser
71
+ Here is an example of a `prerun.yaml` file that opens Firefox instead of Chrome (note the `wait-for-text` command at the end):
72
+
73
+ ```yaml {5, 16} ./lifecycle/prerun.yaml
74
+ version: 5.1.1
75
+ steps:
76
+ - prompt: launch firefox
77
+ commands:
78
+ - command: exec
79
+ lang: shell
80
+ linux: |
81
+ jumpapp firefox --no-default-browser-check --no-first-run "${TD_WEBSITE}" &
82
+ exit
83
+
84
+ mac: |
85
+ open -na "Firefox" --args --no-default-browser-check --no-first-run "${TD_WEBSITE}" &
86
+ exit
87
+
88
+ windows: |
89
+ Start-Process "C:/Program Files/Mozilla Firefox/firefox.exe" -ArgumentList "--no-default-browser-check", "--no-first-run", "${TD_WEBSITE}"
90
+ exit
91
+ - command: wait-for-text
92
+ text: "Firefox"
93
+ timeout: 30000
94
+ ```
95
+
96
+ ---
97
+ ### Loading a calculator app on Linux, macOS, or Windows
98
+ ```yaml ./lifecycle/prerun.yaml
99
+ version: 5.5.5
100
+ steps:
101
+ - prompt: launch a calculator
102
+ commands:
103
+ - command: exec
104
+ lang: shell
105
+ linux: |
107
+ jumpapp /usr/bin/galculator > /dev/null 2>&1 &
108
+ exit
109
+ mac: |
110
+ open "/Applications/Calculator.app"
111
+ wait
112
+ windows: |
113
+ start /B calc.exe
114
+ timeout /t 5
115
+ - command: wait-for-text
116
+ text: "galculator"
117
+ timeout: 30000
118
+ ```
119
+
120
+ ## Key points
121
+ - **Provisioning**: Prerun scripts allow you to provision the VM with the necessary tools and configurations before running tests.
122
+ - **Reproducibility**: By ensuring a consistent environment, prerun scripts help prevent flaky tests caused by environmental differences.
123
+ - **Flexibility**: You can use prerun scripts to customize the VM for specific test scenarios, such as installing alternative browsers or setting up staging environments.
124
+
125
+ ---
126
+
127
+ ## Best practices
128
+ - **Keep It Simple**: Write clear and concise prerun scripts to minimize setup time and reduce complexity.
129
+ - **Error Handling**: Include checks to verify that dependencies are installed successfully. Log errors to help debug issues.
130
+ - **Optimize Performance**: Cache dependencies or use lightweight tools to speed up the setup process.
131
+ - **Security**: Avoid hardcoding sensitive information in prerun scripts. Use GitHub secrets to securely pass credentials or tokens.
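+
+ As a sketch of the security point above, a GitHub Actions workflow can forward a secret into the prerun as an environment variable instead of hardcoding it. The step below is illustrative only; the step shape and the `TD_LOGIN_PASSWORD` variable name are assumptions, not part of the default setup:
+
+ ```yaml
+ # Illustrative workflow fragment; adapt to your actual TestDriver step.
+ - name: Run TestDriver
+   env:
+     # Secret is injected at runtime, never committed to the repository.
+     TD_LOGIN_PASSWORD: ${{ secrets.LOGIN_PASSWORD }}
+   run: npx testdriverai@latest run testdriver/testdriver.yaml
+ ```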
132
+
133
+
134
+ ---
135
+ ## Example
136
+
137
+ ### Using `exec` shell commands in prerun to set up a test file
138
+ Here we do the same thing as the previous example, except the `shell` commands set up the calculator app for the test file. This method is useful if you're going to use the same application again across many tests.
139
+
140
+ ```yaml ./lifecycle/prerun.yaml
141
+ version: 5.5.5
142
+ steps:
143
+ - prompt: launch a calculator
144
+ commands:
145
+ - command: exec
146
+ lang: shell
147
+ linux: |
149
+ jumpapp /usr/bin/galculator > /dev/null 2>&1 &
150
+ exit
151
+ mac: |
152
+ open "/Applications/Calculator.app"
153
+ wait
154
+ windows: |
+ start /B calc.exe
+ timeout /t 5
156
+ - command: wait-for-text
157
+ text: "calculator"
158
+ timeout: 30000
159
+ ```
160
+
161
+ Then, when the runner executes this test file, the calculator is already loaded:
162
+ ```yaml testfile.yaml
163
+ version: 5.5.5
164
+ steps:
165
+ - prompt: /try performing the operation 2 + 2 = on the calculator that is opened
166
+ commands:
167
+ - command: focus-application
168
+ name: galculator
169
+ - command: hover-image
170
+ description: button with number 2 on the calculator
171
+ action: click
172
+ - command: hover-image
173
+ description: plus button on the calculator
174
+ action: click
175
+ - command: hover-image
176
+ description: button with number 2 on the calculator
177
+ action: click
178
+ - command: hover-image
179
+ description: equals button on the calculator
180
+ action: click
181
+ ```
182
+
183
+ <Tip>See [`exec`](/commands/exec) for more examples of using `shell` or `js` contexts within your prerun.</Tip>
184
+
185
+ ---
186
+
187
+ ## Notes
188
+ - Prerun scripts are executed on the VM before the test suite begins.
189
+ - They're essential for ensuring a consistent and reliable test environment.
190
+ - For advanced workflows, combine prerun scripts with TestDriver prompts to create dynamic and flexible test setups.
191
+ - Currently, there is no teardown task implemented within TestDriver itself. This is due to the nature of deployed tests: when running on GitHub Actions, the virtual machine (VM) is _ephemeral_, meaning everything is destroyed after the test run.
@@ -9,15 +9,15 @@ TestDriver uses **visual understanding** to locate elements on the screen. Unlik
9
9
 
10
10
  ---
11
11
 
12
- ## Key Principles for Locating Elements
12
+ ## Key principles for locating elements
13
13
 
14
14
  1. **Describe the Element Visually**:
15
15
  - Focus on the **appearance** of the element, not its behavior or function.
16
16
  - Mention unique visual traits such as **text**, **color**, **size**, **position**, or **icon**.
17
17
 
18
18
  2. **Avoid Behavioral Descriptions**:
19
- - don't describe what the element does (e.g., "button that submits the form").
20
- - Instead, describe how it looks (e.g., "blue button with the text 'Submit' in the bottom-right corner").
19
+ - Don't describe what the element does (for example, "button that submits the form").
20
+ - Instead, describe how it looks (for example, "blue button with the text 'Submit' in the bottom-right corner").
21
21
 
22
22
  3. **Use Unique Identifiers**:
23
23
  - If the element has visible text, include it in the description.
@@ -29,7 +29,7 @@ TestDriver uses **visual understanding** to locate elements on the screen. Unlik
29
29
 
30
30
  ---
31
31
 
32
- ## Examples of Effective Descriptions
32
+ ## Examples of effective descriptions
33
33
 
34
34
  ### 1. Buttons
35
35
  - **Good**: "Blue button with the text 'Sign In' in the top-right corner."
@@ -43,7 +43,7 @@ TestDriver uses **visual understanding** to locate elements on the screen. Unlik
43
43
  - **Good**: "Magnifying glass icon next to the search bar."
44
44
  - **Bad**: "Search icon that opens the search feature."
45
45
 
46
- ### 4. Input Fields
46
+ ### 4. Input fields
47
47
  - **Good**: "White input box labeled 'Email Address' above the password field."
48
48
  - **Bad**: "Field where the user enters their email."
49
49
 
@@ -53,11 +53,11 @@ TestDriver uses **visual understanding** to locate elements on the screen. Unlik
53
53
 
54
54
  ---
55
55
 
56
- ## How to Write TestDriver Commands
56
+ ## How to write TestDriver commands
57
57
 
58
- ### Example: Locating a Button
58
+ ### Example: Locating a button
59
59
 
60
- #### YAML Command:
60
+ #### TestDriver command:
61
61
 
62
62
  ```yaml
63
63
  - command: hover-text
@@ -71,24 +71,21 @@ TestDriver uses **visual understanding** to locate elements on the screen. Unlik
71
71
 
72
72
  ---
73
73
 
74
- ### Example: Locating an Icon
74
+ ### Example: Locating an icon
75
75
 
76
- #### YAML Command:
76
+ #### TestDriver command:
77
77
 
78
78
  ```yaml
79
79
  - command: hover-image
80
80
  description: Magnifying glass icon next to the search bar
81
81
  action: click
82
-
83
-
84
82
  ```
85
83
 
86
-
87
84
  ---
88
85
 
89
- ### Example: Locating a Link
86
+ ### Example: Locating a link
90
87
 
91
- #### YAML Command:
88
+ #### TestDriver command:
92
89
 
93
90
  ```yaml
94
91
  - command: hover-text
@@ -102,7 +99,7 @@ TestDriver uses **visual understanding** to locate elements on the screen. Unlik
102
99
 
103
100
  ---
104
101
 
105
- ## Debugging Element Detection
102
+ ## Debugging element detection
106
103
 
107
104
  1. **Run the Test**:
108
105
  - Execute the test using TestDriver.
@@ -113,12 +110,12 @@ TestDriver uses **visual understanding** to locate elements on the screen. Unlik
113
110
  - If the wrong element is highlighted, refine your description.
114
111
 
115
112
  3. **Adjust the Description**:
116
- - Add more specific visual details (e.g., color, position, associated text).
113
+ - Add more specific visual details (for example, color, position, associated text).
117
114
  - Avoid generic terms like "button" or "icon" without additional context.
118
115
 
119
116
  ---
120
117
 
121
- ## Best Practices
118
+ ## Best practices
122
119
 
123
120
  1. **Be Specific**:
124
121
  - Include as many unique visual traits as possible to differentiate the element from others.
@@ -5,7 +5,7 @@ description: "Learn how to use variables in TestDriver for dynamic data handling
5
5
  icon: "square-root-variable"
6
6
  ---
7
7
 
8
- # Using Variables in TestDriver
8
+ # Using variables in TestDriver
9
9
 
10
10
  Variables in **TestDriver** allow you to dynamically store and reuse data during test execution. This feature is particularly useful for handling dynamic content, passing data between steps, and customizing test behavior based on runtime conditions.
11
11
 
@@ -16,7 +16,7 @@ Variables in **TestDriver** allow you to dynamically store and reuse data during
16
16
 
17
17
  By leveraging variables in TestDriver, you can create dynamic, flexible, and reusable test scripts that adapt to changing conditions and data.
18
18
 
19
- #### Test File Example:
19
+ #### Test file example:
20
20
 
21
21
  ```yaml
22
22
  version: 5.1.0
@@ -41,7 +41,7 @@ steps:
41
41
  text: ${OUTPUT.randomNumber}
42
42
  ```
43
43
 
44
- ## GitHub Actions Example
44
+ ## GitHub Actions example
45
45
 
46
46
  ```yaml
47
47
  name: TestDriver with Variables
@@ -74,7 +74,7 @@ jobs:
74
74
  GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
75
75
  FORCE_COLOR: "3"
76
76
  ```
77
- #### Test File
77
+ #### Test file
78
78
 
79
79
  ```yaml
80
80
  version: 4.2.18
@@ -99,9 +99,9 @@ steps:
99
99
  action: click
100
100
  ```
101
101
 
102
- ### **Capturing Outputs as Variables**
102
+ ### **Capturing outputs as variables**
103
103
 
104
- #### Test File Example:
104
+ #### Test file example:
105
105
 
106
106
  ```yaml
107
107
  version: 5.1.0
@@ -121,10 +121,10 @@ steps:
121
121
  expect: "${OUTPUT.welcomeMessage}" == "Welcome, Test User!"
122
122
  ```
123
123
 
124
- ## Best Practices for Using Variables
124
+ ## Best practices for using variables
125
125
 
126
126
  1. **Use Descriptive Names**:
127
- - Name variables clearly to indicate their purpose (e.g., `capturedText`, `randomNumber`).
127
+ - Name variables clearly to indicate their purpose (for example, `capturedText`, `randomNumber`).
128
128
 
129
129
  2. **Secure Sensitive Data**:
130
130
  - Use environment variables for sensitive information like credentials or API keys.
@@ -5,7 +5,7 @@ description: "Waiting in TestDriver: Ensuring Stability and Reducing Flakiness"
5
5
  icon: "clock"
6
6
  ---
7
7
 
8
- # Waiting in TestDriver: Ensuring Stability and Reducing Flakiness
8
+ # Waiting in TestDriver: Ensuring stability and reducing flakiness
9
9
 
10
10
  Waiting is a critical feature in TestDriver that ensures tests are stable and reliable, even in dynamic or slow-loading environments.
11
11
 
@@ -14,7 +14,7 @@ Waiting is a critical feature in TestDriver that ensures tests are stable and re
14
14
  - By waiting for the screen and network to stabilize, `redraw` minimizes the risk of interacting with incomplete or incorrect elements.
15
15
 
16
16
 
17
- ## Summary of Waiting Features
17
+ ## Summary of waiting features
18
18
 
19
19
  | **Feature** | **Description** |
20
20
  |-----------------------|---------------------------------------------------------------------------------|
@@ -22,13 +22,13 @@ Waiting is a critical feature in TestDriver that ensures tests are stable and re
22
22
  | **`wait-for-text`** | Waits for specific text to appear on the screen. |
23
23
  | **`wait-for-image`** | Waits for a specific image or visual element to appear on the screen. |
24
24
 
25
- ## Key Waiting Features in TestDriver
25
+ ## Key waiting features in TestDriver
26
26
 
27
27
  1. **Automatic Waiting with `redraw`**:
28
28
  - TestDriver automatically waits for the machine before moving to the next step.
29
29
  - This includes waiting for:
30
30
  - UI changes to complete.
31
- - Network activity to stabilize (e.g., API calls).
31
+ - Network activity to stabilize (for example, API calls).
32
32
  - Reduces the need for manual waits, making tests faster and less prone to flakiness.
33
33
 
34
34
  2. **`wait-for-text` Command**:
@@ -40,13 +40,13 @@ Waiting is a critical feature in TestDriver that ensures tests are stable and re
40
40
  - Ideal for verifying the presence of icons, logos, or other graphical elements.
41
41
 
42
42
 
43
- ## Commands for Explicit Waiting
43
+ ## Commands for explicit waiting
44
44
 
45
45
  ### 1. **`wait-for-text`**
46
46
 
47
47
  The `wait-for-text` command pauses the test until the specified text appears on the screen. This is particularly useful for dynamic content that takes time to load.
48
48
 
49
- #### Syntax:
49
+ #### Syntax
50
50
 
51
51
  ```yaml
52
52
  - command: wait-for-text
@@ -54,7 +54,7 @@ The `wait-for-text` command pauses the test until the specified text appears on
54
54
  timeout: <time in milliseconds> # Optional, defaults to 5000ms
55
55
  ```
56
56
 
57
- #### Example:
57
+ #### Example
58
58
 
59
59
  ```yaml
60
60
  - command: wait-for-text
@@ -86,7 +86,7 @@ The `wait-for-image` command pauses the test until the specified image or visual
86
86
 
87
87
  In this example, the test waits up to 8 seconds for the company logo to appear in the top-left corner.
88
88
 
89
- ## Automatic Waiting with `redraw`
89
+ ## Automatic waiting with `redraw`
90
90
 
91
91
  TestDriver's **`redraw` function** is a built-in mechanism that automatically waits for the screen to stabilize before proceeding to the next step. This includes:
92
92
 
@@ -95,7 +95,7 @@ TestDriver's **`redraw` function** is a built-in mechanism that automatically wa
95
95
  - Ensures that the screen is fully rendered before interacting with elements.
96
96
 
97
97
  2. **Network Stabilization**:
98
- - Waits for network activity (e.g., API calls, AJAX requests) to finish.
98
+ - Waits for network activity (for example, API calls, AJAX requests) to finish.
99
99
  - Ensures that dynamic content is fully loaded before proceeding.
100
100
 
101
101
  3. **Screen Stabilization**:
@@ -103,7 +103,7 @@ TestDriver's **`redraw` function** is a built-in mechanism that automatically wa
103
103
 
104
104
  ---
105
105
 
106
- ## Best Practices for Waiting
106
+ ## Best practices for waiting
107
107
 
108
108
  2. **Leverage Automatic Waiting**:
109
109
  - Rely on TestDriver's `redraw` function to handle most waiting scenarios automatically.
@@ -9,7 +9,7 @@ This guide demonstrates how to convert user stories from **CSV files** into indi
9
9
 
10
10
  ---
11
11
 
12
- ## Workflow Overview
12
+ ## Workflow overview
13
13
 
14
14
  1. **Export User Stories**: Extract user stories from TestRail, CSV, or Jira.
15
15
  2. **Convert to Test Files**: Use a Node.js script to generate individual YAML test files for each user story.
@@ -17,11 +17,11 @@ This guide demonstrates how to convert user stories from **CSV files** into indi
17
17
 
18
18
  ---
19
19
 
20
- ## Step 1: Export User Stories
20
+ ## Step 1: Export user stories
21
21
 
22
- Export user stories from your source (TestRail, CSV, or Jira) and save them in a structured format (e.g., JSON or CSV). For example:
22
+ Export user stories from your source (TestRail, CSV, or Jira) and save them in a structured format (for example, JSON or CSV). For example:
23
23
 
24
- ### Example CSV File (`user_stories.csv`):
24
+ ### Example CSV file (`user_stories.csv`):
25
25
 
26
26
  | User Story ID | User Story Title | Description |
27
27
  |---------------|---------------------------|--------------------------------------|
@@ -31,18 +31,18 @@ Export user stories from your source (TestRail, CSV, or Jira) and save them in a
31
31
 
32
32
  ---
33
33
 
34
- ## Step 2: Node.js Script to Convert User Stories into Test Files
34
+ ## Step 2: Node.js script to convert user stories into test files
35
35
 
36
36
  Create a Node.js script to read the exported data and generate individual YAML test files for each user story.
37
37
 
38
- ### Install Required Dependencies
38
+ ### Install required dependencies
39
39
 
40
40
  ```bash
41
41
  npm install yaml fs csv-parser
42
42
 
43
43
  ```
44
44
 
45
- ### Node.js Script (`generate-tests.js`)
45
+ ### Node.js script (`generate-tests.js`)
46
46
 
47
47
  ```javascript
48
48
  const fs = require('fs');
@@ -73,7 +73,7 @@ function generateTestFile(userStory) {
73
73
  };
74
74
 
75
75
  const yamlContent = yaml.stringify(testContent);
76
- const fileName = `test_${id}.yml`;
76
+ const fileName = `test_${id}.yaml`;
77
77
  const filePath = path.join(outputDir, fileName);
78
78
 
79
79
  fs.writeFileSync(filePath, yamlContent, 'utf8');
@@ -92,23 +92,22 @@ fs.createReadStream(inputFile)
92
92
 
93
93
  ```
94
94
 
95
- ### Run the Script
95
+ ### Run the script
96
96
 
97
97
  ```bash
98
98
  node generate-tests.js
99
-
100
99
  ```
101
100
 
102
- This script will generate individual YAML test files (e.g., `test_1.yml`, `test_2.yml`) in the `testdriver/` directory.
101
+ This script will generate individual YAML test files (for example, `test_1.yaml`, `test_2.yaml`) in the `testdriver/` directory.
103
102
 
104
103
  ---
105
104
 
106
- ## Step 3: GitHub Actions Workflow to Run Tests
105
+ ## Step 3: GitHub Actions workflow to run tests
107
106
 
108
107
  Create a GitHub Actions workflow to execute the generated test files in parallel.
109
108
 
110
- ### GitHub Actions Workflow (`.github/workflows/run-tests.yml`)
111
- ```yaml
109
+ ### GitHub Actions workflow (`.github/workflows/run-tests.yaml`)
110
+ ```yaml [expandable]
112
111
  name: Run TestDriver Tests
113
112
 
114
113
  on:
@@ -131,7 +130,7 @@ jobs:
131
130
  - name: Find all test files
132
131
  id: test_list
133
132
  run: |
134
- FILES=$(ls ./testdriver/*.yml)
133
+ FILES=$(ls ./testdriver/*.yaml)
135
134
  FILES_JSON=$(echo "$FILES" | jq -R -s -c 'split("\n")[:-1]')
136
135
  echo "::set-output name=files::$FILES_JSON"
137
136
 
@@ -161,20 +160,20 @@ jobs:
161
160
 
162
161
  ---
163
162
 
164
- ## Step 4: Secure API Key
163
+ ## Step 4: Secure API key
165
164
 
166
- Store your TestDriver API key as a GitHub secret (e.g., `TD_API_KEY`) to securely authenticate your tests.
165
+ Store your TestDriver API key as a GitHub secret (for example, `TD_API_KEY`) to securely authenticate your tests.
167
166
 
168
167
  ---
169
168
 
170
- ## Step 5: Commit and Push
169
+ ## Step 5: Commit and push
171
170
 
172
- 1. Commit the `generate-tests.js` script, `user_stories.csv`, and `.github/workflows/run-tests.yml` to your repository.
171
+ 1. Commit the `generate-tests.js` script, `user_stories.csv`, and `.github/workflows/run-tests.yaml` to your repository.
173
172
  2. Push the changes to the `main` branch.
174
173
 
175
174
  ---
176
175
 
177
- ## Step 6: Run the Workflow
176
+ ## Step 6: Run the workflow
178
177
 
179
178
  The GitHub Actions workflow will automatically:
180
179
  1. Gather all test files in the `testdriver/` directory.
@@ -182,12 +181,12 @@ The GitHub Actions workflow will automatically:
182
181
 
183
182
  ---
184
183
 
185
- ## Example Output
184
+ ## Example output
186
185
 
187
186
  - **Generated Test Files**:
188
- - `testdriver/test_1.yml`
189
- - `testdriver/test_2.yml`
190
- - `testdriver/test_3.yml`
187
+ - `testdriver/test_1.yaml`
188
+ - `testdriver/test_2.yaml`
189
+ - `testdriver/test_3.yaml`
191
190
 
192
191
  - **GitHub Actions Dashboard**:
193
192
  - Each test file is executed as a separate job, and the results are displayed in the Actions tab.