testdriverai 6.1.8-canary.5d001de.0 → 6.1.8-canary.e81d80a.0

This diff shows the content of publicly available package versions that have been released to one of the supported registries. It is provided for informational purposes only and reflects the changes between package versions as they appear in their respective public registries.
@@ -7,6 +7,8 @@ on:
  # So that we don't do expensive tests until approved
  push:
  branches: [main]
+ paths-ignore:
+ - "docs/**"
  # So that we can manually trigger tests when there's flake
  workflow_dispatch:

@@ -5,6 +5,8 @@ on:
  push:
  branches:
  - main
+ paths-ignore:
+ - "docs/**"
  pull_request:
  branches:
  - main
@@ -1,7 +1,10 @@
  # Ensure affected code follows standards and is formatted correctly. Otherwise, automatic formatting in future changes will cause larger diffs.
  name: Lint + Prettier

- on: push
+ on:
+ push:
+ paths-ignore:
+ - "docs/**"

  jobs:
  lint:
@@ -10,6 +10,8 @@ on:
  # So that we publish for every push to `main`, despite tests
  push:
  branches: [main]
+ paths-ignore:
+ - "docs/**"
  workflow_dispatch:

  jobs:
@@ -2,6 +2,7 @@
  name: Publish @latest to NPM

  on:
+ workflow_dispatch:
  workflow_run:
  workflows: ["Acceptance Tests"]
  branches: [main]
@@ -3,6 +3,8 @@ name: AWS
  on:
  workflow_dispatch:
  push:
+ paths-ignore:
+ - "docs/**"

  jobs:
  gather:
@@ -67,6 +69,8 @@ jobs:
  AWS_REGION: us-east-2
  AWS_LAUNCH_TEMPLATE_ID: lt-00d02f31cfc602f27
  AMI_ID: ami-085f872ca0cd80fed
+ RESOLUTION_WIDTH: 1920
+ RESOLUTION_HEIGHT: 1080
  - name: Run TestDriver
  run: node bin/testdriverai.js run testdriver/acceptance/${{ matrix.test }} --ip="${{ steps.aws-setup.outputs.public-ip }}" --junit=out.xml
  env:
@@ -24,15 +24,15 @@ We specialize in testing scenarios that other tools can't handle - desktop appli

  ## Enterprise Plans

- TestDriver Enterprise plans start at **$995/month** and include:
+ TestDriver Enterprise plans start at **$2,000/month** and include:

+ - A pilot program with our expert team to create 4 custom tests within your first 4 weeks (4x4 Guarantee).
+ - 4 parallel tests
  - **12,500 runner minutes per month**: Sufficient capacity for continuous testing of your custom test suite.
- - Enterprise-grade test dashboards with advanced analytics.
  - Full CI/CD pipeline integration with custom configurations.
  - Dedicated infrastructure and ongoing support for complex testing scenarios.
- - Expert test creation and maintenance services.

- For detailed pricing and contract information, see [Contract Details](#contract-details). For other plans, visit our [Pricing](/account/pricing) page.
+ For detailed pricing and contract information our [Pricing](/account/pricing) page. Want unlimited minutes or enhanced security? We also support self-hosted options with 16 parallel tests starting at $2,000/month. See our [Self Hosting](/getting-started/self-hosting) docs for more info.

  <CardGroup cols={3}>
  <Card title="Custom Desktop & Extension Testing">
@@ -66,17 +66,14 @@ TestDriver Enterprise provides comprehensive support for fast-moving teams with

  Testing complex applications requires more than standard automation tools. Desktop applications, browser extensions, and multi-platform workflows demand specialized infrastructure, custom integrations, and deep technical expertise. TestDriver Enterprise provides the complete solution - from initial setup through ongoing maintenance and support.

- For more details, see [Contract Details](#contract-details).
-
  ---

  ## Implementation Process

  1. **Initial Consultation**: Discuss your specific testing challenges, application architecture, and infrastructure requirements.
- 2. **Custom Infrastructure Design**: Configure specialized testing environments tailored to your technology stack and workflow requirements.
- 3. **Expert Test Development**: Our team develops 4 custom tests designed specifically for your application's critical user flows and business logic.
- 4. **Integration & Deployment**: Implement tests within your CI/CD pipeline with custom monitoring and reporting configurations.
- 5. **Team Training & Ongoing Support**: Comprehensive training for your team plus ongoing technical support and consultation.
+ 2. **Expert Test Development**: Our team develops 4 custom tests designed specifically for your application's critical user flows and business logic.
+ 3. **Integration & Deployment**: Implement tests within your CI/CD pipeline with custom monitoring and reporting configurations.
+ 4. **Team Training & Ongoing Support**: Comprehensive training for your team plus ongoing technical support and consultation.

  Complex applications - particularly desktop software, browser extensions, and multi-platform workflows - present unique testing challenges that require specialized infrastructure and deep technical expertise. TestDriver Enterprise addresses these challenges with custom solutions designed specifically for your application and development process.

@@ -86,7 +83,6 @@ Complex applications - particularly desktop software, browser extensions, and mu

  | Service | Timeline | Description |
  | --------------------------------- | ------------- | ------------------------------------------------------------------------------------------------ |
- | **Infrastructure Design** | First 7 Days | Analysis and configuration of specialized testing environments for your application stack. |
  | **Requirements Analysis** | First 7 Days | Comprehensive review of testing requirements and technical specifications. |
  | **Custom Test Development** | First 4 Weeks | Expert creation of 4 fully customized tests (4x4 Guarantee) tailored to your critical workflows. |
  | **Training & Knowledge Transfer** | First 30 Days | Technical training for your team and establishment of ongoing support processes. |
@@ -103,7 +99,7 @@ Complex applications - particularly desktop software, browser extensions, and mu
  - **Service Level**: Dedicated support team and technical consultation included.
  - **Usage Tracking**: Monthly runner minute allocation with standard overage rates.
  - **Custom Infrastructure**: Specialized testing environments included for complex applications.
- - **Enterprise Options**: On-premises and BYOC (Bring Your Own Cloud) configurations available.
+ - **Enterprise Options**: [Self hosting](/getting-started/self-hosting) configurations available.

  ---

@@ -19,12 +19,12 @@ TestDriver offers a range of pricing plans to suit different needs, from individ
  </Card>
  <Card title="Enterprise" icon="shield">
  Need advanced features? Contact us for tailored solutions. Starting at
- $995/month.
+ $2,000/month.
  </Card>
  </CardGroup>

  <Tip>
- Every plan starts with $100 in TestDriver credits to get you off the starting
+ Every plan comes with access to the Playwright SDK to get you off the starting
  line!
  </Tip>

@@ -23,6 +23,11 @@ From the Project view, you can see all the replays (Dashes) stored for that proj

  When you create a new Project, you can also enable the <Icon icon="jira" /> Jira integration to create issues automatically each time a replay (Dash) is created.

+ <Info>
+ The project ID can be used in conjunction with your `lifecycle/postrun.yaml`
+ script to automatically assign a replay to a project. For more info see the
+ (Dashcam section)[/guide/dashcam].
+ </Info>
  <Frame caption="Click a Project to view its replays">
  <img src="/images/content/account/newprojectsettings.png" />
  </Frame>
@@ -14,10 +14,7 @@ icon: "https://tauri.app/favicon.svg"

  In this guide, we'll leverage [Playwright](https://playwright.dev/) and the [TestDriver Playwright SDK](/getting-started/playwright) to convert the [Tauri Quick Start](https://tauri.app/start/create-project/) to TestDriver's selectorless, Vision AI.

- <Info>
- View Source:
- https://github.com/testdriverai/demo-tauri-app
- </Info>
+ <Info>View Source: https://github.com/testdriverai/demo-tauri-app</Info>

  ### Requirements

@@ -30,7 +27,7 @@ To start testing your Tauri app with TestDriver, you need the following:
  You will need a [Free TestDriver Account](https://app.testdriver.ai/team) to get an API key.

  <Card title="Sign Up for TestDriver" icon="user-plus" horizontal href="https://app.testdriver.ai/team">
-
+
  </Card>
  </Step>
  <Step title="Set up your environment">
@@ -54,6 +51,7 @@ To start testing your Tauri app with TestDriver, you need the following:
  </Tabs>
  </Step>
  </Steps>
+
  </Accordion>
  <Accordion title="Create a Tauri project">
  <Note>
@@ -98,6 +96,7 @@ To start testing your Tauri app with TestDriver, you need the following:
  ✔ Install Playwright browsers (can be done manually via 'npx playwright install')? (Y/n)
  > Y
  ```
+
  </Accordion>
  <Accordion title="Install the TestDriver Playwright SDK">
  `@testdriver.ai/playwright` is an AI-powered extension of `@playwright/test`.
@@ -119,6 +118,7 @@ To start testing your Tauri app with TestDriver, you need the following:
  ```
  </Tab>
  </Tabs>
+
  </Accordion>
  </AccordionGroup>

@@ -146,6 +146,7 @@ First, we need to modify the default Playwright configuration and our Tauri proj
  },
  });
  ```
+
  </Step>
  <Step title="Mock Tauri APIs">
  Since we're testing the Tauri frontend, we need to [mock IPC Requests](https://tauri.app/develop/tests/mocking/)
@@ -167,6 +168,7 @@ First, we need to modify the default Playwright configuration and our Tauri proj
  ```

  We only need to do this once, as we'll be accessing `window.mockIPC` in our tests.
+
  </Step>
  <Step title="Create a new test file">
  Create a new file (e.g. `tests/testdriver.spec.ts`) with:
@@ -174,15 +176,16 @@ First, we need to modify the default Playwright configuration and our Tauri proj
  ```typescript tests/testdriver.spec.ts
  import type { mockIPC } from "@tauri-apps/api/mocks";
  import { expect, test } from "@playwright/test";
-
+
  test.beforeEach(async ({ page }) => {
  await page.goto("http://localhost:1420");
  });
-
+
  test("should have title", async ({ page }) => {
  await expect(page).toHaveTitle("Tauri + React + TypeScript");
  });
  ```
+
  </Step>
  <Step title="Run Playwright in UI Mode">
  Now we're ready to run Playwright and start working on our tests:
@@ -211,6 +214,7 @@ First, we need to modify the default Playwright configuration and our Tauri proj
  <Tip>
  Click the <Icon icon="eye" /> button to automatically re-run tests on save.
  </Tip>
+
  </Step>
  </Steps>

@@ -300,6 +304,7 @@ With TestDriver, we can skip the test implementation **entirely** and let AI per
  });
  });
  ```
+
  </Step>
  <Step title="Add an Agentic Test">
  Next, wrap a _prompt_ in `test.agent` to perform the test:
@@ -313,6 +318,7 @@ With TestDriver, we can skip the test implementation **entirely** and let AI per
  `);
  });
  ```
+
  </Step>
  </Steps>

@@ -327,27 +333,13 @@ We can use TestDriver and natural language to test our Tauri desktop app:
  <Steps>
  <Step title="Run the Desktop App">
  <Tabs>
- <Tab title="npm">
- ```bash
- npm run tauri dev
- ```
- </Tab>
- <Tab title="yarn">
- ```bash
- yarn tauri dev
- ```
- </Tab>
- <Tab title="pnpm">
- ```bash
- pnpm tauri dev
- ```
- </Tab>
+ <Tab title="npm">```bash npm run tauri dev ```</Tab>
+ <Tab title="yarn">```bash yarn tauri dev ```</Tab>
+ <Tab title="pnpm">```bash pnpm tauri dev ```</Tab>
  </Tabs>
  </Step>
  <Step title="Continued Reading">
- <Note>
- See [Desktop Apps](/apps/desktop-apps) for more information.
- </Note>
+ <Note>See [Desktop Apps](/apps/desktop-apps) for more information.</Note>
  </Step>
  </Steps>

@@ -358,26 +350,12 @@ We can use TestDriver and natural language to test our Tauri iOS app:
  <Steps>
  <Step title="Run the Mobile App">
  <Tabs>
- <Tab title="npm">
- ```bash
- npm run tauri ios dev
- ```
- </Tab>
- <Tab title="yarn">
- ```bash
- yarn tauri ios dev
- ```
- </Tab>
- <Tab title="pnpm">
- ```bash
- pnpm tauri ios dev
- ```
- </Tab>
+ <Tab title="npm">```bash npm run tauri ios dev ```</Tab>
+ <Tab title="yarn">```bash yarn tauri ios dev ```</Tab>
+ <Tab title="pnpm">```bash pnpm tauri ios dev ```</Tab>
  </Tabs>
  </Step>
  <Step title="Continued Reading">
- <Note>
- See [Mobile Apps](/apps/mobile-apps) for more information.
- </Note>
+ <Note>See [Mobile Apps](/apps/mobile-apps) for more information.</Note>
  </Step>
- </Steps>
+ </Steps>
@@ -15,11 +15,11 @@ npx testdriverai@latest <command> [options]

  ## Available commands

- | Command | Description |
- | :--------------------: | :----------------------------------------------------------- |
- | [`run`](/commands/run) | Executes a TestDriver test. |
- | [`edit`](/commands/edit) | Launch interactive mode. |
- | [`help`](/commands/help) | Displays help information for the CLI or a specific command. |
+ | Command | Description |
+ | :----------------------: | :----------------------------------------------------------- |
+ | [`run`](/commands/run) | Executes a TestDriver test. |
+ | [`edit`](/commands/edit) | Launch interactive mode. |
+ | [`help`](/commands/help) | Displays help information for the CLI or a specific command. |

  ## Available Flags

@@ -27,10 +27,9 @@ npx testdriverai@latest <command> [options]
  | :------------------ | :----------------------------------------------------------------------------------------- |
  | `--heal` | Launch exploratory mode and attemp to recover if an error or failing state is encountered. |
  | `--write` | Ovewrite test file with new commands resulting from agentic testing |
- | `--headless` | Run test without opening a browser window (useful for CI/CD environments) |
  | `--new` | Create a new sandbox environment for the test run. |
  | `--summary=<value>` | Output file where AI summary should be saved. |
- | `--junit=<value>` | Output file where junit report should be saved. |
+ | `--junit=<value>` | Output file where junit report should be saved. |

  ## Example usage

package/docs/docs.json CHANGED
@@ -103,6 +103,7 @@
  "/guide/authentication",
  "/guide/variables",
  "/guide/lifecycle",
+ "/guide/dashcam",
  "/guide/environment-variables",
  "/action/ami"
  ]
@@ -50,8 +50,9 @@ test.describe("get started link", () => {
  You will need a [Free TestDriver Account](https://app.testdriver.ai/team) to get an API key.

  <Card title="Sign Up for TestDriver" icon="user-plus" horizontal href="https://app.testdriver.ai/team">
-
+
  </Card>
+
  </Step>
  <Step title="Set up your environment">
  Copy your API key from [the TestDriver dashboard](https://app.testdriver.ai/team), and set it as an environment variable.
@@ -68,6 +69,7 @@ test.describe("get started link", () => {
  ```
  </Tab>
  </Tabs>
+
  </Step>
  </Steps>

@@ -131,6 +133,7 @@ test.describe("get started link", () => {
  ```
  </Tab>
  </Tabs>
+
  </Step>
  </Steps>

@@ -157,6 +160,7 @@ test.describe("get started link", () => {
  ```
  </Tab>
  </Tabs>
+
  </Step>
  <Step title="Run Playwright">
  Before we start using TestDriver in our tests, run Playwright in [UI Mode](https://playwright.dev/docs/test-ui-mode):
@@ -182,6 +186,7 @@ test.describe("get started link", () => {

  Clicking the ▶️ button should successfully run the tests in the UI,
  just as they did before with `playwright test` in the CLI.
+
  </Step>
  <Step title="Import TestDriver">
  For the sake of simplicity, we'll be working with one test file for now.
@@ -199,6 +204,7 @@ test.describe("get started link", () => {
  <Tip>
  Click the <Icon icon="eye" /> button to automatically re-run tests on save.
  </Tip>
+
  </Step>
  </Steps>

@@ -266,14 +272,15 @@ Now, our test uses natural language to both describe & locate the element.
  <Tip>
  In the example above, you can still use Playwright to assert that the element is indeed a link for accessibility:

- ```typescript tests/example.spec.ts icon=square-js
- const link = await testdriver(page).locate("Get started link");
- // [!code ++]
- expect(link).toHaveRole("link");
- await link.click();
- ```
+ ```typescript tests/example.spec.ts icon=square-js
+ const link = await testdriver(page).locate("Get started link");
+ // [!code ++]
+ expect(link).toHaveRole("link");
+ await link.click();
+ ```
+
+ This way you can write user-centric tests _and_ validate the implementation.

- This way you can write user-centric tests _and_ validate the implementation.
  </Tip>

  ### Performing actions with `testdriver.act`
@@ -332,4 +339,4 @@ but replaced the `test` itself with `test.agent`.

  ## Conclusion

- With `@testdriver.ai/playwright`, you can use as much or as little of Playwright's _or_ TestDriver's API as you need to validate correctness. It's up to you!
+ With `@testdriver.ai/playwright`, you can use as much or as little of Playwright's _or_ TestDriver's API as you need to validate correctness. It's up to you!
@@ -0,0 +1,118 @@
+ ---
+ title: "Dashcam Replays"
+ sidebarTitle: "Dashcam"
+ description: "Learn how to use Dashcam to record and replay test sessions in TestDriver."
+ icon: "video"
+ ---
+
+ [Dashcam](https://www.dashcam.io), from the makers of TestDriver, is a powerful feature in TestDriver that allows you to record and replay your test sessions. This is particularly useful for debugging, sharing test runs with team members, or reviewing the steps taken during a test. For the full docs see the [Dashcam docs](https://docs.dashcam.io/dashcam/).
+
+ ## Recording a Test Session
+
+ To record a test session, you can use the `dashcam` command in your lifecycle scripts. There are two main lifecycle scripts where you can integrate Dashcam: `lifecycle/prerun.yaml` and `lifecycle/postrun.yaml`.
+
+ ## Ways to use Dashcam
+
+ Dashcam comes as a standalone app and a Chrome extension. You can use either or both to capture your test sessions.
+
+ <Info>
+ To capture web logs, make sure to install the Dashcam Chrome extension on the
+ browser you are testing with. We recommend installing it via CLI to Chrome for
+ Testing. You can also find the extension [in the Chrome
+ Webstore](https://chromewebstore.google.com/detail/dashcam/dkcoeknmlfnfimigfagbcjgpokhdcbbp)
+ </Info>
+
+ ### Installing the Dashcam Chrome extension via command line in prerun.yaml
+
+ In this lifecycle script, we install Chrome for Testing with a user profile that has the password manager disabled and sets up TestDriver Dashcam for replays and logs.
+
+ ```yaml lifecycle/prerun.yaml [expandable]
+ - prompt: launch chrome for testing and setup dashcam
+ commands:
+ # this script installs chrome for testing with a userprofile that has password manager disabled and sets up TestDriver Dashcam for replays and logs
+ - command: exec
+ lang: pwsh
+ code: |
+ cd $env:TEMP
+ Write-Host "Changed directory to TEMP: $env:TEMP"
+
+ Write-Host "Running 'npm init -y'..."
+ npm init -y
+
+ Write-Host "Installing dependencies: @puppeteer/browsers and dashcam-chrome..."
+ npm install @puppeteer/browsers dashcam-chrome
+
+ Write-Host "Installing Chromium via '@puppeteer/browsers'..."
+ npx @puppeteer/browsers install chrome
+
+ # Define paths
+ $extensionPath = Join-Path (Get-Location) "node_modules/dashcam-chrome/build"
+ $profilePath = Join-Path $env:TEMP "chrome-profile-$(Get-Random)"
+
+ Write-Host "Extension path: $extensionPath"
+ Write-Host "Chrome user data dir: $profilePath"
+
+ # Validate extension path
+ if (-not (Test-Path $extensionPath)) {
+ Write-Host "Extension not found at $extensionPath"
+ }
+
+ $chromeArgs = @(
+ "--start-maximized",
+ "--load-extension=$extensionPath",
+ "--user-data-dir=$profilePath",
+ "--no-first-run",
+ "--no-default-browser-check",
+ "--disable-infobars"
+ "${TD_WEBSITE}"
+ ) -join ' '
+
+ Start-Process "cmd.exe" -ArgumentList "/c", "npx @puppeteer/browsers launch chrome -- $chromeArgs"
+
+ Write-Host "Script complete."
+ exit 0
+ ```
+
+ ### Using the Chrome extension and capturing web logs
+
+ Now in the same `lifecycle/prerun.yaml` script, we set up Dashcam to track web logs and application logs. You can customize the patterns to match your needs. Testing Desktop? You can skip the web logs and just track application logs.
+
+ ```yaml lifecycle/prerun.yaml
+ ...
+ - command: exec
+ lang: pwsh
+ code: |
+ dashcam track --name="Web Logs" --type="web" --pattern="*"
+ dashcam track --name=TestDriver --type=application --pattern="C:\Users\testdriver\Documents\testdriver.log"
+ ```
+
+ ### Starting Dashcam
+
+ The final step in our `lifecycle/prerun.yaml` script is to start Dashcam recording.
+
+ ```yaml lifecycle/prerun.yaml
+ ...
+ - command: exec
+ lang: pwsh
+ code: dashcam start
+ ```
+
+ ### Publishing replays to a project in your account
+
+ Lastly, in the `lifecycle/postrun.yaml` script, we publish the recorded Dashcam session to a project in your Dashcam account. Make sure to replace `<YOUR_PROJECT_ID>` with the actual ID of your project.
+
+ ```yaml lifecycle/postrun.yaml
+ - prompt: send dashcam recording to server
+ # this script tells TestDriver Dashcam to send the recording to the server
+ commands:
+ - command: exec
+ lang: pwsh
+ code: dashcam -t '${TD_THIS_FILE}' -p -k <YOUR_PROJECT_ID> # optional add `-k MYFOLDERID` for the id of a folder in your Projects page at app.testdriver.ai
+ ```
+
+ <Info>
+ `${TD_THIS_FILE}` is an environment variable set by TestDriver that contains
+ the name of the current test file being executed. This will be used as the
+ title of the Dashcam recording. For more info see [parallel testing
+ docs](/features/parallel-testing).
+ </Info>
@@ -11,13 +11,13 @@ import GitignoreWarning from "/snippets/gitignore-warning.mdx";
  The supported environment variables in TestDriver are:

  <div className="env-vars-table">
- | Variable | Type | Description |
+ | Variable | Type | Description |
  |:---------------:|:---------:|---------------------------------------------------------------------------------|
- | TD_ANALYTICS | boolean | Send analytics to TestDriver servers. This helps provide feedback to inform our roadmap. |
- | TD_API_KEY | string | Set this to spawn VMs with TestDriver Pro. |
+ | TD_ANALYTICS | boolean | Send analytics to TestDriver servers. This helps
+ provide feedback to inform our roadmap. | | TD_API_KEY | string | Set this to
+ spawn VMs with TestDriver Pro. |
  </div>
-
- <GitignoreWarning/>
+ <GitignoreWarning />
  ## Example

  ```bash .env
@@ -10,13 +10,10 @@ icon: boxing-glove
  TestDriver operates a full desktop environment, so it can run any application.

  <div className="comparison-table">
- | Application | TestDriver | Playwright | Selenium |
- |:-----------------:|:---------:|:-----------:|:--------:|
- | Web Apps | ✅ | ✅ | ✅ |
- | Mobile Apps | | | ✅ |
- | VS Code | ✅ | ✅ | ✅ |
- | Desktop Apps | ✅ | | |
- | Chrome Extensions | ✅ | | |
+ | Application | TestDriver | Playwright | Selenium |
+ |:-----------------:|:---------:|:-----------:|:--------:| | Web Apps | ✅ |
+ | | | Mobile Apps | ✅ | ✅ | ✅ | | VS Code | ✅ | ✅ | ✅ | | Desktop
+ Apps | | | | | Chrome Extensions | | | |
  </div>

  ## Testing features
@@ -24,16 +21,11 @@ TestDriver operates a full desktop environment, so it can run any application.
  TestDriver is AI first.

  <div className="comparison-table">
- | Feature | TestDriver | Playwright | Selenium |
- |:--------------------:|:---------:|:----------:|:--------:|
- | Test Generation | ✅ | | |
- | Adaptive Testing | ✅ | | |
- | Visual Assertions | ✅ | | |
- | Self Healing | ✅ | | |
- | Application Switching | ✅ | | |
- | GitHub Actions | ✅ | ✅ | |
- | Team Dashboard | ✅ | | |
- | Team Collaboration | ✅ | | |
+ | Feature | TestDriver | Playwright | Selenium |
+ |:--------------------:|:---------:|:----------:|:--------:| | Test Generation
+ | | | | | Adaptive Testing | | | | | Visual Assertions | ✅ | | | | Self
+ Healing | | | | | Application Switching | | | | | GitHub Actions | ✅ |
+ | | | Team Dashboard | | | | | Team Collaboration | ✅ | | |
  </div>

  ## Test coverage
@@ -61,15 +53,11 @@ TestDriver has more coverage than selector-based frameworks.
  Debugging features are powered by [Dashcam.io](https://dashcam.io).

  <div className="comparison-table">
- | Feature | TestDriver | Playwright | Selenium |
- |:------------------:|:----------:|:----------:|:--------:|
- | AI Summary | ✅ | | |
- | Video Replay | ✅ | ✅ | |
- | Browser Logs | | | |
- | Desktop Logs | ✅ | | |
- | Network Requests | ✅ | ✅ | |
- | Team Dashboard | ✅ | | |
- | Team Collaboration | ✅ | | |
+ | Feature | TestDriver | Playwright | Selenium |
+ |:------------------:|:----------:|:----------:|:--------:| | AI Summary | ✅
+ | | | | Video Replay | || | | Browser Logs | ✅ | ✅ | | | Desktop Logs
+ | | | | | Network Requests | | ✅ | | | Team Dashboard | ✅ | | | | Team
+ Collaboration | | | |
  </div>

  ## Web browser support
@@ -77,15 +65,10 @@ Debugging features are powered by [Dashcam.io](https://dashcam.io).
  TestDriver is browser agnostic and supports any version of any browser.

  <div className="comparison-table">
- | Feature | TestDriver | Playwright | Selenium |
- |:--------:|:----------:|:----------:|:--------:|
- | Chrome | ✅ | ✅ | ✅ |
- | Firefox | ✅ | ✅ | ✅ |
- | Webkit | ✅ | ✅ | ✅ |
- | IE | ✅ | | ✅ |
- | Edge | ✅ | ✅ | ✅ |
- | Opera | ✅ | | ✅ |
- | Safari | ✅ | | ✅ |
+ | Feature | TestDriver | Playwright | Selenium |
+ |:--------:|:----------:|:----------:|:--------:| | Chrome | ✅ | ✅ | ✅ | |
+ Firefox | | ✅ | ✅ | | Webkit | | ✅ | ✅ | | IE | ✅ | | ✅ | | Edge |
+ | | ✅ | | Opera | | | | | Safari | ✅ | | ✅ |
  </div>

  ## Operating system support
@@ -93,9 +76,7 @@ TestDriver is browser agnostic and supports any version of any browser.
  TestDriver currently supports Mac and Windows!

  <div className="comparison-table">
- | Feature | TestDriver | Playwright | Selenium |
- |:--------:|:----------:|:----------:|:--------:|
- | Windows | ✅ | ✅ | ✅ |
- | Mac | ✅ | ✅ | ✅ |
- | Linux | | ✅ | ✅ |
+ | Feature | TestDriver | Playwright | Selenium |
+ |:--------:|:----------:|:----------:|:--------:| | Windows | ✅ | ✅ | ✅ | |
+ Mac | | ✅ | ✅ | | Linux | | | ✅ |
  </div>
@@ -6,7 +6,6 @@ icon: "gauge-high"
  mode: "wide"
  ---

-
  <Steps>
  <Step title="Create a TestDriver Account">

@@ -78,7 +77,7 @@ mode: "wide"
  </Step>
  <Step title="Run the generated regression test">

- After TestDriver has run the exploratory test, you'll see that the `prompt.yaml` file has been updated with commands generated by the agent to make the test faster and more reliable.
+ After TestDriver has run the exploratory test, you'll see that the `prompt.yaml` file has been updated with commands generated by the agent to make the test faster and more reliable.

  ```yaml
  version: 6.0.0
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "testdriverai",
- "version": "6.1.8-canary.5d001de.0",
+ "version": "6.1.8-canary.e81d80a.0",
  "description": "Next generation autonomous AI agent for end-to-end testing of web & desktop",
  "main": "index.js",
  "bin": {
@@ -8,6 +8,7 @@ set -euo pipefail
  : "${AWS_LAUNCH_TEMPLATE_VERSION:=\$Latest}"
  : "${AWS_TAG_PREFIX:=td}"
  : "${RUNNER_CLASS_ID:=default}"
+ : "${RESOLUTION:=1440x900}"

  TAG_NAME="${AWS_TAG_PREFIX}-"$(date +%s)
  WS_CONFIG_PATH='C:\Windows\Temp\pyautogui-ws.json'
@@ -19,7 +20,7 @@ RUN_JSON=$(aws ec2 run-instances \
  --region "$AWS_REGION" \
  --image-id "$AMI_ID" \
  --launch-template "LaunchTemplateId=$AWS_LAUNCH_TEMPLATE_ID,Version=$AWS_LAUNCH_TEMPLATE_VERSION" \
- --tag-specifications "ResourceType=instance,Tags=[{Key=Name,Value=${TAG_NAME}},{Key=Class,Value=${RUNNER_CLASS_ID}}]" \
+ --tag-specifications "ResourceType=instance,Tags=[{Key=Name,Value=${TAG_NAME}},{Key=Class,Value=${RUNNER_CLASS_ID}},{Key=TD_RESOLUTION,Value=${RESOLUTION}}]" \
  --output json)

  INSTANCE_ID=$(jq -r '.Instances[0].InstanceId' <<<"$RUN_JSON")
@@ -143,7 +144,7 @@ done

  echo "Getting Public IP..."

- # # --- 4) Get instance Public IP ---
+ # # --- 5) Get instance Public IP ---
  DESC_JSON=$(aws ec2 describe-instances --region "$AWS_REGION" --instance-ids "$INSTANCE_ID" --output json)
  PUBLIC_IP=$(jq -r '.Reservations[0].Instances[0].PublicIpAddress // empty' <<<"$DESC_JSON")
  [ -n "$PUBLIC_IP" ] || PUBLIC_IP="No public IP assigned"
@@ -151,7 +152,7 @@ PUBLIC_IP=$(jq -r '.Reservations[0].Instances[0].PublicIpAddress // empty' <<<"$
  # echo "Getting Websocket Port..."


- # --- 5) Read WebSocket config JSON ---
+ # --- 6) Read WebSocket config JSON ---
  echo "Reading WebSocket configuration from: $WS_CONFIG_PATH"
  READ_JSON=$(aws ssm send-command \
  --region "$AWS_REGION" \
@@ -182,7 +183,7 @@ if [ -n "$STDERR" ] && [ "$STDERR" != "null" ]; then
  fi
  echo "WebSocket config raw output: $STDOUT"

- # --- 6) Output results ---
+ # --- 7) Output results ---
  echo "Setup complete!"
  echo "PUBLIC_IP=$PUBLIC_IP"
  echo "INSTANCE_ID=$INSTANCE_ID"