testdriverai 7.9.0 → 7.9.1-test

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (38)
  1. package/agent/lib/sandbox.js +55 -6
  2. package/agent/lib/sdk.js +4 -4
  3. package/ai/skills/testdriver-enterprise/SKILL.md +2 -109
  4. package/ai/skills/testdriver-hosted/SKILL.md +156 -0
  5. package/ai/skills/testdriver-mcp/SKILL.md +2 -2
  6. package/ai/skills/testdriver-quickstart/SKILL.md +30 -2
  7. package/ai/skills/testdriver-self-hosted/SKILL.md +125 -43
  8. package/ai/skills/testdriver-test-results-json/SKILL.md +257 -0
  9. package/docs/_scripts/generate-examples.js +127 -60
  10. package/docs/docs.json +27 -28
  11. package/docs/v7/examples/ai.mdx +4 -3
  12. package/docs/v7/examples/assert.mdx +19 -4
  13. package/docs/v7/examples/chrome-extension.mdx +36 -29
  14. package/docs/v7/examples/element-not-found.mdx +2 -1
  15. package/docs/v7/examples/exec-output.mdx +3 -4
  16. package/docs/v7/examples/exec-pwsh.mdx +3 -4
  17. package/docs/v7/examples/findall-coffee-icons.mdx +88 -0
  18. package/docs/v7/examples/focus-window.mdx +3 -4
  19. package/docs/v7/examples/hover-image.mdx +4 -3
  20. package/docs/v7/examples/hover-text-with-description.mdx +104 -0
  21. package/docs/v7/examples/hover-text.mdx +4 -3
  22. package/docs/v7/examples/installer.mdx +5 -4
  23. package/docs/v7/examples/launch-vscode-linux.mdx +3 -7
  24. package/docs/v7/examples/match-image.mdx +3 -2
  25. package/docs/v7/examples/parse.mdx +66 -0
  26. package/docs/v7/examples/press-keys.mdx +8 -14
  27. package/docs/v7/examples/scroll-keyboard.mdx +4 -3
  28. package/docs/v7/examples/scroll-until-image.mdx +3 -2
  29. package/docs/v7/examples/scroll.mdx +6 -14
  30. package/docs/v7/examples/type.mdx +1 -5
  31. package/docs/v7/examples/windows-installer.mdx +10 -4
  32. package/interfaces/vitest-plugin.mjs +2 -2
  33. package/lib/core/Dashcam.js +4 -1
  34. package/lib/sentry.js +5 -1
  35. package/package.json +1 -1
  36. package/setup/aws/install-dev-runner.sh +7 -2
  37. package/setup/aws/spawn-runner.sh +12 -0
  38. package/vitest.config.mjs +1 -1
@@ -1,23 +1,48 @@
  ---
  name: testdriver:self-hosted
- description: Unlimited test execution, complete privacy, and the ability to customize everything — all for a predictable flat license fee.
+ description: Our enterprise solution with unlimited test execution, assisted setup, and dedicated support.
  ---
  <!-- Generated from self-hosted.mdx. DO NOT EDIT. -->
 
- Self-hosted pricing is based on **parallel test capacity**: the number of tests you can run simultaneously on **your infrastructure**.
-
- With self-hosting, you get:.
-
- - **Flat license fee** per parallel test slot
- - **Unlimited test execution** — run as many tests as you want
- - **No device-second metering** predictable monthly costs
- - **Use your own AI keys** control data usage with your own OpenAI, Anthropic, or other AI provider keys
- - **Custom hardware & software** — choose instance types, resolution, install specific software, and configure networking as needed
- - **Debug & Customize** — RDP into test machines, install custom software, modify the AMI, and debug issues directly. No black boxes.
-
- ## Get Started
-
- Ready to self-host? Follow our comprehensive AWS setup guide:
+ Self-hosted is our enterprise solution for teams that need unlimited test execution, infrastructure control, and dedicated support. Pricing is based on **parallel test capacity** with a flat license fee and no per-second billing.
+
+ <CardGroup cols={2}>
+ <Card title="Unlimited Execution" icon="infinity">
+ Run as many tests as you want with no device-second metering. Predictable monthly costs.
+ </Card>
+ <Card title="Assisted Setup & Support" icon="headset">
+ Our team helps you deploy, configure, and optimize your infrastructure. Dedicated engineering support included.
+ </Card>
+ <Card title="Full Control" icon="gear">
+ Use your own AI keys, custom hardware, specific software, and network configurations. RDP into test machines for debugging.
+ </Card>
+ <Card title="Security & Compliance" icon="shield-check">
+ Keep data in your environment. Air-gapped deployment available for regulated industries.
+ </Card>
+ </CardGroup>
+
+ ## Deployment Options
+
+ Choose the level of control you need:
+
+ | Component | Standard | Air-Gapped |
+ |-----------|----------|------------|
+ | **Test Sandboxes** | Your AWS | Your infrastructure (any cloud or on-prem) |
+ | **Dashboard** | TestDriver hosted | Your infrastructure |
+ | **API** | TestDriver hosted | Your infrastructure |
+ | **AI Processing** | Your API keys | Your infrastructure |
+ | **Data Storage** | Your AWS account | 100% your infrastructure |
+ | **Network** | Internet access required | Fully air-gapped |
+ | **Cloud Providers** | AWS | AWS, Azure, GCP, on-prem |
+
+ ### Standard Deployment
+
+ Run test sandboxes on your AWS infrastructure while using TestDriver's hosted dashboard and API:
+
+ - **Quick setup** via CloudFormation — deploy in hours
+ - **Dashboard access** at [console.testdriver.ai](https://console.testdriver.ai)
+ - **Your AI keys** — control costs with your own OpenAI, Anthropic, or other provider
+ - **Custom AMIs** — install specific software, configure networking
 
  <Card
  title="AWS Setup Guide"
@@ -27,39 +52,96 @@ Ready to self-host? Follow our comprehensive AWS setup guide:
  Step-by-step instructions for deploying TestDriver on your AWS infrastructure using CloudFormation.
  </Card>
 
+ ### Air-Gapped Deployment
+
+ Deploy the entire TestDriver stack in your environment for complete isolation:
+
+ - **Full stack** — dashboard, API, and test infrastructure all in your environment
+ - **No external dependencies** — data never leaves your network perimeter
+ - **Any infrastructure** — AWS, Azure, GCP, or on-premises
+ - **Regulated industries** — government, defense, healthcare, finance
+
+ ## Custom VM Images
+
+ Build test environments with your applications, dependencies, and user data pre-installed. You get full access to:
+
+ - **Golden VM** — our pre-configured base image with TestDriver agent, drivers, and optimizations
+ - **Packer scripts** — build custom AMIs with your applications, user data, and configurations
+ - **Faster test startup** — skip installation steps by baking dependencies into your image
+ - **Consistent environments** — every test runs on an identical, reproducible machine
+
+ <AccordionGroup>
+ <Accordion title="What can you customize?">
+ - Install applications (browsers, desktop apps, dev tools)
+ - Configure user accounts and credentials
+ - Set up network proxies and certificates
+ - Install fonts, language packs, and locales
+ - Pre-seed databases or test fixtures
+ - Configure Windows/Linux settings
+ </Accordion>
+
+ <Accordion title="How it works">
+ 1. We provide our golden VM base image and Packer scripts
+ 2. You customize the scripts to install your software and configuration
+ 3. Run Packer to build your custom AMI
+ 4. Configure TestDriver to use your custom AMI for test sandboxes
+ 5. Tests spin up with everything pre-installed — no setup time wasted
+ </Accordion>
+ </AccordionGroup>
+
+ ## Implementation Process
+
+ <Steps>
+ <Step title="Discovery Call">
+ Discuss your requirements, security constraints, and integration needs with our team.
+ </Step>
+
+ <Step title="Architecture Review">
+ Our engineers design a deployment architecture that meets your security and compliance requirements.
+ </Step>
+
+ <Step title="Deployment">
+ We work with your team to deploy TestDriver, including assisted setup and configuration.
+ </Step>
+
+ <Step title="Integration">
+ Connect TestDriver to your CI/CD pipelines, internal tools, and workflows.
+ </Step>
+
+ <Step title="Training & Handoff">
+ Comprehensive training for your team on operating and maintaining the deployment.
+ </Step>
+ </Steps>
+
+ ## What's Included
 
- ## Who Should Self-Host?
-
- Self-hosting is ideal for teams that:
-
- - **Run high test volumes** — Flat pricing becomes more economical at scale
- - **Want infrastructure control** — Custom hardware, specific software dependencies, or network configurations
- - **Prefer predictable costs** — Budget with confidence using flat monthly fees
-
-
- ## How It Works
-
- With self-hosting, you run test sandboxes on your own AWS infrastructure. TestDriver still provides:
-
- - **Dashboard** — View test results, analytics, and reports at [console.testdriver.ai](https://console.testdriver.ai)
- - **API** — Orchestration and AI-powered test execution
- - **License Management** — Your parallel test capacity
-
- You provide:
-
- - **AWS Infrastructure** — EC2 instances running in your account
- - **AI API Keys** — Use your own OpenAI, Anthropic, or other AI provider keys
- - **Custom Configuration** — Hardware specs, networking, installed software
+ - **Flat license fee** per parallel test slot
+ - **Unlimited test execution** — no device-second charges
+ - **Assisted setup** — our team helps you deploy and configure
+ - **Dedicated support** — direct access to our engineering team
+ - **Custom contract terms** — volume-based pricing, custom SLAs
+ - **Professional services** — implementation assistance and training
 
- ## Comparison vs Cloud
+ ## Comparison: Hosted vs Self-Hosted
 
- | Feature | Cloud | Self-Hosted |
- |---------|-------|-------------|
- | **Setup Time** | Minutes | Hours |
+ | Feature | Hosted | Self-Hosted |
+ |---------|--------|-------------|
+ | **Setup Time** | Minutes | Hours (assisted) |
  | **Pricing Model** | Device-seconds | Flat license fee |
- | **Infrastructure Management** | TestDriver | You |
- | **Device Location** | TestDriver cloud | Your AWS account |
+ | **Infrastructure** | TestDriver | Your AWS or any cloud |
  | **AI API Keys** | TestDriver's | Your own |
  | **Custom Software** | Limited | Full control |
  | **Hardware Selection** | Standard | Your choice |
  | **Debugging Access** | Replays only | Full RDP access |
+ | **Support** | Community/Standard | Dedicated engineering |
+ | **Air-Gapped Option** | No | Yes |
+
+ ## Get Started
+
+ <Card
+ title="Schedule a Consultation"
+ icon="calendar"
+ href="https://testdriver.ai/demo"
+ >
+ Discuss your requirements with our team and get a custom proposal for your self-hosted deployment.
+ </Card>
@@ -0,0 +1,257 @@
+ ---
+ name: testdriver:test-results-json
+ description: Per-test JSON result files with metadata, versions, and infrastructure details
+ ---
+ <!-- Generated from test-results-json.mdx. DO NOT EDIT. -->
+
+ ## Overview
+
+ TestDriver automatically writes a JSON result file for each test case after it finishes. These files contain comprehensive metadata about the test run, including SDK and runner versions, infrastructure details, interaction statistics, and links to recordings.
+
+ Result files are written to:
+
+ ```
+ .testdriver/results/<testFile>/<testName>.json
+ ```
+
+ For example, a test file `tests/login.test.mjs` with a test named `"should log in"` produces:
+
+ ```
+ .testdriver/results/tests/login.test.mjs/should_log_in.json
+ ```
+
+ <Note>
+ Test names are sanitized for filesystem use — special characters are replaced with underscores and names are truncated to 200 characters.
+ </Note>
+
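The sanitization rule above can be sketched in code. This is a hypothetical illustration (the `resultPath` helper and its character set are assumptions, not the reporter's actual implementation):

```javascript
import path from "node:path";

// Hypothetical sketch of the sanitization described in the note above:
// replace special characters with underscores, cap the name at 200 chars.
function resultPath(testFile, testName) {
  const safeName = testName
    .replace(/[^a-zA-Z0-9._-]/g, "_") // replace filesystem-unsafe characters
    .slice(0, 200); // truncate long names
  return path.join(".testdriver", "results", testFile, `${safeName}.json`);
}

console.log(resultPath("tests/login.test.mjs", "should log in"));
// → .testdriver/results/tests/login.test.mjs/should_log_in.json
```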
+ ## Enabling
+
+ No configuration is required. The JSON files are written automatically by the TestDriver Vitest reporter plugin whenever tests run.
+
+ ## JSON Schema
+
+ Each result file is organized into logical groups:
+
+ ### `versions`
+
+ | Field | Type | Description |
+ |---|---|---|
+ | `versions.sdk` | `string \| null` | TestDriver SDK version (e.g. `"7.8.0"`) |
+ | `versions.vitest` | `string \| null` | Vitest version used to run the test |
+ | `versions.api` | `string \| null` | TestDriver API server version |
+ | `versions.runnerBefore` | `string \| null` | Runner version at sandbox start |
+ | `versions.runnerAfter` | `string \| null` | Runner version after auto-update |
+ | `versions.runnerWasUpdated` | `boolean` | Whether the runner was auto-updated during provisioning |
+
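As a quick illustration, a consumer of these files might surface runner auto-updates. The `runnerUpdateNote` helper is hypothetical; the field names follow the table above:

```javascript
// Hypothetical helper: describe a runner auto-update from a parsed result file.
function runnerUpdateNote(result) {
  if (!result.versions.runnerWasUpdated) return null;
  return `Runner auto-updated: ${result.versions.runnerBefore} -> ${result.versions.runnerAfter}`;
}

// Example with illustrative version numbers:
const note = runnerUpdateNote({
  versions: { runnerBefore: "2.1.0", runnerAfter: "2.1.1", runnerWasUpdated: true },
});
console.log(note); // → Runner auto-updated: 2.1.0 -> 2.1.1
```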
+ ### `test`
+
+ | Field | Type | Description |
+ |---|---|---|
+ | `test.file` | `string \| null` | Relative path to the test file |
+ | `test.name` | `string \| null` | Name of the test case |
+ | `test.suite` | `string \| null` | Name of the parent `describe` block |
+ | `test.passed` | `boolean` | Whether the test passed |
+ | `test.caseId` | `string \| null` | Database ID for this test case |
+ | `test.runId` | `string \| null` | Database ID for the overall test run |
+ | `test.error` | `string \| null` | Error message if the test failed |
+ | `test.errorStack` | `string \| null` | Error stack trace if the test failed |
+
+ ### `urls`
+
+ | Field | Type | Description |
+ |---|---|---|
+ | `urls.api` | `string \| null` | API root URL used for this test |
+ | `urls.console` | `string \| null` | TestDriver console base URL |
+ | `urls.vnc` | `string \| null` | VNC URL for the sandbox |
+ | `urls.testRun` | `string \| null` | Direct link to this test case in the console |
+
+ ### `replay`
+
+ The `replay` object contains the recording replay URL and derived embed links. The `gifUrl` and `embedUrl` are generated automatically from the replay URL.
+
+ | Field | Type | Description |
+ |---|---|---|
+ | `replay.url` | `string \| null` | Recording replay URL |
+ | `replay.gifUrl` | `string \| null` | Animated GIF thumbnail of the recording |
+ | `replay.embedUrl` | `string \| null` | Embeddable replay URL (appends `&embed=true`) |
+ | `replay.markdown` | `string \| null` | Ready-to-use Markdown embed with GIF linking to the replay |
+
+ The `replay.markdown` field produces a clickable GIF badge you can paste directly into PR comments, README files, or issue descriptions:
+
+ ```markdown
+ [![Test Recording](https://api.testdriver.ai/replay/abc123/gif?shareKey=xyz)](https://console.testdriver.ai/replay/abc123?share=xyz)
+ ```
+
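The `embedUrl` derivation described in the table above (appending `&embed=true`) can be sketched as follows. The `?` branch for URLs without an existing query string is an assumption:

```javascript
// Sketch of the embed-URL derivation described above. Assumption: when the
// replay URL has no query string, `?embed=true` is used instead of `&embed=true`.
function toEmbedUrl(replayUrl) {
  if (!replayUrl) return null;
  return replayUrl + (replayUrl.includes("?") ? "&" : "?") + "embed=true";
}

console.log(toEmbedUrl("https://console.testdriver.ai/replay/abc123?share=xyz"));
// → https://console.testdriver.ai/replay/abc123?share=xyz&embed=true
```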
+ ### `date`
+
+ | Field | Type | Description |
+ |---|---|---|
+ | `date` | `string` | ISO 8601 timestamp when the test finished |
+
+ ### `team`
+
+ | Field | Type | Description |
+ |---|---|---|
+ | `team.id` | `string \| null` | Team ID from the sandbox |
+ | `team.sessionId` | `string \| null` | SDK session ID |
+
+ ### `infrastructure`
+
+ | Field | Type | Description |
+ |---|---|---|
+ | `infrastructure.sandboxId` | `string \| null` | Sandbox instance ID |
+ | `infrastructure.instanceId` | `string \| null` | Instance ID |
+ | `infrastructure.os` | `string \| null` | Operating system of the sandbox (`"linux"` or `"windows"`) |
+ | `infrastructure.amiId` | `string \| null` | AWS AMI ID used for provisioning |
+ | `infrastructure.e2bTemplateId` | `string \| null` | E2B template ID used for provisioning |
+ | `infrastructure.imageVersion` | `string \| null` | Sandbox image version |
+
+ ### `realtime`
+
+ | Field | Type | Description |
+ |---|---|---|
+ | `realtime.channel` | `string \| null` | Ably channel name used for communication |
+ | `realtime.messageCount` | `number` | Number of messages published to the realtime channel |
+
+ ### `interactions`
+
+ | Field | Type | Description |
+ |---|---|---|
+ | `interactions.total` | `number` | Total number of interactions recorded |
+ | `interactions.cached` | `number` | Number of interactions served from cache |
+ | `interactions.byType` | `object` | Breakdown of interactions by type (e.g. `find`, `click`, `assert`) |
+
+ ## Example Output
+
+ ```json
+ {
+ "versions": {
+ "sdk": "7.8.0",
+ "vitest": "4.0.0",
+ "api": "1.45.0",
+ "runnerBefore": "2.1.0",
+ "runnerAfter": "2.1.1",
+ "runnerWasUpdated": true
+ },
+ "test": {
+ "file": "tests/login.test.mjs",
+ "name": "should log in",
+ "suite": null,
+ "passed": true,
+ "caseId": "def456",
+ "runId": "abc123",
+ "error": null,
+ "errorStack": null
+ },
+ "urls": {
+ "api": "https://api.testdriver.ai",
+ "console": "https://console.testdriver.ai",
+ "vnc": "wss://sandbox-123.testdriver.ai/vnc",
+ "testRun": "https://console.testdriver.ai/runs/abc123/def456"
+ },
+ "replay": {
+ "url": "https://console.testdriver.ai/replay/abc123?share=xyz",
+ "gifUrl": "https://api.testdriver.ai/replay/abc123/gif?shareKey=xyz",
+ "embedUrl": "https://console.testdriver.ai/replay/abc123?share=xyz&embed=true",
+ "markdown": "[![Test Recording](https://api.testdriver.ai/replay/abc123/gif?shareKey=xyz)](https://console.testdriver.ai/replay/abc123?share=xyz)"
+ },
+ "date": "2025-01-15T14:30:00.000Z",
+ "team": {
+ "id": "team_abc123",
+ "sessionId": "sess_xyz789"
+ },
+ "infrastructure": {
+ "sandboxId": "sandbox-123",
+ "instanceId": "i-abc123",
+ "os": "linux",
+ "amiId": "ami-0abc123",
+ "e2bTemplateId": null,
+ "imageVersion": "v2.1.0"
+ },
+ "realtime": {
+ "channel": "sandbox:sandbox-123",
+ "messageCount": 42
+ },
+ "interactions": {
+ "total": 15,
+ "cached": 3,
+ "byType": {
+ "find": 8,
+ "click": 5,
+ "assert": 2
+ }
+ }
+ }
+ ```
+
+ ## Using Result Files in CI
+
+ Result files are useful for extracting test metadata in CI pipelines without parsing log output.
+
+ ### GitHub Actions Example
+
+ Use `fromJSON` to parse a result file into a GitHub Actions expression you can reference in subsequent steps:
+
+ ```yaml
+ - name: Run tests
+ run: npx vitest run tests/login.test.mjs
+
+ - name: Parse result
+ id: result
+ run: |
+ # Read the first JSON result file, compacted to a single line
+ # (values written to $GITHUB_OUTPUT must not contain newlines)
+ FILE=$(find .testdriver/results -name '*.json' | head -n 1)
+ echo "json=$(jq -c . "$FILE")" >> "$GITHUB_OUTPUT"
+
+ - name: Comment on PR
+ if: fromJSON(steps.result.outputs.json).test.passed == false
+ uses: actions/github-script@v7
+ with:
+ script: |
+ const result = ${{ steps.result.outputs.json }};
+ await github.rest.issues.createComment({
+ owner: context.repo.owner,
+ repo: context.repo.repo,
+ issue_number: context.issue.number,
+ body: [
+ `❌ **${result.test.name}** failed`,
+ ``,
+ `Error: ${result.test.error}`,
+ ``,
+ result.replay.markdown,
+ ``,
+ `[View full recording](${result.urls.testRun})`
+ ].join('\n')
+ });
+ ```
+
+ You can also load all results into a matrix or iterate over them:
+
+ ```yaml
+ - name: Run tests
+ run: npx vitest run tests/*.test.mjs
+
+ - name: Collect results
+ id: results
+ run: |
+ # Merge all result files into a single-line JSON array for $GITHUB_OUTPUT
+ echo "json=$(find .testdriver/results -name '*.json' -exec cat {} + | jq -s -c '.')" >> "$GITHUB_OUTPUT"
+
+ - name: Summary
+ run: |
+ echo '## Test Results' >> $GITHUB_STEP_SUMMARY
+ echo '| Test | Status | Link |' >> $GITHUB_STEP_SUMMARY
+ echo '|---|---|---|' >> $GITHUB_STEP_SUMMARY
+ RESULTS='${{ steps.results.outputs.json }}'
+ echo "$RESULTS" | jq -r '.[] | "| \(.test.name) | \(if .test.passed then "✅" else "❌" end) | \(.urls.testRun) |"' >> $GITHUB_STEP_SUMMARY
+ ```
+
+ ### Reading Results Programmatically
+
+ ```javascript
+ import fs from "fs";
+ import path from "path";
+
+ const resultsDir = ".testdriver/results";
+
+ function readResults(dir) {
+ const results = [];
+ // Recursive readdirSync requires Node.js 18.17 or later
+ for (const entry of fs.readdirSync(dir, { recursive: true })) {
+ const fullPath = path.join(dir, entry);
+ if (fullPath.endsWith(".json") && fs.statSync(fullPath).isFile()) {
+ results.push(JSON.parse(fs.readFileSync(fullPath, "utf-8")));
+ }
+ }
+ return results;
+ }
+
+ const results = readResults(resultsDir);
+ const passed = results.filter(r => r.test.passed);
+ const failed = results.filter(r => !r.test.passed);
+
+ console.log(`${passed.length} passed, ${failed.length} failed`);
+ for (const r of failed) {
+ console.log(` FAIL: ${r.test.name} — ${r.test.error}`);
+ console.log(` Recording: ${r.urls.testRun}`);
+ console.log(` Embed: ${r.replay.markdown}`);
+ }
+ ```