testdriverai 7.1.4 → 7.2.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.github/workflows/acceptance.yaml +81 -0
- package/.github/workflows/publish.yaml +44 -0
- package/agent/index.js +18 -19
- package/agent/lib/commands.js +321 -121
- package/agent/lib/redraw.js +99 -39
- package/agent/lib/sandbox.js +98 -6
- package/agent/lib/sdk.js +25 -0
- package/agent/lib/system.js +2 -1
- package/agent/lib/validation.js +6 -6
- package/docs/docs.json +211 -101
- package/docs/snippets/tests/type-repeated-replay.mdx +1 -1
- package/docs/v7/_drafts/caching-selectors.mdx +24 -0
- package/docs/v7/api/act.mdx +1 -1
- package/docs/v7/api/assert.mdx +1 -1
- package/docs/v7/api/assertions.mdx +7 -7
- package/docs/v7/api/elements.mdx +78 -0
- package/docs/v7/api/find.mdx +38 -0
- package/docs/v7/api/focusApplication.mdx +2 -2
- package/docs/v7/api/hover.mdx +2 -2
- package/docs/v7/features/ai-native.mdx +57 -71
- package/docs/v7/features/application-logs.mdx +353 -0
- package/docs/v7/features/browser-logs.mdx +414 -0
- package/docs/v7/features/cache-management.mdx +402 -0
- package/docs/v7/features/continuous-testing.mdx +346 -0
- package/docs/v7/features/coverage.mdx +508 -0
- package/docs/v7/features/data-driven-testing.mdx +441 -0
- package/docs/v7/features/easy-to-write.mdx +2 -73
- package/docs/v7/features/enterprise.mdx +155 -39
- package/docs/v7/features/fast.mdx +63 -81
- package/docs/v7/features/managed-sandboxes.mdx +384 -0
- package/docs/v7/features/network-monitoring.mdx +568 -0
- package/docs/v7/features/observable.mdx +3 -22
- package/docs/v7/features/parallel-execution.mdx +381 -0
- package/docs/v7/features/powerful.mdx +1 -1
- package/docs/v7/features/reports.mdx +414 -0
- package/docs/v7/features/sandbox-customization.mdx +229 -0
- package/docs/v7/features/scalable.mdx +217 -2
- package/docs/v7/features/stable.mdx +106 -147
- package/docs/v7/features/system-performance.mdx +616 -0
- package/docs/v7/features/test-analytics.mdx +373 -0
- package/docs/v7/features/test-cases.mdx +393 -0
- package/docs/v7/features/test-replays.mdx +408 -0
- package/docs/v7/features/test-reports.mdx +308 -0
- package/docs/v7/getting-started/{running-and-debugging.mdx → debugging-tests.mdx} +12 -142
- package/docs/v7/getting-started/quickstart.mdx +22 -305
- package/docs/v7/getting-started/running-tests.mdx +173 -0
- package/docs/v7/overview/what-is-testdriver.mdx +2 -14
- package/docs/v7/presets/chrome-extension.mdx +147 -122
- package/interfaces/cli/commands/init.js +3 -3
- package/interfaces/cli/lib/base.js +3 -2
- package/interfaces/logger.js +0 -2
- package/interfaces/shared-test-state.mjs +0 -5
- package/interfaces/vitest-plugin.mjs +69 -42
- package/lib/core/Dashcam.js +65 -66
- package/lib/vitest/hooks.mjs +42 -50
- package/package.json +1 -1
- package/sdk-log-formatter.js +350 -175
- package/sdk.js +431 -116
- package/setup/aws/cloudformation.yaml +2 -2
- package/setup/aws/self-hosted.yml +1 -1
- package/test/testdriver/chrome-extension.test.mjs +55 -72
- package/test/testdriver/element-not-found.test.mjs +2 -1
- package/test/testdriver/hover-image.test.mjs +1 -1
- package/test/testdriver/scroll-until-text.test.mjs +10 -6
- package/test/testdriver/setup/lifecycleHelpers.mjs +19 -24
- package/test/testdriver/setup/testHelpers.mjs +18 -23
- package/vitest.config.mjs +3 -3
- package/.github/workflows/linux-tests.yml +0 -28
- package/docs/v7/getting-started/generating-tests.mdx +0 -525
- package/test/testdriver/auto-cache-key-demo.test.mjs +0 -56
package/docs/v7/features/test-analytics.mdx
@@ -0,0 +1,373 @@
---
title: "Test Analytics"
description: "Visualize test trends, success rates, and performance metrics over time"
icon: "chart-line"
---

Track test performance and trends with comprehensive analytics dashboards. Monitor pass rates, identify flaky tests, and optimize test suite performance.

## Analytics Dashboard

Access detailed analytics at [app.testdriver.ai/analytics](https://app.testdriver.ai/analytics):

<Card title="Test Analytics" icon="chart-simple" href="https://app.testdriver.ai/analytics">
  Visualize test trends, success rates, and performance metrics
</Card>

### Key Metrics

<CardGroup cols={2}>
  <Card title="Pass Rate" icon="percent" color="green">
    Overall success rate across all tests
    - Current: 45%
    - Trend over time
    - Target threshold alerts
  </Card>

  <Card title="Unique Failing Tests" icon="triangle-exclamation" color="yellow">
    Tests that failed at least once
    - Count: 21 unique tests
    - Flaky test detection
    - Failure patterns
  </Card>

  <Card title="Average Duration" icon="clock" color="blue">
    Mean test execution time
    - Current: 1m 57s
    - Performance trends
    - Slowest tests identified
  </Card>

  <Card title="Total Tests" icon="list-check" color="purple">
    All test cases tracked
    - Count: 111 tests
    - Growth over time
    - Active vs deprecated
  </Card>

  <Card title="Total Duration" icon="stopwatch" color="cyan">
    Cumulative execution time
    - Current: 3h 37m
    - Cost implications
    - Optimization opportunities
  </Card>
</CardGroup>

## Pass Rate Over Time

Track test stability trends:

```
Pass Rate Trend Chart:
- X-axis: Date (Dec 2 - Dec 9)
- Y-axis: Pass percentage (0% - 100%)
- Green line showing pass rate progression
- Hover to see exact values per day
```

**Insights:**
- Identify when test stability declined
- Correlate with code changes
- Set pass rate goals (e.g., >95%)
- Alert on threshold breaches

## Test Results Over Time

Monitor test execution patterns:

```
Stacked Bar Chart:
- X-axis: Date
- Y-axis: Number of tests
- Green: Passed tests
- Red: Failed tests
- Orange: Skipped tests
```

**Use Cases:**
- Spot unusual failure spikes
- Track test suite growth
- Identify testing gaps
- Validate CI/CD health

## Status Distribution

Understand test outcomes at a glance:

```
Pie/Bar Chart:
- Passed: 50 tests (45%)
- Failed: 40 tests (36%)
- Skipped: 21 tests (19%)
```

**Actionable Data:**
- High skip rate may indicate incomplete features
- Persistent failures need investigation
- Pass rate benchmarks per team

## Platform Distribution

See where tests run:

```
Donut Chart:
- Windows: 40%
- Linux: 35%
- macOS: 25%
```

**Cross-Platform Testing:**
- Ensure coverage across all platforms
- Identify platform-specific issues
- Optimize resource allocation

## Filtering & Time Periods

Customize analytics views:

<Tabs>
  <Tab title="Time Period">
    ```
    Last 24 hours
    Last week (default)
    Last month
    Last quarter
    Custom date range
    ```
  </Tab>

  <Tab title="Branch">
    ```
    main (default)
    develop
    All branches
    Specific branch
    ```
  </Tab>

  <Tab title="Repository">
    ```
    All repositories
    frontend
    backend
    mobile
    ```
  </Tab>

  <Tab title="Suite">
    ```
    All test suites
    Specific test file
    Test directory
    ```
  </Tab>
</Tabs>

## Flaky Test Detection

Identify unreliable tests automatically:

<AccordionGroup>
  <Accordion title="What are Flaky Tests?">
    Tests that intermittently pass and fail without code changes.

    **Common Causes:**
    - Race conditions
    - Timing issues
    - External dependencies
    - Non-deterministic behavior
    - Network instability
  </Accordion>

  <Accordion title="Detection Algorithm">
    TestDriver identifies flaky tests by analyzing:

    1. **Pass/Fail History** - Tests with alternating results
    2. **Failure Rate** - Between 1% and 99% (not always passing/failing)
    3. **Time Window** - Over the last 30 days or 100 runs
    4. **Confidence Score** - Statistical significance
  </Accordion>

  <Accordion title="Flaky Test Report">
    Dashboard shows:
    - Test name and file
    - Flakiness score (0-100%)
    - Recent pass/fail pattern
    - Suggested fixes
    - Link to failed runs
  </Accordion>
</AccordionGroup>

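The criteria above boil down to simple arithmetic over a test's recent run history. Here is a minimal sketch of that kind of scoring, assuming a `runs` array of booleans (newest first) sourced from your own results; the helper, its 0.01/0.99 bounds, and the flip-rate score are illustrative and not the TestDriver detection implementation.

```javascript
// Sketch of flaky-test scoring over a recent run history (newest first).
// `runs` is an array of booleans (true = passed). Illustrative only.
function flakinessScore(runs, windowSize = 100) {
  const recent = runs.slice(0, windowSize);
  if (recent.length < 2) return 0;

  const failureRate = recent.filter((passed) => !passed).length / recent.length;
  // Always-passing or always-failing tests are not flaky.
  if (failureRate <= 0.01 || failureRate >= 0.99) return 0;

  // Count pass<->fail flips: alternating results are the strongest flaky signal.
  let flips = 0;
  for (let i = 1; i < recent.length; i++) {
    if (recent[i] !== recent[i - 1]) flips++;
  }
  const flipRate = flips / (recent.length - 1);

  return Math.round(flipRate * 100); // 0-100% flakiness score
}

// A test that fails every other run scores near 100%.
console.log(flakinessScore([true, false, true, false, true, false]));
```
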
## Performance Trends

Monitor test execution speed:

```
Duration Trend Chart:
- X-axis: Date
- Y-axis: Duration (seconds/minutes)
- Line showing average test duration
- Highlight performance improvements
```

**Optimization Tracking:**
- Measure impact of caching
- Identify slow tests
- Track infrastructure changes
- Validate parallelization benefits

## Slowest Tests

Find performance bottlenecks:

```
Top 10 Slowest Tests:
1. integration/checkout.test.js - 3m 45s
2. e2e/full-workflow.test.js - 2m 58s
3. api/stress-test.test.js - 2m 12s
...
```

**Action Items:**
- Optimize slow tests first (80/20 rule)
- Break down into smaller tests
- Add caching strategies
- Parallelize where possible

## Export Analytics

Download analytics data:

<Tabs>
  <Tab title="CSV">
    ```csv
    Date,Pass Rate,Total Tests,Passed,Failed,Skipped,Avg Duration
    2024-12-09,45%,111,50,40,21,1m57s
    2024-12-08,52%,108,56,35,17,1m45s
    2024-12-07,48%,105,50,38,17,2m03s
    ```
  </Tab>

  <Tab title="JSON">
    ```json
    {
      "analytics": {
        "passRate": 0.45,
        "totalTests": 111,
        "passed": 50,
        "failed": 40,
        "skipped": 21,
        "avgDuration": "1m57s",
        "trends": [
          {
            "date": "2024-12-09",
            "passRate": 0.45,
            "totalTests": 111
          }
        ]
      }
    }
    ```
  </Tab>

  <Tab title="PDF Report">
    Generate executive summary with:
    - Key metrics overview
    - Trend visualizations
    - Flaky test list
    - Recommendations
    - Cost analysis
  </Tab>
</Tabs>

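The exported JSON is easy to feed into your own tooling. As a minimal sketch, the Node ES module below reads a downloaded export and fails a CI step when the pass rate drops below a goal; the `analytics.json` path and the 95% goal are illustrative assumptions, not part of the export format.

```javascript
// Sketch: gate CI on the exported pass rate (run as an ES module).
// Assumes the JSON export above has been saved to ./analytics.json.
import { readFile } from 'node:fs/promises';

const goal = 0.95; // illustrative threshold
const report = JSON.parse(await readFile('./analytics.json', 'utf8'));
const { passRate, failed, totalTests } = report.analytics;

console.log(`Pass rate: ${(passRate * 100).toFixed(1)}% (${failed}/${totalTests} failing)`);

if (passRate < goal) {
  console.error(`Pass rate is below the ${(goal * 100).toFixed(0)}% goal`);
  process.exit(1); // non-zero exit fails the CI step
}
```
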
## Custom Dashboards

Create team-specific views:

```javascript
// Use the API to build custom dashboards
const analytics = await fetch('https://api.testdriver.ai/v1/analytics', {
  method: 'POST', // required when sending a JSON body
  headers: {
    'Authorization': `Bearer ${TD_API_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    timeRange: 'last-week',
    branch: 'main',
    groupBy: 'day'
  })
});

const data = await analytics.json();
// Build custom visualizations with your preferred tool
```

## Alerts & Notifications
|
|
306
|
+
|
|
307
|
+
Get notified of critical changes:
|
|
308
|
+
|
|
309
|
+
<CardGroup cols={2}>
|
|
310
|
+
<Card title="Pass Rate Alert" icon="bell">
|
|
311
|
+
Notify when pass rate drops below threshold
|
|
312
|
+
- Email notification
|
|
313
|
+
- Slack/Teams message
|
|
314
|
+
- Configurable threshold (e.g., <90%)
|
|
315
|
+
</Card>
|
|
316
|
+
|
|
317
|
+
<Card title="Flaky Test Alert" icon="triangle-exclamation">
|
|
318
|
+
Alert on newly detected flaky tests
|
|
319
|
+
- Daily digest
|
|
320
|
+
- Per-test notifications
|
|
321
|
+
- Trend reports
|
|
322
|
+
</Card>
|
|
323
|
+
|
|
324
|
+
<Card title="Performance Alert" icon="gauge">
|
|
325
|
+
Warn when tests slow down
|
|
326
|
+
- 20%+ duration increase
|
|
327
|
+
- Slowest test changes
|
|
328
|
+
- Cost impact
|
|
329
|
+
</Card>
|
|
330
|
+
|
|
331
|
+
<Card title="Failure Spike Alert" icon="chart-line-up">
|
|
332
|
+
Detect unusual failure patterns
|
|
333
|
+
- 2x normal failure rate
|
|
334
|
+
- Multiple test failures
|
|
335
|
+
- Same-day correlation
|
|
336
|
+
</Card>
|
|
337
|
+
</CardGroup>
|
|
338
|
+
|
|
339
|
+
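If you prefer to wire notifications up yourself, the analytics endpoint shown above can feed any channel you already use. The sketch below polls it and posts to a Slack incoming webhook when the pass rate falls under a threshold; the response shape is assumed to match the JSON export above, and the `SLACK_WEBHOOK_URL` variable and 90% threshold are illustrative, not built-in TestDriver alert configuration.

```javascript
// Sketch: custom pass-rate alert via a Slack incoming webhook.
// Assumes the /v1/analytics response matches the JSON export shape above;
// the API key and webhook URL are read from the environment.
const THRESHOLD = 0.9; // illustrative threshold

const res = await fetch('https://api.testdriver.ai/v1/analytics', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${process.env.TD_API_KEY}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({ timeRange: 'last-week', branch: 'main' }),
});
const { analytics } = await res.json();

if (analytics.passRate < THRESHOLD) {
  await fetch(process.env.SLACK_WEBHOOK_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      text: `Pass rate dropped to ${(analytics.passRate * 100).toFixed(1)}% on main`,
    }),
  });
}
```
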
## Learn More

<CardGroup cols={2}>
  <Card
    title="Test Reports"
    icon="list-check"
    href="/v7/features/test-reports"
  >
    View individual test runs
  </Card>

  <Card
    title="Test Cases"
    icon="file-lines"
    href="/v7/features/test-cases"
  >
    Detailed test case history
  </Card>

  <Card
    title="Flake Prevention"
    icon="shield-check"
    href="/v7/features/stable"
  >
    Learn about anti-flake technology
  </Card>

  <Card
    title="Performance"
    icon="gauge-high"
    href="/v7/features/fast"
  >
    Optimize test execution
  </Card>
</CardGroup>