visual-ai-assertions 0.8.0 → 0.10.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,6 +1,6 @@
  # visual-ai-assertions
 
- AI-powered visual assertions for E2E tests. Send screenshots to Claude, GPT, or Gemini and get structured, typed results.
+ AI-powered visual assertions for E2E tests. Send screenshots — or short video recordings — to Claude, GPT, or Gemini and get structured, typed results.
 
  ## Installation
 
@@ -112,9 +112,9 @@ const ai = visualAI({
  });
  ```
 
- ### `ai.check(image, statements, options?)`
+ ### `ai.check(input, statements, options?)`
 
- Visual assertion. Returns `pass: true` only if ALL statements are true.
+ Visual assertion against a screenshot or short video. Returns `pass: true` only if ALL statements are true. For video inputs, a statement passes when it is true at any sampled frame.
 
  ```typescript
  // Single statement
@@ -130,6 +130,12 @@ const result = await ai.check(screenshot, [
  const result = await ai.check(screenshot, ["The form is submitted"], {
    instructions: ["Ignore loading spinners that appear briefly"],
  });
+
+ // Video input — statement is true if it ever happens during the clip
+ const result = await ai.check("./recording.webm", [
+   'A success toast with text "Saved" briefly appears',
+ ]);
+ console.log(result.statements[0].timestampSeconds); // e.g. 3.5
  ```
 
  **Returns:** `CheckResult`
@@ -149,9 +155,9 @@ const result = await ai.check(screenshot, ["The form is submitted"], {
  }
  ```
 
- ### `ai.ask(image, prompt, options?)`
+ ### `ai.ask(input, prompt, options?)`
 
- Free-form analysis. Returns structured issues with priority and category.
+ Free-form analysis of an image or video. Returns structured issues with priority and category. Video inputs are sampled into a frame timeline; the result includes `frameReferences` indicating which frames the model relied on.
 
  ```typescript
  const result = await ai.ask(screenshot, "Analyze this page for UI issues");
@@ -312,6 +318,33 @@ await ai.check("https://example.com/screenshot.png", "...");
 
  Oversized images are automatically resized to provider limits.
 
+ ### Video Input
+
+ `ai.check()` and `ai.ask()` also accept short video recordings (`.mp4`, `.webm`, `.mov`, `.mkv`) — useful for asserting on transient UI like toast messages. Accepted shapes are file path, `data:video/...;base64,...` URL, raw base64 string, `Buffer`, and `Uint8Array`. HTTP/HTTPS URLs are not supported for video inputs — fetch the bytes yourself first.
+
+ ```typescript
+ // Playwright recording on disk
+ const result = await ai.check("./trace/video/recording.webm", [
+   'A success toast with text "Saved" briefly appears',
+ ]);
+
+ // Result includes frame metadata + per-statement timestamps
+ console.log(result.frames);
+ // { count: 4, timestampsSeconds: [0.5, 1.5, 2.5, 3.5], durationSeconds: 4.0 }
+ console.log(result.statements[0].timestampSeconds); // 3.5
+
+ // Override sampling — defaults are 1 fps, max 10 frames, max 10 s of video
+ await ai.check("./long-clip.mp4", ["Loader disappears"], {
+   video: { fps: 2, maxFrames: 20, maxDurationSeconds: 15 },
+ });
+ ```
+
+ `maxFrames` is hard-capped at 60 to keep memory bounded. Frames are downscaled so the longer edge fits within 1568 px before being sent to the provider.
+
+ How it works: the library samples frames with ffmpeg and sends them to the provider as an ordered timeline. A statement passes when it is true at any sampled frame, unless its wording specifies otherwise (e.g. "throughout"). Template helpers (`accessibility`, `layout`, `pageLoad`, `content`, `elementsVisible`, `elementsHidden`) are image-only — pass video to `check()` or `ask()` instead.
+
+ **ffmpeg setup.** Video support works out of the box — `fluent-ffmpeg`, `@ffmpeg-installer/ffmpeg`, and `@ffprobe-installer/ffprobe` ship as regular dependencies and bundle platform-specific ffmpeg/ffprobe binaries. If you ran `npm install` you already have everything you need. On platforms where the prebuilt binary is unavailable (or if you've pruned dependencies), `check()` and `ask()` throw `VisualAIVideoError` (import from `visual-ai-assertions` to `instanceof`-narrow it) when called with video input.
+
  ### Formatting & Assertion Helpers
 
  ```typescript
@@ -356,6 +389,9 @@ try {
    case "IMAGE_INVALID":
      // Invalid image: corrupt, unsupported format, etc.
      break;
+   case "VIDEO_INVALID":
+     // Invalid video: missing ffmpeg deps, oversized clip, decode failure, etc.
+     break;
    case "RESPONSE_PARSE_FAILED":
      // AI returned unparseable response — error.rawResponse has raw text
      break;
@@ -387,13 +423,15 @@ The `VisualAIKnownError` union and `isVisualAIKnownError()` helper are useful wh
 
  ### Optional Configuration
 
- | Variable | Description |
- | -------------------------- | -------------------- |
- | `VISUAL_AI_MODEL` | Default model when `model` is not set in config. Overrides the provider's default model. |
- | `VISUAL_AI_DEBUG` | Enable error diagnostic logging to stderr. Does **not** enable prompt/response logging. Use `"true"` or `"1"`. |
- | `VISUAL_AI_DEBUG_PROMPT` | Enable prompt-only debug logging to stderr. Use `"true"` or `"1"`. |
- | `VISUAL_AI_DEBUG_RESPONSE` | Enable response-only debug logging to stderr. Use `"true"` or `"1"`. |
- | `VISUAL_AI_TRACK_USAGE` | Enable usage tracking (token counts and cost) to stderr. Use `"true"` or `"1"`. |
+ | Variable | Description |
+ | ---------------------------- | -------------------- |
+ | `VISUAL_AI_MODEL` | Default model when `model` is not set in config. Overrides the provider's default model. |
+ | `VISUAL_AI_DEBUG` | Enable error diagnostic logging to stderr. Does **not** enable prompt/response logging. Use `"true"` or `"1"`. |
+ | `VISUAL_AI_DEBUG_PROMPT` | Enable prompt-only debug logging to stderr. Use `"true"` or `"1"`. |
+ | `VISUAL_AI_DEBUG_RESPONSE` | Enable response-only debug logging to stderr. Use `"true"` or `"1"`. |
+ | `VISUAL_AI_DEBUG_FRAMES` | Persist sampled video frames to disk for offline inspection. Use `"true"` or `"1"`. Frames are written to `./visual-ai-debug-frames/<timestamp>-<id>/` (override path with the next variable). Has no effect on image-only inputs. |
+ | `VISUAL_AI_DEBUG_FRAMES_DIR` | Override the base directory for `VISUAL_AI_DEBUG_FRAMES`. Each call still gets its own timestamped subdirectory inside it. |
+ | `VISUAL_AI_TRACK_USAGE` | Enable usage tracking (token counts and cost) to stderr. Use `"true"` or `"1"`. |
 
  ## Configuration
 
@@ -415,7 +453,12 @@ import type {
    AskResult,
    CheckResult,
    CompareResult,
+   Frame,
+   MediaInput,
    SupportedMimeType,
+   SupportedVideoMimeType,
+   VideoFramesMetadata,
+   VideoSamplingOptions,
    VisualAIConfig,
    VisualAIErrorCode,
  } from "visual-ai-assertions";