simple-ffmpegjs 0.5.1 → 0.5.3

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -4,1754 +4,76 @@
 
  <p align="center">
  <a href="https://www.npmjs.com/package/simple-ffmpegjs"><img src="https://img.shields.io/npm/v/simple-ffmpegjs.svg" alt="npm version"></a>
+ <a href="https://github.com/Fats403/simple-ffmpeg/actions/workflows/ci.yml"><img src="https://github.com/Fats403/simple-ffmpeg/actions/workflows/ci.yml/badge.svg" alt="CI"></a>
  <a href="https://opensource.org/licenses/MIT"><img src="https://img.shields.io/badge/License-MIT-blue.svg" alt="License: MIT"></a>
- <a href="https://nodejs.org"><img src="https://img.shields.io/badge/node-%3E%3D18-brightgreen.svg" alt="Node.js"></a>
+ <a href="https://nodejs.org"><img src="https://img.shields.io/badge/node-%3E%3D20-brightgreen.svg" alt="Node.js ≥20"></a>
+ <a href="https://codecov.io/gh/Fats403/simple-ffmpegjs"><img src="https://codecov.io/gh/Fats403/simple-ffmpegjs/branch/main/graph/badge.svg" alt="Coverage"></a>
  </p>
 
  <p align="center">
  A lightweight Node.js library for programmatic video composition using FFmpeg.<br>
- Define your timeline as a simple array of clips, and the library handles the rest.
+ Define your timeline as a plain array of clips and the library builds the filter graph for you.
  </p>
 
- ## Table of Contents
+ ---
 
- - [Why simple-ffmpeg?](#why-simple-ffmpeg)
- - [Features](#features)
- - [Installation](#installation)
- - [Quick Start](#quick-start)
- - [Pre-Validation](#pre-validation)
- - [Schema Export](#schema-export)
- - [API Reference](#api-reference)
-   - [Constructor](#constructor)
-   - [Methods](#methods)
-   - [Auto-Sequencing & Duration Shorthand](#auto-sequencing--duration-shorthand)
-   - [Clip Types](#clip-types) — Video, Image, Color, Effect, Text, Subtitle, Audio, Background Music
- - [Platform Presets](#platform-presets)
- - [Watermarks](#watermarks)
- - [Progress Information](#progress-information)
- - [Logging](#logging)
- - [Error Handling](#error-handling)
- - [Cancellation](#cancellation)
- - [Examples](#examples)
-   - [Clips & Transitions](#clips--transitions)
-   - [Text & Animations](#text--animations)
-   - [Karaoke](#karaoke)
-   - [Subtitles](#subtitles)
-   - [Export Settings](#export-settings)
- - [Real-World Usage Patterns](#real-world-usage-patterns)
-   - [Data Pipeline](#data-pipeline-example)
-   - [AI Video Pipeline](#ai-video-generation-pipeline-example)
- - [Advanced](#advanced)
-   - [Timeline Behavior](#timeline-behavior)
-   - [Auto-Batching](#auto-batching)
- - [Testing](#testing)
- - [Contributing](#contributing)
- - [License](#license)
-
- ## Why simple-ffmpeg?
-
- FFmpeg is incredibly powerful, but its command-line interface is notoriously difficult to work with programmatically. Composing even a simple two-clip video with a crossfade requires navigating complex filter graphs, input mapping, and stream labeling. simple-ffmpeg abstracts all of that behind a declarative, config-driven API. You describe _what_ your video should look like, and the library figures out _how_ to build the FFmpeg command.
-
- The entire timeline is expressed as a plain array of clip objects, making it straightforward to generate configurations from any data source: databases, APIs, templates, or AI models. Structured validation with machine-readable error codes means you can catch problems early and handle them programmatically, whether that's logging a warning, retrying with corrected input, or surfacing feedback to an end user.
-
- ## Example Output
-
- <p align="center">
-   <a href="https://7llpl63xkl8jovgt.public.blob.vercel-storage.com/wonders-showcase-1.mp4">
-     <img src="https://7llpl63xkl8jovgt.public.blob.vercel-storage.com/simple-ffmpeg/wonders-thumbnail-1.jpg" alt="Example video - click to watch" width="640">
-   </a>
- </p>
-
- _Click to watch a "Wonders of the World" video created with simple-ffmpeg — combining multiple video clips with crossfade transitions, animated text overlays, and background music._
-
- ## Features
-
- **Video & Images**
- - **Video Concatenation** — Join multiple clips with optional xfade transitions
- - **Image Support** — Ken Burns effects (zoom, pan) for static images with intelligent aspect ratio handling
- - **Image Fitting** — Automatic blur-fill, cover, or contain modes when image aspect ratio differs from output
- - **Color Clips** — Flat colors and gradients (linear, radial) as first-class timeline clips with full transition support
-
- **Audio**
- - **Audio Mixing** — Layer audio tracks, voiceovers, and background music
-
- **Overlays & Effects**
- - **Text Overlays** — Static, word-by-word, and cumulative text with animations
- - **Emoji Support** — Opt-in emoji rendering via custom font + libass; stripped by default for clean output
- - **Text Animations** — Typewriter, scale-in, pulse, fade effects
- - **Karaoke Mode** — Word-by-word highlighting with customizable colors
- - **Subtitle Import** — Load SRT, VTT, ASS/SSA subtitle files
- - **Watermarks** — Text or image overlays with positioning and timing control
- - **Effect Clips** — Timed overlay effects (vignette, film grain, blur, color adjust, sepia, black & white, sharpen, chromatic aberration, letterbox) with fade-in/out envelopes
-
- **Analysis & Extraction**
- - **Keyframe Extraction** — Scene-change detection or fixed-interval frame sampling, returning in-memory buffers or files on disk
-
- **Developer Experience**
- - **Platform Presets** — Quick configuration for TikTok, YouTube, Instagram, etc.
- - **Progress Tracking** — Real-time export progress callbacks
- - **Cancellation** — AbortController support for stopping exports
- - **Auto-Batching** — Automatically splits complex filter graphs to avoid OS command limits
- - **Schema Export** — Generate a structured description of the clip format for documentation, code generation, or AI context
- - **Pre-Validation** — Validate clip configurations before processing with structured, machine-readable error codes
- - **TypeScript Ready** — Full type definitions included
- - **Zero Dependencies** — Only requires FFmpeg on your system
-
- ## Installation
+ ## Install
 
  ```bash
  npm install simple-ffmpegjs
  ```
 
- ### Prerequisites
-
- FFmpeg must be installed and available in your PATH:
-
- ```bash
- # macOS
- brew install ffmpeg
-
- # Ubuntu/Debian
- apt-get install ffmpeg
-
- # Windows
- # Download from https://ffmpeg.org/download.html
- ```
-
- For text overlays, ensure your FFmpeg build includes `libfreetype` and `fontconfig`. On minimal systems (Docker, Alpine), install a font package:
-
- ```bash
- # Alpine
- apk add --no-cache ffmpeg fontconfig ttf-dejavu
-
- # Debian/Ubuntu
- apt-get install -y ffmpeg fontconfig fonts-dejavu-core
- ```
-
- **Emoji in text overlays** are handled gracefully: by default, emoji characters are automatically detected and silently stripped from text to prevent blank boxes (tofu). To render emoji, pass an `emojiFont` path in the constructor:
-
- ```javascript
- const project = new SIMPLEFFMPEG({
-   width: 1920,
-   height: 1080,
-   emojiFont: '/path/to/NotoEmoji-Regular.ttf'
- });
- ```
-
- Recommended font: [Noto Emoji](https://fonts.google.com/noto/specimen/Noto+Emoji) (B&W outline, ~2 MB, SIL OFL). Download from [Google Fonts](https://fonts.google.com/noto/specimen/Noto+Emoji) or [GitHub](https://github.com/google/fonts/raw/main/ofl/notoemoji/NotoEmoji%5Bwght%5D.ttf). When an emoji font is configured, emoji text is routed through libass (ASS subtitle path) with inline `\fn` font switching for per-glyph rendering.
-
- > **Note:** Emoji render as monochrome outlines because libass does not yet support color emoji font formats. The shapes are recognizable and correctly spaced, just not multi-colored. Without `emojiFont`, emoji are stripped and a one-time console warning is logged.
+ FFmpeg must be installed and available in your `PATH`.
 
- ## Quick Start
+ ## Quick example
 
  ```js
  import SIMPLEFFMPEG from "simple-ffmpegjs";
 
- // Use a platform preset — or set width/height/fps manually
  const project = new SIMPLEFFMPEG({ preset: "youtube" });
 
  await project.load([
-   // Two video clips with a crossfade transition between them
-   { type: "video", url: "./opening-shot.mp4", position: 0, end: 6 },
+   { type: "video", url: "./intro.mp4", duration: 5 },
    {
      type: "video",
-     url: "./highlights.mp4",
-     position: 5.5,
-     end: 18,
-     cutFrom: 3, // start 3s into the source file
+     url: "./clip2.mp4",
+     duration: 6,
      transition: { type: "fade", duration: 0.5 },
    },
-
-   // Title card with a pop animation
    {
      type: "text",
-     text: "Summer Highlights 2025",
+     text: "Summer Highlights",
      position: 0.5,
      end: 4,
-     fontFile: "./fonts/Montserrat-Bold.ttf",
-     fontSize: 72,
-     fontColor: "#FFFFFF",
-     borderColor: "#000000",
-     borderWidth: 2,
-     xPercent: 0.5,
-     yPercent: 0.4,
-     animation: { type: "pop", in: 0.3 },
-   },
-
-   // Background music — loops to fill the whole video
-   { type: "music", url: "./chill-beat.mp3", volume: 0.2, loop: true },
- ]);
-
- await project.export({
-   outputPath: "./summer-highlights.mp4",
-   onProgress: ({ percent }) => console.log(`${percent}% complete`),
- });
- ```
-
- ## Pre-Validation
-
- Validate clip configurations before creating a project. Useful for catching errors early in data pipelines, form-based editors, or any workflow where configurations are generated dynamically:
-
- ```js
- import SIMPLEFFMPEG from "simple-ffmpegjs";
-
- const clips = [
-   { type: "video", url: "./intro.mp4", position: 0, end: 5 },
-   { type: "text", text: "Hello", position: 1, end: 4 },
- ];
-
- // Validate without creating a project
- const result = SIMPLEFFMPEG.validate(clips, {
-   skipFileChecks: true, // Skip file existence checks (useful when files aren't on disk yet)
-   width: 1920, // Project dimensions (for Ken Burns size validation)
-   height: 1080,
-   strictKenBurns: false, // If true, undersized Ken Burns images error instead of warn (default: false)
- });
-
- if (!result.valid) {
-   // Structured errors for programmatic handling
-   result.errors.forEach((err) => {
-     console.log(`[${err.code}] ${err.path}: ${err.message}`);
-     // e.g. [MISSING_REQUIRED] clips[0].url: URL is required for media clips
-   });
- }
-
- // Or get human-readable output
- console.log(SIMPLEFFMPEG.formatValidationResult(result));
- ```
-
- ### Validation Codes
-
- Access error codes programmatically for custom handling:
-
- ```js
- const { ValidationCodes } = SIMPLEFFMPEG;
-
- // Available codes:
- // INVALID_TYPE, MISSING_REQUIRED, INVALID_VALUE, INVALID_RANGE,
- // INVALID_TIMELINE, TIMELINE_GAP, FILE_NOT_FOUND, INVALID_FORMAT,
- // INVALID_WORD_TIMING, OUTSIDE_BOUNDS
-
- if (result.errors.some((e) => e.code === ValidationCodes.TIMELINE_GAP)) {
-   // Handle gap-specific logic
- }
- ```
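The `result.errors` array described above is plain data, so downstream handling is ordinary JavaScript. A minimal sketch of grouping failures by code before logging or retrying — the `result` object below is a hand-written fixture mirroring the documented `{ code, path, message }` shape, not real `validate()` output:

```javascript
// Hand-written fixture mirroring the documented validation result shape.
const result = {
  valid: false,
  errors: [
    { code: "MISSING_REQUIRED", path: "clips[0].url", message: "URL is required for media clips" },
    { code: "TIMELINE_GAP", path: "clips[2]", message: "Gap before clip" },
    { code: "MISSING_REQUIRED", path: "clips[3].text", message: "Text is required" },
  ],
};

// Group error paths by code so each failure class is handled once.
function groupByCode(errors) {
  const groups = {};
  for (const err of errors) {
    (groups[err.code] ??= []).push(err.path);
  }
  return groups;
}

console.log(groupByCode(result.errors));
// { MISSING_REQUIRED: ['clips[0].url', 'clips[3].text'], TIMELINE_GAP: ['clips[2]'] }
```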
-
- ## Schema Export
-
- Export a structured, human-readable description of all clip types accepted by `load()`. The output is designed to serve as context for LLMs, documentation generators, code generation tools, or anything that needs to understand the library's clip format.
-
- ### Basic Usage
-
- ```js
- // Get the full schema (all clip types)
- const schema = SIMPLEFFMPEG.getSchema();
- console.log(schema);
- ```
-
- The output is a formatted text document with type definitions, allowed values, usage notes, and examples for each clip type.
-
- ### Filtering Modules
-
- The schema is broken into modules — one per clip type. You can include or exclude modules to control exactly what appears in the output:
-
- ```js
- // Only include video and image clip types
- const schema = SIMPLEFFMPEG.getSchema({ include: ["video", "image"] });
-
- // Include everything except text and subtitle
- const schema = SIMPLEFFMPEG.getSchema({ exclude: ["text", "subtitle"] });
-
- // See all available module IDs
- SIMPLEFFMPEG.getSchemaModules();
- // ['video', 'audio', 'image', 'color', 'effect', 'text', 'subtitle', 'music']
- ```
-
- Available modules:
-
- | Module     | Covers                                                      |
- | ---------- | ----------------------------------------------------------- |
- | `video`    | Video clips, transitions, volume, trimming                  |
- | `audio`    | Standalone audio clips                                      |
- | `image`    | Image clips, Ken Burns effects, image fitting modes         |
- | `color`    | Color clips — flat colors, linear/radial gradients          |
- | `effect`   | Overlay adjustment effects — vignette, grain, blur, color adjust, sepia, B&W, sharpen, chromatic aberration, letterbox |
- | `text`     | Text overlays — all modes, animations, positioning, styling |
- | `subtitle` | Subtitle file import (SRT, VTT, ASS, SSA)                   |
- | `music`    | Background music / background audio, looping                |
-
- ### Custom Instructions
-
- Embed your own instructions directly into the schema output. Top-level instructions appear at the beginning, and per-module instructions are placed inside the relevant section — formatted identically to the built-in notes:
-
- ```js
- const schema = SIMPLEFFMPEG.getSchema({
-   include: ["video", "image", "music"],
-   instructions: [
-     "You are creating short cooking tutorials for TikTok.",
-     "Keep all videos under 30 seconds.",
-   ],
-   moduleInstructions: {
-     video: [
-       "Always use fade transitions at 0.5s.",
-       "Limit to 5 clips maximum.",
-     ],
-     music: "Always include background music at volume 0.15.",
-   },
- });
- ```
-
- Both `instructions` and `moduleInstructions` values accept a `string` or `string[]`. Per-module instructions for excluded modules are silently ignored.
-
- ## API Reference
-
- ### Constructor
-
- ```ts
- new SIMPLEFFMPEG(options?: {
-   width?: number;          // Output width (default: 1920)
-   height?: number;         // Output height (default: 1080)
-   fps?: number;            // Frame rate (default: 30)
-   validationMode?: 'warn' | 'strict'; // Validation behavior (default: 'warn')
-   preset?: string;         // Platform preset (e.g., 'tiktok', 'youtube', 'instagram-post')
-   fontFile?: string;       // Default font file for all text clips (individual clips can override)
-   emojiFont?: string;      // Path to emoji font .ttf for opt-in emoji rendering (stripped by default)
-   tempDir?: string;        // Custom temp directory for intermediate files (default: OS temp)
- })
- ```
-
- **Custom Temp Directory:**
-
- Set `tempDir` to route all temporary files (gradient images, unrotated videos, text/subtitle temp files, batch intermediate renders) to a custom location. Useful for fast SSDs, ramdisks, Docker containers with limited `/tmp`, or any environment where temp storage performance matters:
-
- ```ts
- const project = new SIMPLEFFMPEG({
-   preset: "youtube",
-   tempDir: "/mnt/fast-nvme/tmp",
- });
- ```
-
- When not set, temp files go to the OS default (`os.tmpdir()`) or next to the output file, depending on the operation. Cross-filesystem moves are handled automatically.
-
- When `fontFile` is set at the project level, every text clip (including karaoke) inherits it automatically. You can still override it on any individual clip:
-
- ```js
- const project = new SIMPLEFFMPEG({
-   preset: "tiktok",
-   fontFile: "./fonts/Montserrat-Bold.ttf", // applies to all text clips
- });
-
- await project.load([
-   { type: "video", url: "intro.mp4", position: 0, end: 10 },
-   // Uses the global font
-   { type: "text", text: "Hello!", position: 1, end: 4, fontSize: 72 },
-   // Overrides with a different font
-   { type: "text", text: "Special", position: 5, end: 8, fontFile: "./fonts/Italic.otf" },
- ]);
- ```
-
- ### Methods
-
- #### `project.load(clips)`
-
- Load clip descriptors into the project. Validates the timeline and reads media metadata.
-
- ```ts
- await project.load(clips: Clip[]): Promise<void[]>
- ```
-
- #### `SIMPLEFFMPEG.getDuration(clips)`
-
- Calculate the total visual timeline duration from a clips array. Handles `duration` and auto-sequencing shorthand, and subtracts transition overlaps. Pure function — no file I/O.
-
- ```ts
- const clips = [
-   { type: "video", url: "./a.mp4", duration: 5 },
-   {
-     type: "video",
-     url: "./b.mp4",
-     duration: 10,
-     transition: { type: "fade", duration: 0.5 },
-   },
- ];
- SIMPLEFFMPEG.getDuration(clips); // 14.5
- ```
-
- Useful for computing text overlay timings or background music end times before calling `load()`.
-
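The documented rule — total duration is the sum of clip durations minus each transition's overlap with the previous clip — can be sketched in a few lines. This is an illustrative re-implementation for explanation only, not the library's code, and it assumes every clip uses the `duration` shorthand:

```javascript
// Illustrative sketch of the documented duration math:
// total = sum of clip durations, minus each transition's overlap.
// NOT the library implementation — assumes all clips use `duration` shorthand.
function timelineDuration(clips) {
  return clips.reduce((total, clip, i) => {
    const overlap = i > 0 && clip.transition ? clip.transition.duration : 0;
    return total + clip.duration - overlap;
  }, 0);
}

const clips = [
  { type: "video", url: "./a.mp4", duration: 5 },
  { type: "video", url: "./b.mp4", duration: 10, transition: { type: "fade", duration: 0.5 } },
];

console.log(timelineDuration(clips)); // 14.5 — matches SIMPLEFFMPEG.getDuration above
```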
- #### `SIMPLEFFMPEG.probe(filePath)`
-
- Probe a media file and return comprehensive metadata using ffprobe. Works with video, audio, and image files.
-
- ```ts
- const info = await SIMPLEFFMPEG.probe("./video.mp4");
- // {
- //   duration: 30.5,        // seconds
- //   width: 1920,           // pixels
- //   height: 1080,          // pixels
- //   hasVideo: true,
- //   hasAudio: true,
- //   rotation: 0,           // iPhone/mobile rotation
- //   videoCodec: "h264",
- //   audioCodec: "aac",
- //   format: "mov,mp4,m4a,3gp,3g2,mj2",
- //   fps: 30,
- //   size: 15728640,        // bytes
- //   bitrate: 4125000,      // bits/sec
- //   sampleRate: 48000,     // Hz
- //   channels: 2            // stereo
- // }
- ```
-
- Fields that don't apply to the file type are `null` (e.g. `width`/`height`/`videoCodec`/`fps` for audio-only files, `audioCodec`/`sampleRate`/`channels` for video-only files).
-
- Throws `MediaNotFoundError` if the file cannot be found or probed.
-
- ```ts
- // Audio file
- const audio = await SIMPLEFFMPEG.probe("./music.wav");
- console.log(audio.hasVideo);    // false
- console.log(audio.duration);    // 180.5
- console.log(audio.sampleRate);  // 44100
- ```
-
- #### `SIMPLEFFMPEG.snapshot(filePath, options)`
-
- Capture a single frame from a video file and save it as an image. This is a static method — no project instance needed.
-
- The output format is determined by the `outputPath` file extension. FFmpeg handles format detection internally, so `.jpg` produces JPEG, `.png` produces PNG, `.webp` produces WebP, etc.
-
- ```ts
- await SIMPLEFFMPEG.snapshot("./video.mp4", {
-   outputPath: "./frame.png",
-   time: 5,
- });
- ```
-
- **Snapshot Options:**
-
- | Option       | Type     | Default | Description                                                                |
- | ------------ | -------- | ------- | -------------------------------------------------------------------------- |
- | `outputPath` | `string` | -       | **Required.** Output image path (extension determines format)              |
- | `time`       | `number` | `0`     | Time in seconds to capture the frame at                                    |
- | `width`      | `number` | -       | Output width in pixels (maintains aspect ratio if height omitted)          |
- | `height`     | `number` | -       | Output height in pixels (maintains aspect ratio if width omitted)          |
- | `quality`    | `number` | `2`     | JPEG quality 1-31, lower is better (only applies to `.jpg`/`.jpeg` output) |
-
- **Supported formats:** `.jpg` / `.jpeg`, `.png`, `.webp`, `.bmp`, `.tiff`
-
- ```ts
- // Save as JPEG with quality control and resize
- await SIMPLEFFMPEG.snapshot("./video.mp4", {
-   outputPath: "./thumb.jpg",
-   time: 10,
-   width: 640,
-   quality: 4,
- });
-
- // Save as WebP
- await SIMPLEFFMPEG.snapshot("./video.mp4", {
-   outputPath: "./preview.webp",
-   time: 0,
- });
- ```
-
- #### `SIMPLEFFMPEG.extractKeyframes(filePath, options)`
-
- Extract keyframes from a video using scene-change detection or fixed time intervals. This is a static method — no project instance needed.
-
- **Scene-change mode** (default) uses FFmpeg's `select=gt(scene,N)` filter to intelligently detect visual transitions and extract frames at cut points. **Interval mode** extracts frames at fixed time intervals.
-
- When `outputDir` is provided, frames are written to disk and the method returns an array of file paths. Without `outputDir`, frames are returned as in-memory `Buffer` objects (no temp files left behind).
-
- ```ts
- // Scene-change detection — returns Buffer[]
- const frames = await SIMPLEFFMPEG.extractKeyframes("./video.mp4", {
-   mode: "scene-change",
-   sceneThreshold: 0.4,
-   maxFrames: 8,
-   format: "jpeg",
- });
-
- // Fixed interval — writes to disk, returns string[]
- const paths = await SIMPLEFFMPEG.extractKeyframes("./video.mp4", {
-   mode: "interval",
-   intervalSeconds: 5,
-   outputDir: "./frames/",
-   format: "png",
- });
- ```
-
- **Keyframe Options:**
-
- | Option            | Type     | Default          | Description                                                                     |
- | ----------------- | -------- | ---------------- | ------------------------------------------------------------------------------- |
- | `mode`            | `string` | `'scene-change'` | `'scene-change'` for intelligent detection, `'interval'` for fixed time spacing |
- | `sceneThreshold`  | `number` | `0.3`            | Scene detection sensitivity 0-1 (lower = more frames). Scene-change mode only.  |
- | `intervalSeconds` | `number` | `5`              | Seconds between frames. Interval mode only.                                     |
- | `maxFrames`       | `number` | -                | Maximum number of frames to extract                                             |
- | `format`          | `string` | `'jpeg'`         | Output format: `'jpeg'` or `'png'`                                              |
- | `quality`         | `number` | -                | JPEG quality 1-31, lower is better (only applies to JPEG)                       |
- | `width`           | `number` | -                | Output width in pixels (maintains aspect ratio if height omitted)               |
- | `height`          | `number` | -                | Output height in pixels (maintains aspect ratio if width omitted)               |
- | `outputDir`       | `string` | -                | Directory to write frames to. If omitted, returns `Buffer[]` instead.           |
- | `tempDir`         | `string` | `os.tmpdir()`    | Custom temp directory (only when `outputDir` is not set). Useful for fast SSDs or ramdisks. |
-
- ```ts
- // Scene-change with resize and JPEG quality
- const frames = await SIMPLEFFMPEG.extractKeyframes("./long-video.mp4", {
-   sceneThreshold: 0.25,
-   maxFrames: 12,
-   width: 640,
-   quality: 4,
- });
-
- // One frame every 10 seconds, saved as PNG
- const paths = await SIMPLEFFMPEG.extractKeyframes("./presentation.mp4", {
-   mode: "interval",
-   intervalSeconds: 10,
-   outputDir: "./thumbnails/",
-   format: "png",
- });
- ```
-
- Throws `FFmpegError` if FFmpeg fails during extraction.
-
- #### `project.export(options)`
-
- Build and execute the FFmpeg command to render the final video.
-
- ```ts
- await project.export(options?: ExportOptions): Promise<string>
- ```
-
- **Export Options:**
-
- | Option                  | Type          | Default          | Description                                                                      |
- | ----------------------- | ------------- | ---------------- | -------------------------------------------------------------------------------- |
- | `outputPath`            | `string`      | `'./output.mp4'` | Output file path                                                                 |
- | `videoCodec`            | `string`      | `'libx264'`      | Video codec (`libx264`, `libx265`, `libvpx-vp9`, `prores_ks`, hardware encoders) |
- | `crf`                   | `number`      | `23`             | Quality level (0-51, lower = better)                                             |
- | `preset`                | `string`      | `'medium'`       | Encoding preset (`ultrafast` to `veryslow`)                                      |
- | `videoBitrate`          | `string`      | -                | Target bitrate (e.g., `'5M'`). Overrides CRF.                                    |
- | `audioCodec`            | `string`      | `'aac'`          | Audio codec (`aac`, `libmp3lame`, `libopus`, `flac`, `copy`)                     |
- | `audioBitrate`          | `string`      | `'192k'`         | Audio bitrate                                                                    |
- | `audioSampleRate`       | `number`      | `48000`          | Audio sample rate in Hz                                                          |
- | `hwaccel`               | `string`      | `'none'`         | Hardware acceleration (`auto`, `videotoolbox`, `nvenc`, `vaapi`, `qsv`)          |
- | `outputWidth`           | `number`      | -                | Scale output width                                                               |
- | `outputHeight`          | `number`      | -                | Scale output height                                                              |
- | `outputResolution`      | `string`      | -                | Resolution preset (`'720p'`, `'1080p'`, `'4k'`)                                  |
- | `audioOnly`             | `boolean`     | `false`          | Export audio only (no video)                                                     |
- | `twoPass`               | `boolean`     | `false`          | Two-pass encoding for better quality                                             |
- | `metadata`              | `object`      | -                | Embed metadata (title, artist, etc.)                                             |
- | `thumbnail`             | `object`      | -                | Generate thumbnail image                                                         |
- | `verbose`               | `boolean`     | `false`          | Enable verbose logging                                                           |
- | `saveCommand`           | `string`      | -                | Save FFmpeg command to file                                                      |
- | `onProgress`            | `function`    | -                | Progress callback                                                                |
- | `onLog`                 | `function`    | -                | FFmpeg log callback (see [Logging](#logging) section)                            |
- | `signal`                | `AbortSignal` | -                | Cancellation signal                                                              |
- | `watermark`             | `object`      | -                | Add watermark overlay (see Watermarks section)                                   |
- | `compensateTransitions` | `boolean`     | `true`           | Auto-adjust text timings for transition overlap (see below)                      |
-
- #### `project.preview(options)`
-
- Get the FFmpeg command without executing it. Useful for debugging or dry runs.
-
- ```ts
- await project.preview(options?: ExportOptions): Promise<{
-   command: string;        // Full FFmpeg command
-   filterComplex: string;  // Filter graph
-   totalDuration: number;  // Expected output duration
- }>
- ```
-
- ### Auto-Sequencing & Duration Shorthand
-
- For video, image, and audio clips, you can use shorthand to avoid specifying explicit `position` and `end` values:
-
- - **`duration`** — Use instead of `end`. The library computes `end = position + duration`. You cannot specify both `duration` and `end` on the same clip.
- - **Omit `position`** — The clip is placed immediately after the previous clip on its track. Video and image clips share the visual track; audio clips have their own track. The first clip defaults to `position: 0`.
-
- These can be combined:
-
- ```ts
- // Before: manual position/end for every clip
- await project.load([
-   { type: "video", url: "./a.mp4", position: 0, end: 5 },
-   { type: "video", url: "./b.mp4", position: 5, end: 10 },
-   { type: "video", url: "./c.mp4", position: 10, end: 18, cutFrom: 3 },
- ]);
-
- // After: auto-sequencing + duration
- await project.load([
-   { type: "video", url: "./a.mp4", duration: 5 },
-   { type: "video", url: "./b.mp4", duration: 5 },
-   { type: "video", url: "./c.mp4", duration: 8, cutFrom: 3 },
- ]);
- ```
-
- You can mix explicit and implicit positioning freely. Clips with explicit `position` are placed there; subsequent auto-sequenced clips follow from the last clip's end:
-
- ```ts
- await project.load([
-   { type: "video", url: "./a.mp4", duration: 5 },           // position: 0, end: 5
-   { type: "video", url: "./b.mp4", position: 10, end: 15 }, // explicit gap
-   { type: "video", url: "./c.mp4", duration: 5 },           // position: 15, end: 20
- ]);
- ```
-
- Text clips always require an explicit `position` (they're overlays on specific moments). Background music and subtitle clips already have optional `position`/`end` with their own defaults.
-
605
- ### Clip Types
606
-
607
- #### Video Clip
608
-
609
- ```ts
610
- {
611
- type: "video";
612
- url: string; // File path
613
- position?: number; // Timeline start (seconds). Omit to auto-sequence after previous clip.
614
- end?: number; // Timeline end (seconds). Use end OR duration, not both.
615
- duration?: number; // Duration in seconds (alternative to end). end = position + duration.
616
- cutFrom?: number; // Source offset (default: 0)
617
- volume?: number; // Audio volume (default: 1)
618
- transition?: {
619
- type: string; // Any xfade transition (e.g., 'fade', 'wipeleft', 'dissolve')
620
- duration: number; // Transition duration in seconds
621
- };
622
- }
623
- ```
624
-
625
- All [xfade transitions](https://trac.ffmpeg.org/wiki/Xfade) are supported.
626
-
627
- #### Image Clip
628
-
629
- ```ts
630
- {
631
- type: "image";
632
- url: string;
633
- position?: number; // Omit to auto-sequence after previous video/image clip
634
- end?: number; // Use end OR duration, not both
635
- duration?: number; // Duration in seconds (alternative to end)
636
- width?: number; // Optional: source image width (skip probe / override)
637
- height?: number; // Optional: source image height (skip probe / override)
638
- imageFit?: "cover" | "contain" | "blur-fill"; // How to handle aspect ratio mismatch (see below)
639
- blurIntensity?: number; // Blur strength for blur-fill background (default: 40, range: 10-80)
640
- kenBurns?:
641
- | "zoom-in" | "zoom-out" | "pan-left" | "pan-right" | "pan-up" | "pan-down"
642
- | "smart" | "custom"
643
- | {
644
- type?: "zoom-in" | "zoom-out" | "pan-left" | "pan-right" | "pan-up" | "pan-down" | "smart" | "custom";
645
- startZoom?: number;
646
- endZoom?: number;
647
- startX?: number; // 0 = left, 1 = right
648
- startY?: number; // 0 = top, 1 = bottom
649
- endX?: number;
650
- endY?: number;
651
- anchor?: "top" | "bottom" | "left" | "right";
652
- easing?: "linear" | "ease-in" | "ease-out" | "ease-in-out";
653
- };
654
- }
655
- ```
656
-
657
- **Image Fitting (`imageFit`):**
658
-
659
- When an image's aspect ratio doesn't match the output (e.g., a landscape photo in a portrait video), `imageFit` controls how the mismatch is resolved:
660
-
661
- | Mode | Behavior | Default for |
662
- |---|---|---|
663
- | `blur-fill` | Scale to fit, fill empty space with a blurred version of the image | Static images (no Ken Burns) |
664
- | `cover` | Scale to fill the entire frame, center-crop any excess | Ken Burns images |
665
- | `contain` | Scale to fit within the frame, pad with black bars | — |
666
-
667
- If `imageFit` is not specified, the library picks the best default: **`blur-fill`** for static images (produces polished output similar to TikTok/Reels) and **`cover`** for Ken Burns images (ensures full-frame cinematic motion).
668
-
669
- ```ts
670
- // Landscape photo in a portrait video — blurred background fills the bars (default)
671
- { type: "image", url: "./landscape.jpg", duration: 5 }
672
-
673
- // Explicit cover — crops to fill the frame
674
- { type: "image", url: "./landscape.jpg", duration: 5, imageFit: "cover" }
675
-
676
- // Black bars (letterbox/pillarbox)
677
- { type: "image", url: "./landscape.jpg", duration: 5, imageFit: "contain" }
678
-
679
- // Stronger blur effect
680
- { type: "image", url: "./landscape.jpg", duration: 5, imageFit: "blur-fill", blurIntensity: 70 }
681
- ```
682
-
683
- **Ken Burns + imageFit:** When using Ken Burns with `blur-fill` or `contain`, the pan/zoom motion applies only to the image content — the blurred background or black bars remain static, matching the behavior of modern phone video editors. Source dimensions (`width`/`height`) are required for Ken Burns + `blur-fill`/`contain`; without them the clip falls back to `cover`.
-
- ```ts
- // Ken Burns zoom on contained image with blurred background
- {
- type: "image",
- url: "./landscape.jpg",
- duration: 5,
- width: 1920,
- height: 1080,
- kenBurns: "zoom-in",
- imageFit: "blur-fill",
- }
-
- // Ken Burns pan with black bars
- {
- type: "image",
- url: "./landscape.jpg",
- duration: 5,
- width: 1920,
- height: 1080,
- kenBurns: "pan-right",
- imageFit: "contain",
- }
- ```
-
- #### Color Clip
-
- Color clips add flat colors or gradients as first-class visual elements. They support transitions, text overlays, and all the same timeline features as video and image clips. Use them for intros, outros, title cards, or anywhere you need a background.
-
- ```ts
- {
- type: "color";
- color: string | { // Flat color string or gradient spec
- type: "linear-gradient" | "radial-gradient";
- colors: string[]; // 2+ color stops (named, hex, or 0x hex)
- direction?: "vertical" | "horizontal"; // For linear gradients (default: "vertical")
- };
- position?: number; // Timeline start (seconds). Omit to auto-sequence.
- end?: number; // Timeline end. Use end OR duration, not both.
- duration?: number; // Duration in seconds (alternative to end).
- transition?: {
- type: string; // Any xfade transition (e.g., 'fade', 'wipeleft')
- duration: number;
- };
- }
- ```
-
- `color` accepts any valid FFmpeg color name or hex code:
-
- ```ts
- { type: "color", color: "navy", position: 0, end: 3 }
- { type: "color", color: "#1a1a2e", position: 0, end: 3 }
- ```
-
- **Gradients:**
-
- ```ts
- // Linear gradient (vertical by default)
- {
- type: "color",
- color: { type: "linear-gradient", colors: ["#0a0a2e", "#4a148c"] },
- position: 0,
- end: 4,
- }
-
- // Horizontal linear gradient
- {
- type: "color",
- color: { type: "linear-gradient", colors: ["#e74c3c", "#f1c40f", "#2ecc71"], direction: "horizontal" },
- position: 0,
- end: 4,
- }
-
- // Radial gradient
- {
- type: "color",
- color: { type: "radial-gradient", colors: ["#ff8c00", "#1a0000"] },
- position: 0,
- end: 3,
- }
- ```
-
- **With transitions:**
-
- ```ts
- await project.load([
- { type: "color", color: "black", position: 0, end: 3 },
- {
- type: "video",
- url: "./main.mp4",
- position: 3,
- end: 8,
- transition: { type: "fade", duration: 0.5 },
- },
- {
- type: "color",
- color: { type: "radial-gradient", colors: ["#2c3e50", "#000000"] },
- position: 8,
- end: 11,
- transition: { type: "fade", duration: 0.5 },
- },
- {
- type: "text",
- text: "The End",
- position: 8.5,
- end: 10.5,
  fontSize: 64,
- fontColor: "white",
- },
- ]);
- ```
-
- > **Note:** Timeline gaps (periods with no visual content) always produce a validation error. If a gap is intentional, fill it with a `type: "color"` clip or adjust your clip positions to close the gap.
-
- #### Effect Clip
-
- Effects are overlay adjustment layers. They apply to the already-composed video
- for a time window, and can ramp in/out smoothly (instead of appearing instantly):
-
- ```ts
- {
- type: "effect";
- effect: EffectName; // See table below
- position: number; // Required timeline start (seconds)
- end?: number; // Use end OR duration, not both
- duration?: number; // Duration in seconds (alternative to end)
- fadeIn?: number; // Optional smooth ramp-in (seconds)
- fadeOut?: number; // Optional smooth ramp-out (seconds)
- params: EffectParams; // Effect-specific parameters (see table below)
- }
- ```
-
- All effects accept `params.amount` (0-1, default 1) to control the blend intensity. Additional per-effect parameters:
-
- | Effect | Description | Extra Params |
- |---|---|---|
- | `vignette` | Darkened edges | `angle`: radians (default: PI/5) |
- | `filmGrain` | Noise overlay | `strength`: noise intensity 0-1 (default: 0.35), `temporal`: boolean (default: true) |
- | `gaussianBlur` | Gaussian blur | `sigma`: blur radius (default derived from amount) |
- | `colorAdjust` | Color grading | `brightness`: -1..1, `contrast`: 0..3, `saturation`: 0..3, `gamma`: 0.1..10 |
- | `sepia` | Warm vintage tone | — |
- | `blackAndWhite` | Desaturate to grayscale | `contrast`: boost 0-3 (default: 1) |
- | `sharpen` | Sharpen detail | `strength`: unsharp amount 0-3 (default: 1) |
- | `chromaticAberration` | RGB channel split | `shift`: pixel offset 0-20 (default: 4) |
- | `letterbox` | Cinematic bars | `size`: bar height as fraction of frame 0-0.5 (default: 0.12), `color`: string (default: "black") |
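Putting the schema and the table together, a clip like the following would overlay a vignette over the composed video for a six-second window (a sketch — the timings and `amount` chosen here are illustrative, not recommended defaults):

```typescript
// Hypothetical vignette that ramps in and out around a dramatic moment.
const dramaticVignette = {
  type: "effect",
  effect: "vignette",
  position: 4, // start at 4s on the timeline
  duration: 6, // runs until the 10s mark
  fadeIn: 0.5, // ramp in over 0.5s instead of appearing instantly
  fadeOut: 0.5, // ramp out over 0.5s
  params: { amount: 0.8 }, // 80% blend intensity
};
```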
-
- #### Text Clip
-
- ```ts
- {
- type: "text";
- position: number;
- end?: number; // Use end OR duration, not both
- duration?: number; // Duration in seconds (alternative to end)
-
- // Content
- text?: string;
- mode?: "static" | "word-replace" | "word-sequential" | "karaoke";
- words?: Array<{ text: string; start: number; end: number }>;
- wordTimestamps?: number[];
-
- // Styling
- fontFile?: string; // Custom font file path
- fontFamily?: string; // System font (default: 'Sans')
- fontSize?: number; // default: 48
- fontColor?: string; // default: '#FFFFFF'
- borderColor?: string;
- borderWidth?: number;
- shadowColor?: string;
- shadowX?: number;
- shadowY?: number;
-
- // Positioning (omit x/y to center)
- xPercent?: number; // Horizontal position as % (0 = left, 0.5 = center, 1 = right)
- yPercent?: number; // Vertical position as % (0 = top, 0.5 = center, 1 = bottom)
- x?: number; // Absolute X position in pixels
- y?: number; // Absolute Y position in pixels
- xOffset?: number; // Pixel offset added to X (works with any positioning method)
- yOffset?: number; // Pixel offset added to Y (e.g., center + 50px below)
-
- // Animation
- animation?: {
- type: "none" | "fade-in" | "fade-in-out" | "fade-out" | "pop" | "pop-bounce"
- | "typewriter" | "scale-in" | "pulse";
- in?: number; // Intro duration (seconds)
- out?: number; // Outro duration (seconds)
- speed?: number; // For typewriter (chars/sec) or pulse (pulses/sec)
- intensity?: number; // For scale-in or pulse (size variation 0-1)
- };
-
- highlightColor?: string; // For karaoke mode (default: '#FFFF00')
- highlightStyle?: "smooth" | "instant"; // 'smooth' = gradual fill, 'instant' = immediate change (default: 'smooth')
- }
- ```
-
- #### Subtitle Clip
-
- Import external subtitle files (SRT, VTT, ASS/SSA):
-
- ```ts
- {
- type: "subtitle";
- url: string; // Path to subtitle file
- position?: number; // Time offset in seconds (default: 0)
-
- // Styling (for SRT/VTT - ASS files use their own styles)
- fontFamily?: string;
- fontSize?: number;
- fontColor?: string;
- borderColor?: string;
- borderWidth?: number;
- opacity?: number;
- }
- ```
-
- #### Audio Clip
-
- ```ts
- {
- type: "audio";
- url: string;
- position?: number; // Omit to auto-sequence after previous audio clip
- end?: number; // Use end OR duration, not both
- cutFrom?: number;
- duration?: number; // Duration in seconds (alternative to end)
- volume?: number;
- }
- ```
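As a sketch of how these fields combine (file paths and timings are placeholders): one explicitly positioned clip plus one that auto-sequences after it:

```typescript
// Voiceover starts at 1s; cutFrom skips the first 0.5s of the source file.
// The second clip omits position, so it auto-sequences after the first.
const audioClips = [
  { type: "audio", url: "./voiceover.mp3", position: 1, cutFrom: 0.5, volume: 1.0 },
  { type: "audio", url: "./whoosh.wav", duration: 1, volume: 0.8 },
];
```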
-
- #### Background Music
-
- ```ts
- {
- type: "music"; // or "backgroundAudio"
- url: string;
- position?: number; // default: 0
- end?: number; // default: project duration
- cutFrom?: number;
- volume?: number; // default: 0.2
- loop?: boolean; // Loop audio to fill video duration
- }
- ```
-
- Background music is mixed after transitions, so video crossfades won't affect its volume.
-
- **Looping Music:**
-
- If your music track is shorter than your video, enable looping:
-
- ```ts
- await project.load([
- { type: "video", url: "./video.mp4", position: 0, end: 120 },
- { type: "music", url: "./30s-track.mp3", volume: 0.3, loop: true },
- ]);
- ```
-
- ### Platform Presets
-
- Use platform presets to quickly configure optimal dimensions for social media:
-
- ```ts
- const project = new SIMPLEFFMPEG({ preset: "tiktok" });
- ```
-
- Available presets:
-
- | Preset | Resolution | Aspect Ratio | Use Case |
- | -------------------- | ----------- | ------------ | ----------------------- |
- | `tiktok` | 1080 × 1920 | 9:16 | TikTok, vertical videos |
- | `youtube-short` | 1080 × 1920 | 9:16 | YouTube Shorts |
- | `instagram-reel` | 1080 × 1920 | 9:16 | Instagram Reels |
- | `instagram-story` | 1080 × 1920 | 9:16 | Instagram Stories |
- | `snapchat` | 1080 × 1920 | 9:16 | Snapchat |
- | `instagram-post` | 1080 × 1080 | 1:1 | Instagram feed posts |
- | `instagram-square` | 1080 × 1080 | 1:1 | Square format |
- | `youtube` | 1920 × 1080 | 16:9 | YouTube standard |
- | `twitter` | 1920 × 1080 | 16:9 | Twitter/X horizontal |
- | `facebook` | 1920 × 1080 | 16:9 | Facebook horizontal |
- | `landscape` | 1920 × 1080 | 16:9 | General landscape |
- | `twitter-portrait` | 1080 × 1350 | 4:5 | Twitter portrait |
- | `instagram-portrait` | 1080 × 1350 | 4:5 | Instagram portrait |
-
- Override preset values with explicit options:
-
- ```ts
- const project = new SIMPLEFFMPEG({
- preset: "tiktok",
- fps: 60, // Override default 30fps
- });
- ```
-
- Query available presets programmatically:
-
- ```ts
- SIMPLEFFMPEG.getPresetNames(); // ['tiktok', 'youtube-short', ...]
- SIMPLEFFMPEG.getPresets(); // { tiktok: { width: 1080, height: 1920, fps: 30 }, ... }
- ```
-
- ### Watermarks
-
- Add text or image watermarks to your videos:
-
- **Text Watermark:**
-
- ```ts
- await project.export({
- outputPath: "./output.mp4",
- watermark: {
- type: "text",
- text: "@myhandle",
- position: "bottom-right", // 'top-left', 'top-right', 'bottom-left', 'bottom-right', 'center'
- fontSize: 24,
- fontColor: "#FFFFFF",
- opacity: 0.7,
- margin: 20,
- },
- });
- ```
-
- **Image Watermark:**
-
- ```ts
- await project.export({
- outputPath: "./output.mp4",
- watermark: {
- type: "image",
- url: "./logo.png",
- position: "top-right",
- opacity: 0.8,
- scale: 0.5, // Scale to 50% of original size
- margin: 15,
- },
- });
- ```
-
- **Timed Watermark:**
-
- ```ts
- await project.export({
- outputPath: "./output.mp4",
- watermark: {
- type: "text",
- text: "Limited Time!",
- position: "top-left",
- startTime: 5, // Appear at 5 seconds
- endTime: 15, // Disappear at 15 seconds
- },
- });
- ```
-
- **Custom Position:**
-
- ```ts
- await project.export({
- outputPath: "./output.mp4",
- watermark: {
- type: "text",
- text: "Custom",
- x: 100, // Exact X position in pixels
- y: 50, // Exact Y position in pixels
- },
- });
- ```
-
- ### Progress Information
-
- The `onProgress` callback receives:
-
- ```ts
- {
- percent?: number; // 0-100
- phase?: string; // "rendering" or "batching"
- timeProcessed?: number; // Seconds processed
- frame?: number; // Current frame
- fps?: number; // Processing speed
- speed?: number; // Multiplier (e.g., 2.0 = 2x realtime)
- }
- ```
-
- The `phase` field indicates what the export is doing:
-
- - `"rendering"` — main video export (includes `percent`, `frame`, etc.)
- - `"batching"` — text overlay passes are running (fired once when batching starts)
-
- Use `phase` to update your UI when the export hits 100% but still has work to do:
-
- ```ts
- onProgress: ({ percent, phase }) => {
- if (phase === "batching") {
- console.log("Applying text overlays...");
- } else {
- console.log(`${percent}%`);
- }
- };
- ```
-
- ### Logging
-
- Use the `onLog` callback to receive real-time FFmpeg output. Each log entry includes a `level` (`"stderr"` or `"stdout"`) and the raw `message` string. This is useful for debugging, monitoring, or piping FFmpeg output to your own logging system.
-
- ```ts
- await project.export({
- outputPath: "./output.mp4",
- onLog: ({ level, message }) => {
- console.log(`[ffmpeg:${level}] ${message}`);
- },
- });
- ```
-
- The callback fires for every data chunk FFmpeg writes, including encoding stats, warnings, and codec information. It works alongside `onProgress` — both can be used simultaneously.
-
- ### Error Handling
-
- The library provides custom error classes for structured error handling:
-
- | Error Class | When Thrown | Properties |
- | ---------------------- | -------------------------- | --------------------------------------------------------------------------- |
- | `ValidationError` | Invalid clip configuration | `errors[]`, `warnings[]` (structured issues with `code`, `path`, `message`) |
- | `FFmpegError` | FFmpeg command fails | `stderr`, `command`, `exitCode`, `details` |
- | `MediaNotFoundError` | File not found | `path` |
- | `ExportCancelledError` | Export aborted | - |
-
- ```ts
- try {
- await project.export({ outputPath: "./out.mp4" });
- } catch (error) {
- if (error.name === "ValidationError") {
- // Structured validation errors
- error.errors.forEach((e) =>
- console.error(`[${e.code}] ${e.path}: ${e.message}`),
- );
- error.warnings.forEach((w) =>
- console.warn(`[${w.code}] ${w.path}: ${w.message}`),
- );
- } else if (error.name === "FFmpegError") {
- // Structured details for bug reports (last 50 lines of stderr, command, exitCode)
- console.error("FFmpeg failed:", error.details);
- // { stderrTail: "...", command: "ffmpeg ...", exitCode: 1 }
- } else if (error.name === "MediaNotFoundError") {
- console.error("File not found:", error.path);
- } else if (error.name === "ExportCancelledError") {
- console.log("Export was cancelled");
- }
- }
- ```
-
- ### Cancellation
-
- Use an `AbortController` to cancel an export in progress:
-
- ```ts
- const controller = new AbortController();
-
- // Cancel after 5 seconds
- setTimeout(() => controller.abort(), 5000);
-
- try {
- await project.export({
- outputPath: "./out.mp4",
- signal: controller.signal,
- });
- } catch (error) {
- if (error.name === "ExportCancelledError") {
- console.log("Cancelled");
- }
- }
- ```
-
- ## Examples
-
- ### Clips & Transitions
-
- ```ts
- // Two clips with a crossfade
- await project.load([
- { type: "video", url: "./a.mp4", position: 0, end: 5 },
- {
- type: "video",
- url: "./b.mp4",
- position: 5,
- end: 10,
- transition: { type: "fade", duration: 0.5 },
- },
- ]);
- ```
-
- **Image slideshow with Ken Burns effects:**
-
- ```ts
- await project.load([
- { type: "image", url: "./photo1.jpg", duration: 3, kenBurns: "zoom-in" },
- { type: "image", url: "./photo2.jpg", duration: 3, kenBurns: "pan-right" },
- { type: "image", url: "./photo3.jpg", duration: 3, kenBurns: "zoom-out" },
- { type: "music", url: "./music.mp3", volume: 0.3 },
- ]);
- ```
-
- **Custom Ken Burns (smart anchor + explicit endpoints):**
-
- ```ts
- await project.load([
- {
- type: "image",
- url: "./portrait.jpg",
- duration: 5,
- kenBurns: {
- type: "smart",
- anchor: "bottom",
- startZoom: 1.05,
- endZoom: 1.2,
- easing: "ease-in-out",
- },
- },
- {
- type: "image",
- url: "./wide.jpg",
- duration: 4,
- kenBurns: {
- type: "custom",
- startX: 0.15,
- startY: 0.7,
- endX: 0.85,
- endY: 0.2,
- easing: "ease-in-out",
- },
- },
- ]);
- ```
-
- When `position` is omitted, clips are placed sequentially — see [Auto-Sequencing & Duration Shorthand](#auto-sequencing--duration-shorthand) for details.
-
- > **Note:** Ken Burns effects work best with images at least as large as your output resolution. Smaller images are automatically upscaled (with a validation warning). Use `strictKenBurns: true` in validation options to enforce size requirements instead.
- > If you pass `width`/`height`, they override probed dimensions (useful for remote or generated images).
- > `smart` mode compares the source and output aspect ratios (when known) to choose a pan direction.
- > Ken Burns defaults to `imageFit: "cover"` (full-frame motion). Set `imageFit: "blur-fill"` or `"contain"` for phone-style editing where the motion applies to the contained image while the background stays static.
-
- ### Text & Animations
-
- Text is centered by default. Use `xPercent`/`yPercent` for percentage positioning, `x`/`y` for pixels, or `xOffset`/`yOffset` to nudge from any base:
-
- ```ts
- await project.load([
- { type: "video", url: "./bg.mp4", position: 0, end: 10 },
- // Title: centered, 100px above center
- {
- type: "text",
- text: "Main Title",
- position: 0,
- end: 5,
- fontSize: 72,
- yOffset: -100,
- },
- // Subtitle: centered, 50px below center
- {
- type: "text",
- text: "Subtitle here",
- position: 0.5,
- end: 5,
- fontSize: 36,
- yOffset: 50,
- },
- ]);
- ```
-
- **Word-by-word replacement:**
-
- ```ts
- {
- type: "text",
- mode: "word-replace",
- text: "One Two Three Four",
- position: 2,
- end: 6,
- wordTimestamps: [2, 3, 4, 5, 6],
- animation: { type: "fade-in", in: 0.2 },
- fontSize: 72,
- fontColor: "white",
- }
- ```
-
- **Typewriter, pulse, and other animations:**
-
- ```ts
- // Typewriter — letters appear one at a time
- { type: "text", text: "Appearing letter by letter...", position: 1, end: 4,
- animation: { type: "typewriter", speed: 15 } }
-
- // Pulse — rhythmic scaling
- { type: "text", text: "Pulsing...", position: 0.5, end: 4.5,
- animation: { type: "pulse", speed: 2, intensity: 0.2 } }
-
- // Also available: fade-in, fade-out, fade-in-out, pop, pop-bounce, scale-in
- ```
-
- **Emoji in text overlays:**
-
- Emoji characters are automatically detected. By default they are stripped from text to prevent tofu (blank boxes). To render emoji, configure an `emojiFont` path in the constructor:
-
- ```ts
- // Enable emoji rendering by providing a font path
- const project = new SIMPLEFFMPEG({
- width: 1920,
- height: 1080,
- emojiFont: "./fonts/NotoEmoji-Regular.ttf",
- });
-
- await project.load([
- { type: "video", url: "./bg.mp4", position: 0, end: 10 },
- {
- type: "text",
- text: "small dog, big heart 🐾",
- position: 1,
- end: 5,
- fontSize: 48,
- fontColor: "#FFFFFF",
- yPercent: 0.5,
- },
- {
- type: "text",
- text: "Movie night! 🎬🍿✨",
- position: 5,
- end: 9,
- fontSize: 48,
- fontColor: "#FFFFFF",
- animation: { type: "fade-in-out", in: 0.5, out: 0.5 },
- },
- ]);
- ```
-
- > **Note:** Without `emojiFont`, emoji are silently stripped (no tofu). With `emojiFont`, emoji render as monochrome outlines via the ASS path. Supports fade animations (`fade-in`, `fade-out`, `fade-in-out`) and static text. For other animation types (`pop`, `typewriter`, etc.), emoji are stripped and a console warning is logged.
-
- ### Karaoke
-
- Word-by-word highlighting with customizable colors. Use `highlightStyle: "instant"` for immediate color changes instead of the default smooth fill:
-
- ```ts
- await project.load([
- { type: "video", url: "./music-video.mp4", position: 0, end: 10 },
- {
- type: "text",
- mode: "karaoke",
- text: "Never gonna give you up",
- position: 0,
- end: 5,
- words: [
- { text: "Never", start: 0, end: 0.8 },
- { text: "gonna", start: 0.8, end: 1.4 },
- { text: "give", start: 1.4, end: 2.0 },
- { text: "you", start: 2.0, end: 2.5 },
- { text: "up", start: 2.5, end: 3.5 },
- ],
  fontColor: "#FFFFFF",
- highlightColor: "#00FF00",
- fontSize: 52,
- yPercent: 0.85,
- },
- ]);
- ```
-
- For simple usage without explicit word timings, just provide `text` and `wordTimestamps` — the library will split on spaces. Multi-line karaoke is supported with `\n` in the text string or `lineBreak: true` in the words array.
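As a hedged sketch of the `lineBreak` variant (the word timings are invented, and this assumes `lineBreak: true` marks the word that ends a line):

```typescript
// Two-line karaoke clip: the break is assumed to follow the flagged word.
const karaokeClip = {
  type: "text",
  mode: "karaoke",
  position: 0,
  end: 4,
  words: [
    { text: "First", start: 0, end: 0.5 },
    { text: "line", start: 0.5, end: 1.0, lineBreak: true }, // break here
    { text: "second", start: 1.0, end: 1.6 },
    { text: "line", start: 1.6, end: 2.2 },
  ],
  highlightColor: "#00FF00",
};
```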
-
- ### Subtitles
-
- Import external subtitle files (SRT, VTT, ASS/SSA):
-
- ```ts
- await project.load([
- { type: "video", url: "./video.mp4", position: 0, end: 60 },
- {
- type: "subtitle",
- url: "./subtitles.srt", // or .vtt, .ass, .ssa
- fontSize: 24,
- fontColor: "#FFFFFF",
- borderColor: "#000000",
- },
- ]);
- ```
-
- Use `position` to offset all subtitle timestamps forward (e.g., `position: 2.5` delays everything by 2.5s). ASS/SSA files use their own embedded styles — font options are for SRT/VTT imports.
-
- ### Export Settings
-
- ```ts
- // High-quality H.265 with metadata
- await project.export({
- outputPath: "./output.mp4",
- videoCodec: "libx265",
- crf: 18,
- preset: "slow",
- audioCodec: "libopus",
- audioBitrate: "256k",
- metadata: { title: "My Video", artist: "My Name", date: "2025" },
- });
-
- // Hardware-accelerated (macOS)
- await project.export({
- outputPath: "./output.mp4",
- hwaccel: "videotoolbox",
- videoCodec: "h264_videotoolbox",
- });
-
- // Two-pass encoding for target file size
- await project.export({
- outputPath: "./output.mp4",
- twoPass: true,
- videoBitrate: "5M",
- preset: "slow",
- });
-
- // Scale output resolution
- await project.export({ outputPath: "./720p.mp4", outputResolution: "720p" });
-
- // Audio-only export
- await project.export({
- outputPath: "./audio.mp3",
- audioOnly: true,
- audioCodec: "libmp3lame",
- audioBitrate: "320k",
- });
-
- // Generate thumbnail
- await project.export({
- outputPath: "./output.mp4",
- thumbnail: { outputPath: "./thumb.jpg", time: 5, width: 640 },
- });
-
- // Debug — save the FFmpeg command to a file
- await project.export({
- outputPath: "./output.mp4",
- verbose: true,
- saveCommand: "./ffmpeg-command.txt",
- });
- ```
-
- ## Advanced
-
- ### Timeline Behavior
-
- - Clip timing uses `[position, end)` intervals in seconds
- - Transitions create overlaps that reduce total duration
- - Background music is mixed after video transitions (unaffected by crossfades)
-
- **Transition Compensation:**
-
- FFmpeg's `xfade` transitions **overlap** clips, compressing the timeline. A 1s fade between two 10s clips produces 19s of output, not 20s. With multiple transitions this compounds.
-
- By default, simple-ffmpeg automatically adjusts text and subtitle timings to compensate. When you position text at "15s", it appears at the visual 15s mark regardless of how many transitions preceded it:
-
- ```ts
- await project.load([
- { type: "video", url: "./a.mp4", position: 0, end: 10 },
- {
- type: "video",
- url: "./b.mp4",
- position: 10,
- end: 20,
- transition: { type: "fade", duration: 1 },
- },
- { type: "text", text: "Appears at 15s visual", position: 15, end: 18 },
- ]);
- ```
-
- Disable with `compensateTransitions: false` in export options if you've pre-calculated offsets yourself.
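The compression is easy to compute: output length is the sum of clip durations minus the sum of transition durations. A small helper (not part of the library — `SIMPLEFFMPEG.getDuration` covers this for real clip arrays) makes the arithmetic concrete:

```typescript
// Output length of clips joined by xfade transitions: each transition
// overlaps the two clips it joins, so subtract its duration once.
function xfadeOutputDuration(
  clipDurations: number[],
  transitionDurations: number[], // one entry per junction (clips - 1)
): number {
  const total = clipDurations.reduce((a, b) => a + b, 0);
  const overlap = transitionDurations.reduce((a, b) => a + b, 0);
  return total - overlap;
}

// Two 10s clips with a 1s fade → 19s of output.
xfadeOutputDuration([10, 10], [1]); // 19
```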
-
- ### Auto-Batching
-
- FFmpeg's `filter_complex` has platform-specific length limits (Windows ~32KB, macOS ~1MB, Linux ~2MB). When text animations create many filter nodes, the command can exceed these limits.
-
- simple-ffmpeg handles this automatically — detecting oversized filter graphs and splitting text overlays into multiple rendering passes with intermediate files. No configuration needed.
-
- For very complex projects, you can tune it:
-
- ```js
- await project.export({
- textMaxNodesPerPass: 30, // default: 75
- intermediateVideoCodec: "libx264", // default
- intermediateCrf: 18, // default (high quality)
- intermediatePreset: "veryfast", // default (fast encoding)
- });
- ```
-
- Batching activates for typewriter animations with long text, many simultaneous text overlays, or complex animation combinations. With `verbose: true`, you'll see when it kicks in.
-
- ## Real-World Usage Patterns
-
- ### Data Pipeline Example
-
- Generate videos programmatically from structured data — database records, API responses, CMS content, etc. This example creates property tour videos from real estate listings:
-
- ```js
- import SIMPLEFFMPEG from "simple-ffmpegjs";
-
- const listings = await db.getActiveListings(); // your data source
-
- async function generateListingVideo(listing, outputPath) {
- const photos = listing.photos; // ['kitchen.jpg', 'living-room.jpg', ...]
- const slideDuration = 4;
-
- // Build an image slideshow from listing photos (auto-sequenced with crossfades)
- const transitionDuration = 0.5;
- const photoClips = photos.map((photo, i) => ({
- type: "image",
- url: photo,
- duration: slideDuration,
- kenBurns: i % 2 === 0 ? "zoom-in" : "pan-right",
- ...(i > 0 && {
- transition: { type: "fade", duration: transitionDuration },
- }),
- }));
-
- const totalDuration = SIMPLEFFMPEG.getDuration(photoClips);
-
- const clips = [
- ...photoClips,
- // Price banner
- {
- type: "text",
- text: listing.price,
- position: 0.5,
- end: totalDuration - 0.5,
- fontSize: 36,
- fontColor: "#FFFFFF",
- backgroundColor: "#000000",
- backgroundOpacity: 0.6,
- padding: 12,
- xPercent: 0.5,
- yPercent: 0.1,
- },
- // Address at the bottom
- {
- type: "text",
- text: listing.address,
- position: 0.5,
- end: totalDuration - 0.5,
- fontSize: 28,
- fontColor: "#FFFFFF",
- borderColor: "#000000",
- borderWidth: 2,
- xPercent: 0.5,
- yPercent: 0.9,
- },
- { type: "music", url: "./assets/ambient.mp3", volume: 0.15, loop: true },
- ];
-
- const project = new SIMPLEFFMPEG({ preset: "instagram-reel" });
- await project.load(clips);
- return project.export({ outputPath });
- }
-
- // Batch generate videos for all listings
- for (const listing of listings) {
- await generateListingVideo(listing, `./output/${listing.id}.mp4`);
- }
- ```
-
- ### AI Video Generation Pipeline Example
-
- Combine schema export, validation, and structured error codes to build a complete AI-driven video generation pipeline. The schema gives the model the exact specification it needs, and the validation loop lets it self-correct until the output is valid.
-
- ```js
- import SIMPLEFFMPEG from "simple-ffmpegjs";
-
- // 1. Build the schema context for the AI
- // Only expose the clip types you want the AI to work with.
- // Developer-level config (codecs, resolution, etc.) stays out of the schema.
-
- const schema = SIMPLEFFMPEG.getSchema({
- include: ["video", "image", "text", "music"],
- instructions: [
- "You are composing a short-form video for TikTok.",
- "Keep total duration under 30 seconds.",
- "Return ONLY valid JSON — an array of clip objects.",
- ],
- moduleInstructions: {
- video: "Use fade transitions between clips. Keep each clip 3-6 seconds.",
- text: [
- "Add a title in the first 2 seconds with fontSize 72.",
- "Use white text with a black border for readability.",
- ],
- music: "Always include looping background music at volume 0.15.",
- },
- });
-
- // 2. Send the schema + prompt to your LLM
-
- async function askAI(systemPrompt, userPrompt) {
- // Replace with your LLM provider (OpenAI, Anthropic, etc.)
- const response = await llm.chat({
- messages: [
- { role: "system", content: systemPrompt },
- { role: "user", content: userPrompt },
- ],
- });
- return JSON.parse(response.content);
- }
-
- // 3. Generate → Validate → Retry loop
-
- async function generateVideo(userPrompt, media) {
- // Build the system prompt with schema + available media and their details.
- // Descriptions and durations help the AI make good creative decisions —
- // ordering clips logically, setting accurate position/end times, etc.
- const mediaList = media
- .map((m) => ` - ${m.file} (${m.duration}s) — ${m.description}`)
- .join("\n");
-
- const systemPrompt = [
- "You are a video editor. Given the user's request and the available media,",
- "produce a clips array that follows this schema:\n",
- schema,
- "\nAvailable media (use these exact file paths):",
- mediaList,
- ].join("\n");
-
- const knownPaths = media.map((m) => m.file);
-
- // First attempt
- let clips = await askAI(systemPrompt, userPrompt);
- let result = SIMPLEFFMPEG.validate(clips, { skipFileChecks: true });
- let attempts = 1;
-
- // Self-correction loop: feed structured errors back to the AI
- while (!result.valid && attempts < 3) {
- const errorFeedback = result.errors
- .map((e) => `[${e.code}] ${e.path}: ${e.message}`)
- .join("\n");
-
- clips = await askAI(
- systemPrompt,
- [
- `Your previous output had validation errors:\n${errorFeedback}`,
- `\nOriginal request: ${userPrompt}`,
- "\nPlease fix the errors and return the corrected clips array.",
- ].join("\n"),
- );
-
- result = SIMPLEFFMPEG.validate(clips, { skipFileChecks: true });
- attempts++;
- }
-
- if (!result.valid) {
- throw new Error(
- `Failed to generate valid config after ${attempts} attempts:\n` +
- SIMPLEFFMPEG.formatValidationResult(result),
- );
- }
-
- // 4. Verify the AI only used known media paths
- // The structural loop (skipFileChecks: true) can't catch hallucinated paths.
- // You could also put this inside the retry loop to let the AI self-correct
- // bad paths — just append the unknown paths to the error feedback string.
-
- const usedPaths = clips.filter((c) => c.url).map((c) => c.url);
- const unknownPaths = usedPaths.filter((p) => !knownPaths.includes(p));
- if (unknownPaths.length > 0) {
- throw new Error(`AI used unknown media paths: ${unknownPaths.join(", ")}`);
- }
-
- // 5. Build and export
- // load() will also throw MediaNotFoundError if any file is missing on disk.
-
- const project = new SIMPLEFFMPEG({ preset: "tiktok" });
- await project.load(clips);
-
- return project.export({
- outputPath: "./output.mp4",
- onProgress: ({ percent }) => console.log(`Rendering: ${percent}%`),
- });
- }
-
1654
- // Usage
1655
-
1656
- await generateVideo("Make a hype travel montage with upbeat text overlays", [
1657
- {
1658
- file: "clips/beach-drone.mp4",
1659
- duration: 4,
1660
- description:
1661
- "Aerial drone shot of a tropical beach with people playing volleyball",
1662
- },
1663
- {
1664
- file: "clips/city-timelapse.mp4",
1665
- duration: 8,
1666
- description: "Timelapse of a city skyline transitioning from day to night",
1667
- },
1668
- {
1669
- file: "clips/sunset.mp4",
1670
- duration: 6,
1671
- description: "Golden hour sunset over the ocean with gentle waves",
1672
- },
1673
- {
1674
- file: "music/upbeat-track.mp3",
1675
- duration: 120,
1676
- description:
1677
- "Upbeat electronic track with a strong beat, good for montages",
50
+ animation: { type: "pop", in: 0.3 },
1678
51
  },
52
+ { type: "music", url: "./music.mp3", volume: 0.2, loop: true },
1679
53
  ]);
1680
- ```
1681
-
1682
- The key parts of this pattern:
1683
-
1684
- 1. **`getSchema()`** gives the AI a precise specification of what it can produce, with only the clip types you've chosen to expose.
1685
- 2. **`instructions` / `moduleInstructions`** embed your creative constraints directly into the spec — the AI treats them the same as built-in rules.
1686
- 3. **Media descriptions** with durations and content details give the AI enough context to make good creative decisions — ordering clips logically, setting accurate timings, and choosing the right media for each part of the video.
1687
- 4. **`validate()`** with `skipFileChecks: true` checks structural correctness in the retry loop — types, timelines, required fields — without touching the filesystem.
1688
- 5. **The retry loop** lets the AI self-correct. Most validation failures resolve in one retry.
1689
- 6. **The path guard** catches hallucinated file paths before `load()` hits the filesystem. You can optionally move this check inside the retry loop to let the AI self-correct bad paths. `load()` itself will also throw `MediaNotFoundError` if a file is missing on disk.
1690
-
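Stripped of library specifics, the generate → validate → retry pattern above is a small, reusable control loop. Here is a dependency-free sketch — `askAI` and `validate` are stand-ins for your model call and `SIMPLEFFMPEG.validate()`, and `generateWithRetries` is a hypothetical helper name, not part of this library:

```javascript
// Generic generate → validate → retry skeleton (illustration only).
// `askAI` is your model call; `validate` stands in for
// SIMPLEFFMPEG.validate(clips, { skipFileChecks: true }).
async function generateWithRetries(askAI, validate, prompt, maxAttempts = 3) {
  let output = await askAI(prompt);
  let result = validate(output);
  let attempts = 1;

  while (!result.valid && attempts < maxAttempts) {
    // Feed the structured errors back so the model can self-correct.
    const feedback = result.errors
      .map((e) => `[${e.code}] ${e.path}: ${e.message}`)
      .join("\n");
    output = await askAI(
      `${prompt}\n\nYour previous output had validation errors:\n${feedback}`
    );
    result = validate(output);
    attempts++;
  }

  if (!result.valid) {
    throw new Error(`Still invalid after ${attempts} attempts`);
  }
  return { output, attempts };
}
```

The same shape works for any structured-output task; only the validator changes.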
1691
- ## Testing
1692
-
1693
- ### Automated Tests
1694
-
1695
- The library includes comprehensive unit and integration tests using Vitest:
1696
-
1697
- ```bash
1698
- # Run all tests
1699
- npm test
1700
54
 
1701
- # Run unit tests only
1702
- npm run test:unit
1703
-
1704
- # Run integration tests only
1705
- npm run test:integration
1706
-
1707
- # Run with watch mode
1708
- npm run test:watch
1709
- ```
1710
-
1711
- ### Manual Verification
1712
-
1713
- For visual verification, run the demo suite to generate sample videos covering all major features. Each demo outputs to its own subfolder under `examples/output/` and includes annotated expected timelines so you know exactly what to look for:
1714
-
1715
- ```bash
1716
- # Run all demos (color clips, effects, transitions, text, emoji, Ken Burns, audio, watermarks, karaoke, torture test)
1717
- node examples/run-examples.js
1718
-
1719
- # Run a specific demo by name (partial match)
1720
- node examples/run-examples.js transitions
1721
- node examples/run-examples.js torture ken
55
+ await project.export({ outputPath: "./output.mp4" });
1722
56
  ```
1723
57
 
1724
- Available demo scripts (can also be run individually):
1725
-
1726
- | Script | What it tests |
1727
- | ------------------------------- | -------------------------------------------------------------------------------------- |
1728
- | `demo-color-clips.js` | Flat colors, linear/radial gradients, transitions, full composition with color clips |
1729
- | `demo-effects.js` | Timed overlay effects (all 9 effects) with smooth fade ramps |
1730
- | `demo-transitions.js` | Fade, wipe, slide, dissolve, fadeblack/white, short/long durations, image transitions |
1731
- | `demo-text-and-animations.js` | Positioning, fade, pop, pop-bounce, typewriter, scale-in, pulse, styling, word-replace |
1732
- | `demo-emoji-text.js` | Emoji stripping (default) and opt-in rendering via emojiFont, fade, styling, fallback |
1733
- | `demo-ken-burns.js` | All 6 presets, smart anchors, custom diagonal, slideshow with transitions |
1734
- | `demo-audio-mixing.js` | Volume levels, background music, standalone audio, loop, multi-source mix |
1735
- | `demo-watermarks.js` | Text/image watermarks, all positions, timed appearance, styled over transitions |
1736
- | `demo-karaoke-and-subtitles.js` | Smooth/instant karaoke, word timestamps, multiline, SRT, VTT, mixed text+karaoke |
1737
- | `demo-image-fit.js` | Image fitting modes (blur-fill, cover, contain), Ken Burns + imageFit, mixed timelines |
1738
- | `demo-torture-test.js` | Kitchen sink, many clips+gaps+transitions, 6 simultaneous text animations, edge cases |
1739
-
1740
- Each script header contains a `WHAT TO CHECK` section describing the expected visual output at every timestamp, making it easy to spot regressions.
1741
-
1742
- ## Contributing
1743
-
1744
- Contributions are welcome. Please open an issue to discuss significant changes before submitting a pull request.
1745
-
1746
- 1. Fork the repository
1747
- 2. Create a feature branch (`git checkout -b feature/my-feature`)
1748
- 3. Write tests for new functionality
1749
- 4. Ensure all tests pass (`npm test`)
1750
- 5. Submit a pull request
1751
-
1752
- ## Credits
58
+ ## Features
1753
59
 
1754
- Inspired by [ezffmpeg](https://github.com/ezffmpeg/ezffmpeg) by John Chen.
60
+ - **Declarative timeline** — `video`, `image`, `color`, `effect`, `text`, `subtitle`, `audio`, `music` clip types
61
+ - **Transitions** — all FFmpeg xfade transitions with automatic compensation for timeline compression
62
+ - **Ken Burns effects** — zoom, pan, smart, and custom with full easing control
63
+ - **Image fitting** — `blur-fill`, `cover`, and `contain` modes for aspect ratio mismatches
64
+ - **Text overlays** — static, word-by-word, karaoke, and cumulative modes with animations
65
+ - **Effect clips** — vignette, film grain, blur, color grading, sepia, B&W, sharpen, chromatic aberration, letterbox
66
+ - **Audio mixing** — multiple sources, background music, looping, independent volume control
67
+ - **Platform presets** — TikTok, YouTube, Instagram, and more
68
+ - **Pre-validation** — structured error codes before rendering; integrates cleanly into data pipelines and AI workflows
69
+ - **Schema export** — machine-readable clip specification for docs, code generation, and LLM context
70
+ - **Static helpers** — `probe()`, `snapshot()`, `extractKeyframes()`
71
+ - **TypeScript** — full type definitions included
72
+ - **Zero runtime dependencies** — only requires FFmpeg on your system
73
+
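A declarative timeline is plain data before the library ever sees it. The clip objects below are an illustrative sketch — field names follow the examples shown in this README, and the file paths are placeholders:

```javascript
// A declarative timeline is just an array of clip objects — no builder API.
// Field names follow the examples shown in this README; paths are placeholders.
const clips = [
  { type: "video", url: "./beach-drone.mp4" },
  { type: "music", url: "./music.mp3", volume: 0.2, loop: true },
];

// Because it is plain data, you can generate, store, or validate it
// before handing it to `project.load(clips)`.
console.log(clips.map((c) => c.type).join(","));
```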
74
+ ## Documentation
75
+
76
+ Full documentation at **[simple-ffmpegjs.com](https://www.simple-ffmpegjs.com)**
1755
77
 
1756
78
  ## License
1757
79