simple-ffmpegjs 0.3.0 → 0.3.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
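A detail worth keeping in mind while reading the diff: the README's Timeline Behavior section notes that FFmpeg's `xfade` transitions overlap clips, so the rendered output is shorter than the sum of clip lengths (two 10s clips with a 1s fade render as 19s). A quick standalone sketch of that arithmetic (the `renderedDuration` helper is hypothetical, written for this note, and not part of the simple-ffmpegjs API):

```javascript
// NOTE: renderedDuration() is a hypothetical helper for illustration only —
// it is not part of simple-ffmpegjs. Each xfade transition overlaps two
// clips, so the rendered duration is the sum of clip lengths minus the sum
// of transition durations.
function renderedDuration(clips) {
  let total = 0;
  for (const clip of clips) {
    total += clip.end - clip.position;
    if (clip.transition) total -= clip.transition.duration;
  }
  return total;
}

// Two 10s clips joined by a 1s fade render as 19s, not 20s.
const timeline = [
  { position: 0, end: 10 },
  { position: 10, end: 20, transition: { type: "fade", duration: 1 } },
];
console.log(renderedDuration(timeline)); // 19
```

The same arithmetic explains the README's claim that ten clips with 0.5s transitions come out roughly 4.5 seconds shorter than their summed durations.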
package/README.md CHANGED
@@ -1,10 +1,17 @@
- # simple-ffmpeg
+ <p align="center">
+   <img src="https://7llpl63xkl8jovgt.public.blob.vercel-storage.com/simple-ffmpeg/zENiV5XBIET_cu11ZpOdE.png" alt="simple-ffmpeg" width="100%">
+ </p>

- [![npm version](https://img.shields.io/npm/v/simple-ffmpegjs.svg)](https://www.npmjs.com/package/simple-ffmpegjs)
- [![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](https://opensource.org/licenses/MIT)
- [![Node.js](https://img.shields.io/badge/node-%3E%3D18-brightgreen.svg)](https://nodejs.org)
+ <p align="center">
+   <a href="https://www.npmjs.com/package/simple-ffmpegjs"><img src="https://img.shields.io/npm/v/simple-ffmpegjs.svg" alt="npm version"></a>
+   <a href="https://opensource.org/licenses/MIT"><img src="https://img.shields.io/badge/License-MIT-blue.svg" alt="License: MIT"></a>
+   <a href="https://nodejs.org"><img src="https://img.shields.io/badge/node-%3E%3D18-brightgreen.svg" alt="Node.js"></a>
+ </p>

- A lightweight Node.js library for programmatic video composition using FFmpeg. Designed for data pipelines and automation workflows that need reliable video assembly without the complexity of a full editing suite.
+ <p align="center">
+   A lightweight Node.js library for programmatic video composition using FFmpeg.<br>
+   Define your timeline as a simple array of clips, and the library handles the rest.
+ </p>

  ## Table of Contents

@@ -12,7 +19,8 @@ A lightweight Node.js library for programmatic video composition using FFmpeg. D
  - [Features](#features)
  - [Installation](#installation)
  - [Quick Start](#quick-start)
- - [Pre-Validation (for AI Pipelines)](#pre-validation-for-ai-pipelines)
+ - [Pre-Validation](#pre-validation)
+ - [Schema Export](#schema-export)
  - [API Reference](#api-reference)
    - [Constructor](#constructor)
    - [Methods](#methods)
@@ -24,32 +32,32 @@ A lightweight Node.js library for programmatic video composition using FFmpeg. D
  - [Cancellation](#cancellation)
  - [Gap Handling](#gap-handling)
  - [Examples](#examples)
-   - [Transitions](#two-clips-with-transition)
-   - [Text Positioning](#text-positioning-with-offsets)
-   - [Word-by-Word Animation](#word-by-word-text-animation)
-   - [Ken Burns Slideshow](#image-slideshow-with-ken-burns)
-   - [Export Options](#high-quality-export-with-custom-settings)
-   - [Text Animations](#typewriter-text-effect)
-   - [Karaoke](#karaoke-text-effect)
-   - [Subtitles](#import-srtvtt-subtitles)
- - [Timeline Behavior](#timeline-behavior)
-   - [Transition Compensation](#transition-compensation)
- - [Auto-Batching](#auto-batching-for-complex-filter-graphs)
+   - [Clips & Transitions](#clips--transitions)
+   - [Text & Animations](#text--animations)
+   - [Karaoke](#karaoke)
+   - [Subtitles](#subtitles)
+   - [Export Settings](#export-settings)
+ - [Real-World Usage Patterns](#real-world-usage-patterns)
+   - [Data Pipeline](#data-pipeline-example)
+   - [AI Video Pipeline](#ai-video-generation-pipeline-example)
+ - [Advanced](#advanced)
+   - [Timeline Behavior](#timeline-behavior)
+   - [Auto-Batching](#auto-batching)
  - [Testing](#testing)
  - [Contributing](#contributing)
  - [License](#license)

  ## Why simple-ffmpeg?

- With `fluent-ffmpeg` no longer actively maintained, there's a need for a modern, well-supported alternative. simple-ffmpeg fills this gap with a declarative, config-driven API that's particularly well-suited for structured validation with error codes that makes it easy to build feedback loops where AI can generate configs, validate them, and iterate until successful
+ FFmpeg is incredibly powerful, but its command-line interface is notoriously difficult to work with programmatically. Composing even a simple two-clip video with a crossfade requires navigating complex filter graphs, input mapping, and stream labeling. simple-ffmpeg abstracts all of that behind a declarative, config-driven API. You describe _what_ your video should look like, and the library figures out _how_ to build the FFmpeg command.

- The library handles FFmpeg's complexity internally while exposing a clean interface that both humans and AI can work with effectively.
+ The entire timeline is expressed as a plain array of clip objects, making it straightforward to generate configurations from any data source: databases, APIs, templates, or AI models. Structured validation with machine-readable error codes means you can catch problems early and handle them programmatically, whether that's logging a warning, retrying with corrected input, or surfacing feedback to an end user.

  ## Example Output

  <p align="center">
-   <a href="https://7llpl63xkl8jovgt.public.blob.vercel-storage.com/wonders-showcase.mp4">
-     <img src="https://7llpl63xkl8jovgt.public.blob.vercel-storage.com/wonders-thumbnail.jpg" alt="Example video - click to watch" width="640">
+   <a href="https://7llpl63xkl8jovgt.public.blob.vercel-storage.com/wonders-showcase-1.mp4">
+     <img src="https://7llpl63xkl8jovgt.public.blob.vercel-storage.com/simple-ffmpeg/wonders-thumbnail-1.jpg" alt="Example video - click to watch" width="640">
    </a>
  </p>

@@ -70,6 +78,8 @@ _Click to watch a "Wonders of the World" video created with simple-ffmpeg — co
  - **Cancellation** — AbortController support for stopping exports
  - **Gap Handling** — Optional black frame fill for timeline gaps
  - **Auto-Batching** — Automatically splits complex filter graphs to avoid OS command limits
+ - **Schema Export** — Generate a structured description of the clip format for documentation, code generation, or AI context
+ - **Pre-Validation** — Validate clip configurations before processing with structured, machine-readable error codes
  - **TypeScript Ready** — Full type definitions included
  - **Zero Dependencies** — Only requires FFmpeg on your system

@@ -109,41 +119,50 @@ apt-get install -y ffmpeg fontconfig fonts-dejavu-core
  ```js
  import SIMPLEFFMPEG from "simple-ffmpegjs";

- const project = new SIMPLEFFMPEG({
-   width: 1920,
-   height: 1080,
-   fps: 30,
- });
+ // Use a platform preset — or set width/height/fps manually
+ const project = new SIMPLEFFMPEG({ preset: "youtube" });

  await project.load([
-   { type: "video", url: "./intro.mp4", position: 0, end: 5 },
+   // Two video clips with a crossfade transition between them
+   { type: "video", url: "./opening-shot.mp4", position: 0, end: 6 },
    {
      type: "video",
-     url: "./main.mp4",
-     position: 5,
-     end: 15,
+     url: "./highlights.mp4",
+     position: 5.5,
+     end: 18,
+     cutFrom: 3, // start 3s into the source file
      transition: { type: "fade", duration: 0.5 },
    },
-   { type: "music", url: "./bgm.mp3", volume: 0.2 },
+
+   // Title card with a pop animation
    {
      type: "text",
-     text: "Hello World",
-     position: 1,
+     text: "Summer Highlights 2025",
+     position: 0.5,
      end: 4,
-     fontColor: "white",
-     fontSize: 64,
+     fontFile: "./fonts/Montserrat-Bold.ttf",
+     fontSize: 72,
+     fontColor: "#FFFFFF",
+     borderColor: "#000000",
+     borderWidth: 2,
+     xPercent: 0.5,
+     yPercent: 0.4,
+     animation: { type: "pop", in: 0.3 },
    },
+
+   // Background music — loops to fill the whole video
+   { type: "music", url: "./chill-beat.mp3", volume: 0.2, loop: true },
  ]);

  await project.export({
-   outputPath: "./output.mp4",
+   outputPath: "./summer-highlights.mp4",
    onProgress: ({ percent }) => console.log(`${percent}% complete`),
  });
  ```

- ## Pre-Validation (for AI Pipelines)
+ ## Pre-Validation

- Validate configurations before creating a project—ideal for AI feedback loops where you want to catch errors early and provide structured feedback:
+ Validate clip configurations before creating a project. Useful for catching errors early in data pipelines, form-based editors, or any workflow where configurations are generated dynamically:

  ```js
  import SIMPLEFFMPEG from "simple-ffmpegjs";
@@ -155,7 +174,7 @@ const clips = [

  // Validate without creating a project
  const result = SIMPLEFFMPEG.validate(clips, {
-   skipFileChecks: true, // Skip file existence checks (useful when files don't exist yet)
+   skipFileChecks: true, // Skip file existence checks (useful when files aren't on disk yet)
    width: 1920, // Project dimensions (for Ken Burns size validation)
    height: 1080,
    strictKenBurns: false, // If true, undersized Ken Burns images error instead of warn (default: false)
@@ -190,6 +209,70 @@ if (result.errors.some((e) => e.code === ValidationCodes.TIMELINE_GAP)) {
  }
  ```

+ ## Schema Export
+
+ Export a structured, human-readable description of all clip types accepted by `load()`. The output is designed to serve as context for LLMs, documentation generators, code generation tools, or anything that needs to understand the library's clip format.
+
+ ### Basic Usage
+
+ ```js
+ // Get the full schema (all clip types)
+ const schema = SIMPLEFFMPEG.getSchema();
+ console.log(schema);
+ ```
+
+ The output is a formatted text document with type definitions, allowed values, usage notes, and examples for each clip type.
+
+ ### Filtering Modules
+
+ The schema is broken into modules — one per clip type. You can include or exclude modules to control exactly what appears in the output:
+
+ ```js
+ // Only include video and image clip types
+ const schema = SIMPLEFFMPEG.getSchema({ include: ["video", "image"] });
+
+ // Include everything except text and subtitle
+ const schema = SIMPLEFFMPEG.getSchema({ exclude: ["text", "subtitle"] });
+
+ // See all available module IDs
+ SIMPLEFFMPEG.getSchemaModules();
+ // ['video', 'audio', 'image', 'text', 'subtitle', 'music']
+ ```
+
+ Available modules:
+
+ | Module     | Covers                                                      |
+ | ---------- | ----------------------------------------------------------- |
+ | `video`    | Video clips, transitions, volume, trimming                  |
+ | `audio`    | Standalone audio clips                                      |
+ | `image`    | Image clips, Ken Burns effects                              |
+ | `text`     | Text overlays — all modes, animations, positioning, styling |
+ | `subtitle` | Subtitle file import (SRT, VTT, ASS, SSA)                   |
+ | `music`    | Background music / background audio, looping                |
+
+ ### Custom Instructions
+
+ Embed your own instructions directly into the schema output. Top-level instructions appear at the beginning, and per-module instructions are placed inside the relevant section — formatted identically to the built-in notes:
+
+ ```js
+ const schema = SIMPLEFFMPEG.getSchema({
+   include: ["video", "image", "music"],
+   instructions: [
+     "You are creating short cooking tutorials for TikTok.",
+     "Keep all videos under 30 seconds.",
+   ],
+   moduleInstructions: {
+     video: [
+       "Always use fade transitions at 0.5s.",
+       "Limit to 5 clips maximum.",
+     ],
+     music: "Always include background music at volume 0.15.",
+   },
+ });
+ ```
+
+ Both `instructions` and `moduleInstructions` values accept a `string` or `string[]`. Per-module instructions for excluded modules are silently ignored.
+
  ## API Reference

  ### Constructor
@@ -597,9 +680,10 @@ await project.load([

  ## Examples

- ### Two Clips with Transition
+ ### Clips & Transitions

  ```ts
+ // Two clips with a crossfade
  await project.load([
    { type: "video", url: "./a.mp4", position: 0, end: 5 },
    {
@@ -612,56 +696,7 @@ await project.load([
  ]);
  ```

- ### Text Positioning with Offsets
-
- Text is centered by default. Use `xOffset` and `yOffset` to adjust position relative to any base:
-
- ```ts
- await project.load([
-   { type: "video", url: "./bg.mp4", position: 0, end: 10 },
-   // Title: centered, 100px above center
-   {
-     type: "text",
-     text: "Main Title",
-     position: 0,
-     end: 5,
-     fontSize: 72,
-     yOffset: -100,
-   },
-   // Subtitle: centered, 50px below center
-   {
-     type: "text",
-     text: "Subtitle here",
-     position: 0.5,
-     end: 5,
-     fontSize: 36,
-     yOffset: 50,
-   },
- ]);
- ```
-
- Offsets work with all positioning methods (`x`/`y` pixels, `xPercent`/`yPercent`, or default center).
-
- ### Word-by-Word Text Animation
-
- ```ts
- await project.load([
-   { type: "video", url: "./bg.mp4", position: 0, end: 10 },
-   {
-     type: "text",
-     mode: "word-replace",
-     text: "One Two Three Four",
-     position: 2,
-     end: 6,
-     wordTimestamps: [2, 3, 4, 5, 6],
-     animation: { type: "fade-in", in: 0.2 },
-     fontSize: 72,
-     fontColor: "white",
-   },
- ]);
- ```
-
- ### Image Slideshow with Ken Burns
+ **Image slideshow with Ken Burns effects:**

  ```ts
  await project.load([
@@ -690,173 +725,69 @@ await project.load([
  ]);
  ```

- > **Note:** Ken Burns effects work best with images at least as large as your output resolution. Smaller images are automatically upscaled (with a validation warning about potential quality loss). Use `strictKenBurns: true` in validation options to enforce size requirements instead.
-
- ### Export with Progress Tracking
-
- ```ts
- await project.export({
-   outputPath: "./output.mp4",
-   onProgress: ({ percent, fps, speed }) => {
-     process.stdout.write(`\rRendering: ${percent}% (${fps} fps, ${speed}x)`);
-   },
- });
- ```
-
- ### High-Quality Export with Custom Settings
-
- ```ts
- await project.export({
-   outputPath: "./output.mp4",
-   videoCodec: "libx265",
-   crf: 18, // Higher quality
-   preset: "slow", // Better compression
-   audioCodec: "libopus",
-   audioBitrate: "256k",
-   metadata: {
-     title: "My Video",
-     artist: "My Name",
-     date: "2024",
-   },
- });
- ```
-
- ### Hardware-Accelerated Export (macOS)
-
- ```ts
- await project.export({
-   outputPath: "./output.mp4",
-   hwaccel: "videotoolbox",
-   videoCodec: "h264_videotoolbox",
-   crf: 23,
- });
- ```
-
- ### Two-Pass Encoding for Target File Size
-
- ```ts
- await project.export({
-   outputPath: "./output.mp4",
-   twoPass: true,
-   videoBitrate: "5M", // Target bitrate
-   preset: "slow",
- });
- ```
-
- ### Scale Output Resolution
+ > **Note:** Ken Burns effects work best with images at least as large as your output resolution. Smaller images are automatically upscaled (with a validation warning). Use `strictKenBurns: true` in validation options to enforce size requirements instead.

- ```ts
- // Use resolution preset
- await project.export({
-   outputPath: "./output-720p.mp4",
-   outputResolution: "720p",
- });
+ ### Text & Animations

- // Or specify exact dimensions
- await project.export({
-   outputPath: "./output-custom.mp4",
-   outputWidth: 1280,
-   outputHeight: 720,
- });
- ```
-
- ### Audio-Only Export
-
- ```ts
- await project.export({
-   outputPath: "./audio.mp3",
-   audioOnly: true,
-   audioCodec: "libmp3lame",
-   audioBitrate: "320k",
- });
- ```
-
- ### Generate Thumbnail
-
- ```ts
- await project.export({
-   outputPath: "./output.mp4",
-   thumbnail: {
-     outputPath: "./thumbnail.jpg",
-     time: 5, // Capture at 5 seconds
-     width: 640,
-   },
- });
- ```
-
- ### Debug Export Command
-
- ```ts
- await project.export({
-   outputPath: "./output.mp4",
-   verbose: true, // Log export options
-   saveCommand: "./ffmpeg-command.txt", // Save command to file
- });
- ```
-
- ### Typewriter Text Effect
+ Text is centered by default. Use `xPercent`/`yPercent` for percentage positioning, `x`/`y` for pixels, or `xOffset`/`yOffset` to nudge from any base:

  ```ts
  await project.load([
-   { type: "video", url: "./bg.mp4", position: 0, end: 5 },
+   { type: "video", url: "./bg.mp4", position: 0, end: 10 },
+   // Title: centered, 100px above center
    {
      type: "text",
-     text: "Appearing letter by letter...",
-     position: 1,
-     end: 4,
-     fontSize: 48,
-     fontColor: "white",
-     animation: {
-       type: "typewriter",
-       speed: 15, // 15 characters per second
-     },
+     text: "Main Title",
+     position: 0,
+     end: 5,
+     fontSize: 72,
+     yOffset: -100,
    },
- ]);
- ```
-
- ### Pulsing Text Effect
-
- ```ts
- await project.load([
-   { type: "video", url: "./bg.mp4", position: 0, end: 5 },
+   // Subtitle: centered, 50px below center
    {
      type: "text",
-     text: "Pulsing...",
+     text: "Subtitle here",
      position: 0.5,
-     end: 4.5,
-     fontSize: 52,
-     fontColor: "cyan",
-     animation: {
-       type: "pulse",
-       speed: 2, // 2 pulses per second
-       intensity: 0.2, // 20% size variation
-     },
+     end: 5,
+     fontSize: 36,
+     yOffset: 50,
    },
  ]);
  ```

- ### Karaoke Text Effect
+ **Word-by-word replacement:**

- Create word-by-word highlighting like karaoke subtitles:
+ ```ts
+ {
+   type: "text",
+   mode: "word-replace",
+   text: "One Two Three Four",
+   position: 2,
+   end: 6,
+   wordTimestamps: [2, 3, 4, 5, 6],
+   animation: { type: "fade-in", in: 0.2 },
+   fontSize: 72,
+   fontColor: "white",
+ }
+ ```
+
+ **Typewriter, pulse, and other animations:**

  ```ts
- await project.load([
-   { type: "video", url: "./music-video.mp4", position: 0, end: 10 },
-   {
-     type: "text",
-     mode: "karaoke",
-     text: "Never gonna give you up",
-     position: 2,
-     end: 6,
-     fontColor: "#FFFFFF",
-     highlightColor: "#FFFF00", // Words highlight to yellow
-     fontSize: 48,
-     yPercent: 0.85, // Position near bottom
-   },
- ]);
+ // Typewriter — letters appear one at a time
+ { type: "text", text: "Appearing letter by letter...", position: 1, end: 4,
+   animation: { type: "typewriter", speed: 15 } }
+
+ // Pulse — rhythmic scaling
+ { type: "text", text: "Pulsing...", position: 0.5, end: 4.5,
+   animation: { type: "pulse", speed: 2, intensity: 0.2 } }
+
+ // Also available: fade-in, fade-out, fade-in-out, pop, pop-bounce, scale-in
  ```

- With precise word timings:
+ ### Karaoke
+
+ Word-by-word highlighting with customizable colors. Use `highlightStyle: "instant"` for immediate color changes instead of the default smooth fill:

  ```ts
  await project.load([
@@ -877,74 +808,16 @@ await project.load([
      fontColor: "#FFFFFF",
      highlightColor: "#00FF00",
      fontSize: 52,
+     yPercent: 0.85,
    },
  ]);
  ```

- With instant highlight (words change color immediately instead of gradual fill):
-
- ```ts
- await project.load([
-   { type: "video", url: "./music-video.mp4", position: 0, end: 10 },
-   {
-     type: "text",
-     mode: "karaoke",
-     text: "Each word pops instantly",
-     position: 1,
-     end: 5,
-     fontColor: "#FFFFFF",
-     highlightColor: "#FF00FF",
-     highlightStyle: "instant", // Words change color immediately
-     fontSize: 48,
-   },
- ]);
- ```
-
- Multi-line karaoke (use `\n` for line breaks):
+ For simple usage without explicit word timings, just provide `text` and `wordTimestamps` — the library will split on spaces. Multi-line karaoke is supported with `\n` in the text string or `lineBreak: true` in the words array.

- ```ts
- await project.load([
-   { type: "video", url: "./music-video.mp4", position: 0, end: 10 },
-   {
-     type: "text",
-     mode: "karaoke",
-     text: "First line of lyrics\nSecond line continues",
-     position: 0,
-     end: 6,
-     fontColor: "#FFFFFF",
-     highlightColor: "#FFFF00",
-     fontSize: 36,
-     yPercent: 0.8,
-   },
- ]);
- ```
+ ### Subtitles

- Or with explicit line breaks in the words array:
-
- ```ts
- await project.load([
-   { type: "video", url: "./music-video.mp4", position: 0, end: 10 },
-   {
-     type: "text",
-     mode: "karaoke",
-     text: "Hello World Goodbye World",
-     position: 0,
-     end: 4,
-     words: [
-       { text: "Hello", start: 0, end: 1 },
-       { text: "World", start: 1, end: 2, lineBreak: true }, // Line break after this word
-       { text: "Goodbye", start: 2, end: 3 },
-       { text: "World", start: 3, end: 4 },
-     ],
-     fontColor: "#FFFFFF",
-     highlightColor: "#00FF00",
-   },
- ]);
- ```
-
- ### Import SRT/VTT Subtitles
-
- Add existing subtitle files to your video:
+ Import external subtitle files (SRT, VTT, ASS/SSA):

  ```ts
  await project.load([
@@ -959,72 +832,77 @@ await project.load([
  ]);
  ```

- With time offset (shift subtitles forward):
+ Use `position` to offset all subtitle timestamps forward (e.g., `position: 2.5` delays everything by 2.5s). ASS/SSA files use their own embedded styles — font options are for SRT/VTT imports.
+
+ ### Export Settings

  ```ts
- await project.load([
-   { type: "video", url: "./video.mp4", position: 0, end: 60 },
-   {
-     type: "subtitle",
-     url: "./subtitles.srt",
-     position: 2.5, // Delay subtitles by 2.5 seconds
-   },
- ]);
- ```
+ // High-quality H.265 with metadata
+ await project.export({
+   outputPath: "./output.mp4",
+   videoCodec: "libx265",
+   crf: 18,
+   preset: "slow",
+   audioCodec: "libopus",
+   audioBitrate: "256k",
+   metadata: { title: "My Video", artist: "My Name", date: "2025" },
+ });

- ### Using Platform Presets
+ // Hardware-accelerated (macOS)
+ await project.export({
+   outputPath: "./output.mp4",
+   hwaccel: "videotoolbox",
+   videoCodec: "h264_videotoolbox",
+ });

- ```ts
- // Create a TikTok-optimized video
- const tiktok = new SIMPLEFFMPEG({ preset: "tiktok" });
+ // Two-pass encoding for target file size
+ await project.export({
+   outputPath: "./output.mp4",
+   twoPass: true,
+   videoBitrate: "5M",
+   preset: "slow",
+ });

- await tiktok.load([
-   { type: "video", url: "./vertical.mp4", position: 0, end: 15 },
-   {
-     type: "text",
-     text: "Follow for more!",
-     position: 12,
-     end: 15,
-     fontSize: 48,
-     fontColor: "white",
-     yPercent: 0.8,
-     animation: { type: "pop-bounce", in: 0.3 },
-   },
- ]);
+ // Scale output resolution
+ await project.export({ outputPath: "./720p.mp4", outputResolution: "720p" });

- await tiktok.export({
-   outputPath: "./tiktok-video.mp4",
-   watermark: {
-     type: "text",
-     text: "@myhandle",
-     position: "bottom-right",
-     opacity: 0.7,
-   },
+ // Audio-only export
+ await project.export({
+   outputPath: "./audio.mp3",
+   audioOnly: true,
+   audioCodec: "libmp3lame",
+   audioBitrate: "320k",
+ });
+
+ // Generate thumbnail
+ await project.export({
+   outputPath: "./output.mp4",
+   thumbnail: { outputPath: "./thumb.jpg", time: 5, width: 640 },
+ });
+
+ // Debug — save the FFmpeg command to a file
+ await project.export({
+   outputPath: "./output.mp4",
+   verbose: true,
+   saveCommand: "./ffmpeg-command.txt",
  });
  ```

- ## Timeline Behavior
+ ## Advanced
+
+ ### Timeline Behavior

  - Clip timing uses `[position, end)` intervals in seconds
  - Transitions create overlaps that reduce total duration
  - Background music is mixed after video transitions (unaffected by crossfades)

- ### Transition Compensation
-
- FFmpeg's `xfade` transitions work by **overlapping** clips, which compresses the timeline. For example:
-
- - Clip A: 0-10s
- - Clip B: 10-20s with 1s fade transition
- - **Actual output duration: 19s** (not 20s)
-
- With multiple transitions, this compounds—10 clips with 0.5s transitions each would be ~4.5 seconds shorter than the sum of clip durations.
+ **Transition Compensation:**

- **Automatic Compensation (default):**
+ FFmpeg's `xfade` transitions **overlap** clips, compressing the timeline. A 1s fade between two 10s clips produces 19s of output, not 20s. With multiple transitions this compounds.

- By default, simple-ffmpeg automatically adjusts text and subtitle timings to compensate for this compression. When you position text at "15s", it appears at the visual 15s mark in the output video, regardless of how many transitions have occurred.
+ By default, simple-ffmpeg automatically adjusts text and subtitle timings to compensate. When you position text at "15s", it appears at the visual 15s mark regardless of how many transitions preceded it:

  ```ts
- // Text will appear at the correct visual position even with transitions
  await project.load([
    { type: "video", url: "./a.mp4", position: 0, end: 10 },
    {
@@ -1038,138 +916,247 @@ await project.load([
1038
916
  ]);
1039
917
  ```
1040
918
 
1041
- **Disabling Compensation:**
1042
-
1043
- If you need raw timeline positioning (e.g., you've pre-calculated offsets yourself):
1044
-
1045
- ```ts
1046
- await project.export({
1047
- outputPath: "./output.mp4",
1048
- compensateTransitions: false, // Use raw timestamps
1049
- });
1050
- ```
1051
-
1052
- ## Auto-Batching for Complex Filter Graphs
919
+ Disable with `compensateTransitions: false` in export options if you've pre-calculated offsets yourself.
1053
920
 
1054
- FFmpeg's `filter_complex` has platform-specific length limits (Windows ~32KB, macOS ~1MB, Linux ~2MB). When text animations like typewriter create many filter nodes, the command can exceed these limits.
921
+ ### Auto-Batching
1055
922
 
1056
- **simple-ffmpeg automatically handles this:**
923
+ FFmpeg's `filter_complex` has platform-specific length limits (Windows ~32KB, macOS ~1MB, Linux ~2MB). When text animations create many filter nodes, the command can exceed these limits.
1057
924
 
1058
- 1. **Auto-detection**: Before running FFmpeg, the library checks if the filter graph exceeds a safe 100KB limit
1059
- 2. **Smart batching**: If too long, text overlays are rendered in multiple passes with intermediate files
1060
- 3. **Optimal batch sizing**: Calculates the ideal number of nodes per pass based on actual filter complexity
925
+ simple-ffmpeg handles this automatically detecting oversized filter graphs and splitting text overlays into multiple rendering passes with intermediate files. No configuration needed.
1061
926
 
1062
- This happens transparently—you don't need to configure anything. For very complex projects, you can tune it manually:
927
+ For very complex projects, you can tune it:
1063
928
 
1064
929
  ```js
1065
930
  await project.export({
1066
- outputPath: "./output.mp4",
1067
- // Lower this if you have many complex text animations
1068
931
  textMaxNodesPerPass: 30, // default: 75
1069
- // Intermediate encoding settings (used between passes)
1070
932
  intermediateVideoCodec: "libx264", // default
1071
933
  intermediateCrf: 18, // default (high quality)
1072
934
  intermediatePreset: "veryfast", // default (fast encoding)
1073
935
  });
1074
936
  ```
1075
937
 
1076
- **When batching activates:**
1077
-
1078
- - Typewriter animations with long text (creates one filter node per character)
1079
- - Many simultaneous text overlays
1080
- - Complex animation combinations
1081
-
1082
- With `verbose: true`, you'll see when auto-batching kicks in:
1083
-
1084
- ```
1085
- simple-ffmpeg: Auto-batching text (filter too long: 150000 > 100000). Using 35 nodes per pass.
1086
- ```
938
+ Batching activates for typewriter animations with long text, many simultaneous text overlays, or complex animation combinations. With `verbose: true`, you'll see when it kicks in.
1087
939
 
1088
940
  ## Real-World Usage Patterns
1089
941
 
1090
942
  ### Data Pipeline Example
1091
943
 
1092
- Generate videos programmatically from structured data (JSON, database, API, CMS):
944
+ Generate videos programmatically from structured data database records, API responses, CMS content, etc. This example creates property tour videos from real estate listings:
1093
945
 
  ```js
- const SIMPLEFFMPEG = require("simple-ffmpegjs");
+ import SIMPLEFFMPEG from "simple-ffmpegjs";

- // Your data source - could be database records, API response, etc.
- const quotes = [
- {
- text: "The only way to do great work is to love what you do.",
- author: "Steve Jobs",
- },
- { text: "Move fast and break things.", author: "Mark Zuckerberg" },
- ];
+ const listings = await db.getActiveListings(); // your data source
+
+ async function generateListingVideo(listing, outputPath) {
+ const photos = listing.photos; // ['kitchen.jpg', 'living-room.jpg', ...]
+ const slideDuration = 4;
+
+ // Build an image slideshow from listing photos
+ const photoClips = photos.map((photo, i) => ({
+ type: "image",
+ url: photo,
+ position: i * slideDuration,
+ end: (i + 1) * slideDuration,
+ kenBurns: i % 2 === 0 ? "zoom-in" : "pan-right",
+ }));
+
+ const totalDuration = photos.length * slideDuration;

- async function generateQuoteVideo(quote, outputPath) {
  const clips = [
- { type: "video", url: "./backgrounds/default.mp4", position: 0, end: 5 },
+ ...photoClips,
+ // Price banner
  {
  type: "text",
- text: `"${quote.text}"`,
+ text: listing.price,
  position: 0.5,
- end: 4,
- fontSize: 42,
+ end: totalDuration - 0.5,
+ fontSize: 36,
  fontColor: "#FFFFFF",
- yPercent: 0.4,
- animation: { type: "fade-in", in: 0.3 },
+ backgroundColor: "#000000",
+ backgroundOpacity: 0.6,
+ padding: 12,
+ xPercent: 0.5,
+ yPercent: 0.1,
  },
+ // Address at the bottom
  {
  type: "text",
- text: `— ${quote.author}`,
- position: 1.5,
- end: 4.5,
+ text: listing.address,
+ position: 0.5,
+ end: totalDuration - 0.5,
  fontSize: 28,
- fontColor: "#CCCCCC",
- yPercent: 0.6,
- animation: { type: "fade-in", in: 0.3 },
+ fontColor: "#FFFFFF",
+ borderColor: "#000000",
+ borderWidth: 2,
+ xPercent: 0.5,
+ yPercent: 0.9,
  },
+ { type: "music", url: "./assets/ambient.mp3", volume: 0.15, loop: true },
  ];

- const project = new SIMPLEFFMPEG({ preset: "tiktok" });
+ const project = new SIMPLEFFMPEG({ preset: "instagram-reel" });
  await project.load(clips);
  return project.export({ outputPath });
  }

- // Batch process all quotes
- for (const [i, quote] of quotes.entries()) {
- await generateQuoteVideo(quote, `./output/quote-${i + 1}.mp4`);
+ // Batch generate videos for all listings
+ for (const listing of listings) {
+ await generateListingVideo(listing, `./output/${listing.id}.mp4`);
  }
  ```

- ### AI Generation with Validation Loop
+ ### AI Video Generation Pipeline Example

- The structured validation with error codes makes it easy to build AI feedback loops:
+ Combine schema export, validation, and structured error codes to build a complete AI-driven video generation pipeline. The schema gives the model the exact specification it needs, and the validation loop lets it self-correct until the output is valid.

  ```js
- const SIMPLEFFMPEG = require("simple-ffmpegjs");
-
- async function generateVideoWithAI(prompt) {
- let config = await ai.generateVideoConfig(prompt);
- let result = SIMPLEFFMPEG.validate(config, { skipFileChecks: true });
- let retries = 0;
-
- // Let AI fix its own mistakes
- while (!result.valid && retries < 3) {
- // Feed structured errors back to AI for correction
- config = await ai.fixConfig(config, result.errors);
- result = SIMPLEFFMPEG.validate(config, { skipFileChecks: true });
- retries++;
+ import SIMPLEFFMPEG from "simple-ffmpegjs";
+
+ // 1. Build the schema context for the AI
+ // Only expose the clip types you want the AI to work with.
+ // Developer-level config (codecs, resolution, etc.) stays out of the schema.
+
+ const schema = SIMPLEFFMPEG.getSchema({
+ include: ["video", "image", "text", "music"],
+ instructions: [
+ "You are composing a short-form video for TikTok.",
+ "Keep total duration under 30 seconds.",
+ "Return ONLY valid JSON: an array of clip objects.",
+ ],
+ moduleInstructions: {
+ video: "Use fade transitions between clips. Keep each clip 3-6 seconds.",
+ text: [
+ "Add a title in the first 2 seconds with fontSize 72.",
+ "Use white text with a black border for readability.",
+ ],
+ music: "Always include looping background music at volume 0.15.",
+ },
+ });
+
+ // 2. Send the schema + prompt to your LLM
+
+ async function askAI(systemPrompt, userPrompt) {
+ // Replace with your LLM provider (OpenAI, Anthropic, etc.)
+ const response = await llm.chat({
+ messages: [
+ { role: "system", content: systemPrompt },
+ { role: "user", content: userPrompt },
+ ],
+ });
+ return JSON.parse(response.content);
+ }
+
+ // 3. Generate → Validate → Retry loop
+
+ async function generateVideo(userPrompt, media) {
+ // Build the system prompt with schema + available media and their details.
+ // Descriptions and durations help the AI make good creative decisions —
+ // ordering clips logically, setting accurate position/end times, etc.
+ const mediaList = media
+ .map((m) => ` - ${m.file} (${m.duration}s) — ${m.description}`)
+ .join("\n");
+
+ const systemPrompt = [
+ "You are a video editor. Given the user's request and the available media,",
+ "produce a clips array that follows this schema:\n",
+ schema,
+ "\nAvailable media (use these exact file paths):",
+ mediaList,
+ ].join("\n");
+
+ const knownPaths = media.map((m) => m.file);
+
+ // First attempt
+ let clips = await askAI(systemPrompt, userPrompt);
+ let result = SIMPLEFFMPEG.validate(clips, { skipFileChecks: true });
+ let attempts = 1;
+
+ // Self-correction loop: feed structured errors back to the AI
+ while (!result.valid && attempts < 3) {
+ const errorFeedback = result.errors
+ .map((e) => `[${e.code}] ${e.path}: ${e.message}`)
+ .join("\n");
+
+ clips = await askAI(
+ systemPrompt,
+ [
+ `Your previous output had validation errors:\n${errorFeedback}`,
+ `\nOriginal request: ${userPrompt}`,
+ "\nPlease fix the errors and return the corrected clips array.",
+ ].join("\n")
+ );
+
+ result = SIMPLEFFMPEG.validate(clips, { skipFileChecks: true });
+ attempts++;
  }

  if (!result.valid) {
- throw new Error("AI failed to generate valid config");
+ throw new Error(
+ `Failed to generate valid config after ${attempts} attempts:\n` +
+ SIMPLEFFMPEG.formatValidationResult(result)
+ );
+ }
+
+ // 4. Verify the AI only used known media paths
+ // The structural loop (skipFileChecks: true) can't catch hallucinated paths.
+ // You could also put this inside the retry loop to let the AI self-correct
+ // bad paths — just append the unknown paths to the error feedback string.
+
+ const usedPaths = clips.filter((c) => c.url).map((c) => c.url);
+ const unknownPaths = usedPaths.filter((p) => !knownPaths.includes(p));
+ if (unknownPaths.length > 0) {
+ throw new Error(`AI used unknown media paths: ${unknownPaths.join(", ")}`);
  }

- const project = new SIMPLEFFMPEG({ width: 1080, height: 1920 });
- await project.load(config);
- return project.export({ outputPath: "./output.mp4" });
+ // 5. Build and export
+ // load() will also throw MediaNotFoundError if any file is missing on disk.
+
+ const project = new SIMPLEFFMPEG({ preset: "tiktok" });
+ await project.load(clips);
+
+ return project.export({
+ outputPath: "./output.mp4",
+ onProgress: ({ percent }) => console.log(`Rendering: ${percent}%`),
+ });
  }
+
+ // Usage
+
+ await generateVideo("Make a hype travel montage with upbeat text overlays", [
+ {
+ file: "clips/beach-drone.mp4",
+ duration: 4,
+ description:
+ "Aerial drone shot of a tropical beach with people playing volleyball",
+ },
+ {
+ file: "clips/city-timelapse.mp4",
+ duration: 8,
+ description: "Timelapse of a city skyline transitioning from day to night",
+ },
+ {
+ file: "clips/sunset.mp4",
+ duration: 6,
+ description: "Golden hour sunset over the ocean with gentle waves",
+ },
+ {
+ file: "music/upbeat-track.mp3",
+ duration: 120,
+ description:
+ "Upbeat electronic track with a strong beat, good for montages",
+ },
+ ]);
  ```

- Each validation error includes a `code` (e.g., `INVALID_TIMELINE`, `MISSING_REQUIRED`) and `path` (e.g., `clips[2].position`) for precise AI feedback.
+ The key parts of this pattern:
+
+ 1. **`getSchema()`** gives the AI a precise specification of what it can produce, with only the clip types you've chosen to expose.
+ 2. **`instructions` / `moduleInstructions`** embed your creative constraints directly into the spec — the AI treats them the same as built-in rules.
+ 3. **Media descriptions** with durations and content details give the AI enough context to make good creative decisions — ordering clips logically, setting accurate timings, and choosing the right media for each part of the video.
+ 4. **`validate()`** with `skipFileChecks: true` checks structural correctness in the retry loop — types, timelines, required fields — without touching the filesystem.
+ 5. **The retry loop** lets the AI self-correct. Most validation failures resolve in one retry.
+ 6. **The path guard** catches hallucinated file paths before `load()` hits the filesystem. You can optionally move this check inside the retry loop to let the AI self-correct bad paths. `load()` itself will also throw `MediaNotFoundError` if a file is missing on disk.
 
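+ The retry loop works because each validation error is a plain object with `code`, `path`, and `message` fields, so it flattens into LLM-readable feedback with a single `map`/`join`. A minimal sketch of that formatting step in isolation (the error values below are hand-written for illustration, not produced by a real `validate()` call):

```js
// Shape of a failed validation result; these error values are made up.
const result = {
  valid: false,
  errors: [
    { code: "MISSING_REQUIRED", path: "clips[0].url", message: "url is required" },
    { code: "INVALID_TIMELINE", path: "clips[2].position", message: "end must be greater than position" },
  ],
};

// Same formatting used in the self-correction loop above.
const errorFeedback = result.errors
  .map((e) => `[${e.code}] ${e.path}: ${e.message}`)
  .join("\n");

console.log(errorFeedback);
// [MISSING_REQUIRED] clips[0].url: url is required
// [INVALID_TIMELINE] clips[2].position: end must be greater than position
```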
  ## Testing