@vibeo/cli 0.3.3 → 0.3.5
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/dist/commands/install-skills.d.ts.map +1 -1
- package/dist/commands/install-skills.js +22 -83
- package/dist/commands/install-skills.js.map +1 -1
- package/package.json +2 -1
- package/skills/vibeo-audio/SKILL.md +283 -0
- package/skills/vibeo-core/SKILL.md +380 -0
- package/skills/vibeo-effects/SKILL.md +432 -0
- package/skills/vibeo-extras/SKILL.md +457 -0
- package/skills/vibeo-rendering/SKILL.md +364 -0
- package/src/commands/install-skills.ts +25 -82
package/skills/vibeo-extras/SKILL.md
@@ -0,0 +1,457 @@
# Vibeo Extras (`@vibeo/extras`)

## Overview

`@vibeo/extras` provides higher-level components for subtitles, audio visualization, scene graph management, and declarative audio mixing. These build on `@vibeo/core`, `@vibeo/audio`, and `@vibeo/effects`.

**When to use**: When your composition needs subtitles, audio waveforms/spectrograms, layered scene graphs, or multi-track audio mixing.

---

## API Reference

### Subtitle System

#### `Subtitle`

Renders subtitle text overlaid on the video, synced to the timeline.

```tsx
import { Subtitle } from "@vibeo/extras";

<Subtitle
  src={`1
00:00:00,000 --> 00:00:03,000
Hello world!

2
00:00:04,000 --> 00:00:07,000
Welcome to Vibeo.`}
  format="srt"
  position="bottom"
  fontSize={32}
  color="white"
  outlineColor="black"
  outlineWidth={2}
/>
```

**`SubtitleProps`**:

| Prop | Type | Default | Description |
|------|------|---------|-------------|
| `src` | `string` | — | URL to subtitle file, or inline subtitle string |
| `format?` | `SubtitleFormat` | `"auto"` | `"srt" \| "vtt" \| "auto"` |
| `style?` | `CSSProperties` | — | Additional CSS styles |
| `position?` | `"top" \| "bottom" \| "center"` | — | Vertical position |
| `fontSize?` | `number` | — | Font size in pixels |
| `color?` | `string` | — | Text color |
| `outlineColor?` | `string` | — | Text outline/stroke color |
| `outlineWidth?` | `number` | — | Outline stroke width |

#### `parseSRT(content: string): SubtitleCue[]`

Parse an SRT string into an array of cues.

#### `parseVTT(content: string): SubtitleCue[]`

Parse a WebVTT string into an array of cues.

#### `useSubtitle(cues, fps): UseSubtitleResult`

Hook that returns the active cue for the current frame.

**`SubtitleCue`**:

| Field | Type | Description |
|-------|------|-------------|
| `startTime` | `number` | Start time in seconds |
| `endTime` | `number` | End time in seconds |
| `text` | `string` | Cue text (may contain `<b>`, `<i>`, `<u>`) |

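`useSubtitle` is described as returning the cue whose time range covers the current frame. That lookup can be sketched in a few lines, assuming only the `SubtitleCue` shape above; the helper name `activeCue` and the exclusive `endTime` are our assumptions, not package exports:

```ts
interface SubtitleCue {
  startTime: number; // seconds
  endTime: number;   // seconds
  text: string;
}

// Select the cue covering the current frame, or null if none does.
// Treats endTime as exclusive so back-to-back cues never overlap (assumption).
function activeCue(cues: SubtitleCue[], frame: number, fps: number): SubtitleCue | null {
  const t = frame / fps;
  return cues.find((cue) => t >= cue.startTime && t < cue.endTime) ?? null;
}
```

At 30 fps, frame 45 is t = 1.5 s, so a cue spanning 0–3 s is active while a cue starting at 4 s is not.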
### Audio Visualization

#### `AudioWaveform`

Renders an audio waveform visualization synced to the current playback position.

```tsx
import { AudioWaveform } from "@vibeo/extras";

<AudioWaveform
  src="/music.mp3"
  width={800}
  height={200}
  color="cyan"
  backgroundColor="black"
  windowSize={60}
  barStyle="bars"
/>
```

**`AudioWaveformProps`**:

| Prop | Type | Default | Description |
|------|------|---------|-------------|
| `src` | `string` | — | Audio source URL |
| `width` | `number` | — | Width in pixels |
| `height` | `number` | — | Height in pixels |
| `color?` | `string` | — | Waveform color |
| `backgroundColor?` | `string` | — | Background color |
| `windowSize?` | `number` | — | Frames visible in the waveform window |
| `barStyle?` | `BarStyle` | — | `"bars" \| "line" \| "mirror"` |

#### `AudioSpectrogram`

Renders a scrolling frequency spectrogram.

```tsx
import { AudioSpectrogram } from "@vibeo/extras";

<AudioSpectrogram
  src="/music.mp3"
  width={800}
  height={300}
  colorMap="viridis"
  fftSize={2048}
/>
```

**`AudioSpectrogramProps`**:

| Prop | Type | Default | Description |
|------|------|---------|-------------|
| `src` | `string` | — | Audio source URL |
| `width` | `number` | — | Width in pixels |
| `height` | `number` | — | Height in pixels |
| `colorMap?` | `ColorMapName` | — | `"viridis" \| "magma" \| "inferno" \| "grayscale"` |
| `fftSize?` | `number` | — | FFT window size |

### Scene Graph

#### `SceneGraph` & `Layer`

Provides z-index management and named layer access for complex compositions.

```tsx
import { SceneGraph, Layer } from "@vibeo/extras";

<SceneGraph>
  <Layer name="background" zIndex={0}>
    <Background />
  </Layer>
  <Layer name="characters" zIndex={10} opacity={0.9}>
    <Characters />
  </Layer>
  <Layer name="ui" zIndex={100}>
    <UIOverlay />
  </Layer>
</SceneGraph>
```

**`LayerProps`**:

| Prop | Type | Default | Description |
|------|------|---------|-------------|
| `name` | `string` | — | Unique layer identifier |
| `zIndex?` | `number` | — | Z-order for stacking |
| `visible?` | `boolean` | `true` | Whether the layer renders |
| `opacity?` | `number` | `1` | Layer opacity (0-1) |
| `transform?` | `string` | — | CSS transform string |

#### `useLayer(name): LayerState`

Hook to read the state of a named layer from within the scene graph.

#### `SceneGraphContext`

React context providing `{ layers, setLayerState, getLayerState }`.

### Audio Mixing

#### `AudioMix` & `Track`

Declarative multi-track audio mixing with per-frame volume, ducking, and crossfade support.

```tsx
import { AudioMix, Track } from "@vibeo/extras";

<AudioMix>
  <Track src="/voice.mp3" volume={1} />
  <Track
    src="/music.mp3"
    volume={(frame) => (frame < 30 ? 1 : 0.3)}
    duckWhen="voice"
    duckAmount={0.3}
  />
</AudioMix>
```

**`TrackProps`**:

| Prop | Type | Default | Description |
|------|------|---------|-------------|
| `src` | `string` | — | Audio source URL |
| `volume?` | `VolumeInput` | — | `number \| (frame: number) => number` |
| `pan?` | `number` | — | Stereo panning (-1 to 1) |
| `startAt?` | `number` | — | Frame to start playback |
| `duckWhen?` | `string` | — | Track name to duck against |
| `duckAmount?` | `number` | — | Volume reduction when ducking (0-1) |

#### `crossfadeVolume(frame, startFrame, durationInFrames): number`

Utility for computing a crossfade-in volume curve. Returns 0→1 over `durationInFrames` frames starting at `startFrame`, clamped to [0, 1].

```ts
import { crossfadeVolume } from "@vibeo/extras";

const vol = crossfadeVolume(frame, 0, 30);
// 0→1 over frames 0-30, then holds at 1
```

To fade out, invert: `1 - crossfadeVolume(frame, fadeOutStart, fadeDuration)`.

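The curve described above is a clamped linear ramp, which makes it easy to reason about. A standalone sketch consistent with that description (not the package's actual source):

```ts
// Linear fade-in: 0 before startFrame, ramping to 1 over durationInFrames,
// clamped to [0, 1] on both sides.
function crossfadeVolume(frame: number, startFrame: number, durationInFrames: number): number {
  const t = (frame - startFrame) / durationInFrames;
  return Math.min(1, Math.max(0, t));
}
```

Because the result is clamped, the inverted form `1 - crossfadeVolume(...)` holds at 1 before the fade-out starts and at 0 after it ends.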
### Types

```ts
import type {
  SubtitleCue,
  SubtitleFormat,
  SubtitleProps,
  BarStyle,
  AudioWaveformProps,
  ColorMapName,
  AudioSpectrogramProps,
  LayerState,
  LayerProps,
  SceneGraphProps,
  SceneGraphContextValue,
  VolumeInput,
  TrackProps,
  AudioMixProps,
  UseSubtitleResult,
} from "@vibeo/extras";
```

---

## Subtitle Format Guide

### SRT Format

```
1
00:00:00,000 --> 00:00:03,000
Hello world!

2
00:00:04,000 --> 00:00:07,000
Welcome to <b>Vibeo</b>.
```

- Numbered entries separated by blank lines
- Timestamps: `HH:MM:SS,mmm --> HH:MM:SS,mmm` (comma for milliseconds)
- Basic HTML tags: `<b>`, `<i>`, `<u>`

### VTT (WebVTT) Format

```
WEBVTT

00:00:00.000 --> 00:00:03.000
Hello world!

00:00:04.000 --> 00:00:07.000
Welcome to <b>Vibeo</b>.
```

- Starts with `WEBVTT` header
- No cue numbers required
- Timestamps: `HH:MM:SS.mmm --> HH:MM:SS.mmm` (period for milliseconds)

### Auto-detection

When `format="auto"`, the parser checks:

1. If content starts with `WEBVTT` → VTT
2. Otherwise → SRT

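The detection rule and the `SubtitleCue` shape are enough to sketch the whole parsing path. The following is an illustrative reimplementation under the format rules above, not the package's actual code; `detectFormat`, `parseTimestamp`, and `parseSRTSketch` are names we made up:

```ts
type SubtitleFormat = "srt" | "vtt";

interface SubtitleCue {
  startTime: number; // seconds
  endTime: number;   // seconds
  text: string;
}

// Rule from the guide: a leading WEBVTT header means VTT, anything else is SRT.
function detectFormat(content: string): SubtitleFormat {
  return content.trimStart().startsWith("WEBVTT") ? "vtt" : "srt";
}

// "HH:MM:SS,mmm" (SRT) or "HH:MM:SS.mmm" (VTT) → seconds.
function parseTimestamp(ts: string): number {
  const [h, m, s] = ts.replace(",", ".").split(":");
  return Number(h) * 3600 + Number(m) * 60 + Number(s);
}

// Minimal SRT parsing: blank-line-separated blocks of index, timing line, text.
function parseSRTSketch(content: string): SubtitleCue[] {
  return content
    .trim()
    .split(/\r?\n\r?\n/)
    .flatMap((block) => {
      const lines = block.split(/\r?\n/);
      const timing = lines[1]?.match(/(\S+)\s*-->\s*(\S+)/);
      if (!timing) return [];
      return [{
        startTime: parseTimestamp(timing[1]),
        endTime: parseTimestamp(timing[2]),
        text: lines.slice(2).join("\n"),
      }];
    });
}
```

A real parser also has to handle multi-line cue text, out-of-order indices, and malformed blocks; this sketch only covers the happy path shown in the examples.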
---

## Audio Visualization Patterns

### Waveform with custom window

```tsx
// Show 2 seconds of waveform at 30fps
<AudioWaveform src="/audio.mp3" width={600} height={100} windowSize={60} barStyle="mirror" />
```

### Spectrogram for music analysis

```tsx
<AudioSpectrogram src="/song.mp3" width={800} height={400} colorMap="magma" fftSize={4096} />
```

Higher `fftSize` = better frequency resolution but less time precision.

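That trade-off is quantifiable: each FFT bin spans `sampleRate / fftSize` Hz, while each analysis window spans `fftSize / sampleRate` seconds. A quick sanity check (44.1 kHz is an assumed sample rate; the component does not expose one, and `fftTradeoff` is our helper, not a package export):

```ts
// Frequency resolution (Hz per bin) and time span (seconds per window)
// for a given FFT size at a given sample rate.
function fftTradeoff(fftSize: number, sampleRate: number) {
  return {
    hzPerBin: sampleRate / fftSize,
    windowSeconds: fftSize / sampleRate,
  };
}

const small = fftTradeoff(2048, 44100); // ~21.5 Hz bins, ~46 ms windows
const large = fftTradeoff(4096, 44100); // ~10.8 Hz bins, ~93 ms windows
```

Doubling `fftSize` halves the bin width but doubles the window length, which is why percussive detail smears at large sizes.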
---

## Scene Graph Usage for Complex Compositions

```tsx
// useCurrentFrame and interpolate are the @vibeo/core hooks/utilities used below
import { useCurrentFrame, interpolate } from "@vibeo/core";
import { SceneGraph, Layer } from "@vibeo/extras";

function ComplexScene() {
  const frame = useCurrentFrame();

  return (
    <SceneGraph>
      <Layer name="sky" zIndex={0}>
        <div style={{ background: "linear-gradient(#1a1a2e, #16213e)", width: "100%", height: "100%" }} />
      </Layer>
      <Layer name="mountains" zIndex={10} opacity={interpolate(frame, [0, 30], [0, 1])}>
        <Mountains />
      </Layer>
      <Layer name="foreground" zIndex={20}>
        <Characters />
      </Layer>
      <Layer name="hud" zIndex={100} visible={frame > 60}>
        <HUD />
      </Layer>
    </SceneGraph>
  );
}
```

Layers provide:

- **Z-ordering**: Higher `zIndex` renders on top
- **Visibility culling**: `visible={false}` prevents rendering
- **Opacity control**: Per-layer opacity
- **Named access**: `useLayer("hud")` reads layer state from children

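Z-ordering plus visibility culling reduces, in essence, to a filter and a sort over layer descriptors. A sketch of that ordering logic under the documented `LayerProps` (`visible` defaults to `true`; treating a missing `zIndex` as 0 is our assumption, and `renderOrder` is not a package export):

```ts
interface LayerDesc {
  name: string;
  zIndex?: number;
  visible?: boolean;
}

// Drop culled layers, then order back-to-front so higher zIndex paints last (on top).
// Missing zIndex is treated as 0 and missing visible as true, by assumption.
function renderOrder(layers: LayerDesc[]): string[] {
  return layers
    .filter((l) => l.visible !== false)
    .sort((a, b) => (a.zIndex ?? 0) - (b.zIndex ?? 0))
    .map((l) => l.name);
}
```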
---

## Audio Mixing Recipes

### Music ducking under voice

```tsx
<AudioMix>
  <Track src="/narration.mp3" volume={1} />
  <Track
    src="/bg-music.mp3"
    volume={0.8}
    duckWhen="narration"
    duckAmount={0.7}
  />
</AudioMix>
```

When the narration track has audio, music volume is reduced by 70%.

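Reading `duckAmount` as the fraction of volume removed while ducking (which is how the 70% figure above works out), the effective volume is a one-liner. The helper name `duckedVolume` is ours, not part of `@vibeo/extras`:

```ts
// Effective track volume while ducking: remove duckAmount of the base volume.
function duckedVolume(baseVolume: number, duckAmount: number, ducking: boolean): number {
  return ducking ? baseVolume * (1 - duckAmount) : baseVolume;
}

// With volume 0.8 and duckAmount 0.7, music plays at 0.8 while narration is
// silent and drops to 0.24 while narration is audible.
```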
### Crossfade between two songs

```tsx
<AudioMix>
  <Track
    src="/song1.mp3"
    volume={(frame) => 1 - crossfadeVolume(frame, 240, 60)}
    // Holds at 1, then fades 1→0 over frames 240-300
  />
  <Track
    src="/song2.mp3"
    volume={(frame) => crossfadeVolume(frame, 0, 60)}
    // Fades 0→1 over first 60 frames of this track
    startAt={240}
  />
</AudioMix>
```

### Volume fade in/out

```tsx
<Track
  src="/music.mp3"
  volume={(frame) => {
    if (frame < 30) return frame / 30; // fade in
    if (frame > 270) return (300 - frame) / 30; // fade out
    return 1;
  }}
/>
```

---

## Gotchas and Tips

1. **`Subtitle` `src` can be inline content or a URL** — if it looks like a URL, it will be fetched; otherwise, parsed directly.

2. **SRT uses commas for milliseconds** (`00:00:01,500`), while VTT uses periods (`00:00:01.500`). Mixing them up causes parse errors.

3. **`AudioWaveform` and `AudioSpectrogram` require audio data to load** — they may render empty on the first frame.

4. **Scene graph `Layer` names must be unique** within a `SceneGraph`.

5. **`duckWhen` references another `Track`'s `src`** — it auto-reduces volume when the named source has active audio.

6. **`crossfadeVolume` is a pure function** — it can be used outside of `<AudioMix>` for any volume curve calculation.

---

## LLM & Agent Integration

Vibeo's CLI is built with [incur](https://github.com/wevm/incur), making it natively discoverable by AI agents and LLMs.

### Discovering the API

```bash
# Get a compact summary of all CLI commands (ideal for LLM system prompts)
bunx @vibeo/cli --llms

# Get the full manifest with schemas, examples, and argument details
bunx @vibeo/cli --llms-full

# Get JSON Schema for a specific command (useful for structured tool calls)
bunx @vibeo/cli render --schema
bunx @vibeo/cli create --schema
```

### Using as an MCP Server

```bash
# Start Vibeo as an MCP (Model Context Protocol) server
bunx @vibeo/cli --mcp

# Register as a persistent MCP server for your agent
bunx @vibeo/cli mcp add
```

This lets LLMs call `create`, `render`, `preview`, and `list` as structured tool calls through the MCP protocol.

### Generating Skill Files

```bash
# Sync skill files to your agent's skill directory
bunx @vibeo/cli skills add
```

This generates markdown skill files that agents like Claude Code can discover and use to write Vibeo code without reading source.

### Agent-Friendly Output

```bash
# Output as JSON for programmatic consumption
bunx @vibeo/cli list --entry src/index.tsx --format json

# Output as YAML
bunx @vibeo/cli list --entry src/index.tsx --format yaml

# Filter output to specific keys
bunx @vibeo/cli list --entry src/index.tsx --filter-output compositions[0].id

# Count tokens in output (useful for context window planning)
bunx @vibeo/cli render --schema --token-count
```

### How LLMs Should Use Vibeo

1. **Discover commands**: Run `bunx @vibeo/cli --llms` to get the command manifest
2. **Create a project**: `bunx @vibeo/cli create my-video --template basic`
3. **Edit `src/index.tsx`**: Write React components using `@vibeo/core` hooks and components
4. **Preview**: `bunx @vibeo/cli preview --entry src/index.tsx`
5. **Render**: `bunx @vibeo/cli render --entry src/index.tsx --composition MyComp`

All commands accept `--format json` for structured output that LLMs can parse reliably.