vidpipe 1.3.7 → 1.3.9
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +142 -16
- package/dist/cli.js +115 -11
- package/dist/cli.js.map +1 -1
- package/package.json +1 -1
package/README.md
CHANGED
@@ -9,16 +9,16 @@
 ╚═══╝ ╚═╝╚═════╝ ╚═╝ ╚═╝╚═╝ ╚══════╝
 ```
 
-**Your AI video editor — turn raw recordings into shorts, reels, captions, social posts, and blog posts.
+**Your AI video editor and content ideation engine — turn raw recordings into shorts, reels, captions, social posts, and blog posts. Ideate, record, edit, publish.**
 
-An agentic video editor that watches for new recordings and edits them into social-media-ready content — shorts, reels, captions, blog posts, and platform-tailored social posts — using [GitHub Copilot SDK](https://github.com/github/copilot-sdk) AI agents and
+An agentic video editor and content ideation platform that watches for new recordings and edits them into social-media-ready content — shorts, reels, captions, blog posts, and platform-tailored social posts — using [GitHub Copilot SDK](https://github.com/github/copilot-sdk) AI agents, OpenAI Whisper, and Google Gemini.
 
 [](https://github.com/htekdev/vidpipe/actions/workflows/ci.yml)
 [](https://www.npmjs.com/package/vidpipe)
 [](https://nodejs.org/)
 [](./LICENSE)
 [](https://htekdev.github.io/vidpipe/)
-[](.)
 
 </div>
 
@@ -38,28 +38,32 @@ npm install -g vidpipe
 
 <table>
   <tr>
+    <td>💡 <b>Content Ideation (ID8)</b> — AI-generated, trend-backed video ideas</td>
     <td>🎙️ <b>Whisper Transcription</b> — Word-level timestamps</td>
-    <td>📐 <b>Split-Screen Layouts</b> — Portrait, square, and feed</td>
   </tr>
   <tr>
+    <td>📐 <b>Split-Screen Layouts</b> — Portrait, square, and feed</td>
     <td>🔇 <b>AI Silence Removal</b> — Context-aware, capped at 20%</td>
+  </tr>
+  <tr>
     <td>💬 <b>Karaoke Captions</b> — Word-by-word highlighting</td>
+    <td>✂️ <b>Short Clips</b> — Best 15–60s moments, hook-first ordering</td>
   </tr>
   <tr>
-    <td>✂️ <b>Short Clips</b> — Best 15–60s moments, multi-segment</td>
     <td>🎞️ <b>Medium Clips</b> — 1–3 min with crossfade transitions</td>
+    <td>📑 <b>Chapter Detection</b> — JSON, Markdown, YouTube, FFmeta</td>
  </tr>
   <tr>
-    <td>📑 <b>Chapter Detection</b> — JSON, Markdown, YouTube, FFmeta</td>
     <td>📱 <b>Social Posts</b> — TikTok, YouTube, Instagram, LinkedIn, X</td>
+    <td>📰 <b>Blog Post</b> — Dev.to style with web-sourced links</td>
   </tr>
   <tr>
-    <td>📰 <b>Blog Post</b> — Dev.to style with web-sourced links</td>
     <td>🎨 <b>Brand Voice</b> — Custom tone, hashtags via brand.json</td>
+    <td>🔍 <b>Face Detection</b> — ONNX-based webcam cropping</td>
   </tr>
   <tr>
-    <td
-    <td
+    <td>🚀 <b>Auto-Publish</b> — Scheduled posting via Late API</td>
+    <td>👁️ <b>Gemini Vision</b> — AI video analysis and scene detection</td>
   </tr>
 </table>
 
@@ -92,6 +96,9 @@ vidpipe --watch-dir ~/Videos/Recordings
 # Generate a saved idea bank for future recordings
 vidpipe ideate --topics "GitHub Copilot, Azure, TypeScript" --count 4
 
+# Add a single idea with AI enrichment
+vidpipe ideate --add --topic "Building CI/CD with GitHub Actions"
+
 # Full example with options
 vidpipe \
   --watch-dir ~/Videos/Recordings \
@@ -118,28 +125,57 @@ vidpipe [options] [video-path]
 vidpipe init      # Interactive setup wizard
 vidpipe review    # Open post review web app
 vidpipe schedule  # View posting schedule
+vidpipe realign   # Realign scheduled posts to match schedule.json
 vidpipe ideate    # Generate or list saved content ideas
+vidpipe chat      # Interactive schedule management agent
+vidpipe doctor    # Check all prerequisites
 ```
 
+### Process Options
+
 | Option | Description |
 |--------|-------------|
-| `--doctor` | Check that all prerequisites (FFmpeg, API keys, etc.) are installed and configured |
 | `[video-path]` | Process a specific video file (implies `--once`) |
 | `--watch-dir <path>` | Folder to watch for new recordings |
 | `--output-dir <path>` | Output directory (default: `./recordings`) |
 | `--openai-key <key>` | OpenAI API key |
 | `--exa-key <key>` | Exa AI key for web search in social posts |
 | `--brand <path>` | Path to `brand.json` (default: `./brand.json`) |
+| `--ideas <ids>` | Comma-separated idea IDs to link to this video |
 | `--once` | Process next video and exit |
 | `--no-silence-removal` | Skip silence removal |
 | `--no-shorts` | Skip short clip extraction |
 | `--no-medium-clips` | Skip medium clip generation |
 | `--no-social` | Skip social media posts |
 | `--no-social-publish` | Skip social media queue-build stage |
-| `--late-api-key <key>` | Override Late API key |
 | `--no-captions` | Skip caption generation/burning |
 | `--no-git` | Skip git commit/push |
+| `--late-api-key <key>` | Override Late API key |
 | `-v, --verbose` | Debug-level logging |
+| `--doctor` | Check that all prerequisites are installed |
+
+### Ideate Options
+
+| Option | Description |
+|--------|-------------|
+| `--topics <topics>` | Comma-separated seed topics for trend research |
+| `--count <n>` | Number of ideas to generate (default: 5) |
+| `--list` | List existing ideas instead of generating |
+| `--status <status>` | Filter by status: `draft`, `ready`, `recorded`, `published` |
+| `--format <format>` | Output format: `table` (default) or `json` |
+| `--output <dir>` | Ideas directory (default: `./ideas`) |
+| `--brand <path>` | Brand config path (default: `./brand.json`) |
+| `--add` | Create a single idea (AI-enriched by default) |
+| `--topic <topic>` | Topic for the idea (required with `--add`) |
+| `--hook <hook>` | Opening hook (AI-generated if omitted) |
+| `--audience <audience>` | Target audience (default: `"developers"`) |
+| `--platforms <list>` | Comma-separated platforms: `youtube,tiktok,instagram,linkedin,x` |
+| `--key-takeaway <msg>` | Core message (AI-generated if omitted) |
+| `--talking-points <list>` | Comma-separated talking points |
+| `--tags <list>` | Comma-separated categorization tags |
+| `--publish-by <date>` | Publish-by date (default: 14 days from now) |
+| `--trend-context <text>` | Trend research context |
+| `--no-ai` | Skip AI research agent, use CLI values + defaults |
 
 ---
 
@@ -190,6 +226,69 @@ recordings/
 
 ---
 
+## 💡 Content Ideation (ID8)
+
+VidPipe includes a research-backed content ideation engine that generates video ideas before you record. Ideas are stored as GitHub Issues for full lifecycle tracking.
+
+```bash
+# Generate ideas backed by trend research
+vidpipe ideate --topics "GitHub Copilot, TypeScript" --count 4
+
+# List all saved ideas
+vidpipe ideate --list
+
+# Filter by status
+vidpipe ideate --list --status ready
+
+# JSON output for programmatic access (e.g., VidRecord integration)
+vidpipe ideate --list --format json
+
+# Link ideas to a recording
+vidpipe process video.mp4 --ideas 12,15
+```
+
+### Manual Idea Creation
+
+Add a single idea with AI enrichment or direct CLI values:
+
+```bash
+# AI-researched — full IdeationAgent with MCP research tools
+vidpipe ideate --add --topic "Building CI/CD with GitHub Actions"
+
+# Direct — skip AI, use CLI flags + defaults
+vidpipe ideate --add --topic "Quick Demo" --no-ai --hook "Ship it live" --audience "developers"
+
+# JSON output for programmatic consumers (e.g., VidRecord Electron app)
+vidpipe ideate --add --topic "My Topic" --format json
+```
+
+### How It Works
+
+The **IdeationAgent** uses MCP tools (Exa web search, YouTube, Perplexity) to research trending topics in your niche before generating ideas. Each idea includes:
+
+- **Topic & hook** — The angle that makes it compelling
+- **Audience & key takeaway** — Who it's for and what they'll learn
+- **Talking points** — Structured bullet points to guide your recording
+- **Publish-by date** — Based on timeliness (3–5 days for hot trends, months for evergreen)
+- **Trend context** — The research findings that back the idea
+
+### Idea Lifecycle
+
+```
+draft → ready → recorded → published
+```
+
+| Status | Meaning |
+|--------|---------|
+| `draft` | Generated by AI, awaiting your review |
+| `ready` | Approved — ready to record |
+| `recorded` | Linked to a video via `--ideas` flag |
+| `published` | Content from this idea has been published |
+
+Ideas automatically influence downstream content — when you link ideas to a recording with `--ideas`, the pipeline's agents (shorts, social posts, summaries, blog) reference your intended topic and hook for more focused output.
+
+---
+
 ## 📺 Review App
 
 VidPipe includes a built-in web app for reviewing, editing, and scheduling social media posts before publishing.
@@ -290,6 +389,8 @@ OUTPUT_DIR=/path/to/output
 # FFMPEG_PATH=/usr/local/bin/ffmpeg
 # FFPROBE_PATH=/usr/local/bin/ffprobe
 # LATE_API_KEY=sk_your_key_here  # Optional: Late API for social publishing
+# GITHUB_TOKEN=ghp_...           # Optional: GitHub token for ID8 idea storage
+# IDEAS_REPO=owner/repo          # Optional: GitHub repo for storing ideas as Issues
 ```
 
 Social media publishing is configured via `schedule.json` and the Late API. See [Social Publishing Guide](./docs/social-publishing.md) for details.
@@ -305,12 +406,29 @@ Social media publishing is configured via `schedule.json` and the Late API. See
 | [FFmpeg Setup](./docs/ffmpeg-setup.md) | Platform-specific install (Windows, macOS, Linux, ARM64) |
 | [Brand Customization](./docs/brand-customization.md) | Customize AI voice, vocabulary, hashtags, and content style |
 | [Social Publishing](./docs/social-publishing.md) | Review, schedule, and publish social posts via Late API |
+| [Architecture (L0–L7)](./docs/architecture/layers.md) | Layer hierarchy, import rules, and testing strategy |
+| [Platform Content Strategy](./docs/platform-content-strategy.md) | Research-backed recommendations per social platform |
+
+Full reference docs are available at [htekdev.github.io/vidpipe](https://htekdev.github.io/vidpipe/).
 
 ---
 
 ## 🏗️ Architecture
 
-
+VidPipe uses a strict **L0–L7 layered architecture** where each layer can only import from specific lower layers. This enforces clean separation of concerns and makes every layer independently testable.
+
+```
+L7-app       CLI, servers, watchers           → L0, L1, L3, L6
+L6-pipeline  Stage orchestration              → L0, L1, L5
+L5-assets    Lazy-loaded asset + bridges      → L0, L1, L4
+L4-agents    LLM agents (BaseAgent)           → L0, L1, L3
+L3-services  Business logic + cost tracking   → L0, L1, L2
+L2-clients   External API/process wrappers    → L0, L1
+L1-infra     Infrastructure (config, logger)  → L0
+L0-pure      Pure functions, zero I/O         → (nothing)
+```
+
+Each editing task is handled by a specialized AI agent built on the [GitHub Copilot SDK](https://github.com/github/copilot-sdk):
 
 ```mermaid
 graph TD
@@ -321,6 +439,7 @@ graph TD
 BP --> CA[ChapterAgent]
 BP --> SMA[SocialMediaAgent]
 BP --> BA[BlogAgent]
+BP --> IA[IdeationAgent]
 
 SRA -->|tools| T1[detect_silence, decide_removals]
 SHA -->|tools| T2[plan_shorts]
@@ -329,11 +448,13 @@ graph TD
 SA -->|tools| T5[capture_frame, write_summary]
 SMA -->|tools| T6[search_links, create_posts]
 BA -->|tools| T7[search_web, write_blog]
+IA -->|tools| T8[web_search, youtube_search, generate_ideas]
 
 style BP fill:#1e3a5f,stroke:#60a5fa,color:#fff
+style IA fill:#5a4d27,stroke:#fbbf24,color:#fff
 ```
 
-Each agent communicates with the LLM through structured tool calls, ensuring reliable, parseable outputs.
+Each agent communicates with the LLM through structured tool calls, ensuring reliable, parseable outputs. See the [Architecture Guide](./docs/architecture/layers.md) for full details on layer rules and import enforcement.
 
 ---
 
@@ -344,23 +465,28 @@ Each agent communicates with the LLM through structured tool calls, ensuring rel
 | [TypeScript](https://www.typescriptlang.org/) | Language (ES2022, ESM) |
 | [GitHub Copilot SDK](https://github.com/github/copilot-sdk) | AI agent framework |
 | [OpenAI Whisper](https://platform.openai.com/docs/guides/speech-to-text) | Speech-to-text |
+| [Google Gemini](https://ai.google.dev/) | Vision-based video analysis |
 | [FFmpeg](https://ffmpeg.org/) | Video/audio processing |
 | [Sharp](https://sharp.pixelplumbing.com/) | Image analysis (webcam detection) |
+| [Octokit](https://github.com/octokit/octokit.js) | GitHub API (idea storage as Issues) |
 | [Commander.js](https://github.com/tj/commander.js) | CLI framework |
 | [Chokidar](https://github.com/paulmillr/chokidar) | File system watching |
 | [Winston](https://github.com/winstonjs/winston) | Logging |
-| [Exa AI](https://exa.ai/) | Web search for social posts and
+| [Exa AI](https://exa.ai/) | Web search for social posts, blog, and ideation |
 
 ---
 
 ## 🗺️ Roadmap
 
 - [x] **Automated social posting** — Publish directly to platforms via Late API
+- [x] **Content ideation (ID8)** — AI-generated, trend-backed video ideas with lifecycle tracking
+- [x] **Gemini Vision integration** — AI-powered video analysis and scene detection
+- [x] **L0–L7 layered architecture** — Strict separation of concerns with import enforcement
+- [x] **GitHub agentic workflows** — Automated issue and PR triage via GitHub Actions
+- [x] **Hook-first clip ordering** — Most engaging moment plays first in shorts
 - [ ] **Multi-language support** — Transcription and summaries in multiple languages
 - [ ] **Custom templates** — User-defined Markdown & social post templates
-- [ ] **Web dashboard** — Browser UI for reviewing and editing outputs
 - [ ] **Batch processing** — Process an entire folder of existing videos
-- [ ] **Custom short criteria** — Configure what makes a "good" short for your content
 - [ ] **Thumbnail generation** — Auto-generate branded thumbnails for shorts
 
 ---
package/dist/cli.js
CHANGED
@@ -11557,12 +11557,13 @@ function isStringArray(value) {
 function hasField(source, field) {
   return Object.prototype.hasOwnProperty.call(source, field);
 }
-function normalizeCount(count) {
+function normalizeCount(count, allowSingle) {
+  const min = allowSingle ? 1 : MIN_IDEA_COUNT;
   if (typeof count !== "number" || Number.isNaN(count)) {
-    return
+    return min;
   }
   const rounded = Math.round(count);
-  return Math.min(MAX_IDEA_COUNT, Math.max(
+  return Math.min(MAX_IDEA_COUNT, Math.max(min, rounded));
 }
 function normalizeSeedTopics(seedTopics) {
   return (seedTopics ?? []).map((topic) => topic.trim()).filter((topic) => topic.length > 0);
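The `normalizeCount` change above is what lets `--add` request exactly one idea. A minimal standalone sketch of the new clamping behavior, with one stated assumption: the `MIN_IDEA_COUNT` and `MAX_IDEA_COUNT` values below are placeholders, since the real constants are defined elsewhere in the bundle.

```javascript
// Placeholder bounds; the real constants live elsewhere in dist/cli.js.
const MIN_IDEA_COUNT = 2;
const MAX_IDEA_COUNT = 10;

function normalizeCount(count, allowSingle) {
  // --add passes allowSingle, making a single idea a valid request.
  const min = allowSingle ? 1 : MIN_IDEA_COUNT;
  if (typeof count !== "number" || Number.isNaN(count)) {
    return min; // missing or invalid input falls back to the floor
  }
  const rounded = Math.round(count);
  return Math.min(MAX_IDEA_COUNT, Math.max(min, rounded));
}

console.log(normalizeCount(undefined, true)); // 1
console.log(normalizeCount(0.4, false));      // 2 (rounds to 0, clamped up)
console.log(normalizeCount(99, true));        // 10 (clamped down)
```

Note that the old code returned an unclamped fallback here; threading `allowSingle` through keeps the default `--count 5` path unchanged while relaxing the floor only for single-idea creation.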
@@ -12207,7 +12208,7 @@ var IdeationAgent = class extends BaseAgent {
 };
 async function generateIdeas(options = {}) {
   const seedTopics = normalizeSeedTopics(options.seedTopics);
-  const count = normalizeCount(options.count);
+  const count = normalizeCount(options.count, options.singleTopic);
   const config2 = getConfig();
   const previousBrandPath = config2.BRAND_PATH;
   if (options.brandPath) {
@@ -13161,11 +13162,30 @@ function generateIdeas3(...args) {
 }
 
 // src/L7-app/commands/ideate.ts
+init_types();
+var VALID_PLATFORMS = new Set(Object.values(Platform));
 async function runIdeate(options = {}) {
   initConfig();
+  if (options.add) {
+    await handleAdd(options);
+    return;
+  }
   if (options.list) {
     const ideas2 = await listIdeas();
     const filtered = options.status ? ideas2.filter((idea) => idea.status === options.status) : ideas2;
+    if (options.format === "json") {
+      const jsonIdeas = filtered.map((idea) => ({
+        issueNumber: idea.issueNumber,
+        id: idea.id,
+        topic: idea.topic,
+        hook: idea.hook,
+        audience: idea.audience,
+        platforms: idea.platforms,
+        status: idea.status
+      }));
+      console.log(JSON.stringify(jsonIdeas, null, 2));
+      return;
+    }
     if (filtered.length === 0) {
       console.log("No ideas found.");
       if (options.status) {
@@ -13188,12 +13208,14 @@ ${filtered.length} idea(s) total`);
   }
   const seedTopics = options.topics?.split(",").map((t) => t.trim()).filter(Boolean);
   const count = options.count ? parseInt(options.count, 10) : 5;
-
-
-
-
-
+  if (options.format !== "json") {
+    console.log("\n\u{1F9E0} Generating content ideas...\n");
+    if (seedTopics?.length) {
+      console.log(`Seed topics: ${seedTopics.join(", ")}`);
+    }
+    console.log(`Target count: ${count}
 `);
+  }
   const ideas = await generateIdeas3({
     seedTopics,
     count,
@@ -13201,7 +13223,24 @@ ${filtered.length} idea(s) total`);
     brandPath: options.brand
   });
   if (ideas.length === 0) {
-
+    if (options.format === "json") {
+      console.log(JSON.stringify([], null, 2));
+    } else {
+      console.log("No ideas were generated. Check your API key configuration.");
+    }
+    return;
+  }
+  if (options.format === "json") {
+    const jsonIdeas = ideas.map((idea) => ({
+      issueNumber: idea.issueNumber,
+      id: idea.id,
+      topic: idea.topic,
+      hook: idea.hook,
+      audience: idea.audience,
+      platforms: idea.platforms,
+      status: idea.status
+    }));
+    console.log(JSON.stringify(jsonIdeas, null, 2));
     return;
   }
   console.log(`
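The `--format json` branches above project each idea down to the same stable subset of fields before printing. An isolated sketch of that projection, using a hypothetical idea record (the field names come from the hunk; the sample values are invented):

```javascript
// Hypothetical idea record; only its shape matters here.
const ideas = [
  {
    issueNumber: 12,
    id: "idea-12",
    topic: "Building CI/CD with GitHub Actions",
    hook: "Ship it live",
    audience: "developers",
    platforms: ["youtube"],
    status: "draft",
    trendContext: "internal research notes" // intentionally excluded below
  }
];

// Same projection the diff adds: keep only the consumer-facing fields.
const jsonIdeas = ideas.map((idea) => ({
  issueNumber: idea.issueNumber,
  id: idea.id,
  topic: idea.topic,
  hook: idea.hook,
  audience: idea.audience,
  platforms: idea.platforms,
  status: idea.status
}));

console.log(JSON.stringify(jsonIdeas, null, 2));
```

Emitting a fixed projection instead of the raw objects gives programmatic consumers (the README mentions a VidRecord integration) a stable contract, and keeps internal fields like the trend-research notes out of the machine-readable output.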
@@ -13219,6 +13258,71 @@ ${filtered.length} idea(s) total`);
   console.log("Use `vidpipe ideate --list` to view all ideas.");
   console.log("Use `vidpipe process video.mp4 --ideas <issueNumber1>,<issueNumber2>` to link ideas to a recording.");
 }
+function parseCommaSeparated(value) {
+  if (!value) return [];
+  return value.split(",").map((s) => s.trim()).filter(Boolean);
+}
+function parsePlatforms(value) {
+  if (!value) return ["youtube" /* YouTube */];
+  const names = parseCommaSeparated(value);
+  const platforms = [];
+  for (const name of names) {
+    const lower = name.toLowerCase();
+    if (!VALID_PLATFORMS.has(lower)) {
+      throw new Error(`Invalid platform "${name}". Valid platforms: ${[...VALID_PLATFORMS].join(", ")}`);
+    }
+    platforms.push(lower);
+  }
+  return platforms.length > 0 ? platforms : ["youtube" /* YouTube */];
+}
+function defaultPublishBy() {
+  const date = /* @__PURE__ */ new Date();
+  date.setDate(date.getDate() + 14);
+  return date.toISOString().split("T")[0];
+}
+function buildDirectInput(options) {
+  const topic = options.topic;
+  const hook = options.hook ?? topic;
+  const audience = options.audience ?? "developers";
+  const platforms = parsePlatforms(options.platforms);
+  const keyTakeaway = options.keyTakeaway ?? hook;
+  const talkingPoints = parseCommaSeparated(options.talkingPoints);
+  const tags = parseCommaSeparated(options.tags);
+  const publishBy = options.publishBy ?? defaultPublishBy();
+  const trendContext = options.trendContext;
+  return { topic, hook, audience, keyTakeaway, talkingPoints, platforms, tags, publishBy, trendContext };
+}
+async function handleAdd(options) {
+  if (!options.topic) {
+    throw new Error("--topic is required when using --add");
+  }
+  const useAI = options.ai !== false;
+  if (useAI) {
+    const ideas = await generateIdeas3({
+      seedTopics: [options.topic],
+      count: 1,
+      singleTopic: true,
+      brandPath: options.brand
+    });
+    const idea = ideas[0];
+    if (!idea) {
+      throw new Error("IdeationAgent did not create an idea");
+    }
+    if (options.format === "json") {
+      console.log(JSON.stringify(idea, null, 2));
+    } else {
+      console.log(`Created idea #${idea.issueNumber}: "${idea.topic}"`);
+    }
+  } else {
+    const input = buildDirectInput(options);
+    const idea = await createIdea(input);
+    if (options.format === "json") {
+      console.log(JSON.stringify(idea, null, 2));
+    } else {
+      console.log(`Created idea #${idea.issueNumber}: "${idea.topic}"`);
+    }
+  }
+}
 
 // src/L1-infra/http/http.ts
 import { default as default8 } from "express";
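The new `--add` parsing helpers are small enough to exercise in isolation. The sketch below reproduces `parseCommaSeparated` and `parsePlatforms` from the hunk, with one assumption called out: `VALID_PLATFORMS` is inlined here from the platform list in the CLI help text, whereas the real bundle derives it from `Object.values(Platform)`.

```javascript
function parseCommaSeparated(value) {
  // Empty/undefined input yields an empty list; blanks are trimmed out.
  if (!value) return [];
  return value.split(",").map((s) => s.trim()).filter(Boolean);
}

// Assumed platform set, taken from the CLI help text in this diff.
const VALID_PLATFORMS = new Set(["youtube", "tiktok", "instagram", "linkedin", "x"]);

function parsePlatforms(value) {
  if (!value) return ["youtube"]; // YouTube is the default platform
  const platforms = [];
  for (const name of parseCommaSeparated(value)) {
    const lower = name.toLowerCase();
    if (!VALID_PLATFORMS.has(lower)) {
      throw new Error(`Invalid platform "${name}". Valid platforms: ${[...VALID_PLATFORMS].join(", ")}`);
    }
    platforms.push(lower);
  }
  return platforms.length > 0 ? platforms : ["youtube"];
}

console.log(parseCommaSeparated(" a, ,b "));     // ["a", "b"]
console.log(parsePlatforms("TikTok, YouTube"));  // ["tiktok", "youtube"]
```

Rejecting unknown platform names with a thrown error (rather than silently dropping them) surfaces typos at the CLI boundary before an idea is persisted.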
@@ -13853,7 +13957,7 @@ program.command("chat").description("Interactive chat session with the schedule
 program.command("doctor").description("Check all prerequisites and dependencies").action(async () => {
   await runDoctor();
 });
-program.command("ideate").description("Generate AI-powered content ideas using trend research").option("--topics <topics>", "Comma-separated seed topics").option("--count <n>", "Number of ideas to generate (default: 5)", "5").option("--output <dir>", "Ideas directory (default: ./ideas)").option("--brand <path>", "Brand config path (default: ./brand.json)").option("--list", "List existing ideas instead of generating").option("--status <status>", "Filter by status when listing (draft|ready|recorded|published)").action(async (opts) => {
+program.command("ideate").description("Generate AI-powered content ideas using trend research").option("--topics <topics>", "Comma-separated seed topics").option("--count <n>", "Number of ideas to generate (default: 5)", "5").option("--output <dir>", "Ideas directory (default: ./ideas)").option("--brand <path>", "Brand config path (default: ./brand.json)").option("--list", "List existing ideas instead of generating").option("--status <status>", "Filter by status when listing (draft|ready|recorded|published)").option("--format <format>", "Output format: table (default) or json").option("--add", "Add a single idea (AI-researched by default, or --no-ai for direct)").option("--topic <topic>", "Idea topic/title (required with --add)").option("--hook <hook>", "Attention-grabbing hook (default: topic, --no-ai only)").option("--audience <audience>", "Target audience (default: developers, --no-ai only)").option("--platforms <platforms>", "Comma-separated platforms: tiktok,youtube,instagram,linkedin,x (--no-ai only)").option("--key-takeaway <takeaway>", "Core message the viewer should remember (--no-ai only)").option("--talking-points <points>", "Comma-separated talking points (--no-ai only)").option("--tags <tags>", "Comma-separated categorization tags (--no-ai only)").option("--publish-by <date>", "Publish deadline (ISO 8601 date, default: 14 days from now, --no-ai only)").option("--trend-context <context>", "Why this topic is timely (--no-ai only)").option("--no-ai", "Skip AI research agent \u2014 create directly from CLI flags + defaults").action(async (opts) => {
   initConfig();
   await runIdeate(opts);
   process.exit(0);