veogent 1.0.21 → 1.0.25
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +54 -4
- package/index.js +358 -24
- package/package.json +1 -1
- package/skills/SKILL.md +185 -18
package/README.md
CHANGED
@@ -69,8 +69,13 @@ veogent create-project -n "Cyberpunk T-Rex" -k "T-rex, Neon, Sci-fi" -d "A massi
 # Get all chapters within a project ID
 veogent chapters <projectId>
 
-# View characters cast for the project
+# View characters cast for the project (includes readiness info)
 veogent characters <projectId>
+# Response includes: { characters: [{id, name, imageUri, ready}], characterReadiness: {total, ready, allReady} }
+
+# Check scene materialization status (how many scenes are ready)
+veogent scene-materialization-status -p <projectId> -c <chapterId>
+# Response: { expectedScenes, materializedScenes, status: "PROCESSING"|"READY"|"EMPTY" }
 
 # Create scenes automatically using AI-generated narrative scripts
 veogent create-scene -p <projectId> -c <chapterId> --flowkey -C "The T-Rex looks up at the sky." "A meteor shower begins."
@@ -91,8 +96,12 @@ Queue generation jobs directly from the terminal.
 *Note: VEOGENT uses strict validation depending on the request type.*
 
 ```bash
+# List supported models
+veogent image-models   # → { models: ["imagen3.5"] }
+veogent video-models   # → { models: ["veo_3_1_fast", "veo_3_1_fast_r2v"] }
+
 # Generate Image (Supports: imagen3.5)
-veogent request -t "GENERATE_IMAGES" -p <proj> -c <chap> -s <scene> -i "
+veogent request -t "GENERATE_IMAGES" -p <proj> -c <chap> -s <scene> -i "imagen3.5"
 
 # Generate Video (Supports: veo_3_1_fast, veo_3_1_fast_r2v)
 # Default/recommended model: veo_3_1_fast
@@ -105,6 +114,43 @@ veogent request -t "VIDEO_UPSCALE" -p <proj> -c <chap> -s <scene> -o "HORIZONTAL
 veogent upscale -p <proj> -c <chap> -s <scene> -o "VERTICAL" -r "VIDEO_RESOLUTION_4K"
 ```
 
+### 📊 Monitoring & Status
+```bash
+# View all requests (most recent first)
+veogent requests
+
+# Get N most recent requests
+veogent requests -n 10
+
+# Filter by project / chapter / status
+veogent requests -p <projectId> -c <chapterId> -n 5
+veogent requests -s FAILED -n 20
+veogent requests -s COMPLETED -p <projectId>
+
+# Scene-level status with embedded asset URLs (image + video)
+veogent scene-status -p <projectId> -c <chapterId>
+# Each scene returns: { sceneId, image: { status, url }, video: { status, url } }
+
+# Full workflow snapshot (scenes + requests + assets)
+veogent workflow-status -p <projectId> -c <chapterId>
+
+# Wait for all images to finish processing
+veogent wait-images -p <projectId> -c <chapterId>
+
+# Wait and verify all images succeeded (not just finished)
+veogent wait-images -p <projectId> -c <chapterId> --require-success
+
+# Same for videos
+veogent wait-videos -p <projectId> -c <chapterId> --require-success
+
+# Queue concurrency status
+veogent queue-status
+
+# Google Flow credit/plan info (requires flow key)
+veogent flow-credits
+veogent flow-credits -f "ya29.a0ATk..."
+```
+
 ---
 
 ## 🤖 For AI Agents
@@ -126,8 +172,9 @@ Veogent CLI ships with a comprehensive **[`skills/SKILL.md`](./skills/SKILL.md)*
 6. Create scenes from returned script list:
    - `veogent create-scene -p <projectId> -c <chapterId> -C "scene 1" "scene 2" ... --flowkey`
 7. Wait for character generation completion (`imageUri` required for all characters):
-   - `veogent characters <projectId>`
-
+   - `veogent characters <projectId>` — check `characterReadiness.allReady === true`
+   - `veogent scene-materialization-status -p <projectId> -c <chapterId>` — verify `status: "READY"`
+   - If missing/fail: inspect `veogent requests -n 20`, recover via `veogent edit-character`.
 8. Generate scene images:
    - `veogent request -t "GENERATE_IMAGES" ...`
    - If image is reported wrong, decode image to Base64 and send to AI reviewer for evaluation.
@@ -140,6 +187,9 @@ Veogent CLI ships with a comprehensive **[`skills/SKILL.md`](./skills/SKILL.md)*
 > 📖 For the full detailed guide with all commands, options tables, and examples, see **[`skills/SKILL.md`](./skills/SKILL.md)**.
 
 **Important:** `veogent requests` is the primary status board for image/video/edit workflows.
+- Use `-n <N>` to get only the N most recent requests.
+- Use `-s FAILED` / `-s COMPLETED` to filter by status.
+- Use `--require-success` on `wait-images` / `wait-videos` to ensure assets actually exist (not just "finished").
 
 **Concurrency:** maximum **5** requests can be processed simultaneously. If the API reports maximum limit reached, treat it as queue-full (wait/retry), not a hard failure.
 
package/index.js
CHANGED
@@ -6,7 +6,7 @@ import { setConfig, clearConfig, getToken } from './config.js';
 
 const program = new Command();
 
-const IMAGE_MODELS = ['imagen3.5'
+const IMAGE_MODELS = ['imagen3.5'];
 const VIDEO_MODELS = ['veo_3_1_fast', 'veo_3_1_fast_r2v'];
 
 function globalOpts() {
@@ -170,9 +170,9 @@ program
     try {
       const data = await api.get('/app/capabilities');
      const caps = unwrapData(data);
-      emitJson({
+      emitJson({ status: 'success', models: caps?.imageModels || IMAGE_MODELS });
     } catch {
-      emitJson({
+      emitJson({ status: 'success', models: IMAGE_MODELS });
     }
   });
 
@@ -183,9 +183,9 @@ program
     try {
      const data = await api.get('/app/capabilities');
      const caps = unwrapData(data);
-      emitJson({
+      emitJson({ status: 'success', models: caps?.videoModels || VIDEO_MODELS });
     } catch {
-      emitJson({
+      emitJson({ status: 'success', models: VIDEO_MODELS });
     }
   });
 
@@ -316,7 +316,7 @@ program
 
 program
   .command('create-chapter-content')
-  .description('Generate content for a specific chapter')
+  .description('Generate content for a specific chapter. This is a synchronous AI generation call — use the returned chapterContent directly.')
   .requiredOption('-p, --project <project>', 'Project ID')
   .requiredOption('-c, --chapter <chapter>', 'Chapter ID')
   .option('-s, --scenes <count>', 'Number of scenes', '1')
@@ -328,9 +328,15 @@ program
        numberScene: parseInt(options.scenes),
      };
      const data = await api.post('/app/chapter/content', payload);
-
+      const raw = unwrapData(data);
+      // Normalize: always return chapterContent as array of strings
+      let chapterContent = raw;
+      if (raw?.chapterContent) chapterContent = raw.chapterContent;
+      else if (raw?.content) chapterContent = raw.content;
+      if (!Array.isArray(chapterContent)) chapterContent = [chapterContent];
+      emitJson({ status: "success", chapterContent });
     } catch (error) {
-
+      emitJson({ status: "error", ...formatCliError(error) });
     }
   });
 
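The normalization this hunk adds can be exercised in isolation. A minimal standalone sketch follows; the helper name `normalizeChapterContent` is illustrative and not an export of the package:

```javascript
// Sketch of the create-chapter-content response normalization: accept
// { chapterContent: [...] }, { content: [...] }, or a bare value, and
// always return an array of scene strings.
function normalizeChapterContent(raw) {
  let chapterContent = raw;
  if (raw?.chapterContent) chapterContent = raw.chapterContent;
  else if (raw?.content) chapterContent = raw.content;
  if (!Array.isArray(chapterContent)) chapterContent = [chapterContent];
  return chapterContent;
}
```

This mirrors the defensive shape-probing used throughout the CLI: the backend payload shape is not guaranteed, so every accessor has a fallback.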
@@ -338,13 +344,34 @@ program
 // --- Characters ---
 program
   .command('characters <projectId>')
-  .description('Get all characters for a specific project')
+  .description('Get all characters for a specific project (includes readiness info)')
   .action(async (projectId) => {
     try {
      const data = await api.get(`/app/characters/${projectId}`);
-
+      const raw = unwrapData(data);
+      const chars = Array.isArray(raw) ? raw : (raw?.characters || raw?.items || []);
+
+      const characters = chars.map((ch) => ({
+        id: ch?.id || ch?.characterId || null,
+        name: ch?.name || ch?.characterName || null,
+        imageUri: ch?.imageUri || ch?.imageUrl || ch?.image || null,
+        ready: !!(ch?.imageUri || ch?.imageUrl || ch?.image),
+        ...ch,
+      }));
+
+      const readyCount = characters.filter((c) => c.ready).length;
+      const result = {
+        status: 'success',
+        characters,
+        characterReadiness: {
+          total: characters.length,
+          ready: readyCount,
+          allReady: characters.length > 0 && readyCount === characters.length,
+        },
+      };
+      emitJson(result);
     } catch (error) {
-
+      emitJson({ status: "error", ...formatCliError(error) });
     }
   });
 
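The readiness summary added by this hunk reduces to a small pure function over the character list. A sketch, with the hypothetical helper name `characterReadiness` (the real code computes this inline):

```javascript
// Sketch of the readiness summary in the `characters` command: a character
// counts as "ready" once any of its image fields is populated, and the
// project is allReady only when it has at least one character and all are ready.
function characterReadiness(characters) {
  const ready = characters.filter(
    (ch) => !!(ch?.imageUri || ch?.imageUrl || ch?.image)
  ).length;
  return {
    total: characters.length,
    ready,
    allReady: characters.length > 0 && ready === characters.length,
  };
}
```

Note the `characters.length > 0` guard: an empty cast is deliberately not `allReady`, so an agent polling this field will not skip ahead before characters exist.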
@@ -414,13 +441,66 @@ program
 
 program
   .command('scene-status')
-  .description('Get scene status snapshot by chapter')
+  .description('Get scene status snapshot by chapter (with embedded asset URLs)')
   .requiredOption('-p, --project <project>', 'Project ID')
   .requiredOption('-c, --chapter <chapter>', 'Chapter ID')
   .action(async (options) => {
     try {
      const data = await api.get(`/app/scene-status/${options.project}/${options.chapter}`);
-
+      const rawData = unwrapData(data);
+
+      // Try to enrich each scene with asset URLs
+      const scenes = Array.isArray(rawData) ? rawData : (rawData?.scenes || rawData?.items || [rawData]);
+      const enriched = await Promise.all(scenes.map(async (scene) => {
+        const sceneId = scene?.id || scene?.sceneId;
+        if (!sceneId) return scene;
+
+        let imageAsset = { status: scene?.imageStatus || null, url: scene?.imageUrl || scene?.imageVerticalUri || scene?.imageHorizontalUri || null };
+        let videoAsset = { status: scene?.videoStatus || null, url: scene?.videoUrl || scene?.videoVerticalUri || scene?.videoHorizontalUri || null };
+
+        // Fetch assets if URLs not already present
+        try {
+          if (!imageAsset.url) {
+            const imgData = unwrapData(await api.get(`/app/request/assets/${options.project}/${options.chapter}/${sceneId}?type=GENERATE_IMAGES`));
+            const imgItems = Array.isArray(imgData) ? imgData : (imgData?.items || []);
+            const completed = imgItems.find((r) => String(r?.status || '').toUpperCase() === 'COMPLETED');
+            if (completed) {
+              imageAsset = {
+                status: 'COMPLETED',
+                url: completed?.imageVerticalUri || completed?.imageHorizontalUri || completed?.imageUrl || completed?.outputUrl || null,
+                createdAt: completed?.createdAt || null,
+                completedAt: completed?.completedAt || completed?.updatedAt || null,
+              };
+            }
+          }
+        } catch { /* asset fetch optional */ }
+
+        try {
+          if (!videoAsset.url) {
+            const vidData = unwrapData(await api.get(`/app/request/assets/${options.project}/${options.chapter}/${sceneId}?type=GENERATE_VIDEO`));
+            const vidItems = Array.isArray(vidData) ? vidData : (vidData?.items || []);
+            const completed = vidItems.find((r) => String(r?.status || '').toUpperCase() === 'COMPLETED');
+            if (completed) {
+              videoAsset = {
+                status: 'COMPLETED',
+                url: completed?.videoVerticalUri || completed?.videoHorizontalUri || completed?.videoUrl || completed?.outputUrl || null,
+                createdAt: completed?.createdAt || null,
+                completedAt: completed?.completedAt || completed?.updatedAt || null,
+              };
+            }
+          }
+        } catch { /* asset fetch optional */ }
+
+        return {
+          sceneId,
+          displayOrder: scene?.displayOrder ?? null,
+          image: imageAsset,
+          video: videoAsset,
+          raw: scene,
+        };
+      }));
+
+      emitJson({ status: 'success', data: enriched });
     } catch (error) {
       emitJson({ status: 'error', ...formatCliError(error) });
     }
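The enrichment loop above applies the same selection rule twice, once per asset kind: find the first COMPLETED request and prefer vertical over horizontal URIs. A standalone sketch of that rule; the helper name `pickCompletedAsset` is hypothetical, the real code inlines this per kind:

```javascript
// Sketch of the asset-URL selection used by scene-status/workflow-status:
// among a scene's requests, take the first COMPLETED one and pick the
// first populated URL field (vertical, then horizontal, then generic).
function pickCompletedAsset(items, kind) {
  const completed = items.find(
    (r) => String(r?.status || "").toUpperCase() === "COMPLETED"
  );
  if (!completed) return { status: null, url: null };
  const url =
    completed[`${kind}VerticalUri`] ||
    completed[`${kind}HorizontalUri`] ||
    completed[`${kind}Url`] ||
    completed.outputUrl ||
    null;
  return { status: "COMPLETED", url };
}
```

The case-insensitive status comparison matters: the check uppercases whatever the backend returns, so `"completed"` and `"COMPLETED"` are treated alike.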
@@ -428,13 +508,49 @@ program
 
 program
   .command('workflow-status')
-  .description('Export workflow snapshot for a project/chapter (agent helper)')
+  .description('Export workflow snapshot for a project/chapter with embedded asset URLs (agent helper)')
   .requiredOption('-p, --project <project>', 'Project ID')
   .requiredOption('-c, --chapter <chapter>', 'Chapter ID')
   .action(async (options) => {
     try {
      const data = await api.get(`/app/workflow-status/${options.project}/${options.chapter}`);
-
+      const rawData = unwrapData(data);
+
+      // Enrich scenes with asset URLs
+      const scenes = Array.isArray(rawData?.scenes) ? rawData.scenes : (Array.isArray(rawData) ? rawData : []);
+      const enrichedScenes = await Promise.all(scenes.map(async (scene) => {
+        const sceneId = scene?.id || scene?.sceneId;
+        if (!sceneId) return scene;
+
+        let imageAsset = { status: scene?.imageStatus || null, url: scene?.imageUrl || scene?.imageVerticalUri || scene?.imageHorizontalUri || null };
+        let videoAsset = { status: scene?.videoStatus || null, url: scene?.videoUrl || scene?.videoVerticalUri || scene?.videoHorizontalUri || null };
+
+        try {
+          if (!imageAsset.url) {
+            const imgData = unwrapData(await api.get(`/app/request/assets/${options.project}/${options.chapter}/${sceneId}?type=GENERATE_IMAGES`));
+            const imgItems = Array.isArray(imgData) ? imgData : (imgData?.items || []);
+            const completed = imgItems.find((r) => String(r?.status || '').toUpperCase() === 'COMPLETED');
+            if (completed) {
+              imageAsset = { status: 'COMPLETED', url: completed?.imageVerticalUri || completed?.imageHorizontalUri || completed?.imageUrl || completed?.outputUrl || null };
+            }
+          }
+        } catch { /* optional */ }
+
+        try {
+          if (!videoAsset.url) {
+            const vidData = unwrapData(await api.get(`/app/request/assets/${options.project}/${options.chapter}/${sceneId}?type=GENERATE_VIDEO`));
+            const vidItems = Array.isArray(vidData) ? vidData : (vidData?.items || []);
+            const completed = vidItems.find((r) => String(r?.status || '').toUpperCase() === 'COMPLETED');
+            if (completed) {
+              videoAsset = { status: 'COMPLETED', url: completed?.videoVerticalUri || completed?.videoHorizontalUri || completed?.videoUrl || completed?.outputUrl || null };
+            }
+          }
+        } catch { /* optional */ }
+
+        return { sceneId, image: imageAsset, video: videoAsset, raw: scene };
+      }));
+
+      emitJson({ status: 'success', data: { ...rawData, scenes: enrichedScenes } });
     } catch (error) {
       // Backward-compatible fallback for old backend
       try {
@@ -451,13 +567,77 @@ program
            (r?.chapter_id === options.chapter || r?.chapterId === options.chapter)
         );
 
-
+        // Enrich fallback scenes too
+        const enrichedFallback = await Promise.all(scenes.map(async (scene) => {
+          const sceneId = scene?.id || scene?.sceneId;
+          if (!sceneId) return scene;
+
+          let imageAsset = { status: null, url: null };
+          let videoAsset = { status: null, url: null };
+
+          try {
+            const imgData = unwrapData(await api.get(`/app/request/assets/${options.project}/${options.chapter}/${sceneId}?type=GENERATE_IMAGES`));
+            const imgItems = Array.isArray(imgData) ? imgData : (imgData?.items || []);
+            const completed = imgItems.find((r) => String(r?.status || '').toUpperCase() === 'COMPLETED');
+            if (completed) {
+              imageAsset = { status: 'COMPLETED', url: completed?.imageVerticalUri || completed?.imageHorizontalUri || completed?.imageUrl || null };
+            }
+          } catch { /* optional */ }
+
+          try {
+            const vidData = unwrapData(await api.get(`/app/request/assets/${options.project}/${options.chapter}/${sceneId}?type=GENERATE_VIDEO`));
+            const vidItems = Array.isArray(vidData) ? vidData : (vidData?.items || []);
+            const completed = vidItems.find((r) => String(r?.status || '').toUpperCase() === 'COMPLETED');
+            if (completed) {
+              videoAsset = { status: 'COMPLETED', url: completed?.videoVerticalUri || completed?.videoHorizontalUri || completed?.videoUrl || null };
+            }
+          } catch { /* optional */ }
+
+          return { sceneId, image: imageAsset, video: videoAsset, raw: scene };
+        }));
+
+        emitJson({ status: 'success', data: { projectId: options.project, chapterId: options.chapter, scenes: enrichedFallback, requests: chapterRequests } });
       } catch (fallbackError) {
         emitJson({ status: 'error', ...formatCliError(fallbackError) });
       }
     }
   });
 
+program
+  .command('scene-materialization-status')
+  .description('Check how many scenes have been materialized for a chapter')
+  .requiredOption('-p, --project <project>', 'Project ID')
+  .requiredOption('-c, --chapter <chapter>', 'Chapter ID')
+  .action(async (options) => {
+    try {
+      // Get chapter data to determine expected scenes
+      let expectedScenes = 0;
+      try {
+        const chapterData = unwrapData(await api.get(`/app/chapters/${options.project}`));
+        const chapters = Array.isArray(chapterData) ? chapterData : (chapterData?.chapters || chapterData?.items || []);
+        const chapter = chapters.find((ch) => (ch?.id || ch?.chapterId) === options.chapter);
+        expectedScenes = chapter?.numberScene || chapter?.sceneCount || chapter?.expectedScenes || 0;
+      } catch { /* fallback: expectedScenes stays 0 */ }
+
+      // Get actual scenes
+      const scenesRaw = unwrapData(await api.get(`/app/scenes/${options.project}/${options.chapter}`));
+      const scenes = Array.isArray(scenesRaw) ? scenesRaw : (scenesRaw?.scenes || scenesRaw?.items || []);
+      const materializedScenes = scenes.length;
+
+      // If we couldn't get expectedScenes from chapter, use materialized as fallback
+      if (expectedScenes === 0) expectedScenes = materializedScenes;
+
+      let status;
+      if (materializedScenes === 0) status = 'EMPTY';
+      else if (materializedScenes >= expectedScenes) status = 'READY';
+      else status = 'PROCESSING';
+
+      emitJson({ status: 'success', expectedScenes, materializedScenes, materialization: status });
+    } catch (error) {
+      emitJson({ status: 'error', ...formatCliError(error) });
+    }
+  });
+
 program
   .command('create-scene')
   .description('Create a new scene from text content')
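The classification at the heart of the new `scene-materialization-status` command is a three-way decision over two counts. Extracted as a sketch (the function name `materializationStatus` is illustrative; the command computes this inline):

```javascript
// Sketch of the materialization classifier: EMPTY when no scenes exist yet,
// READY when at least the expected count has been materialized,
// PROCESSING while the backend is still creating scenes.
function materializationStatus(expectedScenes, materializedScenes) {
  if (materializedScenes === 0) return "EMPTY";
  if (materializedScenes >= expectedScenes) return "READY";
  return "PROCESSING";
}
```

The EMPTY check runs first on purpose: with the `expectedScenes === 0` fallback in the command, zero scenes would otherwise satisfy `materialized >= expected` and be misreported as READY.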
@@ -516,7 +696,7 @@ program
   .requiredOption('-c, --chapter <chapter>', 'Chapter ID')
   .requiredOption('-s, --scene <scene>', 'Scene ID')
   .option('-o, --orientation <orientation>', 'Request orientation (HORIZONTAL, VERTICAL)')
-  .option('-i, --imagemodel <imagemodel>', 'Image Model (
+  .option('-i, --imagemodel <imagemodel>', 'Image Model (imagen3.5)', 'imagen3.5')
   .option('-v, --videomodel <videomodel>', 'Video Model (veo_3_1_fast, veo_3_1_fast_r2v). Default: veo_3_1_fast', 'veo_3_1_fast')
   .option('-S, --speed <speed>', 'Video Speed (normal, timelapse, slowmotion)', 'normal')
   .option('-E, --endscene <endscene>', 'End Scene ID for continuous video generation')
@@ -564,7 +744,13 @@
      }
 
      const data = await api.post('/app/request', payload);
-
+      const requestResult = unwrapData(data);
+      // P2-7: Persist endScene metadata when chained video generation
+      if (options.endscene) {
+        requestResult.end_scene_id = options.endscene;
+        requestResult.generationMode = 'CHAINED_VIDEO';
+      }
+      console.log(JSON.stringify({ status: "success", request: requestResult }, null, 2));
     } catch (error) {
       console.log(JSON.stringify({ status: "error", ...formatCliError(error) }));
     }
@@ -572,13 +758,45 @@ program
 
 program
   .command('requests')
-  .description('Get
-  .
+  .description('Get generation requests/jobs status for the current user')
+  .option('-n, --limit <n>', 'Return only the N most recent requests', null)
+  .option('-p, --project <projectId>', 'Filter by project ID')
+  .option('-c, --chapter <chapterId>', 'Filter by chapter ID')
+  .option('-s, --status <status>', 'Filter by status (e.g. COMPLETED, FAILED, PROCESSING)')
+  .action(async (options) => {
     try {
      const data = await api.get('/app/requests');
-
+      let items = unwrapData(data);
+      items = Array.isArray(items) ? items : (items?.items || []);
+
+      // Filter by project
+      if (options.project) {
+        items = items.filter((r) => r?.projectId === options.project || r?.project_id === options.project);
+      }
+      // Filter by chapter
+      if (options.chapter) {
+        items = items.filter((r) => r?.chapterId === options.chapter || r?.chapter_id === options.chapter);
+      }
+      // Filter by status
+      if (options.status) {
+        const s = options.status.toUpperCase();
+        items = items.filter((r) => String(r?.status || '').toUpperCase() === s);
+      }
+      // Sort by createdAt desc (most recent first)
+      items = items.sort((a, b) => {
+        const ta = a?.createdAt || a?.created_at || 0;
+        const tb = b?.createdAt || b?.created_at || 0;
+        return (Number(tb) || 0) - (Number(ta) || 0);
+      });
+      // Limit to N most recent
+      if (options.limit !== null && options.limit !== undefined) {
+        const n = parseInt(options.limit, 10);
+        if (!isNaN(n) && n > 0) items = items.slice(0, n);
+      }
+
+      emitJson({ status: 'success', total: items.length, data: items });
     } catch (error) {
-
+      emitJson({ status: 'error', ...formatCliError(error) });
     }
   });
 
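The filter/sort/limit pipeline the `requests` command now runs client-side can be sketched as one pure function; the name `filterRequests` and the options object are illustrative, not part of the package API:

```javascript
// Sketch of the client-side pipeline in `requests`: filter by
// project/chapter/status (status case-insensitively), sort newest first
// by createdAt, then cap at the N most recent.
function filterRequests(items, { project, chapter, status, limit } = {}) {
  let out = items.slice();
  if (project) out = out.filter((r) => (r?.projectId || r?.project_id) === project);
  if (chapter) out = out.filter((r) => (r?.chapterId || r?.chapter_id) === chapter);
  if (status) {
    const s = status.toUpperCase();
    out = out.filter((r) => String(r?.status || "").toUpperCase() === s);
  }
  out.sort((a, b) => (Number(b?.createdAt) || 0) - (Number(a?.createdAt) || 0));
  const n = parseInt(limit, 10);
  if (!isNaN(n) && n > 0) out = out.slice(0, n);
  return out;
}
```

Sorting before slicing is what makes `-n` mean "most recent N" rather than "first N in API order".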
@@ -622,6 +840,7 @@ program
   .requiredOption('-c, --chapter <chapter>', 'Chapter ID')
   .option('-i, --interval <sec>', 'Polling interval in seconds', '10')
   .option('-t, --timeout <sec>', 'Timeout in seconds', '1800')
+  .option('--require-success', 'Exit non-zero if any scene lacks a successful image asset')
   .action(async (options) => {
     const intervalMs = Math.max(1, Number(options.interval || 10)) * 1000;
     const timeoutMs = Math.max(30, Number(options.timeout || 1800)) * 1000;
@@ -640,6 +859,24 @@ program
 
      const pending = filtered.filter((r) => ['PENDING', 'PROCESSING', 'RUNNING'].includes(String(r?.status || '').toUpperCase()));
      if (pending.length === 0) {
+        // --require-success: verify each scene has at least one COMPLETED request with asset URL
+        if (options.requireSuccess) {
+          const sceneIds = [...new Set(filtered.map((r) => r?.scene || r?.sceneId).filter(Boolean))];
+          const failedScenes = [];
+          for (const sid of sceneIds) {
+            const sceneReqs = filtered.filter((r) => (r?.scene === sid || r?.sceneId === sid));
+            const hasSuccess = sceneReqs.some((r) => {
+              const st = String(r?.status || '').toUpperCase();
+              const hasUrl = !!(r?.imageVerticalUri || r?.imageHorizontalUri || r?.imageUrl || r?.outputUrl);
+              return st === 'COMPLETED' && hasUrl;
+            });
+            if (!hasSuccess) failedScenes.push(sid);
+          }
+          if (failedScenes.length > 0) {
+            console.log(JSON.stringify({ status: 'error', code: 'ASSETS_NOT_SUCCESS', message: 'Some scenes did not produce successful assets', failedScenes }, null, 2));
+            process.exit(1);
+          }
+        }
        console.log(JSON.stringify({ status: 'success', data: filtered, message: 'All image requests finished' }, null, 2));
        return;
      }
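The `--require-success` check added here (and mirrored for videos below) groups finished requests by scene and flags any scene without a COMPLETED request that actually carries a URL. A standalone sketch; the helper name `findFailedScenes` and the `urlFields` parameter are illustrative, the real code hardcodes the field list per asset kind:

```javascript
// Sketch of the --require-success verification: "finished" is not enough,
// a scene passes only if some request is COMPLETED *and* exposes an asset URL.
function findFailedScenes(requests, urlFields) {
  const sceneIds = [...new Set(requests.map((r) => r?.scene || r?.sceneId).filter(Boolean))];
  return sceneIds.filter((sid) => {
    const sceneReqs = requests.filter((r) => (r?.scene || r?.sceneId) === sid);
    return !sceneReqs.some(
      (r) =>
        String(r?.status || "").toUpperCase() === "COMPLETED" &&
        urlFields.some((f) => !!r?.[f])
    );
  });
}
```

This is the distinction the README draws between "finished" and "succeeded": a COMPLETED request with no URL still fails the check.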
@@ -660,6 +897,7 @@ program
   .requiredOption('-c, --chapter <chapter>', 'Chapter ID')
   .option('-i, --interval <sec>', 'Polling interval in seconds', '10')
   .option('-t, --timeout <sec>', 'Timeout in seconds', '3600')
+  .option('--require-success', 'Exit non-zero if any scene lacks a successful video asset')
   .action(async (options) => {
     const intervalMs = Math.max(1, Number(options.interval || 10)) * 1000;
     const timeoutMs = Math.max(30, Number(options.timeout || 3600)) * 1000;
@@ -678,6 +916,24 @@ program
 
      const pending = filtered.filter((r) => ['PENDING', 'PROCESSING', 'RUNNING'].includes(String(r?.status || '').toUpperCase()));
      if (pending.length === 0) {
+        // --require-success: verify each scene has at least one COMPLETED request with asset URL
+        if (options.requireSuccess) {
+          const sceneIds = [...new Set(filtered.map((r) => r?.scene || r?.sceneId).filter(Boolean))];
+          const failedScenes = [];
+          for (const sid of sceneIds) {
+            const sceneReqs = filtered.filter((r) => (r?.scene === sid || r?.sceneId === sid));
+            const hasSuccess = sceneReqs.some((r) => {
+              const st = String(r?.status || '').toUpperCase();
+              const hasUrl = !!(r?.videoVerticalUri || r?.videoHorizontalUri || r?.videoUrl || r?.outputUrl);
+              return st === 'COMPLETED' && hasUrl;
+            });
+            if (!hasSuccess) failedScenes.push(sid);
+          }
+          if (failedScenes.length > 0) {
+            console.log(JSON.stringify({ status: 'error', code: 'ASSETS_NOT_SUCCESS', message: 'Some scenes did not produce successful assets', failedScenes }, null, 2));
+            process.exit(1);
+          }
+        }
        console.log(JSON.stringify({ status: 'success', data: filtered, message: 'All video requests finished' }, null, 2));
        return;
      }
@@ -697,9 +953,21 @@ program
   .action(async (projectId, chapterId, sceneId, type) => {
     try {
      const data = await api.get(`/app/request/assets/${projectId}/${chapterId}/${sceneId}?type=${type}`);
-
+      const raw = unwrapData(data);
+      // Enrich with lifecycle metadata
+      const items = Array.isArray(raw) ? raw : (raw?.items || [raw]);
+      const enriched = items.map((r) => ({
+        ...r,
+        createdAt: r?.createdAt || null,
+        startedAt: r?.startedAt || null,
+        updatedAt: r?.updatedAt || null,
+        completedAt: r?.completedAt || null,
+        deducted: r?.deducted ?? null,
+        retryable: typeof r?.retryable === 'boolean' ? r.retryable : (r?.status && ['FAILED', 'ERROR'].includes(String(r.status).toUpperCase())),
+      }));
+      emitJson({ status: 'success', data: enriched });
     } catch (error) {
-
+      emitJson({ status: "error", ...formatCliError(error) });
     }
   });
 
@@ -834,4 +1102,70 @@ program
|
|
|
834
1102
|
}
|
|
835
1103
|
});
|
|
836
1104
|
|
|
1105
|
+
// Flow Credits — fetch plan and credit info from Google AI Sandbox using flow key
|
|
1106
|
+
program
|
|
1107
|
+
.command('flow-credits')
|
|
1108
|
+
.description('Fetch plan and credit info from Google AI Sandbox using your Flow key (Bearer token)')
|
|
1109
|
+
  .option('-f, --flowkey <flowkey>', 'Flow key (ya29. token). If omitted, uses stored flow key from account.')
  .action(async (options) => {
    try {
      // Determine which flow key to use
      let flowKey = options.flowkey;

      if (!flowKey) {
        // Try to get from stored account
        const token = getToken();
        if (!token) {
          emitJson({ status: 'error', code: 'NO_TOKEN', message: 'Not logged in. Run: veogent login' });
          process.exit(1);
        }
        const accountData = unwrapData(await api.get('/app/flow-key'));
        flowKey = accountData?.flowKey;
        if (!flowKey) {
          emitJson({ status: 'error', code: 'NO_FLOW_KEY', message: 'No flow key found. Set one with: veogent setup-flow -f <token>' });
          process.exit(1);
        }
      }

      const response = await fetch('https://aisandbox-pa.googleapis.com/v1/credits', {
        method: 'GET',
        headers: {
          'accept': '*/*',
          'accept-language': 'en-US,en;q=0.9',
          'authorization': `Bearer ${flowKey}`,
          'content-type': 'application/json',
          'origin': 'https://veogent.com',
          'referer': 'https://veogent.com/',
          'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/145.0.0.0 Safari/537.36',
          'x-browser-channel': 'stable',
          'x-browser-year': '2026',
        },
      });

      if (!response.ok) {
        const errText = await response.text();
        emitJson({ status: 'error', code: `HTTP_${response.status}`, message: errText || response.statusText });
        process.exit(1);
      }

      const data = await response.json();

      // Normalize output
      const result = {
        status: 'success',
        plan: data?.plan || data?.tier || data?.subscriptionTier || null,
        credits: data?.credits ?? data?.remainingCredits ?? data?.balance ?? null,
        totalCredits: data?.totalCredits ?? data?.maxCredits ?? null,
        usedCredits: data?.usedCredits ?? null,
        resetAt: data?.resetAt ?? data?.renewalDate ?? null,
        raw: data,
      };

      emitJson(result);
    } catch (error) {
      emitJson({ status: 'error', ...formatCliError(error) });
      process.exit(1);
    }
  });

program.parse(process.argv);
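The `result` normalization in the new `flow-credits` action can be factored into a pure helper, which makes the field-coalescing logic unit-testable on its own. A sketch mirroring the code above; the `normalizeCredits` name is hypothetical and not part of the package:

```javascript
// Hypothetical helper mirroring the normalization in the flow-credits action:
// coalesce the several field names the credits endpoint may use.
function normalizeCredits(data) {
  return {
    status: 'success',
    plan: data?.plan || data?.tier || data?.subscriptionTier || null,
    credits: data?.credits ?? data?.remainingCredits ?? data?.balance ?? null,
    totalCredits: data?.totalCredits ?? data?.maxCredits ?? null,
    usedCredits: data?.usedCredits ?? null,
    resetAt: data?.resetAt ?? data?.renewalDate ?? null,
    raw: data,
  };
}

// Example: an alternate payload shape still normalizes cleanly.
const out = normalizeCredits({ tier: 'pro', remainingCredits: 80, maxCredits: 200 });
console.log(out.plan, out.credits, out.totalCredits); // → pro 80 200
```

Because the helper never touches the network, it can be exercised against every payload variant without a live `ya29.` token.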
package/package.json
CHANGED
package/skills/SKILL.md
CHANGED
@@ -15,12 +15,12 @@ The `veogent` command-line interface interacts with the **VEOGENT API** for mana
2. [Projects & Assets](#-projects--assets)
3. [Chapters](#-chapters)
4. [Scenes](#-scenes)
5. [Characters & Readiness](#-characters)
6. [Scene Editing & Prompt Crafting](#-scene-editing--prompt-crafting)
7. [Generation Requests (Images / Videos / Upscale)](#-generation-requests)
8. [Monitoring & Status](#-monitoring--status)
9. [YouTube Metadata & Thumbnails](#-youtube-metadata--thumbnails)
10. [Flow Token & Credits](#-flow-token-management)
11. [The Complete VEOGENT Pipeline (Best Practices)](#-the-complete-veogent-pipeline)
12. [Error Handling](#-error-handling)
@@ -173,7 +173,14 @@ The `veogent` command-line interface interacts with the **VEOGENT API** for mana

### Skill: `create-chapter-content`
- **Description:** Generate AI story/narrative content for a specific chapter, producing scene scripts.
- **Guidelines:**
  - This is a **synchronous AI generation call** — use the `chapterContent` array from the returned response directly.
  - Do NOT poll chapter state for contents; the canonical output is the response itself.
  - Each scene maps to an **~8-second video clip**. The returned scene-script array should be **immediately** passed to `create-scene`.
- **Response:**
  ```json
  { "status": "success", "chapterContent": ["Scene 1 script...", "Scene 2 script..."] }
  ```
- **Options:**
  | Flag | Required | Description | Default |
  |------|----------|-------------|---------|

@@ -183,6 +190,7 @@ The `veogent` command-line interface interacts with the **VEOGENT API** for mana
- **Example:**
  ```bash
  veogent create-chapter-content -p "projID" -c "chapID" -s 5
  # → { "status": "success", "chapterContent": ["...", "..."] }
  ```

---
@@ -218,11 +226,41 @@ The `veogent` command-line interface interacts with the **VEOGENT API** for mana
## 🧑‍🎨 Characters

### Skill: `characters`
- **Description:** Get all characters for a specific project, including `imageUri` status and readiness info.
- **Guidelines:** After creating scenes, the system **auto-generates characters**. Poll this command until `characterReadiness.allReady === true` before proceeding to image/video generation.
- **Response:**
  ```json
  {
    "status": "success",
    "characters": [
      { "id": "...", "name": "...", "imageUri": "https://...", "ready": true }
    ],
    "characterReadiness": { "total": 4, "ready": 4, "allReady": true }
  }
  ```
- **Example:**
  ```bash
  veogent characters <projectId>
  # Poll until: characterReadiness.allReady === true
  ```
### Skill: `scene-materialization-status`
- **Description:** Check how many scenes have been materialized (created on the backend) for a chapter.
- **Guidelines:** Use this after `create-scene` to verify all scenes are ready before starting image generation. Poll until `status === "READY"`.
- **Options:**
  | Flag | Required | Description |
  |------|----------|-------------|
  | `-p, --project <project>` | ✅ | Project ID |
  | `-c, --chapter <chapter>` | ✅ | Chapter ID |
- **Response:**
  ```json
  { "expectedScenes": 5, "materializedScenes": 5, "status": "READY" }
  ```
  Status values: `"PROCESSING"` | `"READY"` | `"EMPTY"`
- **Example:**
  ```bash
  veogent scene-materialization-status -p "projID" -c "chapID"
  # Poll until: status === "READY"
  ```
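Both gates above (character readiness and scene materialization) reduce to the same poll-until-predicate loop. A minimal sketch of that loop; `pollUntil` and the simulated checker are hypothetical names, and a real agent would shell out to `veogent characters` or `veogent scene-materialization-status` instead of the fake used here:

```javascript
// Generic poll loop: repeatedly evaluate an async check until its predicate
// passes or the attempt budget runs out.
async function pollUntil(check, isDone, { intervalMs = 5000, maxAttempts = 60 } = {}) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const result = await check();
    if (isDone(result)) return result;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`Not ready after ${maxAttempts} attempts`);
}

// Simulated readiness: pretend characters become ready on the third poll.
let polls = 0;
const fakeCharacters = async () => ({
  characterReadiness: { total: 4, ready: ++polls >= 3 ? 4 : polls, allReady: polls >= 3 },
});

pollUntil(fakeCharacters, (r) => r.characterReadiness.allReady, { intervalMs: 10 })
  .then((r) => console.log('ready:', r.characterReadiness.ready)); // → ready: 4
```

The same helper works for the `status === "READY"` gate by swapping the predicate.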
### Skill: `edit-character`

@@ -333,12 +371,12 @@ Beside the main visual description, you **CAN** include minor action description
| `-p, --project` | ✅ | Project ID | — |
| `-c, --chapter` | ✅ | Chapter ID | — |
| `-s, --scene` | ✅ | Scene ID | — |
| `-i, --imagemodel` | ✅ | Image model | `imagen3.5` |

> 🚫 **Prohibited flags:** Do NOT pass `-o` (orientation), `-v` (videomodel), or `-S` (speed).

```bash
veogent request -t "GENERATE_IMAGES" -p "projID" -c "chapID" -s "sceneID" -i "imagen3.5"
```

#### 2. Generate Video (`GENERATE_VIDEO`)
@@ -351,7 +389,7 @@ veogent request -t "GENERATE_IMAGES" -p "projID" -c "chapID" -s "sceneID" -i "im
| `-s, --scene` | ✅ | Scene ID | — |
| `-v, --videomodel` | ✅ | Video model | `veo_3_1_fast` (recommended), `veo_3_1_fast_r2v` |
| `-S, --speed` | ✅ | Video speed | `normal`, `timelapse`, `slowmotion` |
| `-i, --imagemodel` | ✅ | Image model for video context | `imagen3.5` |
| `-o, --orientation` | ✅ | Orientation | `HORIZONTAL`, `VERTICAL` |
| `-E, --endscene` | ❌ | End Scene ID for continuous video generation (interpolation to next frame) | — |
@@ -399,11 +437,121 @@ veogent request -t "VIDEO_UPSCALE" -p "projID" -c "chapID" -s "sceneID" -o "HORI
## 📊 Monitoring & Status

### Skill: `requests`
- **Description:** Get generation requests/jobs status for the current user. This is the **primary status board** for all generation workflows (image/video/character-edit).
- **Guidelines:**
  - Always check this before retrying any generation.
  - Results are sorted by `createdAt` descending (most recent first).
  - Use `-n` to limit results for efficiency in agent loops.
- **Options:**
  | Flag | Required | Description |
  |------|----------|-------------|
  | `-n, --limit <n>` | ❌ | Return only the N most recent requests |
  | `-p, --project <projectId>` | ❌ | Filter by project ID |
  | `-c, --chapter <chapterId>` | ❌ | Filter by chapter ID |
  | `-s, --status <status>` | ❌ | Filter by status (`COMPLETED`, `FAILED`, `PROCESSING`, `PENDING`) |
- **Response:** `{ "status": "success", "total": N, "data": [...] }`
- **Examples:**
  ```bash
  veogent requests                          # all requests, newest first
  veogent requests -n 10                    # 10 most recent
  veogent requests -p <projId> -c <chapId>  # filter by project/chapter
  veogent requests -s FAILED -n 20          # 20 most recent failures
  ```
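When driving `requests` from an agent loop, it helps to reduce the returned `data` array to per-status counts before deciding whether to wait or retry. A sketch over the response shape documented above; the `summarizeRequests` name is hypothetical:

```javascript
// Group requests by status so an agent can decide whether to wait or retry.
function summarizeRequests(requests) {
  const counts = { COMPLETED: 0, FAILED: 0, PROCESSING: 0, PENDING: 0 };
  for (const req of requests) {
    if (req.status in counts) counts[req.status] += 1;
  }
  // In-flight work means: wait, do not submit duplicates.
  const inFlight = counts.PROCESSING + counts.PENDING;
  return { ...counts, inFlight, shouldWait: inFlight > 0 };
}

const summary = summarizeRequests([
  { status: 'COMPLETED' },
  { status: 'PROCESSING' },
  { status: 'FAILED' },
]);
console.log(summary.inFlight, summary.shouldWait); // → 1 true
```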
### Skill: `scene-status`
- **Description:** Get a scene status snapshot for a chapter, with **embedded asset URLs** (image + video) per scene.
- **Guidelines:** Use this as the canonical status view for scenes — no need for a separate asset-history lookup.
- **Options:**
  | Flag | Required | Description |
  |------|----------|-------------|
  | `-p, --project <project>` | ✅ | Project ID |
  | `-c, --chapter <chapter>` | ✅ | Chapter ID |
- **Response per scene:**
  ```json
  {
    "sceneId": "...",
    "image": { "status": "COMPLETED", "url": "https://...", "completedAt": 1234567890 },
    "video": { "status": "COMPLETED", "url": "https://...", "completedAt": 1234567890 }
  }
  ```
- **Example:**
  ```bash
  veogent scene-status -p "projID" -c "chapID"
  ```
### Skill: `workflow-status`
- **Description:** Export a full workflow snapshot — scenes, requests, and embedded asset URLs — for a project/chapter.
- **Guidelines:** The most comprehensive single-command status view for agents. Use this to audit progress before and after generation steps.
- **Options:**
  | Flag | Required | Description |
  |------|----------|-------------|
  | `-p, --project <project>` | ✅ | Project ID |
  | `-c, --chapter <chapter>` | ✅ | Chapter ID |
- **Example:**
  ```bash
  veogent workflow-status -p "projID" -c "chapID"
  ```
### Skill: `wait-images`
- **Description:** Wait until all image requests in a chapter finish processing.
- **Guidelines:**
  - Default: waits until the queue is drained (no requests still pending/processing).
  - `--require-success`: after the queue drains, also verifies every scene has at least one `COMPLETED` image asset. Exits non-zero if any scene lacks a successful asset.
  - **Distinction:** "finished" ≠ "succeeded". Always use `--require-success` in automated pipelines.
- **Options:**
  | Flag | Required | Description |
  |------|----------|-------------|
  | `-p, --project <project>` | ✅ | Project ID |
  | `-c, --chapter <chapter>` | ✅ | Chapter ID |
  | `--require-success` | ❌ | Fail if any scene has no completed image asset |
- **Example:**
  ```bash
  veogent wait-images -p "projID" -c "chapID" --require-success
  ```
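The "finished vs. succeeded" distinction can also be checked client-side from a `scene-status` snapshot. A sketch; the `findScenesMissingImages` name is hypothetical, and the scene shape follows the `scene-status` response documented above:

```javascript
// "Finished" only means the queue drained; a scene may still have no good asset.
// Return the scenes lacking a COMPLETED image, i.e. the ones needing a retry.
function findScenesMissingImages(scenes) {
  return scenes
    .filter((scene) => scene.image?.status !== 'COMPLETED')
    .map((scene) => scene.sceneId);
}

const missing = findScenesMissingImages([
  { sceneId: 's1', image: { status: 'COMPLETED', url: 'https://...' } },
  { sceneId: 's2', image: { status: 'FAILED' } },
  { sceneId: 's3' }, // no image request ever ran
]);
console.log(missing); // → [ 's2', 's3' ]
```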
### Skill: `wait-videos`
- **Description:** Wait until all video requests in a chapter finish processing.
- **Options:**
  | Flag | Required | Description |
  |------|----------|-------------|
  | `-p, --project <project>` | ✅ | Project ID |
  | `-c, --chapter <chapter>` | ✅ | Chapter ID |
  | `--require-success` | ❌ | Fail if any scene has no completed video asset |
- **Example:**
  ```bash
  veogent wait-videos -p "projID" -c "chapID" --require-success
  ```
### Skill: `image-models`
- **Description:** List all supported image generation models.
- **Example:**
  ```bash
  veogent image-models
  # → { "status": "success", "models": ["imagen3.5"] }
  ```

### Skill: `video-models`
- **Description:** List all supported video generation models.
- **Example:**
  ```bash
  veogent video-models
  # → { "status": "success", "models": ["veo_3_1_fast", "veo_3_1_fast_r2v"] }
  ```
### Skill: `flow-credits`
- **Description:** Fetch plan and credit balance from Google AI Sandbox using your Flow key (Bearer token).
- **Options:**
  | Flag | Required | Description |
  |------|----------|-------------|
  | `-f, --flowkey <flowkey>` | ❌ | Flow key (`ya29.` token). Defaults to stored key. |
- **Response:**
  ```json
  { "status": "success", "plan": "...", "credits": 100, "totalCredits": 200, "usedCredits": 100, "resetAt": "..." }
  ```
- **Example:**
  ```bash
  veogent flow-credits
  veogent flow-credits -f "ya29.a0ATk..."
  ```

### Skill: `assets`
@@ -502,19 +650,28 @@ veogent create-project -n "Name" -k "keywords" -d "AI-generated description" -l
```

### Step 4: Chapter & Scene Content
Generate chapter narrative content, then **immediately** pass the returned `chapterContent` array to `create-scene`:
```bash
veogent create-chapter-content -p <projId> -c <chapId> -s 5
# → { "status": "success", "chapterContent": ["scene 1...", "scene 2...", ...] }
# Use chapterContent directly — do NOT poll chapter state

veogent create-scene -p <projId> -c <chapId> -C "script 1" "script 2" "script 3" --flowkey
```

### Step 4b: Verify Scene Materialization
After `create-scene`, scenes appear incrementally. Poll until ready:
```bash
veogent scene-materialization-status -p <projId> -c <chapId>
# Poll until: { "status": "READY", "expectedScenes": N, "materializedScenes": N }
```

### Step 5: Await Character Casting (⚠️ CRITICAL)
The system **auto-generates characters**. Do NOT rush to image/video generation.
```bash
# Poll until characterReadiness.allReady === true
veogent characters <projectId>
# → { "characterReadiness": { "total": 4, "ready": 4, "allReady": true } }

# Alternative: check project progress object
veogent project <projectId>
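Steps 4b and 5 are two independent gates that must both pass before image generation begins. A sketch combining them into one decision; the function name is hypothetical, and the inputs follow the `scene-materialization-status` and `characters` response shapes shown earlier:

```javascript
// Combine the scene-materialization and character-readiness gates:
// image generation should start only when both report ready.
function readyForImageGeneration(materialization, characterReadiness) {
  const scenesReady =
    materialization.status === 'READY' &&
    materialization.materializedScenes === materialization.expectedScenes;
  return scenesReady && characterReadiness.allReady === true;
}

const ok = readyForImageGeneration(
  { expectedScenes: 5, materializedScenes: 5, status: 'READY' },
  { total: 4, ready: 4, allReady: true },
);
console.log(ok); // → true
```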
@@ -522,21 +679,26 @@ veogent project <projectId>
```

### Step 6: Character Recovery Path
If any character is missing `imageUri` (`ready: false`):
```bash
veogent requests -n 20   # inspect recent failures
veogent edit-character -p <proj> -c "<charId>" -u "regenerate portrait"
```

### Step 7: Generate Images (Critical QA Gate)
Request `GENERATE_IMAGES` per scene:
```bash
veogent request -t "GENERATE_IMAGES" -p <proj> -c <chap> -s <scene> -i "imagen3.5"

# Anti-Spam: check status first!
veogent assets <proj> <chap> <scene> GENERATE_IMAGES
# If PENDING/PROCESSING → WAIT, do NOT duplicate requests
# Concurrency limit: max 5 simultaneous requests

# After all scenes submitted, wait with success verification:
veogent wait-images -p <proj> -c <chap> --require-success
# Check: scene-status for per-scene image URLs
veogent scene-status -p <proj> -c <chap>
```

### Step 8: AI Review on Image Mismatch
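The anti-spam rules in Step 7 (never duplicate an in-flight request for a scene, at most 5 simultaneous requests) can be encoded as a small scheduler check. A sketch; the `canSubmit` name and the request shape are hypothetical, standing in for whatever the agent gets back from `veogent requests`:

```javascript
// Decide whether a new GENERATE_IMAGES request for a scene may be submitted,
// honoring the documented rules: no duplicate in-flight request per scene,
// and at most 5 simultaneous requests overall.
const MAX_CONCURRENT = 5;

function canSubmit(sceneId, activeRequests) {
  const inFlight = activeRequests.filter(
    (r) => r.status === 'PENDING' || r.status === 'PROCESSING',
  );
  const duplicate = inFlight.some((r) => r.sceneId === sceneId);
  return !duplicate && inFlight.length < MAX_CONCURRENT;
}

const active = [
  { sceneId: 's1', status: 'PROCESSING' },
  { sceneId: 's2', status: 'COMPLETED' },
];
console.log(canSubmit('s1', active)); // → false (already in flight)
console.log(canSubmit('s2', active)); // → true  (previous run finished)
```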
@@ -558,7 +720,12 @@ Continue the visual QA loop until the scene image is **director-approved**. Iter
Only run `GENERATE_VIDEO` **after** the scene already has a successful matching-orientation image:
```bash
veogent request -t "GENERATE_VIDEO" -p <proj> -c <chap> -s <scene> \
  -v "veo_3_1_fast" -S "normal" -i "imagen3.5" -o "HORIZONTAL"

# Wait for all videos with success verification:
veogent wait-videos -p <proj> -c <chap> --require-success
# Full snapshot:
veogent workflow-status -p <proj> -c <chap>
```

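The precondition above (a successful image in the matching orientation) can be asserted from a `scene-status` snapshot before queueing a video. A sketch; both the `canGenerateVideo` name and the `orientation` field on the image entry are assumptions, so adapt the field access to whatever your snapshot actually carries:

```javascript
// Gate GENERATE_VIDEO on the scene already having a COMPLETED image whose
// orientation matches the requested one. NOTE: the `orientation` field on the
// image entry is assumed here, not documented in the scene-status response.
function canGenerateVideo(scene, requestedOrientation) {
  const image = scene.image;
  return (
    image?.status === 'COMPLETED' &&
    Boolean(image.url) &&
    image.orientation === requestedOrientation
  );
}

const scene = {
  sceneId: 's1',
  image: { status: 'COMPLETED', url: 'https://example/img.png', orientation: 'HORIZONTAL' },
};
console.log(canGenerateVideo(scene, 'HORIZONTAL')); // → true
console.log(canGenerateVideo(scene, 'VERTICAL'));   // → false
```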
### Step 11: YouTube Publishing Assets