veogent 1.0.0

package/README.md ADDED
@@ -0,0 +1,103 @@
1
+ # 🎬 Veogent CLI <a href="https://veogent.com"><img src="https://veogent.com/favicon.ico" width="24" height="24" align="center" /></a>
2
+
3
+ **The official Command-Line Interface for the [VEOGENT API](https://veogent.com)**.
4
+
5
+ Veogent CLI gives you (and your AI Agents) the power to manage full-scale AI video and story projects directly from the terminal. Connect to projects, orchestrate multi-frame scenes, edit image prompts with professional camera cues, and trigger large-scale generation jobs natively.
6
+
7
+ Perfectly engineered for **Agentic workflows** — enabling tools like OpenClaw, Claude, and Codex to autonomously generate JSON-driven movies from scratch.
8
+
9
+ ---
10
+
11
+ ## 🚀 Installation
12
+
13
+ Install globally via npm:
14
+
15
+ ```bash
16
+ npm install -g veogent
17
+ ```
18
+
19
+ ## 🔐 Quick Start (Authentication)
20
+
21
+ Veogent CLI uses a secure, browser-based SSO flow via Firebase.
22
+
23
+ 1. Run the login command:
24
+ ```bash
25
+ veogent login
26
+ ```
27
+ 2. The CLI will automatically open your default browser to `https://veogent.com/cli-auth`.
28
+ 3. Sign in with your account. The API access token is then sent back to your local terminal via a temporary localhost callback.
29
+ 4. Verify your session:
30
+ ```bash
31
+ veogent status
32
+ ```
33
+
34
+ ---
35
+
36
+ ## 🛠️ Key Capabilities
37
+
38
+ All responses are returned as strict, pretty-printed **JSON**, so you can pipe `veogent` output into `jq` or parse it natively from AI Agents.
39
+
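As a sketch of how an agent might consume this JSON (the `data` envelope and project fields below are illustrative; some endpoints return the payload directly):

```javascript
// Hypothetical `veogent projects` output, for illustration only.
const raw = '{"data":[{"id":"APQF6Ay0kLeXLhpctTdD","projectName":"Cyberpunk T-Rex"}]}';

const parsed = JSON.parse(raw);
// Some endpoints wrap the payload in `data`, others return it directly.
const projects = Array.isArray(parsed) ? parsed : parsed.data;
const ids = projects.map((p) => p.id);
console.log(ids.join(','));
```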
40
+ ### 📁 Project Management
41
+ ```bash
42
+ # List all your active projects
43
+ veogent projects
44
+
45
+ # View available Image Material styles (e.g., CINEMATIC, PIXAR_3D)
46
+ veogent image-materials
47
+
48
+ # Create a brand new AI Story Project using your prompt
49
+ veogent create-project -n "Cyberpunk T-Rex" -k "T-rex, Neon, Sci-fi" -d "A massive T-rex walking inside Tokyo" -l "English" -m "CINEMATIC" -c 5
50
+ ```
51
+
52
+ ### 📖 Storyboard, Chapters & Scenes
53
+ ```bash
54
+ # Get all chapters within a project ID
55
+ veogent chapters <projectId>
56
+
57
+ # View characters cast for the project
58
+ veogent characters <projectId>
59
+
60
+ # Create scenes automatically using AI-generated narrative scripts
61
+ veogent create-scene -p <projectId> -c <chapterId> --flowkey -C "The T-Rex looks up at the sky." "A meteor shower begins."
62
+ ```
63
+
64
+ ### 🖌️ Directing & Editing (AI Prompt adjustments)
65
+ ```bash
66
+ # Edit an existing image prompt for a scene.
67
+ # Note: Use camera direction cues like "Wide shot," "Tilt up," or "Close-up."
68
+ veogent edit-scene -p <proj> -c <chap> -s <scene> -u "Low angle shot of the T-Rex, dramatic lighting."
69
+
70
+ # Apply a direct image in-paint edit to a specific character model
71
+ veogent edit-character -p <proj> -c "drelenavance" -u "change outfit to dark leather jacket" -e
72
+ ```
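A tiny illustrative helper (not part of the CLI; `composePrompt` is a hypothetical name) showing how camera cues like the ones above might be composed into a single edit prompt:

```javascript
// Hypothetical helper: joins camera cues into an edit-scene prompt string.
function composePrompt({ shot, subject, movement }) {
  return [shot, subject, movement].filter(Boolean).join(', ') + '.';
}

console.log(composePrompt({
  shot: 'Low angle shot',
  subject: 'the T-Rex under dramatic lighting',
  movement: 'camera tilts up',
}));
```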
73
+
74
+ ### 🎬 Media Generation (Gen Models)
75
+ Queue generation jobs directly from the terminal.
76
+ *Note: VEOGENT uses strict validation depending on the request type.*
77
+
78
+ ```bash
79
+ # Generate Image (Supports: imagen4, imagen3.5)
80
+ veogent request -t "GENERATE_IMAGES" -p <proj> -c <chap> -s <scene> -i "imagen4"
81
+
82
+ # Generate Video (Supports: veo_3_1_fast, veo_3_1_fast_r2v)
83
+ veogent request -t "GENERATE_VIDEO" -p <proj> -c <chap> -s <scene> -v "veo_3_1_fast_r2v" -S "normal"
84
+
85
+ # Generate Scene Video from existing Frame 0 (Requires orientation)
86
+ veogent request -t "CREATE_SCENE_VIDEO" -p <proj> -c <chap> -s <scene> -o "HORIZONTAL"
87
+ ```
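The type-dependent validation can be sketched as a small payload builder, mirroring (in simplified form) the logic the CLI's own `request` command applies before posting to the API:

```javascript
// Builds the request payload, adding only the fields each request type accepts.
function buildRequestPayload(opts) {
  const payload = {
    type: opts.type,
    project: opts.project,
    chapter: opts.chapter,
    scene: opts.scene,
  };
  if (['CREATE_SCENE_VIDEO', 'CREATE_CHAPTER_VIDEO', 'VIDEO_UPSCALE'].includes(opts.type)) {
    payload.orientation = opts.orientation || 'HORIZONTAL';
  }
  if (opts.type === 'GENERATE_IMAGES') {
    payload.imageModel = opts.imagemodel;
  }
  if (opts.type === 'GENERATE_VIDEO') {
    payload.model = opts.videomodel;
    payload.videoSpeedMode = opts.speed;
  }
  return payload;
}

console.log(JSON.stringify(buildRequestPayload({
  type: 'GENERATE_IMAGES', project: 'p1', chapter: 'c1', scene: 's1', imagemodel: 'imagen4',
})));
```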
88
+
89
+ ---
90
+
91
+ ## 🤖 For AI Agents
92
+
93
+ Veogent CLI ships with a built-in guide optimized for LLM and coding-agent context processing. Just run:
94
+
95
+ ```bash
96
+ veogent skill
97
+ ```
98
+ This prints out a comprehensive Markdown `SKILL.md` cheat sheet defining our exact DTO validations, model enumerations, logic pipelines, and required variables for orchestrating complete projects without hitting `400 Bad Request`.
99
+
100
+ ---
101
+
102
+ ## 📜 License
103
+ MIT License. Crafted with ❤️ by Pym & Tuan Nguyen.
package/api.js ADDED
@@ -0,0 +1,37 @@
1
+ import axios from 'axios';
2
+ import { getToken } from './config.js';
3
+
4
+ // Resolve the API base URL (overridable via the VGEN_API_URL environment variable)
5
+ const API_URL = process.env.VGEN_API_URL || 'https://api.veogent.com';
6
+
7
+ // API instance helper
8
+ export const api = axios.create({
9
+ baseURL: API_URL,
10
+ });
11
+
12
+ // Interceptor to inject bearer token before every request
13
+ api.interceptors.request.use((config) => {
14
+ const token = getToken();
15
+ if (token) {
16
+ config.headers.Authorization = `Bearer ${token}`;
17
+ }
18
+ return config;
19
+ }, (error) => {
20
+ return Promise.reject(error);
21
+ });
22
+
23
+ // Interceptor to handle specific errors generically
24
+ api.interceptors.response.use(
25
+ (response) => {
26
+ // Return only the response data payload
27
+ return response.data;
28
+ },
29
+ (error) => {
+ if (error.response && error.response.status === 401) {
+ console.error('\n❌ Unauthorized! Your token might be expired. Please run `veogent login` again.');
+ process.exit(1);
+ }
+ console.error(`\n❌ Error: ${error.response?.data?.message || error.message}`);
+ // Propagate the error so each command's catch block can also emit its JSON payload
+ return Promise.reject(error);
+ }
37
+ );
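A dependency-free sketch of the request interceptor's token injection above (the `getToken` stub and `applyAuth` name below are stand-ins for illustration, not part of the package):

```javascript
// Stand-in for config.js getToken(); would return null when not logged in.
const getToken = () => 'example-token';

// Mirrors the request interceptor: attach the bearer header only when a token exists.
function applyAuth(config) {
  const token = getToken();
  if (token) {
    config.headers = { ...config.headers, Authorization: `Bearer ${token}` };
  }
  return config;
}

console.log(applyAuth({ headers: {} }).headers.Authorization);
```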
package/auth.js ADDED
@@ -0,0 +1,77 @@
1
+ import express from 'express';
2
+ import cors from 'cors';
3
+ import open from 'open';
4
+ import crypto from 'crypto';
5
+
6
+ export class WebAuthFlow {
7
+ constructor(port = 7890) {
8
+ this.port = port;
9
+ this.app = express();
10
+ this.server = null;
11
+ // Generate a random token to verify the callback matches our local terminal
12
+ this.cliToken = crypto.randomBytes(32).toString('hex');
13
+ }
14
+
15
+ start() {
16
+ return new Promise((resolve, reject) => {
17
+ this.app.use(cors());
18
+ this.app.use(express.json());
19
+
20
+ // Callback endpoint hit by the Next.js frontend
21
+ this.app.post('/callback', (req, res) => {
22
+ const { uid, idToken, cliToken } = req.body;
23
+
24
+ // Basic CSRF/validation check
25
+ if (cliToken !== this.cliToken) {
26
+ res.status(403).json({ error: 'Mismatched CLI Token' });
27
+ return;
28
+ }
29
+
30
+ if (!uid || !idToken) {
31
+ res.status(400).json({ error: 'Missing UID or Token' });
32
+ return;
33
+ }
34
+
35
+ // Send success immediately so the browser UI updates to "Success" quickly
36
+ res.status(200).json({ status: 'success' });
37
+
38
+ // Close the server and return credentials to the CLI
39
+ setTimeout(() => {
40
+ this.stop();
41
+ resolve({ uid, accessToken: idToken });
42
+ }, 1000);
43
+ });
44
+
45
+ this.server = this.app.listen(this.port, () => {
46
+ // Open the default browser to the web app's authorization page (English locale by default)
47
+ const frontendUrl = process.env.VGEN_WEB_URL || 'https://veogent.com';
48
+ const authUrl = `${frontendUrl}/cli-auth?port=${this.port}&token=${this.cliToken}`;
49
+
50
+ console.log('🔄 Opening browser for authentication...');
51
+ console.log(`\nIf your browser doesn't open automatically, click this link:\n=> ${authUrl}\n`);
52
+
53
+ open(authUrl).catch(() => {
54
+ console.log('⚠️ Could not open browser automatically.');
55
+ });
56
+ }).on('error', (err) => {
57
+ if (err.code === 'EADDRINUSE') {
58
+ console.error(`\n❌ Error: Port ${this.port} is already in use.`);
59
+ reject(err);
60
+ }
61
+ });
62
+
63
+ // Timeout after 5 minutes
64
+ setTimeout(() => {
65
+ this.stop();
66
+ reject(new Error('Authentication timeout.'));
67
+ }, 5 * 60 * 1000);
68
+ });
69
+ }
70
+
71
+ stop() {
72
+ if (this.server) {
73
+ this.server.close();
74
+ this.server = null;
75
+ }
76
+ }
77
+ }
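The `/callback` validation above can be exercised in isolation. In this sketch the token is hardcoded (in the real flow it comes from `crypto.randomBytes(32)`), and `validateCallback` is an illustrative name, not a method of the class:

```javascript
const cliToken = 'a'.repeat(64); // stands in for crypto.randomBytes(32).toString('hex')

// Same checks as the /callback handler above, returned as plain objects.
function validateCallback(body, expected) {
  if (body.cliToken !== expected) return { status: 403, error: 'Mismatched CLI Token' };
  if (!body.uid || !body.idToken) return { status: 400, error: 'Missing UID or Token' };
  return { status: 200 };
}

console.log(validateCallback({ cliToken, uid: 'u1', idToken: 't1' }, cliToken).status);
console.log(validateCallback({ cliToken: 'forged', uid: 'u1', idToken: 't1' }, cliToken).status);
```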
package/config.js ADDED
@@ -0,0 +1,37 @@
1
+ import fs from 'fs';
2
+ import path from 'path';
3
+
4
+ // Define the path for the config directory and file
5
+ const configDir = path.join(process.env.HOME || process.env.USERPROFILE || process.cwd(), '.vgen-cli'); // fall back to cwd if no home dir is set
6
+ const configPath = path.join(configDir, 'config.json');
7
+
8
+ // Get the current configuration
9
+ export const getConfig = () => {
10
+ if (!fs.existsSync(configPath)) {
11
+ return {};
12
+ }
13
+ const data = fs.readFileSync(configPath, 'utf8');
14
+ try {
+ return JSON.parse(data);
+ } catch {
+ // Corrupted config file: treat as empty instead of crashing every command
+ return {};
+ }
15
+ };
16
+
17
+ // Get the stored token
18
+ export const getToken = () => {
19
+ const config = getConfig();
20
+ return config.token || null;
21
+ };
22
+
23
+ // Set and save the configuration
24
+ export const setConfig = (newConfig) => {
25
+ if (!fs.existsSync(configDir)) {
26
+ fs.mkdirSync(configDir, { recursive: true });
27
+ }
28
+ const config = { ...getConfig(), ...newConfig };
29
+ fs.writeFileSync(configPath, JSON.stringify(config, null, 2), 'utf8');
30
+ };
31
+
32
+ // Clear the configuration
33
+ export const clearConfig = () => {
34
+ if (fs.existsSync(configPath)) {
35
+ fs.unlinkSync(configPath);
36
+ }
37
+ };
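The read-merge-write cycle of `setConfig` can be shown with an in-memory stand-in for `~/.vgen-cli/config.json` (`readConfig`/`writeConfig` are illustrative names):

```javascript
// In-memory stand-in for the config file on disk.
let stored = JSON.stringify({ token: 'old-token' });

const readConfig = () => JSON.parse(stored);
// Mirrors setConfig: merge new keys over the existing config, then persist.
const writeConfig = (newConfig) => {
  stored = JSON.stringify({ ...readConfig(), ...newConfig }, null, 2);
};

writeConfig({ user: { email: 'agent@example.com' } });
console.log(Object.keys(readConfig()).sort().join(','));
```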
package/index.js ADDED
@@ -0,0 +1,419 @@
1
+ #!/usr/bin/env node
2
+ import { Command } from 'commander';
3
+ import { api } from './api.js';
4
+ import { setConfig, clearConfig, getToken } from './config.js';
5
+
6
+ const program = new Command();
7
+
8
+ program
9
+ .name('veogent')
10
+ .description('CLI to interact with the VEOGENT API')
11
+ .version('1.0.0');
12
+
13
+ import { WebAuthFlow } from './auth.js';
14
+
15
+ // Login
16
+ program
17
+ .command('login')
18
+ .description('Login to obtain access token via Web Browser')
19
+ .action(async () => {
20
+ try {
21
+ console.log('\n--- 🌐 VEOGENT CLI Web Authentication ---');
22
+ console.log('We will now open your default web browser to authorize VEOGENT CLI.\n');
23
+
24
+ // Start local web server to catch the callback from the Next.js app
25
+ const authFlow = new WebAuthFlow(7890); // default port
26
+ const authResult = await authFlow.start();
27
+
28
+ // Perform backend sign-in using the payload received from the browser
29
+ console.log('\n🔄 Credentials received! Saving API Access Token...');
30
+ const response = { data: { access_token: authResult.accessToken, email: authResult.uid, name: 'CLI User' } }; // Map the browser payload into a response-like shape; the Firebase uid doubles as the display identifier
31
+
32
+ if (response?.data?.access_token) {
33
+ setConfig({ token: response.data.access_token, user: response.data });
34
+ console.log(`✅ Successfully logged in as: ${response.data.email || response.data.name}`);
35
+ console.log(`You can now use all VEOGENT CLI commands!`);
36
+ } else {
37
+ console.error('❌ Failed to retrieve access token from VEOGENT Backend.');
38
+ }
39
+ } catch (error) {
40
+ console.error('\n❌ Login process failed or was canceled.', error.message);
41
+ }
42
+ });
43
+
44
+ // Logout
45
+ program
46
+ .command('logout')
47
+ .description('Clear saved credentials')
48
+ .action(() => {
49
+ clearConfig();
50
+ console.log('✅ Logged out successfully.');
51
+ });
52
+
53
+ // Status
54
+ program
55
+ .command('status')
56
+ .description('Show current authenticated user')
57
+ .action(async () => {
58
+ const token = getToken();
59
+ if (!token) {
60
+ console.log(JSON.stringify({ error: 'Not logged in' }));
61
+ return;
62
+ }
63
+
64
+ try {
65
+ const response = await api.get('/app/flow-key');
66
+ console.log(JSON.stringify({ authenticated: true, flowKey: response.data?.flowKey ?? response.flowKey ?? null }, null, 2)); // response interceptor already unwraps .data, so handle both shapes
67
+ } catch (error) {
68
+ console.log(JSON.stringify({ error: 'Error verifying token' }));
69
+ }
70
+ });
71
+
72
+ // --- Prompts & Materials ---
73
+ program
74
+ .command('image-materials')
75
+ .description('Get all available Image Material styles')
76
+ .action(async () => {
77
+ try {
78
+ const data = await api.get('/app/project/image-materials');
79
+ console.log(JSON.stringify(data.data || data, null, 2));
80
+ } catch (error) {
81
+ console.log(JSON.stringify({ status: "error", message: error.message }));
82
+ }
83
+ });
84
+
85
+ program
86
+ .command('custom-prompts')
87
+ .description('Get all custom prompt templates / story directives')
88
+ .action(async () => {
89
+ try {
90
+ const data = await api.get('/app/custom-prompts');
91
+ console.log(JSON.stringify(data.data || data, null, 2));
92
+ } catch (error) {
93
+ console.log(JSON.stringify({ status: "error", message: error.message }));
94
+ }
95
+ });
96
+ program
97
+ .command('projects')
98
+ .description('List your available projects')
99
+ .action(async () => {
100
+ try {
101
+ const data = await api.get('/app/projects');
102
+ console.log(JSON.stringify(data, null, 2));
103
+ } catch (error) {
+ console.log(JSON.stringify({ status: "error", message: error.message }));
+ }
104
+ });
105
+
106
+ program
107
+ .command('project <id>')
108
+ .description('Get details for a specific project')
109
+ .action(async (id) => {
110
+ try {
111
+ const data = await api.get(`/app/project/${id}`);
112
+ console.log(JSON.stringify(data.data, null, 2));
113
+ } catch (error) {
+ console.log(JSON.stringify({ status: "error", message: error.message }));
+ }
114
+ });
115
+
116
+ program
117
+ .command('create-project-description')
118
+ .description('Generate AI description for a new project based on keywords')
119
+ .requiredOption('-k, --keyword <keyword>', 'Keywords for the project')
120
+ .requiredOption('-l, --lang <lang>', 'Story language')
121
+ .requiredOption('-p, --promptId <promptId>', 'Custom Prompt ID from custom-prompts')
122
+ .action(async (options) => {
123
+ try {
124
+ const payload = {
125
+ keywords: options.keyword,
126
+ language: options.lang,
127
+ customPromptId: options.promptId,
128
+ objects: []
129
+ };
130
+ const data = await api.post('/app/description', payload);
131
+ console.log(JSON.stringify({ status: "success", descriptionData: data.data || data }, null, 2));
132
+ } catch (error) {
133
+ console.log(JSON.stringify({ status: "error", message: error.response?.data?.message || error.message }));
134
+ }
135
+ });
136
+
137
+ program
138
+ .command('create-project')
139
+ .description('Create a new project')
140
+ .requiredOption('-n, --name <name>', 'Project name')
141
+ .requiredOption('-k, --keyword <keyword>', 'Keyword')
142
+ .requiredOption('-d, --desc <desc>', 'Description')
143
+ .requiredOption('-l, --lang <lang>', 'Story language')
144
+ .requiredOption('-s, --sound [bool]', 'Sound effects (true/false)', true)
145
+ .requiredOption('-m, --material <material>', 'Image material')
146
+ .requiredOption('-c, --chapters <count>', 'Number of chapters', 1)
147
+ .option('-C, --customPromptId <customPromptId>', 'Custom Prompt ID')
148
+ .action(async (options) => {
149
+ try {
150
+ const payload = {
151
+ projectName: options.name,
152
+ keyword: options.keyword,
153
+ description: options.desc,
154
+ storyLanguage: options.lang,
155
+ soundEffects: options.sound === 'true' || options.sound === true,
156
+ imageMaterial: options.material,
157
+ numberChapters: parseInt(options.chapters),
158
+ };
159
+ if (options.customPromptId) payload.customPromptId = options.customPromptId;
160
+
161
+ const data = await api.post('/app/project', payload);
162
+ console.log(JSON.stringify({ status: "success", project: data.data || data }, null, 2));
163
+ } catch (error) {
164
+ console.log(JSON.stringify({ status: "error", message: error.message }));
165
+ }
166
+ });
167
+
168
+ // --- Chapters ---
169
+ program
170
+ .command('chapters <projectId>')
171
+ .description('Get all chapters for a project')
172
+ .action(async (projectId) => {
173
+ try {
174
+ const data = await api.get(`/app/chapters/${projectId}`);
175
+ console.log(JSON.stringify(data, null, 2));
176
+ } catch (error) {
+ console.log(JSON.stringify({ status: "error", message: error.message }));
+ }
177
+ });
178
+
179
+ program
180
+ .command('create-chapter-content')
181
+ .description('Generate content for a specific chapter')
182
+ .requiredOption('-p, --project <project>', 'Project ID')
183
+ .requiredOption('-c, --chapter <chapter>', 'Chapter ID')
184
+ .requiredOption('-s, --scenes <count>', 'Number of scenes', 1)
185
+ .action(async (options) => {
186
+ try {
187
+ const payload = {
188
+ project: options.project,
189
+ chapter: options.chapter,
190
+ numberScene: parseInt(options.scenes),
191
+ };
192
+ const data = await api.post('/app/chapter/content', payload);
193
+ console.log(JSON.stringify({ status: "success", chapterContent: data.data || data }, null, 2));
194
+ } catch (error) {
195
+ console.log(JSON.stringify({ status: "error", message: error.message }));
196
+ }
197
+ });
198
+
199
+
200
+ // --- Characters ---
201
+ program
202
+ .command('characters <projectId>')
203
+ .description('Get all characters for a specific project')
204
+ .action(async (projectId) => {
205
+ try {
206
+ const data = await api.get(`/app/characters/${projectId}`);
207
+ console.log(JSON.stringify(data.data || data, null, 2));
208
+ } catch (error) {
209
+ console.log(JSON.stringify({ status: "error", message: error.message }));
210
+ }
211
+ });
212
+
213
+ program
214
+ .command('edit-character')
215
+ .description('Update a character\'s description or edit their generated image directly via AI prompt')
216
+ .requiredOption('-p, --project <project>', 'Project ID')
217
+ .requiredOption('-c, --character <character>', 'Character ID (e.g., drelenavance)')
218
+ .requiredOption('-u, --userprompt <userprompt>', 'User instruction to modify the character')
219
+ .option('-e, --editimage', 'Enable direct Image Editing Mode (true). Default is Regenerate Profile Mode (false)', false)
220
+ .option('--flowkey', 'Enable useFlowKey to sync context via FireBase', true)
221
+ .action(async (options) => {
222
+ try {
223
+ const payload = {
224
+ userPrompt: options.userprompt,
225
+ useFlowKey: options.flowkey === true || options.flowkey === 'true',
226
+ editImageMode: options.editimage === true,
227
+ };
228
+ const data = await api.post(`/app/character/${options.project}/${options.character}/update-by-prompt`, payload);
229
+ console.log(JSON.stringify({ status: "success", character: data.data || data }, null, 2));
230
+ } catch (error) {
231
+ console.log(JSON.stringify({ status: "error", message: error.response?.data?.message || error.message }));
232
+ }
233
+ });
234
+ program
235
+ .command('create-scene')
236
+ .description('Create a new scene from text content')
237
+ .requiredOption('-p, --project <project>', 'Project ID')
238
+ .requiredOption('-c, --chapter <chapter>', 'Chapter ID')
239
+ .requiredOption('-C, --content <content...>', 'Array of scene text scripts')
240
+ .option('--flowkey', 'Enable useFlowKey to sync context via FireBase', false)
241
+ .action(async (options) => {
242
+ try {
243
+ const payload = {
244
+ project: options.project,
245
+ chapter: options.chapter,
246
+ chapterContent: options.content,
247
+ useFlowKey: options.flowkey === true,
248
+ };
249
+ const data = await api.post('/app/scene', payload);
250
+ console.log(JSON.stringify({ status: "success", scene: data.data || data }, null, 2));
251
+ } catch (error) {
252
+ console.log(JSON.stringify({ status: "error", message: error.message }));
253
+ }
254
+ });
255
+
256
+ program
257
+ .command('edit-scene')
258
+ .description('Edit an existing scene (image prompt) via AI assistance')
259
+ .requiredOption('-p, --project <project>', 'Project ID')
260
+ .requiredOption('-c, --chapter <chapter>', 'Chapter ID')
261
+ .requiredOption('-s, --scene <scene>', 'Scene ID')
262
+ .requiredOption('-u, --userprompt <userprompt>', 'User narrative prompt to modify the scene')
263
+ .option('--no-regenerate', 'Tells backend to NOT automatically trigger regenerating the image')
264
+ .option('-R, --request <request>', 'Target a specific past Request ID to edit its output')
265
+ .action(async (options) => {
266
+ try {
267
+ const payload = {
268
+ userPrompt: options.userprompt,
269
+ regenerateImage: options.regenerate !== false,
270
+ };
271
+
272
+ if (options.request) {
273
+ payload.requestId = options.request;
274
+ }
275
+
276
+ const data = await api.patch(`/app/scene/script-segment/${options.project}/${options.chapter}/${options.scene}`, payload);
277
+ console.log(JSON.stringify({ status: "success", scene: data.data || data }, null, 2));
278
+ } catch (error) {
279
+ console.log(JSON.stringify({ status: "error", message: error.response?.data?.message || error.message }));
280
+ }
281
+ });
282
+
283
+ // --- Execution Request (Generate Image / Video) ---
284
+ program
285
+ .command('request')
286
+ .description('Create a job request (GENERATE_IMAGES, GENERATE_VIDEO, CREATE_SCENE_VIDEO)')
287
+ .requiredOption('-t, --type <type>', 'Request type (e.g., GENERATE_IMAGES, GENERATE_VIDEO)')
288
+ .requiredOption('-p, --project <project>', 'Project ID')
289
+ .requiredOption('-c, --chapter <chapter>', 'Chapter ID')
290
+ .requiredOption('-s, --scene <scene>', 'Scene ID')
291
+ .option('-o, --orientation <orientation>', 'Request orientation (HORIZONTAL, VERTICAL)', '')
292
+ .option('-i, --imagemodel <imagemodel>', 'Image Model (imagen4, imagen3.5)', 'imagen4')
293
+ .option('-v, --videomodel <videomodel>', 'Video Model (veo_3_1_fast, veo_3_1_fast_r2v)', 'veo_3_1_fast_r2v')
294
+ .option('-S, --speed <speed>', 'Video Speed (normal, timelapse, slowmotion)', 'normal')
295
+ .action(async (options) => {
296
+ try {
297
+ const payload = {
298
+ type: options.type,
299
+ project: options.project,
300
+ chapter: options.chapter,
301
+ scene: options.scene,
302
+ };
303
+
304
+ // Conditionally add fields based on strict DTO validators
305
+ if (['CREATE_SCENE_VIDEO', 'CREATE_CHAPTER_VIDEO', 'VIDEO_UPSCALE'].includes(options.type)) {
306
+ payload.orientation = options.orientation || 'HORIZONTAL'; // Provide fallback
307
+ }
308
+ if (options.type === 'GENERATE_IMAGES') {
309
+ payload.imageModel = options.imagemodel;
310
+ }
311
+ if (options.type === 'GENERATE_VIDEO') {
312
+ payload.model = options.videomodel;
313
+ payload.videoSpeedMode = options.speed;
314
+ payload.useFlowKey = false;
315
+ }
316
+
317
+ const data = await api.post('/app/request', payload);
318
+ console.log(JSON.stringify({ status: "success", request: data.data || data }, null, 2));
319
+ } catch (error) {
320
+ console.log(JSON.stringify({ status: "error", message: error.message }));
321
+ }
322
+ });
323
+
324
+ program
325
+ .command('requests')
326
+ .description('Get all generation requests/jobs status for the current user')
327
+ .action(async () => {
328
+ try {
329
+ const data = await api.get('/app/requests');
330
+ console.log(JSON.stringify(data.data || data, null, 2));
331
+ } catch (error) {
332
+ console.log(JSON.stringify({ status: "error", message: error.message }));
333
+ }
334
+ });
335
+
336
+ // --- System ---
337
+ program
338
+ .command('skill')
339
+ .description('Print the native Agent SKILL.md guide for VEOGENT CLI')
340
+ .action(() => {
341
+ const skillContent = `# VEOGENT CLI Agent Skill
342
+
343
+ The \`veogent\` command-line interface interacts with the VEOGENT API for managing movie/story projects, chapters, scenes, and media generation requests (images, videos).
344
+ This CLI is designed with AI agents in mind: all data outputs are in raw JSON format for easy parsing and chaining.
345
+
346
+ ## 🛠 Authentication
347
+ Agents **CANNOT** authenticate automatically because \`veogent login\` opens a local browser for Google Firebase OAuth.
348
+ * **Check Auth:** \`veogent status\`
349
+ * **Action:** If status returns \`{"error": "Not logged in"}\`, instruct the user to run \`veogent login\` on their desktop terminal manually. Wait for them to confirm success.
350
+
351
+ ## 📁 Projects & Assets
352
+ * **List Image Materials:** \`veogent image-materials\` (e.g., \`CINEMATIC\`, \`PIXAR_3D\`)
353
+ * **List Custom Prompts:** \`veogent custom-prompts\`
354
+ * **List Projects:** \`veogent projects\`
355
+ * *Note: IDs are Firebase Document Strings (e.g., \`APQF6Ay0kLeXLhpctTdD\`), not simple integers. Always use the field \`"id"\` from the JSON response array.*
356
+ * **Get Project Details:** \`veogent project <projectId>\`
357
+ * **Create Project:** \`veogent create-project -n "Project Name" -k "Scifi" -d "Description" -l "English" -m "Anime style" -c 1\`
358
+
359
+ ## 📖 Chapters & Scenes
360
+ * **List Chapters for a Project:** \`veogent chapters <projectId>\`
361
+ * **Create Chapter Content:** \`veogent create-chapter-content -p <projectId> -c <chapterId> -s 5\` (Generates text/story content for 5 scenes)
362
+ * **Create Scene:** \`veogent create-scene -p <projectId> -c <chapterId> -C "Scene script text 1" "Scene script text 2"\` (\`-C\` takes the scene script texts, not scene IDs)
363
+
364
+ ## 🎬 Generation Requests
365
+ To initiate generation jobs, use \`veogent request\`.
366
+ **CRITICAL VALIDATION RULES (Strict DTO):**
367
+ Do not pass unsupported flags for a specific type, or the server will return a \`400 Bad Request\` or \`Validation Error\`.
368
+
369
+ 1. **Generate Images** (\`GENERATE_IMAGES\`):
370
+ * **Required Options:** \`-t GENERATE_IMAGES\`, \`-p <proj>\`, \`-c <chap>\`, \`-s <scene>\`, \`-i <imagemodel>\`
371
+ * **Allowed Models:** \`imagen4\`, \`imagen3.5\`
372
+ * **Prohibited:** Do NOT pass \`-o\` (orientation) or \`-v\` (videomodel) or \`-S\` (speed).
373
+ * \`veogent request -t "GENERATE_IMAGES" -p "projID" -c "chapID" -s "sceneID" -i "imagen4"\`
374
+
375
+ 2. **Generate Video** (\`GENERATE_VIDEO\`):
376
+ * **Required Options:** \`-t GENERATE_VIDEO\`, \`-p <proj>\`, \`-c <chap>\`, \`-s <scene>\`, \`-v <videomodel>\`, \`-S <speed>\`
377
+ * **Allowed Models:** \`veo_3_1_fast\`, \`veo_3_1_fast_r2v\`
378
+ * **Allowed Speeds:** \`normal\`, \`timelapse\`, \`slowmotion\`
379
+ * \`veogent request -t "GENERATE_VIDEO" -p "projID" -c "chapID" -s "sceneID" -v "veo_3_1_fast_r2v" -S "normal"\`
380
+
381
+ 3. **Create Scene Video / Chapter Video** (\`CREATE_SCENE_VIDEO\` / \`CREATE_CHAPTER_VIDEO\` / \`VIDEO_UPSCALE\`):
382
+ * **Required Options:** \`-t <type>\`, \`-p <proj>\`, \`-c <chap>\`, \`-s <scene>\`, \`-o <orientation>\`
383
+ * **Allowed Orientations:** \`HORIZONTAL\`, \`VERTICAL\`
384
+ * **Prohibited:** Do NOT pass \`-i\` (image model) or \`-v\` (video model).
385
+ * \`veogent request -t "CREATE_SCENE_VIDEO" -p "projID" -c "chapID" -s "sceneID" -o "HORIZONTAL"\`
386
+
387
+ ## 🖌️ Character Editing
388
+ Characters generated within a project can be updated or re-sketched via text prompts using \`veogent edit-character\`.
389
+ * **List all Characters in a Project:**
390
+ * \`veogent characters <projectId>\`
391
+ * **Regenerate Character Profile:** (Updates description/traits and generates a new portrait base)
392
+ * \`veogent edit-character -p <proj> -c "drelenavance" -u "change outfit to lab coat"\`
393
+ * **Direct Image Edit Mode:** (Applies changes directly to the existing generated portrait)
394
+ * \`veogent edit-character -p <proj> -c "drelenavance" -u "change shoe to black" -e\`
395
+
396
+ ## 🖌️ Scene Editing
397
+ If a generated image prompt needs adjustment by the AI, use \`veogent edit-scene\` to hit the \`updateScriptSegment\` workflow:
398
+ * **Context:** Note that you are editing the Image Prompt for **Frame 0 (the starting image)** used for video generation.
399
+ * **Prompting Guide:** Beside the main visual description, you CAN include minor action descriptions (e.g., "bus speeding over cliff", "smoke rising"). You SHOULD heavily leverage Camera Angles and Camera Movements to steer the frame composition:
400
+ * **Shot Types:** Close-up, Wide shot, Medium shot, Extreme close-up, Long shot.
401
+ * **Angles:** High angle, Low angle, Bird's-eye view, Eye-level, Over-the-shoulder, POV.
402
+ * **Camera Tracking:** Pan left (←), Pan right (→), Tilt up (↑), Tilt down (↓), Zoom in (+), Zoom out (-).
403
+ * **Modify Image Prompt (default also auto-regenerates the image):**
404
+ * \`veogent edit-scene -p <proj> -c <chap> -s <scene> -u "Low angle shot of a yellow school bus speeding over a cliff, wide angle, dramatic lighting. Camera tilts down."\`
405
+ * **Modify Image Prompt (Cancel Auto-Regen):**
406
+ * \`veogent edit-scene -p <proj> -c <chap> -s <scene> -u "child seat behind Paulo" --no-regenerate\`
407
+ * **Edit a specific past generated image:** Pass the previous generated \`requestId\` using \`-R\`.
408
+ * \`veogent edit-scene -p <proj> -c <chap> -s <scene> -u "make the car red" -R <requestId>\`
409
+
410
+ * **List All Generation Requests/Jobs Status:** \`veogent requests\`
411
+
412
+ ## 💡 Best Practices for AI Agents
413
+ 1. **Data Chaining Workflow:** Always use the ID strings from \`veogent projects\` -> pass to \`veogent chapters\` -> extract \`sceneId\` -> pass to \`veogent request <...args>\`.
414
+ 2. **Error Handling:** The CLI returns JSON like \`{"status": "error", "message": "..."}\`. Check this before proceeding with the next logic block.
415
+ 3. If an API request returns \`400 Bad Request\`, review your flags. You might be sending forbidden options for the requested DTO \`RequestType\`.`;
416
+ console.log(skillContent);
417
+ });
418
+
419
+ program.parse(process.argv);
package/package.json ADDED
@@ -0,0 +1,38 @@
1
+ {
2
+ "name": "veogent",
3
+ "version": "1.0.0",
4
+ "description": "The official CLI to interact with the VEOGENT API - AI Video and Image generation platform",
5
+ "main": "index.js",
6
+ "bin": {
7
+ "veogent": "./index.js"
8
+ },
9
+ "type": "module",
10
+ "scripts": {
11
+ "test": "echo \"Error: no test specified\" && exit 1"
12
+ },
13
+ "keywords": [
14
+ "veogent",
15
+ "cli",
16
+ "ai",
17
+ "video-generation",
18
+ "image-generation",
19
+ "sora",
20
+ "veo"
21
+ ],
22
+ "author": "Pym & Tuan Nguyen",
23
+ "license": "MIT",
24
+ "dependencies": {
25
+ "axios": "^1.13.5",
26
+ "commander": "^14.0.3",
27
+ "cors": "^2.8.6",
28
+ "dotenv": "^17.3.1",
29
+ "express": "^5.2.1",
30
+ "form-data": "^4.0.5",
31
+ "inquirer": "^13.3.0",
32
+ "open": "^11.0.0"
33
+ },
34
+ "repository": {
35
+ "type": "git",
36
+ "url": "https://veogent.com"
37
+ }
38
+ }