@vibeframe/cli 0.27.0 → 0.30.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (118)
  1. package/LICENSE +21 -0
  2. package/dist/agent/adapters/index.d.ts +1 -0
  3. package/dist/agent/adapters/index.d.ts.map +1 -1
  4. package/dist/agent/adapters/index.js +5 -0
  5. package/dist/agent/adapters/index.js.map +1 -1
  6. package/dist/agent/adapters/openrouter.d.ts +16 -0
  7. package/dist/agent/adapters/openrouter.d.ts.map +1 -0
  8. package/dist/agent/adapters/openrouter.js +100 -0
  9. package/dist/agent/adapters/openrouter.js.map +1 -0
  10. package/dist/agent/types.d.ts +1 -1
  11. package/dist/agent/types.d.ts.map +1 -1
  12. package/dist/commands/agent.d.ts.map +1 -1
  13. package/dist/commands/agent.js +3 -1
  14. package/dist/commands/agent.js.map +1 -1
  15. package/dist/commands/ai-edit-cli.d.ts.map +1 -1
  16. package/dist/commands/ai-edit-cli.js +18 -0
  17. package/dist/commands/ai-edit-cli.js.map +1 -1
  18. package/dist/commands/generate.js +14 -0
  19. package/dist/commands/generate.js.map +1 -1
  20. package/dist/commands/schema.d.ts +1 -0
  21. package/dist/commands/schema.d.ts.map +1 -1
  22. package/dist/commands/schema.js +122 -21
  23. package/dist/commands/schema.js.map +1 -1
  24. package/dist/commands/setup.js +5 -2
  25. package/dist/commands/setup.js.map +1 -1
  26. package/dist/config/schema.d.ts +2 -1
  27. package/dist/config/schema.d.ts.map +1 -1
  28. package/dist/config/schema.js +2 -0
  29. package/dist/config/schema.js.map +1 -1
  30. package/dist/index.js +0 -0
  31. package/package.json +16 -12
  32. package/.turbo/turbo-build.log +0 -4
  33. package/.turbo/turbo-lint.log +0 -21
  34. package/.turbo/turbo-test.log +0 -689
  35. package/src/agent/adapters/claude.ts +0 -143
  36. package/src/agent/adapters/gemini.ts +0 -159
  37. package/src/agent/adapters/index.ts +0 -61
  38. package/src/agent/adapters/ollama.ts +0 -231
  39. package/src/agent/adapters/openai.ts +0 -116
  40. package/src/agent/adapters/xai.ts +0 -119
  41. package/src/agent/index.ts +0 -251
  42. package/src/agent/memory/index.ts +0 -151
  43. package/src/agent/prompts/system.ts +0 -106
  44. package/src/agent/tools/ai-editing.ts +0 -845
  45. package/src/agent/tools/ai-generation.ts +0 -1073
  46. package/src/agent/tools/ai-pipeline.ts +0 -1055
  47. package/src/agent/tools/ai.ts +0 -21
  48. package/src/agent/tools/batch.ts +0 -429
  49. package/src/agent/tools/e2e.test.ts +0 -545
  50. package/src/agent/tools/export.ts +0 -184
  51. package/src/agent/tools/filesystem.ts +0 -237
  52. package/src/agent/tools/index.ts +0 -150
  53. package/src/agent/tools/integration.test.ts +0 -775
  54. package/src/agent/tools/media.ts +0 -697
  55. package/src/agent/tools/project.ts +0 -313
  56. package/src/agent/tools/timeline.ts +0 -951
  57. package/src/agent/types.ts +0 -68
  58. package/src/commands/agent.ts +0 -340
  59. package/src/commands/ai-analyze.ts +0 -429
  60. package/src/commands/ai-animated-caption.ts +0 -390
  61. package/src/commands/ai-audio.ts +0 -941
  62. package/src/commands/ai-broll.ts +0 -490
  63. package/src/commands/ai-edit-cli.ts +0 -658
  64. package/src/commands/ai-edit.ts +0 -1542
  65. package/src/commands/ai-fill-gaps.ts +0 -566
  66. package/src/commands/ai-helpers.ts +0 -65
  67. package/src/commands/ai-highlights.ts +0 -1303
  68. package/src/commands/ai-image.ts +0 -761
  69. package/src/commands/ai-motion.ts +0 -347
  70. package/src/commands/ai-narrate.ts +0 -451
  71. package/src/commands/ai-review.ts +0 -309
  72. package/src/commands/ai-script-pipeline-cli.ts +0 -1710
  73. package/src/commands/ai-script-pipeline.ts +0 -1365
  74. package/src/commands/ai-suggest-edit.ts +0 -264
  75. package/src/commands/ai-video-fx.ts +0 -445
  76. package/src/commands/ai-video.ts +0 -915
  77. package/src/commands/ai-viral.ts +0 -595
  78. package/src/commands/ai-visual-fx.ts +0 -601
  79. package/src/commands/ai.test.ts +0 -627
  80. package/src/commands/ai.ts +0 -307
  81. package/src/commands/analyze.ts +0 -282
  82. package/src/commands/audio.ts +0 -644
  83. package/src/commands/batch.test.ts +0 -279
  84. package/src/commands/batch.ts +0 -440
  85. package/src/commands/detect.ts +0 -329
  86. package/src/commands/doctor.ts +0 -237
  87. package/src/commands/edit-cmd.ts +0 -1014
  88. package/src/commands/export.ts +0 -918
  89. package/src/commands/generate.ts +0 -2146
  90. package/src/commands/media.ts +0 -177
  91. package/src/commands/output.ts +0 -142
  92. package/src/commands/pipeline.ts +0 -398
  93. package/src/commands/project.test.ts +0 -127
  94. package/src/commands/project.ts +0 -149
  95. package/src/commands/sanitize.ts +0 -60
  96. package/src/commands/schema.ts +0 -130
  97. package/src/commands/setup.ts +0 -509
  98. package/src/commands/timeline.test.ts +0 -499
  99. package/src/commands/timeline.ts +0 -529
  100. package/src/commands/validate.ts +0 -77
  101. package/src/config/config.test.ts +0 -197
  102. package/src/config/index.ts +0 -125
  103. package/src/config/schema.ts +0 -82
  104. package/src/engine/index.ts +0 -2
  105. package/src/engine/project.test.ts +0 -702
  106. package/src/engine/project.ts +0 -439
  107. package/src/index.ts +0 -146
  108. package/src/utils/api-key.test.ts +0 -41
  109. package/src/utils/api-key.ts +0 -247
  110. package/src/utils/audio.ts +0 -83
  111. package/src/utils/exec-safe.ts +0 -75
  112. package/src/utils/first-run.ts +0 -52
  113. package/src/utils/provider-resolver.ts +0 -56
  114. package/src/utils/remotion.ts +0 -951
  115. package/src/utils/subtitle.test.ts +0 -227
  116. package/src/utils/subtitle.ts +0 -169
  117. package/src/utils/tty.ts +0 -196
  118. package/tsconfig.json +0 -20
package/src/agent/adapters/xai.ts
@@ -1,119 +0,0 @@
- /**
-  * xAI Grok LLM Adapter (OpenAI-compatible)
-  */
-
- import OpenAI from "openai";
- import type { LLMAdapter } from "./index.js";
- import type {
-   ToolDefinition,
-   LLMResponse,
-   AgentMessage,
-   ToolCall,
-   LLMProvider,
- } from "../types.js";
-
- export class XAIAdapter implements LLMAdapter {
-   readonly provider: LLMProvider = "xai";
-   private client: OpenAI | null = null;
-   private model: string = "grok-4-1-fast-reasoning";
-
-   async initialize(apiKey: string): Promise<void> {
-     this.client = new OpenAI({
-       apiKey,
-       baseURL: "https://api.x.ai/v1",
-     });
-   }
-
-   isInitialized(): boolean {
-     return this.client !== null;
-   }
-
-   setModel(model: string): void {
-     this.model = model;
-   }
-
-   async chat(
-     messages: AgentMessage[],
-     tools: ToolDefinition[]
-   ): Promise<LLMResponse> {
-     if (!this.client) {
-       throw new Error("xAI adapter not initialized");
-     }
-
-     // Convert messages to OpenAI format
-     const openaiMessages: OpenAI.Chat.ChatCompletionMessageParam[] = messages.map(
-       (msg) => {
-         if (msg.role === "tool") {
-           return {
-             role: "tool" as const,
-             tool_call_id: msg.toolCallId!,
-             content: msg.content,
-           };
-         }
-         if (msg.role === "assistant" && msg.toolCalls) {
-           return {
-             role: "assistant" as const,
-             content: msg.content || null,
-             tool_calls: msg.toolCalls.map((tc) => ({
-               id: tc.id,
-               type: "function" as const,
-               function: {
-                 name: tc.name,
-                 arguments: JSON.stringify(tc.arguments),
-               },
-             })),
-           };
-         }
-         return {
-           role: msg.role as "system" | "user" | "assistant",
-           content: msg.content,
-         };
-       }
-     );
-
-     // Convert tools to OpenAI format
-     const openaiTools: OpenAI.Chat.ChatCompletionTool[] = tools.map((tool) => ({
-       type: "function" as const,
-       function: {
-         name: tool.name,
-         description: tool.description,
-         parameters: tool.parameters as unknown as Record<string, unknown>,
-       },
-     }));
-
-     // Make API call
-     const response = await this.client.chat.completions.create({
-       model: this.model,
-       messages: openaiMessages,
-       tools: openaiTools.length > 0 ? openaiTools : undefined,
-       tool_choice: openaiTools.length > 0 ? "auto" : undefined,
-     });
-
-     const choice = response.choices[0];
-     const message = choice.message;
-
-     // Parse tool calls
-     let toolCalls: ToolCall[] | undefined;
-     if (message.tool_calls && message.tool_calls.length > 0) {
-       toolCalls = message.tool_calls.map((tc) => ({
-         id: tc.id,
-         name: tc.function.name,
-         arguments: JSON.parse(tc.function.arguments),
-       }));
-     }
-
-     // Map finish reason
-     let finishReason: LLMResponse["finishReason"] = "stop";
-     if (choice.finish_reason === "tool_calls") {
-       finishReason = "tool_calls";
-     } else if (choice.finish_reason === "length") {
-       finishReason = "length";
-     }
-
-     return {
-       content: message.content || "",
-       toolCalls,
-       finishReason,
-     };
-   }
- }
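The deleted adapter's finish-reason handling can be sketched as a standalone function. This is an illustrative extraction for reference, not code that ships in the package; the `mapFinishReason` name is hypothetical, but the mapping itself matches the deleted adapter:

```typescript
// Hypothetical standalone extraction of the adapter's finish-reason logic.
type FinishReason = "stop" | "tool_calls" | "length";

// Narrow an OpenAI-compatible finish_reason string to the internal union;
// anything unrecognized (e.g. "content_filter") falls back to "stop".
function mapFinishReason(raw: string | null): FinishReason {
  if (raw === "tool_calls") return "tool_calls";
  if (raw === "length") return "length";
  return "stop";
}

console.log(mapFinishReason("length")); // returns "length"
console.log(mapFinishReason(null)); // returns "stop"
```

Since xAI exposes an OpenAI-compatible endpoint, the adapter only changes `baseURL` and reuses the `openai` client; the same fallback-to-"stop" default keeps the agent loop from stalling on provider-specific finish reasons.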
package/src/agent/index.ts
@@ -1,251 +0,0 @@
- /**
-  * AgentExecutor - Main Agentic Loop
-  * Orchestrates LLM reasoning and tool execution
-  */
-
- import type {
-   AgentMessage,
-   AgentContext,
-   ToolCall,
-   ToolResult,
-   LLMProvider,
- } from "./types.js";
- import type { LLMAdapter } from "./adapters/index.js";
- import { createAdapter } from "./adapters/index.js";
- import { ToolRegistry, registerAllTools } from "./tools/index.js";
- import { getSystemPrompt } from "./prompts/system.js";
- import { ConversationMemory } from "./memory/index.js";
-
- export interface AgentExecutorOptions {
-   provider: LLMProvider;
-   apiKey: string;
-   model?: string;
-   maxTurns?: number;
-   verbose?: boolean;
-   projectPath?: string;
-   /**
-    * Callback to confirm before executing each tool.
-    * Return true to execute, false to skip.
-    */
-   confirmCallback?: (toolName: string, args: Record<string, unknown>) => Promise<boolean>;
- }
-
- export interface ExecutionResult {
-   response: string;
-   toolsUsed: string[];
-   turns: number;
- }
-
- /**
-  * AgentExecutor class
-  * Handles the agentic loop: reason → tool call → result → reason → ...
-  */
- /** Mask sensitive values in tool arguments for verbose logging */
- function maskSensitiveArgs(args: Record<string, unknown>): Record<string, unknown> {
-   const sensitiveKeys = ["apiKey", "api_key", "token", "secret", "password", "key"];
-   const masked: Record<string, unknown> = {};
-   for (const [k, v] of Object.entries(args)) {
-     if (sensitiveKeys.some((sk) => k.toLowerCase().includes(sk.toLowerCase())) && typeof v === "string") {
-       masked[k] = v.slice(0, 4) + "****";
-     } else {
-       masked[k] = v;
-     }
-   }
-   return masked;
- }
-
- export class AgentExecutor {
-   private adapter: LLMAdapter | null = null;
-   private registry: ToolRegistry;
-   private memory: ConversationMemory;
-   private context: AgentContext;
-   private config: AgentExecutorOptions;
-   private initialized = false;
-
-   constructor(options: AgentExecutorOptions) {
-     this.config = {
-       maxTurns: 10,
-       verbose: false,
-       ...options,
-     };
-     this.registry = new ToolRegistry();
-     this.memory = new ConversationMemory();
-     this.context = {
-       projectPath: options.projectPath || null,
-       workingDirectory: process.cwd(),
-     };
-   }
-
-   /**
-    * Initialize the agent
-    */
-   async initialize(): Promise<void> {
-     if (this.initialized) return;
-
-     // Create and initialize LLM adapter
-     this.adapter = await createAdapter(this.config.provider);
-     await this.adapter.initialize(this.config.apiKey);
-
-     // Register all tools
-     await registerAllTools(this.registry);
-
-     // Add system message
-     const systemPrompt = getSystemPrompt(this.context);
-     this.memory.addSystem(systemPrompt);
-
-     this.initialized = true;
-
-     if (this.config.verbose) {
-       console.log(`[Agent] Initialized with ${this.registry.size} tools`);
-       console.log(`[Agent] Provider: ${this.config.provider}`);
-     }
-   }
-
-   /**
-    * Execute a user request
-    */
-   async execute(userInput: string): Promise<ExecutionResult> {
-     if (!this.initialized || !this.adapter) {
-       throw new Error("Agent not initialized. Call initialize() first.");
-     }
-
-     // Add user message
-     this.memory.addUser(userInput);
-
-     const toolsUsed: string[] = [];
-     let turns = 0;
-
-     // Agentic loop
-     while (turns < this.config.maxTurns!) {
-       turns++;
-
-       if (this.config.verbose) {
-         console.log(`[Agent] Turn ${turns}`);
-       }
-
-       // Get LLM response
-       const response = await this.adapter.chat(
-         this.memory.getMessages(),
-         this.registry.getDefinitions()
-       );
-
-       // Handle tool calls
-       if (response.finishReason === "tool_calls" && response.toolCalls) {
-         // Add assistant message with tool calls
-         this.memory.addAssistant(response.content, response.toolCalls);
-
-         // Execute each tool
-         for (const toolCall of response.toolCalls) {
-           if (this.config.verbose) {
-             console.log(`[Agent] Calling tool: ${toolCall.name}`);
-             const maskedArgs = maskSensitiveArgs(toolCall.arguments);
-             console.log(`[Agent] Args: ${JSON.stringify(maskedArgs)}`);
-           }
-
-           const result = await this.executeTool(toolCall);
-           toolsUsed.push(toolCall.name);
-
-           // Add tool result
-           this.memory.addToolResult(toolCall.id, result);
-
-           if (this.config.verbose) {
-             console.log(`[Agent] Result: ${result.success ? "success" : "error"}`);
-             if (result.output) {
-               console.log(`[Agent] Output: ${result.output.substring(0, 200)}...`);
-             }
-           }
-         }
-
-         // Continue loop to get next response
-         continue;
-       }
-
-       // No more tool calls - we're done
-       this.memory.addAssistant(response.content);
-
-       return {
-         response: response.content,
-         toolsUsed: [...new Set(toolsUsed)], // Deduplicate
-         turns,
-       };
-     }
-
-     // Max turns reached
-     return {
-       response: "Maximum turns reached. Please try breaking down your request.",
-       toolsUsed: [...new Set(toolsUsed)],
-       turns,
-     };
-   }
-
-   /**
-    * Execute a single tool
-    */
-   private async executeTool(toolCall: ToolCall): Promise<ToolResult> {
-     // Check confirmCallback if set
-     if (this.config.confirmCallback) {
-       const confirmed = await this.config.confirmCallback(
-         toolCall.name,
-         toolCall.arguments
-       );
-       if (!confirmed) {
-         return {
-           toolCallId: toolCall.id,
-           success: false,
-           output: "Tool execution skipped by user",
-         };
-       }
-     }
-
-     const result = await this.registry.execute(
-       toolCall.name,
-       toolCall.arguments,
-       this.context
-     );
-     result.toolCallId = toolCall.id;
-     return result;
-   }
-
-   /**
-    * Update context (e.g., when project changes)
-    */
-   updateContext(updates: Partial<AgentContext>): void {
-     this.context = { ...this.context, ...updates };
-   }
-
-   /**
-    * Reset conversation memory
-    */
-   reset(): void {
-     this.memory.clear();
-     const systemPrompt = getSystemPrompt(this.context);
-     this.memory.addSystem(systemPrompt);
-   }
-
-   /**
-    * Get current context
-    */
-   getContext(): AgentContext {
-     return { ...this.context };
-   }
-
-   /**
-    * Get conversation history
-    */
-   getHistory(): AgentMessage[] {
-     return this.memory.getMessages();
-   }
-
-   /**
-    * Get available tools
-    */
-   getTools(): string[] {
-     return this.registry.list();
-   }
- }
-
- // Re-export types
- export type { AgentConfig, AgentContext, AgentMessage, ToolCall, ToolResult } from "./types.js";
- export type { LLMAdapter } from "./adapters/index.js";
- export { ToolRegistry } from "./tools/index.js";
- export { ConversationMemory } from "./memory/index.js";
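The `maskSensitiveArgs` helper deleted above is self-contained and worth noting on its own: before verbose logging, any string argument whose key contains a sensitive substring is truncated to its first four characters plus `****`. A runnable extraction (the surrounding agent types are omitted; the function body matches the deleted file):

```typescript
// Extraction of the deleted maskSensitiveArgs helper; only string values
// under sensitive-looking keys are masked, everything else passes through.
function maskSensitiveArgs(args: Record<string, unknown>): Record<string, unknown> {
  const sensitiveKeys = ["apiKey", "api_key", "token", "secret", "password", "key"];
  const masked: Record<string, unknown> = {};
  for (const [k, v] of Object.entries(args)) {
    if (sensitiveKeys.some((sk) => k.toLowerCase().includes(sk.toLowerCase())) && typeof v === "string") {
      masked[k] = v.slice(0, 4) + "****";
    } else {
      masked[k] = v;
    }
  }
  return masked;
}

const logged = maskSensitiveArgs({ apiKey: "sk-abcdef", prompt: "hello" });
// logged.apiKey is "sk-a****"; logged.prompt is left untouched
```

Note the substring match means any key merely containing "key" is masked, and non-string secrets (e.g. a numeric token) pass through unmasked; both are properties of the published code, not of this sketch.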
package/src/agent/memory/index.ts
@@ -1,151 +0,0 @@
- /**
-  * Conversation Memory Management
-  */
-
- import type { AgentMessage, ToolCall, ToolResult } from "../types.js";
-
- /**
-  * ConversationMemory class
-  * Manages conversation history for the agent
-  */
- export class ConversationMemory {
-   private messages: AgentMessage[] = [];
-   private maxMessages: number;
-
-   constructor(maxMessages: number = 100) {
-     this.maxMessages = maxMessages;
-   }
-
-   /**
-    * Add a system message
-    */
-   addSystem(content: string): void {
-     // Replace existing system message or add new one
-     const existingIndex = this.messages.findIndex((m) => m.role === "system");
-     if (existingIndex !== -1) {
-       this.messages[existingIndex] = { role: "system", content };
-     } else {
-       this.messages.unshift({ role: "system", content });
-     }
-   }
-
-   /**
-    * Add a user message
-    */
-   addUser(content: string): void {
-     this.messages.push({ role: "user", content });
-     this.trim();
-   }
-
-   /**
-    * Add an assistant message
-    */
-   addAssistant(content: string, toolCalls?: ToolCall[]): void {
-     this.messages.push({
-       role: "assistant",
-       content,
-       toolCalls,
-     });
-     this.trim();
-   }
-
-   /**
-    * Add a tool result message
-    */
-   addToolResult(toolCallId: string, result: ToolResult): void {
-     const content = result.success
-       ? result.output
-       : `Error: ${result.error || "Unknown error"}`;
-
-     this.messages.push({
-       role: "tool",
-       content,
-       toolCallId,
-     });
-     this.trim();
-   }
-
-   /**
-    * Get all messages
-    */
-   getMessages(): AgentMessage[] {
-     return [...this.messages];
-   }
-
-   /**
-    * Get messages for a specific role
-    */
-   getByRole(role: AgentMessage["role"]): AgentMessage[] {
-     return this.messages.filter((m) => m.role === role);
-   }
-
-   /**
-    * Get the last N messages
-    */
-   getLast(n: number): AgentMessage[] {
-     return this.messages.slice(-n);
-   }
-
-   /**
-    * Clear all messages except system
-    */
-   clear(): void {
-     const system = this.messages.find((m) => m.role === "system");
-     this.messages = system ? [system] : [];
-   }
-
-   /**
-    * Clear all messages including system
-    */
-   clearAll(): void {
-     this.messages = [];
-   }
-
-   /**
-    * Get message count
-    */
-   get length(): number {
-     return this.messages.length;
-   }
-
-   /**
-    * Trim messages to max limit (keep system + most recent)
-    */
-   private trim(): void {
-     if (this.messages.length <= this.maxMessages) return;
-
-     const system = this.messages.find((m) => m.role === "system");
-     const nonSystem = this.messages.filter((m) => m.role !== "system");
-
-     // Keep the most recent messages
-     const toKeep = nonSystem.slice(-(this.maxMessages - 1));
-     this.messages = system ? [system, ...toKeep] : toKeep;
-   }
-
-   /**
-    * Summarize conversation for context compression
-    */
-   summarize(): string {
-     const userMessages = this.getByRole("user");
-     const summary = userMessages
-       .slice(-5)
-       .map((m) => `- ${m.content}`)
-       .join("\n");
-
-     return `Recent requests:\n${summary}`;
-   }
-
-   /**
-    * Export conversation as JSON
-    */
-   toJSON(): AgentMessage[] {
-     return this.messages;
-   }
-
-   /**
-    * Import conversation from JSON
-    */
-   fromJSON(messages: AgentMessage[]): void {
-     this.messages = messages;
-   }
- }
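The trimming policy in the deleted memory class is the interesting part: once the history exceeds `maxMessages`, the system message is pinned and only the most recent `maxMessages - 1` non-system messages survive. A minimal standalone sketch (the `Msg` type and free `trim` function are simplifications of the class above, introduced here for illustration):

```typescript
// Simplified stand-ins for the deleted class's types and private trim().
type Role = "system" | "user" | "assistant" | "tool";
interface Msg { role: Role; content: string; }

// Keep the system message (if any) plus the most recent maxMessages - 1
// non-system messages; histories at or under the limit are untouched.
function trim(messages: Msg[], maxMessages: number): Msg[] {
  if (messages.length <= maxMessages) return messages;
  const system = messages.find((m) => m.role === "system");
  const nonSystem = messages.filter((m) => m.role !== "system");
  const toKeep = nonSystem.slice(-(maxMessages - 1));
  return system ? [system, ...toKeep] : toKeep;
}
```

With a five-message history and a limit of 3, this keeps the system prompt plus the last two turns, so the agent never loses its instructions no matter how long the conversation runs.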
package/src/agent/prompts/system.ts
@@ -1,106 +0,0 @@
- /**
-  * System Prompt for VibeFrame Agent
-  */
-
- import type { AgentContext } from "../types.js";
-
- export function getSystemPrompt(context: AgentContext): string {
-   const projectInfo = context.projectPath
-     ? `Current project: ${context.projectPath}`
-     : "No project loaded. Use project_create or project_open to start.";
-
-   return `You are VibeFrame, an AI video editing assistant. You help users edit videos through natural language commands.
-
- ## Current Context
- - Working directory: ${context.workingDirectory}
- - ${projectInfo}
-
- ## Your Capabilities
- You have access to tools for:
- 1. **Project Management**: Create, open, save, and modify video projects
- 2. **Timeline Editing**: Add sources, create clips, add tracks, apply effects, trim, split, move, and delete clips
- 3. **Media Analysis**: Detect scenes, silence, and beats in media files
- 4. **AI Generation**: Generate images, videos, TTS, sound effects, music, and more
- 5. **Export**: Export projects to video files
-
- ## Guidelines
-
- ### Always
- - Use tools to accomplish tasks - don't just describe what you would do
- - When working with a project, ensure it's loaded first (use project_open if needed)
- - Provide clear feedback about what you did after completing actions
- - If multiple steps are needed, execute them in sequence
- - When adding media to the timeline, first add it as a source, then create a clip from that source
-
- ### Ask for Clarification When Needed
- - If the user's request is vague or missing important details, ASK before proceeding
- - For generate_image: Ask what kind of image (subject, style, mood) if not specified
- - For generate_video: Ask about the video prompt/motion if not specified
- - For generate_speech: Ask what text to convert if not provided
- - For script-to-video: Ask for the actual script content
- - Example: "generate an image" → Ask "What kind of image would you like? (e.g., space landscape, cute robot, product photo)"
- - DON'T make up random content - the user knows what they want
-
- ### Project Workflow
- 1. Create or open a project first
- 2. Add media sources (video, audio, images)
- 3. Create clips from sources on tracks
- 4. Apply effects as needed
- 5. Export when ready
-
- ### Tool Usage Patterns
- - For "add video.mp4": Use timeline_add_source, then timeline_add_clip
- - For "trim to 10 seconds": Use timeline_trim with duration parameter
- - For "add fade out": Use timeline_add_effect with fadeOut type
- - For "generate sunset image": Use generate_image with the prompt
- - For "export video": Use export_video
-
- ### Filesystem Tools
- - **fs_list**: List files in a directory to see what's available
- - **fs_read**: Read file contents (storyboard.json, project.vibe.json, etc.)
- - **fs_write**: Write files
- - **fs_exists**: Check if a file exists
-
- ### Proactive File Reading (IMPORTANT)
- ALWAYS read relevant files before executing commands:
- - **Before regenerate-scene**: Read storyboard.json to understand scene details (visuals, narration, character description)
- - **Before modifying project**: Read project.vibe.json to see current state
- - **When user asks about scenes**: Read storyboard.json and summarize scene info
-
- This is critical because:
- 1. It helps you provide context to the user about what will be regenerated
- 2. It allows you to catch issues before running expensive AI operations
- 3. It makes your responses more informative and trustworthy
-
- ### Script-to-Video Projects
- When working with script-to-video output directories:
- - **storyboard.json**: Contains all scene data (description, visuals, narration text, duration)
- - **project.vibe.json**: The VibeFrame project file
- - **scene-N.png/mp4**: Generated images and videos for each scene
- - **narration-N.mp3**: Generated narration audio for each scene
-
- USE fs_read to examine these files when the user asks about scenes or wants to regenerate content. For example:
- - "regenerate scene 3" → First fs_read storyboard.json, tell user what scene 3 contains, then use pipeline_regenerate_scene
- - "what's in scene 2?" → fs_read storyboard.json and explain scene 2 details
-
- ### Error Recovery
- If a command fails:
- 1. Use fs_list to check if expected files exist in the directory
- 2. Use fs_read to examine file contents for issues (e.g., malformed JSON, missing fields)
- 3. Report what you found to the user with specific details
- 4. Suggest corrective actions based on what you discovered
-
- ### Response Format
- After completing tasks, summarize what was done:
- - List actions taken
- - Show relevant IDs (source-xxx, clip-xxx)
- - Mention any issues or warnings
-
- ### Export Reminder
- When you complete project editing tasks (adding clips, effects, trimming, etc.), remind the user:
- - Project file (.vibe.json) saves the edit information only
- - To create the actual video file, say "export" or "extract"
- - Example: "Project saved. To create the video file, say 'export' or 'extract'."
-
- Be concise but informative. Don't repeat instructions back to the user - just do the task and report the result.`;
- }
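The only dynamic part of the deleted prompt module is the context header: the template interpolates the working directory and branches on whether a project path is set. A sketch of just that branching (the `promptHeader` name is hypothetical; the strings match the deleted source):

```typescript
// Minimal stand-in for the AgentContext fields the prompt actually uses.
interface AgentContext {
  projectPath: string | null;
  workingDirectory: string;
}

// Build the context-dependent header of the system prompt: either report the
// loaded project or hint at the project_create / project_open tools.
function promptHeader(context: AgentContext): string {
  const projectInfo = context.projectPath
    ? `Current project: ${context.projectPath}`
    : "No project loaded. Use project_create or project_open to start.";
  return `## Current Context\n- Working directory: ${context.workingDirectory}\n- ${projectInfo}`;
}
```

Because `AgentExecutor.reset()` and `updateContext()` rebuild the prompt from the current context, this header stays accurate as the user opens or switches projects mid-conversation.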