@umituz/react-native-ai-generation-content 1.61.65 → 1.62.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@umituz/react-native-ai-generation-content",
- "version": "1.61.65",
+ "version": "1.62.2",
  "description": "Provider-agnostic AI generation orchestration for React Native with result preview components",
  "main": "src/index.ts",
  "types": "src/index.ts",
@@ -1,314 +1,29 @@
- # Face Detection Domain
+ # Face Detection & Preservation

- Face detection and analysis system for AI features.
+ AI-powered face detection and preservation for image generation tasks.

- ## 📍 Import Path
+ ## Features

- ```typescript
- import {
- detectFaces,
- analyzeFace,
- compareFaces,
- cropFace,
- checkFaceQuality,
- findMatchingFace
- } from '@umituz/react-native-ai-generation-content';
- ```
-
- **Location**: `src/domains/face-detection/`
-
- ## 🎯 Domain Purpose
-
- Comprehensive face detection and analysis capabilities for AI features. Detect faces in images, analyze facial features and attributes, support multiple faces, extract facial measurements and landmarks, and enable face matching for various AI generation features.
-
- ---
-
- ## 📋 Usage Strategy
-
- ### When to Use This Domain
-
- ✅ **Use Cases:**
- - Preparing images for face swap
- - Validating photos for AI features
- - Detecting multiple faces in group photos
- - Quality checking before processing
- - Face matching and verification
-
- ❌ **When NOT to Use:**
- - Real-time video processing (use specialized tools)
- - Surveillance or tracking (ethical concerns)
- - Biometric authentication (use security libraries)
- - Age verification for legal purposes
-
- ### Implementation Strategy
-
- 1. **Detect faces** before AI generation
- 2. **Validate quality** of detected faces
- 3. **Handle multiple faces** appropriately
- 4. **Extract landmarks** for processing
- 5. **Compare faces** when needed
- 6. **Crop faces** for focused processing
- 7. **Provide feedback** on detection results
-
- ---
-
- ## ⚠️ Critical Rules (MUST FOLLOW)
-
- ### 1. Detection Requirements
- - **MUST** detect faces before processing
- - **MUST** handle no-face scenarios
- - **MUST** support multiple faces
- - **MUST** return confidence scores
- - **MUST** provide bounding boxes
-
- ### 2. Quality Validation
- - **MUST** check face quality before use
- - **MUST** provide quality feedback
- - **MUST** handle poor quality gracefully
- - **MUST** guide users to better photos
- - **MUST** validate clarity and visibility
-
- ### 3. Multiple Face Handling
- - **MUST** detect all faces in image
- - **MUST** allow face selection
- - **MUST** provide face indexing
- - **MUST** handle face ordering
- - **MUST** support group photos
-
- ### 4. Performance
- - **MUST** optimize detection speed
- - **MUST** cache detection results
- - **MUST** handle large images
- - **MUST NOT** block main thread
- - **MUST** implement efficient algorithms
-
- ### 5. User Experience
- - **MUST** provide clear feedback
- - **MUST** explain detection failures
- - **MUST** guide users to better photos
- - **MUST** show detected faces
- - **MUST** allow face selection
-
- ---
-
- ## 🚫 Prohibitions (MUST AVOID)
-
- ### Strictly Forbidden
-
- ❌ **NEVER** do the following:
-
- 1. **No Skipping Detection**
- - Always detect before processing
- - Never assume face presence
- - Validate detection results
-
- 2. **No Poor Quality Processing**
- - Always check quality first
- - Never process poor quality faces
- - Guide users to better photos
-
- 3. **No Missing Feedback**
- - Always explain detection results
- - Never silently fail
- - Provide actionable guidance
-
- 4. **No Privacy Violations**
- - Never store faces without consent
- - Always handle data properly
- - Comply with regulations
-
- 5. **No Biased Detection**
- - Ensure fair detection across demographics
- - Test for bias regularly
- - Use diverse training data
-
- 6. **No Missing Context**
- - Always explain what was detected
- - Show face locations
- - Provide confidence scores
-
- 7. **No Blocking UI**
- - Never block on detection
- - Show progress indicators
- - Allow cancellation
-
- ---
-
- ## 🤖 AI Agent Directions
-
- ### For AI Code Generation Tools
+ - ✅ **AI-based Face Detection** - Detect faces in images using AI vision models
+ - ✅ **Face Preservation Modes** - Multiple preservation strategies (strict, balanced, minimal)
+ - ✅ **Provider-Agnostic** - Works with any AI vision provider
+ - ✅ **Image Generation Integration** - Easy integration with image-to-video, image-to-image, etc.
+ - ✅ **React Hooks** - Ready-to-use hooks for React Native

- #### Prompt Template for AI Agents
-
- ```
- You are implementing face detection using @umituz/react-native-ai-generation-content.
-
- REQUIREMENTS:
- 1. Import face detection functions
- 2. Detect faces before AI generation
- 3. Validate face quality
- 4. Handle multiple faces
- 5. Extract landmarks for processing
- 6. Provide clear user feedback
- 7. Handle no-face scenarios
- 8. Optimize for performance
-
- CRITICAL RULES:
- - MUST detect before processing
- - MUST validate face quality
- - MUST handle multiple faces
- - MUST provide clear feedback
- - MUST guide users to better photos
- - MUST handle no-face scenarios
-
- DETECTION FUNCTIONS:
- - detectFaces: Detect all faces in image
- - analyzeFace: Analyze facial features
- - compareFaces: Compare two faces
- - cropFace: Crop face from image
- - checkFaceQuality: Validate face quality
- - findMatchingFace: Find face in group photo
-
- FACE DATA:
- - boundingBox: Face location and size
- - confidence: Detection confidence (0-1)
- - landmarks: Facial feature points
- - gender: Detected gender
- - age: Estimated age range
- - emotions: Emotion probabilities
-
- QUALITY METRICS:
- - brightness: Image brightness
- - sharpness: Face clarity
- - overall: 'good' | 'fair' | 'poor'
-
- STRICTLY FORBIDDEN:
- - No skipping detection
- - No poor quality processing
- - No missing feedback
- - No privacy violations
- - No biased detection
- - No missing context
- - No blocking UI
-
- QUALITY CHECKLIST:
- - [ ] Face detection implemented
- - [ ] Quality validation added
- - [ ] Multiple faces handled
- - [ ] Clear feedback provided
- - [ ] No-face scenarios handled
- - [ ] Performance optimized
- - [ ] Privacy protected
- - [ ] Bias tested
- - [ ] User guidance provided
- - [ ] Error handling complete
- ```
-
- ---
-
- ## 🛠️ Configuration Strategy
-
- ### Detection Result
+ ## Quick Example

  ```typescript
- interface FaceDetectionResult {
- faces: DetectedFace[];
- imageWidth: number;
- imageHeight: number;
- processingTime: number;
- }
+ import { prepareImageGenerationWithFacePreservation } from "@umituz/react-native-ai-generation-content/domains/face-detection";

- interface DetectedFace {
- id: string;
- boundingBox: BoundingBox;
- confidence: number;
- landmarks?: FacialLandmarks;
- }
- ```
-
- ### Face Analysis
+ // Enhance your generation prompts with face preservation
+ const generation = prepareImageGenerationWithFacePreservation({
+ prompt: "Transform into cartoon style",
+ faceDetectionResult: faceResult,
+ preservationMode: "balanced",
+ });

- ```typescript
- interface FaceAnalysis {
- gender: 'male' | 'female' | 'unknown';
- age: {
- min: number;
- max: number;
- estimated: number;
- };
- emotions: {
- happy: number;
- sad: number;
- angry: number;
- surprised: number;
- neutral: number;
- };
- dominantEmotion: string;
- landmarks: FacialLandmarks;
- faceQuality: {
- brightness: number;
- sharpness: number;
- overall: 'good' | 'fair' | 'poor';
- };
- }
+ // Use enhanced prompt
+ await generateVideo({ prompt: generation.enhancedPrompt });
  ```

- ---
-
- ## 📊 Core Functions
-
- ### Detection
- - `detectFaces()` - Find all faces
- - `analyzeFace()` - Analyze single face
- - `checkFaceQuality()` - Validate quality
-
- ### Processing
- - `cropFace()` - Extract face
- - `findMatchingFace()` - Match in group
- - `compareFaces()` - Compare two faces
-
- ### Components
- - `FaceDetectionOverlay` - Visual overlay
- - `FaceAnalysisDisplay` - Analysis results
-
- ---
-
- ## 🎨 Best Practices
-
- ### Image Quality
- - Use high-quality, well-lit photos
- - Ensure faces are clearly visible
- - Forward-facing photos work best
- - Avoid extreme angles
-
- ### User Feedback
- - Explain why detection failed
- - Guide to better photos
- - Show what was detected
- - Provide confidence scores
-
- ### Performance
- - Cache detection results
- - Optimize image sizes
- - Implement lazy loading
- - Background processing
-
- ---
-
- ## 🐛 Common Pitfalls
-
- ❌ **No face detected**: Guide user to better photo
- ❌ **Poor quality**: Check quality first
- ❌ **Multiple faces**: Allow face selection
- ❌ **Slow detection**: Optimize, cache results
-
- ---
-
- ## 📚 Related Features
-
- - [Face Swap](../../features/face-swap) - Swap faces between images
-
- ---
-
- **Last Updated**: 2025-01-08
- **Version**: 2.0.0 (Strategy-based Documentation)
+ See [index.ts](./index.ts) for complete API.
@@ -0,0 +1,50 @@
+ /**
+ * Face Preservation Types
+ * Enhanced face detection with preservation strategies
+ */
+
+ export type FacePreservationMode = "strict" | "balanced" | "minimal" | "none";
+
+ export interface FaceMetadata {
+ readonly hasFace: boolean;
+ readonly confidence: number;
+ readonly position?: {
+ readonly x: number;
+ readonly y: number;
+ readonly width: number;
+ readonly height: number;
+ };
+ readonly features?: {
+ readonly eyes: boolean;
+ readonly nose: boolean;
+ readonly mouth: boolean;
+ };
+ readonly quality?: "high" | "medium" | "low";
+ }
+
+ export interface FacePreservationConfig {
+ readonly mode: FacePreservationMode;
+ readonly minConfidence: number;
+ readonly requireFullFace?: boolean;
+ readonly allowMultipleFaces?: boolean;
+ }
+
+ export interface FacePreservationPrompt {
+ readonly basePrompt: string;
+ readonly faceGuidance: string;
+ readonly preservationWeight: number;
+ }
+
+ export const DEFAULT_PRESERVATION_CONFIG: FacePreservationConfig = {
+ mode: "balanced",
+ minConfidence: 0.5,
+ requireFullFace: false,
+ allowMultipleFaces: true,
+ };
+
+ export const PRESERVATION_MODE_WEIGHTS: Record<FacePreservationMode, number> = {
+ strict: 1.0,
+ balanced: 0.7,
+ minimal: 0.4,
+ none: 0.0,
+ };
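
The new types module pairs each preservation mode with a numeric guidance weight and a default config. A minimal standalone sketch of how a consumer might resolve a weight — the type and constants below are copied from the diff above; the lookup at the end is illustrative and not part of the package:

```typescript
// Copied from face-preservation.types in the diff above.
type FacePreservationMode = "strict" | "balanced" | "minimal" | "none";

const PRESERVATION_MODE_WEIGHTS: Record<FacePreservationMode, number> = {
  strict: 1.0,
  balanced: 0.7,
  minimal: 0.4,
  none: 0.0,
};

const DEFAULT_PRESERVATION_CONFIG = {
  mode: "balanced" as FacePreservationMode,
  minConfidence: 0.5,
  requireFullFace: false,
  allowMultipleFaces: true,
};

// Resolve the guidance weight for the default mode.
const weight = PRESERVATION_MODE_WEIGHTS[DEFAULT_PRESERVATION_CONFIG.mode];
console.log(weight); // 0.7
```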
@@ -1,17 +1,33 @@
  /**
- * React Native AI Face Detection - Public API
+ * React Native AI Face Detection & Preservation - Public API
  *
- * AI-powered face detection for React Native apps
+ * AI-powered face detection and preservation for image generation
  */

+ // Domain - Entities
  export type {
  FaceDetectionResult,
  FaceValidationState,
  FaceDetectionConfig,
  } from "./domain/entities/FaceDetection";

+ // Domain - Face Preservation Types
+ export type {
+ FacePreservationMode,
+ FaceMetadata,
+ FacePreservationConfig,
+ FacePreservationPrompt,
+ } from "./domain/types/face-preservation.types";
+
+ export {
+ DEFAULT_PRESERVATION_CONFIG,
+ PRESERVATION_MODE_WEIGHTS,
+ } from "./domain/types/face-preservation.types";
+
+ // Domain - Constants
  export { FACE_DETECTION_CONFIG, FACE_DETECTION_PROMPTS } from "./domain/constants/faceDetectionConstants";

+ // Infrastructure - Validators
  export {
  isValidFace,
  parseDetectionResponse,
@@ -19,9 +35,31 @@ export {
  createSuccessResult,
  } from "./infrastructure/validators/faceValidator";

+ // Infrastructure - Analyzers
  export { analyzeImageForFace } from "./infrastructure/analyzers/faceAnalyzer";
+ export type { AIAnalyzerFunction } from "./infrastructure/analyzers/faceAnalyzer";
+
+ // Infrastructure - Face Preservation Builders
+ export {
+ buildFacePreservationPrompt,
+ combineFacePreservationPrompt,
+ getFacePreservationWeight,
+ } from "./infrastructure/builders/facePreservationPromptBuilder";
+ export type { BuildPreservationPromptOptions } from "./infrastructure/builders/facePreservationPromptBuilder";
+
+ // Infrastructure - Image Generation Integration
+ export {
+ prepareImageGenerationWithFacePreservation,
+ shouldEnableFacePreservation,
+ } from "./infrastructure/integration/imageGenerationIntegration";
+ export type {
+ ImageGenerationWithFacePreservation,
+ PrepareImageGenerationOptions,
+ } from "./infrastructure/integration/imageGenerationIntegration";

+ // Presentation - Hooks
  export { useFaceDetection } from "./presentation/hooks/useFaceDetection";

+ // Presentation - Components
  export { FaceValidationStatus } from "./presentation/components/FaceValidationStatus";
  export { FaceDetectionToggle } from "./presentation/components/FaceDetectionToggle";
@@ -0,0 +1,66 @@
+ /**
+ * Face Preservation Prompt Builder
+ * Builds prompts that preserve facial features in image generation
+ */
+
+ import type {
+ FacePreservationMode,
+ FacePreservationPrompt,
+ FaceMetadata,
+ } from "../../domain/types/face-preservation.types";
+ import { PRESERVATION_MODE_WEIGHTS } from "../../domain/types/face-preservation.types";
+
+ const PRESERVATION_PROMPTS: Record<FacePreservationMode, string> = {
+ strict: "Preserve the exact facial features, expressions, and details from the original image. Maintain face structure, skin tone, eyes, nose, and mouth precisely.",
+ balanced: "Keep the main facial features recognizable while allowing natural variations. Preserve overall face structure and key characteristics.",
+ minimal: "Maintain basic facial proportions and general appearance while allowing creative interpretation.",
+ none: "",
+ };
+
+ export interface BuildPreservationPromptOptions {
+ readonly mode: FacePreservationMode;
+ readonly basePrompt: string;
+ readonly faceMetadata?: FaceMetadata;
+ readonly customGuidance?: string;
+ }
+
+ export function buildFacePreservationPrompt(
+ options: BuildPreservationPromptOptions,
+ ): FacePreservationPrompt {
+ const { mode, basePrompt, faceMetadata, customGuidance } = options;
+
+ if (mode === "none" || !faceMetadata?.hasFace) {
+ return {
+ basePrompt,
+ faceGuidance: "",
+ preservationWeight: 0,
+ };
+ }
+
+ const faceGuidance = customGuidance || PRESERVATION_PROMPTS[mode];
+ const preservationWeight = PRESERVATION_MODE_WEIGHTS[mode];
+
+ const qualityNote = faceMetadata?.quality === "high"
+ ? " High quality facial details detected."
+ : "";
+
+ const enhancedGuidance = `${faceGuidance}${qualityNote}`;
+
+ return {
+ basePrompt,
+ faceGuidance: enhancedGuidance,
+ preservationWeight,
+ };
+ }
+
+ export function combineFacePreservationPrompt(prompt: FacePreservationPrompt): string {
+ if (!prompt.faceGuidance) {
+ return prompt.basePrompt;
+ }
+
+ return `${prompt.basePrompt}\n\nFace Preservation: ${prompt.faceGuidance}`;
+ }
+
+ export function getFacePreservationWeight(mode: FacePreservationMode): number {
+ return PRESERVATION_MODE_WEIGHTS[mode];
+ }
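
The combine step above either passes the base prompt through untouched or appends the guidance after a blank line. A small self-contained sketch of that behavior, with the function body copied from the diff (the sample prompts are made up for illustration):

```typescript
// Trimmed copy of the interface and combine logic added in the diff above.
interface FacePreservationPrompt {
  readonly basePrompt: string;
  readonly faceGuidance: string;
  readonly preservationWeight: number;
}

function combineFacePreservationPrompt(prompt: FacePreservationPrompt): string {
  // Empty guidance (mode "none", or no face detected) leaves the prompt unchanged.
  if (!prompt.faceGuidance) {
    return prompt.basePrompt;
  }
  return `${prompt.basePrompt}\n\nFace Preservation: ${prompt.faceGuidance}`;
}

// No guidance: base prompt passes through unchanged.
const passthrough = combineFacePreservationPrompt({
  basePrompt: "Transform into cartoon style",
  faceGuidance: "",
  preservationWeight: 0,
});

// With guidance: the preservation text is appended after a blank line.
const combined = combineFacePreservationPrompt({
  basePrompt: "Transform into cartoon style",
  faceGuidance: "Keep the main facial features recognizable.",
  preservationWeight: 0.7,
});
```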
@@ -0,0 +1,85 @@
+ /**
+ * Image Generation Integration
+ * Utilities for integrating face preservation with image generation flows
+ */
+
+ import type { FaceDetectionResult } from "../../domain/entities/FaceDetection";
+ import type {
+ FacePreservationMode,
+ FaceMetadata,
+ FacePreservationConfig,
+ } from "../../domain/types/face-preservation.types";
+ import {
+ buildFacePreservationPrompt,
+ combineFacePreservationPrompt,
+ } from "../builders/facePreservationPromptBuilder";
+
+ export interface ImageGenerationWithFacePreservation {
+ readonly originalPrompt: string;
+ readonly enhancedPrompt: string;
+ readonly preservationMode: FacePreservationMode;
+ readonly faceDetected: boolean;
+ readonly shouldPreserveFace: boolean;
+ }
+
+ export interface PrepareImageGenerationOptions {
+ readonly prompt: string;
+ readonly faceDetectionResult?: FaceDetectionResult;
+ readonly preservationMode?: FacePreservationMode;
+ readonly config?: Partial<FacePreservationConfig>;
+ }
+
+ export function prepareImageGenerationWithFacePreservation(
+ options: PrepareImageGenerationOptions,
+ ): ImageGenerationWithFacePreservation {
+ const {
+ prompt,
+ faceDetectionResult,
+ preservationMode = "balanced",
+ config,
+ } = options;
+
+ const faceDetected = faceDetectionResult?.hasFace ?? false;
+ const minConfidence = config?.minConfidence ?? 0.5;
+ const meetsConfidence = (faceDetectionResult?.confidence ?? 0) >= minConfidence;
+ const shouldPreserveFace = faceDetected && meetsConfidence && preservationMode !== "none";
+
+ if (!shouldPreserveFace) {
+ return {
+ originalPrompt: prompt,
+ enhancedPrompt: prompt,
+ preservationMode: "none",
+ faceDetected,
+ shouldPreserveFace: false,
+ };
+ }
+
+ const faceMetadata: FaceMetadata = {
+ hasFace: faceDetected,
+ confidence: faceDetectionResult?.confidence ?? 0,
+ };
+
+ const preservationPrompt = buildFacePreservationPrompt({
+ mode: preservationMode,
+ basePrompt: prompt,
+ faceMetadata,
+ });
+
+ const enhancedPrompt = combineFacePreservationPrompt(preservationPrompt);
+
+ return {
+ originalPrompt: prompt,
+ enhancedPrompt,
+ preservationMode,
+ faceDetected,
+ shouldPreserveFace: true,
+ };
+ }
+
+ export function shouldEnableFacePreservation(
+ faceDetectionResult?: FaceDetectionResult,
+ minConfidence: number = 0.5,
+ ): boolean {
+ if (!faceDetectionResult) return false;
+ return faceDetectionResult.hasFace && faceDetectionResult.confidence >= minConfidence;
+ }
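
The gating logic above enables preservation only when a face was detected with at least the minimum confidence. A self-contained sketch of that check — the function body is copied from the diff; the two-field `FaceDetectionResult` shape is a trimmed-down assumption (the real entity lives in `domain/entities/FaceDetection` and may carry more fields):

```typescript
// Assumed minimal shape; only these two fields are read by the check.
interface FaceDetectionResult {
  hasFace: boolean;
  confidence: number;
}

// Copied from imageGenerationIntegration in the diff above.
function shouldEnableFacePreservation(
  faceDetectionResult?: FaceDetectionResult,
  minConfidence: number = 0.5,
): boolean {
  if (!faceDetectionResult) return false;
  return faceDetectionResult.hasFace && faceDetectionResult.confidence >= minConfidence;
}

const confident = shouldEnableFacePreservation({ hasFace: true, confidence: 0.9 }); // true
const tooLow = shouldEnableFacePreservation({ hasFace: true, confidence: 0.3 });    // false
const missing = shouldEnableFacePreservation(undefined);                            // false
```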
@@ -1,5 +1,2 @@
  // Types
  export * from "./types";
-
- // Constants
- export * from "./constants";
@@ -9,7 +9,6 @@ import type { ImageToVideoFormState } from "./form.types";
  export interface ImageToVideoCallbacks {
  onGenerate: (formState: ImageToVideoFormState) => Promise<void>;
  onSelectImages?: () => Promise<string[]>;
- onSelectCustomAudio?: () => Promise<string | null>;
  onCreditCheck?: (cost: number) => boolean;
  onShowPaywall?: (cost: number) => void;
  onSuccess?: (result: ImageToVideoResult) => void;
@@ -19,7 +18,6 @@ export interface ImageToVideoCallbacks {
  export interface ImageToVideoFormConfig {
  maxImages?: number;
  creditCost?: number;
- enableCustomAudio?: boolean;
  enableMotionPrompt?: boolean;
  }

@@ -28,7 +26,6 @@ export interface ImageToVideoTranslationsExtended {
  selectedImages: string;
  animationStyle: string;
  durationPerImage: string;
- addMusic: string;
  };
  imageSelection: {
  selectImages: string;
@@ -38,9 +35,6 @@ export interface ImageToVideoTranslationsExtended {
  duration: {
  totalVideo: string;
  };
- music: {
- customAudioSelected: string;
- };
  hero: {
  title: string;
  subtitle: string;
@@ -4,15 +4,12 @@
  */

  import type { AnimationStyleId } from "./animation.types";
- import type { MusicMoodId } from "./music.types";
  import type { VideoDuration } from "./duration.types";

  export interface ImageToVideoFormState {
  selectedImages: string[];
  animationStyle: AnimationStyleId;
  duration: VideoDuration;
- musicMood: MusicMoodId;
- customAudioUri: string | null;
  motionPrompt: string;
  }

@@ -22,8 +19,6 @@ export interface ImageToVideoFormActions {
  removeImage: (index: number) => void;
  setAnimationStyle: (style: AnimationStyleId) => void;
  setDuration: (duration: VideoDuration) => void;
- setMusicMood: (mood: MusicMoodId) => void;
- setCustomAudioUri: (uri: string | null) => void;
  setMotionPrompt: (prompt: string) => void;
  reset: () => void;
  }
@@ -31,5 +26,4 @@ export interface ImageToVideoFormActions {
  export interface ImageToVideoFormDefaults {
  animationStyle?: AnimationStyleId;
  duration?: VideoDuration;
- musicMood?: MusicMoodId;
  }
@@ -4,7 +4,6 @@
  */

  import type { AnimationStyleId } from "./animation.types";
- import type { MusicMoodId } from "./music.types";
  import type { VideoDuration } from "./duration.types";

  export interface ImageToVideoOptions {
@@ -13,7 +12,6 @@ export interface ImageToVideoOptions {
  aspectRatio?: "16:9" | "9:16" | "1:1";
  fps?: number;
  animationStyle?: AnimationStyleId;
- musicMood?: MusicMoodId;
  }

  export interface ImageToVideoGenerateParams extends ImageToVideoOptions {
@@ -28,10 +26,8 @@ export interface ImageToVideoRequest {
  motionPrompt?: string;
  options?: ImageToVideoOptions;
  allImages?: string[];
- customAudioUri?: string | null;
  animationStyle?: AnimationStyleId;
  duration?: VideoDuration;
- musicMood?: MusicMoodId;
  model?: string;
  }

@@ -44,19 +44,6 @@ export type {
  ImageToVideoFeatureConfig,
  } from "./domain";

- // =============================================================================
- // DOMAIN LAYER - Constants
- // =============================================================================
-
- export {
- DEFAULT_ANIMATION_STYLES as IMAGE_TO_VIDEO_ANIMATION_STYLES,
- DEFAULT_ANIMATION_STYLE_ID as IMAGE_TO_VIDEO_DEFAULT_ANIMATION,
- DEFAULT_DURATION_OPTIONS as IMAGE_TO_VIDEO_DURATION_OPTIONS,
- DEFAULT_VIDEO_DURATION as IMAGE_TO_VIDEO_DEFAULT_DURATION,
- DEFAULT_FORM_VALUES as IMAGE_TO_VIDEO_FORM_DEFAULTS,
- DEFAULT_FORM_CONFIG as IMAGE_TO_VIDEO_CONFIG,
- } from "./domain";
-
  // =============================================================================
  // INFRASTRUCTURE LAYER
  // =============================================================================
@@ -9,17 +9,11 @@ import type {
  ImageToVideoFormActions,
  ImageToVideoFormDefaults,
  AnimationStyleId,
- MusicMoodId,
  VideoDuration,
  } from "../../domain/types";
- import {
- DEFAULT_ANIMATION_STYLE_ID,
- DEFAULT_MUSIC_MOOD_ID,
- DEFAULT_VIDEO_DURATION,
- } from "../../domain/constants";

  export interface UseFormStateOptions {
- defaults?: ImageToVideoFormDefaults;
+ defaults: ImageToVideoFormDefaults;
  }

  export interface UseFormStateReturn {
@@ -27,19 +21,17 @@ export interface UseFormStateReturn {
  actions: ImageToVideoFormActions;
  }

- function createInitialState(defaults?: ImageToVideoFormDefaults): ImageToVideoFormState {
+ function createInitialState(defaults: ImageToVideoFormDefaults): ImageToVideoFormState {
  return {
  selectedImages: [],
- animationStyle: defaults?.animationStyle ?? DEFAULT_ANIMATION_STYLE_ID,
- duration: defaults?.duration ?? DEFAULT_VIDEO_DURATION,
- musicMood: defaults?.musicMood ?? DEFAULT_MUSIC_MOOD_ID,
- customAudioUri: null,
+ animationStyle: defaults.animationStyle,
+ duration: defaults.duration,
  motionPrompt: "",
  };
  }

- export function useFormState(options?: UseFormStateOptions): UseFormStateReturn {
- const { defaults } = options ?? {};
+ export function useFormState(options: UseFormStateOptions): UseFormStateReturn {
+ const { defaults } = options;

  const [state, setState] = useState<ImageToVideoFormState>(() =>
  createInitialState(defaults)
@@ -71,14 +63,6 @@ export function useFormState(options?: UseFormStateOptions): UseFormStateReturn
  setState((prev) => ({ ...prev, duration }));
  }, []);

- const setMusicMood = useCallback((mood: MusicMoodId) => {
- setState((prev) => ({ ...prev, musicMood: mood }));
- }, []);
-
- const setCustomAudioUri = useCallback((uri: string | null) => {
- setState((prev) => ({ ...prev, customAudioUri: uri }));
- }, []);
-
  const setMotionPrompt = useCallback((prompt: string) => {
  setState((prev) => ({ ...prev, motionPrompt: prompt }));
  }, []);
@@ -94,8 +78,6 @@ export function useFormState(options?: UseFormStateOptions): UseFormStateReturn
  removeImage,
  setAnimationStyle,
  setDuration,
- setMusicMood,
- setCustomAudioUri,
  setMotionPrompt,
  reset,
  }),
@@ -105,8 +87,6 @@ export function useFormState(options?: UseFormStateOptions): UseFormStateReturn
  removeImage,
  setAnimationStyle,
  setDuration,
- setMusicMood,
- setCustomAudioUri,
  setMotionPrompt,
  reset,
  ]
@@ -13,7 +13,6 @@ import type {
  ImageToVideoFormActions,
  ImageToVideoGenerationState,
  ImageToVideoCallbacks,
- MusicMoodId,
  } from "../../domain/types";

  export interface UseImageToVideoFormOptions extends UseFormStateOptions {
@@ -25,7 +24,6 @@ export interface UseImageToVideoFormReturn {
  actions: ImageToVideoFormActions;
  generationState: ImageToVideoGenerationState;
  handleGenerate: () => Promise<void>;
- handleMusicSelect: (moodId: MusicMoodId) => void;
  handleSelectImages: () => Promise<void>;
  isReady: boolean;
  }
@@ -42,25 +40,6 @@ export function useImageToVideoForm(
  callbacks,
  });

- const handleMusicSelect = useCallback(
- (moodId: MusicMoodId) => {
- if (moodId === "custom" && callbacks.onSelectCustomAudio) {
- callbacks.onSelectCustomAudio().then((uri) => {
- if (uri) {
- actions.setCustomAudioUri(uri);
- actions.setMusicMood("custom");
- }
- });
- } else {
- actions.setMusicMood(moodId);
- if (moodId !== "custom") {
- actions.setCustomAudioUri(null);
- }
- }
- },
- [callbacks, actions]
- );
-
  const handleSelectImages = useCallback(async () => {
  if (__DEV__) {

@@ -100,7 +79,6 @@ export function useImageToVideoForm(
  actions,
  generationState,
  handleGenerate,
- handleMusicSelect,
  handleSelectImages,
  isReady,
  }),
@@ -109,7 +87,6 @@ export function useImageToVideoForm(
  actions,
  generationState,
  handleGenerate,
- handleMusicSelect,
  handleSelectImages,
  isReady,
  ]
@@ -72,9 +72,6 @@ export type {
  ImageToVideoSelectionGridTranslations, ImageToVideoGenerateButtonProps,
  } from "../domains/image-to-video";
  export {
- IMAGE_TO_VIDEO_ANIMATION_STYLES, IMAGE_TO_VIDEO_DEFAULT_ANIMATION,
- IMAGE_TO_VIDEO_DURATION_OPTIONS, IMAGE_TO_VIDEO_DEFAULT_DURATION,
- IMAGE_TO_VIDEO_FORM_DEFAULTS, IMAGE_TO_VIDEO_CONFIG,
  executeImageToVideo, hasImageToVideoSupport,
  useImageToVideoFormState, useImageToVideoGeneration, useImageToVideoForm, useImageToVideoFeature,
  ImageToVideoAnimationStyleSelector, ImageToVideoDurationSelector,
@@ -1,47 +0,0 @@
- /**
- * Animation Style Constants
- * Default animation styles for image-to-video
- */
-
- import type { AnimationStyle } from "../types";
-
- export const DEFAULT_ANIMATION_STYLES: AnimationStyle[] = [
- {
- id: "ken_burns",
- name: "Ken Burns",
- description: "Slow zoom and pan effect",
- icon: "expand-outline",
- },
- {
- id: "zoom_in",
- name: "Zoom In",
- description: "Gradual zoom in effect",
- icon: "add-circle-outline",
- },
- {
- id: "zoom_out",
- name: "Zoom Out",
- description: "Gradual zoom out effect",
- icon: "remove-circle-outline",
- },
- {
- id: "slide_left",
- name: "Slide Left",
- description: "Pan from right to left",
- icon: "arrow-back-outline",
- },
- {
- id: "slide_right",
- name: "Slide Right",
- description: "Pan from left to right",
- icon: "arrow-forward-outline",
- },
- {
- id: "parallax",
- name: "Parallax",
- description: "3D depth effect",
- icon: "layers-outline",
- },
- ];
-
- export const DEFAULT_ANIMATION_STYLE_ID = "ken_burns";
@@ -1,12 +0,0 @@
- /**
- * Duration Constants
- * Fixed to 4 seconds for video generation
- */
-
- import type { VideoDuration, DurationOption } from "../types";
-
- export const DEFAULT_DURATION_OPTIONS: DurationOption[] = [
- { value: 4, label: "4s" },
- ];
-
- export const DEFAULT_VIDEO_DURATION: VideoDuration = 4;
@@ -1,22 +0,0 @@
- /**
- * Form Constants
- * Default form values for image-to-video
- */
-
- import type { ImageToVideoFormDefaults, ImageToVideoFormConfig } from "../types";
- import { DEFAULT_ANIMATION_STYLE_ID } from "./animation.constants";
- import { DEFAULT_MUSIC_MOOD_ID } from "./music.constants";
- import { DEFAULT_VIDEO_DURATION } from "./duration.constants";
-
- export const DEFAULT_FORM_VALUES: ImageToVideoFormDefaults = {
- animationStyle: DEFAULT_ANIMATION_STYLE_ID,
- duration: DEFAULT_VIDEO_DURATION,
- musicMood: DEFAULT_MUSIC_MOOD_ID,
- };
-
- export const DEFAULT_FORM_CONFIG: ImageToVideoFormConfig = {
- maxImages: 10,
- creditCost: 1,
- enableCustomAudio: true,
- enableMotionPrompt: false,
- };
@@ -1,18 +0,0 @@
- /**
- * Image-to-Video Constants Index
- */
-
- export {
- DEFAULT_ANIMATION_STYLES,
- DEFAULT_ANIMATION_STYLE_ID,
- } from "./animation.constants";
-
- export {
- DEFAULT_DURATION_OPTIONS,
- DEFAULT_VIDEO_DURATION,
- } from "./duration.constants";
-
- export {
- DEFAULT_FORM_VALUES,
- DEFAULT_FORM_CONFIG,
- } from "./form.constants";