magic_hour-0.39.0-py3-none-any.whl → magic_hour-0.40.0-py3-none-any.whl
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- magic_hour/README.md +2 -0
- magic_hour/environment.py +1 -1
- magic_hour/resources/v1/README.md +2 -0
- magic_hour/resources/v1/ai_clothes_changer/README.md +1 -0
- magic_hour/resources/v1/ai_face_editor/README.md +1 -0
- magic_hour/resources/v1/ai_gif_generator/README.md +1 -0
- magic_hour/resources/v1/ai_headshot_generator/README.md +1 -0
- magic_hour/resources/v1/ai_image_editor/README.md +1 -0
- magic_hour/resources/v1/ai_image_generator/README.md +1 -0
- magic_hour/resources/v1/ai_image_upscaler/README.md +1 -0
- magic_hour/resources/v1/ai_meme_generator/README.md +1 -0
- magic_hour/resources/v1/ai_photo_editor/README.md +1 -0
- magic_hour/resources/v1/ai_qr_code_generator/README.md +1 -0
- magic_hour/resources/v1/ai_talking_photo/README.md +1 -0
- magic_hour/resources/v1/ai_voice_generator/README.md +56 -0
- magic_hour/resources/v1/ai_voice_generator/__init__.py +4 -0
- magic_hour/resources/v1/ai_voice_generator/client.py +119 -0
- magic_hour/resources/v1/animation/README.md +1 -0
- magic_hour/resources/v1/audio_projects/README.md +90 -0
- magic_hour/resources/v1/audio_projects/__init__.py +4 -0
- magic_hour/resources/v1/audio_projects/client.py +173 -0
- magic_hour/resources/v1/auto_subtitle_generator/README.md +1 -0
- magic_hour/resources/v1/client.py +14 -0
- magic_hour/resources/v1/face_detection/README.md +1 -0
- magic_hour/resources/v1/face_swap/README.md +1 -0
- magic_hour/resources/v1/face_swap_photo/README.md +1 -0
- magic_hour/resources/v1/files/README.md +1 -0
- magic_hour/resources/v1/image_background_remover/README.md +1 -0
- magic_hour/resources/v1/image_projects/README.md +1 -0
- magic_hour/resources/v1/image_to_video/README.md +1 -0
- magic_hour/resources/v1/lip_sync/README.md +1 -0
- magic_hour/resources/v1/photo_colorizer/README.md +1 -0
- magic_hour/resources/v1/text_to_video/README.md +1 -0
- magic_hour/resources/v1/video_projects/README.md +1 -0
- magic_hour/resources/v1/video_to_video/README.md +6 -7
- magic_hour/resources/v1/video_to_video/client.py +0 -2
- magic_hour/types/models/__init__.py +10 -0
- magic_hour/types/models/v1_ai_voice_generator_create_response.py +27 -0
- magic_hour/types/models/v1_audio_projects_get_response.py +72 -0
- magic_hour/types/models/v1_audio_projects_get_response_downloads_item.py +19 -0
- magic_hour/types/models/v1_audio_projects_get_response_error.py +25 -0
- magic_hour/types/params/__init__.py +12 -0
- magic_hour/types/params/v1_ai_voice_generator_create_body.py +40 -0
- magic_hour/types/params/v1_ai_voice_generator_create_body_style.py +60 -0
- magic_hour/types/params/v1_video_to_video_create_body_style.py +19 -19
- {magic_hour-0.39.0.dist-info → magic_hour-0.40.0.dist-info}/METADATA +11 -1
- {magic_hour-0.39.0.dist-info → magic_hour-0.40.0.dist-info}/RECORD +49 -37
- {magic_hour-0.39.0.dist-info → magic_hour-0.40.0.dist-info}/LICENSE +0 -0
- {magic_hour-0.39.0.dist-info → magic_hour-0.40.0.dist-info}/WHEEL +0 -0
--- a/magic_hour/resources/v1/video_to_video/README.md
+++ b/magic_hour/resources/v1/video_to_video/README.md
@@ -3,6 +3,7 @@
 ## Module Functions
 
 
+
 <!-- CUSTOM DOCS START -->
 
 ### Video To Video Generate Workflow <a name="generate"></a>
@@ -97,12 +98,12 @@ Get more information about this mode at our [product page](https://magichour.ai/
 | `└─ youtube_url` | ✗ | — | Using a youtube video as the input source. This field is required if `video_source` is `youtube` | `"http://www.example.com"` |
 | `end_seconds` | ✓ | ✗ | The end time of the input video in seconds. This value is used to trim the input video. The value must be greater than 0.1, and more than the start_seconds. | `15.0` |
 | `start_seconds` | ✓ | ✗ | The start time of the input video in seconds. This value is used to trim the input video. The value must be greater than 0. | `0.0` |
-| `style` | ✓ | ✗ | | `{"art_style": "3D Render", "model": "default", "
+| `style` | ✓ | ✗ | | `{"art_style": "3D Render", "model": "default", "prompt_type": "default", "version": "default"}` |
 | `└─ art_style` | ✓ | — | | `"3D Render"` |
-| `└─ model` |
-| `└─ prompt` |
-| `└─ prompt_type` |
-| `└─ version` |
+| `└─ model` | ✗ | — | * `Dreamshaper` - a good all-around model that works for both animations as well as realism. * `Absolute Reality` - better at realism, but you'll often get similar results with Dreamshaper as well. * `Flat 2D Anime` - best for a flat illustration style that's common in most anime. * `default` - use the default recommended model for the selected art style. | `"default"` |
+| `└─ prompt` | ✗ | — | The prompt used for the video. Prompt is required if `prompt_type` is `custom` or `append_default`. If `prompt_type` is `default`, then the `prompt` value passed will be ignored. | `"string"` |
+| `└─ prompt_type` | ✗ | — | * `default` - Use the default recommended prompt for the art style. * `custom` - Only use the prompt passed in the API. Note: for v1, lora prompt will still be auto added to apply the art style properly. * `append_default` - Add the default recommended prompt to the end of the prompt passed in the API. | `"default"` |
+| `└─ version` | ✗ | — | * `v1` - more detail, closer prompt adherence, and frame-by-frame previews. * `v2` - faster, more consistent, and less noisy. * `default` - use the default version for the selected art style. | `"default"` |
 | `fps_resolution` | ✗ | ✗ | Determines whether the resulting video will have the same frame per second as the original video, or half. * `FULL` - the result video will have the same FPS as the input video * `HALF` - the result video will have half the FPS as the input video | `"HALF"` |
 | `height` | ✗ | ✓ | `height` is deprecated and no longer influences the output video's resolution. Output resolution is determined by the **minimum** of: - The resolution of the input video - The maximum resolution allowed by your subscription tier. See our [pricing page](https://magichour.ai/pricing) for more details. This field is retained only for backward compatibility and will be removed in a future release. | `123` |
 | `name` | ✗ | ✗ | The name of video. This value is mainly used for your own identification of the video. | `"Video To Video video"` |
@@ -122,7 +123,6 @@ res = client.v1.video_to_video.create(
     style={
         "art_style": "3D Render",
         "model": "default",
-        "prompt": "string",
         "prompt_type": "default",
         "version": "default",
     },
@@ -146,7 +146,6 @@ res = await client.v1.video_to_video.create(
     style={
         "art_style": "3D Render",
         "model": "default",
-        "prompt": "string",
         "prompt_type": "default",
         "version": "default",
     },

--- a/magic_hour/resources/v1/video_to_video/client.py
+++ b/magic_hour/resources/v1/video_to_video/client.py
@@ -206,7 +206,6 @@ class VideoToVideoClient:
         style={
             "art_style": "3D Render",
             "model": "default",
-            "prompt": "string",
             "prompt_type": "default",
             "version": "default",
         },
@@ -425,7 +424,6 @@ class AsyncVideoToVideoClient:
         style={
             "art_style": "3D Render",
             "model": "default",
-            "prompt": "string",
             "prompt_type": "default",
             "version": "default",
         },
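The dropped `"prompt": "string"` lines in the examples reflect the documented contract: `prompt` only matters when `prompt_type` is `custom` or `append_default`, and is ignored when it is `default`. A minimal sketch of a client-side pre-check (a hypothetical helper, not part of the SDK):

```python
def check_style_prompt(style: dict) -> dict:
    """Enforce the documented contract: `prompt` is required when
    `prompt_type` is "custom" or "append_default", and ignored when
    `prompt_type` is "default"."""
    prompt_type = style.get("prompt_type", "default")
    if prompt_type in ("custom", "append_default") and not style.get("prompt"):
        raise ValueError(f"prompt is required when prompt_type={prompt_type!r}")
    return style


# The trimmed README example validates without a prompt:
check_style_prompt(
    {
        "art_style": "3D Render",
        "model": "default",
        "prompt_type": "default",
        "version": "default",
    }
)
```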
--- a/magic_hour/types/models/__init__.py
+++ b/magic_hour/types/models/__init__.py
@@ -11,7 +11,13 @@ from .v1_ai_meme_generator_create_response import V1AiMemeGeneratorCreateRespons
 from .v1_ai_photo_editor_create_response import V1AiPhotoEditorCreateResponse
 from .v1_ai_qr_code_generator_create_response import V1AiQrCodeGeneratorCreateResponse
 from .v1_ai_talking_photo_create_response import V1AiTalkingPhotoCreateResponse
+from .v1_ai_voice_generator_create_response import V1AiVoiceGeneratorCreateResponse
 from .v1_animation_create_response import V1AnimationCreateResponse
+from .v1_audio_projects_get_response import V1AudioProjectsGetResponse
+from .v1_audio_projects_get_response_downloads_item import (
+    V1AudioProjectsGetResponseDownloadsItem,
+)
+from .v1_audio_projects_get_response_error import V1AudioProjectsGetResponseError
 from .v1_auto_subtitle_generator_create_response import (
     V1AutoSubtitleGeneratorCreateResponse,
 )
@@ -59,7 +65,11 @@ __all__ = [
     "V1AiPhotoEditorCreateResponse",
     "V1AiQrCodeGeneratorCreateResponse",
     "V1AiTalkingPhotoCreateResponse",
+    "V1AiVoiceGeneratorCreateResponse",
     "V1AnimationCreateResponse",
+    "V1AudioProjectsGetResponse",
+    "V1AudioProjectsGetResponseDownloadsItem",
+    "V1AudioProjectsGetResponseError",
     "V1AutoSubtitleGeneratorCreateResponse",
     "V1FaceDetectionCreateResponse",
     "V1FaceDetectionGetResponse",
--- /dev/null
+++ b/magic_hour/types/models/v1_ai_voice_generator_create_response.py
@@ -0,0 +1,27 @@
+import pydantic
+
+
+class V1AiVoiceGeneratorCreateResponse(pydantic.BaseModel):
+    """
+    Success
+    """
+
+    model_config = pydantic.ConfigDict(
+        arbitrary_types_allowed=True,
+        populate_by_name=True,
+    )
+
+    credits_charged: int = pydantic.Field(
+        alias="credits_charged",
+    )
+    """
+    The amount of credits deducted from your account to generate the audio. We charge credits right when the request is made.
+
+    If an error occurred while generating the audio, credits will be refunded and this field will be updated to include the refund.
+    """
+    id: str = pydantic.Field(
+        alias="id",
+    )
+    """
+    Unique ID of the audio. This value can be used in the [get audio project API](https://docs.magichour.ai/api-reference/audio-projects/get-audio-details) to fetch additional details such as status
+    """
--- /dev/null
+++ b/magic_hour/types/models/v1_audio_projects_get_response.py
@@ -0,0 +1,72 @@
+import pydantic
+import typing
+import typing_extensions
+
+from .v1_audio_projects_get_response_downloads_item import (
+    V1AudioProjectsGetResponseDownloadsItem,
+)
+from .v1_audio_projects_get_response_error import V1AudioProjectsGetResponseError
+
+
+class V1AudioProjectsGetResponse(pydantic.BaseModel):
+    """
+    Success
+    """
+
+    model_config = pydantic.ConfigDict(
+        arbitrary_types_allowed=True,
+        populate_by_name=True,
+    )
+
+    created_at: str = pydantic.Field(
+        alias="created_at",
+    )
+    credits_charged: int = pydantic.Field(
+        alias="credits_charged",
+    )
+    """
+    The amount of credits deducted from your account to generate the audio. We charge credits right when the request is made.
+
+    If an error occurred while generating the audio, credits will be refunded and this field will be updated to include the refund.
+    """
+    downloads: typing.List[V1AudioProjectsGetResponseDownloadsItem] = pydantic.Field(
+        alias="downloads",
+    )
+    enabled: bool = pydantic.Field(
+        alias="enabled",
+    )
+    """
+    Indicates whether the resource is deleted
+    """
+    error: typing.Optional[V1AudioProjectsGetResponseError] = pydantic.Field(
+        alias="error",
+    )
+    """
+    In the case of an error, this object will contain the error encountered during video render
+    """
+    id: str = pydantic.Field(
+        alias="id",
+    )
+    """
+    Unique ID of the audio. This value can be used in the [get audio project API](https://docs.magichour.ai/api-reference/audio-projects/get-audio-details) to fetch additional details such as status
+    """
+    name: typing.Optional[str] = pydantic.Field(
+        alias="name",
+    )
+    """
+    The name of the audio.
+    """
+    status: typing_extensions.Literal[
+        "canceled", "complete", "draft", "error", "queued", "rendering"
+    ] = pydantic.Field(
+        alias="status",
+    )
+    """
+    The status of the audio.
+    """
+    type_: str = pydantic.Field(
+        alias="type",
+    )
+    """
+    The type of the audio project. Possible values are VOICE_GENERATOR
+    """
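The `status` literal on the new response suggests a simple polling pattern against the get-audio-details endpoint. A minimal sketch; which states count as terminal is an assumption here (`draft`, `queued`, and `rendering` are treated as still in flight):

```python
# All status literals declared on V1AudioProjectsGetResponse.
AUDIO_STATUSES = {"canceled", "complete", "draft", "error", "queued", "rendering"}

# Assumed terminal states: the project will no longer change on its own.
TERMINAL = {"canceled", "complete", "error"}


def is_terminal(status: str) -> bool:
    """Return True once an audio project's status can no longer change."""
    if status not in AUDIO_STATUSES:
        raise ValueError(f"unknown audio project status: {status!r}")
    return status in TERMINAL
```

A caller would poll until `is_terminal(res.status)` is true, then read `downloads` on success or `error` on failure.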
--- /dev/null
+++ b/magic_hour/types/models/v1_audio_projects_get_response_downloads_item.py
@@ -0,0 +1,19 @@
+import pydantic
+
+
+class V1AudioProjectsGetResponseDownloadsItem(pydantic.BaseModel):
+    """
+    The download url and expiration date of the audio project
+    """
+
+    model_config = pydantic.ConfigDict(
+        arbitrary_types_allowed=True,
+        populate_by_name=True,
+    )
+
+    expires_at: str = pydantic.Field(
+        alias="expires_at",
+    )
+    url: str = pydantic.Field(
+        alias="url",
+    )
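Since each download carries an `expires_at`, callers should check freshness before reusing a cached URL. A sketch assuming `expires_at` is an ISO-8601 timestamp (the exact format is not specified in this diff):

```python
from datetime import datetime, timezone
from typing import Optional


def download_is_fresh(expires_at: str, now: Optional[datetime] = None) -> bool:
    """True while the download URL is still usable. Assumes `expires_at`
    is ISO-8601; a trailing "Z" is normalized for older Pythons."""
    expiry = datetime.fromisoformat(expires_at.replace("Z", "+00:00"))
    if now is None:
        now = datetime.now(timezone.utc)
    return now < expiry
```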
--- /dev/null
+++ b/magic_hour/types/models/v1_audio_projects_get_response_error.py
@@ -0,0 +1,25 @@
+import pydantic
+
+
+class V1AudioProjectsGetResponseError(pydantic.BaseModel):
+    """
+    In the case of an error, this object will contain the error encountered during video render
+    """
+
+    model_config = pydantic.ConfigDict(
+        arbitrary_types_allowed=True,
+        populate_by_name=True,
+    )
+
+    code: str = pydantic.Field(
+        alias="code",
+    )
+    """
+    An error code to indicate why a failure happened.
+    """
+    message: str = pydantic.Field(
+        alias="message",
+    )
+    """
+    Details on the reason why a failure happened.
+    """
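A failed render surfaces a `code`/`message` pair on the response's `error` field. A hypothetical log-line formatter (taking the error as a plain dict mirroring the model above):

```python
from typing import Optional


def format_audio_error(error: Optional[dict]) -> str:
    """Render the error object from a failed audio render, or note
    success when the error field is absent."""
    if error is None:
        return "render succeeded"
    return f"render failed [{error['code']}]: {error['message']}"
```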
--- a/magic_hour/types/params/__init__.py
+++ b/magic_hour/types/params/__init__.py
@@ -123,6 +123,14 @@ from .v1_ai_talking_photo_create_body_style import (
     _SerializerV1AiTalkingPhotoCreateBodyStyle,
 )
 from .v1_ai_talking_photo_generate_body_assets import V1AiTalkingPhotoGenerateBodyAssets
+from .v1_ai_voice_generator_create_body import (
+    V1AiVoiceGeneratorCreateBody,
+    _SerializerV1AiVoiceGeneratorCreateBody,
+)
+from .v1_ai_voice_generator_create_body_style import (
+    V1AiVoiceGeneratorCreateBodyStyle,
+    _SerializerV1AiVoiceGeneratorCreateBodyStyle,
+)
 from .v1_animation_create_body import (
     V1AnimationCreateBody,
     _SerializerV1AnimationCreateBody,
@@ -306,6 +314,8 @@ __all__ = [
     "V1AiTalkingPhotoCreateBodyAssets",
     "V1AiTalkingPhotoCreateBodyStyle",
     "V1AiTalkingPhotoGenerateBodyAssets",
+    "V1AiVoiceGeneratorCreateBody",
+    "V1AiVoiceGeneratorCreateBodyStyle",
     "V1AnimationCreateBody",
     "V1AnimationCreateBodyAssets",
     "V1AnimationCreateBodyStyle",
@@ -378,6 +388,8 @@ __all__ = [
     "_SerializerV1AiTalkingPhotoCreateBody",
     "_SerializerV1AiTalkingPhotoCreateBodyAssets",
     "_SerializerV1AiTalkingPhotoCreateBodyStyle",
+    "_SerializerV1AiVoiceGeneratorCreateBody",
+    "_SerializerV1AiVoiceGeneratorCreateBodyStyle",
     "_SerializerV1AnimationCreateBody",
     "_SerializerV1AnimationCreateBodyAssets",
     "_SerializerV1AnimationCreateBodyStyle",
--- /dev/null
+++ b/magic_hour/types/params/v1_ai_voice_generator_create_body.py
@@ -0,0 +1,40 @@
+import pydantic
+import typing
+import typing_extensions
+
+from .v1_ai_voice_generator_create_body_style import (
+    V1AiVoiceGeneratorCreateBodyStyle,
+    _SerializerV1AiVoiceGeneratorCreateBodyStyle,
+)
+
+
+class V1AiVoiceGeneratorCreateBody(typing_extensions.TypedDict):
+    """
+    V1AiVoiceGeneratorCreateBody
+    """
+
+    name: typing_extensions.NotRequired[str]
+    """
+    The name of audio. This value is mainly used for your own identification of the audio.
+    """
+
+    style: typing_extensions.Required[V1AiVoiceGeneratorCreateBodyStyle]
+    """
+    The content used to generate speech.
+    """
+
+
+class _SerializerV1AiVoiceGeneratorCreateBody(pydantic.BaseModel):
+    """
+    Serializer for V1AiVoiceGeneratorCreateBody handling case conversions
+    and file omissions as dictated by the API
+    """
+
+    model_config = pydantic.ConfigDict(
+        populate_by_name=True,
+    )
+
+    name: typing.Optional[str] = pydantic.Field(alias="name", default=None)
+    style: _SerializerV1AiVoiceGeneratorCreateBodyStyle = pydantic.Field(
+        alias="style",
+    )
--- /dev/null
+++ b/magic_hour/types/params/v1_ai_voice_generator_create_body_style.py
@@ -0,0 +1,60 @@
+import pydantic
+import typing_extensions
+
+
+class V1AiVoiceGeneratorCreateBodyStyle(typing_extensions.TypedDict):
+    """
+    The content used to generate speech.
+    """
+
+    prompt: typing_extensions.Required[str]
+    """
+    Text used to generate speech. Starter tier users can use up to 200 characters, while Creator, Pro, or Business users can use up to 1000.
+    """
+
+    voice_name: typing_extensions.Required[
+        typing_extensions.Literal[
+            "Barack Obama",
+            "Donald Trump",
+            "Elon Musk",
+            "Joe Biden",
+            "Joe Rogan",
+            "Kanye West",
+            "Kim Kardashian",
+            "Mark Zuckerberg",
+            "Morgan Freeman",
+            "Taylor Swift",
+        ]
+    ]
+    """
+    The voice to use for the speech. Available voices: Elon Musk, Mark Zuckerberg, Joe Rogan, Barack Obama, Morgan Freeman, Kanye West, Donald Trump, Joe Biden, Kim Kardashian, Taylor Swift
+    """
+
+
+class _SerializerV1AiVoiceGeneratorCreateBodyStyle(pydantic.BaseModel):
+    """
+    Serializer for V1AiVoiceGeneratorCreateBodyStyle handling case conversions
+    and file omissions as dictated by the API
+    """
+
+    model_config = pydantic.ConfigDict(
+        populate_by_name=True,
+    )
+
+    prompt: str = pydantic.Field(
+        alias="prompt",
+    )
+    voice_name: typing_extensions.Literal[
+        "Barack Obama",
+        "Donald Trump",
+        "Elon Musk",
+        "Joe Biden",
+        "Joe Rogan",
+        "Kanye West",
+        "Kim Kardashian",
+        "Mark Zuckerberg",
+        "Morgan Freeman",
+        "Taylor Swift",
+    ] = pydantic.Field(
+        alias="voice_name",
+    )
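The `voice_name` literal is a closed set, and the `prompt` length is tier-limited (200 characters on Starter; up to 1000 on Creator, Pro, or Business). A hypothetical pre-flight builder for the `style` payload, not part of the SDK itself:

```python
# Supported voices, mirroring the Literal in V1AiVoiceGeneratorCreateBodyStyle.
SUPPORTED_VOICES = frozenset(
    {
        "Barack Obama", "Donald Trump", "Elon Musk", "Joe Biden", "Joe Rogan",
        "Kanye West", "Kim Kardashian", "Mark Zuckerberg", "Morgan Freeman",
        "Taylor Swift",
    }
)


def build_voice_style(prompt: str, voice_name: str, max_prompt_chars: int = 200) -> dict:
    """Assemble the style dict for the voice generator endpoint, rejecting
    unsupported voices and over-long prompts up front. The 200-character
    default matches the Starter tier; pass 1000 for paid tiers."""
    if voice_name not in SUPPORTED_VOICES:
        raise ValueError(f"unsupported voice: {voice_name!r}")
    if len(prompt) > max_prompt_chars:
        raise ValueError(f"prompt exceeds {max_prompt_chars} characters")
    return {"prompt": prompt, "voice_name": voice_name}
```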
--- a/magic_hour/types/params/v1_video_to_video_create_body_style.py
+++ b/magic_hour/types/params/v1_video_to_video_create_body_style.py
@@ -19,6 +19,7 @@ class V1VideoToVideoCreateBodyStyle(typing_extensions.TypedDict):
         "Avatar",
         "Black Spiderman",
         "Boba Fett",
+        "Bold Anime",
         "Celestial Skin",
         "Chinese Swordsmen",
         "Clay",
@@ -58,6 +59,7 @@ class V1VideoToVideoCreateBodyStyle(typing_extensions.TypedDict):
         "Pixel",
         "Power Armor",
         "Power Ranger",
+        "Realistic Anime",
         "Retro Anime",
         "Retro Sci-Fi",
         "Samurai",
@@ -79,7 +81,7 @@ class V1VideoToVideoCreateBodyStyle(typing_extensions.TypedDict):
         ]
     ]
 
-    model: typing_extensions.
+    model: typing_extensions.NotRequired[
         typing_extensions.Literal[
             "Absolute Reality", "Dreamshaper", "Flat 2D Anime", "default"
         ]
@@ -91,12 +93,12 @@ class V1VideoToVideoCreateBodyStyle(typing_extensions.TypedDict):
     * `default` - use the default recommended model for the selected art style.
     """
 
-    prompt: typing_extensions.
+    prompt: typing_extensions.NotRequired[typing.Optional[str]]
     """
     The prompt used for the video. Prompt is required if `prompt_type` is `custom` or `append_default`. If `prompt_type` is `default`, then the `prompt` value passed will be ignored.
     """
 
-    prompt_type: typing_extensions.
+    prompt_type: typing_extensions.NotRequired[
         typing_extensions.Literal["append_default", "custom", "default"]
     ]
     """
@@ -105,7 +107,7 @@ class V1VideoToVideoCreateBodyStyle(typing_extensions.TypedDict):
     * `append_default` - Add the default recommended prompt to the end of the prompt passed in the API.
     """
 
-    version: typing_extensions.
+    version: typing_extensions.NotRequired[
         typing_extensions.Literal["default", "v1", "v2"]
     ]
     """
@@ -135,6 +137,7 @@ class _SerializerV1VideoToVideoCreateBodyStyle(pydantic.BaseModel):
         "Avatar",
         "Black Spiderman",
         "Boba Fett",
+        "Bold Anime",
         "Celestial Skin",
         "Chinese Swordsmen",
         "Clay",
@@ -174,6 +177,7 @@ class _SerializerV1VideoToVideoCreateBodyStyle(pydantic.BaseModel):
         "Pixel",
         "Power Armor",
         "Power Ranger",
+        "Realistic Anime",
         "Retro Anime",
         "Retro Sci-Fi",
         "Samurai",
@@ -195,19 +199,15 @@ class _SerializerV1VideoToVideoCreateBodyStyle(pydantic.BaseModel):
     ] = pydantic.Field(
         alias="art_style",
     )
-    model:
-
-
-
-    )
-    prompt: typing.Optional[str] = pydantic.Field(
-
-
-
-
-
-    )
-    )
-    version: typing_extensions.Literal["default", "v1", "v2"] = pydantic.Field(
-        alias="version",
+    model: typing.Optional[
+        typing_extensions.Literal[
+            "Absolute Reality", "Dreamshaper", "Flat 2D Anime", "default"
+        ]
+    ] = pydantic.Field(alias="model", default=None)
+    prompt: typing.Optional[str] = pydantic.Field(alias="prompt", default=None)
+    prompt_type: typing.Optional[
+        typing_extensions.Literal["append_default", "custom", "default"]
+    ] = pydantic.Field(alias="prompt_type", default=None)
+    version: typing.Optional[typing_extensions.Literal["default", "v1", "v2"]] = (
+        pydantic.Field(alias="version", default=None)
     )
--- a/magic_hour-0.39.0.dist-info/METADATA
+++ b/magic_hour-0.40.0.dist-info/METADATA
@@ -1,6 +1,6 @@
 Metadata-Version: 2.1
 Name: magic_hour
-Version: 0.39.0
+Version: 0.40.0
 Summary: Python SDK for Magic Hour API
 Requires-Python: >=3.8,<4.0
 Classifier: Programming Language :: Python :: 3
@@ -223,11 +223,21 @@ download_urls = result.downloads
 * [create](magic_hour/resources/v1/ai_talking_photo/README.md#create) - AI Talking Photo
 * [generate](magic_hour/resources/v1/ai_talking_photo/README.md#generate) - Ai Talking Photo Generate Workflow
 
+### [v1.ai_voice_generator](magic_hour/resources/v1/ai_voice_generator/README.md)
+
+* [create](magic_hour/resources/v1/ai_voice_generator/README.md#create) - AI Voice Generator
+
 ### [v1.animation](magic_hour/resources/v1/animation/README.md)
 
 * [create](magic_hour/resources/v1/animation/README.md#create) - Animation
 * [generate](magic_hour/resources/v1/animation/README.md#generate) - Animation Generate Workflow
 
+### [v1.audio_projects](magic_hour/resources/v1/audio_projects/README.md)
+
+* [delete](magic_hour/resources/v1/audio_projects/README.md#delete) - Delete audio
+* [get](magic_hour/resources/v1/audio_projects/README.md#get) - Get audio details
+
+
 ### [v1.auto_subtitle_generator](magic_hour/resources/v1/auto_subtitle_generator/README.md)
 
 * [create](magic_hour/resources/v1/auto_subtitle_generator/README.md#create) - Auto Subtitle Generator