comfyui-workflow-templates 0.1.94__py3-none-any.whl → 0.1.95__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.

Potentially problematic release.



Files changed (129)
  1. comfyui_workflow_templates/templates/3d_hunyuan3d_image_to_model.json +26 -26
  2. comfyui_workflow_templates/templates/3d_hunyuan3d_multiview_to_model.json +35 -35
  3. comfyui_workflow_templates/templates/3d_hunyuan3d_multiview_to_model_turbo.json +37 -37
  4. comfyui_workflow_templates/templates/api_bfl_flux_1_kontext_max_image.json +57 -81
  5. comfyui_workflow_templates/templates/api_bfl_flux_1_kontext_multiple_images_input.json +60 -84
  6. comfyui_workflow_templates/templates/api_bfl_flux_1_kontext_pro_image.json +57 -81
  7. comfyui_workflow_templates/templates/api_bfl_flux_pro_t2i.json +28 -26
  8. comfyui_workflow_templates/templates/api_bytedance_flf2v.json +11 -11
  9. comfyui_workflow_templates/templates/api_bytedance_image_to_video.json +34 -34
  10. comfyui_workflow_templates/templates/api_bytedance_text_to_video.json +39 -40
  11. comfyui_workflow_templates/templates/api_google_gemini.json +6 -7
  12. comfyui_workflow_templates/templates/api_hailuo_minimax_i2v.json +12 -3
  13. comfyui_workflow_templates/templates/api_hailuo_minimax_t2v.json +28 -28
  14. comfyui_workflow_templates/templates/api_hailuo_minimax_video.json +30 -30
  15. comfyui_workflow_templates/templates/api_ideogram_v3_t2i.json +32 -18
  16. comfyui_workflow_templates/templates/api_kling_effects.json +28 -26
  17. comfyui_workflow_templates/templates/api_kling_flf.json +32 -30
  18. comfyui_workflow_templates/templates/api_kling_i2v.json +34 -34
  19. comfyui_workflow_templates/templates/api_luma_i2v.json +93 -110
  20. comfyui_workflow_templates/templates/api_luma_photon_i2i.json +59 -50
  21. comfyui_workflow_templates/templates/api_luma_photon_style_ref.json +131 -124
  22. comfyui_workflow_templates/templates/api_luma_t2v.json +59 -50
  23. comfyui_workflow_templates/templates/api_openai_dall_e_2_t2i.json +8 -29
  24. comfyui_workflow_templates/templates/api_openai_image_1_i2i.json +12 -33
  25. comfyui_workflow_templates/templates/api_openai_image_1_inpaint.json +30 -51
  26. comfyui_workflow_templates/templates/api_openai_image_1_multi_inputs.json +10 -33
  27. comfyui_workflow_templates/templates/api_openai_image_1_t2i.json +42 -63
  28. comfyui_workflow_templates/templates/api_openai_sora_video.json +37 -38
  29. comfyui_workflow_templates/templates/api_pika_i2v.json +27 -27
  30. comfyui_workflow_templates/templates/api_pika_scene.json +12 -3
  31. comfyui_workflow_templates/templates/api_pixverse_i2v.json +31 -36
  32. comfyui_workflow_templates/templates/api_pixverse_t2v.json +20 -16
  33. comfyui_workflow_templates/templates/api_pixverse_template_i2v.json +39 -35
  34. comfyui_workflow_templates/templates/api_recraft_image_gen_with_color_control.json +6 -6
  35. comfyui_workflow_templates/templates/api_recraft_image_gen_with_style_control.json +212 -199
  36. comfyui_workflow_templates/templates/api_recraft_vector_gen.json +78 -69
  37. comfyui_workflow_templates/templates/api_rodin_gen2.json +30 -30
  38. comfyui_workflow_templates/templates/api_rodin_image_to_model.json +55 -55
  39. comfyui_workflow_templates/templates/api_rodin_multiview_to_model.json +188 -132
  40. comfyui_workflow_templates/templates/api_runway_first_last_frame.json +4 -4
  41. comfyui_workflow_templates/templates/api_runway_gen3a_turbo_image_to_video.json +30 -31
  42. comfyui_workflow_templates/templates/api_runway_gen4_turo_image_to_video.json +29 -30
  43. comfyui_workflow_templates/templates/api_runway_reference_to_image.json +31 -32
  44. comfyui_workflow_templates/templates/api_runway_text_to_image.json +17 -17
  45. comfyui_workflow_templates/templates/api_stability_ai_audio_inpaint.json +18 -18
  46. comfyui_workflow_templates/templates/api_stability_ai_audio_to_audio.json +31 -31
  47. comfyui_workflow_templates/templates/api_stability_ai_i2i.json +34 -34
  48. comfyui_workflow_templates/templates/api_stability_ai_sd3.5_i2i.json +21 -19
  49. comfyui_workflow_templates/templates/api_stability_ai_sd3.5_t2i.json +35 -35
  50. comfyui_workflow_templates/templates/api_stability_ai_stable_image_ultra_t2i.json +11 -9
  51. comfyui_workflow_templates/templates/api_tripo_image_to_model.json +90 -92
  52. comfyui_workflow_templates/templates/api_tripo_multiview_to_model.json +241 -241
  53. comfyui_workflow_templates/templates/api_tripo_text_to_model.json +102 -102
  54. comfyui_workflow_templates/templates/api_veo2_i2v.json +31 -28
  55. comfyui_workflow_templates/templates/api_veo3.json +30 -30
  56. comfyui_workflow_templates/templates/api_vidu_text_to_video.json +2 -2
  57. comfyui_workflow_templates/templates/api_wan_image_to_video.json +41 -42
  58. comfyui_workflow_templates/templates/api_wan_text_to_image .json +140 -0
  59. comfyui_workflow_templates/templates/api_wan_text_to_video.json +38 -45
  60. comfyui_workflow_templates/templates/audio_ace_step_1_m2m_editing.json +84 -84
  61. comfyui_workflow_templates/templates/audio_ace_step_1_t2a_instrumentals.json +60 -60
  62. comfyui_workflow_templates/templates/audio_ace_step_1_t2a_song.json +60 -60
  63. comfyui_workflow_templates/templates/esrgan_example.json +24 -30
  64. comfyui_workflow_templates/templates/flux1_dev_uso_reference_image_gen.json +215 -210
  65. comfyui_workflow_templates/templates/flux1_krea_dev.json +3 -3
  66. comfyui_workflow_templates/templates/flux_kontext_dev_basic.json +151 -231
  67. comfyui_workflow_templates/templates/flux_redux_model_example.json +108 -120
  68. comfyui_workflow_templates/templates/flux_schnell_full_text_to_image.json +21 -29
  69. comfyui_workflow_templates/templates/hidream_e1_1.json +179 -209
  70. comfyui_workflow_templates/templates/hidream_e1_full.json +33 -39
  71. comfyui_workflow_templates/templates/hidream_i1_dev.json +15 -15
  72. comfyui_workflow_templates/templates/hidream_i1_fast.json +15 -15
  73. comfyui_workflow_templates/templates/hidream_i1_full.json +17 -16
  74. comfyui_workflow_templates/templates/hiresfix_esrgan_workflow.json +31 -37
  75. comfyui_workflow_templates/templates/hiresfix_latent_workflow.json +84 -88
  76. comfyui_workflow_templates/templates/image2image.json +30 -30
  77. comfyui_workflow_templates/templates/image_chroma1_radiance_text_to_image.json +60 -60
  78. comfyui_workflow_templates/templates/image_lotus_depth_v1_1.json +25 -31
  79. comfyui_workflow_templates/templates/image_netayume_lumina_t2i-1.webp +0 -0
  80. comfyui_workflow_templates/templates/image_netayume_lumina_t2i.json +597 -0
  81. comfyui_workflow_templates/templates/image_omnigen2_image_edit.json +55 -62
  82. comfyui_workflow_templates/templates/image_omnigen2_t2i.json +26 -33
  83. comfyui_workflow_templates/templates/image_qwen_image.json +40 -40
  84. comfyui_workflow_templates/templates/image_qwen_image_controlnet_patch.json +32 -32
  85. comfyui_workflow_templates/templates/image_qwen_image_edit.json +29 -29
  86. comfyui_workflow_templates/templates/image_qwen_image_edit_2509.json +127 -127
  87. comfyui_workflow_templates/templates/image_qwen_image_instantx_controlnet.json +56 -55
  88. comfyui_workflow_templates/templates/image_qwen_image_instantx_inpainting_controlnet.json +108 -107
  89. comfyui_workflow_templates/templates/image_qwen_image_union_control_lora.json +5 -5
  90. comfyui_workflow_templates/templates/index.es.json +24 -0
  91. comfyui_workflow_templates/templates/index.fr.json +24 -0
  92. comfyui_workflow_templates/templates/index.ja.json +24 -0
  93. comfyui_workflow_templates/templates/index.json +11 -0
  94. comfyui_workflow_templates/templates/index.ko.json +24 -0
  95. comfyui_workflow_templates/templates/index.ru.json +24 -0
  96. comfyui_workflow_templates/templates/index.zh-TW.json +24 -0
  97. comfyui_workflow_templates/templates/index.zh.json +24 -0
  98. comfyui_workflow_templates/templates/inpaint_example.json +70 -72
  99. comfyui_workflow_templates/templates/inpaint_model_outpainting.json +4 -4
  100. comfyui_workflow_templates/templates/latent_upscale_different_prompt_model.json +179 -185
  101. comfyui_workflow_templates/templates/sdxlturbo_example.json +308 -162
  102. comfyui_workflow_templates/templates/video_wan2.1_fun_camera_v1.1_1.3B.json +89 -62
  103. comfyui_workflow_templates/templates/video_wan2.1_fun_camera_v1.1_14B.json +8 -4
  104. comfyui_workflow_templates/templates/video_wan2_2_14B_animate.json +46 -44
  105. comfyui_workflow_templates/templates/video_wan2_2_14B_flf2v.json +38 -38
  106. comfyui_workflow_templates/templates/video_wan2_2_14B_fun_camera.json +58 -54
  107. comfyui_workflow_templates/templates/video_wan2_2_14B_fun_control.json +36 -36
  108. comfyui_workflow_templates/templates/video_wan2_2_14B_fun_inpaint.json +26 -26
  109. comfyui_workflow_templates/templates/video_wan2_2_14B_i2v.json +4 -4
  110. comfyui_workflow_templates/templates/video_wan2_2_14B_s2v.json +33 -29
  111. comfyui_workflow_templates/templates/video_wan2_2_14B_t2v (2).json +1954 -0
  112. comfyui_workflow_templates/templates/video_wan2_2_5B_fun_control.json +29 -29
  113. comfyui_workflow_templates/templates/video_wan2_2_5B_fun_inpaint.json +25 -25
  114. comfyui_workflow_templates/templates/video_wan2_2_5B_ti2v.json +49 -49
  115. comfyui_workflow_templates/templates/video_wan_ati.json +49 -49
  116. comfyui_workflow_templates/templates/video_wan_vace_14B_ref2v.json +47 -61
  117. comfyui_workflow_templates/templates/video_wan_vace_14B_t2v.json +2 -2
  118. comfyui_workflow_templates/templates/video_wan_vace_14B_v2v.json +55 -55
  119. comfyui_workflow_templates/templates/video_wan_vace_flf2v.json +40 -56
  120. comfyui_workflow_templates/templates/video_wan_vace_inpainting.json +72 -72
  121. comfyui_workflow_templates/templates/video_wan_vace_outpainting.json +211 -237
  122. comfyui_workflow_templates/templates/wan2.1_flf2v_720_f16.json +84 -92
  123. comfyui_workflow_templates/templates/wan2.1_fun_control.json +51 -27
  124. comfyui_workflow_templates/templates/wan2.1_fun_inp.json +43 -17
  125. {comfyui_workflow_templates-0.1.94.dist-info → comfyui_workflow_templates-0.1.95.dist-info}/METADATA +1 -1
  126. {comfyui_workflow_templates-0.1.94.dist-info → comfyui_workflow_templates-0.1.95.dist-info}/RECORD +129 -125
  127. {comfyui_workflow_templates-0.1.94.dist-info → comfyui_workflow_templates-0.1.95.dist-info}/WHEEL +0 -0
  128. {comfyui_workflow_templates-0.1.94.dist-info → comfyui_workflow_templates-0.1.95.dist-info}/licenses/LICENSE +0 -0
  129. {comfyui_workflow_templates-0.1.94.dist-info → comfyui_workflow_templates-0.1.95.dist-info}/top_level.txt +0 -0
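The file list above comes from comparing the contents of the two wheel archives. Since a `.whl` is a zip archive, the added/removed part of such a comparison can be sketched as follows (a minimal sketch: in-memory zips stand in for the real 0.1.94/0.1.95 wheels, and the template names are illustrative):

```python
# Sketch: find files added/removed between two wheel archives (.whl files are zips).
# In-memory archives stand in for the real wheels; names are illustrative.
import io
import zipfile

def make_wheel(files):
    """Build a zip in memory from a {name: content} mapping."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        for name, data in files.items():
            zf.writestr(name, data)
    buf.seek(0)
    return buf

old = make_wheel({"templates/a.json": "{}", "templates/b.json": "{}"})
new = make_wheel({"templates/a.json": "{ }", "templates/c.json": "{}"})

old_names = set(zipfile.ZipFile(old).namelist())
new_names = set(zipfile.ZipFile(new).namelist())

added = sorted(new_names - old_names)    # files only in the new wheel
removed = sorted(old_names - new_names)  # files only in the old wheel
```

A real diff tool would additionally compare the bytes of files present in both archives (here, `templates/a.json` changed) to produce the per-file `+`/`-` line counts shown above.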
@@ -30,9 +30,9 @@
  }
  ],
  "properties": {
+ "Node name for S&R": "UNETLoader",
  "cnr_id": "comfy-core",
  "ver": "0.3.34",
- "Node name for S&R": "UNETLoader",
  "models": [
  {
  "name": "Wan2_1-I2V-ATI-14B_fp8_e4m3fn.safetensors",
@@ -81,9 +81,9 @@
  }
  ],
  "properties": {
+ "Node name for S&R": "CLIPLoader",
  "cnr_id": "comfy-core",
  "ver": "0.3.34",
- "Node name for S&R": "CLIPLoader",
  "models": [
  {
  "name": "umt5_xxl_fp8_e4m3fn_scaled.safetensors",
@@ -180,9 +180,9 @@
  }
  ],
  "properties": {
+ "Node name for S&R": "WanTrackToVideo",
  "cnr_id": "comfy-core",
- "ver": "0.3.45",
- "Node name for S&R": "WanTrackToVideo"
+ "ver": "0.3.45"
  },
  "widgets_values": [
  "[]",
@@ -227,9 +227,9 @@
  ],
  "title": "CLIP Text Encode (Negative Prompt)",
  "properties": {
+ "Node name for S&R": "CLIPTextEncode",
  "cnr_id": "comfy-core",
  "ver": "0.3.34",
- "Node name for S&R": "CLIPTextEncode",
  "enableTabs": false,
  "tabWidth": 65,
  "tabXOffset": 10,
@@ -281,9 +281,9 @@
  }
  ],
  "properties": {
+ "Node name for S&R": "CLIPVisionEncode",
  "cnr_id": "comfy-core",
- "ver": "0.3.45",
- "Node name for S&R": "CLIPVisionEncode"
+ "ver": "0.3.45"
  },
  "widgets_values": [
  "none"
@@ -316,9 +316,9 @@
  }
  ],
  "properties": {
+ "Node name for S&R": "VAELoader",
  "cnr_id": "comfy-core",
  "ver": "0.3.34",
- "Node name for S&R": "VAELoader",
  "models": [
  {
  "name": "wan_2.1_vae.safetensors",
@@ -364,9 +364,9 @@
  }
  ],
  "properties": {
+ "Node name for S&R": "CLIPVisionLoader",
  "cnr_id": "comfy-core",
  "ver": "0.3.41",
- "Node name for S&R": "CLIPVisionLoader",
  "models": [
  {
  "name": "clip_vision_h.safetensors",
@@ -402,9 +402,9 @@
  ],
  "outputs": [],
  "properties": {
+ "Node name for S&R": "SaveVideo",
  "cnr_id": "comfy-core",
- "ver": "0.3.45",
- "Node name for S&R": "SaveVideo"
+ "ver": "0.3.45"
  },
  "widgets_values": [
  "video/ComfyUI",
@@ -443,9 +443,9 @@
  }
  ],
  "properties": {
+ "Node name for S&R": "LoadImage",
  "cnr_id": "comfy-core",
- "ver": "0.3.26",
- "Node name for S&R": "LoadImage"
+ "ver": "0.3.26"
  },
  "widgets_values": [
  "input-14.jpg",
@@ -513,9 +513,9 @@
  }
  ],
  "properties": {
+ "Node name for S&R": "CreateVideo",
  "cnr_id": "comfy-core",
- "ver": "0.3.45",
- "Node name for S&R": "CreateVideo"
+ "ver": "0.3.45"
  },
  "widgets_values": [
  16
@@ -568,9 +568,9 @@
  }
  ],
  "properties": {
+ "Node name for S&R": "KSampler",
  "cnr_id": "comfy-core",
  "ver": "0.3.34",
- "Node name for S&R": "KSampler",
  "enableTabs": false,
  "tabWidth": 65,
  "tabXOffset": 10,
@@ -622,9 +622,9 @@
  }
  ],
  "properties": {
+ "Node name for S&R": "ModelSamplingSD3",
  "cnr_id": "comfy-core",
  "ver": "0.3.34",
- "Node name for S&R": "ModelSamplingSD3",
  "enableTabs": false,
  "tabWidth": 65,
  "tabXOffset": 10,
@@ -701,9 +701,9 @@
  }
  ],
  "properties": {
+ "Node name for S&R": "VAEDecode",
  "cnr_id": "comfy-core",
  "ver": "0.3.34",
- "Node name for S&R": "VAEDecode",
  "enableTabs": false,
  "tabWidth": 65,
  "tabXOffset": 10,
@@ -748,9 +748,9 @@
  ],
  "title": "CLIP Text Encode (Positive Prompt)",
  "properties": {
+ "Node name for S&R": "CLIPTextEncode",
  "cnr_id": "comfy-core",
  "ver": "0.3.34",
- "Node name for S&R": "CLIPTextEncode",
  "enableTabs": false,
  "tabWidth": 65,
  "tabXOffset": 10,
@@ -766,30 +766,6 @@
  "color": "#232",
  "bgcolor": "#353"
  },
- {
- "id": 259,
- "type": "MarkdownNote",
- "pos": [
- -1040,
- 10
- ],
- "size": [
- 480,
- 410
- ],
- "flags": {},
- "order": 7,
- "mode": 0,
- "inputs": [],
- "outputs": [],
- "title": "Model Links",
- "properties": {},
- "widgets_values": [
- "[Tutorial](http://docs.comfy.org/tutorials/video/wan/wan-ati) | [教程](http://docs.comfy.org/zh-CN/tutorials/video/wan/wan-ati)\n\n**Diffusion Model**\n- [Wan2_1-I2V-ATI-14B_fp8_e4m3fn.safetensors](https://huggingface.co/Kijai/WanVideo_comfy/resolve/main/Wan2_1-I2V-ATI-14B_fp8_e4m3fn.safetensors)\n\n**VAE**\n- [wan_2.1_vae.safetensors](https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/vae/wan_2.1_vae.safetensors?download=true)\n\n**Text encoders** Chose one of following model\n- [umt5_xxl_fp16.safetensors](https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/text_encoders/umt5_xxl_fp16.safetensors?download=true)\n- [umt5_xxl_fp8_e4m3fn_scaled.safetensors](https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/text_encoders/umt5_xxl_fp8_e4m3fn_scaled.safetensors?download=true)\n\n\n**clip_vision**\n- [clip_vision_h.safetensors](https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/clip_vision/clip_vision_h.safetensors)\n\nFile save location\n\n```\nComfyUI/\n├───📂 models/\n│ ├───📂 diffusion_models/\n│ │ └───Wan2_1-I2V-ATI-14B_fp8_e4m3fn.safetensors\n│ ├───📂 text_encoders/\n│ │ └─── umt5_xxl_fp8_e4m3fn_scaled.safetensors # or other version\n│ ├───📂 clip_vision/\n│ │ └─── clip_vision_h.safetensors\n│ └───📂 vae/\n│ └── wan_2.1_vae.safetensors\n```\n"
- ],
- "color": "#432",
- "bgcolor": "#653"
- },
  {
  "id": 247,
  "type": "PrimitiveStringMultiline",
@@ -802,7 +778,7 @@
  310
  ],
  "flags": {},
- "order": 8,
+ "order": 7,
  "mode": 0,
  "inputs": [],
  "outputs": [
@@ -816,13 +792,37 @@
  ],
  "title": "Trajectory JSON",
  "properties": {
+ "Node name for S&R": "PrimitiveStringMultiline",
  "cnr_id": "comfy-core",
- "ver": "0.3.45",
- "Node name for S&R": "PrimitiveStringMultiline"
+ "ver": "0.3.45"
  },
  "widgets_values": [
  "[\n [\n {\n \"x\": 393,\n \"y\": 126\n },\n {\n \"x\": 393,\n \"y\": 126\n },\n {\n \"x\": 393,\n \"y\": 126\n },\n {\n \"x\": 393,\n \"y\": 125\n },\n {\n \"x\": 388,\n \"y\": 123\n },\n {\n \"x\": 380,\n \"y\": 123\n },\n {\n \"x\": 372,\n \"y\": 122\n },\n {\n \"x\": 338,\n \"y\": 122\n },\n {\n \"x\": 312,\n \"y\": 121\n },\n {\n \"x\": 294,\n \"y\": 121\n },\n {\n \"x\": 281,\n \"y\": 122\n },\n {\n \"x\": 263,\n \"y\": 123\n },\n {\n \"x\": 254,\n \"y\": 125\n },\n {\n \"x\": 245,\n \"y\": 126\n },\n {\n \"x\": 241,\n \"y\": 127\n },\n {\n \"x\": 240,\n \"y\": 127\n },\n {\n \"x\": 240,\n \"y\": 128\n },\n {\n \"x\": 239,\n \"y\": 128\n },\n {\n \"x\": 238,\n \"y\": 130\n },\n {\n \"x\": 236,\n \"y\": 131\n },\n {\n \"x\": 233,\n \"y\": 133\n },\n {\n \"x\": 230,\n \"y\": 135\n },\n {\n \"x\": 226,\n \"y\": 137\n },\n {\n \"x\": 226,\n \"y\": 137\n },\n {\n \"x\": 226,\n \"y\": 137\n },\n {\n \"x\": 226,\n \"y\": 138\n },\n {\n \"x\": 226,\n \"y\": 138\n },\n {\n \"x\": 226,\n \"y\": 138\n },\n {\n \"x\": 226,\n \"y\": 139\n },\n {\n \"x\": 226,\n \"y\": 139\n },\n {\n \"x\": 226,\n \"y\": 141\n },\n {\n \"x\": 226,\n \"y\": 142\n },\n {\n \"x\": 226,\n \"y\": 142\n },\n {\n \"x\": 228,\n \"y\": 143\n },\n {\n \"x\": 229,\n \"y\": 143\n },\n {\n \"x\": 230,\n \"y\": 144\n },\n {\n \"x\": 232,\n \"y\": 145\n },\n {\n \"x\": 237,\n \"y\": 146\n },\n {\n \"x\": 238,\n \"y\": 146\n },\n {\n \"x\": 246,\n \"y\": 147\n },\n {\n \"x\": 252,\n \"y\": 149\n },\n {\n \"x\": 260,\n \"y\": 150\n },\n {\n \"x\": 262,\n \"y\": 150\n },\n {\n \"x\": 265,\n \"y\": 150\n },\n {\n \"x\": 271,\n \"y\": 151\n },\n {\n \"x\": 276,\n \"y\": 151\n },\n {\n \"x\": 278,\n \"y\": 151\n },\n {\n \"x\": 281,\n \"y\": 151\n },\n {\n \"x\": 286,\n \"y\": 152\n },\n {\n \"x\": 289,\n \"y\": 152\n },\n {\n \"x\": 292,\n \"y\": 152\n },\n {\n \"x\": 296,\n \"y\": 152\n },\n {\n \"x\": 299,\n \"y\": 152\n },\n {\n \"x\": 307,\n \"y\": 153\n },\n {\n \"x\": 311,\n \"y\": 153\n },\n {\n 
\"x\": 315,\n \"y\": 154\n },\n {\n \"x\": 317,\n \"y\": 154\n },\n {\n \"x\": 320,\n \"y\": 155\n },\n {\n \"x\": 324,\n \"y\": 155\n },\n {\n \"x\": 326,\n \"y\": 157\n },\n {\n \"x\": 327,\n \"y\": 157\n },\n {\n \"x\": 328,\n \"y\": 157\n },\n {\n \"x\": 331,\n \"y\": 158\n },\n {\n \"x\": 332,\n \"y\": 158\n },\n {\n \"x\": 333,\n \"y\": 158\n },\n {\n \"x\": 335,\n \"y\": 159\n },\n {\n \"x\": 340,\n \"y\": 159\n },\n {\n \"x\": 345,\n \"y\": 160\n },\n {\n \"x\": 353,\n \"y\": 161\n },\n {\n \"x\": 357,\n \"y\": 162\n },\n {\n \"x\": 362,\n \"y\": 163\n },\n {\n \"x\": 367,\n \"y\": 165\n },\n {\n \"x\": 369,\n \"y\": 165\n },\n {\n \"x\": 372,\n \"y\": 166\n },\n {\n \"x\": 375,\n \"y\": 166\n },\n {\n \"x\": 378,\n \"y\": 166\n },\n {\n \"x\": 379,\n \"y\": 167\n },\n {\n \"x\": 381,\n \"y\": 167\n },\n {\n \"x\": 384,\n \"y\": 167\n },\n {\n \"x\": 387,\n \"y\": 168\n },\n {\n \"x\": 392,\n \"y\": 169\n },\n {\n \"x\": 400,\n \"y\": 170\n },\n {\n \"x\": 405,\n \"y\": 170\n },\n {\n \"x\": 410,\n \"y\": 170\n },\n {\n \"x\": 417,\n \"y\": 171\n },\n {\n \"x\": 425,\n \"y\": 174\n },\n {\n \"x\": 434,\n \"y\": 174\n },\n {\n \"x\": 441,\n \"y\": 175\n },\n {\n \"x\": 448,\n \"y\": 175\n },\n {\n \"x\": 456,\n \"y\": 176\n },\n {\n \"x\": 465,\n \"y\": 177\n },\n {\n \"x\": 472,\n \"y\": 177\n },\n {\n \"x\": 479,\n \"y\": 177\n },\n {\n \"x\": 484,\n \"y\": 177\n },\n {\n \"x\": 491,\n \"y\": 178\n },\n {\n \"x\": 498,\n \"y\": 179\n },\n {\n \"x\": 502,\n \"y\": 179\n },\n {\n \"x\": 505,\n \"y\": 179\n },\n {\n \"x\": 514,\n \"y\": 179\n },\n {\n \"x\": 520,\n \"y\": 179\n },\n {\n \"x\": 523,\n \"y\": 181\n },\n {\n \"x\": 530,\n \"y\": 181\n },\n {\n \"x\": 537,\n \"y\": 182\n },\n {\n \"x\": 544,\n \"y\": 183\n },\n {\n \"x\": 551,\n \"y\": 183\n },\n {\n \"x\": 554,\n \"y\": 183\n },\n {\n \"x\": 561,\n \"y\": 184\n },\n {\n \"x\": 569,\n \"y\": 185\n },\n {\n \"x\": 577,\n \"y\": 186\n },\n {\n \"x\": 581,\n \"y\": 186\n },\n {\n \"x\": 586,\n 
\"y\": 186\n },\n {\n \"x\": 590,\n \"y\": 187\n },\n {\n \"x\": 596,\n \"y\": 187\n },\n {\n \"x\": 600,\n \"y\": 189\n },\n {\n \"x\": 602,\n \"y\": 189\n },\n {\n \"x\": 607,\n \"y\": 189\n },\n {\n \"x\": 612,\n \"y\": 190\n },\n {\n \"x\": 614,\n \"y\": 190\n },\n {\n \"x\": 616,\n \"y\": 190\n },\n {\n \"x\": 617,\n \"y\": 190\n },\n {\n \"x\": 617,\n \"y\": 190\n },\n {\n \"x\": 619,\n \"y\": 191\n },\n {\n \"x\": 619,\n \"y\": 191\n },\n {\n \"x\": 620,\n \"y\": 191\n },\n {\n \"x\": 620,\n \"y\": 191\n },\n {\n \"x\": 621,\n \"y\": 191\n },\n {\n \"x\": 623,\n \"y\": 191\n },\n {\n \"x\": 624,\n \"y\": 191\n },\n {\n \"x\": 625,\n \"y\": 191\n },\n {\n \"x\": 625,\n \"y\": 192\n },\n {\n \"x\": 625,\n \"y\": 192\n },\n {\n \"x\": 625,\n \"y\": 192\n },\n {\n \"x\": 628,\n \"y\": 197\n },\n {\n \"x\": 628,\n \"y\": 199\n },\n {\n \"x\": 629,\n \"y\": 200\n },\n {\n \"x\": 629,\n \"y\": 201\n },\n {\n \"x\": 629,\n \"y\": 202\n },\n {\n \"x\": 629,\n \"y\": 202\n },\n {\n \"x\": 629,\n \"y\": 203\n },\n {\n \"x\": 629,\n \"y\": 203\n },\n {\n \"x\": 629,\n \"y\": 203\n },\n {\n \"x\": 629,\n \"y\": 203\n },\n {\n \"x\": 629,\n \"y\": 203\n },\n {\n \"x\": 630,\n \"y\": 205\n },\n {\n \"x\": 630,\n \"y\": 205\n },\n {\n \"x\": 630,\n \"y\": 206\n },\n {\n \"x\": 630,\n \"y\": 207\n },\n {\n \"x\": 630,\n \"y\": 207\n },\n {\n \"x\": 630,\n \"y\": 208\n },\n {\n \"x\": 630,\n \"y\": 208\n },\n {\n \"x\": 631,\n \"y\": 211\n },\n {\n \"x\": 631,\n \"y\": 211\n },\n {\n \"x\": 631,\n \"y\": 213\n },\n {\n \"x\": 631,\n \"y\": 215\n },\n {\n \"x\": 632,\n \"y\": 215\n },\n {\n \"x\": 632,\n \"y\": 216\n },\n {\n \"x\": 632,\n \"y\": 216\n },\n {\n \"x\": 632,\n \"y\": 216\n },\n {\n \"x\": 632,\n \"y\": 217\n },\n {\n \"x\": 632,\n \"y\": 217\n },\n {\n \"x\": 632,\n \"y\": 218\n },\n {\n \"x\": 632,\n \"y\": 218\n }\n ]\n]"
  ]
+ },
+ {
+ "id": 259,
+ "type": "MarkdownNote",
+ "pos": [
+ -1040,
+ 10
+ ],
+ "size": [
+ 480,
+ 410
+ ],
+ "flags": {},
+ "order": 8,
+ "mode": 0,
+ "inputs": [],
+ "outputs": [],
+ "title": "Model Links",
+ "properties": {},
+ "widgets_values": [
+ "[Tutorial](http://docs.comfy.org/tutorials/video/wan/wan-ati)\n\n**Diffusion Model**\n- [Wan2_1-I2V-ATI-14B_fp8_e4m3fn.safetensors](https://huggingface.co/Kijai/WanVideo_comfy/resolve/main/Wan2_1-I2V-ATI-14B_fp8_e4m3fn.safetensors)\n\n**VAE**\n- [wan_2.1_vae.safetensors](https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/vae/wan_2.1_vae.safetensors?download=true)\n\n**Text encoders** Chose one of following model\n- [umt5_xxl_fp16.safetensors](https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/text_encoders/umt5_xxl_fp16.safetensors?download=true)\n- [umt5_xxl_fp8_e4m3fn_scaled.safetensors](https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/text_encoders/umt5_xxl_fp8_e4m3fn_scaled.safetensors?download=true)\n\n\n**clip_vision**\n- [clip_vision_h.safetensors](https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/clip_vision/clip_vision_h.safetensors)\n\nFile save location\n\n```\nComfyUI/\n├───📂 models/\n│ ├───📂 diffusion_models/\n│ │ └───Wan2_1-I2V-ATI-14B_fp8_e4m3fn.safetensors\n│ ├───📂 text_encoders/\n│ │ └─── umt5_xxl_fp8_e4m3fn_scaled.safetensors # or other version\n│ ├───📂 clip_vision/\n│ │ └─── clip_vision_h.safetensors\n│ └───📂 vae/\n│ └── wan_2.1_vae.safetensors\n```\n"
+ ],
+ "color": "#432",
+ "bgcolor": "#653"
  }
  ],
  "links": [
@@ -1051,11 +1051,11 @@
  "ds": {
  "scale": 0.9229599817707979,
  "offset": [
- 1098.5764546277032,
- -443.13709495742705
+ 1667.800585821365,
+ 193.31300076934068
  ]
  },
- "frontendVersion": "1.23.4",
+ "frontendVersion": "1.28.6",
  "node_versions": {
  "comfy-core": "0.3.34"
  },
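Apart from the viewport offset and `frontendVersion` bump above, most hunks in this file only move the `"Node name for S&R"` key to the top of each node's `properties` object. For anything that parses the template with a standard JSON library into a dict, such key reordering is semantically a no-op:

```python
# Key order inside a JSON object does not affect parsed (dict) equality.
import json

before = json.loads('{"cnr_id": "comfy-core", "ver": "0.3.34", "Node name for S&R": "UNETLoader"}')
after = json.loads('{"Node name for S&R": "UNETLoader", "cnr_id": "comfy-core", "ver": "0.3.34"}')
assert before == after  # dict equality ignores insertion order
```

Only byte-level comparison (as in this diff) sees the change.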
@@ -237,9 +237,7 @@
  "umt5_xxl_fp16.safetensors",
  "wan",
  "default"
- ],
- "color": "#322",
- "bgcolor": "#533"
+ ]
  },
  {
  "id": 70,
@@ -852,9 +850,7 @@
  "widgets_values": [
  "input.jpg",
  "image"
- ],
- "color": "#322",
- "bgcolor": "#533"
+ ]
  },
  {
  "id": 117,
@@ -875,7 +871,7 @@
  "title": "KSampler Setting",
  "properties": {},
  "widgets_values": [
- "## Default\n\n- steps:20\n- cfg:6.0\n\n## [For CausVid LoRA](https://www.reddit.com/r/StableDiffusion/comments/1knuafk/causvid_lora_massive_speedup_for_wan21_made_by/)\n\n- steps: 2-4\n- cfg: 1.0\n\n"
+ "## Default\n\n- steps:20\n- cfg:6.0\n\n## For CausVid LoRA\n\n- steps: 2-4\n- cfg: 1.0\n\n"
  ],
  "color": "#432",
  "bgcolor": "#653"
@@ -916,9 +912,7 @@
  "widgets_values": [
  768,
  "fixed"
- ],
- "color": "#322",
- "bgcolor": "#533"
+ ]
  },
  {
  "id": 159,
@@ -956,9 +950,7 @@
  "widgets_values": [
  768,
  "fixed"
- ],
- "color": "#322",
- "bgcolor": "#533"
+ ]
  },
  {
  "id": 114,
@@ -1000,9 +992,7 @@
  },
  "widgets_values": [
  "wan_2.1_vae.safetensors"
- ],
- "color": "#322",
- "bgcolor": "#533"
+ ]
  },
  {
  "id": 69,
@@ -1137,30 +1127,6 @@
  1
  ]
  },
- {
- "id": 116,
- "type": "MarkdownNote",
- "pos": [
- 60,
- 1200
- ],
- "size": [
- 210,
- 110
- ],
- "flags": {},
- "order": 10,
- "mode": 0,
- "inputs": [],
- "outputs": [],
- "title": "About Video Size",
- "properties": {},
- "widgets_values": [
- "| Model | 480P | 720P |\n| ------------------------------------------------------------ | ---- | ---- |\n| [VACE-1.3B](https://huggingface.co/Wan-AI/Wan2.1-VACE-1.3B) | ✅ | ❌ |\n| [VACE-14B](https://huggingface.co/Wan-AI/Wan2.1-VACE-14B) | ✅ | ✅ |"
- ],
- "color": "#432",
- "bgcolor": "#653"
- },
  {
  "id": 68,
  "type": "CreateVideo",
@@ -1328,7 +1294,7 @@
  82
  ],
  "flags": {},
- "order": 11,
+ "order": 10,
  "mode": 0,
  "inputs": [],
  "outputs": [
@@ -1356,9 +1322,7 @@
  "widgets_values": [
  "wan2.1_vace_14B_fp16.safetensors",
  "default"
- ],
- "color": "#322",
- "bgcolor": "#533"
+ ]
  },
  {
  "id": 115,
@@ -1419,9 +1383,7 @@
  "Wan21_CausVid_14B_T2V_lora_rank32.safetensors",
  0.30000000000000004,
  1
- ],
- "color": "#322",
- "bgcolor": "#533"
+ ]
  },
  {
  "id": 107,
@@ -1521,7 +1483,7 @@
  100
  ],
  "flags": {},
- "order": 12,
+ "order": 11,
  "mode": 0,
  "inputs": [],
  "outputs": [],
@@ -1545,7 +1507,7 @@
  130
  ],
  "flags": {},
- "order": 13,
+ "order": 12,
  "mode": 0,
  "inputs": [],
  "outputs": [],
@@ -1569,7 +1531,7 @@
  120
  ],
  "flags": {},
- "order": 14,
+ "order": 13,
  "mode": 0,
  "inputs": [],
  "outputs": [],
@@ -1581,6 +1543,30 @@
  "color": "#432",
  "bgcolor": "#653"
  },
+ {
+ "id": 161,
+ "type": "MarkdownNote",
+ "pos": [
+ 470,
+ 1200
+ ],
+ "size": [
+ 230,
+ 110
+ ],
+ "flags": {},
+ "order": 14,
+ "mode": 4,
+ "inputs": [],
+ "outputs": [],
+ "title": "Note",
+ "properties": {},
+ "widgets_values": [
+ "Since VACE supports converting any frame into a video, here we have created a sequence of images with the first frame and the corresponding mask. In this way, we can control the starting frame.\n"
+ ],
+ "color": "#432",
+ "bgcolor": "#653"
+ },
  {
  "id": 164,
  "type": "MarkdownNote",
@@ -1599,31 +1585,31 @@
  "outputs": [],
  "properties": {},
  "widgets_values": [
- "[Tutorial](https://docs.comfy.org/tutorials/video/wan/vace) | [教程](https://docs.comfy.org/zh-CN/tutorials/video/wan/vace)\n\n[Causvid Lora extracted by Kijai](https://www.reddit.com/r/StableDiffusion/comments/1knuafk/causvid_lora_massive_speedup_for_wan21_made_by/) Thanks to CausVid MIT\n\n## 14B Support 480P 720P\n\n**Diffusion Model**\n- [wan2.1_vace_14B_fp16.safetensors](https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/diffusion_models/wan2.1_vace_14B_fp16.safetensors)\n\n**LoRA**\n- [Wan21_CausVid_14B_T2V_lora_rank32.safetensors](https://huggingface.co/Kijai/WanVideo_comfy/blob/main/Wan21_CausVid_14B_T2V_lora_rank32.safetensors)\n\nIt takes about 40 minutes to complete at 81 frames 720P resolution with the RTX 4090 . \nAfter using Wan21_CausVid_14B_T2V_lora_rank32.safetensors, it only takes about 4 minutes.\n\n## 1.3B Support 480P only\n\n**Diffusion Model**\n- [wan2.1_vace_1.3B_fp16.safetensors](https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/diffusion_models/wan2.1_vace_1.3B_fp16.safetensors)\n\n**LoRA**\n- [Wan21_CausVid_bidirect2_T2V_1_3B_lora_rank32.safetensors](https://huggingface.co/Kijai/WanVideo_comfy/blob/main/Wan21_CausVid_bidirect2_T2V_1_3B_lora_rank32.safetensors)\n \n\n## Other Models\n\n* You may already have these models if you use Wan workflow before.\n\n**VAE**\n- [wan_2.1_vae.safetensors](https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/vae/wan_2.1_vae.safetensors?download=true)\n\n**Text encoders** Chose one of following model\n- [umt5_xxl_fp16.safetensors](https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/text_encoders/umt5_xxl_fp16.safetensors?download=true)\n- [umt5_xxl_fp8_e4m3fn_scaled.safetensors](https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/text_encoders/umt5_xxl_fp8_e4m3fn_scaled.safetensors?download=true)\n\n> You can choose between fp16 of fp8; I 
used fp16 to match what kijai's wrapper is compatible with.\n\nFile save location\n\n```\nComfyUI/\n├── models/\n│ ├── diffusion_models/\n│ │ ├-── wan2.1_vace_14B_fp16.safetensors\n│ │ └─── wan2.1_vace_1.3B_fp16.safetensors \n│ ├── text_encoders/\n│ │ └─── umt5_xxl_fp8_e4m3fn_scaled.safetensors # or fp16\n│ ├── loras/\n│ │ ├── Wan21_CausVid_14B_T2V_lora_rank32.safetensors\n│ │ └── Wan21_CausVid_bidirect2_T2V_1_3B_lora_rank32.safetensors\n│ └── vae/\n│ └── wan_2.1_vae.safetensors\n```\n"
1588
+ "[Tutorial](https://docs.comfy.org/tutorials/video/wan/vace)\n\n## 14B (Supports 480P and 720P)\n\n**Diffusion Model**\n- [wan2.1_vace_14B_fp16.safetensors](https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/diffusion_models/wan2.1_vace_14B_fp16.safetensors)\n\n**LoRA**\n- [Wan21_CausVid_14B_T2V_lora_rank32.safetensors](https://huggingface.co/Kijai/WanVideo_comfy/blob/main/Wan21_CausVid_14B_T2V_lora_rank32.safetensors)\n\nIt takes about 40 minutes to render 81 frames at 720P resolution on an RTX 4090.\nWith Wan21_CausVid_14B_T2V_lora_rank32.safetensors, it takes only about 4 minutes.\n\n## 1.3B (Supports 480P only)\n\n**Diffusion Model**\n- [wan2.1_vace_1.3B_fp16.safetensors](https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/diffusion_models/wan2.1_vace_1.3B_fp16.safetensors)\n\n**LoRA**\n- [Wan21_CausVid_bidirect2_T2V_1_3B_lora_rank32.safetensors](https://huggingface.co/Kijai/WanVideo_comfy/blob/main/Wan21_CausVid_bidirect2_T2V_1_3B_lora_rank32.safetensors)\n\n## Other Models\n\n* You may already have these models if you have used Wan workflows before.\n\n**VAE**\n- [wan_2.1_vae.safetensors](https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/vae/wan_2.1_vae.safetensors?download=true)\n\n**Text encoders** Choose one of the following models\n- [umt5_xxl_fp16.safetensors](https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/text_encoders/umt5_xxl_fp16.safetensors?download=true)\n- [umt5_xxl_fp8_e4m3fn_scaled.safetensors](https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/text_encoders/umt5_xxl_fp8_e4m3fn_scaled.safetensors?download=true)\n\n> You can choose between fp16 and fp8; fp16 is used here to match what Kijai's wrapper is compatible with.\n\nFile save location\n\n```\nComfyUI/\n├── models/\n│ ├── diffusion_models/\n│ │ ├── wan2.1_vace_14B_fp16.safetensors\n│ │ └── wan2.1_vace_1.3B_fp16.safetensors\n│ ├── text_encoders/\n│ │ └── umt5_xxl_fp8_e4m3fn_scaled.safetensors # or fp16\n│ ├── loras/\n│ │ ├── Wan21_CausVid_14B_T2V_lora_rank32.safetensors\n│ │ └── Wan21_CausVid_bidirect2_T2V_1_3B_lora_rank32.safetensors\n│ └── vae/\n│ └── wan_2.1_vae.safetensors\n```\n"
1603
1589
  ],
1604
1590
  "color": "#432",
1605
1591
  "bgcolor": "#653"
1606
1592
  },
1607
1593
  {
1608
- "id": 161,
1594
+ "id": 116,
1609
1595
  "type": "MarkdownNote",
1610
1596
  "pos": [
1611
- 470,
1597
+ 60,
1612
1598
  1200
1613
1599
  ],
1614
1600
  "size": [
1615
- 230,
1601
+ 210,
1616
1602
  110
1617
1603
  ],
1618
1604
  "flags": {},
1619
1605
  "order": 16,
1620
- "mode": 4,
1606
+ "mode": 0,
1621
1607
  "inputs": [],
1622
1608
  "outputs": [],
1623
- "title": "Note",
1609
+ "title": "About Video Size",
1624
1610
  "properties": {},
1625
1611
  "widgets_values": [
1626
- "Since VACE supports converting any frame into a video, here we have created a sequence of images with the first frame and the corresponding mask. In this way, we can control the starting frame.\n"
1612
+ "| Model | 480P | 720P |\n| ------------------------------------------------------------ | ---- | ---- |\n| [VACE-1.3B](https://huggingface.co/Wan-AI/Wan2.1-VACE-1.3B) | ✅ | ❌ |\n| [VACE-14B](https://huggingface.co/Wan-AI/Wan2.1-VACE-14B) | ✅ | ✅ |"
1627
1613
  ],
1628
1614
  "color": "#432",
1629
1615
  "bgcolor": "#653"
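The markdown note in this hunk lays out where the downloaded files go. As a minimal sketch of that layout (the `COMFYUI_DIR` variable and the `DOWNLOAD=1` guard are illustrative assumptions, not part of the template; `wget` availability is also assumed), the folders can be prepared and the smaller files fetched like this:

```shell
# Create the ComfyUI model folders described in the note above.
COMFYUI_DIR="${COMFYUI_DIR:-ComfyUI}"
mkdir -p "$COMFYUI_DIR/models/diffusion_models" \
         "$COMFYUI_DIR/models/text_encoders" \
         "$COMFYUI_DIR/models/loras" \
         "$COMFYUI_DIR/models/vae"

# Optionally fetch the fp8 text encoder and the VAE from the
# Comfy-Org repackaged repo (large downloads; opt in with DOWNLOAD=1).
BASE="https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files"
if [ "${DOWNLOAD:-0}" = "1" ]; then
    wget -nc -P "$COMFYUI_DIR/models/text_encoders" \
        "$BASE/text_encoders/umt5_xxl_fp8_e4m3fn_scaled.safetensors"
    wget -nc -P "$COMFYUI_DIR/models/vae" \
        "$BASE/vae/wan_2.1_vae.safetensors"
fi
```

The diffusion models and CausVid LoRAs from the note go into `models/diffusion_models` and `models/loras` the same way.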
@@ -2146,13 +2132,13 @@
2146
2132
  "config": {},
2147
2133
  "extra": {
2148
2134
  "ds": {
2149
- "scale": 0.4665073802097333,
2135
+ "scale": 0.42409761837248505,
2150
2136
  "offset": [
2151
- 1673.8309046403217,
2152
- 132.15298787291272
2137
+ 1211.216186862774,
2138
+ 115.13140907226659
2153
2139
  ]
2154
2140
  },
2155
- "frontendVersion": "1.24.0-1",
2141
+ "frontendVersion": "1.28.6",
2156
2142
  "node_versions": {
2157
2143
  "comfy-core": "0.3.34"
2158
2144
  },
@@ -717,7 +717,7 @@
717
717
  "title": "KSampler Setting",
718
718
  "properties": {},
719
719
  "widgets_values": [
720
- "## Default\n\n- steps:20\n- cfg:6.0\n\n## [For CausVid LoRA](https://www.reddit.com/r/StableDiffusion/comments/1knuafk/causvid_lora_massive_speedup_for_wan21_made_by/)\n\n- steps: 2-4\n- cfg: 1.0\n\n"
720
+ "## Default\n\n- steps: 20\n- cfg: 6.0\n\n## For CausVid LoRA\n\n- steps: 2-4\n- cfg: 1.0\n\n"
721
721
  ],
722
722
  "color": "#432",
723
723
  "bgcolor": "#653"
@@ -951,7 +951,7 @@
951
951
  "outputs": [],
952
952
  "properties": {},
953
953
  "widgets_values": [
954
- "[Tutorial](https://docs.comfy.org/tutorials/video/wan/vace) | [教程](https://docs.comfy.org/zh-CN/tutorials/video/wan/vace)\n\n[Causvid Lora extracted by Kijai](https://www.reddit.com/r/StableDiffusion/comments/1knuafk/causvid_lora_massive_speedup_for_wan21_made_by/) Thanks to CausVid MIT\n\n## 14B Support 480P 720P\n\n**Diffusion Model**\n- [wan2.1_vace_14B_fp16.safetensors](https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/diffusion_models/wan2.1_vace_14B_fp16.safetensors)\n\n**LoRA**\n- [Wan21_CausVid_14B_T2V_lora_rank32.safetensors](https://huggingface.co/Kijai/WanVideo_comfy/blob/main/Wan21_CausVid_14B_T2V_lora_rank32.safetensors)\n\nIt takes about 40 minutes to complete at 81 frames 720P resolution with the RTX 4090 . \nAfter using Wan21_CausVid_14B_T2V_lora_rank32.safetensors, it only takes about 4 minutes.\n\n## 1.3B Support 480P only\n\n**Diffusion Model**\n- [wan2.1_vace_1.3B_fp16.safetensors](https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/diffusion_models/wan2.1_vace_1.3B_fp16.safetensors)\n\n**LoRA**\n- [Wan21_CausVid_bidirect2_T2V_1_3B_lora_rank32.safetensors](https://huggingface.co/Kijai/WanVideo_comfy/blob/main/Wan21_CausVid_bidirect2_T2V_1_3B_lora_rank32.safetensors)\n \n\n## Other Models\n\n* You may already have these models if you use Wan workflow before.\n\n**VAE**\n- [wan_2.1_vae.safetensors](https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/vae/wan_2.1_vae.safetensors?download=true)\n\n**Text encoders** Chose one of following model\n- [umt5_xxl_fp16.safetensors](https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/text_encoders/umt5_xxl_fp16.safetensors?download=true)\n- [umt5_xxl_fp8_e4m3fn_scaled.safetensors](https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/text_encoders/umt5_xxl_fp8_e4m3fn_scaled.safetensors?download=true)\n\n> You can choose between fp16 of fp8; I 
used fp16 to match what kijai's wrapper is compatible with.\n\nFile save location\n\n```\nComfyUI/\n├── models/\n│ ├── diffusion_models/\n│ │ ├-── wan2.1_vace_14B_fp16.safetensors\n│ │ └─── wan2.1_vace_1.3B_fp16.safetensors \n│ ├── text_encoders/\n│ │ └─── umt5_xxl_fp8_e4m3fn_scaled.safetensors # or fp16\n│ ├── loras/\n│ │ ├── Wan21_CausVid_14B_T2V_lora_rank32.safetensors\n│ │ └── Wan21_CausVid_bidirect2_T2V_1_3B_lora_rank32.safetensors\n│ └── vae/\n│ └── wan_2.1_vae.safetensors\n```\n"
954
+ "[Tutorial](https://docs.comfy.org/tutorials/video/wan/vace) | [Tutorial (zh-CN)](https://docs.comfy.org/zh-CN/tutorials/video/wan/vace)\n\n## 14B (Supports 480P and 720P)\n\n**Diffusion Model**\n- [wan2.1_vace_14B_fp16.safetensors](https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/diffusion_models/wan2.1_vace_14B_fp16.safetensors)\n\n**LoRA**\n- [Wan21_CausVid_14B_T2V_lora_rank32.safetensors](https://huggingface.co/Kijai/WanVideo_comfy/blob/main/Wan21_CausVid_14B_T2V_lora_rank32.safetensors)\n\nIt takes about 40 minutes to render 81 frames at 720P resolution on an RTX 4090.\nWith Wan21_CausVid_14B_T2V_lora_rank32.safetensors, it takes only about 4 minutes.\n\n## 1.3B (Supports 480P only)\n\n**Diffusion Model**\n- [wan2.1_vace_1.3B_fp16.safetensors](https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/diffusion_models/wan2.1_vace_1.3B_fp16.safetensors)\n\n**LoRA**\n- [Wan21_CausVid_bidirect2_T2V_1_3B_lora_rank32.safetensors](https://huggingface.co/Kijai/WanVideo_comfy/blob/main/Wan21_CausVid_bidirect2_T2V_1_3B_lora_rank32.safetensors)\n\n## Other Models\n\n* You may already have these models if you have used Wan workflows before.\n\n**VAE**\n- [wan_2.1_vae.safetensors](https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/vae/wan_2.1_vae.safetensors?download=true)\n\n**Text encoders** Choose one of the following models\n- [umt5_xxl_fp16.safetensors](https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/text_encoders/umt5_xxl_fp16.safetensors?download=true)\n- [umt5_xxl_fp8_e4m3fn_scaled.safetensors](https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/text_encoders/umt5_xxl_fp8_e4m3fn_scaled.safetensors?download=true)\n\n> You can choose between fp16 and fp8; fp16 is used here to match what Kijai's wrapper is compatible with.\n\nFile save location\n\n```\nComfyUI/\n├── models/\n│ ├── diffusion_models/\n│ │ ├── wan2.1_vace_14B_fp16.safetensors\n│ │ └── wan2.1_vace_1.3B_fp16.safetensors\n│ ├── text_encoders/\n│ │ └── umt5_xxl_fp8_e4m3fn_scaled.safetensors # or fp16\n│ ├── loras/\n│ │ ├── Wan21_CausVid_14B_T2V_lora_rank32.safetensors\n│ │ └── Wan21_CausVid_bidirect2_T2V_1_3B_lora_rank32.safetensors\n│ └── vae/\n│ └── wan_2.1_vae.safetensors\n```\n"
955
955
  ],
956
956
  "color": "#432",
957
957
  "bgcolor": "#653"