mmgp 1.2.0__tar.gz → 2.0.0__tar.gz


mmgp-2.0.0/PKG-INFO ADDED
@@ -0,0 +1,137 @@
1
+ Metadata-Version: 2.1
2
+ Name: mmgp
3
+ Version: 2.0.0
4
+ Summary: Memory Management for the GPU Poor
5
+ Author-email: deepbeepmeep <deepbeepmeep@yahoo.com>
6
+ License: GNU GENERAL PUBLIC LICENSE
7
+ Version 3, 29 June 2007
8
+ Requires-Python: >=3.10
9
+ Description-Content-Type: text/markdown
10
+ License-File: LICENSE.md
11
+ Requires-Dist: torch>=2.1.0
12
+ Requires-Dist: optimum-quanto
13
+ Requires-Dist: accelerate
14
+
15
+
16
+ <p align="center">
17
+ <H2>Memory Management 2.0 for the GPU Poor by DeepBeepMeep</H2>
18
+ </p>
19
+
20
+
21
+ This module contains multiple optimisations so that models such as Flux (and derived), Mochi, CogView, HunyuanVideo, ... can run smoothly on a GPU with only 12 to 24 GB of VRAM.
22
+ It is a replacement for the accelerate library, which should in theory manage offloading but doesn't work properly with models that are loaded / unloaded several
23
+ times in a pipe (e.g. a VAE).
24
+
25
+ Requirements:
26
+ - VRAM: minimum 12 GB, recommended 24 GB (RTX 3090/ RTX 4090)
27
+ - RAM: minimum 24 GB, recommended 48 GB
28
+
29
+ This module features 5 profiles so that you can run the model at a decent speed on a low-end consumer config (32 GB of RAM and 12 GB of VRAM) and at a very good speed on a high-end consumer config (48 GB of RAM and 24 GB of VRAM).
30
+
31
+ Each profile may use the following:
32
+ - Smart preloading of models in RAM to reduce RAM requirements
33
+ - Smart automated loading / unloading of models in the GPU to avoid unloading models that may be needed again soon
34
+ - Smart slicing of models to reduce memory occupied by models in the VRAM
35
+ - Ability to pin models in reserved RAM to accelerate transfers to VRAM
36
+ - Async transfers to VRAM to avoid a pause when loading a new slice of a model
37
+ - Automated on-the-fly quantization, or the ability to load already quantized models
38
+
39
+ ## Installation
40
+ First you need to install the module in your current project with:
41
+ ```shell
42
+ pip install mmgp
43
+ ```
44
+
45
+
46
+ ## Usage
47
+
48
+ It is almost plug and play: the module just needs to be invoked from the main app right after the model pipeline has been created.
49
+ 1) First make sure that the pipeline explicitly loads the models on the CPU device, for instance:
50
+ ```
51
+ pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16).to("cpu")
52
+ ```
53
+
54
+ 2) Once every potential LoRA has been loaded and merged, add the following lines for a quick setup:
55
+ ```
56
+ from mmgp import offload, profile_type
57
+ offload.profile(pipe, profile_type.HighRAM_LowVRAM_Fast)
58
+ ```
59
+
60
+ You can choose between 5 profiles depending on your hardware:
61
+ - HighRAM_HighVRAM_Fastest: at least 48 GB of RAM and 24 GB of VRAM: the fastest profile, well suited for an RTX 3090 / RTX 4090
62
+ - HighRAM_LowVRAM_Fast (recommended): at least 48 GB of RAM and 12 GB of VRAM: a bit slower, better suited for an RTX 3070/3080/4070/4080
63
+ or for an RTX 3090 / RTX 4090 with large picture batches or long videos
64
+ - LowRAM_HighVRAM_Medium: at least 32 GB of RAM and 24 GB of VRAM: so-so speed, but well suited to an RTX 3090 / RTX 4090 with limited RAM
65
+ - LowRAM_LowVRAM_Slow: at least 32 GB of RAM and 12 GB of VRAM: if you have little VRAM or want to generate longer videos
66
+ - VerylowRAM_LowVRAM_Slowest: at least 24 GB of RAM and 10 GB of VRAM: if you don't have much of either, it won't be fast, but it may still work
67
+
68
+ By default the 'transformer' will be quantized to 8 bits for all profiles. If you don't want that, pass the optional parameter *quantizeTransformer = False*.
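+
+ For example, a minimal sketch of a quick setup that keeps the recommended profile but skips the 8-bit quantization of the 'transformer' (this assumes *quantizeTransformer* is accepted as a keyword argument by *offload.profile*, as described above):
+ ```
+ import torch
+ from diffusers import FluxPipeline
+ from mmgp import offload, profile_type
+
+ # Load the pipeline on the CPU first, as required by mmgp
+ pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-schnell",
+                                     torch_dtype=torch.bfloat16).to("cpu")
+
+ # Recommended profile, but keep the 'transformer' unquantized
+ offload.profile(pipe, profile_type.HighRAM_LowVRAM_Fast, quantizeTransformer=False)
+ ```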
69
+
70
+ ## Alternatively, you may create your own profile with specific parameters
71
+
72
+ For example:
73
+ ```
74
+ from mmgp import offload
75
+ offload.all(pipe, pinInRAM=True, modelsToQuantize=["text_encoder_2"])
76
+ ```
77
+ - pinInRAM: Boolean (applied to all models) or a list of model ids to pin in RAM. Every model pinned in RAM loads much faster (about 4x), but this requires more RAM
78
+ - modelsToQuantize: list of model ids to quantize on the fly. If the corresponding model is already quantized, this option will be ignored.
79
+ - quantizeTransformer: boolean, True by default. The 'transformer' model in the pipe usually contains the video or image generator and is quantized on the fly to 8 bits by default. If you want to save disk space and reduce loading time, you may prefer to load a prequantized model directly. If you don't want to quantize the image generator at all, set *quantizeTransformer* to *False* to turn off on-the-fly quantization.
80
+ - budgets: either a number of megabytes (applied to all models; 0 means an unlimited budget) or a dictionary that maps model ids to megabytes: defines the VRAM budget allocated to each model (the actual amount reserved is about 2.5 times this number). The smaller this number, the more VRAM is left for image data / longer videos, but also the slower the generation, since there will be lots of loading / unloading between RAM and VRAM. Turning on pinInRAM greatly accelerates (about 4x) small budgets, but usually consumes 50% more RAM. See the sketch below.
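+
+ As an illustration of these parameters, here is a hypothetical custom setup (the model ids and budget values are only examples; adjust them to the models actually present in your pipe):
+ ```
+ from mmgp import offload
+
+ # Pin only the text encoders in RAM, quantize the biggest one on the fly,
+ # and cap the VRAM budget per model in megabytes (0 = no limit for the VAE).
+ offload.all(
+     pipe,
+     pinInRAM=["text_encoder", "text_encoder_2"],
+     modelsToQuantize=["text_encoder_2"],
+     budgets={"transformer": 3000, "text_encoder_2": 1500, "vae": 0},
+ )
+ ```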
81
+
82
+
83
+ ## Going further
84
+
85
+ The module includes several tools to package a light version of your favorite video / image generator:
86
+ - *save_model(model, file_path, do_quantize = False, quantization_type = qint8 )*\
87
+ Save the tensors of a model already loaded in memory in safetensors format (much faster to reload). You can save it in a quantized format (the default qint8 quantization is recommended).
88
+ If the model is saved in a quantized format, an extra file ending with '_map.json' will be created; it is needed to reload the model later.
89
+
90
+ - *load_model_data(model, file_path: str)*\
91
+ Load into RAM the tensor data of a model that has already been initialized without data. Quantized models previously saved with save_model are detected and handled automatically.
92
+
93
+ - *fast_load_transformers_model(model_path: str)*\
94
+ Initialize (build the model hierarchy in memory) and quickly load the corresponding tensors of a 'transformers' library model.
95
+ The advantage over the original *from_pretrained* function is that the full model can fit into a single file with a filename of your choosing (therefore you can have multiple 'transformers' versions of the same model in the same directory), and prequantized models are handled transparently.
96
+ Please note that you need to keep the original transformers 'config.json' file in the same directory.
97
+
98
+
99
+ The typical workflow will be:
100
+ 1) Temporarily insert a *save_model* call just after a model has been fully loaded, to save a copy of the model (optionally quantized).
101
+ 2) Replace the full initializing / loading logic with *fast_load_transformers_model* (if there is a *from_pretrained* call on a transformers object), or replace only the tensor loading calls (*torch.load_model_file* and *torch.load_state_dict*) with *load_model_data* after the initializing logic, as in the sketch below.
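+
+ A sketch of this two-step workflow, assuming the helpers are exposed on the *offload* object and using placeholder file names:
+ ```
+ from mmgp import offload
+ from optimum.quanto import qint8
+
+ # Step 1 (run once): right after the transformer has been fully loaded,
+ # save a quantized copy; a 'flux-transformer-qint8_map.json' file is written too.
+ offload.save_model(pipe.transformer, "flux-transformer-qint8.safetensors",
+                    do_quantize=True, quantization_type=qint8)
+
+ # Step 2 (every run): skip the original loading logic and reload the light copy
+ # into an already initialized, empty model.
+ offload.load_model_data(pipe.transformer, "flux-transformer-qint8.safetensors")
+
+ # For a 'transformers' model (e.g. a T5 text encoder), a single call both builds
+ # the hierarchy and loads the tensors; keep the original 'config.json' next to it.
+ text_encoder_2 = offload.fast_load_transformers_model("t5-xxl-encoder-qint8.safetensors")
+ ```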
102
+
103
+ ## Special cases
104
+ Sometimes there isn't an explicit pipe object, because each submodel is loaded separately in the main app. If this is the case, you need to create a dictionary that manually maps all the models.\
105
+ For instance:
106
+
107
+
108
+ - for flux derived models:
109
+ ```
110
+ pipe = { "text_encoder": clip, "text_encoder_2": t5, "transformer": model, "vae":ae }
111
+ ```
112
+ - for mochi:
113
+ ```
114
+ pipe = { "text_encoder": self.text_encoder, "transformer": self.dit, "vae":self.decoder }
115
+ ```
116
+
117
+
118
+ Please note that there should always be one model whose id is 'transformer'. It corresponds to the main image / video model, which usually needs to be quantized (this is done on the fly by default when loading the model).
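+
+ Putting it together, a minimal sketch for an app without a pipeline object (here *clip*, *t5*, *model* and *ae* stand for the submodels your app has already loaded on the CPU):
+ ```
+ from mmgp import offload, profile_type
+
+ # Build the pipe dictionary by hand; the main generator must use the id 'transformer'
+ pipe = {"text_encoder": clip, "text_encoder_2": t5, "transformer": model, "vae": ae}
+
+ # The profiles work on this dictionary exactly as they do on a diffusers pipeline
+ offload.profile(pipe, profile_type.HighRAM_LowVRAM_Fast)
+ ```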
119
+
120
+ Be careful: lots of models use T5 XXL as a text encoder. However, quite often their pipeline configurations point at the official Google T5 XXL repository,
121
+ which hosts a huge 40 GB model to download and load. This is cumbersome, as it is a 32-bit model that also contains the decoder part of T5, which is not used.
122
+ I suggest you use instead one of the 16-bit, encoder-only versions available, for instance:
123
+ ```
124
+ text_encoder_2 = T5EncoderModel.from_pretrained("black-forest-labs/FLUX.1-dev", subfolder="text_encoder_2", torch_dtype=torch.float16)
125
+ ```
126
+
127
+ Sometimes just providing the pipe won't be sufficient, as you may need to change the code of the core model:
128
+ - For instance, you may need to disable CPU offload logic that already exists in the app (such as manual calls that move tensors between cuda and the cpu)
129
+ - mmgp tries to fake the device as being "cuda", but sometimes some code won't be fooled: it will create tensors on the cpu device, and this may cause issues (see the sketch below).
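+
+ For example, code that hard-codes the device will not be fooled; a generic (non mmgp-specific) fix is to derive the device from the module's own parameters, as in this hypothetical helper:
+ ```
+ import torch
+
+ def build_positions(model: torch.nn.Module, length: int) -> torch.Tensor:
+     # Instead of torch.arange(length, device="cpu"), follow the model's device
+     # so the tensor lands wherever the offloader has placed the weights.
+     device = next(model.parameters()).device
+     return torch.arange(length, device=device)
+ ```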
130
+
131
+ You are free to use my module for non-commercial use as long as you give me proper credit. You may contact me on Twitter: @deepbeepmeep
132
+
133
+ Thanks to
134
+ ---------
135
+ - Huggingface / accelerate for the hooking examples
136
+ - Huggingface / quanto for their very useful quantizer
137
+ - gau-nernst for his RAM pinning samples
mmgp-2.0.0/README.md ADDED
@@ -0,0 +1,123 @@
1
+
2
+ <p align="center">
3
+ <H2>Memory Management 2.0 for the GPU Poor by DeepBeepMeep</H2>
4
+ </p>
5
+
6
+
7
+ This module contains multiple optimisations so that models such as Flux (and derived), Mochi, CogView, HunyuanVideo, ... can run smoothly on a GPU with only 12 to 24 GB of VRAM.
8
+ It is a replacement for the accelerate library, which should in theory manage offloading but doesn't work properly with models that are loaded / unloaded several
9
+ times in a pipe (e.g. a VAE).
10
+
11
+ Requirements:
12
+ - VRAM: minimum 12 GB, recommended 24 GB (RTX 3090/ RTX 4090)
13
+ - RAM: minimum 24 GB, recommended 48 GB
14
+
15
+ This module features 5 profiles so that you can run the model at a decent speed on a low-end consumer config (32 GB of RAM and 12 GB of VRAM) and at a very good speed on a high-end consumer config (48 GB of RAM and 24 GB of VRAM).
16
+
17
+ Each profile may use the following:
18
+ - Smart preloading of models in RAM to reduce RAM requirements
19
+ - Smart automated loading / unloading of models in the GPU to avoid unloading models that may be needed again soon
20
+ - Smart slicing of models to reduce memory occupied by models in the VRAM
21
+ - Ability to pin models in reserved RAM to accelerate transfers to VRAM
22
+ - Async transfers to VRAM to avoid a pause when loading a new slice of a model
23
+ - Automated on-the-fly quantization, or the ability to load already quantized models
24
+
25
+ ## Installation
26
+ First you need to install the module in your current project with:
27
+ ```shell
28
+ pip install mmgp
29
+ ```
30
+
31
+
32
+ ## Usage
33
+
34
+ It is almost plug and play: the module just needs to be invoked from the main app right after the model pipeline has been created.
35
+ 1) First make sure that the pipeline explicitly loads the models on the CPU device, for instance:
36
+ ```
37
+ pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16).to("cpu")
38
+ ```
39
+
40
+ 2) Once every potential LoRA has been loaded and merged, add the following lines for a quick setup:
41
+ ```
42
+ from mmgp import offload, profile_type
43
+ offload.profile(pipe, profile_type.HighRAM_LowVRAM_Fast)
44
+ ```
45
+
46
+ You can choose between 5 profiles depending on your hardware:
47
+ - HighRAM_HighVRAM_Fastest: at least 48 GB of RAM and 24 GB of VRAM: the fastest profile, well suited for an RTX 3090 / RTX 4090
48
+ - HighRAM_LowVRAM_Fast (recommended): at least 48 GB of RAM and 12 GB of VRAM: a bit slower, better suited for an RTX 3070/3080/4070/4080
49
+ or for an RTX 3090 / RTX 4090 with large picture batches or long videos
50
+ - LowRAM_HighVRAM_Medium: at least 32 GB of RAM and 24 GB of VRAM: so-so speed, but well suited to an RTX 3090 / RTX 4090 with limited RAM
51
+ - LowRAM_LowVRAM_Slow: at least 32 GB of RAM and 12 GB of VRAM: if you have little VRAM or want to generate longer videos
52
+ - VerylowRAM_LowVRAM_Slowest: at least 24 GB of RAM and 10 GB of VRAM: if you don't have much of either, it won't be fast, but it may still work
53
+
54
+ By default the 'transformer' will be quantized to 8 bits for all profiles. If you don't want that, pass the optional parameter *quantizeTransformer = False*.
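+
+ For example, a minimal sketch of a quick setup that keeps the recommended profile but skips the 8-bit quantization of the 'transformer' (this assumes *quantizeTransformer* is accepted as a keyword argument by *offload.profile*, as described above):
+ ```
+ import torch
+ from diffusers import FluxPipeline
+ from mmgp import offload, profile_type
+
+ # Load the pipeline on the CPU first, as required by mmgp
+ pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-schnell",
+                                     torch_dtype=torch.bfloat16).to("cpu")
+
+ # Recommended profile, but keep the 'transformer' unquantized
+ offload.profile(pipe, profile_type.HighRAM_LowVRAM_Fast, quantizeTransformer=False)
+ ```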
55
+
56
+ ## Alternatively, you may create your own profile with specific parameters
57
+
58
+ For example:
59
+ ```
60
+ from mmgp import offload
61
+ offload.all(pipe, pinInRAM=True, modelsToQuantize=["text_encoder_2"])
62
+ ```
63
+ - pinInRAM: Boolean (applied to all models) or a list of model ids to pin in RAM. Every model pinned in RAM loads much faster (about 4x), but this requires more RAM
64
+ - modelsToQuantize: list of model ids to quantize on the fly. If the corresponding model is already quantized, this option will be ignored.
65
+ - quantizeTransformer: boolean, True by default. The 'transformer' model in the pipe usually contains the video or image generator and is quantized on the fly to 8 bits by default. If you want to save disk space and reduce loading time, you may prefer to load a prequantized model directly. If you don't want to quantize the image generator at all, set *quantizeTransformer* to *False* to turn off on-the-fly quantization.
66
+ - budgets: either a number of megabytes (applied to all models; 0 means an unlimited budget) or a dictionary that maps model ids to megabytes: defines the VRAM budget allocated to each model (the actual amount reserved is about 2.5 times this number). The smaller this number, the more VRAM is left for image data / longer videos, but also the slower the generation, since there will be lots of loading / unloading between RAM and VRAM. Turning on pinInRAM greatly accelerates (about 4x) small budgets, but usually consumes 50% more RAM. See the sketch below.
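+
+ As an illustration of these parameters, here is a hypothetical custom setup (the model ids and budget values are only examples; adjust them to the models actually present in your pipe):
+ ```
+ from mmgp import offload
+
+ # Pin only the text encoders in RAM, quantize the biggest one on the fly,
+ # and cap the VRAM budget per model in megabytes (0 = no limit for the VAE).
+ offload.all(
+     pipe,
+     pinInRAM=["text_encoder", "text_encoder_2"],
+     modelsToQuantize=["text_encoder_2"],
+     budgets={"transformer": 3000, "text_encoder_2": 1500, "vae": 0},
+ )
+ ```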
67
+
68
+
69
+ ## Going further
70
+
71
+ The module includes several tools to package a light version of your favorite video / image generator:
72
+ - *save_model(model, file_path, do_quantize = False, quantization_type = qint8 )*\
73
+ Save the tensors of a model already loaded in memory in safetensors format (much faster to reload). You can save it in a quantized format (the default qint8 quantization is recommended).
74
+ If the model is saved in a quantized format, an extra file ending with '_map.json' will be created; it is needed to reload the model later.
75
+
76
+ - *load_model_data(model, file_path: str)*\
77
+ Load into RAM the tensor data of a model that has already been initialized without data. Quantized models previously saved with save_model are detected and handled automatically.
78
+
79
+ - *fast_load_transformers_model(model_path: str)*\
80
+ Initialize (build the model hierarchy in memory) and quickly load the corresponding tensors of a 'transformers' library model.
81
+ The advantage over the original *from_pretrained* function is that the full model can fit into a single file with a filename of your choosing (therefore you can have multiple 'transformers' versions of the same model in the same directory), and prequantized models are handled transparently.
82
+ Please note that you need to keep the original transformers 'config.json' file in the same directory.
83
+
84
+
85
+ The typical workflow will be:
86
+ 1) Temporarily insert a *save_model* call just after a model has been fully loaded, to save a copy of the model (optionally quantized).
87
+ 2) Replace the full initializing / loading logic with *fast_load_transformers_model* (if there is a *from_pretrained* call on a transformers object), or replace only the tensor loading calls (*torch.load_model_file* and *torch.load_state_dict*) with *load_model_data* after the initializing logic, as in the sketch below.
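+
+ A sketch of this two-step workflow, assuming the helpers are exposed on the *offload* object and using placeholder file names:
+ ```
+ from mmgp import offload
+ from optimum.quanto import qint8
+
+ # Step 1 (run once): right after the transformer has been fully loaded,
+ # save a quantized copy; a 'flux-transformer-qint8_map.json' file is written too.
+ offload.save_model(pipe.transformer, "flux-transformer-qint8.safetensors",
+                    do_quantize=True, quantization_type=qint8)
+
+ # Step 2 (every run): skip the original loading logic and reload the light copy
+ # into an already initialized, empty model.
+ offload.load_model_data(pipe.transformer, "flux-transformer-qint8.safetensors")
+
+ # For a 'transformers' model (e.g. a T5 text encoder), a single call both builds
+ # the hierarchy and loads the tensors; keep the original 'config.json' next to it.
+ text_encoder_2 = offload.fast_load_transformers_model("t5-xxl-encoder-qint8.safetensors")
+ ```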
88
+
89
+ ## Special cases
90
+ Sometimes there isn't an explicit pipe object, because each submodel is loaded separately in the main app. If this is the case, you need to create a dictionary that manually maps all the models.\
91
+ For instance:
92
+
93
+
94
+ - for flux derived models:
95
+ ```
96
+ pipe = { "text_encoder": clip, "text_encoder_2": t5, "transformer": model, "vae":ae }
97
+ ```
98
+ - for mochi:
99
+ ```
100
+ pipe = { "text_encoder": self.text_encoder, "transformer": self.dit, "vae":self.decoder }
101
+ ```
102
+
103
+
104
+ Please note that there should always be one model whose id is 'transformer'. It corresponds to the main image / video model, which usually needs to be quantized (this is done on the fly by default when loading the model).
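+
+ Putting it together, a minimal sketch for an app without a pipeline object (here *clip*, *t5*, *model* and *ae* stand for the submodels your app has already loaded on the CPU):
+ ```
+ from mmgp import offload, profile_type
+
+ # Build the pipe dictionary by hand; the main generator must use the id 'transformer'
+ pipe = {"text_encoder": clip, "text_encoder_2": t5, "transformer": model, "vae": ae}
+
+ # The profiles work on this dictionary exactly as they do on a diffusers pipeline
+ offload.profile(pipe, profile_type.HighRAM_LowVRAM_Fast)
+ ```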
105
+
106
+ Be careful: lots of models use T5 XXL as a text encoder. However, quite often their pipeline configurations point at the official Google T5 XXL repository,
107
+ which hosts a huge 40 GB model to download and load. This is cumbersome, as it is a 32-bit model that also contains the decoder part of T5, which is not used.
108
+ I suggest you use instead one of the 16-bit, encoder-only versions available, for instance:
109
+ ```
110
+ text_encoder_2 = T5EncoderModel.from_pretrained("black-forest-labs/FLUX.1-dev", subfolder="text_encoder_2", torch_dtype=torch.float16)
111
+ ```
112
+
113
+ Sometimes just providing the pipe won't be sufficient, as you may need to change the code of the core model:
114
+ - For instance, you may need to disable CPU offload logic that already exists in the app (such as manual calls that move tensors between cuda and the cpu)
115
+ - mmgp tries to fake the device as being "cuda", but sometimes some code won't be fooled: it will create tensors on the cpu device, and this may cause issues (see the sketch below).
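+
+ For example, code that hard-codes the device will not be fooled; a generic (non mmgp-specific) fix is to derive the device from the module's own parameters, as in this hypothetical helper:
+ ```
+ import torch
+
+ def build_positions(model: torch.nn.Module, length: int) -> torch.Tensor:
+     # Instead of torch.arange(length, device="cpu"), follow the model's device
+     # so the tensor lands wherever the offloader has placed the weights.
+     device = next(model.parameters()).device
+     return torch.arange(length, device=device)
+ ```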
116
+
117
+ You are free to use my module for non-commercial use as long as you give me proper credit. You may contact me on Twitter: @deepbeepmeep
118
+
119
+ Thanks to
120
+ ---------
121
+ - Huggingface / accelerate for the hooking examples
122
+ - Huggingface / quanto for their very useful quantizer
123
+ - gau-nernst for his RAM pinning samples
mmgp-2.0.0/pyproject.toml CHANGED
@@ -1,6 +1,6 @@
1
1
  [project]
2
2
  name = "mmgp"
3
- version = "1.2.0"
3
+ version = "2.0.0"
4
4
  authors = [
5
5
  { name = "deepbeepmeep", email = "deepbeepmeep@yahoo.com" },
6
6
  ]
@@ -11,5 +11,6 @@ license = { file = "LICENSE.md" }
11
11
  dependencies = [
12
12
  "torch >= 2.1.0",
13
13
  "optimum-quanto",
14
+ "accelerate"
14
15
  ]
15
16
 
@@ -1,2 +1,3 @@
1
1
  torch>=2.1.0
2
2
  optimum-quanto
3
+ accelerate