mmgp 2.0.4__tar.gz → 3.0.0__tar.gz
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Potentially problematic release: this version of mmgp might be problematic.
- {mmgp-2.0.4/src/mmgp.egg-info → mmgp-3.0.0}/PKG-INFO +35 -17
- {mmgp-2.0.4 → mmgp-3.0.0}/README.md +32 -16
- {mmgp-2.0.4 → mmgp-3.0.0}/pyproject.toml +4 -2
- mmgp-3.0.0/src/mmgp/__init__.py +22 -0
- mmgp-3.0.0/src/mmgp/offload.py +1472 -0
- mmgp-3.0.0/src/mmgp/safetensors2.py +387 -0
- {mmgp-2.0.4 → mmgp-3.0.0/src/mmgp.egg-info}/PKG-INFO +35 -17
- {mmgp-2.0.4 → mmgp-3.0.0}/src/mmgp.egg-info/SOURCES.txt +3 -1
- {mmgp-2.0.4 → mmgp-3.0.0}/src/mmgp.egg-info/requires.txt +2 -0
- mmgp-2.0.4/src/mmgp.py +0 -951
- {mmgp-2.0.4 → mmgp-3.0.0}/LICENSE.md +0 -0
- {mmgp-2.0.4 → mmgp-3.0.0}/setup.cfg +0 -0
- {mmgp-2.0.4 → mmgp-3.0.0}/src/__init__.py +0 -0
- {mmgp-2.0.4 → mmgp-3.0.0}/src/mmgp.egg-info/dependency_links.txt +0 -0
- {mmgp-2.0.4 → mmgp-3.0.0}/src/mmgp.egg-info/top_level.txt +0 -0
PKG-INFO:
@@ -1,6 +1,6 @@
 Metadata-Version: 2.1
 Name: mmgp
-Version: 2.0.4
+Version: 3.0.0
 Summary: Memory Management for the GPU Poor
 Author-email: deepbeepmeep <deepbeepmeep@yahoo.com>
 License: GNU GENERAL PUBLIC LICENSE
@@ -11,10 +11,12 @@ License-File: LICENSE.md
 Requires-Dist: torch>=2.1.0
 Requires-Dist: optimum-quanto
 Requires-Dist: accelerate
+Requires-Dist: safetensors
+Requires-Dist: psutil
 
 
 <p align="center">
-<H2>Memory Management
+<H2>Memory Management 3.0 for the GPU Poor by DeepBeepMeep</H2>
 </p>
 
 
@@ -26,15 +28,16 @@ Requirements:
 - VRAM: minimum 12 GB, recommended 24 GB (RTX 3090/ RTX 4090)
 - RAM: minimum 24 GB, recommended 48 GB
 
-This module features 5 profiles in order to able to run the model at a decent speed on a low end consumer config (32 GB of RAM and 12 VRAM) and to run it at a very good speed on a high end consumer config (48 GB of RAM and 24 GB of VRAM)
+This module features 5 profiles in order to able to run the model at a decent speed on a low end consumer config (32 GB of RAM and 12 VRAM) and to run it at a very good speed on a high end consumer config (48 GB of RAM and 24 GB of VRAM).\
+These RAM requirements are for Linux systems. Due to different memory management Windows will require an extra 16 GB of RAM to run the corresponding profile.
 
 Each profile may use the following:
--
+- Low RAM consumption (thanks to a rewritten safetensors library) that allows low RAM on the fly quantization
 - Smart automated loading / unloading of models in the GPU to avoid unloading models that may be needed again soon
 - Smart slicing of models to reduce memory occupied by models in the VRAM
-- Ability to pin models
+- Ability to pin models to reserved RAM to accelerate transfers to VRAM
 - Async transfers to VRAM to avoid a pause when loading a new slice of a model
-- Automated on the fly quantization or ability to load quantized models
+- Automated on the fly quantization or ability to load pre quantized models
 
 ## Installation
 First you need to install the module in your current project with:
@@ -67,33 +70,48 @@ You can choose between 5 profiles depending on your hardware:
 
 By default the 'transformer' will be quantized to 8 bits for all profiles. If you don't want that you may specify the optional parameter *quantizeTransformer = False*.
 
+Every parameter set automatically by a profile can be overridden with one or multiple parameters accepted by *offload.all* (see below):
+```
+from mmgp import offload, profile_type
+offload.profile(pipe, profile_type.HighRAM_LowVRAM_Fast, budgets = 1000)
+```
+If you want to know which parameter are set by one specific profile you can use the parameter *verboseLevel=2*
+
 ## Alternatively you may want to create your own profile with specific parameters:
 
 For example:
 ```
 from mmgp import offload
-offload.all(pipe,
+offload.all(pipe, pinnedMemory=True, ExtraModelsToQuantize = ["text_encoder_2"] )
 ```
--
-- modelsToQuantize: list of model ids to quantize on the fly. If the corresponding model is already quantized, this option will be ignored.
+- pinnedMemory: Boolean (for all models) or List of models ids to pin to RAM. Every model pinned to RAM will load much faster (up to 2 times) but this requires more RAM
 - quantizeTransformer: boolean by default True. The 'transformer' model in the pipe contains usually the video or image generator is by defaut; quantized on the fly by default to 8 bits. If you want to save time on disk and reduce the loading time, you may want to load directly a prequantized model. If you don't want to quantize the image generator, you need to set the option *quantizeTransformer* to *False* to turn off on the fly quantization.
--
+- extraModelsToQuantize: list of additional modelids of models to quantize on the fly. If the corresponding model is already quantized, this option will be ignored.
+- budgets: either a number in mega bytes (for all models, if 0 unlimited budget) or a dictionary that maps model ids to mega bytes : define the budget in VRAM (in fact the real number is 1.5 this number or 2.5 if asyncTransfers are also enabled) that is allocated in VRAM for each model.
+The smaller this number, the more VRAM left for image data / longer video but also the slower because there will be lots of loading / unloading between the RAM and the VRAM. If model is too big to fit in a budget, it will be broken down in multiples parts that will be unloaded / loaded consequently. The speed of low budget can be increased (up to 2 times) by turning on the options pinnedMemory and asyncTransfers.
+- asyncTransfers: boolean, load to the GPU the next model part while the current part is being processed. This requires twice the budget if any is defined. This may increase speed by 20% (mostly visible on fast modern GPUs).
+- verboseLevel: number between 0 and 2 (1 by default), provides various level of feedback of the different processes
+- perc_reserved_mem_max: a float below 0.5 (or 0 for auto), may be reduced to a lower number if any out of memory is triggered while using pinnedMemory
+- compile (experimental): list of model ids to compile, may accelerate processing (or not) depending on the type of GPU. As of 01/01/2025 it will work only on Linux since compilation relies on Triton which is not yet supported on Windows
 
+If you are short on RAM and plan to work with quantized models, it is recommended to load pre-quantized models direclty rather than using on the fly quantization (especially on Windows) as due to the way safetensors work almost twice the amount of RAM may be needed for the loading of the model.
 
 ## Going further
 
 The module includes several tools to package a light version of your favorite video / image generator:
 - *save_model(model, file_path, do_quantize = False, quantization_type = qint8 )*\
 Save tensors of a model already loaded in memory in a safetensor format (much faster to reload). You can save it in a quantized format (default qint8 quantization recommended).
-
+The resulting safetensor file will contain extra fields in its metadata such as the quantization map and its configuration, so you will be able to move the file around without files such as *config.json* or *file_map.json*.
+You will need *load_model_data* or *fast_load_transformers_model* to read the file again . You may also load it using the default *safetensor* librar however you will need to provide in the same directory any complementary file that are usually requested (for instance *config.json*)
+
+- *load_model_data(model, file_path: str, do_quantize = False, quantization_type = qint8, pinToRAM = False, partialPin = False)*\
+Load the tensors data of a model in RAM of a model already initialized with no data. Detect and handle quantized models saved previously with *save_model*.A model can also be quantized on the fly while being loaded. The model which is loaded can be pinned to RAM while it is loaded, this is more RAM efficient than pinning tensors later using *offline.all* or *offline.profile*
 
-- *
-
+- *fast_load_transformers_model(model_path: str, do_quantize = False, quantization_type = qint8, pinToRAM = False, partialPin = False)*\
+Initialize (build the model hierarchy in memory) and fast load the corresponding tensors of a 'transformers' or 'diffusers' library model.
+The advantages over the original *from_pretrained* method is that a full model can fit into a single file with a filename of your choosing (thefore you can have multiple 'transformers' versions of the same model in the same directory) and prequantized models are processed in a transparent way.
+Last but not least, you can also on the fly pin to RAM the whole model or the most important part of it (partialPin = True) in a more efficient way (faster and requires less RAM) than if you did through *offload.all* or *offload.profile*.
 
-- *fast_load_transformers_model(model_path: str)*\
-Initialize (build the model hierarchy in memory) and fast load the corresponding tensors of a 'transformers' library model.
-The advantages over the original *from_pretrained* method is that the full model can fit into a single file with a filename of your choosing (thefore you can have multiple 'transformers' versions of the same model in the same directory) and prequantized model are processed in a transparent way.
-Please note that you need to keep the original file transformers 'config.json' in the same directory.
 
 
 The typical workflow wil be:
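To make the new *offload.all* options above concrete, here is a minimal sketch of a hand-rolled profile. It is not taken from the package itself: `pipe` is assumed to be an already-built diffusers-style pipeline whose sub-models include a 'transformer' and a 'text_encoder_2', and the parameter casing follows the README example.
```
from mmgp import offload

# 'pipe' is assumed to be an existing diffusers-style pipeline object (not shown here).
offload.all(
    pipe,
    pinnedMemory = True,                         # pin every model to reserved RAM for faster RAM -> VRAM transfers
    ExtraModelsToQuantize = ["text_encoder_2"],  # quantize this model on the fly in addition to the 'transformer'
    budgets = {"transformer": 1000},             # VRAM budget in MB for the 'transformer'; smaller = more VRAM left for data
    asyncTransfers = True,                       # preload the next model slice while the current one is computing
    verboseLevel = 2,                            # print which parameters each option / profile actually sets
)
```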
README.md:
@@ -1,6 +1,6 @@
 
 <p align="center">
-<H2>Memory Management
+<H2>Memory Management 3.0 for the GPU Poor by DeepBeepMeep</H2>
 </p>
 
 
@@ -12,15 +12,16 @@ Requirements:
 - VRAM: minimum 12 GB, recommended 24 GB (RTX 3090/ RTX 4090)
 - RAM: minimum 24 GB, recommended 48 GB
 
-This module features 5 profiles in order to able to run the model at a decent speed on a low end consumer config (32 GB of RAM and 12 VRAM) and to run it at a very good speed on a high end consumer config (48 GB of RAM and 24 GB of VRAM)
+This module features 5 profiles in order to able to run the model at a decent speed on a low end consumer config (32 GB of RAM and 12 VRAM) and to run it at a very good speed on a high end consumer config (48 GB of RAM and 24 GB of VRAM).\
+These RAM requirements are for Linux systems. Due to different memory management Windows will require an extra 16 GB of RAM to run the corresponding profile.
 
 Each profile may use the following:
--
+- Low RAM consumption (thanks to a rewritten safetensors library) that allows low RAM on the fly quantization
 - Smart automated loading / unloading of models in the GPU to avoid unloading models that may be needed again soon
 - Smart slicing of models to reduce memory occupied by models in the VRAM
-- Ability to pin models
+- Ability to pin models to reserved RAM to accelerate transfers to VRAM
 - Async transfers to VRAM to avoid a pause when loading a new slice of a model
-- Automated on the fly quantization or ability to load quantized models
+- Automated on the fly quantization or ability to load pre quantized models
 
 ## Installation
 First you need to install the module in your current project with:
@@ -53,33 +54,48 @@ You can choose between 5 profiles depending on your hardware:
 
 By default the 'transformer' will be quantized to 8 bits for all profiles. If you don't want that you may specify the optional parameter *quantizeTransformer = False*.
 
+Every parameter set automatically by a profile can be overridden with one or multiple parameters accepted by *offload.all* (see below):
+```
+from mmgp import offload, profile_type
+offload.profile(pipe, profile_type.HighRAM_LowVRAM_Fast, budgets = 1000)
+```
+If you want to know which parameter are set by one specific profile you can use the parameter *verboseLevel=2*
+
 ## Alternatively you may want to create your own profile with specific parameters:
 
 For example:
 ```
 from mmgp import offload
-offload.all(pipe,
+offload.all(pipe, pinnedMemory=True, ExtraModelsToQuantize = ["text_encoder_2"] )
 ```
--
-- modelsToQuantize: list of model ids to quantize on the fly. If the corresponding model is already quantized, this option will be ignored.
+- pinnedMemory: Boolean (for all models) or List of models ids to pin to RAM. Every model pinned to RAM will load much faster (up to 2 times) but this requires more RAM
 - quantizeTransformer: boolean by default True. The 'transformer' model in the pipe contains usually the video or image generator is by defaut; quantized on the fly by default to 8 bits. If you want to save time on disk and reduce the loading time, you may want to load directly a prequantized model. If you don't want to quantize the image generator, you need to set the option *quantizeTransformer* to *False* to turn off on the fly quantization.
--
+- extraModelsToQuantize: list of additional modelids of models to quantize on the fly. If the corresponding model is already quantized, this option will be ignored.
+- budgets: either a number in mega bytes (for all models, if 0 unlimited budget) or a dictionary that maps model ids to mega bytes : define the budget in VRAM (in fact the real number is 1.5 this number or 2.5 if asyncTransfers are also enabled) that is allocated in VRAM for each model.
+The smaller this number, the more VRAM left for image data / longer video but also the slower because there will be lots of loading / unloading between the RAM and the VRAM. If model is too big to fit in a budget, it will be broken down in multiples parts that will be unloaded / loaded consequently. The speed of low budget can be increased (up to 2 times) by turning on the options pinnedMemory and asyncTransfers.
+- asyncTransfers: boolean, load to the GPU the next model part while the current part is being processed. This requires twice the budget if any is defined. This may increase speed by 20% (mostly visible on fast modern GPUs).
+- verboseLevel: number between 0 and 2 (1 by default), provides various level of feedback of the different processes
+- perc_reserved_mem_max: a float below 0.5 (or 0 for auto), may be reduced to a lower number if any out of memory is triggered while using pinnedMemory
+- compile (experimental): list of model ids to compile, may accelerate processing (or not) depending on the type of GPU. As of 01/01/2025 it will work only on Linux since compilation relies on Triton which is not yet supported on Windows
 
+If you are short on RAM and plan to work with quantized models, it is recommended to load pre-quantized models direclty rather than using on the fly quantization (especially on Windows) as due to the way safetensors work almost twice the amount of RAM may be needed for the loading of the model.
 
 ## Going further
 
 The module includes several tools to package a light version of your favorite video / image generator:
 - *save_model(model, file_path, do_quantize = False, quantization_type = qint8 )*\
 Save tensors of a model already loaded in memory in a safetensor format (much faster to reload). You can save it in a quantized format (default qint8 quantization recommended).
-
+The resulting safetensor file will contain extra fields in its metadata such as the quantization map and its configuration, so you will be able to move the file around without files such as *config.json* or *file_map.json*.
+You will need *load_model_data* or *fast_load_transformers_model* to read the file again . You may also load it using the default *safetensor* librar however you will need to provide in the same directory any complementary file that are usually requested (for instance *config.json*)
+
+- *load_model_data(model, file_path: str, do_quantize = False, quantization_type = qint8, pinToRAM = False, partialPin = False)*\
+Load the tensors data of a model in RAM of a model already initialized with no data. Detect and handle quantized models saved previously with *save_model*.A model can also be quantized on the fly while being loaded. The model which is loaded can be pinned to RAM while it is loaded, this is more RAM efficient than pinning tensors later using *offline.all* or *offline.profile*
 
-- *
-
+- *fast_load_transformers_model(model_path: str, do_quantize = False, quantization_type = qint8, pinToRAM = False, partialPin = False)*\
+Initialize (build the model hierarchy in memory) and fast load the corresponding tensors of a 'transformers' or 'diffusers' library model.
+The advantages over the original *from_pretrained* method is that a full model can fit into a single file with a filename of your choosing (thefore you can have multiple 'transformers' versions of the same model in the same directory) and prequantized models are processed in a transparent way.
+Last but not least, you can also on the fly pin to RAM the whole model or the most important part of it (partialPin = True) in a more efficient way (faster and requires less RAM) than if you did through *offload.all* or *offload.profile*.
 
-- *fast_load_transformers_model(model_path: str)*\
-Initialize (build the model hierarchy in memory) and fast load the corresponding tensors of a 'transformers' library model.
-The advantages over the original *from_pretrained* method is that the full model can fit into a single file with a filename of your choosing (thefore you can have multiple 'transformers' versions of the same model in the same directory) and prequantized model are processed in a transparent way.
-Please note that you need to keep the original file transformers 'config.json' in the same directory.
 
 
 The typical workflow wil be:
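The save / load helpers described in the README changes above combine into a simple packaging round trip. The sketch below is illustrative only: it assumes these helpers are exposed on the *offload* class (as the other README calls are), and the file name and `transformer` variable are placeholders.
```
from mmgp import offload

# Assumption: 'transformer' is a model object already loaded in memory (e.g. the video
# or image generator of a pipeline). Save it once as a single, quantized safetensors
# file; the quantization map and configuration go into the file's metadata.
offload.save_model(transformer, "transformer_quanto_int8.safetensors", do_quantize = True)

# On later runs, rebuild the 'transformers' / 'diffusers' model and fast-load its tensors
# from that single file, pinning the most important weights to RAM on the fly.
transformer = offload.fast_load_transformers_model(
    "transformer_quanto_int8.safetensors",
    partialPin = True,
)
```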
pyproject.toml:
@@ -1,6 +1,6 @@
 [project]
 name = "mmgp"
-version = "2.0.4"
+version = "3.0.0"
 authors = [
   { name = "deepbeepmeep", email = "deepbeepmeep@yahoo.com" },
 ]
@@ -11,6 +11,8 @@ license = { file = "LICENSE.md" }
 dependencies = [
     "torch >= 2.1.0",
     "optimum-quanto",
-    "accelerate"
+    "accelerate",
+    "safetensors",
+    "psutil"
 ]
 
mmgp-3.0.0/src/mmgp/__init__.py (new file):
@@ -0,0 +1,22 @@
+import enum
+class profile_type(int, enum.Enum):
+    @staticmethod
+    def tostr(v):
+        if v == 1:
+            s= "HighRAM_HighVRAM_Fastest"
+        elif v == 2:
+            s ="HighRAM_LowVRAM_Fast"
+        elif v == 3:
+            s = "LowRAM_HighVRAM_Medium"
+        elif v == 4:
+            s = "LowRAM_LowVRAM_Slow"
+        else:
+            s = "VerylowRAM_LowVRAM_Slowest"
+        return s
+
+    HighRAM_HighVRAM_Fastest = 1
+    HighRAM_LowVRAM_Fast = 2
+    LowRAM_HighVRAM_Medium = 3
+    LowRAM_LowVRAM_Slow = 4
+    VerylowRAM_LowVRAM_Slowest = 5
+
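A short usage sketch of the new `profile_type` enum added above (names and values exactly as defined in `mmgp/__init__.py`):
```
from mmgp import profile_type

p = profile_type.HighRAM_LowVRAM_Fast
print(int(p))                 # 2, since profile_type subclasses int
print(profile_type.tostr(2))  # "HighRAM_LowVRAM_Fast"
```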