cache-dit 0.1.1.dev2__tar.gz
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- cache_dit-0.1.1.dev2/.github/workflows/issue.yml +22 -0
- cache_dit-0.1.1.dev2/.gitignore +167 -0
- cache_dit-0.1.1.dev2/.pre-commit-config.yaml +31 -0
- cache_dit-0.1.1.dev2/CONTRIBUTE.md +28 -0
- cache_dit-0.1.1.dev2/LICENSE +53 -0
- cache_dit-0.1.1.dev2/MANIFEST.in +6 -0
- cache_dit-0.1.1.dev2/PKG-INFO +31 -0
- cache_dit-0.1.1.dev2/README.md +320 -0
- cache_dit-0.1.1.dev2/assets/DBCACHE_F12B12S4_R0.2_S16.png +0 -0
- cache_dit-0.1.1.dev2/assets/DBCACHE_F12B16S4_R0.08_S6.png +0 -0
- cache_dit-0.1.1.dev2/assets/DBCACHE_F16B16S2_R0.2_S14.png +0 -0
- cache_dit-0.1.1.dev2/assets/DBCACHE_F16B16S4_R0.2_S13.png +0 -0
- cache_dit-0.1.1.dev2/assets/DBCACHE_F1B0S1_R0.08_S11.png +0 -0
- cache_dit-0.1.1.dev2/assets/DBCACHE_F1B0S1_R0.2_S19.png +0 -0
- cache_dit-0.1.1.dev2/assets/DBCACHE_F8B0S2_R0.12_S12.png +0 -0
- cache_dit-0.1.1.dev2/assets/DBCACHE_F8B16S1_R0.2_S18.png +0 -0
- cache_dit-0.1.1.dev2/assets/DBCACHE_F8B8S1_R0.08_S9.png +0 -0
- cache_dit-0.1.1.dev2/assets/DBCACHE_F8B8S1_R0.12_S12.png +0 -0
- cache_dit-0.1.1.dev2/assets/DBCACHE_F8B8S1_R0.15_S15.png +0 -0
- cache_dit-0.1.1.dev2/assets/DBCache.png +0 -0
- cache_dit-0.1.1.dev2/assets/DBPRUNE_F1B0_R0.03_P24.0_T19.43s.png +0 -0
- cache_dit-0.1.1.dev2/assets/DBPRUNE_F1B0_R0.04_P34.6_T16.82s.png +0 -0
- cache_dit-0.1.1.dev2/assets/DBPRUNE_F1B0_R0.05_P38.3_T15.95s.png +0 -0
- cache_dit-0.1.1.dev2/assets/DBPRUNE_F1B0_R0.06_P45.2_T14.24s.png +0 -0
- cache_dit-0.1.1.dev2/assets/DBPRUNE_F1B0_R0.07_P52.3_T12.53s.png +0 -0
- cache_dit-0.1.1.dev2/assets/DBPRUNE_F1B0_R0.08_P52.4_T12.52s.png +0 -0
- cache_dit-0.1.1.dev2/assets/DBPRUNE_F1B0_R0.09_P59.2_T10.81s.png +0 -0
- cache_dit-0.1.1.dev2/assets/DBPRUNE_F1B0_R0.12_P59.5_T10.76s.png +0 -0
- cache_dit-0.1.1.dev2/assets/DBPRUNE_F1B0_R0.12_P63.0_T9.90s.png +0 -0
- cache_dit-0.1.1.dev2/assets/DBPRUNE_F1B0_R0.1_P62.8_T9.95s.png +0 -0
- cache_dit-0.1.1.dev2/assets/DBPRUNE_F1B0_R0.2_P59.5_T10.66s.png +0 -0
- cache_dit-0.1.1.dev2/assets/DBPRUNE_F1B0_R0.3_P63.1_T9.79s.png +0 -0
- cache_dit-0.1.1.dev2/assets/NONE_R0.08_S0.png +0 -0
- cache_dit-0.1.1.dev2/bench/.gitignore +168 -0
- cache_dit-0.1.1.dev2/bench/bench.py +208 -0
- cache_dit-0.1.1.dev2/docs/.gitignore +166 -0
- cache_dit-0.1.1.dev2/examples/.gitignore +168 -0
- cache_dit-0.1.1.dev2/examples/run_flux.py +23 -0
- cache_dit-0.1.1.dev2/pyproject.toml +27 -0
- cache_dit-0.1.1.dev2/pytest.ini +7 -0
- cache_dit-0.1.1.dev2/requirements.txt +6 -0
- cache_dit-0.1.1.dev2/setup.cfg +23 -0
- cache_dit-0.1.1.dev2/setup.py +78 -0
- cache_dit-0.1.1.dev2/src/cache_dit/__init__.py +0 -0
- cache_dit-0.1.1.dev2/src/cache_dit/_version.py +21 -0
- cache_dit-0.1.1.dev2/src/cache_dit/cache_factory/__init__.py +166 -0
- cache_dit-0.1.1.dev2/src/cache_dit/cache_factory/dual_block_cache/__init__.py +0 -0
- cache_dit-0.1.1.dev2/src/cache_dit/cache_factory/dual_block_cache/cache_context.py +1361 -0
- cache_dit-0.1.1.dev2/src/cache_dit/cache_factory/dual_block_cache/diffusers_adapters/__init__.py +45 -0
- cache_dit-0.1.1.dev2/src/cache_dit/cache_factory/dual_block_cache/diffusers_adapters/cogvideox.py +89 -0
- cache_dit-0.1.1.dev2/src/cache_dit/cache_factory/dual_block_cache/diffusers_adapters/flux.py +100 -0
- cache_dit-0.1.1.dev2/src/cache_dit/cache_factory/dual_block_cache/diffusers_adapters/mochi.py +88 -0
- cache_dit-0.1.1.dev2/src/cache_dit/cache_factory/dynamic_block_prune/__init__.py +0 -0
- cache_dit-0.1.1.dev2/src/cache_dit/cache_factory/dynamic_block_prune/diffusers_adapters/__init__.py +45 -0
- cache_dit-0.1.1.dev2/src/cache_dit/cache_factory/dynamic_block_prune/diffusers_adapters/cogvideox.py +89 -0
- cache_dit-0.1.1.dev2/src/cache_dit/cache_factory/dynamic_block_prune/diffusers_adapters/flux.py +100 -0
- cache_dit-0.1.1.dev2/src/cache_dit/cache_factory/dynamic_block_prune/diffusers_adapters/mochi.py +89 -0
- cache_dit-0.1.1.dev2/src/cache_dit/cache_factory/dynamic_block_prune/prune_context.py +979 -0
- cache_dit-0.1.1.dev2/src/cache_dit/cache_factory/first_block_cache/__init__.py +0 -0
- cache_dit-0.1.1.dev2/src/cache_dit/cache_factory/first_block_cache/cache_context.py +727 -0
- cache_dit-0.1.1.dev2/src/cache_dit/cache_factory/first_block_cache/diffusers_adapters/__init__.py +53 -0
- cache_dit-0.1.1.dev2/src/cache_dit/cache_factory/first_block_cache/diffusers_adapters/cogvideox.py +89 -0
- cache_dit-0.1.1.dev2/src/cache_dit/cache_factory/first_block_cache/diffusers_adapters/flux.py +100 -0
- cache_dit-0.1.1.dev2/src/cache_dit/cache_factory/first_block_cache/diffusers_adapters/mochi.py +89 -0
- cache_dit-0.1.1.dev2/src/cache_dit/cache_factory/first_block_cache/diffusers_adapters/wan.py +98 -0
- cache_dit-0.1.1.dev2/src/cache_dit/cache_factory/taylorseer.py +76 -0
- cache_dit-0.1.1.dev2/src/cache_dit/cache_factory/utils.py +0 -0
- cache_dit-0.1.1.dev2/src/cache_dit/logger.py +97 -0
- cache_dit-0.1.1.dev2/src/cache_dit/primitives.py +152 -0
- cache_dit-0.1.1.dev2/src/cache_dit.egg-info/PKG-INFO +31 -0
- cache_dit-0.1.1.dev2/src/cache_dit.egg-info/SOURCES.txt +73 -0
- cache_dit-0.1.1.dev2/src/cache_dit.egg-info/dependency_links.txt +1 -0
- cache_dit-0.1.1.dev2/src/cache_dit.egg-info/requires.txt +21 -0
- cache_dit-0.1.1.dev2/src/cache_dit.egg-info/top_level.txt +1 -0
cache_dit-0.1.1.dev2/.github/workflows/issue.yml

@@ -0,0 +1,22 @@
```yaml
name: issues
on:
  schedule:
    - cron: "0 0 * * 0"

jobs:
  close-issues:
    runs-on: ubuntu-latest
    permissions:
      issues: write
      pull-requests: write
    steps:
      - uses: actions/stale@v9.0.0
        with:
          days-before-issue-stale: 30
          days-before-issue-close: 7
          stale-issue-label: "stale"
          stale-issue-message: "This issue is stale because it has been open for 30 days with no activity."
          close-issue-message: "This issue was closed because it has been inactive for 7 days since being marked as stale."
          days-before-pr-stale: -1
          days-before-pr-close: -1
          repo-token: ${{ secrets.GITHUB_TOKEN }}
```
cache_dit-0.1.1.dev2/.gitignore

@@ -0,0 +1,167 @@
```gitignore
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
.pybuilder/
target/

# Jupyter Notebook
.ipynb_checkpoints

# IPython
profile_default/
ipython_config.py

# pyenv
# For a library or package, you might want to ignore these files since the code is
# intended to run in multiple environments; otherwise, check them in:
# .python-version

# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock

# poetry
# Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
# This is especially recommended for binary packages to ensure reproducibility, and is more
# commonly ignored for libraries.
# https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
#poetry.lock

# pdm
# Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
#pdm.lock
# pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
# in version control.
# https://pdm.fming.dev/#use-with-ide
.pdm.toml

# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/

# pytype static type analyzer
.pytype/

# Cython debug symbols
cython_debug/

# PyCharm
# JetBrains specific template is maintained in a separate JetBrains.gitignore that can
# be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
# and can be added to the global gitignore or merged into this file. For a more nuclear
# option (not recommended) you can uncomment the following to ignore the entire idea folder.
#.idea/

_version.py

report*.html

.DS_Store
tmp
```
cache_dit-0.1.1.dev2/.pre-commit-config.yaml

@@ -0,0 +1,31 @@
```yaml
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.0.1
    hooks:
      - id: check-docstring-first
      - id: check-toml
      - id: check-yaml
        exclude: packaging/.*
        args:
          - --allow-multiple-documents
      - id: mixed-line-ending
        args: [--fix=lf]
      - id: end-of-file-fixer

  - repo: https://github.com/PyCQA/flake8
    rev: 7.1.1
    hooks:
      - id: flake8
        args: [--config=setup.cfg]

  - repo: https://github.com/PyCQA/pydocstyle
    rev: 6.1.1
    hooks:
      - id: pydocstyle

  - repo: https://github.com/psf/black
    rev: 24.10.0
    hooks:
      - id: black-jupyter
        args:
          - --line-length=80
```
cache_dit-0.1.1.dev2/CONTRIBUTE.md

@@ -0,0 +1,28 @@

# Developer Guide

## 👨‍💻Pre-commit

Before submitting code, configure pre-commit, for example:

```bash
# fork vipshop/DBCache to your own github page, then:
git clone git@github.com:your-github-page/your-fork-DBCache.git
cd your-fork-DBCache && git checkout -b dev
# update submodule
git submodule update --init --recursive --force
# install pre-commit
pip3 install pre-commit
pre-commit install
pre-commit run --all-files
```

## 👨‍💻Add a new feature

```bash
# feat: support xxx-cache method
# add your commits
git add .
git commit -m "support xxx-cache method"
git push
# then, open a PR from your personal branch to DBCache:main
```
cache_dit-0.1.1.dev2/LICENSE

@@ -0,0 +1,53 @@

# License

## Acceptance

By using the software, you agree to all of the terms and conditions below.

## Copyright License

The licensor grants you a non-exclusive, royalty-free, worldwide, non-sublicensable, non-transferable license to use, copy, distribute, make available, and prepare derivative works of the software, in each case subject to the limitations and conditions below.

## Limitations

You may not provide the software to third parties as a hosted or managed service, where the service provides users with access to any substantial set of the features or functionality of the software.

You may not move, change, disable, or circumvent the license key functionality in the software, and you may not remove or obscure any functionality in the software that is protected by the license key.

You may not alter, remove, or obscure any licensing, copyright, or other notices of the licensor in the software. Any use of the licensor's trademarks is subject to applicable law.

## Patents

The licensor grants you a license, under any patent claims the licensor can license, or becomes able to license, to make, have made, use, sell, offer for sale, import and have imported the software, in each case subject to the limitations and conditions in this license. This license does not cover any patent claims that you cause to be infringed by modifications or additions to the software. If you or your company make any written claim that the software infringes or contributes to infringement of any patent, your patent license for the software granted under these terms ends immediately. If your company makes such a claim, your patent license ends immediately for work on behalf of your company.

## Notices

You must ensure that anyone who gets a copy of any part of the software from you also gets a copy of these terms.

If you modify the software, you must include in any modified copies of the software prominent notices stating that you have modified the software.

## No Other Rights

These terms do not imply any licenses other than those expressly granted in these terms.

## Termination

If you use the software in violation of these terms, such use is not licensed, and your licenses will automatically terminate. If the licensor provides you with a notice of your violation, and you cease all violation of this license no later than 30 days after you receive that notice, your licenses will be reinstated retroactively. However, if you violate these terms after such reinstatement, any additional violation of these terms will cause your licenses to terminate automatically and permanently.

## No Liability

As far as the law allows, the software comes as is, without any warranty or condition, and the licensor will not be liable to you for any damages arising out of these terms or the use or nature of the software, under any kind of legal claim.

## Definitions

The licensor is the entity offering these terms, and the software is the software the licensor makes available under these terms, including any portion of it.

you refers to the individual or entity agreeing to these terms.

your company is any legal entity, sole proprietorship, or other kind of organization that you work for, plus all organizations that have control over, are under the control of, or are under common control with that organization. control means ownership of substantially all the assets of an entity, or the power to direct its management and policies by vote, contract, or otherwise. Control can be direct or indirect.

your licenses are all the licenses granted to you for the software under these terms.

use means anything you do with the software requiring one of your licenses.

trademark means trademarks, service marks, and similar rights.
cache_dit-0.1.1.dev2/PKG-INFO

@@ -0,0 +1,31 @@
```text
Metadata-Version: 2.4
Name: cache_dit
Version: 0.1.1.dev2
Summary: ⚡️DBCache: A Training-free UNet-style Cache Acceleration for Diffusion Transformers
Author: DefTruth, vipshop.com, etc.
Maintainer: DefTruth, vipshop.com, etc
Project-URL: Repository, https://github.com/vipshop/DBCache.git
Project-URL: Homepage, https://github.com/vipshop/DBCache.git
Requires-Python: >=3.10
Requires-Dist: packaging
Requires-Dist: torch
Requires-Dist: transformers
Requires-Dist: diffusers
Provides-Extra: all
Provides-Extra: dev
Requires-Dist: pre-commit; extra == "dev"
Requires-Dist: pytest<8.0.0,>=7.0.0; extra == "dev"
Requires-Dist: pytest-html; extra == "dev"
Requires-Dist: expecttest; extra == "dev"
Requires-Dist: hypothesis; extra == "dev"
Requires-Dist: transformers; extra == "dev"
Requires-Dist: diffusers; extra == "dev"
Requires-Dist: accelerate; extra == "dev"
Requires-Dist: peft; extra == "dev"
Requires-Dist: protobuf; extra == "dev"
Requires-Dist: sentencepiece; extra == "dev"
Requires-Dist: opencv-python-headless; extra == "dev"
Requires-Dist: ftfy; extra == "dev"
Dynamic: provides-extra
Dynamic: requires-dist
Dynamic: requires-python
```
cache_dit-0.1.1.dev2/README.md

@@ -0,0 +1,320 @@

<div align="center">
  <p align="center">
    <h3>⚡️DBCache: A Training-free UNet-style Cache Acceleration for <br>Diffusion Transformers</h3>
  </p>
  <img src=./assets/DBCache.png >
  <div align='center'>
    <img src=https://img.shields.io/badge/Language-Python-brightgreen.svg >
    <img src=https://img.shields.io/badge/PRs-welcome-9cf.svg >
    <img src=https://img.shields.io/badge/Build-pass-brightgreen.svg >
    <img src=https://img.shields.io/badge/Python-3.10|3.11|3.12-9cf.svg >
    <img src=https://img.shields.io/badge/Release-v0.1.0-brightgreen.svg >
  </div>
  <p align="center">
    DeepCache requires UNet's U-shape, but DiT lacks it. Most DiT cache accelerators are complex and not training-free. DBCache builds on FBCache to create a training-free, UNet-style cache accelerator for DiT.
  </p>
</div>

## 🤗 Introduction

<div align="center">
  <p align="center">
    <h3>DBCache: Dual Block Caching for Diffusion Transformers</h3>
  </p>
</div>

**DBCache**: **Dual Block Caching** for Diffusion Transformers. We have enhanced `FBCache` into a more general and customizable cache algorithm, namely `DBCache`, enabling fully `UNet-style` cache acceleration for DiT models. Different configurations of compute blocks (**F8B12**, etc.) can be customized in DBCache, and it is entirely **training-free**. DBCache can strike a good **balance** between performance and precision!

<div align="center">
  <p align="center">
    DBCache, <b> L20x1 </b>, Steps: 28, "A cat holding a sign that says hello world with complex background"
  </p>
</div>

|Baseline(L20x1)|F1B0 (0.08)|F1B0 (0.20)|F8B8 (0.15)|F12B12 (0.20)|F16B16 (0.20)|
|:---:|:---:|:---:|:---:|:---:|:---:|
|24.85s|15.59s|8.58s|15.41s|15.11s|17.74s|
|<img src=./assets/NONE_R0.08_S0.png width=105px>|<img src=./assets/DBCACHE_F1B0S1_R0.08_S11.png width=105px> | <img src=./assets/DBCACHE_F1B0S1_R0.2_S19.png width=105px>|<img src=./assets/DBCACHE_F8B8S1_R0.15_S15.png width=105px>|<img src=./assets/DBCACHE_F12B12S4_R0.2_S16.png width=105px>|<img src=./assets/DBCACHE_F16B16S4_R0.2_S13.png width=105px>|
|**Baseline(L20x1)**|**F1B0 (0.08)**|**F8B8 (0.12)**|**F8B12 (0.20)**|**F8B16 (0.20)**|**F8B20 (0.20)**|
|27.85s|6.04s|5.88s|5.77s|6.01s|6.20s|
|<img src=https://github.com/user-attachments/assets/70ea57f4-d8f2-415b-8a96-d8315974a5e6 width=105px>|<img src=https://github.com/user-attachments/assets/fc0e1a67-19cc-44aa-bf50-04696e7978a0 width=105px> |<img src=https://github.com/user-attachments/assets/d1434896-628c-436b-95ad-43c085a8629e width=105px>|<img src=https://github.com/user-attachments/assets/aaa42cd2-57de-4c4e-8bfb-913018a8251d width=105px>|<img src=https://github.com/user-attachments/assets/dc0ba2a4-ef7c-436d-8a39-67055deab92f width=105px>|<img src=https://github.com/user-attachments/assets/aede466f-61ed-4256-8df0-fecf8020c5ca width=105px>|

<div align="center">
  <p align="center">
    DBCache, <b> L20x4 </b>, Steps: 20, a case showing the texture recovery ability of DBCache
  </p>
</div>

These case studies demonstrate that even with relatively high thresholds (such as 0.12, 0.15, or 0.2) under the DBCache **F12B12** or **F8B16** configurations, the detailed texture of the kitten's fur, the colored cloth, and the clarity of text can still be preserved. This suggests that users can leverage DBCache to effectively balance performance and precision in their workflows!

<div align="center">
  <p align="center">
    <h3>DBPrune: Dynamic Block Prune with Residual Caching</h3>
  </p>
</div>

**DBPrune**: Dynamic Block Prune with Residual Caching. We have further implemented a new dynamic block prune algorithm based on residual caching for Diffusion Transformers, referred to as DBPrune. DBPrune is currently in the experimental phase; please stay tuned for upcoming updates.

|Baseline(L20x1)|Pruned(24%)|Pruned(35%)|Pruned(38%)|Pruned(45%)|Pruned(60%)|
|:---:|:---:|:---:|:---:|:---:|:---:|
|24.85s|19.43s|16.82s|15.95s|14.24s|10.66s|
|<img src=./assets/NONE_R0.08_S0.png width=105px>|<img src=./assets/DBPRUNE_F1B0_R0.03_P24.0_T19.43s.png width=105px> | <img src=./assets/DBPRUNE_F1B0_R0.04_P34.6_T16.82s.png width=105px>|<img src=./assets/DBPRUNE_F1B0_R0.05_P38.3_T15.95s.png width=105px>|<img src=./assets/DBPRUNE_F1B0_R0.06_P45.2_T14.24s.png width=105px>|<img src=./assets/DBPRUNE_F1B0_R0.2_P59.5_T10.66s.png width=105px>|

<div align="center">
  <p align="center">
    DBPrune, <b> L20x1 </b>, Steps: 28, "A cat holding a sign that says hello world with complex background"
  </p>
</div>

Moreover, both DBCache and DBPrune are **plug-and-play** solutions that work hand in hand with [ParaAttention](https://github.com/chengzeyi/ParaAttention). Users can easily tap into its **Context Parallelism** features for distributed inference.

## ©️Citations

```BibTeX
@misc{DBCache2025,
  title={DBCache: A Training-free UNet-style Cache Acceleration for Diffusion Transformers},
  url={https://github.com/vipshop/DBCache.git},
  note={Open-source software available at https://github.com/vipshop/DBCache.git},
  author={vipshop.com},
  year={2025}
}
```

## 👋Reference

<div id="reference"></div>

**DBCache** is built upon **FBCache**. The **DBCache** codebase was adapted from FBCache's implementation in [ParaAttention](https://github.com/chengzeyi/ParaAttention/tree/main/src/para_attn/first_block_cache). We would like to express our sincere gratitude for this excellent work!

## 📖Contents

<div id="contents"></div>

- [⚙️Installation](#️installation)
- [⚡️Dual Block Cache](#dbcache)
- [🎉First Block Cache](#fbcache)
- [⚡️Dynamic Block Prune](#dbprune)
- [🎉Context Parallelism](#context-parallelism)
- [⚡️Torch Compile](#compile)
- [🎉Supported Models](#supported)
- [👋Contribute](#contribute)
- [©️License](#license)

## ⚙️Installation

<div id="installation"></div>

You can install `DBCache` from GitHub:

```bash
pip3 install git+https://github.com/vipshop/DBCache.git
```

or install it from source:

```bash
git clone https://github.com/vipshop/DBCache.git && cd DBCache
pip3 install 'torch==2.7.0' 'setuptools>=64' 'setuptools_scm>=8'

pip3 install -e '.[dev]' --no-build-isolation  # build an editable package
python3 -m build && pip3 install ./dist/cache_dit-*.whl  # or build the wheel first, then install it.
```

## ⚡️DBCache: Dual Block Cache

<div id="dbcache"></div>

**DBCache** provides configurable parameters for custom optimization, enabling a balanced trade-off between performance and precision:

- **Fn**: Specifies that DBCache uses the **first n** Transformer blocks to fit the information at time step t, enabling the calculation of a more stable L1 diff and delivering more accurate information to subsequent blocks.
- **Bn**: Further fuses approximate information in the **last n** Transformer blocks to enhance prediction accuracy. These blocks act as an auto-scaler for approximate hidden states that use the residual cache.
- **warmup_steps**: (default: 0) DBCache does not apply the caching strategy when the number of running steps is less than or equal to this value, ensuring the model sufficiently learns basic features during warmup.
- **max_cached_steps**: (default: -1) DBCache disables the caching strategy when the running steps exceed this value, to prevent precision degradation.
- **residual_diff_threshold**: The residual diff threshold; a higher value yields faster inference at the cost of lower precision.

For a good balance between performance and precision, DBCache is configured by default with **F8B8**, 8 warmup steps, and unlimited cached steps.

```python
import torch
from diffusers import FluxPipeline
from cache_dit.cache_factory import apply_cache_on_pipe, CacheType

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    torch_dtype=torch.bfloat16,
).to("cuda")

# Default options, F8B8, good balance between performance and precision
cache_options = CacheType.default_options(CacheType.DBCache)

# Custom options, F8B16, higher precision
cache_options = {
    "cache_type": CacheType.DBCache,
    "warmup_steps": 8,
    "max_cached_steps": 8,  # -1 means no limit
    "Fn_compute_blocks": 8,  # Fn, F8, etc.
    "Bn_compute_blocks": 16,  # Bn, B16, etc.
    "residual_diff_threshold": 0.12,
}

apply_cache_on_pipe(pipe, **cache_options)
```
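To make the role of `residual_diff_threshold` concrete, here is a minimal, hypothetical sketch of the cache-hit test (the helper name and exact metric are assumptions; the real logic lives in `dual_block_cache/cache_context.py`): the mean relative L1 distance between the first-Fn-blocks residual at the current step and the previous step decides whether the remaining blocks are recomputed or approximated from the cached residual.

```python
import torch

def can_use_cache(curr_residual: torch.Tensor,
                  prev_residual: torch.Tensor,
                  threshold: float) -> bool:
    # Mean relative L1 distance between this step's and the previous
    # step's first-block residuals (hypothetical sketch, not the
    # actual cache_context.py implementation).
    diff = (curr_residual - prev_residual).abs().mean()
    rel_diff = diff / prev_residual.abs().mean().clamp(min=1e-8)
    return rel_diff.item() < threshold

prev = torch.ones(4, 8)
# ~1% change: below the 0.12 threshold, so the cached residual is reused.
assert can_use_cache(prev * 1.01, prev, threshold=0.12)
# ~100% change: above the threshold, so all blocks are recomputed.
assert not can_use_cache(prev * 2.0, prev, threshold=0.12)
```

A lower threshold therefore trades speed for fidelity: fewer steps pass the test, so fewer steps skip the middle blocks.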
Moreover, users configuring higher **Bn** values (e.g., **F8B16**) who still aim to maintain good performance can specify **Bn_compute_blocks_ids** to work with Bn. DBCache will then only compute the specified blocks, estimating the remaining ones from the previous step's residual cache.

```python
# Custom options, F8B16, higher precision with good performance.
cache_options = {
    # 0, 2, 4, ..., 14, i.e., [0, 16) with step 2
    "Bn_compute_blocks_ids": CacheType.range(0, 16, 2),
    # Skip the Bn blocks not in this list (1, 3, 5, ...) only if the
    # L1 diff is lower than this value; otherwise, compute them.
    "non_compute_blocks_diff_threshold": 0.08,
}
```

## 🎉FBCache: First Block Cache

<div id="fbcache"></div>

**DBCache** is a more general cache algorithm than **FBCache**. When Fn=1 and Bn=0, DBCache behaves identically to FBCache. Therefore, you can either use the original FBCache implementation directly or configure **DBCache** with **F1B0** settings to achieve the same functionality.

```python
import torch
from diffusers import FluxPipeline
from cache_dit.cache_factory import apply_cache_on_pipe, CacheType

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    torch_dtype=torch.bfloat16,
).to("cuda")

# Using FBCache directly
cache_options = CacheType.default_options(CacheType.FBCache)

# Or using DBCache with F1B0.
# Fn=1, Bn=0 means FBCache; otherwise, Dual Block Cache.
cache_options = {
    "cache_type": CacheType.DBCache,
    "warmup_steps": 8,
    "max_cached_steps": 8,  # -1 means no limit
    "Fn_compute_blocks": 1,  # Fn, F1, etc.
    "Bn_compute_blocks": 0,  # Bn, B0, etc.
    "residual_diff_threshold": 0.12,
}

apply_cache_on_pipe(pipe, **cache_options)
```

## ⚡️DBPrune: Dynamic Block Prune

<div id="dbprune"></div>

We have further implemented a new **Dynamic Block Prune** algorithm based on **Residual Caching** for Diffusion Transformers, referred to as **DBPrune**. DBPrune is currently in the experimental phase; please stay tuned for upcoming updates.

```python
import torch
from diffusers import FluxPipeline
from cache_dit.cache_factory import apply_cache_on_pipe, CacheType

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    torch_dtype=torch.bfloat16,
).to("cuda")

# Using DBPrune
cache_options = CacheType.default_options(CacheType.DBPrune)

apply_cache_on_pipe(pipe, **cache_options)
```
|
|
233
|
+
|
|
234
|
+
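To make the per-block decision concrete, here is an illustrative sketch of one pruned step: a block is skipped and its cached residual re-applied whenever its input has barely changed since the previous step. `prune_step` and the relative-L1 test are hypothetical simplifications for exposition, not DBPrune's actual implementation.

```python
import torch

def prune_step(blocks, hidden, cache, threshold=0.05):
    """One denoising step with per-block dynamic pruning (illustrative).
    `cache` maps block index -> (previous input, previous residual)."""
    pruned = 0
    for i, block in enumerate(blocks):
        if i in cache:
            prev_in, prev_res = cache[i]
            rel = (hidden - prev_in).abs().mean() / prev_in.abs().mean().clamp(min=1e-8)
            if rel.item() < threshold:
                hidden = hidden + prev_res  # reuse cached residual, skip block
                pruned += 1
                continue
        out = block(hidden)
        cache[i] = (hidden, out - hidden)  # cache block input and residual
        hidden = out
    return hidden, pruned
```

On the first step every block is computed and cached; on later steps with similar inputs, blocks are pruned and only their cached residuals are added back.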
<div align="center">
  <p align="center">
    DBPrune, <b>L20x1</b>, Steps: 28, "A cat holding a sign that says hello world with complex background"
  </p>
</div>

|Baseline(L20x1)|Pruned(24%)|Pruned(35%)|Pruned(38%)|Pruned(45%)|Pruned(60%)|
|:---:|:---:|:---:|:---:|:---:|:---:|
|24.85s|19.43s|16.82s|15.95s|14.24s|10.66s|
|<img src=./assets/NONE_R0.08_S0.png width=105px>|<img src=./assets/DBPRUNE_F1B0_R0.03_P24.0_T19.43s.png width=105px>|<img src=./assets/DBPRUNE_F1B0_R0.04_P34.6_T16.82s.png width=105px>|<img src=./assets/DBPRUNE_F1B0_R0.05_P38.3_T15.95s.png width=105px>|<img src=./assets/DBPRUNE_F1B0_R0.06_P45.2_T14.24s.png width=105px>|<img src=./assets/DBPRUNE_F1B0_R0.2_P59.5_T10.66s.png width=105px>|

## 🎉Context Parallelism

<div id="context-parallelism"></div>

DBCache and DBPrune are **plug-and-play** solutions that work hand-in-hand with [ParaAttention](https://github.com/chengzeyi/ParaAttention). Users can **easily tap into** its **Context Parallelism** features for distributed inference. First, install `para-attn` from PyPI:

```bash
pip3 install para-attn  # or install `para-attn` from sources.
```

Then, you can run **DBCache** with **Context Parallelism** on 4 GPUs:

```python
import torch  # needed for torch.bfloat16
from diffusers import FluxPipeline
from para_attn.context_parallel import init_context_parallel_mesh
from para_attn.context_parallel.diffusers_adapters import parallelize_pipe
from cache_dit.cache_factory import apply_cache_on_pipe, CacheType

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    torch_dtype=torch.bfloat16,
).to("cuda")

# Context Parallel from ParaAttention
parallelize_pipe(
    pipe, mesh=init_context_parallel_mesh(
        pipe.device.type, max_ulysses_dim_size=4
    )
)

# DBCache with F8B8 from this library
apply_cache_on_pipe(
    pipe, **CacheType.default_options(CacheType.DBCache)
)
```

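Context-parallel scripts are typically launched with one process per GPU via `torchrun`; the script name below is a placeholder for wherever you saved the snippet above.

```shell
# Launch on 4 GPUs on a single node.
# `run_flux_dbcache_cp.py` is a hypothetical file containing the snippet above.
torchrun --nproc_per_node=4 run_flux_dbcache_cp.py
```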

## ⚡️Torch Compile

<div id="compile"></div>

**DBCache** and **DBPrune** are designed to be fully compatible with `torch.compile`. For example:

```python
apply_cache_on_pipe(
    pipe, **CacheType.default_options(CacheType.DBCache)
)
# Compile the Transformer module
pipe.transformer = torch.compile(pipe.transformer)
```

However, users intending to use DBCache and DBPrune for DiT models with **dynamic input shapes** should consider increasing the **recompile limit** of `torch._dynamo` to achieve better performance.

```python
torch._dynamo.config.recompile_limit = 96  # default is 8
torch._dynamo.config.accumulated_recompile_limit = 2048  # default is 256
```

Otherwise, a recompile-limit error may be triggered, causing the module to fall back to eager mode.

## 🎉Supported Models

<div id="supported"></div>

- [🚀FLUX.1](./src/cache_dit/cache_factory/dual_block_cache/diffusers_adapters)
- [🚀CogVideoX](./src/cache_dit/cache_factory/dual_block_cache/diffusers_adapters)
- [🚀Mochi](./src/cache_dit/cache_factory/dual_block_cache/diffusers_adapters)

## 👋Contribute

<div id="contribute"></div>

How to contribute? Star this repo or check [CONTRIBUTE.md](./CONTRIBUTE.md).

## ©️License

<div id="license"></div>

We follow the original license from [ParaAttention](https://github.com/chengzeyi/ParaAttention); please check [LICENSE](./LICENSE) for more details.