torch-max-mem 0.1.2__tar.gz → 0.1.4__tar.gz
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- torch_max_mem-0.1.4/PKG-INFO +345 -0
- torch_max_mem-0.1.4/README.md +311 -0
- torch_max_mem-0.1.4/pyproject.toml +201 -0
- {torch_max_mem-0.1.2 → torch_max_mem-0.1.4}/src/torch_max_mem/__init__.py +2 -3
- {torch_max_mem-0.1.2 → torch_max_mem-0.1.4}/src/torch_max_mem/api.py +31 -47
- torch_max_mem-0.1.4/src/torch_max_mem/py.typed +0 -0
- {torch_max_mem-0.1.2 → torch_max_mem-0.1.4}/src/torch_max_mem/version.py +4 -6
- torch_max_mem-0.1.2/MANIFEST.in +0 -13
- torch_max_mem-0.1.2/PKG-INFO +0 -215
- torch_max_mem-0.1.2/README.md +0 -168
- torch_max_mem-0.1.2/docs/source/conf.py +0 -237
- torch_max_mem-0.1.2/docs/source/index.rst +0 -21
- torch_max_mem-0.1.2/docs/source/installation.rst +0 -22
- torch_max_mem-0.1.2/docs/source/usage.rst +0 -4
- torch_max_mem-0.1.2/pyproject.toml +0 -15
- torch_max_mem-0.1.2/setup.cfg +0 -98
- torch_max_mem-0.1.2/src/torch_max_mem.egg-info/PKG-INFO +0 -215
- torch_max_mem-0.1.2/src/torch_max_mem.egg-info/SOURCES.txt +0 -20
- torch_max_mem-0.1.2/src/torch_max_mem.egg-info/dependency_links.txt +0 -1
- torch_max_mem-0.1.2/src/torch_max_mem.egg-info/not-zip-safe +0 -1
- torch_max_mem-0.1.2/src/torch_max_mem.egg-info/requires.txt +0 -18
- torch_max_mem-0.1.2/src/torch_max_mem.egg-info/top_level.txt +0 -1
- torch_max_mem-0.1.2/tests/__init__.py +0 -3
- torch_max_mem-0.1.2/tests/test_decorator.py +0 -161
- {torch_max_mem-0.1.2 → torch_max_mem-0.1.4}/LICENSE +0 -0
@@ -0,0 +1,345 @@
Metadata-Version: 2.4
Name: torch-max-mem
Version: 0.1.4
Summary: Maximize memory utilization with PyTorch.
Keywords: snekpack,cookiecutter,torch
Author: Max Berrendorf
Author-email: Max Berrendorf <max.berrendorf@gmail.com>
License-File: LICENSE
Classifier: Development Status :: 4 - Beta
Classifier: Environment :: Console
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: OS Independent
Classifier: Framework :: Pytest
Classifier: Framework :: tox
Classifier: Framework :: Sphinx
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Programming Language :: Python :: 3 :: Only
Requires-Dist: torch>=2.0
Requires-Dist: typing-extensions
Maintainer: Max Berrendorf
Maintainer-email: Max Berrendorf <max.berrendorf@gmail.com>
Requires-Python: >=3.9
Project-URL: Bug Tracker, https://github.com/mberr/torch-max-mem/issues
Project-URL: Download, https://github.com/mberr/torch-max-mem/releases
Project-URL: Homepage, https://github.com/mberr/torch-max-mem
Project-URL: Source Code, https://github.com/mberr/torch-max-mem
Description-Content-Type: text/markdown

<!--
<p align="center">
  <img src="https://github.com/mberr/torch-max-mem/raw/main/docs/source/logo.png" height="150">
</p>
-->

<h1 align="center">
  torch-max-mem
</h1>

<p align="center">
    <a href="https://github.com/mberr/torch-max-mem/actions/workflows/tests.yml">
        <img alt="Tests" src="https://github.com/mberr/torch-max-mem/actions/workflows/tests.yml/badge.svg" /></a>
    <a href="https://pypi.org/project/torch_max_mem">
        <img alt="PyPI" src="https://img.shields.io/pypi/v/torch_max_mem" /></a>
    <a href="https://pypi.org/project/torch_max_mem">
        <img alt="PyPI - Python Version" src="https://img.shields.io/pypi/pyversions/torch_max_mem" /></a>
    <a href="https://github.com/mberr/torch-max-mem/blob/main/LICENSE">
        <img alt="PyPI - License" src="https://img.shields.io/pypi/l/torch_max_mem" /></a>
    <a href='https://torch_max_mem.readthedocs.io/en/latest/?badge=latest'>
        <img src='https://readthedocs.org/projects/torch_max_mem/badge/?version=latest' alt='Documentation Status' /></a>
    <a href="https://codecov.io/gh/mberr/torch-max-mem/branch/main">
        <img src="https://codecov.io/gh/mberr/torch-max-mem/branch/main/graph/badge.svg" alt="Codecov status" /></a>
    <a href="https://github.com/cthoyt/cookiecutter-python-package">
        <img alt="Cookiecutter template from @cthoyt" src="https://img.shields.io/badge/Cookiecutter-snekpack-blue" /></a>
    <a href="https://github.com/astral-sh/ruff">
        <img src="https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/ruff/main/assets/badge/v2.json" alt="Ruff" style="max-width:100%;"></a>
    <a href="https://github.com/mberr/torch-max-mem/blob/main/.github/CODE_OF_CONDUCT.md">
        <img src="https://img.shields.io/badge/Contributor%20Covenant-2.1-4baaaa.svg" alt="Contributor Covenant"/></a>
    <!-- uncomment if you archive on zenodo
    <a href="https://zenodo.org/badge/latestdoi/XXXXXX">
        <img src="https://zenodo.org/badge/XXXXXX.svg" alt="DOI"></a>
    -->
</p>

This package provides decorators that maximize memory utilization with PyTorch
and CUDA: they start from a maximum parameter size (e.g., the batch size) and
apply successive halving until execution no longer raises an out-of-memory
exception.

## 💪 Getting Started

Assume you have a function for batched computation of nearest neighbors using
brute-force distance calculation.

```python
import torch

def knn(x, y, batch_size, k: int = 3):
    return torch.cat(
        [
            torch.cdist(x[start : start + batch_size], y).topk(k=k, dim=1, largest=False).indices
            for start in range(0, x.shape[0], batch_size)
        ],
        dim=0,
    )
```

With `torch_max_mem`, you can decorate this function so that the batch size is
reduced until no out-of-memory error occurs.

```python
import torch
from torch_max_mem import maximize_memory_utilization


@maximize_memory_utilization()
def knn(x, y, batch_size, k: int = 3):
    return torch.cat(
        [
            torch.cdist(x[start : start + batch_size], y).topk(k=k, dim=1, largest=False).indices
            for start in range(0, x.shape[0], batch_size)
        ],
        dim=0,
    )
```

In your code, you can now always pass the largest sensible batch size, e.g.,

```python
x = torch.rand(100, 100, device="cuda")
y = torch.rand(200, 100, device="cuda")
knn(x, y, batch_size=x.shape[0])
```
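
The successive-halving retry loop behind the decorator can be sketched in plain Python. This is a minimal illustration, not the library's actual implementation: `OutOfMemoryError` here is a stand-in for `torch.cuda.OutOfMemoryError`, and the keyword name `batch_size` is an assumption for the example.

```python
import functools


class OutOfMemoryError(RuntimeError):
    """Stand-in for torch.cuda.OutOfMemoryError in this sketch."""


def maximize_memory_utilization_sketch(parameter_name="batch_size"):
    """Retry the wrapped function, halving ``parameter_name`` on each OOM error."""

    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            # The parameter must be passed by keyword in this simplified sketch.
            batch_size = kwargs.pop(parameter_name)
            while batch_size >= 1:
                try:
                    return func(*args, **kwargs, **{parameter_name: batch_size})
                except OutOfMemoryError:
                    # Successive halving: retry with half the batch size.
                    batch_size //= 2
            raise MemoryError(f"even {parameter_name}=1 ran out of memory")

        return wrapper

    return decorator


# Toy usage: pretend any batch size above 4 exhausts memory.
@maximize_memory_utilization_sketch()
def process(data, batch_size):
    if batch_size > 4:
        raise OutOfMemoryError
    return [data[i : i + batch_size] for i in range(0, len(data), batch_size)]


chunks = process(list(range(10)), batch_size=32)  # silently retried with 16, 8, 4
```

The caller still asks for the largest sensible value; the decorator transparently finds the largest value that fits.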

## 🚀 Installation

The most recent release can be installed from
[PyPI](https://pypi.org/project/torch_max_mem/) with uv:

```console
uv pip install torch_max_mem
```

or with pip:

```console
python3 -m pip install torch_max_mem
```

The most recent code can be installed directly from GitHub with uv:

```console
uv pip install git+https://github.com/mberr/torch-max-mem.git
```

or with pip:

```console
python3 -m pip install git+https://github.com/mberr/torch-max-mem.git
```

## 👐 Contributing

Contributions, whether filing an issue, making a pull request, or forking, are
appreciated. See
[CONTRIBUTING.md](https://github.com/mberr/torch-max-mem/blob/master/.github/CONTRIBUTING.md)
for more information on getting involved.

## 👋 Attribution

Parts of the logic have been developed with
[Laurent Vermue](https://github.com/lvermue) for
[PyKEEN](https://github.com/pykeen/pykeen).

### ⚖️ License

The code in this package is licensed under the MIT License.

### 🍪 Cookiecutter

This package was created with
[@audreyfeldroy](https://github.com/audreyfeldroy)'s
[cookiecutter](https://github.com/cookiecutter/cookiecutter) package using
[@cthoyt](https://github.com/cthoyt)'s
[cookiecutter-snekpack](https://github.com/cthoyt/cookiecutter-snekpack)
template.

## 🛠️ For Developers

<details>
<summary>See developer instructions</summary>

The final section of the README is for those who want to get involved by making
a code contribution.

### Development Installation

To install in development mode, use the following:

```console
git clone https://github.com/mberr/torch-max-mem.git
cd torch-max-mem
uv pip install -e .
```

Alternatively, install using pip:

```console
python3 -m pip install -e .
```

### Updating Package Boilerplate

This project uses `cruft` to keep boilerplate (i.e., configuration, contribution
guidelines, documentation configuration) up-to-date with the upstream
cookiecutter package. Install cruft with either `uv tool install cruft` or
`python3 -m pip install cruft`, then run:

```console
cruft update
```

More info on Cruft's update command is available
[here](https://github.com/cruft/cruft?tab=readme-ov-file#updating-a-project).

### 🥼 Testing

After cloning the repository and installing `tox` with
`uv tool install tox --with tox-uv` or `python3 -m pip install tox tox-uv`, the
unit tests in the `tests/` folder can be run reproducibly with:

```console
tox -e py
```

Additionally, these tests are automatically re-run with each commit in a
[GitHub Action](https://github.com/mberr/torch-max-mem/actions?query=workflow%3ATests).

### 📖 Building the Documentation

The documentation can be built locally using the following:

```console
git clone https://github.com/mberr/torch-max-mem.git
cd torch-max-mem
tox -e docs
open docs/build/html/index.html
```

The documentation automatically installs the package as well as the `docs` extra
specified in the [`pyproject.toml`](pyproject.toml). `sphinx` plugins like
`texext` can be added there. Additionally, they need to be added to the
`extensions` list in [`docs/source/conf.py`](docs/source/conf.py).

The documentation can be deployed to [ReadTheDocs](https://readthedocs.io) using
[this guide](https://docs.readthedocs.io/en/stable/intro/import-guide.html). The
[`.readthedocs.yml`](.readthedocs.yml) YAML file contains all the configuration
you'll need. You can also set up continuous integration on GitHub to check not
only that Sphinx can build the documentation in an isolated environment (i.e.,
with `tox -e docs-test`) but also that
[ReadTheDocs can build it too](https://docs.readthedocs.io/en/stable/pull-requests.html).

#### Configuring ReadTheDocs

1. Log in to ReadTheDocs with your GitHub account to install the integration at
   https://readthedocs.org/accounts/login/?next=/dashboard/
2. Import your project by navigating to https://readthedocs.org/dashboard/import
   then clicking the plus icon next to your repository
3. You can rename the repository on the next screen using a more stylized name
   (i.e., with spaces and capital letters)
4. Click next, and you're good to go!

### 📦 Making a Release

#### Configuring Zenodo

[Zenodo](https://zenodo.org) is a long-term archival system that assigns a DOI
to each release of your package.

1. Log in to Zenodo via GitHub with this link:
   https://zenodo.org/oauth/login/github/?next=%2F. This brings you to a page
   that lists all of your organizations and asks you to approve installing the
   Zenodo app on GitHub. Click "grant" next to any organizations you want to
   enable the integration for, then click the big green "approve" button. This
   step only needs to be done once.
2. Navigate to https://zenodo.org/account/settings/github/, which lists all of
   your GitHub repositories (both in your username and any organizations you
   enabled). Click the on/off toggle for any relevant repositories. When you
   make a new repository, you'll have to come back to this page.

After these steps, you're ready to go! After you make a release on GitHub (steps
for this are below), you can navigate to
https://zenodo.org/account/settings/github/repository/mberr/torch-max-mem to see
the DOI for the release and a link to the Zenodo record for it.

#### Registering with the Python Package Index (PyPI)

You only have to do the following steps once.

1. Register for an account on the
   [Python Package Index (PyPI)](https://pypi.org/account/register)
2. Navigate to https://pypi.org/manage/account and make sure you have verified
   your email address. A verification email might not have been sent by default,
   so you might have to click the "options" dropdown next to your address to get
   to the "re-send verification email" button
3. 2-factor authentication has been required for PyPI since the end of 2023 (see
   this
   [blog post from PyPI](https://blog.pypi.org/posts/2023-05-25-securing-pypi-with-2fa/)).
   This means you have to first issue account recovery codes, then set up
   2-factor authentication
4. Issue an API token from https://pypi.org/manage/account/token

#### Configuring your machine's connection to PyPI

You have to do the following steps once per machine.

```console
uv tool install keyring
keyring set https://upload.pypi.org/legacy/ __token__
keyring set https://test.pypi.org/legacy/ __token__
```

Note that this deprecates previous workflows using `.pypirc`.

#### Uploading to PyPI

After installing the package in development mode and installing `tox` with
`uv tool install tox --with tox-uv` or `python3 -m pip install tox tox-uv`, run
the following from the console:

```console
tox -e finish
```

This script does the following:

1. Uses [bump-my-version](https://github.com/callowayproject/bump-my-version) to
   switch the version number in the `pyproject.toml`, `CITATION.cff`,
   `src/torch_max_mem/version.py`, and
   [`docs/source/conf.py`](docs/source/conf.py) to not have the `-dev` suffix
2. Packages the code in both a tar archive and a wheel using
   [`uv build`](https://docs.astral.sh/uv/guides/publish/#building-your-package)
3. Uploads to PyPI using
   [`uv publish`](https://docs.astral.sh/uv/guides/publish/#publishing-your-package)
4. Pushes to GitHub. You'll need to make a release going with the commit where
   the version was bumped.
5. Bumps the version to the next patch. If you made big changes and want to bump
   the version by minor, you can use `tox -e bumpversion -- minor` after.

#### Releasing on GitHub

1. Navigate to https://github.com/mberr/torch-max-mem/releases/new to draft a
   new release
2. Click the "Choose a Tag" dropdown and select the tag corresponding to the
   release you just made
3. Click the "Generate Release Notes" button to get a quick outline of recent
   changes. Modify the title and description as you see fit
4. Click the big green "Publish Release" button

This will trigger Zenodo to assign a DOI to your release as well.

</details>
@@ -0,0 +1,311 @@
(The added README.md is identical to the markdown description embedded in the PKG-INFO shown above.)