batch_probe-0.1.0-py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,194 @@
1
+ Metadata-Version: 2.4
2
+ Name: batch-probe
3
+ Version: 0.1.0
4
+ Summary: Find the maximum batch size that fits in GPU memory. Binary search with OOM recovery.
5
+ Author: Andrew H. Bond
6
+ License-Expression: MIT
7
+ Project-URL: Homepage, https://github.com/ahb-sjsu/batch-probe
8
+ Project-URL: Bug Tracker, https://github.com/ahb-sjsu/batch-probe/issues
9
+ Keywords: pytorch,gpu,memory,batch-size,oom,cuda
10
+ Classifier: Development Status :: 4 - Beta
11
+ Classifier: Intended Audience :: Developers
12
+ Classifier: Intended Audience :: Science/Research
13
+ Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
14
+ Classifier: Programming Language :: Python :: 3
15
+ Classifier: Programming Language :: Python :: 3.9
16
+ Classifier: Programming Language :: Python :: 3.10
17
+ Classifier: Programming Language :: Python :: 3.11
18
+ Classifier: Programming Language :: Python :: 3.12
19
+ Classifier: Programming Language :: Python :: 3.13
20
+ Requires-Python: >=3.9
21
+ Description-Content-Type: text/markdown
22
+ License-File: LICENSE
23
+ Requires-Dist: torch>=1.13.0
24
+ Provides-Extra: dev
25
+ Requires-Dist: pytest>=7.0.0; extra == "dev"
26
+ Requires-Dist: pytest-cov>=4.0.0; extra == "dev"
27
+ Requires-Dist: ruff>=0.4.0; extra == "dev"
28
+ Dynamic: license-file
29
+
30
+ # batch-probe
31
+
32
+ Find the maximum batch size that fits in GPU memory.
33
+
34
+ Binary search with OOM recovery, configurable safety headroom, no framework required.
35
+
36
+ ## The Problem
37
+
38
+ Every ML practitioner has done this:
39
+
40
+ ```
41
+ batch_size = 64 # OOM
42
+ batch_size = 32 # OOM
43
+ batch_size = 16 # OOM
44
+ batch_size = 8 # works... but am I leaving GPU memory on the table?
45
+ ```
46
+
47
+ `batch-probe` automates this. It binary-searches for the largest batch size your model can handle, with a safety margin so you don't OOM during real training.
48
+
49
+ ## Install
50
+
51
+ ```bash
52
+ pip install batch-probe
53
+ ```
54
+
55
+ ## Quick Start
56
+
57
+ ```python
58
+ from torch_probe import probe_batch_size
59
+
60
+ batch_size = probe_batch_size(
61
+ model,
62
+ lambda bs: {
63
+ "input_ids": torch.zeros(bs, 512, dtype=torch.long, device="cuda"),
64
+ "attention_mask": torch.ones(bs, 512, dtype=torch.long, device="cuda"),
65
+ },
66
+ )
67
+ # torch-probe: probing batch size (mode=train, range=[1, 4096], headroom=20%)... max=6, safe=4
68
+ ```
69
+
70
+ That's it. A single function call. Works with any `nn.Module`. In this example the largest batch that fit was 6; with the default 20% headroom, `int(6 * 0.8) == 4` is returned as the safe size.
71
+
72
+ ## Usage
73
+
74
+ ### Encoder models (BERT, RoBERTa, etc.)
75
+
76
+ ```python
77
+ batch_size = probe_batch_size(
78
+ model,
79
+ lambda bs: {
80
+ "input_ids": torch.zeros(bs, 128, dtype=torch.long, device="cuda"),
81
+ "attention_mask": torch.ones(bs, 128, dtype=torch.long, device="cuda"),
82
+ },
83
+ mode="train",
84
+ )
85
+ ```
86
+
87
+ ### Seq2seq models (T5, BART, etc.)
88
+
89
+ ```python
90
+ batch_size = probe_batch_size(
91
+ model,
92
+ lambda bs: {
93
+ "input_ids": torch.zeros(bs, 512, dtype=torch.long, device="cuda"),
94
+ "attention_mask": torch.ones(bs, 512, dtype=torch.long, device="cuda"),
95
+ "labels": torch.zeros(bs, 512, dtype=torch.long, device="cuda"),
96
+ },
97
+ mode="train",
98
+ )
99
+ ```
100
+
101
+ ### Vision models
102
+
103
+ ```python
104
+ batch_size = probe_batch_size(
105
+ model,
106
+ lambda bs: {"x": torch.randn(bs, 3, 224, 224, device="cuda")},
107
+ mode="infer",
108
+ )
109
+ ```
110
+
111
+ ### Inference-only probing
112
+
113
+ Inference typically uses 2-4x less memory than training, since no activations or gradients are kept for a backward pass:
114
+
115
+ ```python
116
+ infer_batch = probe_batch_size(model, input_fn, mode="infer")
117
+ train_batch = probe_batch_size(model, input_fn, mode="train")
118
+ # infer_batch >> train_batch
119
+ ```
120
+
121
+ ### Custom headroom
122
+
123
+ Default is 20% safety margin. Adjust for your risk tolerance:
124
+
125
+ ```python
126
+ # Conservative (40% headroom) — for long training runs
127
+ batch_size = probe_batch_size(model, input_fn, headroom=0.4)
128
+
129
+ # Aggressive (5% headroom) — squeeze every last sample
130
+ batch_size = probe_batch_size(model, input_fn, headroom=0.05)
131
+ ```
132
+
133
+ ### Caching
134
+
135
+ Use `cached_probe` to avoid re-probing the same model:
136
+
137
+ ```python
138
+ from torch_probe import cached_probe, clear_cache
139
+
140
+ batch_size = cached_probe(model, input_fn, mode="train") # probes
141
+ batch_size = cached_probe(model, input_fn, mode="train") # cache hit
142
+
143
+ clear_cache() # reset if model changed
144
+ ```
145
+
146
+ ## How It Works
147
+
148
+ 1. Binary search between `low` (default 1) and `high` (default 4096)
149
+ 2. At each midpoint, create dummy tensors via your `input_fn`
150
+ 3. Run a forward pass (+ backward pass in train mode)
151
+ 4. If OOM: upper bound ← midpoint − 1, clean GPU memory
152
+ 5. If success: lower bound ← midpoint + 1
153
+ 6. Return `int(max_successful × (1 − headroom))`
154
+
155
+ The OOM recovery uses `gc.collect()` + `torch.cuda.empty_cache()` + `torch.cuda.synchronize()` to fully reclaim memory between iterations.
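+
+ For reference, here is a condensed sketch of that loop (train mode only, assuming the model returns a plain tensor; the shipped `_probe.py` additionally handles infer mode, HuggingFace-style outputs, CPU devices, and restoring the model's train/eval state — the helper name `probe_sketch` is just for illustration):
+
+ ```python
+ import gc
+
+ import torch
+
+ def probe_sketch(model, input_fn, low=1, high=4096, headroom=0.2):
+     """Condensed sketch of the search loop in torch_probe/_probe.py."""
+     best = low
+     while low <= high:
+         mid = (low + high) // 2
+         try:
+             inputs = input_fn(mid)              # dummy batch at the midpoint
+             model(**inputs).mean().backward()   # forward + backward at this size
+             model.zero_grad(set_to_none=True)
+             best, low = mid, mid + 1            # fits: search the upper half
+         except torch.cuda.OutOfMemoryError:
+             high = mid - 1                      # OOM: search the lower half
+         finally:
+             gc.collect()                        # reclaim memory between attempts
+             if torch.cuda.is_available():
+                 torch.cuda.empty_cache()
+                 torch.cuda.synchronize()
+     return max(1, int(best * (1.0 - headroom)))
+ ```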
156
+
157
+ ## vs. Alternatives
158
+
159
+ | Feature | batch-probe | Lightning BatchSizeFinder | HF `auto_find_batch_size` |
160
+ |---|---|---|---|
161
+ | Works with raw PyTorch | Yes | No (needs LightningModule) | No (needs HF Trainer) |
162
+ | Algorithm | Binary search | Power-of-2 scaling | Halve on OOM |
163
+ | Configurable headroom | Yes | No | No |
164
+ | Train + infer modes | Yes | Train only | Train only |
165
+ | Dependencies | torch only | pytorch-lightning | accelerate |
166
+
167
+ ## API Reference
168
+
169
+ ### `probe_batch_size(model, input_fn, *, mode, low, high, headroom, device, verbose)`
170
+
171
+ Find the maximum safe batch size.
172
+
173
+ - **model** (`nn.Module`): Your model, already on the target device.
174
+ - **input_fn** (`Callable[[int], dict[str, Tensor]]`): Takes batch size, returns dict of tensors for `model(**inputs)`.
175
+ - **mode** (`"train"` | `"infer"`): Train mode runs forward + backward. Default: `"train"`.
176
+ - **low** (`int`): Minimum batch size. Default: `1`.
177
+ - **high** (`int`): Upper bound for search. Default: `4096`.
178
+ - **headroom** (`float`): Safety margin. Default: `0.2` (20%).
179
+ - **device** (`str | torch.device | None`): Override device. Default: model's device.
180
+ - **verbose** (`bool`): Print progress. Default: `True`.
181
+
182
+ Returns: `int` — safe batch size.
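+
+ For example, a call that overrides every default (the values here are illustrative, not recommendations):
+
+ ```python
+ batch_size = probe_batch_size(
+     model,
+     input_fn,
+     mode="infer",     # forward only, under torch.no_grad()
+     low=2,            # smallest batch size worth trying
+     high=1024,        # cap the search range
+     headroom=0.1,     # keep 10% slack instead of the default 20%
+     device="cuda:1",  # probe a specific GPU instead of the model's device
+     verbose=False,    # suppress the progress line
+ )
+ ```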
183
+
184
+ ### `cached_probe(model, input_fn, *, mode, **kwargs)`
185
+
186
+ Same as `probe_batch_size` but caches results keyed on model class, param count, input shapes, and mode.
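+
+ Because the cache key includes the mode and the input shapes, changing either triggers a fresh probe:
+
+ ```python
+ train_bs = cached_probe(model, input_fn, mode="train")   # probes
+ infer_bs = cached_probe(model, input_fn, mode="infer")   # probes again (different key)
+ train_bs = cached_probe(model, input_fn, mode="train")   # cache hit
+ ```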
187
+
188
+ ### `clear_cache()`
189
+
190
+ Clear all cached probe results.
191
+
192
+ ## License
193
+
194
+ MIT
@@ -0,0 +1,10 @@
1
+ batch_probe-0.1.0.dist-info/licenses/LICENSE,sha256=26X3SJ_z1LyCWoldsXSVmDu5ttk5d4SAxrgnxmlBlMo,1092
2
+ torch_probe/__init__.py,sha256=JYZ4O7Psx9WemtWsp_0tMS3DWB5Wjo7qWgUV6QgF3ys,344
3
+ torch_probe/_cache.py,sha256=yEoiSleY1KSrcsdfVs72aEUw7jTDFDKc4GkXoBNFF4w,1736
4
+ torch_probe/_cleanup.py,sha256=pPy84HVpV8LSuAHxIBjktnNGBZSrVFtyqP0RaJ06TeA,413
5
+ torch_probe/_probe.py,sha256=ADUz1cBPyP7G_Ud9FG0KWk2SYxi-FVsbcQ8WNo5GdlY,5589
6
+ torch_probe/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
7
+ batch_probe-0.1.0.dist-info/METADATA,sha256=bFfYvcTbVFofV4KwpNrSanE0rOG8TgFDCjO8yRFkmpc,6098
8
+ batch_probe-0.1.0.dist-info/WHEEL,sha256=aeYiig01lYGDzBgS8HxWXOg3uV61G9ijOsup-k9o1sk,91
9
+ batch_probe-0.1.0.dist-info/top_level.txt,sha256=65Vtl91J5NNyFLwI8ywIHO_2_GKHxsPFUMkNUa1mOJs,12
10
+ batch_probe-0.1.0.dist-info/RECORD,,
@@ -0,0 +1,5 @@
1
+ Wheel-Version: 1.0
2
+ Generator: setuptools (82.0.1)
3
+ Root-Is-Purelib: true
4
+ Tag: py3-none-any
5
+
@@ -0,0 +1,21 @@
1
+ MIT License
2
+
3
+ Copyright (c) 2026 Andrew H. Bond
4
+
5
+ Permission is hereby granted, free of charge, to any person obtaining a copy
6
+ of this software and associated documentation files (the "Software"), to deal
7
+ in the Software without restriction, including without limitation the rights
8
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9
+ copies of the Software, and to permit persons to whom the Software is
10
+ furnished to do so, subject to the following conditions:
11
+
12
+ The above copyright notice and this permission notice shall be included in all
13
+ copies or substantial portions of the Software.
14
+
15
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21
+ SOFTWARE.
@@ -0,0 +1 @@
1
+ torch_probe
@@ -0,0 +1,10 @@
1
+ # Copyright (c) 2026 Andrew H. Bond
2
+ # Licensed under the MIT License.
3
+
4
+ """torch-probe: Find the maximum batch size that fits in GPU memory."""
5
+
6
+ from torch_probe._cache import cached_probe, clear_cache
7
+ from torch_probe._probe import probe_batch_size
8
+
9
+ __all__ = ["probe_batch_size", "cached_probe", "clear_cache"]
10
+ __version__ = "0.1.0"
torch_probe/_cache.py ADDED
@@ -0,0 +1,62 @@
1
+ # Copyright (c) 2026 Andrew H. Bond
2
+ # Licensed under the MIT License.
3
+
4
+ """In-memory cache for probe results."""
5
+
6
+ from __future__ import annotations
7
+
8
+ from typing import Any, Callable, Dict, Literal
9
+
10
+ import torch
11
+ import torch.nn as nn
12
+
13
+ from torch_probe._probe import probe_batch_size
14
+
15
+ _cache: dict[str, int] = {}
16
+
17
+
18
+ def _make_key(
19
+ model: nn.Module,
20
+ input_fn: Callable[[int], Dict[str, torch.Tensor]],
21
+ mode: str,
22
+ ) -> str:
23
+ """Build a cache key from model identity and input shape."""
24
+ # Model class + param count gives a stable identity
25
+ model_id = f"{model.__class__.__name__}_{sum(p.numel() for p in model.parameters())}"
26
+
27
+ # Probe input shapes at batch=1
28
+ try:
29
+ sample = input_fn(1)
30
+ shapes = "_".join(
31
+ f"{k}:{tuple(v.shape)}:{v.dtype}" for k, v in sorted(sample.items())
32
+ )
33
+ # Clean up sample tensors
34
+ del sample
35
+ except Exception:
36
+ shapes = "unknown"
37
+
38
+ return f"{model_id}__{mode}__{shapes}"
39
+
40
+
41
+ def cached_probe(
42
+ model: nn.Module,
43
+ input_fn: Callable[[int], Dict[str, torch.Tensor]],
44
+ *,
45
+ mode: Literal["train", "infer"] = "train",
46
+ **kwargs: Any,
47
+ ) -> int:
48
+ """Like :func:`probe_batch_size` but caches results.
49
+
50
+ Same arguments as :func:`probe_batch_size`. Returns a cached result
51
+ if the same model class, parameter count, input shapes, and mode
52
+ have been probed before.
53
+ """
54
+ key = _make_key(model, input_fn, mode)
55
+ if key not in _cache:
56
+ _cache[key] = probe_batch_size(model, input_fn, mode=mode, **kwargs)
57
+ return _cache[key]
58
+
59
+
60
+ def clear_cache() -> None:
61
+ """Clear all cached probe results."""
62
+ _cache.clear()
@@ -0,0 +1,18 @@
1
+ # Copyright (c) 2026 Andrew H. Bond
2
+ # Licensed under the MIT License.
3
+
4
+ """GPU memory cleanup utilities."""
5
+
6
+ from __future__ import annotations
7
+
8
+ import gc
9
+
10
+ import torch
11
+
12
+
13
+ def gpu_cleanup() -> None:
14
+ """Aggressively free GPU memory after an OOM or between probe iterations."""
15
+ gc.collect()
16
+ if torch.cuda.is_available():
17
+ torch.cuda.empty_cache()
18
+ torch.cuda.synchronize()
torch_probe/_probe.py ADDED
@@ -0,0 +1,173 @@
1
+ # Copyright (c) 2026 Andrew H. Bond
2
+ # Licensed under the MIT License.
3
+
4
+ """Core binary-search GPU memory probe."""
5
+
6
+ from __future__ import annotations
7
+
8
+ from typing import Any, Callable, Dict, Literal, Optional, Union
9
+
10
+ import torch
11
+ import torch.nn as nn
12
+
13
+ from torch_probe._cleanup import gpu_cleanup
14
+
15
+
16
+ def _extract_loss(outputs: Any) -> torch.Tensor:
17
+ """Extract a scalar loss from model outputs for the backward pass.
18
+
19
+ Handles:
20
+ - HuggingFace ModelOutput / dataclass with .loss attribute
21
+ - dict with "loss" key
22
+ - plain Tensor
23
+ - tuple (uses first element)
24
+ - dict without "loss" (uses first value)
25
+ """
26
+ # .loss attribute (HuggingFace ModelOutput, dataclasses)
27
+ if hasattr(outputs, "loss") and outputs.loss is not None:
28
+ return outputs.loss
29
+
30
+ # dict with "loss" key
31
+ if isinstance(outputs, dict):
32
+ if "loss" in outputs:
33
+ return outputs["loss"]
34
+ # Fall back to first tensor value
35
+ for v in outputs.values():
36
+ if isinstance(v, torch.Tensor):
37
+ return v.mean()
38
+
39
+ # plain Tensor
40
+ if isinstance(outputs, torch.Tensor):
41
+ return outputs.mean()
42
+
43
+ # tuple / list
44
+ if isinstance(outputs, (tuple, list)) and len(outputs) > 0:
45
+ first = outputs[0]
46
+ if isinstance(first, torch.Tensor):
47
+ return first.mean()
48
+
49
+ raise TypeError(
50
+ f"Cannot extract a loss from model output of type {type(outputs)}. "
51
+ "Ensure your model returns a tensor, a dict with a 'loss' key, "
52
+ "or an object with a .loss attribute."
53
+ )
54
+
55
+
56
+ def probe_batch_size(
57
+ model: nn.Module,
58
+ input_fn: Callable[[int], Dict[str, torch.Tensor]],
59
+ *,
60
+ mode: Literal["train", "infer"] = "train",
61
+ low: int = 1,
62
+ high: int = 4096,
63
+ headroom: float = 0.2,
64
+ device: Optional[Union[torch.device, str]] = None,
65
+ verbose: bool = True,
66
+ ) -> int:
67
+ """Find the maximum batch size that fits in GPU memory.
68
+
69
+ Uses binary search with OOM recovery. Tries a forward pass (and backward
70
+ pass in train mode) at each candidate batch size. Returns the largest
71
+ successful size minus a safety margin.
72
+
73
+ Args:
74
+ model: Any ``nn.Module``, already on the target device.
75
+ input_fn: A callable that takes a batch size ``int`` and returns a dict
76
+ of tensors to pass as ``**kwargs`` to ``model()``. Tensors must
77
+ already be on the correct device.
78
+ mode: ``"train"`` runs forward + backward (2-4x more memory).
79
+ ``"infer"`` runs forward only under ``torch.no_grad()``.
80
+ low: Minimum batch size to try (and the floor for the return value).
81
+ high: Starting upper bound for binary search.
82
+ headroom: Fraction of headroom to subtract. ``0.2`` (default) means
83
+ the returned batch size is ``int(max_successful * 0.8)``.
84
+ device: Device to check. Defaults to the device of the model's first
85
+ parameter. On CPU the probe still runs but skips CUDA-specific cleanup.
86
+ verbose: Print probe progress.
87
+
88
+ Returns:
89
+ Safe batch size (``int``), guaranteed ``>= low``.
90
+
91
+ Example::
92
+
93
+ from torch_probe import probe_batch_size
94
+
95
+ batch_size = probe_batch_size(
96
+ model,
97
+ lambda bs: {
98
+ "input_ids": torch.zeros(bs, 512, dtype=torch.long, device="cuda"),
99
+ "attention_mask": torch.ones(bs, 512, dtype=torch.long, device="cuda"),
100
+ },
101
+ )
102
+ """
103
+ # Resolve device
104
+ if device is None:
105
+ try:
106
+ device = next(model.parameters()).device
107
+ except StopIteration:
108
+ device = torch.device("cpu")
109
+ device = torch.device(device) if isinstance(device, str) else device
110
+
111
+ is_cuda = device.type == "cuda"
112
+
113
+ # Save and restore model state
114
+ was_training = model.training
115
+ min_batch = best = low  # remember the requested floor; the search below mutates low
116
+
117
+ if verbose:
118
+ print(f"torch-probe: probing batch size (mode={mode}, range=[{low}, {high}], "
119
+ f"headroom={headroom:.0%})...", end="", flush=True)
120
+
121
+ while low <= high:
122
+ mid = (low + high) // 2
123
+ success = False
124
+ inputs = None
125
+
126
+ try:
127
+ if is_cuda:
128
+ gpu_cleanup()
129
+ inputs = input_fn(mid)
130
+
131
+ if mode == "train":
132
+ model.train()
133
+ outputs = model(**inputs)
134
+ loss = _extract_loss(outputs)
135
+ loss.backward()
136
+ model.zero_grad(set_to_none=True)
137
+ else:
138
+ model.eval()
139
+ with torch.no_grad():
140
+ model(**inputs)
141
+
142
+ success = True
143
+
144
+ except (torch.cuda.OutOfMemoryError, RuntimeError) as e:
145
+ err_msg = str(e).lower()
146
+ if "out of memory" not in err_msg and "cuda" not in err_msg:
147
+ # Not an OOM — re-raise
148
+ model.train(was_training)
149
+ raise
150
+ finally:
151
+ # Always clean up tensors
152
+ if inputs is not None:
153
+ del inputs
154
+ if is_cuda:
155
+ gpu_cleanup()
156
+
157
+ if success:
158
+ best = mid
159
+ low = mid + 1
160
+ else:
161
+ high = mid - 1
162
+
163
+ # Restore model state
164
+ model.train(was_training)
165
+
166
+ safe = max(1, int(best * (1.0 - headroom)))
167
+ # Never go below the user's requested minimum
168
+ safe = max(safe, min_batch)
169
+
170
+ if verbose:
171
+ print(f" max={best}, safe={safe}")
172
+
173
+ return safe
torch_probe/py.typed ADDED
File without changes