qpytorch 0.1__py3-none-any.whl → 0.1.1__py3-none-any.whl
This diff shows the changes between publicly released versions of the package as they appear in their public registry. It is provided for informational purposes only.
Potentially problematic release: this version of qpytorch has been flagged as possibly problematic.
- qpytorch/version.py +2 -2
- {qpytorch-0.1.dist-info → qpytorch-0.1.1.dist-info}/METADATA +29 -27
- {qpytorch-0.1.dist-info → qpytorch-0.1.1.dist-info}/RECORD +6 -6
- {qpytorch-0.1.dist-info → qpytorch-0.1.1.dist-info}/LICENSE +0 -0
- {qpytorch-0.1.dist-info → qpytorch-0.1.1.dist-info}/WHEEL +0 -0
- {qpytorch-0.1.dist-info → qpytorch-0.1.1.dist-info}/top_level.txt +0 -0
qpytorch/version.py CHANGED
{qpytorch-0.1.dist-info → qpytorch-0.1.1.dist-info}/METADATA CHANGED

@@ -1,6 +1,6 @@
 Metadata-Version: 2.1
 Name: qpytorch
-Version: 0.1
+Version: 0.1.1
 Summary: An implementation of Q-Exponential Processes in Pytorch based on GPyTorch
 Home-page: https://lanzithinking.github.io/qepytorch/
 Author: Shiwei Lan
@@ -19,6 +19,7 @@ Requires-Dist: scikit-learn
 Requires-Dist: scipy>=1.6.0
 Requires-Dist: linear-operator>=0.6
 Requires-Dist: gpytorch>=1.13
+Requires-Dist: numpy<2
 Provides-Extra: dev
 Requires-Dist: pre-commit; extra == "dev"
 Requires-Dist: setuptools-scm; extra == "dev"
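The `Requires-Dist: numpy<2` line added in 0.1.1 pins the package to NumPy 1.x. As a rough illustration of how an exclusive `<` upper bound behaves, here is a stdlib-only sketch; real installers implement the full PEP 440 rules (pre-releases, post-releases, epochs) via the `packaging` library, and the helper names below are hypothetical, not part of any installer's API.

```python
# Toy model of the "numpy<2" specifier: any 1.x release satisfies the
# bound, any 2.x release does not. Dotted-integer comparison only;
# real tools follow PEP 440 via the `packaging` library.

def parse_version(v: str) -> tuple:
    """Split a plain dotted version like '1.26.4' into an int tuple."""
    return tuple(int(part) for part in v.split("."))

def satisfies_upper_bound(installed: str, bound: str) -> bool:
    """True if `installed` is strictly below the exclusive `bound`."""
    return parse_version(installed) < parse_version(bound)

print(satisfies_upper_bound("1.26.4", "2"))  # True: numpy 1.26.4 is allowed
print(satisfies_upper_bound("2.0.1", "2"))   # False: numpy 2.x is excluded
```

Tuple comparison does the right thing here because a shorter tuple that is a prefix of a longer one sorts first, so `(2, 0, 1) < (2,)` is false.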
@@ -63,60 +64,61 @@ Requires-Dist: nbval; extra == "test"
 [](LICENSE)
 
 [](https://www.python.org/downloads/)
-[](https://anaconda.org/qpytorch)
 [](https://pypi.org/project/qpytorch)
 
-Q<sup style="font-size: 0.5em;">ⓔ</sup>PyTorch is a Q-exponential process
+Q<sup style="font-size: 0.5em;">ⓔ</sup>PyTorch is a Python package for Q-exponential processes ([QEP](https://papers.nips.cc/paper_files/paper/2023/file/e6bfdd58f1326ff821a1b92743963bdf-Paper-Conference.pdf)) implemented in PyTorch and built on [GPyTorch](https://gpytorch.ai). Q<sup style="font-size: 0.5em;">ⓔ</sup>PyTorch is designed to facilitate creating scalable, flexible, and modular QEP models.
 
-
-
-or by composing many of our already existing `LinearOperators`.
-This allows not only for easy implementation of popular scalable QEP techniques,
-but often also for significantly improved utilization of GPU computing compared to solvers based on the Cholesky decomposition.
+Different from GPyTorch for Gaussian process (GP) models, Q<sup style="font-size: 0.5em;">ⓔ</sup>PyTorch focuses on QEP, which generalizes GP by allowing flexible regularization on function spaces through a parameter $q>0$ and embraces GP as a special case with $q=2$. QEP is proven to be superior to GP in modeling inhomogeneous objects with abrupt changes or sharp contrast for $q<2$ [[Li et al (2023)]](https://papers.nips.cc/paper_files/paper/2023/hash/e6bfdd58f1326ff821a1b92743963bdf-Abstract-Conference.html).
+Inherited from GPyTorch, Q<sup style="font-size: 0.5em;">ⓔ</sup>PyTorch has an efficient and scalable implementation that takes advantage of the numerical linear algebra library [LinearOperator](https://github.com/cornellius-gp/linear_operator) and improved GPU utilization.
 
-Q<sup style="font-size: 0.5em;">ⓔ</sup>PyTorch provides (1) significant GPU acceleration (through MVM based inference);
-(2) state-of-the-art implementations of the latest algorithmic advances for scalability and flexibility ([SKI/KISS-GP](http://proceedings.mlr.press/v37/wilson15.pdf), [stochastic Lanczos expansions](https://arxiv.org/abs/1711.03481), [LOVE](https://arxiv.org/pdf/1803.06058.pdf), [SKIP](https://arxiv.org/pdf/1802.08903.pdf), [stochastic variational](https://arxiv.org/pdf/1611.00336.pdf) [deep kernel learning](http://proceedings.mlr.press/v51/wilson16.pdf), ...);
-(3) easy integration with deep learning frameworks.
 
+<!--
+Q<sup style="font-size: 0.5em;">ⓔ</sup>PyTorch features ...
+-->
 
-## Examples, Tutorials, and Documentation
 
-
+## Tutorials, Examples, and Documentation
+
+See the [**documentation**](https://qepytorch.readthedocs.io/en/stable/) on how to construct various QEP models in Q<sup style="font-size: 0.5em;">ⓔ</sup>PyTorch.
 
 ## Installation
 
 **Requirements**:
 - Python >= 3.10
-- PyTorch >= 2.
+- PyTorch >= 2.0
 - GPyTorch >= 1.13
 
+#### Stable Version
+
 Install Q<sup style="font-size: 0.5em;">ⓔ</sup>PyTorch using pip or conda:
 
 ```bash
 pip install qpytorch
-conda install qpytorch
+conda install qpytorch
 ```
 
 (To use packages globally but install Q<sup style="font-size: 0.5em;">ⓔ</sup>PyTorch as a user-only package, use `pip install --user` above.)
 
-#### Latest
+#### Latest Version
 
-To upgrade to the latest
+To upgrade to the latest version, run
 
 ```bash
-pip install --upgrade git+https://github.com/cornellius-gp/linear_operator.git
-pip install --upgrade git+https://github.com/cornellius-gp/gpytorch.git
 pip install --upgrade git+https://github.com/lanzithinking/qepytorch.git
 ```
 
-####
+#### From Source (for development)
 
 If you are contributing a pull request, it is best to perform a manual installation:
 
 ```sh
-git clone https://github.com/lanzithinking/qepytorch.git
-cd
+git clone https://github.com/lanzithinking/qepytorch.git
+cd qepytorch
+# either
 pip install -e .[dev,docs,examples,keops,pyro,test]  # keops and pyro are optional
+# or
 conda env create -f env_install.yaml  # installed in the environment qpytorch
 ```
 
 <!--
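The README text above describes QEP as generalizing GP through the regularization exponent $q>0$, with $q=2$ recovering GP and $q<2$ suiting objects with abrupt changes. A minimal stdlib sketch of why smaller $q$ behaves differently; the `q_penalty` function is purely illustrative and is not part of the qpytorch API.

```python
# Illustrative only: an L_q-style penalty sum(|u_i|^q), NOT the qpytorch API.
# For q = 2 (the GP case) small coefficients are barely penalized, so mass
# spreads out smoothly; for q = 1 small coefficients cost relatively more,
# pushing solutions toward sparse/sharp behavior -- the q < 2 regime the
# README associates with abrupt changes or sharp contrast.

def q_penalty(u, q):
    """Sum of |x|^q over the entries of u."""
    return sum(abs(x) ** q for x in u)

u = [0.0, 0.1, 1.0]
print(q_penalty(u, 2))  # ~1.01: the 0.1 entry contributes almost nothing
print(q_penalty(u, 1))  # ~1.1:  the 0.1 entry is penalized ten times more
```

The small entry `0.1` costs `0.01` under $q=2$ but `0.1` under $q=1$, which is the intuition behind the regularization flexibility claimed above.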
@@ -137,7 +139,7 @@ To discuss any issues related to this AUR package refer to the comments section
 
 ## Citing Us
 
-If you use Q<sup style="font-size: 0.5em;">ⓔ</sup>PyTorch, please cite the following
+If you use Q<sup style="font-size: 0.5em;">ⓔ</sup>PyTorch, please cite the following paper:
 > [Li, Shuyi, Michael O'Connor, and Shiwei Lan. "Bayesian Learning via Q-Exponential Process." In Advances in Neural Information Processing Systems (2023).](https://papers.nips.cc/paper_files/paper/2023/hash/e6bfdd58f1326ff821a1b92743963bdf-Abstract-Conference.html)
 ```
 @inproceedings{li2023QEP,
@@ -159,18 +161,18 @@ for information on submitting issues and pull requests.
 Q<sup style="font-size: 0.5em;">ⓔ</sup>PyTorch is primarily maintained by:
 - [Shiwei Lan](https://math.la.asu.edu/~slan) (Arizona State University)
 
-
-Shuyi Li,
+Thanks to the following contributors, including (but not limited to):
+- Shuyi Li,
 Guangting Yu,
 Zhi Chang,
 Chukwudi Paul Obite,
 Keyan Wu,
 and many more!
 
-
+<!--
 ## Acknowledgements
 Development of Q<sup style="font-size: 0.5em;">ⓔ</sup>PyTorch is supported by.
-
+-->
 
 ## License
 
{qpytorch-0.1.dist-info → qpytorch-0.1.1.dist-info}/RECORD CHANGED

@@ -1,6 +1,6 @@
 qpytorch/__init__.py,sha256=0sD7DfLvEXx3PvqrTyl5bOYprus1Wjzxs1hOzTbQx6s,12392
 qpytorch/module.py,sha256=xjw6X-v3_iVt4cOUEQdQp-cbTKkLCqwaeJoUardRQSQ,1259
-qpytorch/version.py,sha256=
+qpytorch/version.py,sha256=aDfNBQh0vsVL7CSgYEMdMUbrxVw92raLxMivH20w2A4,160
 qpytorch/constraints/__init__.py,sha256=JS4bzSGrgAiDtOM_Y4rerzkoMAUt671E2U1SVakhfX4,138
 qpytorch/distributions/__init__.py,sha256=OSVX5Lfb72NKVFWVku0t-D2RyZuVcJAIICndOX9f5EA,973
 qpytorch/distributions/delta.py,sha256=AC-N5fhBQad872jl8q9PMLq46E5qwcQOXkRf1PSW-Cg,3507
@@ -95,8 +95,8 @@ qpytorch/variational/tril_natural_variational_distribution.py,sha256=3wPggMUY293
 qpytorch/variational/uncorrelated_multitask_variational_strategy.py,sha256=LdiVacUdQZ-BJDGcdHwlZ_uIVPfRl1WAgEtfs3ahxdg,5176
 qpytorch/variational/unwhitened_variational_strategy.py,sha256=068ScAOHk0VHLfkwakR41vrpASqIDjvvnhOFYIEE9uU,10713
 qpytorch/variational/variational_strategy.py,sha256=TD_rPpQL2n7bK869tICT6cA0cc0YFgqi0AsHfyVo2Zc,13155
-qpytorch-0.1.dist-info/LICENSE,sha256=QcK8fAvGl70vlwIHUqKdi4oV_SvhC6lBGYXTR1znTsY,1067
-qpytorch-0.1.dist-info/METADATA,sha256=
-qpytorch-0.1.dist-info/WHEEL,sha256=tZoeGjtWxWRfdplE7E3d45VPlLNQnvbKiYnx7gwAy8A,92
-qpytorch-0.1.dist-info/top_level.txt,sha256=WZP9m4PVYtj2RhzbzmW4UqUGOy-sOfumPrjnvNFrv4Q,9
-qpytorch-0.1.dist-info/RECORD,,
+qpytorch-0.1.1.dist-info/LICENSE,sha256=QcK8fAvGl70vlwIHUqKdi4oV_SvhC6lBGYXTR1znTsY,1067
+qpytorch-0.1.1.dist-info/METADATA,sha256=9DnPKInL5BeVBkG7vk7XFkUwT_vRSNxyvur0mOIVL54,7499
+qpytorch-0.1.1.dist-info/WHEEL,sha256=tZoeGjtWxWRfdplE7E3d45VPlLNQnvbKiYnx7gwAy8A,92
+qpytorch-0.1.1.dist-info/top_level.txt,sha256=WZP9m4PVYtj2RhzbzmW4UqUGOy-sOfumPrjnvNFrv4Q,9
+qpytorch-0.1.1.dist-info/RECORD,,
{qpytorch-0.1.dist-info → qpytorch-0.1.1.dist-info}/LICENSE: file without changes
{qpytorch-0.1.dist-info → qpytorch-0.1.1.dist-info}/WHEEL: file without changes
{qpytorch-0.1.dist-info → qpytorch-0.1.1.dist-info}/top_level.txt: file without changes