rienet-torch 0.1.2__tar.gz
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- rienet_torch-0.1.2/LICENSE +21 -0
- rienet_torch-0.1.2/MANIFEST.in +15 -0
- rienet_torch-0.1.2/PKG-INFO +408 -0
- rienet_torch-0.1.2/README.md +381 -0
- rienet_torch-0.1.2/pyproject.toml +63 -0
- rienet_torch-0.1.2/setup.cfg +4 -0
- rienet_torch-0.1.2/src/rienet_torch/__init__.py +67 -0
- rienet_torch-0.1.2/src/rienet_torch/dtype_utils.py +95 -0
- rienet_torch-0.1.2/src/rienet_torch/lag_transform.py +14 -0
- rienet_torch-0.1.2/src/rienet_torch/losses.py +85 -0
- rienet_torch-0.1.2/src/rienet_torch/ops_layers.py +713 -0
- rienet_torch-0.1.2/src/rienet_torch/rnn.py +296 -0
- rienet_torch-0.1.2/src/rienet_torch/serialization.py +91 -0
- rienet_torch-0.1.2/src/rienet_torch/trainable_layers.py +1422 -0
- rienet_torch-0.1.2/src/rienet_torch/version.py +1 -0
- rienet_torch-0.1.2/src/rienet_torch.egg-info/SOURCES.txt +13 -0

@@ -0,0 +1,21 @@ rienet_torch-0.1.2/LICENSE
MIT License

Copyright (c) 2025 Christian Bongiorno

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
@@ -0,0 +1,15 @@ rienet_torch-0.1.2/MANIFEST.in
include LICENSE
include README.md
include pyproject.toml
graft src/rienet_torch
prune .local
prune .github
prune .pytest_cache
prune notes
prune tests
prune tools
prune src/rienet_torch.egg-info
global-exclude __pycache__
global-exclude *.py[cod]
global-exclude *.so
global-exclude .DS_Store
@@ -0,0 +1,408 @@ rienet_torch-0.1.2/PKG-INFO
Metadata-Version: 2.4
Name: rienet-torch
Version: 0.1.2
Summary: PyTorch implementation of RIEnet for global minimum-variance portfolio optimization
Author: Christian Bongiorno
Maintainer: Christian Bongiorno
License-Expression: MIT
Keywords: pytorch,portfolio-optimization,global-minimum-variance,covariance-estimation,finance
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Science/Research
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Office/Business :: Financial :: Investment
Requires-Python: >=3.10
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: numpy>=1.26.0
Requires-Dist: torch>=2.5.0
Provides-Extra: dev
Requires-Dist: build>=1.2; extra == "dev"
Requires-Dist: pytest>=9.0; extra == "dev"
Requires-Dist: twine>=6.0; extra == "dev"
Dynamic: license-file

# RIEnet Torch: A Rotational Invariant Estimator Network for GMV Optimization

**This library implements the neural estimators introduced in:**

- **Bongiorno, C., Manolakis, E., & Mantegna, R. N. (2026). End-to-End Large Portfolio Optimization for Variance Minimization with Neural Networks through Covariance Cleaning. The Journal of Finance and Data Science: 100179. [10.1016/j.jfds.2026.100179](https://doi.org/10.1016/j.jfds.2026.100179)**
- **Bongiorno, C., Manolakis, E., & Mantegna, R. N. (2025). Neural Network-Driven Volatility Drag Mitigation under Aggressive Leverage. In *Proceedings of the 6th ACM International Conference on AI in Finance (ICAIF ’25)*. [10.1145/3768292.3770370](https://doi.org/10.1145/3768292.3770370)**

**RIEnet Torch** is a PyTorch implementation for end-to-end global minimum-variance portfolio construction.

Given a tensor of asset returns, the model estimates a structured covariance / precision representation and produces analytic GMV portfolio weights in a single forward pass.

This repository is intended for:

- research and methodological replication,
- experimentation on large equity universes,
- integration into quantitative portfolio construction workflows,
- deployment as a standalone PyTorch package.

For the original TensorFlow implementation, see the [RIEnet repository](https://github.com/bongiornoc/RIEnet).

## What this package provides

- End-to-end training on a realized-variance objective for GMV portfolios
- Access to portfolio weights, cleaned covariance matrices, and precision matrices
- A dimension-agnostic architecture suitable for large cross-sectional universes
- A PyTorch implementation aligned with the published methodology

## Evidence in published experiments

The empirical properties of the method are documented in the associated papers.

In particular, the published experiments evaluate the model on large equity universes under a global minimum-variance objective and compare it against standard covariance-based benchmarks.

For details on datasets, training protocol, benchmark definitions, and evaluation metrics, please refer to the papers listed above.

## Module Organization

- `rienet_torch.trainable_layers`: modules with trainable parameters (`RIEnetLayer`, `LagTransformLayer`, `DeepLayer`, `DeepRecurrentLayer`, `CorrelationEigenTransformLayer`).
- `rienet_torch.ops_layers`: deterministic operation modules (statistics, normalization, eigensystem algebra, weight post-processing).
- `rienet_torch.losses`: the GMV variance objective.
- `rienet_torch.serialization`: Torch-native save/load helpers.

## Installation

Install from PyPI:

```bash
pip install rienet-torch
```

Or install from source:

```bash
cd /path/to/RIEnet-torch
pip install -e .
```

For development:

```bash
pip install -e ".[dev]"
```

## Quick Start

### Basic Usage

```python
import torch
from rienet_torch import RIEnetLayer, variance_loss_function

# Defaults reproduce the compact GMV architecture
# (bidirectional GRU with 16 units, 8-unit volatility head)
rienet_layer = RIEnetLayer(output_type=["weights", "precision"])

# Sample data: (batch_size, n_stocks, n_days)
returns = torch.randn(32, 10, 60) * 0.02

# Retrieve GMV weights and cleaned precision in one pass
outputs = rienet_layer(returns)
weights = outputs["weights"]      # (32, 10, 1)
precision = outputs["precision"]  # (32, 10, 10)

# GMV training objective
covariance = torch.randn(32, 10, 10)
covariance = covariance @ covariance.transpose(-1, -2)
loss = variance_loss_function(covariance, weights)
print(loss.shape)  # torch.Size([32, 1, 1])
```

### Training with the GMV Variance Loss

```python
import torch
from rienet_torch import RIEnetLayer, variance_loss_function

model = RIEnetLayer(output_type="weights")
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Synthetic training data
X_train = torch.randn(256, 10, 60) * 0.02
Sigma_train = torch.randn(256, 10, 10)
Sigma_train = Sigma_train @ Sigma_train.transpose(-1, -2)

model.train()
for step in range(100):
    optimizer.zero_grad()
    weights = model(X_train)
    loss = variance_loss_function(Sigma_train, weights).mean()
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    optimizer.step()
```

> **Tip:** When you intend to deploy RIEnet on portfolios of varying size, train on batches that span different asset universes. The RIE-based architecture is dimension agnostic and benefits from heterogeneous training shapes.

### Using Different Output Types

```python
import torch
from rienet_torch import RIEnetLayer

returns = torch.randn(32, 10, 60) * 0.02

# GMV weights only
weights = RIEnetLayer(output_type="weights")(returns)

# Precision matrix only
precision_matrix = RIEnetLayer(output_type="precision")(returns)

# Precision, covariance, and lag-transformed inputs in one pass
outputs = RIEnetLayer(
    output_type=["precision", "covariance", "input_transformed"]
)(returns)
precision_matrix = outputs["precision"]
covariance_matrix = outputs["covariance"]
lagged_inputs = outputs["input_transformed"]

# Spectral components (non-inverse)
spectral = RIEnetLayer(
    output_type=["eigenvalues", "eigenvectors", "transformed_std"]
)(returns)
cleaned_eigenvalues = spectral["eigenvalues"]   # (batch, n_stocks, 1)
eigenvectors = spectral["eigenvectors"]         # (batch, n_stocks, n_stocks)
transformed_std = spectral["transformed_std"]   # (batch, n_stocks, 1)

# Optional: disable variance normalisation
raw_covariance = RIEnetLayer(
    output_type="covariance",
    normalize_transformed_variance=False
)(returns)
```

> When RIEnet is trained end-to-end on the GMV variance loss, leave `normalize_transformed_variance=True` (the default). The loss is invariant to global covariance rescalings and the layer keeps the implied variance scale centred around one. Disable the normalization only when using alternative objectives where the absolute volatility scale must be preserved.
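The invariance mentioned in the note can be checked directly with the textbook GMV closed form. The sketch below uses plain torch (not the package's internal code) and shows that a global rescaling of the covariance leaves the normalised GMV weights unchanged:

```python
import torch

torch.manual_seed(0)
n_assets = 6
g = torch.randn(n_assets, n_assets)
sigma = g @ g.T + 1e-2 * torch.eye(n_assets)  # synthetic SPD covariance

def gmv_weights(cov: torch.Tensor) -> torch.Tensor:
    """Closed-form GMV weights: Sigma^{-1} 1, normalised to sum to one."""
    w = torch.linalg.solve(cov, torch.ones(cov.shape[-1], 1))
    return w / w.sum()

w_base = gmv_weights(sigma)
w_scaled = gmv_weights(123.0 * sigma)  # global rescaling of the covariance
# The minimiser is unchanged, which is why the variance objective
# tolerates the layer's internal variance normalisation.
```

Because `solve(c * sigma, 1)` returns `(1/c) * sigma^{-1} 1`, the scale factor cancels in the normalisation step.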
### Using `LagTransformLayer` Directly

`LagTransformLayer` is exposed both at package root and in the dedicated module:

```python
import torch
from rienet_torch import LagTransformLayer
# or: from rienet_torch.lag_transform import LagTransformLayer

# Dynamic lookback (T can change call-by-call)
compact = LagTransformLayer(variant="compact")
y1 = compact(torch.randn(4, 12, 20))
y2 = compact(torch.randn(4, 12, 40))

# Fixed lookback inferred at first build/call (requires static T)
per_lag = LagTransformLayer(variant="per_lag")
z1 = per_lag(torch.randn(4, 12, 20))
z2 = per_lag(torch.randn(4, 8, 20))  # n_assets can change
```

### Using `EigenWeightsLayer` Directly

`EigenWeightsLayer` is part of the public API and can be imported directly:

```python
import torch
from rienet_torch import EigenWeightsLayer

layer = EigenWeightsLayer(name="gmv_weights")

eigenvectors = torch.randn(8, 20, 20)
inverse_eigenvalues = torch.rand(8, 20, 1)
inverse_std = torch.rand(8, 20, 1)

# Full GMV-like branch (includes inverse_std scaling)
weights = layer(eigenvectors, inverse_eigenvalues, inverse_std)

# Covariance-eigensystem branch (inverse_std omitted)
weights_cov = layer(eigenvectors, inverse_eigenvalues)
```

Notes:
- `inverse_std` is optional by design.
- Output shape is always `(..., n_assets, 1)`, normalized to sum to one along assets.

### Using `CorrelationEigenTransformLayer` Directly

```python
import torch
from rienet_torch import CorrelationEigenTransformLayer

layer = CorrelationEigenTransformLayer(name="corr_cleaner")

# Correlation matrix: (batch, n_assets, n_assets)
corr = torch.eye(6).expand(4, 6, 6).clone()

# Optional attributes: (batch, k) e.g. q, lookback, regime flags, etc.
attrs = torch.tensor([
    [0.5, 60.0],
    [0.7, 60.0],
    [1.2, 30.0],
    [0.9, 90.0],
], dtype=torch.float32)

# With attributes (default output_type='correlation')
cleaned_corr = layer(corr, attributes=attrs)

# Request multiple outputs
details = layer(
    corr,
    attributes=attrs,
    output_type=[
        "correlation",
        "inverse_correlation",
        "eigenvalues",
        "eigenvectors",
        "inverse_eigenvalues",
    ],
)
cleaned_eigvals = details["eigenvalues"]              # (batch, n_assets, 1)
cleaned_inv_eigvals = details["inverse_eigenvalues"]  # (batch, n_assets, 1)
inv_corr = details["inverse_correlation"]             # (batch, n_assets, n_assets)

# Without attributes
cleaned_corr_no_attr = CorrelationEigenTransformLayer(name="corr_cleaner_no_attr")(corr)
```

Notes:
- `attributes` is optional and can have shape `(batch, k)` or `(batch, n_assets, k)`.
- The output is a cleaned correlation matrix `(batch, n_assets, n_assets)`.
- If you change attribute width `k`, use a new layer instance.
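In practice the identity matrices above would be replaced by empirical correlations. As an illustration (a plain-torch sketch, not taken from the package), a batched Pearson correlation matrix with the expected `(batch, n_assets, n_assets)` shape can be built from a returns tensor like so:

```python
import torch

torch.manual_seed(0)
batch, n_assets, n_days = 4, 6, 60
returns = torch.randn(batch, n_assets, n_days) * 0.02

# Standardise each asset's series (population std), then average
# outer products over time to obtain the Pearson correlation matrix
x = returns - returns.mean(dim=-1, keepdim=True)
x = x / x.std(dim=-1, keepdim=True, unbiased=False)
corr = x @ x.transpose(-1, -2) / n_days  # (batch, n_assets, n_assets)
```

The result is symmetric with a unit diagonal, which is the form the layer's eigendecomposition assumes.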
## Loss Function

### Variance Loss Function

```python
from rienet_torch import variance_loss_function

loss = variance_loss_function(
    covariance_true=true_covariance,     # (batch_size, n_assets, n_assets)
    weights_predicted=predicted_weights  # (batch_size, n_assets, 1)
)
```

**Mathematical Formula:**

```text
Loss = n_assets * w^T Sigma w
```

Where `w` are the portfolio weights and `Sigma` is the realised covariance matrix.
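The formula can be reproduced with plain batched matrix products, independently of the package. The sketch below evaluates `n_assets * w^T Sigma w` per batch element, using the same shape conventions as the snippets above:

```python
import torch

torch.manual_seed(0)
batch, n_assets = 32, 10

# Synthetic SPD "realised" covariances, shape (batch, n_assets, n_assets)
g = torch.randn(batch, n_assets, n_assets)
sigma = g @ g.transpose(-1, -2)

# Equal weights, shape (batch, n_assets, 1), summing to one along assets
w = torch.full((batch, n_assets, 1), 1.0 / n_assets)

# Loss = n_assets * w^T Sigma w, evaluated per batch element
loss = n_assets * (w.transpose(-1, -2) @ sigma @ w)  # (batch, 1, 1)
```

The `(batch, 1, 1)` output shape matches the `torch.Size([32, 1, 1])` seen in the Quick Start example; reducing it with `.mean()` gives a scalar training objective.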
## Serialization

The package uses PyTorch-native serialization:

```python
from rienet_torch import RIEnetLayer
from rienet_torch.serialization import save_module, load_module

model = RIEnetLayer(output_type="weights")
save_module(model, "rienet.pt")

restored = load_module(RIEnetLayer, "rienet.pt")
```

This stores:
- the `state_dict`,
- the module config when `get_config()` is available,
- the lazy-build metadata needed to materialize shape-dependent parameters.

## Architecture Details

The RIEnet pipeline consists of:

1. **Input Scaling**: annualise returns by 252
2. **Lag Transformation**: five-parameter memory kernel for temporal weighting
3. **Covariance Estimation**: sample covariance across assets
4. **Eigenvalue Decomposition**: spectral analysis of the covariance matrix
5. **Recurrent Cleaning**: bidirectional GRU/LSTM processing of the eigenvalue spectrum
6. **Marginal Volatility Head**: dense network forecasting inverse standard deviations
7. **Matrix Reconstruction**: RIE-based synthesis of `Sigma^{-1}` and GMV weight normalisation

Paper defaults use a single bidirectional GRU layer with 16 units per direction and a marginal-volatility head with 8 hidden units, matching the compact network described in Bongiorno et al. (2025).
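The final step corresponds to the standard GMV closed form. As a sketch of the textbook formula (not the package's internal implementation), the weights are `Sigma^{-1} 1` normalised to sum to one, and among fully invested portfolios they attain the minimum variance:

```python
import torch

torch.manual_seed(0)
n_assets = 10
g = torch.randn(n_assets, n_assets)
sigma = g @ g.T + 1e-2 * torch.eye(n_assets)  # synthetic SPD covariance
precision = torch.linalg.inv(sigma)           # Sigma^{-1}

ones = torch.ones(n_assets, 1)
w = precision @ ones
w = w / w.sum()  # GMV weights, summing to one (may be negative: no long-only constraint)

# GMV variance is never larger than, e.g., the equal-weight variance
var_gmv = (w.T @ sigma @ w).item()
var_eq = ((ones / n_assets).T @ sigma @ (ones / n_assets)).item()
```

In the network, the reconstructed eigensystem and inverse marginal volatilities play the role of `precision` here.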
## Requirements

- Python >= 3.10
- PyTorch >= 2.5.0
- NumPy >= 1.26.0

## Development

```bash
cd /path/to/RIEnet-torch
pip install -e ".[dev]"
python -m pytest tests/
```

## Release Automation

This repository is ready for automated publishing with GitHub Actions:

- CI workflow: `.github/workflows/ci.yml`
- PyPI publish workflow: `.github/workflows/publish.yml`
- TestPyPI publish workflow: `.github/workflows/publish-testpypi.yml`

The release checklist is documented in `RELEASING.md`.

## Citation

Please cite the following references when using RIEnet:

```bibtex
@article{bongiorno2026end,
  title={End-to-end large portfolio optimization for variance minimization with neural networks through covariance cleaning},
  author={Bongiorno, Christian and Manolakis, Efstratios and Mantegna, Rosario Nunzio},
  journal={The Journal of Finance and Data Science},
  pages={100179},
  year={2026},
  publisher={Elsevier}
}

@inproceedings{bongiorno2025Neural,
  author = {Bongiorno, Christian and Manolakis, Efstratios and Mantegna, Rosario Nunzio},
  title = {Neural Network-Driven Volatility Drag Mitigation under Aggressive Leverage},
  year = {2025},
  isbn = {9798400722202},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  url = {https://doi.org/10.1145/3768292.3770370},
  doi = {10.1145/3768292.3770370},
  booktitle = {Proceedings of the 6th ACM International Conference on AI in Finance},
  pages = {449--455},
  numpages = {7},
  series = {ICAIF '25}
}
```

For software citation:

```bibtex
@software{rienet_torch2026,
  title={RIEnet Torch: A Rotational Invariant Estimator Network for Global Minimum-Variance Optimisation},
  author={Bongiorno, Christian},
  year={2026},
  version={0.1.0},
  url={https://github.com/your-user/rienet-torch}
}
```

You can print citation information programmatically:

```python
import rienet_torch
rienet_torch.print_citation()
```

## Support

For questions, issues, or contributions, please:

- Open an issue on [GitHub](https://github.com/bongiornoc/RIEnet-torch/issues)
- Check the documentation
- Contact Prof. Christian Bongiorno (<christian.bongiorno@centralesupelec.fr>) for calibrated model weights or collaboration requests