mulaconf 0.1.0__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
mulaconf-0.1.0/LICENSE ADDED
@@ -0,0 +1,28 @@
BSD 3-Clause License

Copyright (c) 2026, Kostas Katsios

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

1. Redistributions of source code must retain the above copyright notice, this
   list of conditions and the following disclaimer.

2. Redistributions in binary form must reproduce the above copyright notice,
   this list of conditions and the following disclaimer in the documentation
   and/or other materials provided with the distribution.

3. Neither the name of the copyright holder nor the names of its
   contributors may be used to endorse or promote products derived from
   this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
@@ -0,0 +1,435 @@
Metadata-Version: 2.4
Name: mulaconf
Version: 0.1.0
Summary: Conformal Prediction for Multi-label classification.
Author-email: Kostas Katsios <kos.katsios@gmail.com>
License: BSD 3-Clause License

Copyright (c) 2026, Kostas Katsios

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

1. Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.

2. Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.

3. Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Project-URL: Homepage, https://github.com/k-kostas/MuLaConf
Project-URL: Bug Tracker, https://github.com/k-kostas/MuLaConf/issues
Project-URL: Documentation, https://mulaconf.readthedocs.io/
Keywords: conformal prediction,multi-label,classification,uncertainty quantification,pytorch,scikit-learn
Classifier: Programming Language :: Python :: 3
Classifier: License :: OSI Approved :: BSD License
Classifier: Operating System :: OS Independent
Requires-Python: >=3.10
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: numpy>=1.20.0
Requires-Dist: scikit-learn>=1.0.0
Requires-Dist: torch>=2.0.0
Requires-Dist: pandas>=2.0.0
Requires-Dist: tqdm>=4.10.0
Dynamic: license-file

# MuLaConf : Multi-Label Conformal Prediction

[![Python 3.10+](https://img.shields.io/badge/python-3.10+-blue.svg)](https://www.python.org/downloads/)
[![License: BSD 3-Clause](https://img.shields.io/badge/License-BSD_3--Clause-blue.svg)](https://opensource.org/licenses/bsd-3-clause)

A flexible Python package for **Conformal Prediction (CP)** in **multi-label** classification settings.
It implements the **Powerset Scoring** approach [[3]](#papadopoulos2014) with the **Mahalanobis
nonconformity measure** [[2]](#katsios2024), and applies **Structural Penalties** based on
Hamming distance and label-set cardinality [[1]](#katsios2025) to provide more informative prediction sets. Designed for efficiency, it handles
model training, calibration, and updates of the structural penalty weights without the need for
retraining. The package bridges **Scikit-Learn** (for the underlying classifiers) and **PyTorch**
(for efficient tensor computations and GPU acceleration).
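For a candidate label-set $Y$, the nonconformity score combines a Mahalanobis distance on the error vector $e = Y - \hat{p}(x)$ with the structural penalties. As a rough sketch only (the exact formulations are given in [[1]](#katsios2025) and [[2]](#katsios2024); the penalty terms below are illustrative):

$$\alpha(x, Y) \;=\; \sqrt{e^{\top} \Sigma^{-1} e} \;+\; w_H \, d_H\bigl(\hat{Y}, Y\bigr) \;+\; w_C \,\Bigl|\,|Y| - |\hat{Y}|\,\Bigr|$$

where $\Sigma$ is the covariance matrix of the error vectors estimated on the proper training set, $d_H$ is the Hamming distance from the point prediction $\hat{Y}$, and $w_H$, $w_C$ are the penalty weights.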


Table of Contents
- [Key Features](#key-features)
- [Installation](#installation)
- [Documentation](#documentation)
- [Quickstart](#quickstart)
- [Alternative Usage](#alternative-usage)
- [Examples](#examples)
- [Citing MuLaConf](#citing-mulaconf)
- [References](#references)


## Key Features

* **Multi-label Conformal Prediction**: Provides sets of label-sets with guaranteed coverage under the assumption of data exchangeability.
* **Powerset Scoring**: Explicitly assigns p-values to all possible label-sets.
* **Mahalanobis Nonconformity Measure**: Uses the Mahalanobis distance in the space of error vectors to account for label correlations.
* **Structural Penalties**: Incorporates Hamming and cardinality penalties to produce more informative prediction sets.
* **Post-training Penalty Updates**: Modify penalty weights after fitting, with no need to retrain the model or recompute the covariance matrix.
* **Automatic Classifier Switching**: Replace the underlying classifier (e.g., from `RandomForestClassifier` to `KNeighborsClassifier`) and let the wrapper handle retraining automatically.
* **Compatible with Any Model**: Provides a wrapper (`ICPWrapper`) for any sklearn multi-label classifier (e.g., `MultiOutputClassifier`, `ClassifierChain`), plus a model-agnostic `InductiveConformalPredictor`.
* **GPU Support**: Offloads heavy matrix computations to CUDA devices.

## Installation

```bash
pip install mulaconf
```


## Documentation
For the complete documentation, see [mulaconf.readthedocs.io](https://mulaconf.readthedocs.io/en/latest/).


## Quickstart
This guide demonstrates the core usage of the MuLaConf package on a multi-label classification task:
producing prediction sets for new test samples at different significance levels.

We will load the data,
split it into proper training, calibration, and test sets, train the model, and evaluate the conformal predictions.
As an example, we use the **Yeast** dataset, preprocessed into features and labels
in CSV format. The labels are represented as **multi-hot vectors**.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# 1. Define the path to your data
data_path = "/data/yeast"

# 2. Load the Yeast dataset (Features and Labels)
print("Loading Yeast dataset...")
X = pd.read_csv(f"{data_path}/X_yeast.csv")
y = pd.read_csv(f"{data_path}/y_yeast.csv")

# 3. Split the data
# First, separate out the Test set (10%)
X_temp, X_test, y_temp, y_test = train_test_split(X, y, test_size=0.1, random_state=42)

# Then, split the remaining data into Proper Train and Calibration (30%)
X_train, X_calib, y_train, y_calib = train_test_split(X_temp, y_temp, test_size=0.3, random_state=42)
print(f"Data shapes: Train={X_train.shape}, Calib={X_calib.shape}, Test={X_test.shape}")
```

```text
Loading Yeast dataset...
Data shapes: Train=(1522, 103), Calib=(653, 103), Test=(242, 103)
```

We initialize the underlying classifier from Scikit-Learn before fitting it on the proper training data. Here we
chose `RandomForestClassifier`, wrapped in `MultiOutputClassifier`. We then initialize the `ICPWrapper`,
setting the model and the weights of the structural penalties (the default values are 0.0). Note that there are two ways
to adjust the classifier's arguments: either by passing them directly

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.multioutput import MultiOutputClassifier

from mulaconf.icp_wrapper import ICPWrapper

base_model = MultiOutputClassifier(RandomForestClassifier(n_estimators=10))
wrapper = ICPWrapper(base_model, weight_hamming=2.0, weight_cardinality=1.5, device='cpu')
wrapper.fit(X_train, y_train)
```

or as a dictionary.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.multioutput import MultiOutputClassifier

from mulaconf.icp_wrapper import ICPWrapper

base_model = MultiOutputClassifier(RandomForestClassifier())
wrapper = ICPWrapper(base_model, weight_hamming=2.0, weight_cardinality=1.5, device='cpu')
args = {'estimator__n_estimators': 5}
wrapper.fit(X_train, y_train, **args)
```

Once the model is fitted, the next step is calibration. This process uses the calibration set to compute
nonconformity scores, which are essential for calculating the p-values required to produce valid prediction regions.

```python
wrapper.calibrate(X_calib, y_calib)
```
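The p-value machinery behind calibration is standard inductive conformal prediction. As an illustration only (plain NumPy, independent of the package's internals, with a function name of our own), a candidate's p-value compares its nonconformity score against the calibration scores:

```python
import numpy as np

def icp_p_value(test_score, calib_scores):
    """ICP p-value: fraction of calibration scores at least as
    nonconforming as the test score, counting the test point itself."""
    calib_scores = np.asarray(calib_scores, dtype=float)
    n = calib_scores.size
    return (np.sum(calib_scores >= test_score) + 1) / (n + 1)

# A candidate label-set with a low score is "conforming" -> high p-value
calib = [0.2, 0.5, 0.9, 1.3, 2.1]
print(icp_p_value(0.1, calib))  # 1.0 (all 5 calibration scores >= 0.1)
print(icp_p_value(3.0, calib))  # ~0.167 (only the test point itself counts)
```

A label-set then enters the prediction region at significance level $\alpha$ whenever its p-value exceeds $\alpha$.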

> [!NOTE]
> **Switching Underlying Scikit-Learn Strategies**:
> You can switch the classification strategy or update its parameters. If the wrapper detects a change (via fingerprinting) during calibration, it will automatically retrain the new model on the cached proper training data.
>
> ```python
> from sklearn.neighbors import KNeighborsClassifier
> from sklearn.multioutput import ClassifierChain
>
> # Switch strategy to Classifier Chains with KNN
> wrapper.strategy = ClassifierChain(KNeighborsClassifier())
> wrapper.kwargs = {'estimator__n_neighbors': 5}
>
> # Trigger automatic retraining and calibration
> wrapper.calibrate(X_calib, y_calib)
> ```

Finally, we generate prediction regions for the test set using the `predict` method.

```python
prediction_regions_obj = wrapper.predict(X_test)
```

The `predict` method returns a `PredictionRegions` container holding the conformal prediction regions for each sample.
You can query this object to extract the valid label-sets at a specific significance level
(e.g., $\alpha=0.1$ for 90% confidence) or at multiple levels (e.g., $\alpha=[0.05, 0.1, 0.2]$).

The label-sets are returned as multi-hot vectors. In the example below, we retrieve the valid label combinations
for the first sample in the test set.

```python
prediction_sets = prediction_regions_obj(significance_level=0.1)
print(prediction_sets[0])
```

```text
tensor([[0, 0, 0, ..., 1, 1, 0],
        [0, 0, 0, ..., 1, 0, 0],
        [0, 0, 0, ..., 1, 1, 0],
        ...,
        [1, 1, 1, ..., 1, 1, 0],
        [1, 1, 1, ..., 0, 0, 0],
        [1, 1, 1, ..., 1, 1, 0]], dtype=torch.int32)
```

Equivalent one-liner:

```python
prediction_sets = wrapper.predict(X_test)(significance_level=0.1)
```

> [!NOTE]
> **Penalty Weights Update**: We can update the penalty weights on the fly, without retraining the model.
>
> ```python
> wrapper.icp.weight_hamming = 1.5
> wrapper.icp.weight_cardinality = 0.5
>
> # Predict with new penalties
> updated_prediction_sets = wrapper.predict(X_test)(significance_level=0.1)
> ```
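The two structural penalties themselves are simple set statistics over multi-hot vectors. A rough NumPy illustration of a weighted Hamming-plus-cardinality penalty between a point prediction and a candidate label-set (our own sketch; the package's exact weighting is defined in the papers):

```python
import numpy as np

def structural_penalties(y_pred, y_candidate, w_hamming=2.0, w_cardinality=1.5):
    """Weighted Hamming + cardinality penalty between two multi-hot vectors."""
    y_pred = np.asarray(y_pred)
    y_candidate = np.asarray(y_candidate)
    hamming = np.sum(y_pred != y_candidate)              # labels that disagree
    cardinality = abs(y_pred.sum() - y_candidate.sum())  # difference in label counts
    return w_hamming * hamming + w_cardinality * cardinality

# Point prediction {A, C} vs candidate {A, B}: 2 disagreements, equal cardinality
print(structural_penalties([1, 0, 1, 0], [1, 1, 0, 0]))  # 2.0*2 + 1.5*0 = 4.0
```

Because the penalties depend only on the candidate and the point prediction, changing the weights rescales the scores without touching the fitted model, which is why the on-the-fly update above is cheap.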

> [!NOTE]
> **Accessing P-Values**: You also have direct access to the raw p-values for every possible label combination.
> Below, we print the p-values for the first test sample.
>
> ```python
> print(prediction_regions_obj.p_values[0])
> ```
>
> ```text
> tensor([0.0627, 0.0015, 0.0719, ..., 0.0015, 0.0015, 0.0015])
> ```


The `evaluate` method provides a convenient way to calculate performance metrics, including Coverage,
N-Criterion, S-Criterion, and statistical validity via the KS-test. Additionally, it can return the p-values
corresponding to the true labels.

The method requires the **ground truth labels** (`true_labelsets`) and the desired **significance level**.
All other metric-specific arguments are optional boolean flags, which default to `True` if not specified.

```python
metrics = prediction_regions_obj.evaluate(
    return_true_label_p_value=False,
    return_coverage=True,
    return_n_criterion=True,
    return_s_criterion=True,
    return_ks_test=True,
    true_labelsets=y_test,
    significance_level=0.1,
)

print(metrics)
```

```text
{
    'coverage': 0.9008264462809917,
    'n_criterion': 858.8636363636364,
    's_criterion': 412.99029541015625,
    'ks_test_metrics': {
        'ks_statistic': np.float64(0.05622110017075027),
        'ks_p_value': np.float64(0.4135919018220534),
        'is_valid': np.True_
    }
}
```
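To make the first two metrics concrete: coverage is the fraction of test samples whose true label-set falls inside its prediction region, and the N-criterion is the average region size. An illustrative NumPy computation (the function name is ours, not the package's):

```python
import numpy as np

def coverage_and_n_criterion(prediction_sets, true_labelsets):
    """prediction_sets: list of (k_i, L) multi-hot arrays, one region per sample.
    true_labelsets: (n, L) multi-hot array of ground-truth label-sets."""
    covered, sizes = [], []
    for region, truth in zip(prediction_sets, true_labelsets):
        region = np.asarray(region)
        # Covered if any row of the region equals the true label-set
        covered.append(bool((region == np.asarray(truth)).all(axis=1).any()))
        sizes.append(len(region))
    return float(np.mean(covered)), float(np.mean(sizes))

regions = [np.array([[1, 0], [1, 1]]), np.array([[0, 1]])]
truths = np.array([[1, 1], [1, 1]])
cov, n_crit = coverage_and_n_criterion(regions, truths)
print(cov, n_crit)  # 0.5 1.5
```

With a well-calibrated predictor, coverage should approach $1 - \alpha$, while smaller N-criterion values indicate more informative regions.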


## Alternative Usage
You can also use the `InductiveConformalPredictor` class as a standalone engine if you prefer to manage the underlying
classifier yourself or are not using Scikit-Learn. In this mode, you must provide the **predicted probabilities** for the
proper training, calibration, and test sets, as well as the **ground truth labels** for the training and calibration sets.

The package is flexible regarding input formats: it accepts PyTorch tensors, NumPy arrays, Pandas DataFrames/Series,
and lists. All data is automatically converted to tensors and moved to the specified device (CPU or GPU) for
efficient processing.

First, we initialize the `InductiveConformalPredictor` class to calculate the structural penalties and to form
the covariance matrix from the proper training data.

```python
from mulaconf.icp_predictor import InductiveConformalPredictor

icp = InductiveConformalPredictor(
    predicted_probabilities=train_probs,
    true_labels=train_labels,
    weight_hamming=1.5,
    weight_cardinality=0.5,
    device='cpu'
)
```
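Conceptually, this training step boils down to estimating a covariance matrix of the error vectors (true labels minus predicted probabilities) and then measuring Mahalanobis distances against it. A self-contained NumPy sketch of that idea, using synthetic stand-ins for `train_probs` and `train_labels` (our own illustration, not the package's code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical proper-training data: probabilities and true multi-hot labels
train_probs = rng.uniform(size=(200, 4))
train_labels = (rng.uniform(size=(200, 4)) > 0.5).astype(float)

# Covariance of the error vectors, lightly regularized to stay invertible
errors = train_labels - train_probs
cov = np.cov(errors, rowvar=False) + 1e-6 * np.eye(errors.shape[1])
cov_inv = np.linalg.inv(cov)

def mahalanobis(prob, candidate):
    """Mahalanobis distance of a candidate label-set's error vector."""
    e = np.asarray(candidate) - np.asarray(prob)
    return float(np.sqrt(e @ cov_inv @ e))

d = mahalanobis(train_probs[0], train_labels[0])
print(round(d, 3))
```

Because the covariance is estimated once on the proper training set, candidate label-sets can later be scored without revisiting the training data.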

Next, we call the `calibrate` method to calculate the calibration scores based on the calibration probabilities
and labels.

```python
icp.calibrate(probabilities=calib_probs, labels=calib_labels)
```

Then, we can generate prediction regions for the test set by calling the `predict` method and passing the test
probabilities.

```python
prediction_regions_obj = icp.predict(test_probs)
```

The `predict` method returns a `PredictionRegions` container holding the conformal prediction regions. You can extract
the valid label-sets at a specific significance level (e.g., $\alpha=0.1$ for 90% confidence) or at multiple levels
(e.g., $\alpha=[0.05, 0.1, 0.2]$). In the example below, we print the prediction regions for the first sample
in the test set.

```python
prediction_sets = prediction_regions_obj(significance_level=0.1)
print(prediction_sets[0])
```

```text
tensor([[0, 0, 0, ..., 1, 1, 0],
        [0, 0, 0, ..., 1, 0, 0],
        [0, 0, 0, ..., 1, 1, 0],
        ...,
        [1, 1, 1, ..., 1, 1, 0],
        [1, 1, 1, ..., 0, 0, 0],
        [1, 1, 1, ..., 1, 1, 0]], dtype=torch.int32)
```

And of course, we have access to the p-values. In the example below, we get the p-values of the first sample in the
test set.

```python
print(prediction_regions_obj.p_values[0])
```

```text
tensor([0.0627, 0.0015, 0.0719, ..., 0.0015, 0.0015, 0.0015])
```

It also allows us to get the p-values of the test set's true labels and to evaluate metrics like Coverage,
N-Criterion, S-Criterion, and the KS-test.

```python
metrics = prediction_regions_obj.evaluate(
    return_true_label_p_value=False,
    return_coverage=True,
    return_n_criterion=True,
    return_s_criterion=True,
    return_ks_test=True,
    true_labelsets=test_labels,
    significance_level=0.1,
)

print(metrics)
```

```text
{
    'coverage': 0.9008264462809917,
    'n_criterion': 858.8636363636364,
    's_criterion': 412.99029541015625,
    'ks_test_metrics': {
        'ks_statistic': np.float64(0.05622110017075027),
        'ks_p_value': np.float64(0.4135919018220534),
        'is_valid': np.True_
    }
}
```

> [!NOTE]
> **Penalty Weights Update**: We can update the penalty weights on the fly, without retraining the model.
>
> ```python
> icp.weight_hamming = 1.5
> icp.weight_cardinality = 0.5
>
> # Predict with new penalties
> updated_prediction_sets = icp.predict(test_probs)(significance_level=0.1)
> ```


## Examples

For additional examples of how to use the package, see the [documentation](https://mulaconf.readthedocs.io/en/latest/documentation.html).

## Citing MuLaConf

If you use the package for a scientific publication, you are kindly requested to cite the following paper:

> Katsios, K., & Papadopoulos, H. (2025). Incorporating Structural Penalties in Multi-label Conformal Prediction.
> *Proceedings of Machine Learning Research*, 266, 1-20.
> [[Proceedings](https://proceedings.mlr.press/v266/katsios25a.html)]

**BibTeX:**

```bibtex
@article{katsios2025incorporating,
  title={Incorporating Structural Penalties in Multi-label Conformal Prediction},
  author={Katsios, Kostas and Papadopoulos, Harris},
  journal={Proceedings of Machine Learning Research},
  volume={266},
  pages={1--20},
  year={2025}
}
```


## References

1. <a id="katsios2025"></a>Katsios, K., & Papadopoulos, H. (2025). Incorporating Structural Penalties in Multi-label Conformal Prediction. *Proceedings of Machine Learning Research*, 266, 1-20. [Proceedings](https://proceedings.mlr.press/v266/katsios25a.html)

2. <a id="katsios2024"></a>Katsios, K., & Papadopoulos, H. (2024). Multi-label conformal prediction with a Mahalanobis distance nonconformity measure. *Proceedings of Machine Learning Research*, 230, 1-14. [Proceedings](https://proceedings.mlr.press/v230/katsios24a.html)

3. <a id="papadopoulos2014"></a>Papadopoulos, H. (2014). A cross-conformal predictor for multi-label classification. In *Artificial Intelligence Applications and Innovations: AIAI 2014 Workshops: CoPA, MHDW, IIVC, and MT4BD, Rhodes, Greece, September 19-21, 2014. Proceedings 10* (pp. 241–250). Springer. [DOI: 10.1007/978-3-662-44722-2_26](https://doi.org/10.1007/978-3-662-44722-2_26)

4. <a id="lambrou2016"></a>Lambrou, A., & Papadopoulos, H. (2016). Binary relevance multi-label conformal predictor. In *Conformal and Probabilistic Prediction with Applications* (pp. 90–104). Springer. [DOI: 10.1007/978-3-319-33395-3_7](https://doi.org/10.1007/978-3-319-33395-3_7)

5. <a id="maltou2022"></a>Maltoudoglou, L., Paisios, A., Lenc, L., Martínek, J., Král, P., & Papadopoulos, H. (2022). Well-calibrated confidence measures for multi-label text classification with a large number of labels. *Pattern Recognition*, 122, 108271. [DOI: 10.1016/j.patcog.2021.108271](https://doi.org/10.1016/j.patcog.2021.108271)

6. <a id="papadopoulos2002a"></a>Papadopoulos, H., Proedrou, K., Vovk, V., & Gammerman, A. (2002a). Inductive confidence machines for regression. In *Machine Learning: ECML 2002, 13th European Conference on Machine Learning, Helsinki, Finland, August 19–23, 2002, Proceedings 13* (pp. 345–356). Springer. [DOI: 10.1007/3-540-36755-1_29](https://doi.org/10.1007/3-540-36755-1_29)

7. <a id="papadopoulos2002b"></a>Papadopoulos, H., Vovk, V., & Gammerman, A. (2002b). Qualified prediction for large data sets in the case of pattern recognition. In *ICMLA* (pp. 159–163).

8. <a id="vovk2005"></a>Vovk, V., Gammerman, A., & Shafer, G. (2005). *Algorithmic Learning in a Random World* (Vol. 29). Springer. [DOI: 10.1007/b106715](https://doi.org/10.1007/b106715)

9. <a id="vovk2016"></a>Vovk, V., Fedorova, V., Nouretdinov, I., & Gammerman, A. (2016). Criteria of efficiency for conformal prediction. In *Conformal and Probabilistic Prediction with Applications: 5th International Symposium, COPA 2016, Madrid, Spain, April 20-22, 2016, Proceedings 5* (pp. 23–39). Springer. [DOI: 10.1007/978-3-319-33395-3_2](https://doi.org/10.1007/978-3-319-33395-3_2)