mlquantify 0.0.11.11__tar.gz → 0.1.0__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (27)
  1. {mlquantify-0.0.11.11 → mlquantify-0.1.0}/PKG-INFO +137 -129
  2. {mlquantify-0.0.11.11 → mlquantify-0.1.0}/README.md +106 -106
  3. {mlquantify-0.0.11.11 → mlquantify-0.1.0}/mlquantify/__init__.py +31 -29
  4. {mlquantify-0.0.11.11 → mlquantify-0.1.0}/mlquantify/base.py +559 -559
  5. {mlquantify-0.0.11.11 → mlquantify-0.1.0}/mlquantify/classification/methods.py +160 -160
  6. {mlquantify-0.0.11.11 → mlquantify-0.1.0}/mlquantify/evaluation/__init__.py +13 -13
  7. {mlquantify-0.0.11.11 → mlquantify-0.1.0}/mlquantify/evaluation/measures.py +215 -215
  8. {mlquantify-0.0.11.11 → mlquantify-0.1.0}/mlquantify/evaluation/protocol.py +646 -646
  9. {mlquantify-0.0.11.11 → mlquantify-0.1.0}/mlquantify/methods/__init__.py +37 -37
  10. {mlquantify-0.0.11.11 → mlquantify-0.1.0}/mlquantify/methods/aggregative.py +1029 -1029
  11. {mlquantify-0.0.11.11 → mlquantify-0.1.0}/mlquantify/methods/meta.py +471 -471
  12. {mlquantify-0.0.11.11 → mlquantify-0.1.0}/mlquantify/methods/mixture_models.py +1003 -1003
  13. {mlquantify-0.0.11.11 → mlquantify-0.1.0}/mlquantify/methods/non_aggregative.py +136 -136
  14. {mlquantify-0.0.11.11 → mlquantify-0.1.0}/mlquantify/methods/threshold_optimization.py +959 -959
  15. {mlquantify-0.0.11.11 → mlquantify-0.1.0}/mlquantify/model_selection.py +377 -377
  16. {mlquantify-0.0.11.11 → mlquantify-0.1.0}/mlquantify/plots.py +367 -367
  17. {mlquantify-0.0.11.11 → mlquantify-0.1.0}/mlquantify/utils/__init__.py +1 -1
  18. {mlquantify-0.0.11.11 → mlquantify-0.1.0}/mlquantify/utils/general.py +333 -333
  19. {mlquantify-0.0.11.11 → mlquantify-0.1.0}/mlquantify/utils/method.py +448 -448
  20. {mlquantify-0.0.11.11 → mlquantify-0.1.0}/mlquantify.egg-info/PKG-INFO +137 -129
  21. {mlquantify-0.0.11.11 → mlquantify-0.1.0}/setup.cfg +4 -4
  22. {mlquantify-0.0.11.11 → mlquantify-0.1.0}/setup.py +32 -32
  23. {mlquantify-0.0.11.11 → mlquantify-0.1.0}/mlquantify/classification/__init__.py +0 -0
  24. {mlquantify-0.0.11.11 → mlquantify-0.1.0}/mlquantify.egg-info/SOURCES.txt +0 -0
  25. {mlquantify-0.0.11.11 → mlquantify-0.1.0}/mlquantify.egg-info/dependency_links.txt +0 -0
  26. {mlquantify-0.0.11.11 → mlquantify-0.1.0}/mlquantify.egg-info/requires.txt +0 -0
  27. {mlquantify-0.0.11.11 → mlquantify-0.1.0}/mlquantify.egg-info/top_level.txt +0 -0
@@ -1,129 +1,137 @@
- Metadata-Version: 2.1
- Name: mlquantify
- Version: 0.0.11.11
- Summary: Quantification Library
- Home-page: https://github.com/luizfernandolj/QuantifyML/tree/master
- Maintainer: Luiz Fernando Luth Junior
- Keywords: python,machine learning,quantification,quantify
- Classifier: Development Status :: 4 - Beta
- Classifier: Intended Audience :: Science/Research
- Classifier: Programming Language :: Python :: 3
- Classifier: Operating System :: Unix
- Classifier: Operating System :: MacOS :: MacOS X
- Classifier: Operating System :: Microsoft :: Windows
- Description-Content-Type: text/markdown
- Requires-Dist: scikit-learn
- Requires-Dist: numpy
- Requires-Dist: scipy
- Requires-Dist: joblib
- Requires-Dist: tqdm
- Requires-Dist: pandas
- Requires-Dist: xlrd
- Requires-Dist: matplotlib
-
- <h1 align="center">MLQuantify</h1>
- <h4 align="center">A Python Package for Quantification</h4>
-
- ___
-
- **mlquantify** is a Python library for quantification, also known as supervised prevalence estimation, designed to estimate the distribution of classes within datasets. It offers a range of tools for various quantification methods, model selection tailored for quantification tasks, evaluation metrics, and protocols to assess quantification performance. Additionally, mlquantify includes popular datasets and visualization tools to help analyze and interpret results.
-
- ___
-
- ## Latest Release
-
- - **Version 0.0.11.6**: Initial beta version. For a detailed list of changes, check the [changelog](#).
- - If you need help, refer to the [wiki](https://github.com/luizfernandolj/mlquantify/wiki).
- - Explore the [API documentation](#) for detailed developer information.
- - The package is also available on PyPI: [mlquantify](https://pypi.org/project/mlquantify/).
-
- ___
-
- ## Installation
-
- To install mlquantify, run the following command:
-
- ```bash
- pip install mlquantify
- ```
-
- To upgrade an existing installation, run:
-
- ```bash
- pip install --upgrade mlquantify
- ```
-
- ___
-
- ## Contents
-
- | Section | Description |
- |---|---|
- | **21 Quantification Methods** | Methods for quantification, such as Classify & Count and its corrected variants, Threshold Optimization, Mixture Models, and more. |
- | **Dynamic class management** | All methods handle both binary and multiclass problems; inherently binary methods are extended to multiclass data via One-vs-All (OVA) automatically. |
- | **Model Selection** | Criteria and processes for selecting the best model, such as grid search adapted to quantification. |
- | **Evaluation Metrics** | Metrics for evaluating quantification performance (e.g., AE, bias, NAE, SE, KLD). |
- | **Evaluation Protocols** | Evaluation protocols based on sample generation (e.g., APP, NPP). |
- | **Plotting Results** | Tools for visualizing results, such as protocol results. |
- | **Comprehensive Documentation** | Complete documentation of the project, including code, data, and results. |
-
- ___
-
- ## Quick Example
-
- This example loads the breast cancer dataset from _sklearn_, splits it into training and testing sets, and uses the _Expectation Maximisation Quantifier (EMQ)_ with a RandomForest classifier to predict class prevalence. After training the model, it evaluates performance by computing and printing the absolute error and bias between the real and predicted prevalences.
-
- ```python
- import mlquantify as mq
- from sklearn.ensemble import RandomForestClassifier
- from sklearn.datasets import load_breast_cancer
- from sklearn.model_selection import train_test_split
-
- # Loading dataset from sklearn
- features, target = load_breast_cancer(return_X_y=True)
-
- # Splitting into train and test
- X_train, X_test, y_train, y_test = train_test_split(features, target, test_size=0.3)
-
- # Create the model, here the Expectation Maximisation Quantifier (EMQ) with a classifier
- model = mq.methods.EMQ(RandomForestClassifier())
- model.fit(X_train, y_train)
-
- # Predict the class prevalence for X_test
- pred_prevalence = model.predict(X_test)
- real_prevalence = mq.utils.get_real_prev(y_test)
-
- # Compute the error of the prediction
- ae = mq.evaluation.absolute_error(real_prevalence, pred_prevalence)
- bias = mq.evaluation.bias(real_prevalence, pred_prevalence)
-
- print(f"Absolute Error (AE) -> {ae:.4f}")
- print(f"Bias -> {bias}")
- ```
-
- ___
-
- ## Requirements
-
- - scikit-learn
- - pandas
- - numpy
- - joblib
- - tqdm
- - matplotlib
- - xlrd
-
- ___
-
- ## Documentation
-
- ##### The API documentation is available [here](#)
-
- - [Methods](https://github.com/luizfernandolj/mlquantify/wiki/Methods)
- - [Model Selection](https://github.com/luizfernandolj/mlquantify/wiki/Model-Selection)
- - [Evaluation](https://github.com/luizfernandolj/mlquantify/wiki/Evaluation)
- - [Plotting](https://github.com/luizfernandolj/mlquantify/wiki/Plotting)
- - [Utilities](https://github.com/luizfernandolj/mlquantify/wiki/Utilities)
-
-
- ___
+ Metadata-Version: 2.2
+ Name: mlquantify
+ Version: 0.1.0
+ Summary: Quantification Library
+ Home-page: https://github.com/luizfernandolj/QuantifyML/tree/master
+ Maintainer: Luiz Fernando Luth Junior
+ Keywords: python,machine learning,quantification,quantify
+ Classifier: Development Status :: 4 - Beta
+ Classifier: Intended Audience :: Science/Research
+ Classifier: Programming Language :: Python :: 3
+ Classifier: Operating System :: Unix
+ Classifier: Operating System :: MacOS :: MacOS X
+ Classifier: Operating System :: Microsoft :: Windows
+ Description-Content-Type: text/markdown
+ Requires-Dist: scikit-learn
+ Requires-Dist: numpy
+ Requires-Dist: scipy
+ Requires-Dist: joblib
+ Requires-Dist: tqdm
+ Requires-Dist: pandas
+ Requires-Dist: xlrd
+ Requires-Dist: matplotlib
+ Dynamic: classifier
+ Dynamic: description
+ Dynamic: description-content-type
+ Dynamic: home-page
+ Dynamic: keywords
+ Dynamic: maintainer
+ Dynamic: requires-dist
+ Dynamic: summary
+
+ <h1 align="center">MLQuantify</h1>
+ <h4 align="center">A Python Package for Quantification</h4>
+
+ ___
+
+ **mlquantify** is a Python library for quantification, also known as supervised prevalence estimation, designed to estimate the distribution of classes within datasets. It offers a range of tools for various quantification methods, model selection tailored for quantification tasks, evaluation metrics, and protocols to assess quantification performance. Additionally, mlquantify includes popular datasets and visualization tools to help analyze and interpret results.
+
+ ___
+
+ ## Latest Release
+
+ - **Version 0.0.11.6**: Initial beta version. For a detailed list of changes, check the [changelog](#).
+ - If you need help, refer to the [wiki](https://github.com/luizfernandolj/mlquantify/wiki).
+ - Explore the [API documentation](#) for detailed developer information.
+ - The package is also available on PyPI: [mlquantify](https://pypi.org/project/mlquantify/).
+
+ ___
+
+ ## Installation
+
+ To install mlquantify, run the following command:
+
+ ```bash
+ pip install mlquantify
+ ```
+
+ To upgrade an existing installation, run:
+
+ ```bash
+ pip install --upgrade mlquantify
+ ```
+
+ ___
+
+ ## Contents
+
+ | Section | Description |
+ |---|---|
+ | **21 Quantification Methods** | Methods for quantification, such as Classify & Count and its corrected variants, Threshold Optimization, Mixture Models, and more. |
+ | **Dynamic class management** | All methods handle both binary and multiclass problems; inherently binary methods are extended to multiclass data via One-vs-All (OVA) automatically. |
+ | **Model Selection** | Criteria and processes for selecting the best model, such as grid search adapted to quantification. |
+ | **Evaluation Metrics** | Metrics for evaluating quantification performance (e.g., AE, bias, NAE, SE, KLD). |
+ | **Evaluation Protocols** | Evaluation protocols based on sample generation (e.g., APP, NPP). |
+ | **Plotting Results** | Tools for visualizing results, such as protocol results. |
+ | **Comprehensive Documentation** | Complete documentation of the project, including code, data, and results. |
+
+ ___
+
+ ## Quick Example
+
+ This example loads the breast cancer dataset from _sklearn_, splits it into training and testing sets, and uses the _Expectation Maximisation Quantifier (EMQ)_ with a RandomForest classifier to predict class prevalence. After training the model, it evaluates performance by computing and printing the absolute error and bias between the real and predicted prevalences.
+
+ ```python
+ import mlquantify as mq
+ from sklearn.ensemble import RandomForestClassifier
+ from sklearn.datasets import load_breast_cancer
+ from sklearn.model_selection import train_test_split
+
+ # Loading dataset from sklearn
+ features, target = load_breast_cancer(return_X_y=True)
+
+ # Splitting into train and test
+ X_train, X_test, y_train, y_test = train_test_split(features, target, test_size=0.3)
+
+ # Create the model, here the Expectation Maximisation Quantifier (EMQ) with a classifier
+ model = mq.methods.EMQ(RandomForestClassifier())
+ model.fit(X_train, y_train)
+
+ # Predict the class prevalence for X_test
+ pred_prevalence = model.predict(X_test)
+ real_prevalence = mq.utils.get_real_prev(y_test)
+
+ # Compute the error of the prediction
+ ae = mq.evaluation.absolute_error(real_prevalence, pred_prevalence)
+ bias = mq.evaluation.bias(real_prevalence, pred_prevalence)
+
+ print(f"Absolute Error (AE) -> {ae:.4f}")
+ print(f"Bias -> {bias}")
+ ```
+
+ ___
+
+ ## Requirements
+
+ - scikit-learn
+ - pandas
+ - numpy
+ - joblib
+ - tqdm
+ - matplotlib
+ - xlrd
+
+ ___
+
+ ## Documentation
+
+ ##### The API documentation is available [here](#)
+
+ - [Methods](https://github.com/luizfernandolj/mlquantify/wiki/Methods)
+ - [Model Selection](https://github.com/luizfernandolj/mlquantify/wiki/Model-Selection)
+ - [Evaluation](https://github.com/luizfernandolj/mlquantify/wiki/Evaluation)
+ - [Plotting](https://github.com/luizfernandolj/mlquantify/wiki/Plotting)
+ - [Utilities](https://github.com/luizfernandolj/mlquantify/wiki/Utilities)
+
+
+ ___
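
The quick example in the README above reports absolute error and bias between the true and predicted prevalence vectors. As a reference for what those measures compute, here is a minimal numpy sketch using the standard definitions from the quantification literature; the helper names are illustrative and are not taken from mlquantify's own `evaluation` module:

```python
import numpy as np

def absolute_error(real, pred):
    # Mean absolute difference between true and predicted class prevalences.
    return np.mean(np.abs(np.asarray(real) - np.asarray(pred)))

def bias(real, pred):
    # Signed per-class difference; here positive means overestimation
    # (sign conventions vary across libraries).
    return np.asarray(pred) - np.asarray(real)

real = np.array([0.6, 0.4])    # true prevalences
pred = np.array([0.55, 0.45])  # predicted prevalences
print(absolute_error(real, pred))  # 0.05
print(bias(real, pred))            # [-0.05  0.05]
```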
@@ -1,106 +1,106 @@
- <h1 align="center">MLQuantify</h1>
- <h4 align="center">A Python Package for Quantification</h4>
-
- ___
-
- **mlquantify** is a Python library for quantification, also known as supervised prevalence estimation, designed to estimate the distribution of classes within datasets. It offers a range of tools for various quantification methods, model selection tailored for quantification tasks, evaluation metrics, and protocols to assess quantification performance. Additionally, mlquantify includes popular datasets and visualization tools to help analyze and interpret results.
-
- ___
-
- ## Latest Release
-
- - **Version 0.0.11.6**: Initial beta version. For a detailed list of changes, check the [changelog](#).
- - If you need help, refer to the [wiki](https://github.com/luizfernandolj/mlquantify/wiki).
- - Explore the [API documentation](#) for detailed developer information.
- - The package is also available on PyPI: [mlquantify](https://pypi.org/project/mlquantify/).
-
- ___
-
- ## Installation
-
- To install mlquantify, run the following command:
-
- ```bash
- pip install mlquantify
- ```
-
- To upgrade an existing installation, run:
-
- ```bash
- pip install --upgrade mlquantify
- ```
-
- ___
-
- ## Contents
-
- | Section | Description |
- |---|---|
- | **21 Quantification Methods** | Methods for quantification, such as Classify & Count and its corrected variants, Threshold Optimization, Mixture Models, and more. |
- | **Dynamic class management** | All methods handle both binary and multiclass problems; inherently binary methods are extended to multiclass data via One-vs-All (OVA) automatically. |
- | **Model Selection** | Criteria and processes for selecting the best model, such as grid search adapted to quantification. |
- | **Evaluation Metrics** | Metrics for evaluating quantification performance (e.g., AE, bias, NAE, SE, KLD). |
- | **Evaluation Protocols** | Evaluation protocols based on sample generation (e.g., APP, NPP). |
- | **Plotting Results** | Tools for visualizing results, such as protocol results. |
- | **Comprehensive Documentation** | Complete documentation of the project, including code, data, and results. |
-
- ___
-
- ## Quick Example
-
- This example loads the breast cancer dataset from _sklearn_, splits it into training and testing sets, and uses the _Expectation Maximisation Quantifier (EMQ)_ with a RandomForest classifier to predict class prevalence. After training the model, it evaluates performance by computing and printing the absolute error and bias between the real and predicted prevalences.
-
- ```python
- import mlquantify as mq
- from sklearn.ensemble import RandomForestClassifier
- from sklearn.datasets import load_breast_cancer
- from sklearn.model_selection import train_test_split
-
- # Loading dataset from sklearn
- features, target = load_breast_cancer(return_X_y=True)
-
- # Splitting into train and test
- X_train, X_test, y_train, y_test = train_test_split(features, target, test_size=0.3)
-
- # Create the model, here the Expectation Maximisation Quantifier (EMQ) with a classifier
- model = mq.methods.EMQ(RandomForestClassifier())
- model.fit(X_train, y_train)
-
- # Predict the class prevalence for X_test
- pred_prevalence = model.predict(X_test)
- real_prevalence = mq.utils.get_real_prev(y_test)
-
- # Compute the error of the prediction
- ae = mq.evaluation.absolute_error(real_prevalence, pred_prevalence)
- bias = mq.evaluation.bias(real_prevalence, pred_prevalence)
-
- print(f"Absolute Error (AE) -> {ae:.4f}")
- print(f"Bias -> {bias}")
- ```
-
- ___
-
- ## Requirements
-
- - scikit-learn
- - pandas
- - numpy
- - joblib
- - tqdm
- - matplotlib
- - xlrd
-
- ___
-
- ## Documentation
-
- ##### The API documentation is available [here](#)
-
- - [Methods](https://github.com/luizfernandolj/mlquantify/wiki/Methods)
- - [Model Selection](https://github.com/luizfernandolj/mlquantify/wiki/Model-Selection)
- - [Evaluation](https://github.com/luizfernandolj/mlquantify/wiki/Evaluation)
- - [Plotting](https://github.com/luizfernandolj/mlquantify/wiki/Plotting)
- - [Utilities](https://github.com/luizfernandolj/mlquantify/wiki/Utilities)
-
-
- ___
+ <h1 align="center">MLQuantify</h1>
+ <h4 align="center">A Python Package for Quantification</h4>
+
+ ___
+
+ **mlquantify** is a Python library for quantification, also known as supervised prevalence estimation, designed to estimate the distribution of classes within datasets. It offers a range of tools for various quantification methods, model selection tailored for quantification tasks, evaluation metrics, and protocols to assess quantification performance. Additionally, mlquantify includes popular datasets and visualization tools to help analyze and interpret results.
+
+ ___
+
+ ## Latest Release
+
+ - **Version 0.0.11.6**: Initial beta version. For a detailed list of changes, check the [changelog](#).
+ - If you need help, refer to the [wiki](https://github.com/luizfernandolj/mlquantify/wiki).
+ - Explore the [API documentation](#) for detailed developer information.
+ - The package is also available on PyPI: [mlquantify](https://pypi.org/project/mlquantify/).
+
+ ___
+
+ ## Installation
+
+ To install mlquantify, run the following command:
+
+ ```bash
+ pip install mlquantify
+ ```
+
+ To upgrade an existing installation, run:
+
+ ```bash
+ pip install --upgrade mlquantify
+ ```
+
+ ___
+
+ ## Contents
+
+ | Section | Description |
+ |---|---|
+ | **21 Quantification Methods** | Methods for quantification, such as Classify & Count and its corrected variants, Threshold Optimization, Mixture Models, and more. |
+ | **Dynamic class management** | All methods handle both binary and multiclass problems; inherently binary methods are extended to multiclass data via One-vs-All (OVA) automatically. |
+ | **Model Selection** | Criteria and processes for selecting the best model, such as grid search adapted to quantification. |
+ | **Evaluation Metrics** | Metrics for evaluating quantification performance (e.g., AE, bias, NAE, SE, KLD). |
+ | **Evaluation Protocols** | Evaluation protocols based on sample generation (e.g., APP, NPP). |
+ | **Plotting Results** | Tools for visualizing results, such as protocol results. |
+ | **Comprehensive Documentation** | Complete documentation of the project, including code, data, and results. |
+
+ ___
+
+ ## Quick Example
+
+ This example loads the breast cancer dataset from _sklearn_, splits it into training and testing sets, and uses the _Expectation Maximisation Quantifier (EMQ)_ with a RandomForest classifier to predict class prevalence. After training the model, it evaluates performance by computing and printing the absolute error and bias between the real and predicted prevalences.
+
+ ```python
+ import mlquantify as mq
+ from sklearn.ensemble import RandomForestClassifier
+ from sklearn.datasets import load_breast_cancer
+ from sklearn.model_selection import train_test_split
+
+ # Loading dataset from sklearn
+ features, target = load_breast_cancer(return_X_y=True)
+
+ # Splitting into train and test
+ X_train, X_test, y_train, y_test = train_test_split(features, target, test_size=0.3)
+
+ # Create the model, here the Expectation Maximisation Quantifier (EMQ) with a classifier
+ model = mq.methods.EMQ(RandomForestClassifier())
+ model.fit(X_train, y_train)
+
+ # Predict the class prevalence for X_test
+ pred_prevalence = model.predict(X_test)
+ real_prevalence = mq.utils.get_real_prev(y_test)
+
+ # Compute the error of the prediction
+ ae = mq.evaluation.absolute_error(real_prevalence, pred_prevalence)
+ bias = mq.evaluation.bias(real_prevalence, pred_prevalence)
+
+ print(f"Absolute Error (AE) -> {ae:.4f}")
+ print(f"Bias -> {bias}")
+ ```
+
+ ___
+
+ ## Requirements
+
+ - scikit-learn
+ - pandas
+ - numpy
+ - joblib
+ - tqdm
+ - matplotlib
+ - xlrd
+
+ ___
+
+ ## Documentation
+
+ ##### The API documentation is available [here](#)
+
+ - [Methods](https://github.com/luizfernandolj/mlquantify/wiki/Methods)
+ - [Model Selection](https://github.com/luizfernandolj/mlquantify/wiki/Model-Selection)
+ - [Evaluation](https://github.com/luizfernandolj/mlquantify/wiki/Evaluation)
+ - [Plotting](https://github.com/luizfernandolj/mlquantify/wiki/Plotting)
+ - [Utilities](https://github.com/luizfernandolj/mlquantify/wiki/Utilities)
+
+
+ ___
@@ -1,30 +1,32 @@
- "mlquantify, a Python package for quantification"
-
- from . import base
- from . import model_selection
- from . import plots
- from . import classification
- from . import evaluation
- from . import methods
- from . import utils
-
- ARGUMENTS_SETTED = False
-
- arguments = {
-     "y_pred": None,
-     "posteriors_train": None,
-     "posteriors_test": None,
-     "y_labels": None,
-     "y_pred_train": None,
- }
-
- def set_arguments(y_pred=None, posteriors_train=None, posteriors_test=None, y_labels=None, y_pred_train=None):
-     global ARGUMENTS_SETTED
-     global arguments
-     arguments["y_pred"] = y_pred
-     arguments["posteriors_train"] = posteriors_train
-     arguments["posteriors_test"] = posteriors_test
-     arguments["y_labels"] = y_labels
-     arguments["y_pred_train"] = y_pred_train
-
+ "mlquantify, a Python package for quantification"
+
+ import pandas
+
+ from . import base
+ from . import model_selection
+ from . import plots
+ from . import classification
+ from . import evaluation
+ from . import methods
+ from . import utils
+
+ ARGUMENTS_SETTED = False
+
+ arguments = {
+     "y_pred": None,
+     "posteriors_train": None,
+     "posteriors_test": None,
+     "y_labels": None,
+     "y_pred_train": None,
+ }
+
+ def set_arguments(y_pred=None, posteriors_train=None, posteriors_test=None, y_labels=None, y_pred_train=None):
+     global ARGUMENTS_SETTED
+     global arguments
+     arguments["y_pred"] = y_pred
+     arguments["posteriors_train"] = posteriors_train.to_numpy() if isinstance(posteriors_train, pandas.DataFrame) else posteriors_train
+     arguments["posteriors_test"] = posteriors_test.to_numpy() if isinstance(posteriors_test, pandas.DataFrame) else posteriors_test
+     arguments["y_labels"] = y_labels
+     arguments["y_pred_train"] = y_pred_train
+
      ARGUMENTS_SETTED = True
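
The behavioral change in `mlquantify/__init__.py` for 0.1.0 is the new `pandas` import and the coercion of DataFrame posteriors to numpy arrays inside `set_arguments`. Below is a minimal sketch of what that buys callers; it assumes mlquantify 0.1.0 is installed and that the module-level `arguments` dict remains readable, as the diff shows:

```python
import numpy as np
import pandas as pd
import mlquantify as mq

# Posterior probabilities as a DataFrame, one column per class.
posteriors = pd.DataFrame({"neg": [0.8, 0.3], "pos": [0.2, 0.7]})

# As of 0.1.0, set_arguments converts DataFrame posteriors to numpy arrays;
# plain arrays and None still pass through unchanged.
mq.set_arguments(posteriors_test=posteriors)

assert isinstance(mq.arguments["posteriors_test"], np.ndarray)
print(mq.arguments["posteriors_test"])  # [[0.8 0.2]
                                        #  [0.3 0.7]]
```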