aimodelshare 0.3.7__py3-none-any.whl
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- aimodelshare/README.md +26 -0
- aimodelshare/__init__.py +100 -0
- aimodelshare/aimsonnx.py +2381 -0
- aimodelshare/api.py +836 -0
- aimodelshare/auth.py +163 -0
- aimodelshare/aws.py +511 -0
- aimodelshare/aws_client.py +173 -0
- aimodelshare/base_image.py +154 -0
- aimodelshare/bucketpolicy.py +106 -0
- aimodelshare/color_mappings/color_mapping_keras.csv +121 -0
- aimodelshare/color_mappings/color_mapping_pytorch.csv +117 -0
- aimodelshare/containerisation.py +244 -0
- aimodelshare/containerization.py +712 -0
- aimodelshare/containerization_templates/Dockerfile.txt +8 -0
- aimodelshare/containerization_templates/Dockerfile_PySpark.txt +23 -0
- aimodelshare/containerization_templates/buildspec.txt +14 -0
- aimodelshare/containerization_templates/lambda_function.txt +40 -0
- aimodelshare/custom_approach/__init__.py +1 -0
- aimodelshare/custom_approach/lambda_function.py +17 -0
- aimodelshare/custom_eval_metrics.py +103 -0
- aimodelshare/data_sharing/__init__.py +0 -0
- aimodelshare/data_sharing/data_sharing_templates/Dockerfile.txt +3 -0
- aimodelshare/data_sharing/data_sharing_templates/__init__.py +1 -0
- aimodelshare/data_sharing/data_sharing_templates/buildspec.txt +15 -0
- aimodelshare/data_sharing/data_sharing_templates/codebuild_policies.txt +129 -0
- aimodelshare/data_sharing/data_sharing_templates/codebuild_trust_relationship.txt +12 -0
- aimodelshare/data_sharing/download_data.py +620 -0
- aimodelshare/data_sharing/share_data.py +373 -0
- aimodelshare/data_sharing/utils.py +8 -0
- aimodelshare/deploy_custom_lambda.py +246 -0
- aimodelshare/documentation/Makefile +20 -0
- aimodelshare/documentation/karma_sphinx_theme/__init__.py +28 -0
- aimodelshare/documentation/karma_sphinx_theme/_version.py +2 -0
- aimodelshare/documentation/karma_sphinx_theme/breadcrumbs.html +70 -0
- aimodelshare/documentation/karma_sphinx_theme/layout.html +172 -0
- aimodelshare/documentation/karma_sphinx_theme/search.html +50 -0
- aimodelshare/documentation/karma_sphinx_theme/searchbox.html +14 -0
- aimodelshare/documentation/karma_sphinx_theme/static/css/custom.css +2 -0
- aimodelshare/documentation/karma_sphinx_theme/static/css/custom.css.map +1 -0
- aimodelshare/documentation/karma_sphinx_theme/static/css/theme.css +2751 -0
- aimodelshare/documentation/karma_sphinx_theme/static/css/theme.css.map +1 -0
- aimodelshare/documentation/karma_sphinx_theme/static/css/theme.min.css +2 -0
- aimodelshare/documentation/karma_sphinx_theme/static/css/theme.min.css.map +1 -0
- aimodelshare/documentation/karma_sphinx_theme/static/font/fontello.eot +0 -0
- aimodelshare/documentation/karma_sphinx_theme/static/font/fontello.svg +32 -0
- aimodelshare/documentation/karma_sphinx_theme/static/font/fontello.ttf +0 -0
- aimodelshare/documentation/karma_sphinx_theme/static/font/fontello.woff +0 -0
- aimodelshare/documentation/karma_sphinx_theme/static/font/fontello.woff2 +0 -0
- aimodelshare/documentation/karma_sphinx_theme/static/js/theme.js +68 -0
- aimodelshare/documentation/karma_sphinx_theme/theme.conf +9 -0
- aimodelshare/documentation/make.bat +35 -0
- aimodelshare/documentation/requirements.txt +2 -0
- aimodelshare/documentation/source/about.rst +18 -0
- aimodelshare/documentation/source/advanced_features.rst +137 -0
- aimodelshare/documentation/source/competition.rst +218 -0
- aimodelshare/documentation/source/conf.py +58 -0
- aimodelshare/documentation/source/create_credentials.rst +86 -0
- aimodelshare/documentation/source/example_notebooks.rst +132 -0
- aimodelshare/documentation/source/functions.rst +151 -0
- aimodelshare/documentation/source/gettingstarted.rst +390 -0
- aimodelshare/documentation/source/images/creds1.png +0 -0
- aimodelshare/documentation/source/images/creds2.png +0 -0
- aimodelshare/documentation/source/images/creds3.png +0 -0
- aimodelshare/documentation/source/images/creds4.png +0 -0
- aimodelshare/documentation/source/images/creds5.png +0 -0
- aimodelshare/documentation/source/images/creds_file_example.png +0 -0
- aimodelshare/documentation/source/images/predict_tab.png +0 -0
- aimodelshare/documentation/source/index.rst +110 -0
- aimodelshare/documentation/source/modelplayground.rst +132 -0
- aimodelshare/exceptions.py +11 -0
- aimodelshare/generatemodelapi.py +1270 -0
- aimodelshare/iam/codebuild_policy.txt +129 -0
- aimodelshare/iam/codebuild_trust_relationship.txt +12 -0
- aimodelshare/iam/lambda_policy.txt +15 -0
- aimodelshare/iam/lambda_trust_relationship.txt +12 -0
- aimodelshare/json_templates/__init__.py +1 -0
- aimodelshare/json_templates/api_json.txt +155 -0
- aimodelshare/json_templates/auth/policy.txt +1 -0
- aimodelshare/json_templates/auth/role.txt +1 -0
- aimodelshare/json_templates/eval/policy.txt +1 -0
- aimodelshare/json_templates/eval/role.txt +1 -0
- aimodelshare/json_templates/function/policy.txt +1 -0
- aimodelshare/json_templates/function/role.txt +1 -0
- aimodelshare/json_templates/integration_response.txt +5 -0
- aimodelshare/json_templates/lambda_policy_1.txt +15 -0
- aimodelshare/json_templates/lambda_policy_2.txt +8 -0
- aimodelshare/json_templates/lambda_role_1.txt +12 -0
- aimodelshare/json_templates/lambda_role_2.txt +16 -0
- aimodelshare/leaderboard.py +174 -0
- aimodelshare/main/1.txt +132 -0
- aimodelshare/main/1B.txt +112 -0
- aimodelshare/main/2.txt +153 -0
- aimodelshare/main/3.txt +134 -0
- aimodelshare/main/4.txt +128 -0
- aimodelshare/main/5.txt +109 -0
- aimodelshare/main/6.txt +105 -0
- aimodelshare/main/7.txt +144 -0
- aimodelshare/main/8.txt +142 -0
- aimodelshare/main/__init__.py +1 -0
- aimodelshare/main/authorization.txt +275 -0
- aimodelshare/main/eval_classification.txt +79 -0
- aimodelshare/main/eval_lambda.txt +1709 -0
- aimodelshare/main/eval_regression.txt +80 -0
- aimodelshare/main/lambda_function.txt +8 -0
- aimodelshare/main/nst.txt +149 -0
- aimodelshare/model.py +1543 -0
- aimodelshare/modeluser.py +215 -0
- aimodelshare/moral_compass/README.md +408 -0
- aimodelshare/moral_compass/__init__.py +65 -0
- aimodelshare/moral_compass/_version.py +3 -0
- aimodelshare/moral_compass/api_client.py +601 -0
- aimodelshare/moral_compass/apps/__init__.py +69 -0
- aimodelshare/moral_compass/apps/ai_consequences.py +540 -0
- aimodelshare/moral_compass/apps/bias_detective.py +714 -0
- aimodelshare/moral_compass/apps/ethical_revelation.py +898 -0
- aimodelshare/moral_compass/apps/fairness_fixer.py +889 -0
- aimodelshare/moral_compass/apps/judge.py +888 -0
- aimodelshare/moral_compass/apps/justice_equity_upgrade.py +853 -0
- aimodelshare/moral_compass/apps/mc_integration_helpers.py +820 -0
- aimodelshare/moral_compass/apps/model_building_game.py +1104 -0
- aimodelshare/moral_compass/apps/model_building_game_beginner.py +687 -0
- aimodelshare/moral_compass/apps/moral_compass_challenge.py +858 -0
- aimodelshare/moral_compass/apps/session_auth.py +254 -0
- aimodelshare/moral_compass/apps/shared_activity_styles.css +349 -0
- aimodelshare/moral_compass/apps/tutorial.py +481 -0
- aimodelshare/moral_compass/apps/what_is_ai.py +853 -0
- aimodelshare/moral_compass/challenge.py +365 -0
- aimodelshare/moral_compass/config.py +187 -0
- aimodelshare/placeholders/model.onnx +0 -0
- aimodelshare/placeholders/preprocessor.zip +0 -0
- aimodelshare/playground.py +1968 -0
- aimodelshare/postprocessormodules.py +157 -0
- aimodelshare/preprocessormodules.py +373 -0
- aimodelshare/pyspark/1.txt +195 -0
- aimodelshare/pyspark/1B.txt +181 -0
- aimodelshare/pyspark/2.txt +220 -0
- aimodelshare/pyspark/3.txt +204 -0
- aimodelshare/pyspark/4.txt +187 -0
- aimodelshare/pyspark/5.txt +178 -0
- aimodelshare/pyspark/6.txt +174 -0
- aimodelshare/pyspark/7.txt +211 -0
- aimodelshare/pyspark/8.txt +206 -0
- aimodelshare/pyspark/__init__.py +1 -0
- aimodelshare/pyspark/authorization.txt +258 -0
- aimodelshare/pyspark/eval_classification.txt +79 -0
- aimodelshare/pyspark/eval_lambda.txt +1441 -0
- aimodelshare/pyspark/eval_regression.txt +80 -0
- aimodelshare/pyspark/lambda_function.txt +8 -0
- aimodelshare/pyspark/nst.txt +213 -0
- aimodelshare/python/my_preprocessor.py +58 -0
- aimodelshare/readme.md +26 -0
- aimodelshare/reproducibility.py +181 -0
- aimodelshare/sam/Dockerfile.txt +8 -0
- aimodelshare/sam/Dockerfile_PySpark.txt +24 -0
- aimodelshare/sam/__init__.py +1 -0
- aimodelshare/sam/buildspec.txt +11 -0
- aimodelshare/sam/codebuild_policies.txt +129 -0
- aimodelshare/sam/codebuild_trust_relationship.txt +12 -0
- aimodelshare/sam/codepipeline_policies.txt +173 -0
- aimodelshare/sam/codepipeline_trust_relationship.txt +12 -0
- aimodelshare/sam/spark-class.txt +2 -0
- aimodelshare/sam/template.txt +54 -0
- aimodelshare/tools.py +103 -0
- aimodelshare/utils/__init__.py +78 -0
- aimodelshare/utils/optional_deps.py +38 -0
- aimodelshare/utils.py +57 -0
- aimodelshare-0.3.7.dist-info/METADATA +298 -0
- aimodelshare-0.3.7.dist-info/RECORD +171 -0
- aimodelshare-0.3.7.dist-info/WHEEL +5 -0
- aimodelshare-0.3.7.dist-info/licenses/LICENSE +5 -0
- aimodelshare-0.3.7.dist-info/top_level.txt +1 -0
aimodelshare/documentation/source/advanced_features.rst
@@ -0,0 +1,137 @@
.. _advanced_features:

Advanced Features
#################

The AI Model Share library can help deploy interactive web dashboards for making predictions with your models in minutes.

However, sometimes there is a need for higher levels of customization than the base features include.

This page describes additional pathways that allow for the greatest programming flexibility while still leveraging the power of the AI Model Share library.

.. _custom_deployments:

Custom Deployments
******************

The base ``ModelPlayground.deploy`` method deploys a pre-written lambda handler optimized for efficiency with specific types of prediction domains.

For projects requiring more flexibility in the lambda handler, the AI Model Share library allows for "Custom Deployments". Custom Deployments let users leverage the AI Model Share infrastructure through AWS while creating opportunities for additional customization.

**Tutorial**

`Guide to Custom Deployments <https://www.modelshare.org/notebooks/notebook:365>`_

.. _PySpark:

Using PySpark
*************

AI Model Share supports PySpark. Note that the current prediction API runtime only accepts pandas DataFrames, which may require additional steps for PySpark preprocessors.

**Tutorial**

`Quick Start Tutorial with PySpark <https://www.modelshare.org/notebooks/notebook:366>`_
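As a minimal sketch of the pandas-only constraint (the ``preprocessor`` function below is an illustrative assumption, not part of the library): a PySpark pipeline's output would first be converted with ``spark_df.toPandas()``, and the preprocessing step re-expressed against the resulting pandas DataFrame, for example:

```python
import pandas as pd

# Hypothetical preprocessor adapted for the pandas-only prediction runtime.
# A Spark DataFrame would first be converted with spark_df.toPandas();
# this function then stands in for a Spark ML stage (here, standard scaling
# of the numeric columns). Non-numeric columns pass through unchanged.
def preprocessor(data: pd.DataFrame) -> pd.DataFrame:
    data = data.copy()
    numeric_cols = data.select_dtypes("number").columns
    data[numeric_cols] = (
        data[numeric_cols] - data[numeric_cols].mean()
    ) / data[numeric_cols].std()
    return data
```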

.. _webapps:

Connecting Custom Web Apps
**************************

AI Model Share allows users to leverage the AI Model Share deployment infrastructure to power their own custom web apps. Web apps can be displayed through the AI Model Share website and be highlighted as part of a developer’s portfolio.

**Users can connect their web apps in 3 easy steps:**

#. Deploy a Model Playground (see tutorials :ref:`HERE<example_notebooks>`).

#. In the code for your web app, set the path for your Model Playground's prediction API (built for you by AI Model Share in the “deploy” step) as the endpoint for the API request.

   .. note::

      Owners can find their Model Playground's API URL on the “Predict” tab of their Model Playground.

      AI Model Share API URLs follow this format: *"https://example.execute-api.us-east-1.amazonaws.com"* (Remove "/prod/m" from the end of the URL string.)

   .. image:: images/predict_tab.png

#. Authorization tokens are generated for users when they log in to the AI Model Share website. Unpack the token parameter within your Streamlit code, then format the headers in your API call to expect a token as a query parameter.

   .. code-block:: python

      auth_token = st.experimental_get_query_params()['token'][0]

      headers = {
          "Content-Type": "application/json",
          "authorizationToken": auth_token,
      }
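Putting the pieces together, here is a hedged sketch of assembling the API call itself. The ``build_prediction_request`` helper and the ``"data"`` payload key are illustrative assumptions, not part of the library; check the schema your prediction API expects.

```python
import json

def build_prediction_request(api_url, auth_token, rows):
    # Hypothetical helper: assemble the parts of the POST request a web app
    # sends to a Model Playground prediction API. The "data" payload key is
    # an assumption for illustration only.
    headers = {
        "Content-Type": "application/json",
        "authorizationToken": auth_token,
    }
    body = json.dumps({"data": rows})
    return api_url, headers, body

url, headers, body = build_prediction_request(
    "https://example.execute-api.us-east-1.amazonaws.com",
    "sample-token",
    [[5.1, 3.5, 1.4, 0.2]],
)
# These would then be sent with, e.g., requests.post(url, headers=headers, data=body)
```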

**Done!** Design your web app to customize the AI Model Share infrastructure to your own needs. See examples below:

**Examples**

* `Text Classification: Webapp <https://share.streamlit.io/thestreett/streamlit-text-classification/main>`_
* `Text Classification: Github <https://github.com/AIModelShare/aimodelshare_tutorials/tree/main/streamlit-text-classification>`_
* `Tabular Classification: Webapp <https://share.streamlit.io/thestreett/streamlit-tabular-classification/main>`_
* `Tabular Classification: Github <https://github.com/AIModelShare/aimodelshare_tutorials/tree/main/streamlit-tabular-classification>`_
* `Image Classification: Webapp <https://share.streamlit.io/thestreett/streamlit-image-classification/main>`_
* `Image Classification: Github <https://github.com/AIModelShare/aimodelshare_tutorials/tree/main/streamlit-image-classification>`_


.. _reproducibility:

Model Reproducibility
*********************

AI Model Share encourages users to share, replicate, and build on each other’s work by offering full model reproducibility functionality.
Users can leverage Competitions & Experiments as a way to exchange trained & untrained models.

To **share** a reproducible model, take the following steps:

1. Export the model's reproducibility environment into a JSON file with the ``export_reproducibility_env`` function. This function captures all of the necessary information to exactly reproduce a machine learning model.

   Example: ::

      from aimodelshare import export_reproducibility_env

      mode = "gpu"  # or "cpu", depending on model type
      seed = 2021

      export_reproducibility_env(
          seed=seed,
          directory="",  # use "" for current directory
          mode=mode,
      )

   .. note::

      The captured reproducibility environment only applies to one training iteration (data prep, preprocessing, fit model, submit model), so it is recommended to train only one model per training iteration.

2. Build your model with your preferred ML library.

3. Submit the model with the ``reproducibility_env_filepath`` argument set.

   Example: ::

      # Submit reproducible model
      reproducibility_env_filepath = "reproducibility.json"

      # Generate predicted y values (for Keras models)
      y_pred = model.predict(X_test).argmax(axis=1)
      prediction_labels = [y_test.columns[i] for i in y_pred]

      # Submit model to competition leaderboard
      mycompetition.submit_model(model_filepath="model.onnx",
                                 preprocessor_filepath="preprocessor.zip",
                                 prediction_submission=prediction_labels,
                                 reproducibility_env_filepath=reproducibility_env_filepath)


To **instantiate** a previously submitted reproducible model, use the ``Competition.instantiate_model`` method.

Example: ::

   reproducible_model = mycompetition.instantiate_model(version=1, reproduce=True)
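Since the captured environment assumes a fixed seed for the single training iteration, a small companion sketch can pin the usual sources of randomness before training. The ``set_global_seed`` helper below is an illustrative assumption, not a library function; only the ``seed`` value comes from the workflow above.

```python
import os
import random

import numpy as np

def set_global_seed(seed: int) -> None:
    # Hypothetical helper: pin the common sources of randomness so the
    # exported reproducibility environment corresponds to one repeatable
    # training run. Deep learning frameworks may need their own seed calls.
    os.environ["PYTHONHASHSEED"] = str(seed)
    random.seed(seed)
    np.random.seed(seed)

set_global_seed(2021)  # same seed passed to export_reproducibility_env
```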
aimodelshare/documentation/source/competition.rst
@@ -0,0 +1,218 @@
Competition Class
=================

After deploying a Model Playground, users can create a model competition. Creating a competition allows you to:

1. Verify the Model Playground performance metrics on aimodelshare.org
2. Submit models to a leaderboard
3. Grant access to other users to submit models to the leaderboard
4. Easily compare model performance and structure


.. _submit_model:

submit_model
------------

Submits a model/preprocessor to a machine learning competition using the live prediction API URL generated by the AI Model Share library. The submitted model is evaluated and compared with all existing models, and a leaderboard can be generated.

.. py:function:: Competition.submit_model(model_filepath, preprocessor_filepath, prediction_submission, sample_data=None, reproducibility_env_filepath=None, custom_metadata=None)

   :param model_filepath: Absolute path to the model file. [REQUIRED] to be set by the user. ``.onnx`` is the only accepted model file extension. Use "example_model.onnx" for a file in the current directory, or "/User/xyz/model/example_model.onnx" as an absolute path to a model file in a local directory.
   :type model_filepath: string - ends with '.onnx'
   :param preprocessor_filepath: Absolute path to the preprocessor file. [REQUIRED] to be set by the user. Example: "./preprocessor.zip" searches for an exported zip preprocessor file in the current directory. The file is generated from a preprocessor module using the export_preprocessor function from the AI Model Share library.
   :type preprocessor_filepath: string
   :param prediction_submission: Predictions for test data. [REQUIRED] for evaluation metrics of the submitted model.
   :type prediction_submission: One-hot encoded prediction data for classification. List of values for regression.
   :param sample_data:
   :type sample_data:
   :param reproducibility_env_filepath: [OPTIONAL] to be set by the user - absolute path to the reproducibility environment JSON file. Example: "./reproducibility.json". The file is generated using the export_reproducibility_env function from the AI Model Share library.
   :type reproducibility_env_filepath: string
   :param custom_metadata: Dictionary of custom metadata metrics (keys) and values for the model to be submitted.
   :type custom_metadata: dictionary

   :return: Model version if the model is submitted successfully.

Example: ::

   #-- Generate predicted values (sklearn)
   prediction_labels = model.predict(preprocessor(X_test))

   #-- Generate predicted values (keras)
   prediction_column_index = model.predict(preprocessor(X_test)).argmax(axis=1)
   # Extract correct prediction labels
   prediction_labels = [y_train.columns[i] for i in prediction_column_index]

   # Submit model to competition leaderboard
   mycompetition.submit_model(model_filepath="model.onnx",
                              preprocessor_filepath="preprocessor.zip",
                              prediction_submission=prediction_labels)
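The Keras label-extraction step above can be sketched end to end with dummy data (the probabilities and class names below are made up for illustration):

```python
import numpy as np
import pandas as pd

# Fake model.predict output: one row of class probabilities per observation
probs = np.array([[0.1, 0.9],
                  [0.8, 0.2]])
# One-hot-encoded training labels, whose column names are the class labels
y_train = pd.DataFrame(columns=["cat", "dog"])

# argmax picks the most probable class index per row; the index then maps
# to the corresponding one-hot column name
prediction_column_index = probs.argmax(axis=1)
prediction_labels = [y_train.columns[i] for i in prediction_column_index]
# prediction_labels == ["dog", "cat"]
```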

.. _instantiate_model:

instantiate_model
-----------------

Import a model previously submitted to the competition leaderboard to use in your session.

.. py:function:: Competition.instantiate_model(version=None, trained=False, reproduce=False)

   :param version: Model version number from the competition leaderboard.
   :type version: integer
   :param trained: If True, a trained model is instantiated; if False, the untrained model is instantiated.
   :type trained: bool, default=False
   :param reproduce: Set to True to instantiate a model with the reproducibility environment set up.
   :type reproduce: bool, default=False

   :return: Model chosen from the leaderboard.

Example: ::

   # Instantiate Model 1 from the leaderboard, pre-trained
   mymodel = mycompetition.instantiate_model(version=1, trained=True, reproduce=False)

.. note::

   If ``reproduce = True``, an untrained model will be instantiated, regardless of the ``trained`` parameter value.

.. _inspect_model:

inspect_model
-------------

Examine the structure of a model submitted to a competition leaderboard.

.. py:function:: Competition.inspect_model(version=None, naming_convention=None)

   :param version: Model version number from the competition leaderboard.
   :type version: integer
   :param naming_convention: Either "keras" or "pytorch", depending on which kind of layer names should be displayed.
   :type naming_convention: string - either "keras" or "pytorch"

   :return: inspect_pd : dictionary of model summary & metadata


.. _compare_models:

compare_models
--------------

Compare the structure of two or more models submitted to a competition leaderboard. Use in conjunction with stylize_compare to visualize the data.

.. py:function:: Competition.compare_models(version_list="None", verbose=1, naming_convention=None)

   :param version_list: List of model version numbers to compare (previously submitted to the competition leaderboard).
   :type version_list: list of integers
   :param verbose: Controls the verbosity: the higher, the more detail.
   :type verbose: integer
   :param naming_convention: Either "keras" or "pytorch", depending on which kind of layer names should be displayed.
   :type naming_convention: string - either "keras" or "pytorch"

   :return: data : dictionary of model comparison information.

Example ::

   # Compare two or more models
   data = mycompetition.compare_models([7, 8], verbose=1)
   mycompetition.stylize_compare(data)

.. _stylize_compare:

stylize_compare
---------------

Stylizes data received from compare_models to highlight similarities & differences.

.. py:function:: Competition.stylize_compare(compare_dict, naming_convention=None)

   :param compare_dict: Model data from compare_models().
   :type compare_dict: dictionary
   :param naming_convention: Either "keras" or "pytorch", depending on which kind of layer names should be displayed.
   :type naming_convention: string - either "keras" or "pytorch"

   :return: Formatted table of model comparisons.

Example ::

   # Compare two or more models
   data = mycompetition.compare_models([7, 8], verbose=1)
   mycompetition.stylize_compare(data)

.. _inspect_y_test:

inspect_y_test
--------------

Examines the structure of y-test data to help users understand how to submit models to the competition leaderboard.

.. py:function:: Competition.inspect_y_test()

   :param none:

   :return: Dictionary of a competition's y-test metadata.

Example: ::

   mycompetition.inspect_y_test()

.. _get_leaderboard:

get_leaderboard
---------------

Get the current competition leaderboard to rank all submitted models. Use in conjunction with stylize_leaderboard to visualize the data.

.. py:function:: Competition.get_leaderboard(verbose=3, columns=None)

   :param verbose: (Optional) Controls the verbosity: the higher, the more detail.
   :type verbose: integer
   :param columns: (Optional) List of specific column names to include in the leaderboard; all others will be excluded. Performance metrics will always be displayed.
   :type columns: list of strings

   :return: Dictionary of leaderboard data.

Example: ::

   data = mycompetition.get_leaderboard()
   mycompetition.stylize_leaderboard(data)

.. _stylize_leaderboard:

stylize_leaderboard
-------------------

Stylizes data received from get_leaderboard.

.. py:function:: Competition.stylize_leaderboard(leaderboard, naming_convention="keras")

   :param leaderboard: Data dictionary object returned from get_leaderboard.
   :type leaderboard: dictionary

   :return: Formatted competition leaderboard.

Example: ::

   data = mycompetition.get_leaderboard()
   mycompetition.stylize_leaderboard(data)

.. _update_access_list:

update_access_list
------------------

Updates the list of authenticated participants who can submit new models to a competition.

.. py:function:: Competition.update_access_list(email_list=[], update_type="Replace_list")

   :param email_list: [REQUIRED] List of comma-separated emails for users who are allowed to submit models to the competition.
   :type email_list: list of strings
   :param update_type: [REQUIRED] Options: 'Add', 'Remove', 'Replace_list', 'Get'. 'Add' appends user emails to the original list, 'Remove' deletes users from the list, 'Replace_list' overwrites the original list with the new list provided, and 'Get' returns the current list.
   :type update_type: string

   :return: "Success" upon successful request.

Example ::

   # Add, remove, or completely update authorized participants for the competition later
   emaillist = ["newemailaddress@gmail.com"]
   mycompetition.update_access_list(email_list=emaillist, update_type="Add")
aimodelshare/documentation/source/conf.py
@@ -0,0 +1,58 @@
# Configuration file for the Sphinx documentation builder.
#
# This file only contains a selection of the most common options. For a full
# list see the documentation:
# https://www.sphinx-doc.org/en/master/usage/configuration.html

# -- Path setup --------------------------------------------------------------

# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
# import os
# import sys
# sys.path.insert(0, os.path.abspath('../Documents/Columbia/AI Model Share/Github/aimodelshare/'))


# -- Project information -----------------------------------------------------

project = 'AIModelShare'
copyright = '2022, AIModelShare'
author = 'AIModelShare'

# The full version, including alpha/beta/rc tags
release = '0.1'


# -- General configuration ---------------------------------------------------

# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
    # 'sphinx.ext.autodoc'
    "nbsphinx",
    # "sphinxcontrib.fulltoc",
]

# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']

# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This pattern also affects html_static_path and html_extra_path.
exclude_patterns = []


# -- Options for HTML output -------------------------------------------------

# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
html_theme = 'karma_sphinx_theme'

# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
aimodelshare/documentation/source/create_credentials.rst
@@ -0,0 +1,86 @@
|
|
|
1
|
+
.. _create_credentials:
|
|
2
|
+
|
|
3
|
+
Setting up AI Model Share Credentials
|
|
4
|
+
#####################################
|
|
5
|
+
|
|
6
|
+
In order to deploy a Model Playground, AI Model Share needs access to create & manage cloud-based resources on your behalf through Amazon Web Services (AWS). This guide shows you how to create an AWS account, access the proper credentials, and create a credentials file to use with the aimodelshare python library.
|
|
7
|
+
|
|
8
|
+
Step 1: Create an AI Model Share Account
****************************************

*If you already have an AI Model Share account, proceed to the next step.*

Create an AI Model Share account by going `HERE <https://www.modelshare.org/login>`_ and following the prompts.

*Are you just using your credentials to submit models to a pre-existing playground competition or experiment? You're done!*
Get started with any of the :ref:`Model Submission notebooks <example_notebooks>`.

Step 2: Create an AWS account
*****************************

*If you already have an AWS account, proceed to the next step.*

Create an AWS account by going `HERE <https://portal.aws.amazon.com/billing/signup#/start/email>`_ and following the prompts.

Step 3: Create an IAM User & AWS Access Keys
********************************************

In order for aimodelshare to access your AWS resources, you will need to create IAM credentials for your AWS account.

* `Log in <https://signin.aws.amazon.com/signin>`_ to your AWS account.
* From the AWS Management Console, navigate to “Security, Identity, & Compliance” and then “IAM”.

.. image:: images/creds1.png
   :width: 300

* In the side menu, navigate to “Access management”, click on “Users”, then click on “Add User” on the right side of the screen.

.. image:: images/creds2.png
   :width: 600

* Create a name that you’ll remember, like “aimodelshareadmin”, then enable “programmatic access” by checking the box.

.. image:: images/creds3.png
   :width: 600

* On the next screen, click “Attach existing policies directly”, then select “AdministratorAccess”.

.. image:: images/creds4.png
   :width: 600

* Click “Next: Review”, then “Create User”.
* Copy the Access key ID and Secret access key and save them somewhere safe. These are the credentials you will use to link your AI Model Share account to the resources in your AWS account.

Step 4: Create your credentials file
************************************

Combine your AI Model Share & AWS credentials into a single 'credentials.txt' file with the `configure_credentials` function. You only have to create the file once; after that, you can reuse it whenever you use the aimodelshare library.

Credentials files must follow this format:

.. image:: images/creds_file_example.png
   :width: 600

You can create this txt file manually, or you can generate it automatically by entering your credentials in response to simple prompts from the configure_credentials() function. The following code will prompt you for your credentials one at a time and pre-format a txt file for you to use in the future:

.. code-block::

   # install aimodelshare library
   ! pip install aimodelshare

   # Generate credentials file
   import aimodelshare as ai
   from aimodelshare.aws import configure_credentials
   configure_credentials()

.. warning::

   Remember to keep your credentials secure! Handle your credentials file with the same level of security you handle your passwords. Do not share your file with anyone, send it via email, or upload it to GitHub.
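
If you ever need to inspect or load an existing credentials file yourself, a small parser along these lines will do it. This is only an illustrative sketch, not part of the aimodelshare API: the ``key=value`` layout and field names it assumes should be adjusted to match the format shown in the image above.

```python
# Illustrative sketch only: read a simple "key=value" credentials file into
# environment variables. The parsing rules here (skip blanks, skip section
# headers like [default], strip surrounding quotes) are assumptions for the
# example; your real credentials.txt should follow the format shown above.
import os


def load_credentials(path="credentials.txt"):
    creds = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            # skip blank lines, section headers, and anything without "="
            if not line or line.startswith("[") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            creds[key.strip()] = value.strip().strip('"')
    # export the parsed values so later library calls can pick them up
    os.environ.update(creds)
    return creds
```

Because the values end up in environment variables, the same process can use them without re-prompting.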

Step 5: Get started!
********************

Now that you have your credentials file, you are ready to work through the :ref:`AI Model Share Tutorial <aimodelshare_tutorial>` or one of the :ref:`example_notebooks`.
.. _example_notebooks:

Example Notebooks
#################

After completing the AI Model Share Tutorial, you can deploy Model Playgrounds and submit models for a variety of live data classification & regression competitions.

Choose a dataset that you would like to work with, and find everything you need below. Notebooks can be downloaded or opened directly in Google Colab.

(*Looking for PySpark?* Go :ref:`here. <tabular_class>`)

.. _tabular_class:

Tabular Classification:
***********************

Titanic Dataset
================

"The sinking of the Titanic is one of the most infamous shipwrecks in history. On April 15, 1912, during her maiden voyage, the widely considered “unsinkable” RMS Titanic sank after colliding with an iceberg. Unfortunately, there weren’t enough lifeboats for everyone onboard, resulting in the death of 1502 out of 2224 passengers and crew. While there was some element of luck involved in surviving, it seems some groups of people were more likely to survive than others." Use data about the passengers on board to predict whether they were likely to survive the shipwreck.

*Dataset and description adapted from: Kaggle GettingStarted Prediction Competition. (2012, September). Titanic - Machine Learning from Disaster, Version 1. Retrieved September 7, 2021 from https://www.kaggle.com/c/titanic/data.*

* `Quick Start Tutorial (Start here to Deploy) <https://www.modelshare.org/notebooks/notebook:304>`_
* `Quick Start Tutorial with PySpark <https://www.modelshare.org/notebooks/notebook:366>`_
* `Model Submission Guide: sklearn <https://www.modelshare.org/notebooks/notebook:305>`_
* `Model Submission Guide: Deep Learning <https://www.modelshare.org/notebooks/notebook:306>`_
* `Model Submission Guide: Predictions Only (no model metadata extracted) <https://www.modelshare.org/notebooks/notebook:319>`_
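
The tabular classification workflow in these notebooks can be sketched in a few lines of scikit-learn. The snippet below runs on small synthetic data, with hypothetical stand-in features (age, fare, passenger class) rather than the real Titanic columns:

```python
# Minimal tabular classification sketch on synthetic, Titanic-like data.
# The features and label rule are made up for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 200
# synthetic features: [age, fare, passenger class]
X = np.column_stack([
    rng.uniform(1, 80, n),
    rng.uniform(5, 500, n),
    rng.integers(1, 4, n),
])
# synthetic label loosely tied to fare (higher fare -> more likely to survive)
y = (X[:, 1] + rng.normal(0, 50, n) > 150).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
accuracy = model.score(X_test, y_test)
```

The submission guides above follow the same fit/predict shape with the real dataset and richer preprocessing.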

.. _tabular_reg:

Tabular Regression:
*******************

Used Car Sales Price Dataset
============================

Cars notoriously lose value as soon as they are purchased. However, the resale value of any particular car depends on many factors, including make, model, miles driven, transmission type, the number of owners, and more. Use this dataset to predict the resale value of used cars based on their features.

*Dataset adapted from: Birla, Nehal, Nishant Verma, and Nikhil Kushwaha. (June, 2018). Vehicle dataset, Version 3. Retrieved September 14, 2021 from https://www.kaggle.com/nehalbirla/vehicle-dataset-from-cardekho.*

* `Quick Start Tutorial (Start here to Deploy) <https://www.modelshare.org/notebooks/notebook:285>`_
* `Model Submission Guide: sklearn <https://www.modelshare.org/notebooks/notebook:286>`_
* `Model Submission Guide: Deep Learning <https://www.modelshare.org/notebooks/notebook:287>`_
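
A tabular regression baseline for this kind of price-prediction task can be sketched the same way. The snippet below uses synthetic data with hypothetical features (car age, kilometers driven) and a made-up depreciation rule, not the real dataset:

```python
# Minimal tabular regression sketch on synthetic, used-car-like data.
# Features and the pricing rule are invented for illustration only.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 300
# synthetic features: [car age in years, kilometers driven]
X = np.column_stack([rng.uniform(0, 20, n), rng.uniform(0, 200_000, n)])
# synthetic resale price: depreciates with age and mileage, plus noise
y = 20_000 - 600 * X[:, 0] - 0.05 * X[:, 1] + rng.normal(0, 500, n)

model = LinearRegression().fit(X, y)
r2 = model.score(X, y)  # coefficient of determination on the training data
```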

.. _text_class:

Text Classification:
********************

Covid Misinformation Identification
===================================

"The significance of social media has increased manifold in the past few decades as it helps people from even the most remote corners of the world stay connected. With the COVID-19 pandemic raging, social media has become more relevant and widely used than ever before, and along with this, there has been a resurgence in the circulation of fake news and tweets that demand immediate attention." Use this dataset to read COVID-19-related tweets and determine whether the information they present is "real" or "fake".

*Description and dataset adapted from: Sourya Dipta Das, Ayan Basak, and Saikat Dutta. "A Heuristic-driven Ensemble Framework for COVID-19 Fake News Detection". arXiv preprint arXiv:2101.03545. 2021. Retrieved from https://github.com/diptamath/covid_fake_news/tree/main/data.*

* `Quick Start Tutorial (Start here to Deploy) <https://www.modelshare.org/notebooks/notebook:290>`_
* `Model Submission Guide: sklearn <https://www.modelshare.org/notebooks/notebook:291>`_
* `Model Submission Guide: Deep Learning <https://www.modelshare.org/notebooks/notebook:292>`_
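
A common scikit-learn baseline for text classification tasks like this pairs TF-IDF features with a linear classifier. The snippet below trains on a tiny made-up corpus (the texts and labels are illustrative only, not drawn from the real dataset):

```python
# Minimal text classification sketch: TF-IDF features + logistic regression.
# The training texts are invented examples, not real tweets from the dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "vaccine trial results published in peer reviewed journal",
    "officials confirm case counts in daily briefing",
    "miracle cure doctors don't want you to know about",
    "secret plot behind the virus revealed by anonymous post",
] * 10  # repeat the toy corpus so the classifier has enough examples
labels = ["real", "real", "fake", "fake"] * 10

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)
prediction = clf.predict(["shocking cure they don't want you to know"])[0]
```

The deep learning submission guide swaps this pipeline for a neural text model, but the fit/predict interface stays the same.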

Clickbait Identification
========================

"In the online world, many media outlets have to generate revenue from the clicks made by their readers, and due to the presence of numerous such outlets, they have to compete with each other for reader attention. To attract the readers to click on an article and visit the media site, they often come up with catchy headlines accompanying the article links, which lure the readers to click on the link. Such headlines are known as Clickbaits. While these baits may trick the readers into clicking, in the long run, clickbaits usually don’t live up to the expectations of the readers and leave them disappointed." Use this dataset to read headlines from multiple media outlets and identify whether they represent clickbait or substantive news.

*Dataset and description adapted from: Abhijnan Chakraborty, Bhargavi Paranjape, Sourya Kakarla, and Niloy Ganguly. "Stop Clickbait: Detecting and Preventing Clickbaits in Online News Media". In Proceedings of the 2016 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), San Francisco, US, August 2016.*

* `Quick Start Tutorial (Start here to Deploy) <https://www.modelshare.org/notebooks/notebook:288>`_
* `Model Submission Guide <https://www.modelshare.org/notebooks/notebook:289>`_

IMDB Movie Review Identification
================================

IMDb, also known as the Internet Movie Database, is an online database of movies, TV shows, celebrities, and awards. Registered users can write reviews and rate content that they've seen. Use this dataset to classify 50,000 'highly polarized' movie reviews as positive or negative.

*Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. (2011). Learning Word Vectors for Sentiment Analysis. The 49th Annual Meeting of the Association for Computational Linguistics (ACL 2011).*

* `Quick Start Tutorial (Start here to Deploy) <https://www.modelshare.org/notebooks/notebook:300>`_
* `Model Submission Guide <https://www.modelshare.org/notebooks/notebook:301>`_

.. _image_class:

Image Classification:
*********************

Dog Breed Classification
========================

This dataset contains pictures of 6 different dog breeds, adapted from the original dataset of 120 different dog breeds from around the world. Use this dataset to look at images of dogs and determine which breed they belong to.

*Dataset adapted from: Aditya Khosla, Nityananda Jayadevaprakash, Bangpeng Yao and Li Fei-Fei. Novel dataset for Fine-Grained Image Categorization. First Workshop on Fine-Grained Visual Categorization (FGVC), IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2011. Retrieved from http://vision.stanford.edu/aditya86/ImageNetDogs/*

* `Quick Start Tutorial (Start here to Deploy) <https://www.modelshare.org/notebooks/notebook:293>`_
* `Model Submission Guide <https://www.modelshare.org/notebooks/notebook:294>`_

Fashion MNIST Classification
============================

An updated version of the iconic handwritten-digits MNIST dataset. Fashion-MNIST is a dataset of Zalando's article images, consisting of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28x28 grayscale image, associated with a label from one of 10 classes.

*Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms. Han Xiao, Kashif Rasul, Roland Vollgraf. arXiv:1708.07747*

* `Quick Start Tutorial (Start here to Deploy) <https://www.modelshare.org/notebooks/notebook:295>`_
* `Model Submission Guide <https://www.modelshare.org/notebooks/notebook:296>`_
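
Before any image classifier sees data like Fashion-MNIST, the 28x28 grayscale images are typically rescaled and reshaped. The snippet below sketches that preprocessing on random arrays standing in for the real dataset:

```python
# Preprocessing sketch for 28x28 grayscale images such as Fashion-MNIST:
# scale pixel values to [0, 1] and flatten each image into a feature vector.
# Random arrays are used here in place of the real dataset.
import numpy as np

rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(100, 28, 28), dtype=np.uint8)  # fake batch
labels = rng.integers(0, 10, size=100)  # 10 classes, as in Fashion-MNIST

X = images.astype("float32") / 255.0  # scale pixel intensities to [0, 1]
X_flat = X.reshape(len(X), -1)        # flatten to (n_samples, 784)
```

A flat `(n_samples, 784)` matrix suits sklearn-style models; convolutional networks would instead keep the `(28, 28)` shape and add a channel axis.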

Flower Classification
=====================

A dataset containing pictures of 5 different classes of flowers.

*The Tensorflow Team. (2019, January). Flowers. http://download.tensorflow.org/example_images/flower_photos.tgz*

* `Quick Start Tutorial (Start here to Deploy) <https://www.modelshare.org/notebooks/notebook:297>`_
* `Model Submission Guide <https://www.modelshare.org/notebooks/notebook:299>`_

.. _video_class:

Video Classification:
*********************

Sports Clips Classification
===========================

Video clips of people doing pull-ups, kayaking, and horseback riding. Use this dataset to watch video clips and determine which of the three activities is taking place.

*Dataset adapted from: Soomro, K., Zamir, A. R., & Shah, M. (2012). UCF101: A Dataset of 101 Human Actions Classes From Videos in The Wild. Center for Research in Computer Vision, University of Central Florida. https://arxiv.org/pdf/1212.0402v1.pdf*

* `Quick Start Tutorial (Start here to Deploy) <https://www.modelshare.org/notebooks/notebook:302>`_
* `Model Submission Guide <https://www.modelshare.org/notebooks/notebook:303>`_