active-vision 0.0.3-py3-none-any.whl → 0.0.4-py3-none-any.whl

active_vision/__init__.py CHANGED
@@ -1,3 +1,3 @@
- __version__ = "0.0.3"
+ __version__ = "0.0.4"
 
  from .core import *
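After upgrading, a quick sanity check is to print the package version — a minimal sketch, assuming the new wheel is installed in the active environment:

```python
import active_vision

# __version__ is set in active_vision/__init__.py, as shown in the diff above
print(active_vision.__version__)  # expected: "0.0.4"
```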
{active_vision-0.0.3.dist-info → active_vision-0.0.4.dist-info}/METADATA RENAMED
@@ -1,6 +1,6 @@
  Metadata-Version: 2.2
  Name: active-vision
- Version: 0.0.3
+ Version: 0.0.4
  Summary: Active learning for edge vision.
  Requires-Python: >=3.10
  Description-Content-Type: text/markdown
@@ -12,6 +12,7 @@ Requires-Dist: ipykernel>=6.29.5
  Requires-Dist: ipywidgets>=8.1.5
  Requires-Dist: loguru>=0.7.3
  Requires-Dist: seaborn>=0.13.2
+ Requires-Dist: timm>=1.0.13
 
  ![Python Version](https://img.shields.io/badge/python-3.10%2B-blue?style=for-the-badge)
  ![License](https://img.shields.io/badge/License-Apache%202.0-green.svg?style=for-the-badge)
@@ -26,16 +27,13 @@ Active learning at the edge for computer vision.
 
  The goal of this project is to create a framework for the active learning loop for computer vision deployed on edge devices.
 
- ## Installation
- I recommend using [uv](https://docs.astral.sh/uv/) to set up a virtual environment and install the package. You can also use any other virtual environment tool of your choice.
+ Supported tasks:
+ - [X] Image classification
+ - [ ] Object detection
+ - [ ] Segmentation
 
- If you're using uv:
 
- ```bash
- uv venv
- uv sync
- ```
- Once the virtual environment is created, you can install the package using pip.
+ ## Installation
 
  Get a release from PyPI:
  ```bash
@@ -49,6 +47,16 @@ cd active-vision
  pip install -e .
  ```
 
+ I recommend using [uv](https://docs.astral.sh/uv/) to set up a virtual environment and install the package. You can also use any other virtual environment tool of your choice.
+
+ If you're using uv:
+
+ ```bash
+ uv venv
+ uv sync
+ ```
+ Once the virtual environment is created, you can install the package using pip.
+
  > [!TIP]
  > If you're using uv, add `uv` before the pip install command to install into your virtual environment. E.g.:
  > ```bash
@@ -59,9 +67,11 @@ pip install -e .
  See the [notebook](./nbs/04_relabel_loop.ipynb) for a complete example.
 
  Be sure to prepare 3 datasets:
- - train: A dataframe of an existing labeled training dataset.
- - unlabeled: A dataframe of unlabeled data which we will sample from using active learning.
- - eval: A dataframe of labeled data which we will use to evaluate the performance of the model. (Optional)
+ - [initial_samples](./nbs/initial_samples.parquet): A dataframe of an existing labeled training dataset to seed the training set.
+ - [unlabeled](./nbs/unlabeled_samples.parquet): A dataframe of unlabeled data which we will sample from using active learning.
+ - [eval](./nbs/evaluation_samples.parquet): A dataframe of labeled data which we will use to evaluate the performance of the model.
+
+ As a toy example, I created the above 3 datasets from the imagenette dataset.
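A minimal sketch of loading these three datasets with pandas; the parquet paths come from the links above, and the column layout is whatever the labeling notebooks produced:

```python
import pandas as pd

# Labeled seed set, unlabeled pool, and held-out evaluation set
train_df = pd.read_parquet("nbs/initial_samples.parquet")
unlabeled_df = pd.read_parquet("nbs/unlabeled_samples.parquet")
eval_df = pd.read_parquet("nbs/evaluation_samples.parquet")
```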
 
  ```python
  from active_vision import ActiveLearner
@@ -102,6 +112,13 @@ al.add_to_train_set(labeled_df, output_filename="active_labeled")
 
  Repeat the process until the model is good enough. Use the dataset to train a larger model and deploy.
 
+ > [!TIP]
+ > For the toy dataset, I got to about 93% accuracy on the evaluation set with 200+ labeled images. The best performing model on the [leaderboard](https://github.com/fastai/imagenette) got 95.11% accuracy training on all 9469 labeled images.
+ >
+ > This took me about 6 iterations of relabeling. Each iteration took about 5 minutes to complete, including labeling and model training (resnet18). See the [notebook](./nbs/04_relabel_loop.ipynb) for more details.
+ >
+ > But using the dataset of 200+ images, I trained a more capable model (convnext_small_in22k) and got 99.3% accuracy on the evaluation set. See the [notebook](./nbs/05_retrain_larger.ipynb) for more details.
+
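Since `timm>=1.0.13` is now a dependency (see the METADATA diff above), the larger backbone can be instantiated through timm. This is a hedged sketch: the exact way active-vision wires the model in may differ, and recent timm releases expose this checkpoint under the name `convnext_small.fb_in22k`:

```python
import timm

# 10 classes for the imagenette toy example; pretrained ImageNet-22k weights
model = timm.create_model("convnext_small.fb_in22k", pretrained=True, num_classes=10)
```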
  ## Workflow
  There are two workflows for active learning at the edge that we can use, depending on the availability of labeled data.
 
@@ -109,10 +126,10 @@ There are two workflows for active learning at the edge that we can use dependin
  If we have no labeled data, we can use active learning to iteratively improve the model and build a labeled dataset.
 
  1. Load a small proxy model.
- 2. Label an initial dataset.
+ 2. Label an initial dataset. If none exists, you'll have to label some images yourself.
  3. Train the proxy model on the labeled dataset.
  4. Run inference on the unlabeled dataset.
- 5. Evaluate the performance of the proxy model on the unlabeled dataset.
+ 5. Evaluate the performance of the proxy model.
  6. Is the model good enough?
     - Yes: Save the proxy model and the dataset.
     - No: Select the most informative images to label using active learning (see the sketch after this list).
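A minimal sketch of one pass through this loop. Only `add_to_train_set` appears verbatim in the snippet earlier in this README; `load_model`, `train`, `predict`, `evaluate`, and `sample_uncertain` are hypothetical method names standing in for whatever the library actually exposes:

```python
from active_vision import ActiveLearner

al = ActiveLearner()
al.load_model("resnet18")           # hypothetical: load a small proxy model
al.train(train_df)                  # hypothetical: train on the labeled seed set
pred_df = al.predict(unlabeled_df)  # hypothetical: inference on the unlabeled pool
al.evaluate(eval_df)                # hypothetical: measure accuracy on the eval set

# hypothetical: pick the most informative images to label next
uncertain_df = al.sample_uncertain(pred_df, num_samples=10)

# After labeling, fold the newly labeled rows back in and repeat;
# add_to_train_set is taken verbatim from the README snippet above.
labeled_df = uncertain_df  # placeholder: assume labels were filled in
al.add_to_train_set(labeled_df, output_filename="active_labeled")
```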
@@ -164,7 +181,7 @@ graph TD
  ```
 
 
- ## Methodology
+ <!-- ## Methodology
  To test out the workflows, we will use the [imagenette dataset](https://huggingface.co/datasets/frgfm/imagenette), but the approach is applicable to any dataset.
 
  Imagenette is a subset of the ImageNet dataset with 10 classes. It also has an existing leaderboard which we can use to evaluate the performance of the models.
 
@@ -215,4 +232,4 @@ After the first iteration we got 94.57% accuracy on the validation set. See the
  > [!TIP]
  > | Train Epochs | Number of Images | Validation Accuracy | Source |
  > |--------------|-----------------|----------------------|------------------|
- > | 10 | 200 | 94.57% | First relabeling [notebook](./nbs/03_retrain_model.ipynb) |
+ > | 10 | 200 | 94.57% | First relabeling [notebook](./nbs/03_retrain_model.ipynb) | -->
active_vision-0.0.4.dist-info/RECORD ADDED
@@ -0,0 +1,7 @@
+ active_vision/__init__.py,sha256=XITukjUU49hPFzxCzmxqJAUWh3YE8sWQzmyZ5bVra88,43
+ active_vision/core.py,sha256=0aXDI5Gpj0Spk7TSIxJf8aJDDBgZh0-jkWdYyZ1Zric,10713
+ active_vision-0.0.4.dist-info/LICENSE,sha256=xx0jnfkXJvxRnG63LTGOxlggYnIysveWIZ6H3PNdCrQ,11357
+ active_vision-0.0.4.dist-info/METADATA,sha256=WlvtrzUy8m2nr8izUuTtysdQXO4ZjCO9vGWt2i_GMUI,10421
+ active_vision-0.0.4.dist-info/WHEEL,sha256=In9FTNxeP60KnTkGw7wk6mJPYd_dQSjEZmXdBdMCI-8,91
+ active_vision-0.0.4.dist-info/top_level.txt,sha256=7qUQvccN2UU63z5S9vrgJmqK-8sFGrtpf1e9Z86nihE,14
+ active_vision-0.0.4.dist-info/RECORD,,
active_vision-0.0.3.dist-info/RECORD DELETED
@@ -1,7 +0,0 @@
- active_vision/__init__.py,sha256=hZp8jB284ByY44Q5cdwTt9Zz5n4QWXnz0OexpEE9muk,43
- active_vision/core.py,sha256=0aXDI5Gpj0Spk7TSIxJf8aJDDBgZh0-jkWdYyZ1Zric,10713
- active_vision-0.0.3.dist-info/LICENSE,sha256=xx0jnfkXJvxRnG63LTGOxlggYnIysveWIZ6H3PNdCrQ,11357
- active_vision-0.0.3.dist-info/METADATA,sha256=g629Kn07n4ZXOOX5cW1nPQK1IR9Mm5vW_z7RqxdwKgY,9385
- active_vision-0.0.3.dist-info/WHEEL,sha256=In9FTNxeP60KnTkGw7wk6mJPYd_dQSjEZmXdBdMCI-8,91
- active_vision-0.0.3.dist-info/top_level.txt,sha256=7qUQvccN2UU63z5S9vrgJmqK-8sFGrtpf1e9Z86nihE,14
- active_vision-0.0.3.dist-info/RECORD,,