libreyolo 0.1.0__tar.gz

@@ -0,0 +1,27 @@
MIT License

Copyright (c) 2024 Libre YOLO Contributors

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

---

IMPORTANT: The weights in the `weights/` directory are NOT covered by this MIT License.
They are licensed under AGPL-3.0. See `weights/LICENSE_NOTICE.txt` for details.
@@ -0,0 +1,17 @@
# Exclude model weights and generated artifacts
prune weights
prune runs
prune examples/runs
prune temporary
prune media
prune notebooks
prune tests/output

# Exclude binary/large files globally
global-exclude *.pt
global-exclude *.onnx
global-exclude *.jpg
global-exclude *.png
global-exclude *.webp
global-exclude *.mp4
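The `global-exclude` directives above use fnmatch-style patterns, so every `.pt`, `.onnx`, image, and video file is dropped from the sdist regardless of where it sits in the tree. As a rough sanity check (a sketch, not part of the package), the same patterns can be evaluated with Python's standard `fnmatch` module:

```python
from fnmatch import fnmatch

# The extension patterns from the MANIFEST.in above
EXCLUDE = ["*.pt", "*.onnx", "*.jpg", "*.png", "*.webp", "*.mp4"]

def is_excluded(path: str) -> bool:
    """Return True if a path matches any global-exclude pattern."""
    # fnmatch's '*' also matches '/', so extension patterns apply anywhere in the tree
    return any(fnmatch(path, pat) for pat in EXCLUDE)

print(is_excluded("weights/libreyolo8n.pt"))  # True
print(is_excluded("libreyolo/factory.py"))    # False
```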
@@ -0,0 +1,284 @@
Metadata-Version: 2.4
Name: libreyolo
Version: 0.1.0
Summary: Libre YOLO - An open source YOLO library with MIT license.
Author: LibreYOLO Team
License: MIT
Project-URL: Homepage, https://github.com/Libre-YOLO
Project-URL: Repository, https://github.com/Libre-YOLO/libreyolo
Requires-Python: >=3.10
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: numpy>=1.19.0
Requires-Dist: Pillow>=8.0.0
Requires-Dist: torch>=1.7.0
Requires-Dist: PyYAML>=6.0
Requires-Dist: matplotlib>=3.3.0
Requires-Dist: requests>=2.25.0
Provides-Extra: convert
Requires-Dist: ultralytics; extra == "convert"
Dynamic: license-file

# Libre YOLO

Libre YOLO is an open-source, MIT-licensed implementation of YOLO object detection models. It provides a clean, independent codebase for training and inference, designed to be free from restrictive licensing for the software itself.

> **Note:** While this codebase is MIT licensed, pre-trained weights converted from other repositories (like Ultralytics) may inherit their original licenses (often AGPL-3.0). Please check the license of the specific weights you use.

## Features

- 🚀 **Supported Models:** Full support for **YOLOv8** and **YOLOv11** architectures.
- 📦 **Unified API:** Simple, consistent interface for loading and using different YOLO versions.
- 🛠️ **Training Engine:** Built-in support for training models on custom datasets.
- ⚖️ **MIT License:** Permissive licensing for the codebase, making it suitable for commercial and research integration.
- 🔄 **Weight Conversion:** Tools to convert weights from Ultralytics format to LibreYOLO.

## Installation

You can install Libre YOLO directly from the source:

```bash
git clone https://github.com/Libre-YOLO/libreyolo.git
cd libreyolo
pip install -e .
```

To include the dependencies for weight conversion:

```bash
uv sync --extra convert
# or, with pip:
pip install -e ".[convert]"
```

## Testing

We have two types of tests:

**Unit Tests** (fast, no weights needed)
- Check that functions work correctly without loading real model weights
- Run automatically when you type `make test` or `pytest`
- Run in seconds and catch bugs quickly

**Integration Tests** (slower, need real weights)
- Load real model weights and test the full pipeline
- Run with `make test_integration` or `pytest -m integration`
- Take longer but verify everything works end-to-end

**Quick Commands:**
```bash
# Using the Makefile (requires uv)
make test             # Run fast unit tests (default)
make test_integration # Run slower integration tests (needs weights)

# Or run pytest directly (if you have the dependencies installed)
pytest                # Run fast unit tests
pytest -m integration # Run integration tests
```

**Why two types?** Unit tests are fast, so you can run them constantly while coding. Integration tests are slower but make sure everything works together with real models.

## Quick Start

### Inference

Libre YOLO provides a unified factory to load models. It automatically detects the model version (v8 or v11) from the weights.

```python
from libreyolo import LIBREYOLO

# Load a model (automatically detects v8 vs v11)
model = LIBREYOLO(model_path="weights/libreyolo8n.pt", size="n")

# Run inference
detections = model(image="media/test_image_1_creative_commons.jpg", save=True)

# Access results
for detection in detections:
    print(f"Detected {detection.class_name} with confidence {detection.confidence:.2f}")
```
+
99
+ ### Training
100
+
101
+ You can train models using the training module.
102
+
103
+ ```python
104
+ from libreyolo import LIBREYOLO8
105
+
106
+ model = LIBREYOLO8(model_path="libreyolo8n.pt", size="n")
107
+ detections = model(image="image.jpg", save=True)
108
+ ```
109
+
110
+ ## Feature Map Visualization
111
+
112
+ Save feature maps from different layers of the model for visualization and analysis.
113
+
114
+ ### Usage
115
+
116
+ ```python
117
+ # Save all layers
118
+ model = LIBREYOLO8(model_path="weights/libreyolo8n.pt", size="n", save_feature_maps=True)
119
+
120
+ # Save specific layers only
121
+ model = LIBREYOLO8(model_path="weights/libreyolo8n.pt", size="n",
122
+ save_feature_maps=["backbone_p1", "backbone_c2f2_P3", "neck_c2f21"])
123
+
124
+ # Get list of available layers
125
+ print(model.get_available_layer_names())
126
+ ```
127
+
128
+ Feature maps are saved to `runs/feature_maps/` directory.
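The on-disk layout and file format under `runs/feature_maps/` are not documented here, so the snippet below is a sketch that assumes the maps are written as PNG images; it collects whatever files are present and previews them with matplotlib (already a project dependency):

```python
from pathlib import Path

def list_feature_maps(root="runs/feature_maps", pattern="*.png"):
    """Return sorted paths of saved feature-map images, or [] if none exist."""
    root = Path(root)
    return sorted(root.rglob(pattern)) if root.exists() else []

maps = list_feature_maps()
if maps:
    import matplotlib.pyplot as plt
    import matplotlib.image as mpimg
    for path in maps[:4]:  # preview the first few maps
        plt.figure()
        plt.title(path.stem)
        plt.imshow(mpimg.imread(path))
    plt.show()
else:
    print("No feature maps found - run a model with save_feature_maps first.")
```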

## Documentation

For more detailed information, check out our documentation:

- [User Guide](docs/user_guide.md): Comprehensive guide on how to use Libre YOLO.
- [Fine-Tuning Guide](docs/fine_tuning.md): Instructions on how to fine-tune models on your custom datasets.
- [Model Layers Reference](docs/model_layers.md): A detailed list of available layers for feature map extraction.

## License

- **Code:** [MIT License](LICENSE)
- **Weights:** Use of converted weights must comply with their original licenses (typically AGPL-3.0 for Ultralytics models). See `weights/LICENSE_NOTICE.txt` for details.

## Possible Roadmap

- [ ] Unified Command Line Interface (CLI)
- [ ] Video & Webcam Inference Support
- [ ] Letterbox Resizing (Padding)
- [ ] Batch Inference Support
- [ ] Benchmark / Speed Test Module
- [ ] Training Data Augmentations
- [ ] TorchScript Export
- [ ] Model Summary & Info
- [ ] Result Filtering (Class/Size)
- [ ] Confusion Matrix Generation
- [ ] ONNX Inference
- [ ] Tiled Inference
- [x] YOLOv8 Support
- [x] YOLOv11 Support
- [x] Training Engine
- [x] Weight Conversion Tools
- [x] Feature Map Visualization
- [ ] Fine-tuning with Custom Datasets
- [ ] Publish to PyPI
- [ ] Export Formats (ONNX, TensorRT)
- [ ] Other Model Support (YOLOv10, YOLOv12, etc.)
- [ ] COCO Evaluation (COCOeval) for Accuracy Metrics
- [ ] Explainability Techniques
- [ ] Model Validation/Testing
- [ ] Model Quantization
- [ ] Web API / Server
- [ ] Documentation & Examples
- [ ] GPU memory optimization (gradient checkpointing)
- [ ] Multi-threaded batch processing
- [ ] Async inference support
- [ ] Model pruning (remove unnecessary weights)
- [ ] Knowledge distillation (smaller models)
- [ ] Mixed precision inference (FP16/BF16)
- [ ] TensorRT optimization pipeline
- [ ] CoreML export for Apple devices
- [ ] OpenVINO export for Intel hardware
- [ ] Model caching for faster repeated loads
- [ ] Real-time webcam detection
- [ ] Stream processing (RTSP/HTTP streams)
- [ ] Object tracking (track IDs across frames)
- [ ] Custom class names support
- [ ] Region of Interest (ROI) detection
- [ ] Confidence score calibration
- [ ] Multi-model ensemble inference
- [ ] Temporal smoothing for video
- [ ] Object counting per class
- [ ] Detection filtering by area/size
- [ ] YOLO format dataset export
- [ ] COCO format dataset export
- [ ] Pascal VOC format export
- [ ] LabelImg format compatibility
- [ ] CSV export with metadata
- [ ] JSON-LD export for structured data
- [ ] Database integration (SQLite/PostgreSQL)
- [ ] Image augmentation pipeline
- [ ] Dataset validation tools
- [ ] Annotation format converters
- [ ] Type hints throughout codebase
- [ ] Configuration file support (YAML/TOML)
- [ ] Progress bars for batch processing
- [ ] Logging system with levels
- [ ] Error handling improvements
- [ ] Model versioning system
- [ ] Checkpoint management
- [ ] Pre-commit hooks setup
- [ ] Code formatting (black/isort)
- [ ] Linting setup (pylint/flake8)
- [ ] Docker container image
- [ ] Conda package support
- [ ] GitHub Actions CI/CD (automated testing on push)
- [ ] Kubernetes deployment examples
- [ ] AWS Lambda function template
- [ ] Google Cloud Function template
- [ ] Azure Function template
- [ ] Edge device deployment guide


### Available Layers

#### LIBREYOLO8

| Layer | Description |
|-------|-------------|
| `backbone_p1` | First convolution |
| `backbone_p2` | Second convolution |
| `backbone_c2f1` | First C2F block |
| `backbone_p3` | Third convolution |
| `backbone_c2f2_P3` | C2F at P3 (Stride 8) |
| `backbone_p4` | Fourth convolution |
| `backbone_c2f3_P4` | C2F at P4 (Stride 16) |
| `backbone_p5` | Fifth convolution |
| `backbone_c2f4` | Fourth C2F block |
| `backbone_sppf_P5` | SPPF at P5 (Stride 32) |
| `neck_c2f21` | Neck C2F block 1 |
| `neck_c2f11` | Neck C2F block 2 |
| `neck_c2f12` | Neck C2F block 3 |
| `neck_c2f22` | Neck C2F block 4 |
| `head8_conv11` | Head8 box conv |
| `head8_conv21` | Head8 class conv |
| `head16_conv11` | Head16 box conv |
| `head16_conv21` | Head16 class conv |
| `head32_conv11` | Head32 box conv |
| `head32_conv21` | Head32 class conv |

#### LIBREYOLO11

| Layer | Description |
|-------|-------------|
| `backbone_p1` | First convolution |
| `backbone_p2` | Second convolution |
| `backbone_c2f1` | First C3k2 block |
| `backbone_p3` | Third convolution |
| `backbone_c2f2_P3` | C3k2 at P3 (Stride 8) |
| `backbone_p4` | Fourth convolution |
| `backbone_c2f3_P4` | C3k2 at P4 (Stride 16) |
| `backbone_p5` | Fifth convolution |
| `backbone_c2f4` | Fourth C3k2 block |
| `backbone_sppf` | SPPF block |
| `backbone_c2psa_P5` | C2PSA at P5 (Stride 32) |
| `neck_c2f21` | Neck C3k2 block 1 |
| `neck_c2f11` | Neck C3k2 block 2 |
| `neck_c2f12` | Neck C3k2 block 3 |
| `neck_c2f22` | Neck C3k2 block 4 |
| `head8_conv11` | Head8 box conv |
| `head8_conv21` | Head8 class conv |
| `head16_conv11` | Head16 box conv |
| `head16_conv21` | Head16 class conv |
| `head32_conv11` | Head32 box conv |
| `head32_conv21` | Head32 class conv |
@@ -0,0 +1,15 @@
"""
Libre YOLO - An open source YOLO library with MIT license.
"""
from importlib.metadata import version, PackageNotFoundError

from .v8.model import LIBREYOLO8
from .v11.model import LIBREYOLO11
from .factory import LIBREYOLO, create_model

try:
    __version__ = version("libreyolo")
except PackageNotFoundError:
    __version__ = "0.0.0.dev0"  # Fallback for editable installs without metadata

__all__ = ["LIBREYOLO", "LIBREYOLO8", "LIBREYOLO11", "create_model"]
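The version fallback above can be exercised on its own: `importlib.metadata.version` raises `PackageNotFoundError` when the distribution metadata is absent. This standalone sketch applies the same pattern to an arbitrary distribution name, reusing the fallback string from the module above:

```python
from importlib.metadata import version, PackageNotFoundError

def get_version(dist_name: str, fallback: str = "0.0.0.dev0") -> str:
    """Return the installed version of a distribution, or a fallback string."""
    try:
        return version(dist_name)
    except PackageNotFoundError:
        # No installed metadata (e.g. source checkout without pip install -e .)
        return fallback

print(get_version("surely-not-an-installed-package"))  # 0.0.0.dev0
```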