mozo 0.1.0__tar.gz → 0.2.0__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,6 +1,6 @@
  MIT License

- Copyright (c) 2025
+ Copyright (c) 2025 Datamarkin

  Permission is hereby granted, free of charge, to any person obtaining a copy
  of this software and associated documentation files (the "Software"), to deal
@@ -18,4 +18,4 @@ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
  AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
  LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
  OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
- SOFTWARE.
+ SOFTWARE.
mozo-0.2.0/PKG-INFO ADDED
@@ -0,0 +1,343 @@
Metadata-Version: 2.4
Name: mozo
Version: 0.2.0
Summary: Universal computer vision model serving library with dynamic model management and PixelFlow integration
Home-page: https://github.com/datamarkin/mozo
Author: Emrah NAZIF
Author-email: Emrah NAZIF <emrah@datamarkin.com>, Datamarkin <support@datamarkin.com>
License: MIT
Project-URL: Homepage, https://github.com/datamarkin/mozo
Project-URL: Bug Tracker, https://github.com/datamarkin/mozo/issues
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Requires-Python: >=3.9
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: fastapi
Requires-Dist: uvicorn
Requires-Dist: requests
Requires-Dist: opencv-python
Requires-Dist: transformers
Requires-Dist: torch
Requires-Dist: torchvision
Requires-Dist: Pillow
Requires-Dist: numpy
Requires-Dist: scipy
Requires-Dist: pixelflow
Requires-Dist: click
Dynamic: author
Dynamic: home-page
Dynamic: license-file
Dynamic: requires-python

# Mozo

Universal computer vision model server with automatic memory management and multi-framework support.

Mozo provides HTTP access to 25+ pre-configured models from Detectron2, HuggingFace Transformers, and other frameworks. Models load on demand and clean up automatically.

## Quick Start

```bash
pip install mozo
mozo start
```

The server starts on `http://localhost:8000` with all models available via the REST API.

### Examples

Object detection:
```bash
curl -X POST "http://localhost:8000/predict/detectron2/mask_rcnn_R_50_FPN_3x" \
  -F "file=@image.jpg"
```

Depth estimation:
```bash
curl -X POST "http://localhost:8000/predict/depth_anything/small" \
  -F "file=@image.jpg" --output depth.png
```

Vision-language Q&A:
```bash
curl -X POST "http://localhost:8000/predict/qwen2.5_vl/7b-instruct?prompt=What%20is%20in%20this%20image" \
  -F "file=@image.jpg"
```

List available models:
```bash
curl http://localhost:8000/models
```

## Features

- **25+ Pre-configured Models** - Detectron2, HuggingFace Transformers, custom adapters
- **Automatic Memory Management** - Lazy loading, usage tracking, automatic cleanup
- **Multi-Framework Support** - Unified API across different ML frameworks
- **PixelFlow Integration** - Detection models return a unified format for filtering and annotation
- **Thread-Safe** - Concurrent request handling with per-model locks
- **Production Ready** - Multiple workers, configurable timeouts, health checks

## Installation

```bash
# Basic installation
pip install mozo

# Framework dependencies (install as needed)
pip install transformers torch torchvision
pip install 'git+https://github.com/facebookresearch/detectron2.git'
```

## Available Models

### Detectron2 (22 variants)
Object detection, instance segmentation, and keypoint detection models trained on the COCO dataset.

Popular variants:
- `mask_rcnn_R_50_FPN_3x` - Instance segmentation
- `faster_rcnn_R_50_FPN_3x` - Object detection
- `faster_rcnn_X_101_32x8d_FPN_3x` - High-accuracy detection
- `keypoint_rcnn_R_50_FPN_3x` - Keypoint detection
- `retinanet_R_50_FPN_3x` - Single-stage detector

Output: JSON with bounding boxes, class names, and confidence scores (80 COCO classes)

### Depth Anything (3 variants)
Monocular depth estimation.

- `small` - Fastest, lowest memory
- `base` - Balanced performance
- `large` - Best accuracy

Output: PNG grayscale depth map

### Qwen2.5-VL (1 variant)
Vision-language understanding for VQA, captioning, and image analysis.

- `7b-instruct` - 7B-parameter model (requires 16GB+ RAM)

Output: JSON with text response

## Server

```bash
# Start with defaults (0.0.0.0:8000, auto-reload enabled)
mozo start

# Custom port
mozo start --port 8080

# Production mode with multiple workers
mozo start --workers 4

# Check version
mozo version
```

## API Reference

### Run Prediction
```http
POST /predict/{family}/{variant}
Content-Type: multipart/form-data
```

Parameters:
- `family` - Model family (e.g., `detectron2`, `depth_anything`, `qwen2.5_vl`)
- `variant` - Model variant (e.g., `mask_rcnn_R_50_FPN_3x`, `small`, `7b-instruct`)
- `file` - Image file
- `prompt` - Text prompt (VLM models only)

### Health Check
```http
GET /
```

Returns server status and loaded models.

### List Models
```http
GET /models
```

Returns all available model families and variants.

### List Loaded Models
```http
GET /models/loaded
```

Returns currently loaded models with usage information.

### Get Model Info
```http
GET /models/{family}/{variant}/info
```

Returns detailed information about a specific model variant.

### Unload Model
```http
POST /models/{family}/{variant}/unload
```

Manually unloads a model to free memory.

### Cleanup Inactive Models
```http
POST /models/cleanup?inactive_seconds=600
```

Unloads models that have been inactive for the specified duration (default: 600 seconds).

## How It Works

**Lazy Loading**
Models load on first request, not at server startup. This keeps startup time instant regardless of how many models are available.

**Smart Caching**
Loaded models stay in memory and are reused across requests. The first request is slower (model download + load); subsequent requests are fast.

**Usage Tracking**
Each model access updates a timestamp. Models inactive for 10+ minutes are automatically unloaded.

**Thread Safety**
Per-model locks ensure that only one thread loads a given model; other threads wait and then reuse the loaded instance.

Example flow:
```bash
# Server starts instantly (no models loaded)
mozo start

# First request loads the model
curl -X POST "http://localhost:8000/predict/detectron2/faster_rcnn_R_50_FPN_3x" -F "file=@test.jpg"
# Output: [ModelManager] Loading model: detectron2/faster_rcnn_R_50_FPN_3x...

# Subsequent requests reuse the loaded model
curl -X POST "http://localhost:8000/predict/detectron2/faster_rcnn_R_50_FPN_3x" -F "file=@test2.jpg"
# Output: [ModelManager] Model already loaded, reusing existing instance.

# After 10 minutes of inactivity, the model auto-unloads
# Output: [ModelManager] Cleanup: Unloaded 1 inactive model(s).
```

## Python SDK

For direct integration in Python applications:

```python
from mozo import ModelManager
import cv2

manager = ModelManager()
model = manager.get_model('detectron2', 'mask_rcnn_R_50_FPN_3x')

image = cv2.imread('image.jpg')
detections = model.predict(image)

# Filter results
high_confidence = detections.filter_by_confidence(0.8)

# Manual memory management
manager.unload_model('detectron2', 'mask_rcnn_R_50_FPN_3x')
manager.cleanup_inactive_models(inactive_seconds=300)
```

### PixelFlow Integration

Detection models return PixelFlow `Detections` objects, a unified format across all supported ML frameworks:

```python
# Works the same for Detectron2, YOLO, or custom models
detections = model.predict(image)

# Filter and annotate
import pixelflow as pf
filtered = detections.filter_by_confidence(0.8).filter_by_class_id([0, 2])
annotated = pf.annotate.box(image, filtered)
annotated = pf.annotate.label(annotated, filtered)

# Export
json_output = filtered.to_json()
```

Learn more: [PixelFlow](https://github.com/datamarkin/pixelflow)

## Configuration

### Environment Variables

```bash
# Enable MPS fallback for macOS (Apple Silicon)
export PYTORCH_ENABLE_MPS_FALLBACK=1

# Configure HuggingFace cache location
export HF_HOME=~/.cache/huggingface
```

### Memory Management

Models automatically unload after 10 minutes of inactivity. Adjust the threshold via the cleanup endpoint:

```bash
curl -X POST "http://localhost:8000/models/cleanup?inactive_seconds=300"
```

Or in Python:
```python
manager.cleanup_inactive_models(inactive_seconds=300)
```

## Extending Mozo

Add new models in three steps:

1. Create an adapter in `mozo/adapters/your_model.py`
2. Register it in `mozo/registry.py`
3. Use it via the HTTP or Python API

See [CLAUDE.md](CLAUDE.md) for a detailed implementation guide.

## Architecture

```
HTTP Request → FastAPI Server → ModelManager → ModelFactory → Adapter → Framework

Thread-safe cache
Usage tracking
Auto cleanup
```

Components:
- **Server** - FastAPI REST API
- **Manager** - Lifecycle management, caching, cleanup
- **Factory** - Dynamic adapter instantiation
- **Registry** - Central catalog of models
- **Adapters** - Framework-specific implementations

## Development

```bash
# Install in development mode
pip install -e .

# Start server with auto-reload
mozo start
```

## Documentation

- [Repository](https://github.com/datamarkin/mozo)
- [Issues](https://github.com/datamarkin/mozo/issues)

## License

MIT License
mozo-0.2.0/README.md ADDED
@@ -0,0 +1,305 @@
# Mozo

Universal computer vision model server with automatic memory management and multi-framework support.

Mozo provides HTTP access to 25+ pre-configured models from Detectron2, HuggingFace Transformers, and other frameworks. Models load on demand and clean up automatically.

## Quick Start

```bash
pip install mozo
mozo start
```

The server starts on `http://localhost:8000` with all models available via the REST API.

### Examples

Object detection:
```bash
curl -X POST "http://localhost:8000/predict/detectron2/mask_rcnn_R_50_FPN_3x" \
  -F "file=@image.jpg"
```

Depth estimation:
```bash
curl -X POST "http://localhost:8000/predict/depth_anything/small" \
  -F "file=@image.jpg" --output depth.png
```

Vision-language Q&A:
```bash
curl -X POST "http://localhost:8000/predict/qwen2.5_vl/7b-instruct?prompt=What%20is%20in%20this%20image" \
  -F "file=@image.jpg"
```

List available models:
```bash
curl http://localhost:8000/models
```

## Features

- **25+ Pre-configured Models** - Detectron2, HuggingFace Transformers, custom adapters
- **Automatic Memory Management** - Lazy loading, usage tracking, automatic cleanup
- **Multi-Framework Support** - Unified API across different ML frameworks
- **PixelFlow Integration** - Detection models return a unified format for filtering and annotation
- **Thread-Safe** - Concurrent request handling with per-model locks
- **Production Ready** - Multiple workers, configurable timeouts, health checks

## Installation

```bash
# Basic installation
pip install mozo

# Framework dependencies (install as needed)
pip install transformers torch torchvision
pip install 'git+https://github.com/facebookresearch/detectron2.git'
```

## Available Models

### Detectron2 (22 variants)
Object detection, instance segmentation, and keypoint detection models trained on the COCO dataset.

Popular variants:
- `mask_rcnn_R_50_FPN_3x` - Instance segmentation
- `faster_rcnn_R_50_FPN_3x` - Object detection
- `faster_rcnn_X_101_32x8d_FPN_3x` - High-accuracy detection
- `keypoint_rcnn_R_50_FPN_3x` - Keypoint detection
- `retinanet_R_50_FPN_3x` - Single-stage detector

Output: JSON with bounding boxes, class names, and confidence scores (80 COCO classes)

### Depth Anything (3 variants)
Monocular depth estimation.

- `small` - Fastest, lowest memory
- `base` - Balanced performance
- `large` - Best accuracy

Output: PNG grayscale depth map

### Qwen2.5-VL (1 variant)
Vision-language understanding for VQA, captioning, and image analysis.

- `7b-instruct` - 7B-parameter model (requires 16GB+ RAM)

Output: JSON with text response

## Server

```bash
# Start with defaults (0.0.0.0:8000, auto-reload enabled)
mozo start

# Custom port
mozo start --port 8080

# Production mode with multiple workers
mozo start --workers 4

# Check version
mozo version
```

## API Reference

### Run Prediction
```http
POST /predict/{family}/{variant}
Content-Type: multipart/form-data
```

Parameters:
- `family` - Model family (e.g., `detectron2`, `depth_anything`, `qwen2.5_vl`)
- `variant` - Model variant (e.g., `mask_rcnn_R_50_FPN_3x`, `small`, `7b-instruct`)
- `file` - Image file
- `prompt` - Text prompt (VLM models only)

### Health Check
```http
GET /
```

Returns server status and loaded models.
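
For example, to confirm the server is up from the shell (same host and port as the Quick Start):

```bash
curl http://localhost:8000/
```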

### List Models
```http
GET /models
```

Returns all available model families and variants.

### List Loaded Models
```http
GET /models/loaded
```

Returns currently loaded models with usage information.

### Get Model Info
```http
GET /models/{family}/{variant}/info
```

Returns detailed information about a specific model variant.

### Unload Model
```http
POST /models/{family}/{variant}/unload
```

Manually unloads a model to free memory.
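
For example, to inspect and then manually unload the Mask R-CNN variant used in the Quick Start:

```bash
# Get details for one variant
curl http://localhost:8000/models/detectron2/mask_rcnn_R_50_FPN_3x/info

# Free its memory without waiting for automatic cleanup
curl -X POST http://localhost:8000/models/detectron2/mask_rcnn_R_50_FPN_3x/unload
```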

### Cleanup Inactive Models
```http
POST /models/cleanup?inactive_seconds=600
```

Unloads models that have been inactive for the specified duration (default: 600 seconds).

## How It Works

**Lazy Loading**
Models load on first request, not at server startup. This keeps startup time instant regardless of how many models are available.

**Smart Caching**
Loaded models stay in memory and are reused across requests. The first request is slower (model download + load); subsequent requests are fast.

**Usage Tracking**
Each model access updates a timestamp. Models inactive for 10+ minutes are automatically unloaded.

**Thread Safety**
Per-model locks ensure that only one thread loads a given model; other threads wait and then reuse the loaded instance.
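
The sketch below illustrates the combination of lazy loading, per-model locks, and usage timestamps described above. It is a simplified illustration, not Mozo's actual internals: the `TinyManager` class and its `loader` callable are hypothetical stand-ins.

```python
import threading
import time


class TinyManager:
    """Illustrative cache with per-model locks and last-used timestamps."""

    def __init__(self, loader):
        self._loader = loader          # callable: (family, variant) -> model instance
        self._models = {}              # (family, variant) -> loaded model
        self._last_used = {}           # (family, variant) -> unix timestamp
        self._locks = {}               # (family, variant) -> threading.Lock
        self._registry_lock = threading.Lock()

    def _lock_for(self, key):
        # Create at most one lock per model key.
        with self._registry_lock:
            return self._locks.setdefault(key, threading.Lock())

    def get_model(self, family, variant):
        key = (family, variant)
        with self._lock_for(key):      # only one thread loads a given model
            if key not in self._models:
                self._models[key] = self._loader(family, variant)
            self._last_used[key] = time.time()
            return self._models[key]

    def cleanup_inactive_models(self, inactive_seconds=600):
        cutoff = time.time() - inactive_seconds
        for key in list(self._models):
            if self._last_used.get(key, 0) < cutoff:
                with self._lock_for(key):
                    self._models.pop(key, None)   # drop the reference; GC frees memory


# Toy usage: the loader here just returns a placeholder object.
manager = TinyManager(loader=lambda family, variant: f"{family}/{variant} model")
model = manager.get_model("detectron2", "faster_rcnn_R_50_FPN_3x")
```

Mozo's real `ModelManager` exposes the equivalent public operations shown in the Python SDK section below.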

Example flow:
```bash
# Server starts instantly (no models loaded)
mozo start

# First request loads the model
curl -X POST "http://localhost:8000/predict/detectron2/faster_rcnn_R_50_FPN_3x" -F "file=@test.jpg"
# Output: [ModelManager] Loading model: detectron2/faster_rcnn_R_50_FPN_3x...

# Subsequent requests reuse the loaded model
curl -X POST "http://localhost:8000/predict/detectron2/faster_rcnn_R_50_FPN_3x" -F "file=@test2.jpg"
# Output: [ModelManager] Model already loaded, reusing existing instance.

# After 10 minutes of inactivity, the model auto-unloads
# Output: [ModelManager] Cleanup: Unloaded 1 inactive model(s).
```

## Python SDK

For direct integration in Python applications:

```python
from mozo import ModelManager
import cv2

manager = ModelManager()
model = manager.get_model('detectron2', 'mask_rcnn_R_50_FPN_3x')

image = cv2.imread('image.jpg')
detections = model.predict(image)

# Filter results
high_confidence = detections.filter_by_confidence(0.8)

# Manual memory management
manager.unload_model('detectron2', 'mask_rcnn_R_50_FPN_3x')
manager.cleanup_inactive_models(inactive_seconds=300)
```

### PixelFlow Integration

Detection models return PixelFlow `Detections` objects, a unified format across all supported ML frameworks:

```python
# Works the same for Detectron2, YOLO, or custom models
detections = model.predict(image)

# Filter and annotate
import pixelflow as pf
filtered = detections.filter_by_confidence(0.8).filter_by_class_id([0, 2])
annotated = pf.annotate.box(image, filtered)
annotated = pf.annotate.label(annotated, filtered)

# Export
json_output = filtered.to_json()
```

Learn more: [PixelFlow](https://github.com/datamarkin/pixelflow)

## Configuration

### Environment Variables

```bash
# Enable MPS fallback for macOS (Apple Silicon)
export PYTORCH_ENABLE_MPS_FALLBACK=1

# Configure HuggingFace cache location
export HF_HOME=~/.cache/huggingface
```

### Memory Management

Models automatically unload after 10 minutes of inactivity. Adjust the threshold via the cleanup endpoint:

```bash
curl -X POST "http://localhost:8000/models/cleanup?inactive_seconds=300"
```

Or in Python:
```python
manager.cleanup_inactive_models(inactive_seconds=300)
```

## Extending Mozo

Add new models in three steps (a sketch of the first two follows the list):

1. Create an adapter in `mozo/adapters/your_model.py`
2. Register it in `mozo/registry.py`
3. Use it via the HTTP or Python API
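
A minimal sketch of steps 1 and 2. The actual adapter interface and registry schema are documented in CLAUDE.md, not here, so the `YourModelAdapter` class, the `weights` parameter, and the registry entry layout below are assumptions for illustration only:

```python
# Step 1 - mozo/adapters/your_model.py (hypothetical adapter shape)
import numpy as np


class YourModelAdapter:
    """Wraps a framework-specific model behind a single predict(image) call."""

    def __init__(self, weights="default"):
        self.weights = weights
        self.model = None  # a real adapter would load the framework model here

    def predict(self, image: np.ndarray):
        # A real adapter runs inference and converts the raw output into
        # PixelFlow Detections; this stub just returns an empty result.
        return []


# Step 2 - mozo/registry.py (illustrative entry; the real schema may differ)
MODEL_REGISTRY = {
    "your_model": {
        "adapter": YourModelAdapter,
        "variants": {"default": {"weights": "default"}},
    },
}
```

Once registered, the new variant would be reachable as `POST /predict/your_model/default` over HTTP or via `manager.get_model('your_model', 'default')` in Python.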

See [CLAUDE.md](CLAUDE.md) for a detailed implementation guide.

## Architecture

```
HTTP Request → FastAPI Server → ModelManager → ModelFactory → Adapter → Framework

Thread-safe cache
Usage tracking
Auto cleanup
```

Components:
- **Server** - FastAPI REST API
- **Manager** - Lifecycle management, caching, cleanup
- **Factory** - Dynamic adapter instantiation
- **Registry** - Central catalog of models
- **Adapters** - Framework-specific implementations

## Development

```bash
# Install in development mode
pip install -e .

# Start server with auto-reload
mozo start
```

## Documentation

- [Repository](https://github.com/datamarkin/mozo)
- [Issues](https://github.com/datamarkin/mozo/issues)

## License

MIT License
@@ -0,0 +1,54 @@
"""
Mozo - Universal Computer Vision Model Server

25+ pre-configured models ready to use. No deployment, no configuration.
Just run `mozo start` and make HTTP requests.

Quick Start:
    # From the terminal:
    $ mozo start

    # Then use any model via HTTP:
    $ curl -X POST "http://localhost:8000/predict/detectron2/mask_rcnn_R_50_FPN_3x" \\
          -F "file=@image.jpg"

Advanced Usage (Python SDK):
    >>> from mozo import ModelManager
    >>> import cv2
    >>>
    >>> manager = ModelManager()
    >>> model = manager.get_model('detectron2', 'mask_rcnn_R_50_FPN_3x')
    >>> image = cv2.imread('example.jpg')
    >>> detections = model.predict(image)  # Returns PixelFlow Detections
    >>> print(f"Found {len(detections)} objects")

Features:
    - 25+ models from Detectron2, HuggingFace Transformers
    - Zero deployment - no Docker, Kubernetes, or cloud needed
    - Automatic memory management with lazy loading
    - PixelFlow integration for unified detection format
    - Thread-safe concurrent access

For more information, see:
    - Documentation: https://github.com/datamarkin/mozo
"""

__version__ = "0.2.0"

# Public API exports
from mozo.manager import ModelManager
from mozo.registry import (
    MODEL_REGISTRY,
    get_available_families,
    get_available_variants,
    get_model_info,
)

__all__ = [
    "ModelManager",
    "MODEL_REGISTRY",
    "get_available_families",
    "get_available_variants",
    "get_model_info",
    "__version__",
]