detect-lib 0.1.0__tar.gz

LICENSE:

MIT License

Copyright (c) 2026 Surya Chand Rayala

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
PKG-INFO:

Metadata-Version: 2.4
Name: detect-lib
Version: 0.1.0
Summary: A modular **video object detection** toolkit with a clean **det-v1** JSON schema, pluggable backends, and optional model export.
Author-email: Surya Chand Rayala <suryachand2k1@gmail.com>
License: MIT License

Copyright (c) 2026 Surya Chand Rayala

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

Requires-Python: >=3.11
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: tqdm>=4.67.3
Requires-Dist: ultralytics>=8.4.14
Provides-Extra: export
Requires-Dist: onnx>=1.20.1; extra == "export"
Requires-Dist: onnxruntime>=1.24.1; extra == "export"
Provides-Extra: tf
Requires-Dist: tensorflow>=2.20.0; extra == "tf"
Provides-Extra: openvino
Requires-Dist: openvino>=2025.4.1; extra == "openvino"
Provides-Extra: coreml
Requires-Dist: coremltools>=9.0; extra == "coreml"
Dynamic: license-file
# detect

A modular **video object detection** toolkit with a clean **det-v1** JSON schema, pluggable backends, and optional model export.

Current backend:
- **Ultralytics YOLO** (bbox / pose / segmentation)

> By default, `detect` **does not write any files**. You opt in to saving JSON, frames, or annotated video via flags.

---

## Features

- Detect videos → det-v1 JSON (always returned in-memory; optionally saved)
- Optional artifacts:
  - `--json` → save `detections.json`
  - `--frames` → save extracted frames
  - `--save-video <name.mp4>` → save annotated video
- YOLO tasks:
  - `yolo_bbox` (boxes)
  - `yolo_pose` (boxes + keypoints)
  - `yolo_seg` (boxes + polygons)
- Model registry keys:
  - pass `--weights yolo26n` / `yolo26n-seg` / `yolo26n-pose` (or a local path / URL)
- Exports:
  - export to formats like `onnx`, `engine`, `tflite`, `openvino`, `coreml`, etc. (depending on toolchain)

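This README does not spell out the det-v1 schema itself. Going only by the fields the Python example below actually reads (`schema_version` and a `frames` list), a minimal payload might look like the sketch here; everything inside each frame entry is an illustrative guess, not the real schema:

```python
# Illustrative sketch only: the real det-v1 schema likely has more fields.
# "schema_version" and "frames" are the keys this README uses elsewhere;
# the per-frame shape ("index", "detections") is a hypothetical placeholder.
payload = {
    "schema_version": "det-v1",
    "frames": [
        {"index": 0, "detections": []},
    ],
}

print(payload["schema_version"], len(payload["frames"]))
```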
---

## Install with `pip` (PyPI)

> Use this if you want to install and use the tool without cloning the repo.

### Install

```bash
pip install detect-lib
```

### Optional dependencies (pip extras)

Export helpers (ONNX + ONNXRuntime):

```bash
pip install "detect-lib[export]"
```

TensorFlow export paths (heavy):

```bash
pip install "detect-lib[tf]"
```

OpenVINO export:

```bash
pip install "detect-lib[openvino]"
```

CoreML export (macOS):

```bash
pip install "detect-lib[coreml]"
```

---

## CLI usage (pip)

Global help:

```bash
python -m detect.cli.detect_video -h
python -m detect.cli.export_model -h
```

List detectors:

```bash
python -c "import detect; print(detect.available_detectors())"
```

List models (registry + installed):

```bash
python -m detect.cli.detect_video --list-models
python -m detect.cli.export_model --list-models
```

Basic command (detection):

```bash
python -m detect.cli.detect_video \
  --video <in.mp4> \
  --detector yolo_bbox \
  --weights yolo26n
```

Basic command (export):

```bash
python -m detect.cli.export_model \
  --weights yolo26n \
  --formats onnx \
  --out-dir models/exports --run-name y26_onnx
```

---

## Python usage (import)

You can use `detect` as a library after installing `detect-lib` with pip.

### Quick sanity check

```bash
python -c "import detect; print(detect.available_detectors())"
```

### Run detection from a Python file

Create `run_detect.py`:

```python
from detect import detect_video

res = detect_video(
    video="in.mp4",
    detector="yolo_bbox",
    weights="yolo26n",
)

payload = res.payload
print(payload["schema_version"], len(payload["frames"]))
print(res.paths)  # populated only if you enable saving artifacts
```

Run:

```bash
python run_detect.py
```
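Because `detect` writes nothing to disk by default, persisting the returned payload is up to you. A minimal sketch using only the standard library; the dummy dict stands in for `res.payload` so the snippet runs without a video:

```python
import json
from pathlib import Path

def save_payload(payload: dict, path: str) -> Path:
    """Write a det-v1 payload dict to a JSON file and return its path."""
    out = Path(path)
    out.write_text(json.dumps(payload, indent=2))
    return out

# In real use this would be res.payload from detect_video(...).
dummy_payload = {"schema_version": "det-v1", "frames": []}
saved = save_payload(dummy_payload, "detections_copy.json")
print(saved.name)  # detections_copy.json
```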

### Run model export from a Python file

> Requires the export extras for the target format (e.g., `pip install "detect-lib[export]"` for ONNX).

Create `run_export.py`:

```python
from detect import export_model

res = export_model(
    weights="yolo26n",
    formats=["onnx"],
    imgsz=640,
    out_dir="models/exports",
    run_name="y26_onnx_py",
)

print("run_dir:", res["run_dir"])
print("artifacts:")
for p in res["artifacts"]:
    print(" -", p)
```

Run it:

```bash
python run_export.py
```
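Since `res["artifacts"]` is a flat list of exported file paths, picking out a particular format is a one-liner with `pathlib`. A sketch; the sample paths are made up for illustration:

```python
from pathlib import Path

# Stand-ins for the paths export_model(...) would return in res["artifacts"].
artifacts = [
    "models/exports/y26_onnx_py/yolo26n.onnx",
    "models/exports/y26_onnx_py/extra_file.txt",
]

# Keep only the ONNX artifact(s) by file suffix.
onnx_files = [p for p in artifacts if Path(p).suffix == ".onnx"]
print(onnx_files)
```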

---

## Install from GitHub (uv)

Use this if you are developing locally or want reproducible project environments.

Install uv:
https://docs.astral.sh/uv/getting-started/installation/#standalone-installer

Verify:

```bash
uv --version
```

### Install dependencies

```bash
git clone https://github.com/Surya-Rayala/VideoPipeline-detection.git
cd VideoPipeline-detection
uv sync
```

### Optional dependencies (uv extras)

```bash
uv sync --extra export
uv sync --extra tf
uv sync --extra openvino
uv sync --extra coreml
```

---

## CLI usage (uv)

Global help:

```bash
uv run python -m detect.cli.detect_video -h
uv run python -m detect.cli.export_model -h
```

List detectors:

```bash
uv run python -c "import detect; print(detect.available_detectors())"
```

List models (registry + installed):

```bash
uv run python -m detect.cli.detect_video --list-models
uv run python -m detect.cli.export_model --list-models
```

Basic command (detection):

```bash
uv run python -m detect.cli.detect_video \
  --video <in.mp4> \
  --detector yolo_bbox \
  --weights yolo26n
```

Basic command (export):

```bash
uv run python -m detect.cli.export_model \
  --weights yolo26n \
  --formats onnx \
  --out-dir models/exports --run-name y26_onnx
```

---

### TensorRT / engine export and run notes (important)

Exporting an `engine` (TensorRT) model typically requires an NVIDIA GPU with CUDA and a version-compatible TensorRT installation.

Export a TensorRT engine:

```bash
uv run python -m detect.cli.export_model \
  --weights yolo26n \
  --formats engine \
  --device 0 \
  --out-dir models/exports --run-name y26_trt
```

Run / sanity-check the exported engine with this package (produces det-v1 output):

```bash
uv run python -m detect.cli.detect_video \
  --video in.mp4 \
  --detector yolo_bbox \
  --weights models/exports/y26_trt/yolo26n.engine \
  --device 0
```

Optionally save artifacts (JSON + annotated video):

```bash
uv run python -m detect.cli.detect_video \
  --video in.mp4 \
  --detector yolo_bbox \
  --weights models/exports/y26_trt/yolo26n.engine \
  --device 0 \
  --json \
  --save-video annotated_engine.mp4 \
  --out-dir out --run-name y26_trt_check
```
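With the flags above, the run directory would plausibly end up looking like this (layout inferred from `--out-dir`, `--run-name`, `--json`, and `--save-video`; the run may contain additional files):

```text
out/
└── y26_trt_check/
    ├── detections.json        # from --json
    └── annotated_engine.mp4   # from --save-video
```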

---

## CLI arguments

### Detection: `detect.cli.detect_video`

**Required**

- `--video <path>`: Path to the input video file.
- `--detector <name>`: Detector type (`yolo_bbox`, `yolo_pose`, or `yolo_seg`).
- `--weights <id|path|url>`: Registry key, local weights path, or URL to weights.

**Detection options**

- `--classes <ids>`: Filter to specific class IDs (comma/semicolon-separated); omit to keep all classes.
- `--conf-thresh <float>`: Confidence threshold for detections (default 0.25).
- `--imgsz <int>`: Inference image size used by the backend (default 640).
- `--device <str>`: Device selector (e.g., `auto`, `cpu`, `mps`, `0`).
- `--half`: Enable FP16 inference where supported.

**Artifact saving (opt-in)**

- `--json`: Save `detections.json` under the run directory.
- `--frames`: Save extracted frames as images under the run directory.
- `--save-video <name.mp4>`: Save an annotated video under the run directory.
- `--display`: Show a live visualization window while running (press `q` to quit).

**Output control**

- `--out-dir <dir>`: Output root directory, used only if saving artifacts (default `out`).
- `--run-name <name>`: Run folder name inside `out-dir` (auto-derived if omitted).

**Model registry / downloads**

- `--models-dir <dir>`: Directory where models are stored/downloaded (default `models`).
- `--no-download`: Disable automatic download for registry keys/URLs.

**Misc**

- `--no-progress`: Disable progress bar output.
- `--list-models`: Print registry + installed models, then exit.

---

### Export: `detect.cli.export_model`

**Required**

- `--weights <id|path|url>`: Registry key, local weights path, or URL to weights.

**Export options**

- `--formats <list>`: Comma/semicolon-separated export formats (default `onnx`).
- `--imgsz <int|H,W>`: Export image size as an int or `H,W` pair (default 640).
- `--device <str>`: Export device selector (e.g., `cpu`, `mps`, `0`).
- `--half`: Enable FP16 export where supported.
- `--int8`: Enable INT8 quantization (format/toolchain-dependent).
- `--data <yaml>`: Dataset YAML for INT8 calibration (when required).
- `--fraction <float>`: Fraction of the dataset used for calibration (default 1.0).
- `--dynamic`: Enable dynamic shapes where supported.
- `--batch <int>`: Export batch size (default 1).
- `--opset <int>`: ONNX opset version (ONNX only).
- `--simplify`: Simplify the ONNX graph (ONNX only).
- `--workspace <int>`: TensorRT workspace size in GB (TensorRT only).
- `--nms`: Add NMS to the exported model when supported by the format/backend.

**Output control**

- `--out-dir <dir>`: Output root directory for exports (default `models/exports`).
- `--run-name <name>`: Export run folder name inside `out-dir`.

**Model registry / downloads**

- `--models-dir <dir>`: Directory where models are stored/downloaded (default `models`).
- `--no-download`: Disable automatic download for registry keys/URLs.

**Misc**

- `--list-models`: Print registry + installed models, then exit.

---

## License

MIT License. See `LICENSE`.