labelr 0.11.0__tar.gz → 0.11.1__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (44)
  1. {labelr-0.11.0/src/labelr.egg-info → labelr-0.11.1}/PKG-INFO +2 -2
  2. {labelr-0.11.0 → labelr-0.11.1}/README.md +1 -1
  3. {labelr-0.11.0 → labelr-0.11.1}/pyproject.toml +1 -1
  4. labelr-0.11.1/src/labelr/annotate.py +57 -0
  5. {labelr-0.11.0 → labelr-0.11.1}/src/labelr/apps/label_studio.py +54 -60
  6. {labelr-0.11.0 → labelr-0.11.1}/src/labelr/config.py +1 -1
  7. {labelr-0.11.0 → labelr-0.11.1/src/labelr.egg-info}/PKG-INFO +2 -2
  8. labelr-0.11.0/src/labelr/annotate.py +0 -108
  9. {labelr-0.11.0 → labelr-0.11.1}/LICENSE +0 -0
  10. {labelr-0.11.0 → labelr-0.11.1}/setup.cfg +0 -0
  11. {labelr-0.11.0 → labelr-0.11.1}/src/labelr/__init__.py +0 -0
  12. {labelr-0.11.0 → labelr-0.11.1}/src/labelr/__main__.py +0 -0
  13. {labelr-0.11.0 → labelr-0.11.1}/src/labelr/apps/__init__.py +0 -0
  14. {labelr-0.11.0 → labelr-0.11.1}/src/labelr/apps/datasets.py +0 -0
  15. {labelr-0.11.0 → labelr-0.11.1}/src/labelr/apps/directus.py +0 -0
  16. {labelr-0.11.0 → labelr-0.11.1}/src/labelr/apps/evaluate.py +0 -0
  17. {labelr-0.11.0 → labelr-0.11.1}/src/labelr/apps/google_batch.py +0 -0
  18. {labelr-0.11.0 → labelr-0.11.1}/src/labelr/apps/hugging_face.py +0 -0
  19. {labelr-0.11.0 → labelr-0.11.1}/src/labelr/apps/train.py +0 -0
  20. {labelr-0.11.0 → labelr-0.11.1}/src/labelr/apps/typer_description.py +0 -0
  21. {labelr-0.11.0 → labelr-0.11.1}/src/labelr/check.py +0 -0
  22. {labelr-0.11.0 → labelr-0.11.1}/src/labelr/dataset_features.py +0 -0
  23. {labelr-0.11.0 → labelr-0.11.1}/src/labelr/evaluate/__init__.py +0 -0
  24. {labelr-0.11.0 → labelr-0.11.1}/src/labelr/evaluate/object_detection.py +0 -0
  25. {labelr-0.11.0 → labelr-0.11.1}/src/labelr/export/__init__.py +0 -0
  26. {labelr-0.11.0 → labelr-0.11.1}/src/labelr/export/classification.py +0 -0
  27. {labelr-0.11.0 → labelr-0.11.1}/src/labelr/export/common.py +0 -0
  28. {labelr-0.11.0 → labelr-0.11.1}/src/labelr/export/llm.py +0 -0
  29. {labelr-0.11.0 → labelr-0.11.1}/src/labelr/export/object_detection.py +0 -0
  30. {labelr-0.11.0 → labelr-0.11.1}/src/labelr/google_genai.py +0 -0
  31. {labelr-0.11.0 → labelr-0.11.1}/src/labelr/main.py +0 -0
  32. {labelr-0.11.0 → labelr-0.11.1}/src/labelr/project_config.py +0 -0
  33. {labelr-0.11.0 → labelr-0.11.1}/src/labelr/sample/__init__.py +0 -0
  34. {labelr-0.11.0 → labelr-0.11.1}/src/labelr/sample/classification.py +0 -0
  35. {labelr-0.11.0 → labelr-0.11.1}/src/labelr/sample/common.py +0 -0
  36. {labelr-0.11.0 → labelr-0.11.1}/src/labelr/sample/llm.py +0 -0
  37. {labelr-0.11.0 → labelr-0.11.1}/src/labelr/sample/object_detection.py +0 -0
  38. {labelr-0.11.0 → labelr-0.11.1}/src/labelr/types.py +0 -0
  39. {labelr-0.11.0 → labelr-0.11.1}/src/labelr/utils.py +0 -0
  40. {labelr-0.11.0 → labelr-0.11.1}/src/labelr.egg-info/SOURCES.txt +0 -0
  41. {labelr-0.11.0 → labelr-0.11.1}/src/labelr.egg-info/dependency_links.txt +0 -0
  42. {labelr-0.11.0 → labelr-0.11.1}/src/labelr.egg-info/entry_points.txt +0 -0
  43. {labelr-0.11.0 → labelr-0.11.1}/src/labelr.egg-info/requires.txt +0 -0
  44. {labelr-0.11.0 → labelr-0.11.1}/src/labelr.egg-info/top_level.txt +0 -0
@@ -1,6 +1,6 @@
 Metadata-Version: 2.4
 Name: labelr
-Version: 0.11.0
+Version: 0.11.1
 Summary: A command-line tool to manage labeling tasks with Label Studio.
 Requires-Python: >=3.10
 Description-Content-Type: text/markdown
@@ -137,7 +137,7 @@ where `PROJECT_ID` is the ID of the project you created.
 To accelerate annotation, you can pre-annotate the images with an object detection model. We support three pre-annotation backends:
 
 - `ultralytics`: use your own model or [Yolo-World](https://docs.ultralytics.com/models/yolo-world/), a zero-shot model that can detect any object using a text description of the object. You can specify the path or the name of the model with the `--model-name` option. If no model name is provided, the `yolov8x-worldv2.pt` model (Yolo-World) is used.
-- `ultralytics_sam3`: use [SAM3](https://docs.ultralytics.com/models/sam-3/), another zero-shot model. We advice to use this backend, as it's the most accurate. The `--model-name` option is ignored when this backend is used.
+- `ultralytics_sam3`: use [SAM3](https://docs.ultralytics.com/models/sam-3/), another zero-shot model. We advice to use this backend, as it's the most accurate. The `--model` option is ignored when this backend is used.
 - `robotoff`: the ML backend of Open Food Facts (specific to Open Food Facts projects).
 
 When using `ultralytics` or `ultralytics_sam3`, make sure you installed the labelr package with the `ultralytics` extra.
@@ -107,7 +107,7 @@ where `PROJECT_ID` is the ID of the project you created.
 To accelerate annotation, you can pre-annotate the images with an object detection model. We support three pre-annotation backends:
 
 - `ultralytics`: use your own model or [Yolo-World](https://docs.ultralytics.com/models/yolo-world/), a zero-shot model that can detect any object using a text description of the object. You can specify the path or the name of the model with the `--model-name` option. If no model name is provided, the `yolov8x-worldv2.pt` model (Yolo-World) is used.
-- `ultralytics_sam3`: use [SAM3](https://docs.ultralytics.com/models/sam-3/), another zero-shot model. We advice to use this backend, as it's the most accurate. The `--model-name` option is ignored when this backend is used.
+- `ultralytics_sam3`: use [SAM3](https://docs.ultralytics.com/models/sam-3/), another zero-shot model. We advice to use this backend, as it's the most accurate. The `--model` option is ignored when this backend is used.
 - `robotoff`: the ML backend of Open Food Facts (specific to Open Food Facts projects).
 
 When using `ultralytics` or `ultralytics_sam3`, make sure you installed the labelr package with the `ultralytics` extra.
@@ -1,6 +1,6 @@
 [project]
 name = "labelr"
-version = "0.11.0"
+version = "0.11.1"
 description = "A command-line tool to manage labeling tasks with Label Studio."
 readme = "README.md"
 requires-python = ">=3.10"
@@ -0,0 +1,57 @@
+import random
+import string
+
+from openfoodfacts.utils import get_logger
+
+from ultralytics import Results
+
+logger = get_logger(__name__)
+
+
+def format_annotation_results_from_ultralytics(
+    results: Results,
+    labels: list[str],
+    label_mapping: dict[str, str] | None = None,
+) -> list[dict]:
+    annotation_results = []
+    orig_height, orig_width = results.orig_shape
+    boxes = results.boxes
+    classes = boxes.cls.tolist()
+    for i, xyxyn in enumerate(boxes.xyxyn):
+        # Boxes found.
+        if len(xyxyn) > 0:
+            xyxyn = xyxyn.tolist()
+            x1 = xyxyn[0] * 100
+            y1 = xyxyn[1] * 100
+            x2 = xyxyn[2] * 100
+            y2 = xyxyn[3] * 100
+            width = x2 - x1
+            height = y2 - y1
+            label_id = int(classes[i])
+            label_name = labels[label_id]
+            if label_mapping:
+                label_name = label_mapping.get(label_name, label_name)
+            annotation_results.append(
+                {
+                    "id": generate_id(),
+                    "type": "rectanglelabels",
+                    "from_name": "label",
+                    "to_name": "image",
+                    "original_width": orig_width,
+                    "original_height": orig_height,
+                    "image_rotation": 0,
+                    "value": {
+                        "rotation": 0,
+                        "x": x1,
+                        "y": y1,
+                        "width": width,
+                        "height": height,
+                        "rectanglelabels": [label_name],
+                    },
+                },
+            )
+    return annotation_results
+
+
+def generate_id(length: int = 10) -> str:
+    return "".join(random.choices(string.ascii_letters + string.digits, k=length))
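The new `format_annotation_results_from_ultralytics` maps normalized `xyxyn` boxes (0.0–1.0) onto the percent-based rectangles Label Studio expects. A minimal, dependency-free sketch of just that conversion (the box values, label, and image size below are made up for illustration):

```python
import random
import string


def generate_id(length: int = 10) -> str:
    # Random alphanumeric ID for a Label Studio result entry.
    return "".join(random.choices(string.ascii_letters + string.digits, k=length))


def to_label_studio_box(xyxyn, label_name, orig_width, orig_height):
    # Convert one normalized [x1, y1, x2, y2] box into a percent-based
    # "rectanglelabels" result, as the new annotate.py does per detection.
    x1, y1, x2, y2 = (v * 100 for v in xyxyn)
    return {
        "id": generate_id(),
        "type": "rectanglelabels",
        "from_name": "label",
        "to_name": "image",
        "original_width": orig_width,
        "original_height": orig_height,
        "image_rotation": 0,
        "value": {
            "rotation": 0,
            "x": x1,
            "y": y1,
            "width": x2 - x1,
            "height": y2 - y1,
            "rectanglelabels": [label_name],
        },
    }


box = to_label_studio_box([0.1, 0.2, 0.5, 0.8], "product", 640, 480)
print(box["value"]["x"], box["value"]["width"])
```

Note that `x`, `y`, `width`, and `height` are percentages of the original image, not pixels; `original_width`/`original_height` carry the pixel dimensions separately.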
@@ -254,8 +254,20 @@ def annotate_from_prediction(
 
 class PredictorBackend(enum.StrEnum):
     ultralytics = enum.auto()
+    ultralytics_yolo_world = enum.auto()
     ultralytics_sam3 = enum.auto()
-    robotoff = enum.auto()
+
+
+YOLO_WORLD_MODELS = (
+    "yolov8s-world.pt",
+    "yolov8s-worldv2.pt",
+    "yolov8m-world.pt",
+    "yolov8m-worldv2.pt",
+    "yolov8l-world.pt",
+    "yolov8l-worldv2.pt",
+    "yolov8x-world.pt",
+    "yolov8x-worldv2.pt",
+)
 
 
 @app.command()
@@ -274,10 +286,16 @@ def add_prediction(
     model_name: Annotated[
         str | None,
         typer.Option(
-            help="Name of the object detection model to run (for Robotoff server) or "
-            "of the Ultralytics zero-shot model to run. If using Ultralytics backend "
-            "and no model name is provided, the default is yolov8x-worldv2.pt. "
-            "If using ultralytics_sam3 backend, the model name is ignored."
+            "--model",
+            help="Name or path of the object detection model to run. How this is used depends "
+            "on the backend. If using `ultralytics` backend, the option is required and is the "
+            "name of the model to download from the Ultralytics model zoo or the path to a local "
+            "model. "
+            "If using `ultralytics_yolo_world` backend, this is optional and is the name of the "
+            "`yolo-world` model to download from the Ultralytics model zoo or the path to a local "
+            "model (Defaults: `yolov8x-worldv2.pt`). "
+            "If using `ultralytics_sam3` backend, this option is ignored, as there is a single model. "
+            "The model is downloaded automatically from Hugging Face.",
         ),
     ] = None,
     skip_existing: Annotated[
@@ -286,17 +304,10 @@
             help="Skip tasks that already have predictions",
         ),
     ] = True,
-    server_url: Annotated[
-        str | None,
-        typer.Option(
-            help="The Robotoff URL if the backend is robotoff. If the backend is "
-            "different than robotoff, this option is ignored."
-        ),
-    ] = "https://robotoff.openfoodfacts.org",
     backend: Annotated[
         PredictorBackend,
         typer.Option(
-            help="The prediction backend, possible options are: `ultralytics`, `ultralytics_sam3` and `robotoff`"
+            help="The prediction backend, possible options are: `ultralytics` or `ultralytics_sam3`"
         ),
     ] = PredictorBackend.ultralytics,
     labels: Annotated[
@@ -320,9 +331,8 @@
     threshold: Annotated[
         float | None,
         typer.Option(
-            help="Confidence threshold for selecting bounding boxes. The default is 0.3 "
-            "for robotoff backend, 0.1 for ultralytics backend and 0.25 for "
-            "ultralytics_sam3 backend."
+            help="Confidence threshold for selecting bounding boxes. The default is 0.1 for "
+            "ultralytics backend and 0.25 for ultralytics_sam3 backend."
         ),
     ] = None,
     max_det: Annotated[int, typer.Option(help="Maximum numbers of detections")] = 300,
@@ -356,13 +366,10 @@
     import tqdm
     from huggingface_hub import hf_hub_download
    from label_studio_sdk.client import LabelStudio
-    from openfoodfacts.utils import get_image_from_url, http_session
+    from openfoodfacts.utils import get_image_from_url
     from PIL import Image
 
-    from ..annotate import (
-        format_annotation_results_from_robotoff,
-        format_annotation_results_from_ultralytics,
-    )
+    from ..annotate import format_annotation_results_from_ultralytics
 
     check_label_studio_api_key(api_key)
 
@@ -373,6 +380,13 @@
     if dry_run:
         logger.info("** Dry run mode enabled **")
 
+    if backend == PredictorBackend.ultralytics and not Path(model_name).is_file():
+        raise typer.BadParameter(
+            f"Model file '{model_name}' not found. When the backend is `ultralytics` "
+            "and the --model does not refer to a YOLO-World model, --model is expected "
+            "to be a local Ultralytics model file (`.pt`)."
+        )
+
     logger.info(
         "backend: %s, model_name: %s, labels: %s, threshold: %s, label mapping: %s",
         backend,
@@ -383,24 +397,26 @@
     )
     ls = LabelStudio(base_url=label_studio_url, api_key=api_key)
 
-    if backend == PredictorBackend.ultralytics:
+    if backend in (
+        PredictorBackend.ultralytics,
+        PredictorBackend.ultralytics_yolo_world,
+    ):
         from ultralytics import YOLO, YOLOWorld
 
-        if model_name is None:
-            model_name = "yolov8x-worldv2.pt"
-
         if labels is None:
-            raise typer.BadParameter("Labels are required for Ultralytics backend")
+            raise typer.BadParameter("Labels are required for `ultralytics` backend")
 
         if threshold is None:
             threshold = 0.1
 
-        model = YOLO(model_name)
-        if hasattr(model, "set_classes"):
-            model = typing.cast(YOLOWorld, model)
+        if backend == PredictorBackend.ultralytics:
+            model = YOLO(model_name)
+        elif backend == PredictorBackend.ultralytics_yolo_world:
+            if model_name is None:
+                model_name = "yolov8x-worldv2.pt"
+            model = YOLOWorld(model_name)
             model.set_classes(labels)
-        else:
-            logger.warning("The model does not support setting classes directly.")
+
     elif backend == PredictorBackend.ultralytics_sam3:
         from ultralytics.models.sam import SAM3SemanticPredictor
 
@@ -424,13 +440,6 @@
         if imgsz is not None:
             overrides["imgsz"] = imgsz
         model = SAM3SemanticPredictor(overrides=overrides)
-    elif backend == PredictorBackend.robotoff:
-        if server_url is None:
-            raise typer.BadParameter("--server-url is required for Robotoff backend")
-
-        if threshold is None:
-            threshold = 0.1
-        server_url = server_url.rstrip("/")
     else:
         raise typer.BadParameter(f"Unsupported backend: {backend}")
 
@@ -439,12 +448,14 @@
     ):
         if not (skip_existing and task.total_predictions > 0):
             image_url = task.data["image_url"]
-            image = typing.cast(
-                Image.Image,
-                get_image_from_url(image_url, error_raise=error_raise),
-            )
+            image = get_image_from_url(image_url, error_raise=error_raise)
+            if image is None:
+                continue
             min_score = None
-            if backend == PredictorBackend.ultralytics:
+            if backend in (
+                PredictorBackend.ultralytics,
+                PredictorBackend.ultralytics_yolo_world,
+            ):
                 predict_kwargs = {
                     "conf": threshold,
                     "max_det": max_det,
@@ -463,23 +474,6 @@
                     results, labels, label_mapping_dict
                 )
                 min_score = min(results.boxes.conf.tolist(), default=None)
-            elif backend == PredictorBackend.robotoff:
-                r = http_session.get(
-                    f"{server_url}/api/v1/images/predict",
-                    params={
-                        "models": model_name,
-                        "output_image": 0,
-                        "image_url": image_url,
-                    },
-                )
-                r.raise_for_status()
-                response = r.json()
-                label_studio_result = format_annotation_results_from_robotoff(
-                    response["predictions"][model_name],
-                    image.width,
-                    image.height,
-                    label_mapping_dict,
-                )
             if dry_run:
                 logger.info("image_url: %s", image_url)
                 logger.info("result: %s", label_studio_result)
@@ -3,7 +3,7 @@ from pathlib import Path
 from pydantic import BaseModel, Field
 import os
 
-CONFIG_PATH = Path("~").expanduser() / ".config/.labelr/config.json"
+CONFIG_PATH = Path("~").expanduser() / ".config/labelr/config.json"
 
 
 # validate_assignment allows to validate the model everytime it is updated
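The one-character fix above drops the stray dot before `labelr` in the config directory. A quick sketch of how the corrected path is built (the resulting path depends on the user's home directory):

```python
from pathlib import Path

# Expand "~" to the user's home directory, then append the config location.
CONFIG_PATH = Path("~").expanduser() / ".config/labelr/config.json"
print(CONFIG_PATH)  # e.g. /home/alice/.config/labelr/config.json
```

With the old value, the file lived under `~/.config/.labelr/`, a hidden directory inside `.config` that XDG-aware tools would not expect.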
@@ -1,6 +1,6 @@
 Metadata-Version: 2.4
 Name: labelr
-Version: 0.11.0
+Version: 0.11.1
 Summary: A command-line tool to manage labeling tasks with Label Studio.
 Requires-Python: >=3.10
 Description-Content-Type: text/markdown
@@ -137,7 +137,7 @@ where `PROJECT_ID` is the ID of the project you created.
 To accelerate annotation, you can pre-annotate the images with an object detection model. We support three pre-annotation backends:
 
 - `ultralytics`: use your own model or [Yolo-World](https://docs.ultralytics.com/models/yolo-world/), a zero-shot model that can detect any object using a text description of the object. You can specify the path or the name of the model with the `--model-name` option. If no model name is provided, the `yolov8x-worldv2.pt` model (Yolo-World) is used.
-- `ultralytics_sam3`: use [SAM3](https://docs.ultralytics.com/models/sam-3/), another zero-shot model. We advice to use this backend, as it's the most accurate. The `--model-name` option is ignored when this backend is used.
+- `ultralytics_sam3`: use [SAM3](https://docs.ultralytics.com/models/sam-3/), another zero-shot model. We advice to use this backend, as it's the most accurate. The `--model` option is ignored when this backend is used.
 - `robotoff`: the ML backend of Open Food Facts (specific to Open Food Facts projects).
 
 When using `ultralytics` or `ultralytics_sam3`, make sure you installed the labelr package with the `ultralytics` extra.
@@ -1,108 +0,0 @@
-import random
-import string
-
-from openfoodfacts.types import JSONType
-from openfoodfacts.utils import get_logger
-
-logger = get_logger(__name__)
-
-
-def format_annotation_results_from_robotoff(
-    objects: list[JSONType],
-    image_width: int,
-    image_height: int,
-    label_mapping: dict[str, str] | None = None,
-) -> list[JSONType]:
-    """Format annotation results from Robotoff prediction endpoint into
-    Label Studio format."""
-    annotation_results = []
-    for object_ in objects:
-        bounding_box = object_["bounding_box"]
-        label_name = object_["label"]
-
-        if label_mapping:
-            label_name = label_mapping.get(label_name, label_name)
-
-        # These are relative coordinates (between 0.0 and 1.0)
-        y_min, x_min, y_max, x_max = bounding_box
-        # Make sure the coordinates are within the image boundaries,
-        # and convert them to percentages
-        y_min = min(max(0, y_min), 1.0) * 100
-        x_min = min(max(0, x_min), 1.0) * 100
-        y_max = min(max(0, y_max), 1.0) * 100
-        x_max = min(max(0, x_max), 1.0) * 100
-        x = x_min
-        y = y_min
-        width = x_max - x_min
-        height = y_max - y_min
-
-        id_ = generate_id()
-        annotation_results.append(
-            {
-                "id": id_,
-                "type": "rectanglelabels",
-                "from_name": "label",
-                "to_name": "image",
-                "original_width": image_width,
-                "original_height": image_height,
-                "image_rotation": 0,
-                "value": {
-                    "rotation": 0,
-                    "x": x,
-                    "y": y,
-                    "width": width,
-                    "height": height,
-                    "rectanglelabels": [label_name],
-                },
-            },
-        )
-    return annotation_results
-
-
-def format_annotation_results_from_ultralytics(
-    results: "Results",
-    labels: list[str],
-    label_mapping: dict[str, str] | None = None,
-) -> list[dict]:
-    annotation_results = []
-    orig_height, orig_width = results.orig_shape
-    boxes = results.boxes
-    classes = boxes.cls.tolist()
-    for i, xyxyn in enumerate(boxes.xyxyn):
-        # Boxes found.
-        if len(xyxyn) > 0:
-            xyxyn = xyxyn.tolist()
-            x1 = xyxyn[0] * 100
-            y1 = xyxyn[1] * 100
-            x2 = xyxyn[2] * 100
-            y2 = xyxyn[3] * 100
-            width = x2 - x1
-            height = y2 - y1
-            label_id = int(classes[i])
-            label_name = labels[label_id]
-            if label_mapping:
-                label_name = label_mapping.get(label_name, label_name)
-            annotation_results.append(
-                {
-                    "id": generate_id(),
-                    "type": "rectanglelabels",
-                    "from_name": "label",
-                    "to_name": "image",
-                    "original_width": orig_width,
-                    "original_height": orig_height,
-                    "image_rotation": 0,
-                    "value": {
-                        "rotation": 0,
-                        "x": x1,
-                        "y": y1,
-                        "width": width,
-                        "height": height,
-                        "rectanglelabels": [label_name],
-                    },
-                },
-            )
-    return annotation_results
-
-
-def generate_id(length: int = 10) -> str:
-    return "".join(random.choices(string.ascii_letters + string.digits, k=length))