lamindb 0.49.3__py3-none-any.whl → 0.50.1__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,16 +1,16 @@
 Metadata-Version: 2.1
 Name: lamindb
-Version: 0.49.3
-Summary: Open-source data lake & feature store for biology.
+Version: 0.50.1
+Summary: Open-source data platform for biology.
 Author-email: Lamin Labs <laminlabs@gmail.com>
 Requires-Python: >=3.8
 Description-Content-Type: text/markdown
 Classifier: Programming Language :: Python :: 3.8
 Classifier: Programming Language :: Python :: 3.9
 Classifier: Programming Language :: Python :: 3.10
-Requires-Dist: lnschema_core==0.43.3
-Requires-Dist: lamindb_setup==0.49.4
-Requires-Dist: lamin_utils==0.9.3
+Requires-Dist: lnschema_core==0.44.3
+Requires-Dist: lamindb_setup==0.49.6
+Requires-Dist: lamin_utils==0.9.6
 Requires-Dist: erdiagram>=0.1.2
 Requires-Dist: rapidfuzz
 Requires-Dist: pydantic[dotenv]
@@ -24,7 +24,7 @@ Requires-Dist: botocore==1.29.76 ; extra == "aws"
 Requires-Dist: urllib3<2 ; extra == "aws"
 Requires-Dist: boto3==1.26.76 ; extra == "aws"
 Requires-Dist: fsspec[s3]==2023.5.0 ; extra == "aws"
-Requires-Dist: lnschema_bionty==0.29.0 ; extra == "bionty"
+Requires-Dist: lnschema_bionty==0.29.2 ; extra == "bionty"
 Requires-Dist: pre-commit ; extra == "dev"
 Requires-Dist: nox ; extra == "dev"
 Requires-Dist: laminci>=0.3 ; extra == "dev"
@@ -39,7 +39,7 @@ Requires-Dist: fsspec[gs]==2023.5.0 ; extra == "gcp"
 Requires-Dist: nbproject==0.9.0 ; extra == "jupyter"
 Requires-Dist: nbproject==0.9.0 ; extra == "nbproject"
 Requires-Dist: psycopg2-binary ; extra == "postgres"
-Requires-Dist: zarr ; extra == "zarr"
+Requires-Dist: zarr>=2.16.0 ; extra == "zarr"
 Project-URL: Home, https://github.com/laminlabs/lamindb
 Provides-Extra: aws
 Provides-Extra: bionty
@@ -58,27 +58,27 @@ Provides-Extra: zarr

 # LaminDB

-Open-source data lake & feature store for biology.
+Open-source data platform for biology.

 ```{warning}

-Public beta: Currently only recommended for collaborators as we still make breaking changes.
+Public beta: Close to having converged a stable API, but some breaking changes might still occur.

-Update 2023-06-14:
-
-- We completed a major migration from SQLAlchemy/SQLModel to Django, available in 0.42.0.
-- The last version before the migration is 0.41.2.
 ```

 ## Introduction

-LaminDB is an open-source Python library to:
+LaminDB is an open-source Python library to manage files & datasets while
+
+- tracking provenance across pipelines, notebooks & app uploads
+- validating & linking data batches using biological registries & ontologies
+
+You can

-- Manage files & datasets while tracking provenance across pipelines, notebooks & app uploads.
-- Manage biological registries, ontologies, features & schemas.
-- Enhance integrity through built-in data validation and [idempotent](https://lamin.ai/docs/faq/idempotency), [ACID](https://lamin.ai/docs/faq/acid) operations.
+- Manage features & labels schema-less or schema-full.
 - Query, search, look up, save, load and stream with one API.
 - Collaborate across a mesh of LaminDB instances.
+- Enjoy [idempotent](https://lamin.ai/docs/faq/idempotency) & [ACID](https://lamin.ai/docs/faq/acid) operations.

 LaminApp is a data management app built on LaminDB. If LaminDB ~ git, LaminApp ~ GitHub.

@@ -101,17 +101,22 @@ import lamindb as ln
 # import lnschema_bionty as lb # optional, for bionty schema
 ```

-### Manage data objects
+### Manage files and data objects

 ```python
-# Store a DataFrame object
+# Store and register a file
+ln.File("s3://my-bucket/images/image001.jpg").save() # or a local path
+
+# Store and register a DataFrame object
 df = pd.DataFrame({"feat1": [1, 2], "feat2": [3, 4]}) # AnnData works, too
 ln.File(df, description="Data batch 1").save() # create a File object and save/upload it

 # To find it, if you don't have specific metadata in mind, run a search
 ln.File.search("batch 1")
-# Or filter (under-the-hood, you have the full power of SQL to query)
+# Or query (under-the-hood, you have the full power of SQL to query)
 file = ln.File.filter(description="Data batch 1").one() # get exactly one result
+# Query by `key` (the relative path within your storage)
+ln.File.filter(key__startswith="images/").df() # get query results as a DataFrame

 # Load a file back into memory
 df = file.load()
@@ -119,27 +124,19 @@ df = file.load()
 backed = file.backed() # currently works for AnnData, zarr, HDF5, not yet for DataFrame
 ```

-### Manage files
-
-```python
-# Store a file
-file = ln.File("s3://my-bucket/images/image001.jpg") # or a local path
-file.save() # register the file
-
-# Query by `key` (the relative path within your storage) and load into memory
-file.filter(key__startswith="images/").df() # all files in folder "images/" in default storage
-```
-
 ### Auto-complete categoricals and search

 ```python
-# When you're unsure about spellings, use a lookup object:
-users = ln.User.lookup()
+# When you're unsure about spellings, use a lookup object
+users = ln.User.lookup() # by default uses handle field
 ln.File.filter(created_by=users.lizlemon)
+users = ln.User.lookup("name") # a lookup object of the name field
+users.liz_lemon

 # Or search
-ln.User.search("liz lemon", field="name")
+ln.User.search("liz lemon") # by default searches the handle field
 user = ln.User.search("liz lemon", return_queryset=True).first() # grab the top search result as a record
+ln.User.search("liz lemon", field=["name", "handle"]) # search against multiple fields
 ```

 ### Track & query data lineage
@@ -161,11 +158,12 @@ file.view_lineage()
 Track a Jupyter Notebook:

 ```python
+# my-analysis.ipynb
 ln.track() # auto-detect & save notebook metadata
 ln.File("my_artifact.parquet").save() # this file is now aware that it was saved in this notebook
 ```

-When you query the file, later on, you'll know from which notebook it came:
+When you query the file, later on, you'll know which notebook it came from:

 ```python
 file = ln.File.filter(description="my_artifact.parquet").one() # query for a file
@@ -314,7 +312,7 @@ We do _not_ store any of your data, but only basic metadata about you (email add

 LaminDB builds semantics of R&D and biology onto well-established tools:

-- SQLite & Postgres for SQL databases using Django ORM (previously: SQLModel)
+- SQLite & Postgres for SQL databases using Django Registry (previously: SQLModel)
 - S3, GCP & local storage for object storage using fsspec
 - Configurable storage formats: pyarrow, anndata, zarr, etc.
 - Biological knowledge sources & ontologies: see [Bionty](https://lamin.ai/docs/bionty)
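The metadata hunks above mostly tighten dependency pins (e.g. `lnschema_core` 0.43.3 → 0.44.3, `zarr` gains a `>=2.16.0` floor). As a stdlib-only sketch of how such `Requires-Dist` lines decompose into name, version specifier, and extra marker — the helper name and regex are ours, not part of lamindb or packaging tooling:

```python
import re

def parse_requires_dist(line: str):
    """Split a METADATA Requires-Dist line into (name, specifier, extra).

    Hypothetical helper for illustration; real tooling should use the
    `packaging` library instead of a regex.
    """
    m = re.match(
        r'Requires-Dist:\s*([A-Za-z0-9_.\[\]-]+?)\s*'   # distribution name (may carry extras)
        r'([=<>!~][^;]*)?\s*'                           # optional version specifier
        r'(?:;\s*extra\s*==\s*"(\w+)")?\s*$',           # optional extra marker
        line,
    )
    if m is None:
        raise ValueError(f"not a Requires-Dist line: {line!r}")
    name, spec, extra = m.groups()
    return name, (spec or "").strip(), extra

print(parse_requires_dist('Requires-Dist: lnschema_core==0.44.3'))
print(parse_requires_dist('Requires-Dist: zarr>=2.16.0 ; extra == "zarr"'))
```

This makes it easy to diff two METADATA files pin-by-pin rather than line-by-line.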
@@ -0,0 +1,47 @@
+lamindb/__init__.py,sha256=oE18o68Zgr88aED7EbueNHqZ2JQApmGBUwiEcfqeXl0,3807
+lamindb/_context.py,sha256=bc-OpnQ5g_7ltx_qdj0Z1YYuNgeqm87T2jInjZX3iOY,17837
+lamindb/_dataset.py,sha256=w5Byx-9IwVsi2rZx1lKy-HuxTHq3f_Q-MF7t3zb0Msw,4884
+lamindb/_delete.py,sha256=jvzDM_hbraBvOMhulibRziyDhoM1mqIX_MSLeC42c1M,1917
+lamindb/_feature.py,sha256=B3U9q-v_Bu9o3xrl2ScYefMEkNnDrlWpLzL4sdN4izQ,5479
+lamindb/_feature_set.py,sha256=XBIGI2vuqECG9VegEzaiQm7iBNfqvpCq_WwJnL7h4ig,8492
+lamindb/_file.py,sha256=0W8mSiYhjl7Epvb87W3SJVFAgpEBlxZnS5r7zNgMyhY,35984
+lamindb/_filter.py,sha256=Rf5RSkglIhJhSdk3AIPfc83F5NIfZrdCZDpgcYNj5KY,351
+lamindb/_from_values.py,sha256=mezvC4d_b8ndRF8NNPl1vEj3O70qLv92b9ZjII5Qa3E,10532
+lamindb/_label.py,sha256=-gLa8WvD1mycruDEB67Zx_oewhO85KvOwVZ3SsKvSvs,1544
+lamindb/_logger.py,sha256=Q9ugUnZE4EwH0N6qFMG2HlILmu-SArnEEy-nafTPjRg,47
+lamindb/_query_manager.py,sha256=RSdPsANk4WBlKbu1DIoDto8HdSqCKbo8Q4CGyRqLd2U,1983
+lamindb/_query_set.py,sha256=yyzrU0ACqG0iebME5bah6ycj4HkvZ8_JigtyATHW3PA,9953
+lamindb/_registry.py,sha256=Jd0qb9sK1CO8ONMSN8MnF9UYofd-vvM1XszJQvNETNA,16486
+lamindb/_save.py,sha256=jC2a9JzQBZYVHHuq6NyIovFQI1Y6WT1q2C1dfiXKv_E,9216
+lamindb/_storage.py,sha256=a0ofo106NHNY2RBF3MB1PeeONZMc28aHFG2fbndn_80,246
+lamindb/_synonym.py,sha256=I7qL57F5a_Z3a_5hVIQKz5w55HGKEpgpBY9s-35Bg80,5870
+lamindb/_transform.py,sha256=K-Y37UqQPu06lsA3Jfnkyt5dssRpuMnpCDDHDbT40Z4,1251
+lamindb/_validate.py,sha256=W374jmKFvAy1wIfibz-C4sftxxtlWVwBHliIBy_YD1w,3606
+lamindb/_view.py,sha256=dxZ7BPrMAf0rJ8aKSprkQiO1O2OJsjtSaHBJfpKvVT4,2187
+lamindb/types.py,sha256=svg5S_aynuGfbEOsbmqkR_gF9d9YMzfOkcvGN37Rzvg,232
+lamindb/dev/__init__.py,sha256=K8cgi474J2uIxbnQZeeeMrsDcWIPdsRXwjIrWcJ_qaQ,656
+lamindb/dev/_data.py,sha256=M5AGGCAvw7e_cDooAWA8_hUQFjxAC62QPs-vEQFqpD4,7806
+lamindb/dev/_feature_manager.py,sha256=laJRqKiL8Qlm5Jq60RoXXX6xYwgyL2MPd6OgS8eXs6A,3526
+lamindb/dev/_settings.py,sha256=WLweWZNAvBquybQYUlYyIehIlv3t5llgYhyNpL0obdg,2926
+lamindb/dev/_view_parents.py,sha256=pF0qTZirLwzgWFtqzRe-KksqASHjpFWLRkNK-0GsyGU,8590
+lamindb/dev/exc.py,sha256=xJ0QCeineZccjYKq-W-RXhU6fJH6JyFLtDyHKZTOoyU,96
+lamindb/dev/hashing.py,sha256=oYHDSEjINsmDYgCLasnhh_VHbB1dLag27ufVUwDgLyQ,1385
+lamindb/dev/utils.py,sha256=LGdiW4k3GClLz65vKAVRkL6Tw-Gkx9DWAdez1jyA5bE,428
+lamindb/dev/datasets/__init__.py,sha256=0ObE_PmGsxOaCvMYZYkLQLo7zYD6Hr17Ixh53A-jqNs,1310
+lamindb/dev/datasets/_core.py,sha256=6GsgLPavWQPI0K7EIKJ5xq9JXKIxiF1ZNf_1GOABvD0,12426
+lamindb/dev/datasets/_fake.py,sha256=S8mNho-oSh1M9x9oOSsUBLLHmBAegsOLlFk6LnF81EA,942
+lamindb/dev/storage/__init__.py,sha256=S9BtWiqOj54Fiv213ThKqiyTVgEJpo91wVOkpzE1kEk,368
+lamindb/dev/storage/_anndata_sizes.py,sha256=0XVzA6AQeVGPaGPrhGusKyxFgFjeo3qSN29hxb8D5E8,993
+lamindb/dev/storage/_backed_access.py,sha256=PSyVDGwkaZ1xpo0JJXSQS0HJyR72QIec2hiuvlMIVBY,20126
+lamindb/dev/storage/_zarr.py,sha256=7W1Jos1QOOF3f41uML_arQoDTNPZVpRyP2m3SLWaCAo,2766
+lamindb/dev/storage/file.py,sha256=-pWYLfS5HDyAX-KyomyHnngmTX8PaD2WRs1zIbsrjs4,6988
+lamindb/dev/storage/object.py,sha256=x8sy23WqkGir2sg2c3AngQPgv0RbErEA6cpvVN0Zc4I,945
+lamindb/schema/__init__.py,sha256=PznznlFvbeNSZKpn1RS6Gv0JMXFkLmU2_ej_1hVLSTs,796
+lamindb/schema/_core.py,sha256=nWR3X_rNd1AbWw3naMiBi8ppAEpqIDyEYqM54feRB_s,766
+lamindb/setup/__init__.py,sha256=8-0F2C4Glx23-b8-D_1CBGgRBM5PppVhazhoXZYOLsg,275
+lamindb/setup/dev/__init__.py,sha256=iD0f2lx_Hgp-udkiPGal7si5waJSOgvnG6Id-g1mMOY,213
+lamindb-0.50.1.dist-info/entry_points.txt,sha256=MioM8vSpKwXxY3geNBwjo1wnwy1l15WjJYlI3lpKuZI,53
+lamindb-0.50.1.dist-info/LICENSE,sha256=xx0jnfkXJvxRnG63LTGOxlggYnIysveWIZ6H3PNdCrQ,11357
+lamindb-0.50.1.dist-info/WHEEL,sha256=EZbGkh7Ie4PoZfRQ8I0ZuP9VklN_TvcZ6DSE5Uar4z4,81
+lamindb-0.50.1.dist-info/METADATA,sha256=It6LNQR9QYgxitisJK1KcWhRfDhOq5CkcLpqpDhjLuY,12570
+lamindb-0.50.1.dist-info/RECORD,,
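Each RECORD line above follows the wheel format `path,sha256=<digest>,size`, where the digest is the urlsafe base64 encoding of the file's SHA-256 hash with the trailing `=` padding stripped. A minimal stdlib sketch of producing such a line (the helper name is ours):

```python
import base64
import hashlib

def record_entry(path: str, data: bytes) -> str:
    """Build a wheel RECORD line: path,sha256=<urlsafe-b64 digest, no padding>,size."""
    digest = hashlib.sha256(data).digest()
    # urlsafe alphabet (-, _) matches the hashes seen in RECORD; padding is dropped
    b64 = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return f"{path},sha256={b64},{len(data)}"

print(record_entry("lamindb/types.py", b"example file contents\n"))
```

This is why every hash in the listing is exactly 43 characters: a 32-byte digest base64-encodes to 44 characters, one of which is padding.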
@@ -1,237 +0,0 @@
-from collections import defaultdict
-from typing import Dict, List, Optional, Union
-
-import pandas as pd
-from lamin_utils import logger
-from lnschema_core.models import ORM, Dataset, Feature, FeatureSet, File, Label
-
-from ._queryset import QuerySet
-from ._save import save
-
-
-def validate_and_cast_feature(
-    feature: Union[str, Feature], records: List[ORM]
-) -> Feature:
-    if isinstance(feature, str):
-        feature_name = feature
-        feature = Feature.filter(name=feature_name).one_or_none()
-        if feature is None:
-            orm_types = set([record.__class__ for record in records])
-            feature = Feature(
-                name=feature_name, type="category", registries=list(orm_types)
-            )
-            logger.warning(f"Created & saved: {feature}")
-            feature.save()
-    return feature
-
-
-def create_features_df(
-    file: File, feature_sets: List[FeatureSet], exclude: bool = True
-):
-    features = []
-    for feature_set in feature_sets:
-        if exclude:
-            features_df = feature_set.features.exclude(registries__isnull=True).df()
-        else:
-            features_df = feature_set.features.df()
-        slots = file.feature_sets.through.objects.filter(
-            file=file, feature_set=feature_set
-        ).list("slot")
-        for slot in slots:
-            features_df["slot"] = slot
-            features.append(features_df)
-    features_df = pd.concat(features)
-    return features_df.sort_values(["slot", "registries"])
-
-
-def get_accessor_by_orm(host: Union[File, Dataset]) -> Dict:
-    dictionary = {
-        field.related_model.__get_name_with_schema__(): field.name
-        for field in host._meta.related_objects
-    }
-    dictionary["core.Feature"] = "features"
-    dictionary["core.Label"] = "labels"
-    return dictionary
-
-
-def get_feature_set_by_slot(host) -> Dict:
-    feature_set_links = host.feature_sets.through.objects.filter(file_id=host.id)
-    return {
-        feature_set_link.slot: FeatureSet.objects.get(
-            id=feature_set_link.feature_set_id
-        )
-        for feature_set_link in feature_set_links
-    }
-
-
-class FeatureManager:
-    """Feature manager."""
-
-    def __init__(self, host: Union[File, Dataset]):
-        self._host = host
-        self._feature_set_by_slot = get_feature_set_by_slot(host)
-        self._accessor_by_orm = get_accessor_by_orm(host)
-
-    def __repr__(self) -> str:
-        if len(self._feature_set_by_slot) > 0:
-            msg = "slots:\n"
-            for slot, feature_set in self._feature_set_by_slot.items():
-                msg += f"    {slot}: {feature_set}\n"
-            return msg
-        else:
-            return "No linked features."
-
-    def __getitem__(self, slot) -> QuerySet:
-        feature_set = self._feature_set_by_slot[slot]
-        orm_name = ".".join(feature_set.ref_field.split(".")[:2])
-        return getattr(feature_set, self._accessor_by_orm[orm_name]).all()
-
-    def get_labels(
-        self,
-        feature: Optional[Union[str, ORM]] = None,
-        mute: bool = False,
-        flat_names: bool = False,
-    ) -> Union[QuerySet, Dict[str, QuerySet], List]:
-        """Get labels given a feature."""
-        if isinstance(feature, str):
-            feature_name = feature
-            feature = Feature.filter(name=feature_name).one_or_none()
-            if feature is None:
-                raise ValueError("Feature doesn't exist")
-        if feature.registries is None:
-            raise ValueError("Feature does not have linked labels")
-        registries_to_check = feature.registries.split("|")
-        if len(registries_to_check) > 1 and not mute:
-            logger.warning("Labels come from multiple registries!")
-        qs_by_registry = {}
-        for registry in registries_to_check:
-            # currently need to distinguish between Label and non-Label, because
-            # we only have the feature information for Label
-            if registry == "core.Label":
-                links_to_labels = getattr(
-                    self._host, self._accessor_by_orm[registry]
-                ).through.objects.filter(file_id=self._host.id, feature_id=feature.id)
-                label_ids = [link.label_id for link in links_to_labels]
-                qs_by_registry[registry] = Label.objects.filter(id__in=label_ids)
-            else:
-                qs_by_registry[registry] = getattr(
-                    self._host, self._accessor_by_orm[registry]
-                ).all()
-        if flat_names:
-            # returns a flat list of names
-            from ._orm import get_default_str_field
-
-            values = []
-            for v in qs_by_registry.values():
-                values += v.list(get_default_str_field(v))
-            return values
-        if len(registries_to_check) == 1:
-            return qs_by_registry[registry]
-        else:
-            return qs_by_registry
-
-    def add_labels(
-        self,
-        records: Union[ORM, List[ORM], QuerySet],
-        feature: Optional[Union[str, ORM]] = None,
-    ) -> None:
-        """Add one or several labels and associate them with a feature."""
-        if isinstance(records, (QuerySet, QuerySet.__base__)):  # need to have both
-            records = records.list()
-        if isinstance(records, str) or not isinstance(records, List):
-            records = [records]
-        if isinstance(records[0], str):  # type: ignore
-            raise ValueError(
-                "Please pass a record (an ORM object), not a string, e.g., via: label"
-                f" = ln.Label(name='{records[0]}')"  # type: ignore
-            )
-        if self._host._state.adding:
-            raise ValueError("Please save the file or dataset before adding a label!")
-        feature = validate_and_cast_feature(feature, records)
-        records_by_feature_orm = defaultdict(list)
-        for record in records:
-            if feature is None:
-                error_msg = (
-                    "Please pass feature: add_labels(labels, feature='myfeature')"
-                )
-                record_feature = feature
-                if hasattr(record, "_feature"):
-                    record_feature = record._feature
-                if record_feature is None:
-                    raise ValueError(error_msg)
-                # TODO: refactor so that we don't call the following line
-                # repeatedly for the same feature
-                record_feature = validate_and_cast_feature(record_feature, [record])
-            else:
-                record_feature = feature
-            records_by_feature_orm[
-                (record_feature, record.__class__.__get_name_with_schema__())
-            ].append(record)
-        # ensure all labels are saved
-        save(records)
-        for (feature, orm_name), records in records_by_feature_orm.items():
-            getattr(self._host, self._accessor_by_orm[orm_name]).add(
-                *records, through_defaults={"feature_id": feature.id}
-            )
-        feature_set_links = self._host.feature_sets.through.objects.filter(
-            file_id=self._host.id
-        )
-        feature_set_ids = [link.feature_set_id for link in feature_set_links.all()]
-        # get all linked features of type Feature
-        feature_sets = FeatureSet.filter(id__in=feature_set_ids).all()
-        linked_features_by_slot = {
-            feature_set_links.filter(feature_set_id=feature_set.id)
-            .one()
-            .slot: feature_set.features.all()
-            for feature_set in feature_sets
-            if "core.Feature" in feature_set.ref_field
-        }
-        for (feature, orm_name), records in records_by_feature_orm.items():
-            feature = validate_and_cast_feature(feature, records)
-            logger.info(f"Linking feature {feature.name} to {orm_name}")
-            if feature.registries is None:
-                feature.registries = orm_name
-            elif orm_name not in feature.registries:
-                feature.registries += f"|{orm_name}"
-            feature.save()
-            # check whether we have to update the feature set that manages labels
-            # (Feature) to account for a new feature
-            found_feature = False
-            for _, linked_features in linked_features_by_slot.items():
-                if feature in linked_features:
-                    found_feature = True
-            if not found_feature:
-                if "ext" not in linked_features_by_slot:
-                    logger.info("Creating feature_set for slot 'ext' (external)")
-                    feature_set = FeatureSet([feature], modality="meta")
-                    feature_set.save()
-                    self.add_feature_set(feature_set, slot="ext")
-                else:
-                    feature_set = self._feature_set_by_slot["ext"]
-                    logger.info(
-                        f"Linking feature {feature.name} to feature set {feature_set}"
-                    )
-                    feature_set.features.add(feature)
-                    feature_set.n += 1
-                    feature_set.save()
-
-    def add_feature_set(self, feature_set: FeatureSet, slot: str):
-        """Add new feature set to a slot.
-
-        Args:
-            feature_set: `FeatureSet` A feature set object.
-            slot: `str` The access slot.
-        """
-        if self._host._state.adding:
-            raise ValueError(
-                "Please save the file or dataset before adding a feature set!"
-            )
-        feature_set.save()
-        link_record = self._host.feature_sets.through.objects.filter(
-            file=self._host, feature_set=feature_set, slot=slot
-        ).one_or_none()
-        if link_record is None:
-            self._host.feature_sets.through(
-                file=self._host, feature_set=feature_set, slot=slot
-            ).save()
-        self._feature_set_by_slot[slot] = feature_set
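The removed `add_labels` above groups incoming records under `(feature, registry)` keys with a `defaultdict` before linking them to the host in per-registry batches. That core pattern, as a dependency-free sketch — the `Record` type here is a hypothetical stand-in, not lamindb's ORM:

```python
from collections import defaultdict
from typing import NamedTuple

class Record(NamedTuple):
    """Hypothetical stand-in for an ORM record."""
    name: str
    registry: str  # e.g. "core.Label" or "bionty.CellType"

def group_by_feature_registry(records, feature: str):
    """Group records under (feature, registry) keys, mirroring add_labels."""
    grouped = defaultdict(list)
    for record in records:
        grouped[(feature, record.registry)].append(record)
    return grouped

records = [
    Record("T cell", "bionty.CellType"),
    Record("B cell", "bionty.CellType"),
    Record("batch1", "core.Label"),
]
grouped = group_by_feature_registry(records, feature="cell_type")
```

Grouping first means each registry's many-to-many table is touched once per batch rather than once per record. Per the new RECORD listing, this logic now lives in `lamindb/dev/_feature_manager.py`.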
@@ -1,43 +0,0 @@
-lamindb/__init__.py,sha256=WlX04-h-0nICmekfxd0IZdSzqnM6kBiYApNvsY7RDFw,2712
-lamindb/_context.py,sha256=RhzZXlfiNS4JRjTrVfZKSCM9lSl-PSzOFWSyYR-wU7c,17837
-lamindb/_dataset.py,sha256=w5Byx-9IwVsi2rZx1lKy-HuxTHq3f_Q-MF7t3zb0Msw,4884
-lamindb/_delete.py,sha256=ZtGl_JnLSM120JB5xwvE5LpL1HBYoBlKdrFQmpgbl5Y,1877
-lamindb/_feature.py,sha256=g7cisIpBpj7mpzw1xVfmUEfSFsBgIGLKpea5pVT-Mgs,5294
-lamindb/_feature_manager.py,sha256=HhsvFOo0FomB8ANC3OlSGV8vruC1ZX9fvAuuMJ0ChQg,9800
-lamindb/_feature_set.py,sha256=_ni45HTLwm-gh_LOU6bbt9vdXu0Qr1WszI0Q7gFc5oc,7218
-lamindb/_file.py,sha256=YdYcSBnp2d2M3IVLoyIKjEn-oUBfqs0q2gM_a3yLbXQ,35034
-lamindb/_filter.py,sha256=kljLx4mYXqJhyox-ULtRc2Lc6X-l1rya5TSIFSdd62k,325
-lamindb/_from_values.py,sha256=UtwbkPnY8ahJh5R4DRmHiHWCKa-cr-YmqLNs5IOGSNk,12124
-lamindb/_label.py,sha256=-gLa8WvD1mycruDEB67Zx_oewhO85KvOwVZ3SsKvSvs,1544
-lamindb/_logger.py,sha256=Q9ugUnZE4EwH0N6qFMG2HlILmu-SArnEEy-nafTPjRg,47
-lamindb/_manager.py,sha256=rM7OBGAkuq7HFfUvWrIY8O7AWt1O1-NRhJigkVlt0Ew,1895
-lamindb/_orm.py,sha256=TME3pr6Zs3ClahfaCc0Kaw0v9Z-QxTvnSF61HrC5w5o,23132
-lamindb/_queryset.py,sha256=mHAZR3ZIOPmumQIReoghYY_AhHnM8ymYYm6fhR1fR44,9713
-lamindb/_save.py,sha256=8BmPw0eUrTlkx3O5KyStTs6y0hsnXmPS49fConB2Fuw,9216
-lamindb/_storage.py,sha256=a0ofo106NHNY2RBF3MB1PeeONZMc28aHFG2fbndn_80,246
-lamindb/_transform.py,sha256=K-Y37UqQPu06lsA3Jfnkyt5dssRpuMnpCDDHDbT40Z4,1251
-lamindb/_view.py,sha256=jA1SfYqFtdKbbExBS2oy5tsXKxAn_ZDAOOJiICcKalo,2047
-lamindb/types.py,sha256=svg5S_aynuGfbEOsbmqkR_gF9d9YMzfOkcvGN37Rzvg,232
-lamindb/dev/__init__.py,sha256=JYmmgCbPdYNGL6J4aXxWd988su__uFAScRppvLsOzLo,431
-lamindb/dev/_settings.py,sha256=2mV4K29PZaf1FVM5XbMZD85_1wSvHc3iF20VSBRsFKY,2842
-lamindb/dev/_view_parents.py,sha256=9LuANKbU-03mfW8n5S8RD3gqAEdFteYRN7BCJSqoWYU,7844
-lamindb/dev/hashing.py,sha256=oYHDSEjINsmDYgCLasnhh_VHbB1dLag27ufVUwDgLyQ,1385
-lamindb/dev/utils.py,sha256=LGdiW4k3GClLz65vKAVRkL6Tw-Gkx9DWAdez1jyA5bE,428
-lamindb/dev/datasets/__init__.py,sha256=ZVnWS5LhUjzrIe_v81hlqoCzgi_gDiV-W9T_rnqTuXo,1215
-lamindb/dev/datasets/_core.py,sha256=NAjisahI6C7GaZfIvmaIl_MTgvin-1NJAjxSG-EZbUA,11767
-lamindb/dev/datasets/_fake.py,sha256=S8mNho-oSh1M9x9oOSsUBLLHmBAegsOLlFk6LnF81EA,942
-lamindb/dev/storage/__init__.py,sha256=S9BtWiqOj54Fiv213ThKqiyTVgEJpo91wVOkpzE1kEk,368
-lamindb/dev/storage/_anndata_sizes.py,sha256=0XVzA6AQeVGPaGPrhGusKyxFgFjeo3qSN29hxb8D5E8,993
-lamindb/dev/storage/_backed_access.py,sha256=Ns8ulfQclmwGQRqDOdq9GkQBFkV58EZ0vFHkTFJNflk,18714
-lamindb/dev/storage/_zarr.py,sha256=7W1Jos1QOOF3f41uML_arQoDTNPZVpRyP2m3SLWaCAo,2766
-lamindb/dev/storage/file.py,sha256=PKqccIV7t18TMOWvLYbwaN8HZMJoCCNUIKDdkB0vAMU,6463
-lamindb/dev/storage/object.py,sha256=x8sy23WqkGir2sg2c3AngQPgv0RbErEA6cpvVN0Zc4I,945
-lamindb/schema/__init__.py,sha256=PznznlFvbeNSZKpn1RS6Gv0JMXFkLmU2_ej_1hVLSTs,796
-lamindb/schema/_core.py,sha256=nWR3X_rNd1AbWw3naMiBi8ppAEpqIDyEYqM54feRB_s,766
-lamindb/setup/__init__.py,sha256=8-0F2C4Glx23-b8-D_1CBGgRBM5PppVhazhoXZYOLsg,275
-lamindb/setup/dev/__init__.py,sha256=iD0f2lx_Hgp-udkiPGal7si5waJSOgvnG6Id-g1mMOY,213
-lamindb-0.49.3.dist-info/entry_points.txt,sha256=MioM8vSpKwXxY3geNBwjo1wnwy1l15WjJYlI3lpKuZI,53
-lamindb-0.49.3.dist-info/LICENSE,sha256=xx0jnfkXJvxRnG63LTGOxlggYnIysveWIZ6H3PNdCrQ,11357
-lamindb-0.49.3.dist-info/WHEEL,sha256=EZbGkh7Ie4PoZfRQ8I0ZuP9VklN_TvcZ6DSE5Uar4z4,81
-lamindb-0.49.3.dist-info/METADATA,sha256=6nOPabopna1c2h0FdR-5YBa-egg7imgLuNX2eP_PZVY,12529
-lamindb-0.49.3.dist-info/RECORD,,