datajoint 0.14.0__tar.gz → 0.14.2__tar.gz
This diff shows the contents of two publicly released versions of this package as they appear in their respective public registries and is provided for informational purposes only.
Potentially problematic release: this version of datajoint might be problematic.
- {datajoint-0.14.0/datajoint.egg-info → datajoint-0.14.2}/PKG-INFO +2 -2
- datajoint-0.14.2/README.md +50 -0
- {datajoint-0.14.0 → datajoint-0.14.2}/datajoint/__init__.py +1 -1
- {datajoint-0.14.0 → datajoint-0.14.2}/datajoint/admin.py +12 -6
- {datajoint-0.14.0 → datajoint-0.14.2}/datajoint/autopopulate.py +104 -82
- {datajoint-0.14.0 → datajoint-0.14.2}/datajoint/blob.py +6 -4
- {datajoint-0.14.0 → datajoint-0.14.2}/datajoint/connection.py +6 -3
- {datajoint-0.14.0 → datajoint-0.14.2}/datajoint/declare.py +2 -3
- {datajoint-0.14.0 → datajoint-0.14.2}/datajoint/dependencies.py +1 -1
- {datajoint-0.14.0 → datajoint-0.14.2}/datajoint/diagram.py +9 -5
- {datajoint-0.14.0 → datajoint-0.14.2}/datajoint/expression.py +26 -18
- {datajoint-0.14.0 → datajoint-0.14.2}/datajoint/fetch.py +20 -14
- {datajoint-0.14.0 → datajoint-0.14.2}/datajoint/heading.py +11 -7
- {datajoint-0.14.0 → datajoint-0.14.2}/datajoint/preview.py +10 -6
- {datajoint-0.14.0 → datajoint-0.14.2}/datajoint/s3.py +4 -1
- {datajoint-0.14.0 → datajoint-0.14.2}/datajoint/schemas.py +1 -1
- {datajoint-0.14.0 → datajoint-0.14.2}/datajoint/settings.py +1 -0
- {datajoint-0.14.0 → datajoint-0.14.2}/datajoint/table.py +45 -6
- {datajoint-0.14.0 → datajoint-0.14.2}/datajoint/user_tables.py +4 -0
- {datajoint-0.14.0 → datajoint-0.14.2}/datajoint/utils.py +14 -1
- {datajoint-0.14.0 → datajoint-0.14.2}/datajoint/version.py +1 -1
- {datajoint-0.14.0 → datajoint-0.14.2/datajoint.egg-info}/PKG-INFO +2 -2
- {datajoint-0.14.0 → datajoint-0.14.2}/datajoint.egg-info/requires.txt +1 -1
- {datajoint-0.14.0 → datajoint-0.14.2}/requirements.txt +1 -1
- {datajoint-0.14.0 → datajoint-0.14.2}/setup.py +1 -1
- datajoint-0.14.0/README.md +0 -33
- {datajoint-0.14.0 → datajoint-0.14.2}/LICENSE.txt +0 -0
- {datajoint-0.14.0 → datajoint-0.14.2}/MANIFEST.in +0 -0
- {datajoint-0.14.0 → datajoint-0.14.2}/datajoint/attribute_adapter.py +0 -0
- {datajoint-0.14.0 → datajoint-0.14.2}/datajoint/condition.py +0 -0
- {datajoint-0.14.0 → datajoint-0.14.2}/datajoint/errors.py +0 -0
- {datajoint-0.14.0 → datajoint-0.14.2}/datajoint/external.py +0 -0
- {datajoint-0.14.0 → datajoint-0.14.2}/datajoint/hash.py +0 -0
- {datajoint-0.14.0 → datajoint-0.14.2}/datajoint/jobs.py +0 -0
- {datajoint-0.14.0 → datajoint-0.14.2}/datajoint/logging.py +0 -0
- {datajoint-0.14.0 → datajoint-0.14.2}/datajoint/plugin.py +0 -0
- {datajoint-0.14.0 → datajoint-0.14.2}/datajoint.egg-info/SOURCES.txt +0 -0
- {datajoint-0.14.0 → datajoint-0.14.2}/datajoint.egg-info/datajoint.pub +0 -0
- {datajoint-0.14.0 → datajoint-0.14.2}/datajoint.egg-info/dependency_links.txt +0 -0
- {datajoint-0.14.0 → datajoint-0.14.2}/datajoint.egg-info/top_level.txt +0 -0
- {datajoint-0.14.0 → datajoint-0.14.2}/setup.cfg +0 -0
{datajoint-0.14.0/datajoint.egg-info → datajoint-0.14.2}/PKG-INFO
@@ -1,13 +1,13 @@
 Metadata-Version: 2.1
 Name: datajoint
-Version: 0.14.0
+Version: 0.14.2
 Summary: A relational data pipeline framework.
 Home-page: https://datajoint.com
 Author: DataJoint Contributors
 Author-email: support@datajoint.com
 License: GNU LGPL
 Keywords: database,data pipelines,scientific computing,automated research workflows
-Requires-Python: ~=3.7
+Requires-Python: ~=3.8
 License-File: LICENSE.txt
 
 A relational data framework for scientific data pipelines with MySQL backend.
datajoint-0.14.2/README.md (new file)
@@ -0,0 +1,50 @@
+[](https://zenodo.org/badge/latestdoi/16774/datajoint/datajoint-python)
+[](https://coveralls.io/github/datajoint/datajoint-python?branch=master)
+[](http://badge.fury.io/py/datajoint)
+[](https://datajoint.slack.com/)
+
+# Welcome to DataJoint for Python!
+
+DataJoint for Python is a framework for scientific workflow management based on
+relational principles. DataJoint is built on the foundation of the relational data
+model and prescribes a consistent method for organizing, populating, computing, and
+querying data.
+
+DataJoint was initially developed in 2009 by Dimitri Yatsenko in Andreas Tolias' Lab at
+Baylor College of Medicine for the distributed processing and management of large
+volumes of data streaming from regular experiments. Starting in 2011, DataJoint has
+been available as an open-source project adopted by other labs and improved through
+contributions from several developers.
+Presently, the primary developer of DataJoint open-source software is the company
+DataJoint (https://datajoint.com).
+
+## Data Pipeline Example
+
+<!-- image not captured in this view -->
+
+[Yatsenko et al., bioRxiv 2021](https://doi.org/10.1101/2021.03.30.437358)
+
+## Getting Started
+
+- Install with Conda
+
+  ```bash
+  conda install -c conda-forge datajoint
+  ```
+
+- Install with pip
+
+  ```bash
+  pip install datajoint
+  ```
+
+- [Documentation & Tutorials](https://datajoint.com/docs/core/datajoint-python/)
+
+- [Interactive Tutorials](https://github.com/datajoint/datajoint-tutorials) on GitHub Codespaces
+
+- [DataJoint Elements](https://datajoint.com/docs/elements/) - Catalog of example pipelines for neuroscience experiments
+
+- Contribute
+  - [Development Environment](https://datajoint.com/docs/core/datajoint-python/latest/develop/)
+
+  - [Guidelines](https://datajoint.com/docs/about/contribute/)
datajoint/__init__.py
@@ -1,5 +1,5 @@
 """
-DataJoint for Python is a framework for building data
+DataJoint for Python is a framework for building data pipelines using MySQL databases
 to represent pipeline structure and bulk storage systems for large objects.
 DataJoint is built on the foundation of the relational data model and prescribes a
 consistent method for organizing, populating, and querying data.
datajoint/admin.py
@@ -1,5 +1,6 @@
 import pymysql
 from getpass import getpass
+from packaging import version
 from .connection import conn
 from .settings import config
 from .utils import user_choice
@@ -8,17 +9,22 @@ import logging
 logger = logging.getLogger(__name__.split(".")[0])
 
 
-def set_password(
-    new_password=None, connection=None, update_config=None
-):  # pragma: no cover
+def set_password(new_password=None, connection=None, update_config=None):
     connection = conn() if connection is None else connection
     if new_password is None:
         new_password = getpass("New password: ")
         confirm_password = getpass("Confirm password: ")
         if new_password != confirm_password:
-            logger.
+            logger.warning("Failed to confirm the password! Aborting password change.")
             return
-
+
+    if version.parse(
+        connection.query("select @@version;").fetchone()[0]
+    ) >= version.parse("5.7"):
+        # SET PASSWORD is deprecated as of MySQL 5.7 and removed in 8+
+        connection.query("ALTER USER user() IDENTIFIED BY '%s';" % new_password)
+    else:
+        connection.query("SET PASSWORD = PASSWORD('%s')" % new_password)
     logger.info("Password updated.")
 
     if update_config or (
@@ -28,7 +34,7 @@ def set_password(
     config.save_local(verbose=True)
 
 
-def kill(restriction=None, connection=None, order_by=None):
+def kill(restriction=None, connection=None, order_by=None):
     """
     view and kill database connections.
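For context, the updated `set_password` can be driven interactively or programmatically; a minimal sketch (the password value is illustrative):

```python
import datajoint as dj

# interactive: prompts twice via getpass, then issues ALTER USER on
# MySQL >= 5.7 or the legacy SET PASSWORD statement on older servers
dj.set_password()

# non-interactive: set the password and skip updating the local config
dj.set_password(new_password="my-new-secret", update_config=False)
```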
datajoint/autopopulate.py
@@ -1,4 +1,5 @@
 """This module defines class dj.AutoPopulate"""
+
 import logging
 import datetime
 import traceback
@@ -118,7 +119,7 @@ class AutoPopulate:
 
     def _jobs_to_do(self, restrictions):
         """
-        :return: the query
+        :return: the query yielding the keys to be computed (derived from self.key_source)
         """
         if self.restriction:
             raise DataJointError(
@@ -180,6 +181,9 @@ class AutoPopulate:
             to be passed down to each ``make()`` call. Computation arguments should be
             specified within the pipeline e.g. using a `dj.Lookup` table.
         :type make_kwargs: dict, optional
+        :return: a dict with two keys
+            "success_count": the count of successful ``make()`` calls in this ``populate()`` call
+            "error_list": the error list that is filled if `suppress_errors` is True
         """
         if self.connection.in_transaction:
             raise DataJointError("Populate cannot be called during a transaction.")
@@ -204,12 +208,12 @@ class AutoPopulate:
 
         keys = (self._jobs_to_do(restrictions) - self.target).fetch("KEY", limit=limit)
 
-        # exclude "error" or "ignore" jobs
+        # exclude "error", "ignore" or "reserved" jobs
         if reserve_jobs:
             exclude_key_hashes = (
                 jobs
                 & {"table_name": self.target.table_name}
-                & 'status in ("error", "ignore")'
+                & 'status in ("error", "ignore", "reserved")'
             ).fetch("key_hash")
             keys = [key for key in keys if key_hash(key) not in exclude_key_hashes]
 
@@ -222,49 +226,62 @@
 
         keys = keys[:max_calls]
         nkeys = len(keys)
-        if not nkeys:
-            return
-
-        processes = min(_ for _ in (processes, nkeys, mp.cpu_count()) if _)
 
         error_list = []
-        populate_kwargs = dict(
-            suppress_errors=suppress_errors,
-            return_exception_objects=return_exception_objects,
-            make_kwargs=make_kwargs,
-        )
+        success_list = []
 
-        if processes == 1:
-            for key in …
-        … (remaining removed lines not captured in this view)
+        if nkeys:
+            processes = min(_ for _ in (processes, nkeys, mp.cpu_count()) if _)
+
+            populate_kwargs = dict(
+                suppress_errors=suppress_errors,
+                return_exception_objects=return_exception_objects,
+                make_kwargs=make_kwargs,
+            )
+
+            if processes == 1:
+                for key in (
+                    tqdm(keys, desc=self.__class__.__name__)
+                    if display_progress
+                    else keys
+                ):
+                    status = self._populate1(key, jobs, **populate_kwargs)
+                    if status is True:
+                        success_list.append(1)
+                    elif isinstance(status, tuple):
+                        error_list.append(status)
+                    else:
+                        assert status is False
+            else:
+                # spawn multiple processes
+                self.connection.close()  # disconnect parent process from MySQL server
+                del self.connection._conn.ctx  # SSLContext is not pickleable
+                with mp.Pool(
+                    processes, _initialize_populate, (self, jobs, populate_kwargs)
+                ) as pool, (
+                    tqdm(desc="Processes: ", total=nkeys)
+                    if display_progress
+                    else contextlib.nullcontext()
+                ) as progress_bar:
+                    for status in pool.imap(_call_populate1, keys, chunksize=1):
+                        if status is True:
+                            success_list.append(1)
+                        elif isinstance(status, tuple):
+                            error_list.append(status)
+                        else:
+                            assert status is False
+                        if display_progress:
+                            progress_bar.update()
+                self.connection.connect()  # reconnect parent process to MySQL server
 
         # restore original signal handler:
         if reserve_jobs:
             signal.signal(signal.SIGTERM, old_handler)
 
-        … (two removed lines not captured in this view)
+        return {
+            "success_count": sum(success_list),
+            "error_list": error_list,
+        }
 
     def _populate1(
         self, key, jobs, suppress_errors, return_exception_objects, make_kwargs=None
@@ -275,55 +292,60 @@
         :param key: dict specifying job to populate
         :param suppress_errors: bool if errors should be suppressed and returned
         :param return_exception_objects: if True, errors must be returned as objects
-        :return: (key, error) when suppress_errors=True,
+        :return: (key, error) when suppress_errors=True,
+            True if successfully invoke one `make()` call, otherwise False
         """
         make = self._make_tuples if hasattr(self, "_make_tuples") else self.make
 
-        if jobs is None
-            self.
-        … (removed line not captured in this view)
+        if jobs is not None and not jobs.reserve(
+            self.target.table_name, self._job_key(key)
+        ):
+            return False
+
+        self.connection.start_transaction()
+        if key in self.target:  # already populated
+            self.connection.cancel_transaction()
+            if jobs is not None:
+                jobs.complete(self.target.table_name, self._job_key(key))
+            return False
+
+        logger.debug(f"Making {key} -> {self.target.full_table_name}")
+        self.__class__._allow_insert = True
+        try:
+            make(dict(key), **(make_kwargs or {}))
+        except (KeyboardInterrupt, SystemExit, Exception) as error:
+            try:
                 self.connection.cancel_transaction()
-        … (removed lines not captured in this view)
+            except LostConnectionError:
+                pass
+            error_message = "{exception}{msg}".format(
+                exception=error.__class__.__name__,
+                msg=": " + str(error) if str(error) else "",
+            )
+            logger.debug(
+                f"Error making {key} -> {self.target.full_table_name} - {error_message}"
+            )
+            if jobs is not None:
+                # show error name and error message (if any)
+                jobs.error(
+                    self.target.table_name,
+                    self._job_key(key),
+                    error_message=error_message,
+                    error_stack=traceback.format_exc(),
+                )
+            if not suppress_errors or isinstance(error, SystemExit):
+                raise
             else:
-            logger.
-        … (removed lines not captured in this view)
-                exception=error.__class__.__name__,
-                msg=": " + str(error) if str(error) else "",
-            )
-            logger.debug(
-                f"Error making {key} -> {self.target.full_table_name} - {error_message}"
-            )
-            if jobs is not None:
-                # show error name and error message (if any)
-                jobs.error(
-                    self.target.table_name,
-                    self._job_key(key),
-                    error_message=error_message,
-                    error_stack=traceback.format_exc(),
-                )
-            if not suppress_errors or isinstance(error, SystemExit):
-                raise
-            else:
-                logger.error(error)
-                return key, error if return_exception_objects else error_message
-        else:
-            self.connection.commit_transaction()
-            logger.debug(
-                f"Success making {key} -> {self.target.full_table_name}"
-            )
-            if jobs is not None:
-                jobs.complete(self.target.table_name, self._job_key(key))
-        finally:
-            self.__class__._allow_insert = False
+                logger.error(error)
+                return key, error if return_exception_objects else error_message
+        else:
+            self.connection.commit_transaction()
+            logger.debug(f"Success making {key} -> {self.target.full_table_name}")
+            if jobs is not None:
+                jobs.complete(self.target.table_name, self._job_key(key))
+            return True
+        finally:
+            self.__class__._allow_insert = False
 
     def progress(self, *restrictions, display=False):
         """
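With this change, `populate()` reports its outcome instead of returning `None`; a minimal usage sketch (the table name is hypothetical):

```python
# assumes FilteredImage is a dj.Computed table defined elsewhere in the pipeline
result = FilteredImage.populate(reserve_jobs=True, suppress_errors=True)

print(f"{result['success_count']} make() calls succeeded")
for key, error_message in result["error_list"]:
    # each entry is the (key, error) pair returned by _populate1
    print(f"failed on {key}: {error_message}")
```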
datajoint/blob.py
@@ -322,9 +322,11 @@ class Blob:
             + "\0".join(array.dtype.names).encode()  # number of fields
             + b"\0"
             + b"".join(  # field names
-                self.pack_recarray(array[f])
-                if array[f].dtype.fields
-                else self.pack_array(array[f])
+                (
+                    self.pack_recarray(array[f])
+                    if array[f].dtype.fields
+                    else self.pack_array(array[f])
+                )
                 for f in array.dtype.names
             )
         )
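The recursive branch means record arrays with nested structured fields can now serialize; a minimal round-trip sketch, assuming the module-level `pack`/`unpack` helpers in `datajoint.blob`:

```python
import numpy as np
from datajoint import blob

# "pos" is itself a structured dtype — the case routed to pack_recarray
dtype = np.dtype([("pos", [("x", "f8"), ("y", "f8")]), ("t", "f8")])
arr = np.zeros(3, dtype=dtype)

restored = blob.unpack(blob.pack(arr))
print(restored.dtype.names)  # expected: ('pos', 't')
```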
@@ -449,7 +451,7 @@ class Blob:
         )
 
     def read_struct(self):
-        """deserialize matlab
+        """deserialize matlab struct"""
         n_dims = self.read_value()
         shape = self.read_value(count=n_dims)
         n_elem = np.prod(shape, dtype=int)
datajoint/connection.py
@@ -2,6 +2,7 @@
 This module contains the Connection class that manages the connection to the database, and
 the ``conn`` function that provides access to a persistent connection in datajoint.
 """
+
 import warnings
 from contextlib import contextmanager
 import pymysql as client
@@ -79,6 +80,8 @@ def translate_query_error(client_error, query):
     # Integrity errors
     if err == 1062:
         return errors.DuplicateError(*args)
+    if err == 1217:  # MySQL 8 error code
+        return errors.IntegrityError(*args)
     if err == 1451:
         return errors.IntegrityError(*args)
     if err == 1452:
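Callers therefore catch DataJoint's own exception types regardless of server version; a minimal sketch (the `Session` table is hypothetical):

```python
import datajoint as dj

try:
    # deleting a parent row that still has dependents raises error 1451
    # on MySQL 5.x and 1217 on MySQL 8 — both now map to IntegrityError
    (Session & "session_id = 1").delete_quick()
except dj.errors.IntegrityError as err:
    print("delete blocked by dependent rows:", err)
```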
@@ -113,16 +116,16 @@ def conn(
     :param init_fun: initialization function
     :param reset: whether the connection should be reset or not
     :param use_tls: TLS encryption option. Valid options are: True (required), False
-        (required no TLS), None (TLS
+        (required no TLS), None (TLS preferred, default), dict (Manually specify values per
         https://dev.mysql.com/doc/refman/5.7/en/connection-options.html#encrypted-connection-options).
     """
     if not hasattr(conn, "connection") or reset:
         host = host if host is not None else config["database.host"]
         user = user if user is not None else config["database.user"]
         password = password if password is not None else config["database.password"]
-        if user is None:
+        if user is None:
             user = input("Please enter DataJoint username: ")
-        if password is None:
+        if password is None:
             password = getpass(prompt="Please enter DataJoint password: ")
         init_fun = (
             init_fun if init_fun is not None else config["connection.init_function"]
datajoint/declare.py
@@ -2,6 +2,7 @@
 This module hosts functions to convert DataJoint table definitions into mysql table definitions, and to
 declare the corresponding mysql tables.
 """
+
 import re
 import pyparsing as pp
 import logging
@@ -382,9 +383,7 @@ def _make_attribute_alter(new, old, primary_key):
         command=(
             "ADD"
             if (old_name or new_name) not in old_names
-            else "MODIFY"
-            if not old_name
-            else "CHANGE `%s`" % old_name
+            else "MODIFY" if not old_name else "CHANGE `%s`" % old_name
         ),
         new_def=new_def,
         after="" if after is None else "AFTER `%s`" % after,
datajoint/dependencies.py
@@ -127,7 +127,7 @@ class Dependencies(nx.DiGraph):
             self.add_edge(fk["referenced_table"], alias_node, **props)
             self.add_edge(alias_node, fk["referencing_table"], **props)
 
-        if not nx.is_directed_acyclic_graph(self):
+        if not nx.is_directed_acyclic_graph(self):
             raise DataJointError("DataJoint can only work with acyclic dependencies")
         self._loaded = True
 
datajoint/diagram.py
@@ -385,11 +385,15 @@ else:
         assert issubclass(cls, Table)
         description = cls().describe(context=self.context).split("\n")
         description = (
-            "-" * 30
-            if q.startswith("---")
-            else q.replace("->", "→")
-            if "->" in q
-            else q.split(":")[0]
+            (
+                "-" * 30
+                if q.startswith("---")
+                else (
+                    q.replace("->", "→")
+                    if "->" in q
+                    else q.split(":")[0]
+                )
+            )
             for q in description
             if not q.startswith("#")
         )
datajoint/expression.py
@@ -100,9 +100,11 @@ class QueryExpression:
 
     def from_clause(self):
         support = (
-            "(" + src.make_sql() + ") as `$%x`" % next(self._subquery_alias_count)
-            if isinstance(src, QueryExpression)
-            else src
+            (
+                "(" + src.make_sql() + ") as `$%x`" % next(self._subquery_alias_count)
+                if isinstance(src, QueryExpression)
+                else src
+            )
            for src in self.support
        )
        clause = next(support)
@@ -704,14 +706,16 @@ class Aggregation(QueryExpression):
             fields=fields,
             from_=self.from_clause(),
             where=self.where_clause(),
-            group_by=""
-            if not self.primary_key
-            else (
-                " GROUP BY `%s`" % "`,`".join(self._grouping_attributes)
-                + (
-                    ""
-                    if not self.restriction
-                    else " HAVING (%s)" % ")AND(".join(self.restriction)
+            group_by=(
+                ""
+                if not self.primary_key
+                else (
+                    " GROUP BY `%s`" % "`,`".join(self._grouping_attributes)
+                    + (
+                        ""
+                        if not self.restriction
+                        else " HAVING (%s)" % ")AND(".join(self.restriction)
+                    )
                 )
             ),
         )
@@ -773,12 +777,16 @@ class Union(QueryExpression):
         # no secondary attributes: use UNION DISTINCT
         fields = arg1.primary_key
         return "SELECT * FROM (({sql1}) UNION ({sql2})) as `_u{alias}`".format(
-            sql1=arg1.make_sql()
-            if isinstance(arg1, Union)
-            else arg1.make_sql(fields),
-            sql2=arg2.make_sql()
-            if isinstance(arg2, Union)
-            else arg2.make_sql(fields),
+            sql1=(
+                arg1.make_sql()
+                if isinstance(arg1, Union)
+                else arg1.make_sql(fields)
+            ),
+            sql2=(
+                arg2.make_sql()
+                if isinstance(arg2, Union)
+                else arg2.make_sql(fields)
+            ),
             alias=next(self.__count),
         )
         # with secondary attributes, use union of left join with antijoin
@@ -839,7 +847,7 @@ class U:
     >>> dj.U().aggr(expr, n='count(*)')
 
     The following expressions both yield one element containing the number `n` of distinct values of attribute `attr` in
-    query
+    query expression `expr`.
 
     >>> dj.U().aggr(expr, n='count(distinct attr)')
     >>> dj.U().aggr(dj.U('attr').aggr(expr), 'n=count(*)')
datajoint/fetch.py
@@ -244,13 +244,15 @@ class Fetch:
             ]
         else:
             return_values = [
-                list(
-                    (to_dicts if as_dict else lambda x: x)(
-                        ret[self._expression.primary_key]
-                    )
-                )
-                if is_key(attribute)
-                else ret[attribute]
+                (
+                    list(
+                        (to_dicts if as_dict else lambda x: x)(
+                            ret[self._expression.primary_key]
+                        )
+                    )
+                    if is_key(attribute)
+                    else ret[attribute]
+                )
                 for attribute in attrs
             ]
             ret = return_values[0] if len(attrs) == 1 else return_values
@@ -272,12 +274,14 @@ class Fetch:
             else np.dtype(
                 [
                     (
-                        name,
-                        type(value),
-                    )  # use the first element to determine blob type
-                    if heading[name].is_blob
-                    and isinstance(value, numbers.Number)
-                    else (name, heading.as_dtype[name])
+                        (
+                            name,
+                            type(value),
+                        )  # use the first element to determine blob type
+                        if heading[name].is_blob
+                        and isinstance(value, numbers.Number)
+                        else (name, heading.as_dtype[name])
+                    )
                     for value, name in zip(ret[0], heading.as_dtype.names)
                 ]
             )
@@ -353,9 +357,11 @@ class Fetch1:
                 "fetch1 should only return one tuple. %d tuples found" % len(result)
             )
         return_values = tuple(
-            next(to_dicts(result[self._expression.primary_key]))
-            if is_key(attribute)
-            else result[attribute][0]
+            (
+                next(to_dicts(result[self._expression.primary_key]))
+                if is_key(attribute)
+                else result[attribute][0]
+            )
             for attribute in attrs
         )
         ret = return_values[0] if len(attrs) == 1 else return_values
datajoint/heading.py
@@ -193,10 +193,12 @@ class Heading:
         represent heading as the SQL SELECT clause.
         """
         return ",".join(
-            "`%s`" % name
-            if self.attributes[name].attribute_expression is None
-            else self.attributes[name].attribute_expression
-            + (" as `%s`" % name if include_aliases else "")
+            (
+                "`%s`" % name
+                if self.attributes[name].attribute_expression is None
+                else self.attributes[name].attribute_expression
+                + (" as `%s`" % name if include_aliases else "")
+            )
             for name in fields
         )
 
@@ -371,9 +373,11 @@ class Heading:
             is_blob=category in ("INTERNAL_BLOB", "EXTERNAL_BLOB"),
             uuid=category == "UUID",
             is_external=category in EXTERNAL_TYPES,
-            store=attr["type"].split("@")[1]
-            if category in EXTERNAL_TYPES
-            else None,
+            store=(
+                attr["type"].split("@")[1]
+                if category in EXTERNAL_TYPES
+                else None
+            ),
         )
 
         if attr["in_key"] and any(
datajoint/preview.py
@@ -68,9 +68,11 @@ def repr_html(query_expression):
         }
         .Table tr:nth-child(odd){
             background: #ffffff;
+            color: #000000;
         }
         .Table tr:nth-child(even){
             background: #f3f1ff;
+            color: #000000;
         }
         /* Tooltip container */
         .djtooltip {
@@ -124,9 +126,9 @@ def repr_html(query_expression):
             head_template.format(
                 column=c,
                 comment=heading.attributes[c].comment,
-                primary="primary"
-                if c in query_expression.primary_key
-                else "nonprimary",
+                primary=(
+                    "primary" if c in query_expression.primary_key else "nonprimary"
+                ),
             )
             for c in heading.names
         ),
@@ -143,7 +145,9 @@ def repr_html(query_expression):
                 for tup in tuples
             ]
         ),
-        count=("<p>Total: %d</p>" % len(rel))
-        if config["display.show_tuple_count"]
-        else "",
+        count=(
+            ("<p>Total: %d</p>" % len(rel))
+            if config["display.show_tuple_count"]
+            else ""
+        ),
     )
datajoint/s3.py
@@ -1,6 +1,7 @@
 """
 AWS S3 operations
 """
+
 from io import BytesIO
 import minio  # https://docs.minio.io/docs/python-client-api-reference
 import urllib3
@@ -68,7 +69,9 @@ class Folder:
     def get(self, name):
         logger.debug("get: {}:{}".format(self.bucket, name))
         try:
-            … (removed line not captured in this view)
+            with self.client.get_object(self.bucket, str(name)) as result:
+                data = [d for d in result.stream()]
+                return b"".join(data)
         except minio.error.S3Error as e:
             if e.code == "NoSuchKey":
                 raise errors.MissingExternalFile("Missing s3 key %s" % name)
datajoint/schemas.py
@@ -21,7 +21,7 @@ logger = logging.getLogger(__name__.split(".")[0])
 
 def ordered_dir(class_):
     """
-    List (most) attributes of the class including inherited ones, similar to `dir`
+    List (most) attributes of the class including inherited ones, similar to `dir` built-in function,
     but respects order of attribute declaration as much as possible.
 
     :param class_: class to list members for
datajoint/table.py
@@ -15,7 +15,7 @@ from .declare import declare, alter
 from .condition import make_condition
 from .expression import QueryExpression
 from . import blob
-from .utils import user_choice, get_master
+from .utils import user_choice, get_master, is_camel_case
 from .heading import Heading
 from .errors import (
     DuplicateError,
@@ -75,6 +75,10 @@ class Table(QueryExpression):
     def table_name(self):
         return self._table_name
 
+    @property
+    def class_name(self):
+        return self.__class__.__name__
+
     @property
     def definition(self):
         raise NotImplementedError(
@@ -93,6 +97,14 @@ class Table(QueryExpression):
                 "Cannot declare new tables inside a transaction, "
                 "e.g. from inside a populate/make call"
             )
+        # Enforce strict CamelCase #1150
+        if not is_camel_case(self.class_name):
+            raise DataJointError(
+                "Table class name `{name}` is invalid. Please use CamelCase. ".format(
+                    name=self.class_name
+                )
+                + "Classes defining tables should be formatted in strict CamelCase."
+            )
         sql, external_stores = declare(self.full_table_name, self.definition, context)
         sql = sql.format(database=self.database)
         try:
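Declaring a table whose class name is not strict CamelCase now fails at declaration time; a minimal sketch with a hypothetical schema:

```python
import datajoint as dj

schema = dj.schema("sandbox")  # hypothetical schema name

@schema
class my_session(dj.Manual):  # raises DataJointError; rename to MySession
    definition = """
    session_id : int
    """
```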
@@ -230,7 +242,7 @@ class Table(QueryExpression):
 
     def parts(self, as_objects=False):
         """
-        return part tables either as entries in a dict with foreign key
+        return part tables either as entries in a dict with foreign key information or a list of objects
 
         :param as_objects: if False (default), the output is a dict describing the foreign keys. If True, return table objects.
         """
@@ -474,6 +486,7 @@ class Table(QueryExpression):
         transaction: bool = True,
         safemode: Union[bool, None] = None,
         force_parts: bool = False,
+        force_masters: bool = False,
     ) -> int:
         """
         Deletes the contents of the table and its dependent tables, recursively.
@@ -485,6 +498,8 @@ class Table(QueryExpression):
             safemode: If `True`, prohibit nested transactions and prompt to confirm. Default
                 is `dj.config['safemode']`.
             force_parts: Delete from parts even when not deleting from their masters.
+            force_masters: If `True`, include part/master pairs in the cascade.
+                Default is `False`.
 
         Returns:
             Number of deleted rows (excluding those from dependent tables).
@@ -495,6 +510,7 @@ class Table(QueryExpression):
             DataJointError: Deleting a part table before its master.
         """
         deleted = set()
+        visited_masters = set()
 
         def cascade(table):
             """service function to perform cascading deletes recursively."""
@@ -547,13 +563,34 @@ class Table(QueryExpression):
                         and match["fk_attrs"] == match["pk_attrs"]
                     ):
                         child._restriction = table._restriction
+                        child._restriction_attributes = table.restriction_attributes
                     elif match["fk_attrs"] != match["pk_attrs"]:
                         child &= table.proj(
                             **dict(zip(match["fk_attrs"], match["pk_attrs"]))
                         )
                     else:
                         child &= table.proj()
-                    cascade(child)
+
+                    master_name = get_master(child.full_table_name)
+                    if (
+                        force_masters
+                        and master_name
+                        and master_name != table.full_table_name
+                        and master_name not in visited_masters
+                    ):
+                        master = FreeTable(table.connection, master_name)
+                        master._restriction_attributes = set()
+                        master._restriction = [
+                            make_condition(  # &= may cause in target tables in subquery
+                                master,
+                                (master.proj() & child.proj()).fetch(),
+                                master._restriction_attributes,
+                            )
+                        ]
+                        visited_masters.add(master_name)
+                        cascade(master)
+                    else:
+                        cascade(child)
                 else:
                     deleted.add(table.full_table_name)
                     logger.info(
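A sketch of the new option (the `Subject` table is hypothetical): with `force_masters=True`, the cascade re-restricts each affected part's master and deletes the master/part pair together rather than leaving orphaned parts.

```python
# cascade from an upstream row, deleting affected masters together
# with their parts instead of stopping at the part tables
(Subject & "subject_id = 12").delete(force_masters=True)
```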
@@ -758,9 +795,11 @@ class Table(QueryExpression):
             if do_include:
                 attributes_declared.add(attr.name)
                 definition += "%-20s : %-28s %s\n" % (
-                    attr.name
-                    if attr.default is None
-                    else "%s=%s" % (attr.name, attr.default),
+                    (
+                        attr.name
+                        if attr.default is None
+                        else "%s=%s" % (attr.name, attr.default)
+                    ),
                     "%s%s"
                     % (attr.type, " auto_increment" if attr.autoincrement else ""),
                     "# " + attr.comment if attr.comment else "",
datajoint/user_tables.py
@@ -238,3 +238,7 @@ class Part(UserTable):
         raise DataJointError(
             "Cannot drop a Part directly. Delete from master instead"
         )
+
+    def alter(self, prompt=True, context=None):
+        # without context, use declaration context which maps master keyword to master table
+        super().alter(prompt=prompt, context=context or self.declaration_context)
datajoint/utils.py
@@ -53,6 +53,19 @@ def get_master(full_table_name: str) -> str:
     return match["master"] + "`" if match else ""
 
 
+def is_camel_case(s):
+    """
+    Check if a string is in CamelCase notation.
+
+    :param s: string to check
+    :returns: True if the string is in CamelCase notation, False otherwise
+    Example:
+    >>> is_camel_case("TableName") # returns True
+    >>> is_camel_case("table_name") # returns False
+    """
+    return bool(re.match(r"^[A-Z][A-Za-z0-9]*$", s))
+
+
 def to_camel_case(s):
     """
     Convert names with under score (_) separation into camel case names.
@@ -82,7 +95,7 @@ def from_camel_case(s):
     def convert(match):
         return ("_" if match.groups()[0] else "") + match.group(0).lower()
 
-    if not
+    if not is_camel_case(s):
         raise DataJointError(
             "ClassName must be alphanumeric in CamelCase, begin with a capital letter"
         )
{datajoint-0.14.0 → datajoint-0.14.2/datajoint.egg-info}/PKG-INFO
@@ -1,13 +1,13 @@
 Metadata-Version: 2.1
 Name: datajoint
-Version: 0.14.0
+Version: 0.14.2
 Summary: A relational data pipeline framework.
 Home-page: https://datajoint.com
 Author: DataJoint Contributors
 Author-email: support@datajoint.com
 License: GNU LGPL
 Keywords: database,data pipelines,scientific computing,automated research workflows
-Requires-Python: ~=3.7
+Requires-Python: ~=3.8
 License-File: LICENSE.txt
 
 A relational data framework for scientific data pipelines with MySQL backend.
datajoint-0.14.0/README.md
DELETED
@@ -1,33 +0,0 @@
-[](https://zenodo.org/badge/latestdoi/16774/datajoint/datajoint-python)
-[](https://travis-ci.org/datajoint/datajoint-python)
-[](https://coveralls.io/github/datajoint/datajoint-python?branch=master)
-[](http://badge.fury.io/py/datajoint)
-[](https://requires.io/github/datajoint/datajoint-python/requirements/?branch=master)
-[](https://datajoint.slack.com/)
-
-# Welcome to DataJoint for Python!
-
-DataJoint for Python is a framework for scientific workflow management based on relational principles. DataJoint is built on the foundation of the relational data model and prescribes a consistent method for organizing, populating, computing, and querying data.
-
-DataJoint was initially developed in 2009 by Dimitri Yatsenko in Andreas Tolias' Lab at Baylor College of Medicine for the distributed processing and management of large volumes of data streaming from regular experiments. Starting in 2011, DataJoint has been available as an open-source project adopted by other labs and improved through contributions from several developers.
-Presently, the primary developer of DataJoint open-source software is the company DataJoint (https://datajoint.com).
-
-- [Getting Started](https://datajoint.com/docs/core/datajoint-python/latest/getting-started/)
-- [DataJoint Elements](https://datajoint.com/docs/elements/) - Catalog of example pipelines
-- [DataJoint CodeBook](https://codebook.datajoint.io) - Interactive online tutorials
-- Contribute
-
-  - [Development Environment](https://datajoint.com/docs/core/datajoint-python/latest/develop/)
-  - [Guidelines](https://datajoint.com/docs/community/contribute/)
-
-- Legacy Resources (To be replaced by above)
-  - [Documentation](https://docs.datajoint.org)
-  - [Tutorials](https://tutorials.datajoint.org)
-
-## Citation
-
-- If your work uses DataJoint for Python, please cite the following Research Resource Identifier (RRID) and manuscript.
-
-- DataJoint ([RRID:SCR_014543](https://scicrunch.org/resolver/SCR_014543)) - DataJoint for Python (version `<Enter version number>`)
-
-- Yatsenko D, Reimer J, Ecker AS, Walker EY, Sinz F, Berens P, Hoenselaar A, Cotton RJ, Siapas AS, Tolias AS. DataJoint: managing big scientific data using MATLAB or Python. bioRxiv. 2015 Jan 1:031658. doi: https://doi.org/10.1101/031658