braindecode 1.2.0.dev175337561__tar.gz → 1.2.0.dev176281713__tar.gz

This diff compares the contents of two publicly released versions of the package, as they appear in their respective public registries. It is provided for informational purposes only.

Potentially problematic release.


This version of braindecode might be problematic.

Files changed (127)
  1. {braindecode-1.2.0.dev175337561/braindecode.egg-info → braindecode-1.2.0.dev176281713}/PKG-INFO +5 -1
  2. braindecode-1.2.0.dev176281713/braindecode/datasets/experimental.py +218 -0
  3. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/models/__init__.py +1 -2
  4. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/models/attentionbasenet.py +23 -20
  5. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/models/eegconformer.py +15 -12
  6. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/models/eegnet.py +9 -10
  7. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/models/eegnex.py +115 -6
  8. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/models/summary.csv +40 -41
  9. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/models/util.py +0 -1
  10. braindecode-1.2.0.dev176281713/braindecode/version.py +1 -0
  11. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713/braindecode.egg-info}/PKG-INFO +5 -1
  12. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode.egg-info/SOURCES.txt +6 -2
  13. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode.egg-info/requires.txt +4 -0
  14. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/docs/api.rst +6 -7
  15. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/docs/conf.py +22 -7
  16. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/docs/index.rst +1 -1
  17. braindecode-1.2.0.dev176281713/docs/models/models.rst +76 -0
  18. braindecode-1.2.0.dev176281713/docs/models/models_categorization.rst +174 -0
  19. braindecode-1.2.0.dev176281713/docs/models/models_table.rst +139 -0
  20. braindecode-1.2.0.dev176281713/docs/models/models_visualization.rst +21 -0
  21. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/docs/whats_new.rst +4 -0
  22. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/pyproject.toml +5 -1
  23. braindecode-1.2.0.dev175337561/braindecode/version.py +0 -1
  24. braindecode-1.2.0.dev175337561/docs/models_summary.rst +0 -134
  25. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/LICENSE.txt +0 -0
  26. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/MANIFEST.in +0 -0
  27. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/NOTICE.txt +0 -0
  28. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/README.rst +0 -0
  29. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/__init__.py +0 -0
  30. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/augmentation/__init__.py +0 -0
  31. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/augmentation/base.py +0 -0
  32. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/augmentation/functional.py +0 -0
  33. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/augmentation/transforms.py +0 -0
  34. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/classifier.py +0 -0
  35. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/datasets/__init__.py +0 -0
  36. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/datasets/base.py +0 -0
  37. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/datasets/bbci.py +0 -0
  38. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/datasets/bcicomp.py +0 -0
  39. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/datasets/bids.py +0 -0
  40. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/datasets/mne.py +0 -0
  41. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/datasets/moabb.py +0 -0
  42. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/datasets/nmt.py +0 -0
  43. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/datasets/sleep_physio_challe_18.py +0 -0
  44. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/datasets/sleep_physionet.py +0 -0
  45. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/datasets/tuh.py +0 -0
  46. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/datasets/xy.py +0 -0
  47. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/datautil/__init__.py +0 -0
  48. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/datautil/serialization.py +0 -0
  49. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/datautil/util.py +0 -0
  50. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/eegneuralnet.py +0 -0
  51. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/functional/__init__.py +0 -0
  52. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/functional/functions.py +0 -0
  53. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/functional/initialization.py +0 -0
  54. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/models/atcnet.py +0 -0
  55. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/models/base.py +0 -0
  56. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/models/biot.py +0 -0
  57. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/models/contrawr.py +0 -0
  58. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/models/ctnet.py +0 -0
  59. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/models/deep4.py +0 -0
  60. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/models/deepsleepnet.py +0 -0
  61. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/models/eeginception_erp.py +0 -0
  62. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/models/eeginception_mi.py +0 -0
  63. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/models/eegitnet.py +0 -0
  64. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/models/eegminer.py +0 -0
  65. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/models/eegresnet.py +0 -0
  66. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/models/eegsimpleconv.py +0 -0
  67. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/models/eegtcnet.py +0 -0
  68. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/models/fbcnet.py +0 -0
  69. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/models/fblightconvnet.py +0 -0
  70. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/models/fbmsnet.py +0 -0
  71. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/models/hybrid.py +0 -0
  72. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/models/ifnet.py +0 -0
  73. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/models/labram.py +0 -0
  74. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/models/msvtnet.py +0 -0
  75. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/models/sccnet.py +0 -0
  76. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/models/shallow_fbcsp.py +0 -0
  77. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/models/signal_jepa.py +0 -0
  78. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/models/sinc_shallow.py +0 -0
  79. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/models/sleep_stager_blanco_2020.py +0 -0
  80. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/models/sleep_stager_chambon_2018.py +0 -0
  81. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/models/sleep_stager_eldele_2021.py +0 -0
  82. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/models/sparcnet.py +0 -0
  83. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/models/syncnet.py +0 -0
  84. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/models/tcn.py +0 -0
  85. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/models/tidnet.py +0 -0
  86. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/models/tsinception.py +0 -0
  87. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/models/usleep.py +0 -0
  88. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/modules/__init__.py +0 -0
  89. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/modules/activation.py +0 -0
  90. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/modules/attention.py +0 -0
  91. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/modules/blocks.py +0 -0
  92. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/modules/convolution.py +0 -0
  93. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/modules/filter.py +0 -0
  94. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/modules/layers.py +0 -0
  95. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/modules/linear.py +0 -0
  96. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/modules/parametrization.py +0 -0
  97. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/modules/stats.py +0 -0
  98. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/modules/util.py +0 -0
  99. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/modules/wrapper.py +0 -0
  100. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/preprocessing/__init__.py +0 -0
  101. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/preprocessing/mne_preprocess.py +0 -0
  102. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/preprocessing/preprocess.py +0 -0
  103. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/preprocessing/windowers.py +0 -0
  104. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/regressor.py +0 -0
  105. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/samplers/__init__.py +0 -0
  106. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/samplers/base.py +0 -0
  107. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/samplers/ssl.py +0 -0
  108. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/training/__init__.py +0 -0
  109. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/training/callbacks.py +0 -0
  110. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/training/losses.py +0 -0
  111. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/training/scoring.py +0 -0
  112. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/util.py +0 -0
  113. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/visualization/__init__.py +0 -0
  114. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/visualization/confusion_matrices.py +0 -0
  115. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode/visualization/gradients.py +0 -0
  116. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode.egg-info/dependency_links.txt +0 -0
  117. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/braindecode.egg-info/top_level.txt +0 -0
  118. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/docs/Makefile +0 -0
  119. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/docs/_templates/autosummary/class.rst +0 -0
  120. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/docs/_templates/autosummary/function.rst +0 -0
  121. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/docs/cite.rst +0 -0
  122. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/docs/help.rst +0 -0
  123. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/docs/install/install.rst +0 -0
  124. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/docs/install/install_pip.rst +0 -0
  125. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/docs/install/install_source.rst +0 -0
  126. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/docs/sg_execution_times.rst +0 -0
  127. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev176281713}/setup.cfg +0 -0
@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: braindecode
- Version: 1.2.0.dev175337561
+ Version: 1.2.0.dev176281713
  Summary: Deep learning software to decode EEG, ECG or MEG signals
  Author-email: Robin Tibor Schirrmeister <robintibor@gmail.com>
  Maintainer-email: Alexandre Gramfort <agramfort@meta.com>, Bruno Aristimunha Pinto <b.aristimunha@gmail.com>, Robin Tibor Schirrmeister <robintibor@gmail.com>
@@ -49,6 +49,9 @@ Requires-Dist: mypy; extra == "tests"
  Provides-Extra: docs
  Requires-Dist: sphinx_gallery; extra == "docs"
  Requires-Dist: sphinx_rtd_theme; extra == "docs"
+ Requires-Dist: sphinx-autodoc-typehints; extra == "docs"
+ Requires-Dist: sphinx-autobuild; extra == "docs"
+ Requires-Dist: sphinxcontrib-bibtex; extra == "docs"
  Requires-Dist: pydata_sphinx_theme; extra == "docs"
  Requires-Dist: numpydoc; extra == "docs"
  Requires-Dist: memory_profiler; extra == "docs"
@@ -59,6 +62,7 @@ Requires-Dist: lightning; extra == "docs"
  Requires-Dist: seaborn; extra == "docs"
  Requires-Dist: pre-commit; extra == "docs"
  Requires-Dist: openneuro-py; extra == "docs"
+ Requires-Dist: plotly; extra == "docs"
  Provides-Extra: all
  Requires-Dist: braindecode[docs,moabb,tests]; extra == "all"
  Dynamic: license-file
@@ -0,0 +1,218 @@
+ from __future__ import annotations
+
+ import random
+ from pathlib import Path
+ from typing import Callable, Sequence
+
+ import mne_bids
+ from torch.utils.data import IterableDataset, get_worker_info
+
+
+ class BIDSIterableDataset(IterableDataset):
+     """Dataset for loading BIDS.
+
+     .. warning::
+         This class is experimental and may change in the future.
+
+     .. warning::
+         This dataset is not consistent with the Braindecode API.
+
+     This class has the same parameters as the :func:`mne_bids.find_matching_paths` function
+     as it will be used to find the files to load. The default ``extensions`` parameter was changed.
+
+     More information on BIDS (Brain Imaging Data Structure)
+     can be found at https://bids.neuroimaging.io
+
+     Examples
+     --------
+     >>> from braindecode.datasets import BaseDataset, BaseConcatDataset
+     >>> from braindecode.datasets.bids import BIDSIterableDataset, _description_from_bids_path
+     >>> from braindecode.preprocessing import create_fixed_length_windows
+     >>>
+     >>> def my_reader_fn(path):
+     ...     raw = mne_bids.read_raw_bids(path)
+     ...     desc = _description_from_bids_path(path)
+     ...     ds = BaseDataset(raw, description=desc)
+     ...     windows_ds = create_fixed_length_windows(
+     ...         BaseConcatDataset([ds]),
+     ...         window_size_samples=400,
+     ...         window_stride_samples=200,
+     ...     )
+     ...     return windows_ds
+     >>>
+     >>> dataset = BIDSIterableDataset(
+     ...     reader_fn=my_reader_fn,
+     ...     root="root/of/my/bids/dataset/",
+     ... )
+
+     Parameters
+     ----------
+     reader_fn : Callable[[mne_bids.BIDSPath], Sequence]
+         A function that takes a BIDSPath and returns a dataset.
+     pool_size : int
+         The number of recordings to read and sample from.
+     bids_paths : list[mne_bids.BIDSPath] | None
+         A list of BIDSPaths to load. If None, will use the paths found by
+         :func:`mne_bids.find_matching_paths` and the arguments below.
+     root : pathlib.Path | str
+         The root of the BIDS path.
+     subjects : str | array-like of str | None
+         The subject ID. Corresponds to "sub".
+     sessions : str | array-like of str | None
+         The acquisition session. Corresponds to "ses".
+     tasks : str | array-like of str | None
+         The experimental task. Corresponds to "task".
+     acquisitions: str | array-like of str | None
+         The acquisition parameters. Corresponds to "acq".
+     runs : str | array-like of str | None
+         The run number. Corresponds to "run".
+     processings : str | array-like of str | None
+         The processing label. Corresponds to "proc".
+     recordings : str | array-like of str | None
+         The recording name. Corresponds to "rec".
+     spaces : str | array-like of str | None
+         The coordinate space for anatomical and sensor location
+         files (e.g., ``*_electrodes.tsv``, ``*_markers.mrk``).
+         Corresponds to "space".
+         Note that valid values for ``space`` must come from a list
+         of BIDS keywords as described in the BIDS specification.
+     splits : str | array-like of str | None
+         The split of the continuous recording file for ``.fif`` data.
+         Corresponds to "split".
+     descriptions : str | array-like of str | None
+         This corresponds to the BIDS entity ``desc``. It is used to provide
+         additional information for derivative data, e.g., preprocessed data
+         may be assigned ``description='cleaned'``.
+     suffixes : str | array-like of str | None
+         The filename suffix. This is the entity after the
+         last ``_`` before the extension. E.g., ``'channels'``.
+         The following filename suffixes are accepted:
+         'meg', 'markers', 'eeg', 'ieeg', 'T1w',
+         'participants', 'scans', 'electrodes', 'coordsystem',
+         'channels', 'events', 'headshape', 'digitizer',
+         'beh', 'physio', 'stim'
+     extensions : str | array-like of str | None
+         The extension of the filename. E.g., ``'.json'``.
+         By default, uses the ones accepted by :func:`mne_bids.read_raw_bids`.
+     datatypes : str | array-like of str | None
+         The BIDS data type, e.g., ``'anat'``, ``'func'``, ``'eeg'``, ``'meg'``,
+         ``'ieeg'``.
+     check : bool
+         If ``True``, only returns paths that conform to BIDS. If ``False``
+         (default), the ``.check`` attribute of the returned
+         :class:`mne_bids.BIDSPath` object will be set to ``True`` for paths that
+         do conform to BIDS, and to ``False`` for those that don't.
+     preload : bool
+         If True, preload the data. Defaults to False.
+     n_jobs : int
+         Number of jobs to run in parallel. Defaults to 1.
+     """
+
+     def __init__(
+         self,
+         reader_fn: Callable[[mne_bids.BIDSPath], Sequence],
+         pool_size: int = 4,
+         bids_paths: list[mne_bids.BIDSPath] | None = None,
+         root: Path | str | None = None,
+         subjects: str | list[str] | None = None,
+         sessions: str | list[str] | None = None,
+         tasks: str | list[str] | None = None,
+         acquisitions: str | list[str] | None = None,
+         runs: str | list[str] | None = None,
+         processings: str | list[str] | None = None,
+         recordings: str | list[str] | None = None,
+         spaces: str | list[str] | None = None,
+         splits: str | list[str] | None = None,
+         descriptions: str | list[str] | None = None,
+         suffixes: str | list[str] | None = None,
+         extensions: str | list[str] | None = [
+             ".con",
+             ".sqd",
+             ".pdf",
+             ".fif",
+             ".ds",
+             ".vhdr",
+             ".set",
+             ".edf",
+             ".bdf",
+             ".EDF",
+             ".snirf",
+             ".cdt",
+             ".mef",
+             ".nwb",
+         ],
+         datatypes: str | list[str] | None = None,
+         check: bool = False,
+     ):
+         if bids_paths is None:
+             bids_paths = mne_bids.find_matching_paths(
+                 root=root,
+                 subjects=subjects,
+                 sessions=sessions,
+                 tasks=tasks,
+                 acquisitions=acquisitions,
+                 runs=runs,
+                 processings=processings,
+                 recordings=recordings,
+                 spaces=spaces,
+                 splits=splits,
+                 descriptions=descriptions,
+                 suffixes=suffixes,
+                 extensions=extensions,
+                 datatypes=datatypes,
+                 check=check,
+                 ignore_json=True,
+             )
+             # Filter out _epo.fif files:
+             bids_paths = [
+                 bids_path
+                 for bids_path in bids_paths
+                 if not (bids_path.suffix == "epo" and bids_path.extension == ".fif")
+             ]
+         self.bids_paths = bids_paths
+         self.reader_fn = reader_fn
+         self.pool_size = pool_size
+
+     def __add__(self, other):
+         assert isinstance(other, BIDSIterableDataset)
+         return BIDSIterableDataset(
+             reader_fn=self.reader_fn,
+             bids_paths=self.bids_paths + other.bids_paths,
+             pool_size=self.pool_size,
+         )
+
+     def __iadd__(self, other):
+         assert isinstance(other, BIDSIterableDataset)
+         self.bids_paths += other.bids_paths
+         return self
+
+     def __iter__(self):
+         worker_info = get_worker_info()
+         if worker_info is None:  # single-process data loading, return the full iterator
+             bids_paths = self.bids_paths
+         else:  # in a worker process
+             # split workload
+             bids_paths = self.bids_paths[worker_info.id :: worker_info.num_workers]
+
+         pool = []
+         end = False
+         paths_it = iter(random.sample(bids_paths, k=len(bids_paths)))
+         while not (end and len(pool) == 0):
+             while not end and len(pool) < self.pool_size:
+                 try:
+                     bids_path = next(paths_it)
+                     ds = self.reader_fn(bids_path)
+                     if ds is None:
+                         print(f"Skipping {bids_path} as it is too short.")
+                         continue
+                     idx = iter(random.sample(range(len(ds)), k=len(ds)))
+                     pool.append((ds, idx))
+                 except StopIteration:
+                     end = True
+             i_pool = random.randint(0, len(pool) - 1)
+             ds, idx = pool[i_pool]
+             try:
+                 i_ds = next(idx)
+                 yield ds[i_ds]
+             except StopIteration:
+                 pool.pop(i_pool)
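For orientation on the new braindecode/datasets/experimental.py above (this example is not part of the diff): the class is a standard PyTorch IterableDataset, so once a reader_fn is supplied it can be streamed through a DataLoader. A minimal sketch, assuming a BIDS dataset on disk and the windowing helper used in the class docstring; the path is hypothetical:

# Illustrative sketch only, not from the released package.
import mne_bids
from torch.utils.data import DataLoader

from braindecode.datasets import BaseConcatDataset, BaseDataset
from braindecode.datasets.experimental import BIDSIterableDataset
from braindecode.preprocessing import create_fixed_length_windows


def reader_fn(path):
    # Read one BIDS recording and cut it into fixed-length windows,
    # mirroring the example in the class docstring.
    raw = mne_bids.read_raw_bids(path)
    return create_fixed_length_windows(
        BaseConcatDataset([BaseDataset(raw)]),
        window_size_samples=400,
        window_stride_samples=200,
    )


dataset = BIDSIterableDataset(
    reader_fn=reader_fn,
    pool_size=4,  # recordings kept open and sampled from at any one time
    root="root/of/my/bids/dataset/",  # hypothetical path
)
# IterableDataset: no shuffle/sampler arguments; randomness comes from the
# internal pool, and DataLoader workers split the recordings among themselves.
loader = DataLoader(dataset, batch_size=64, num_workers=2)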
@@ -15,7 +15,7 @@ from .eeginception_erp import EEGInceptionERP
  from .eeginception_mi import EEGInceptionMI
  from .eegitnet import EEGITNet
  from .eegminer import EEGMiner
- from .eegnet import EEGNetv1, EEGNetv4
+ from .eegnet import EEGNetv4
  from .eegnex import EEGNeX
  from .eegresnet import EEGResNet
  from .eegsimpleconv import EEGSimpleConv
@@ -65,7 +65,6 @@ __all__ = [
      "EEGInceptionMI",
      "EEGITNet",
      "EEGMiner",
-     "EEGNetv1",
      "EEGNetv4",
      "EEGNeX",
      "EEGResNet",
@@ -24,7 +24,7 @@ from braindecode.modules.attention import (
 
 
  class AttentionBaseNet(EEGModuleMixin, nn.Module):
- """
+ """AttentionBaseNet from Wimpff M et al. (2023) [Martin2023]_.
 
  :bdg-success:`Convolution` :bdg-info:`Small Attention`
 
@@ -57,9 +57,9 @@ class AttentionBaseNet(EEGModuleMixin, nn.Module):
 
  - *Operations.*
  - **Temporal conv** (:class:`torch.nn.Conv2d`) with kernel ``(1, L_t)`` creates a learned
- FIR-like filter bank with ``n_temporal_filters`` maps.
+ FIR-like filter bank with ``n_temporal_filters`` maps.
  - **Depthwise spatial conv** (:class:`torch.nn.Conv2d`, ``groups=n_temporal_filters``)
- with kernel ``(n_chans, 1)`` learns per-filter spatial projections over the full montage.
+ with kernel ``(n_chans, 1)`` learns per-filter spatial projections over the full montage.
  - **BatchNorm → ELU → AvgPool → Dropout** stabilize and downsample time.
  - Output shape: ``(B, F2, 1, T₁)`` with ``F2 = n_temporal_filters x spatial_expansion``.
 
@@ -71,8 +71,8 @@ class AttentionBaseNet(EEGModuleMixin, nn.Module):
 
  - *Operations.*
  - A ``1x1`` conv → BN → activation maps ``F2 → ch_dim`` without changing
- the temporal length ``T₁`` (shape: ``(B, ch_dim, 1, T₁)``).
- This sets the embedding width for the attention block.
+ the temporal length ``T₁`` (shape: ``(B, ch_dim, 1, T₁)``).
+ This sets the embedding width for the attention block.
 
  - :class:`_ChannelAttentionBlock` **(temporal refinement + channel attention)**
 
@@ -117,8 +117,10 @@ class AttentionBaseNet(EEGModuleMixin, nn.Module):
  - **Type.** Channel attention chosen by ``attention_mode`` (SE, ECA, CBAM, CAT, GSoP,
  EncNet, GE, GCT, SRM, CATLite). Most operate purely on channels; CBAM/CAT additionally
  include temporal attention.
+
  - **Shapes.** Input/Output around attention: ``(B, ch_dim, 1, T₁)``. Re-arrangements
  (if any) are internal to the module; the block returns the same shape before pooling.
+
  - **Role.** Re-weights channels (and optionally time) to highlight informative sources
  and suppress distractors, improving SNR ahead of the linear head.
 
@@ -163,10 +165,11 @@ class AttentionBaseNet(EEGModuleMixin, nn.Module):
  Notes
  -----
  - Sequence length after each stage is computed internally; the final classifier expects
- a flattened ``ch_dim x T₂`` vector.
+ a flattened ``ch_dim x T₂`` vector.
  - Attention operates on *channel* dimension by design; temporal gating exists only in
- specific variants (CBAM/CAT).
-
+ specific variants (CBAM/CAT).
+ - The paper and original code with more details about the methodological
+ choices are available at the [Martin2023]_ and [MartinCode]_.
  .. versionadded:: 0.9
 
  Parameters
@@ -195,18 +198,18 @@ class AttentionBaseNet(EEGModuleMixin, nn.Module):
  the depth of the network after the initial layer. Default is 16.
  attention_mode : str, optional
  The type of attention mechanism to apply. If `None`, no attention is applied.
- - "se" for Squeeze-and-excitation network
- - "gsop" for Global Second-Order Pooling
- - "fca" for Frequency Channel Attention Network
- - "encnet" for context encoding module
- - "eca" for Efficient channel attention for deep convolutional neural networks
- - "ge" for Gather-Excite
- - "gct" for Gated Channel Transformation
- - "srm" for Style-based Recalibration Module
- - "cbam" for Convolutional Block Attention Module
- - "cat" for Learning to collaborate channel and temporal attention
- from multi-information fusion
- - "catlite" for Learning to collaborate channel attention
+ - "se" for Squeeze-and-excitation network
+ - "gsop" for Global Second-Order Pooling
+ - "fca" for Frequency Channel Attention Network
+ - "encnet" for context encoding module
+ - "eca" for Efficient channel attention for deep convolutional neural networks
+ - "ge" for Gather-Excite
+ - "gct" for Gated Channel Transformation
+ - "srm" for Style-based Recalibration Module
+ - "cbam" for Convolutional Block Attention Module
+ - "cat" for Learning to collaborate channel and temporal attention
+ from multi-information fusion
+ - "catlite" for Learning to collaborate channel attention
  from multi-information fusion (lite version, cat w/o temporal attention)
  pool_length : int, default=8
  The length of the window for the average pooling operation.
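For context on how the attention_mode options documented above are used (not part of the diff), a minimal instantiation sketch; the shape values are hypothetical:

# Illustrative sketch only: AttentionBaseNet with squeeze-and-excitation
# channel attention, one of the attention_mode values listed above.
import torch
from braindecode.models import AttentionBaseNet

model = AttentionBaseNet(
    n_chans=22,
    n_outputs=4,
    n_times=1000,
    attention_mode="se",  # None disables the attention block entirely
)
logits = model(torch.randn(8, 22, 1000))  # (batch, n_chans, n_times) -> (batch, n_outputs)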
@@ -27,22 +27,25 @@ class EEGConformer(EEGModuleMixin, nn.Module):
  EEG-Conformer is a *convolution-first* model augmented with a *lightweight transformer
  encoder*. The end-to-end flow is:
 
- - (i) :class:`_PatchEmbedding` converts the continuous EEG into a compact sequence of tokens via a :class:`ShallowFBCSPNet` temporal–spatial conv stem and temporal pooling;
- - (ii) :class:`_TransformerEncoder applies small multi-head self-attention to integrate longer-range temporal context across tokens;
+ - (i) :class:`_PatchEmbedding` converts the continuous EEG into a compact sequence of tokens via a
+ :class:`ShallowFBCSPNet` temporal–spatial conv stem and temporal pooling;
+ - (ii) :class:`_TransformerEncoder` applies small multi-head self-attention to integrate
+ longer-range temporal context across tokens;
  - (iii) :class:`_ClassificationHead` aggregates the sequence and performs a linear readout.
- This preserves the strong inductive biases of shallow CNN filter banks while adding
- just enough attention to capture dependencies beyond the pooling horizon [song2022]_.
+ This preserves the strong inductive biases of shallow CNN filter banks while adding
+ just enough attention to capture dependencies beyond the pooling horizon [song2022]_.
 
  .. rubric:: Macro Components
 
  - :class:`_PatchEmbedding` **(Shallow conv stem → tokens)**
 
- - *Operations.*
- - A temporal convolution (`:class:`torch.nn.Conv2d`) ``(1 x L_t)`` forms a data-driven "filter bank";
- - A spatial convolution (`:class:`torch.nn.Conv2d`) (n_chans x 1)`` projects across electrodes, collapsing the channel axis into a virtual channel.
- - **Normalization function** `:class:torch.nn.BatchNorm`
- - **Activation function** `:class:torch.nn.ELU`
- - **Average Pooling** `:class:torch.nn.AvgPool` along time (kernel ``(1, P)`` with stride ``(1, S)``)
+ - *Operations.*
+ - A temporal convolution (`:class:torch.nn.Conv2d`) ``(1 x L_t)`` forms a data-driven "filter bank";
+ - A spatial convolution (`:class:torch.nn.Conv2d`) (n_chans x 1)`` projects across electrodes,
+ collapsing the channel axis into a virtual channel.
+ - **Normalization function** :class:`torch.nn.BatchNorm`
+ - **Activation function** :class:`torch.nn.ELU`
+ - **Average Pooling** :class:`torch.nn.AvgPool` along time (kernel ``(1, P)`` with stride ``(1, S)``)
  - final ``1x1`` :class:`torch.nn.Linear` projection.
 
  The result is rearranged to a token sequence ``(B, S_tokens, D)``, where ``D = n_filters_time``.
@@ -53,7 +56,7 @@ class EEGConformer(EEGModuleMixin, nn.Module):
 
 
  - :class:`_TransformerEncoder` **(context over temporal tokens)**
- - *Operations.*
+ - *Operations.*
  - A stack of ``att_depth`` encoder blocks. :class:`_TransformerEncoderBlock`
  - Each block applies LayerNorm :class:`torch.nn.LayerNorm`
  - Multi-Head Self-Attention (``att_heads``) with dropout + residual :class:`MultiHeadAttention` (:class:`torch.nn.Dropout`)
@@ -67,7 +70,7 @@ class EEGConformer(EEGModuleMixin, nn.Module):
 
  - :class:`ClassificationHead` **(aggregation + readout)**
 
- - *Operations*.
+ - *Operations*.
  - Flatten, :class:`torch.nn.Flatten` the sequence ``(B, S_tokens·D)`` -
  - MLP (:class:`torch.nn.Linear` → activation (default: :class:`torch.nn.ELU`) → :class:`torch.nn.Dropout` → :class:`torch.nn.Linear`)
  - final Linear to classes.
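To make the tokenize → attend → read out flow of EEGConformer concrete (not part of the diff), a minimal sketch with hypothetical shapes:

# Illustrative sketch only: the conv stem tokenizes the signal, the transformer
# encoder contextualizes the tokens, and the head classifies; att_depth and
# att_heads keep their library defaults here.
import torch
from braindecode.models import EEGConformer

model = EEGConformer(n_chans=22, n_outputs=4, n_times=1000)
logits = model(torch.randn(8, 22, 1000))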
@@ -31,8 +31,7 @@ class EEGNetv4(EEGModuleMixin, nn.Sequential):
 
  .. rubric:: Architectural Overview
 
- EEGNetv4 is a compact convolutional network designed for EEG decoding with a
- pipeline that mirrors classical EEG processing:
+ EEGNetv4 is a compact convolutional network designed for EEG decoding with a pipeline that mirrors classical EEG processing:
  - (i) learn temporal frequency-selective filters,
  - (ii) learn spatial filters for those frequencies, and
  - (iii) condense features with depthwise–separable convolutions before a lightweight classifier.
@@ -56,16 +55,16 @@ class EEGNetv4(EEGModuleMixin, nn.Sequential):
 
  .. rubric:: Convolutional Details
 
- **Temporal.** The initial temporal convs serve as a *learned filter bank*:
- long 1-D kernels (implemented as 2-D with singleton spatial extent) emphasize oscillatory bands and transients.
- Because this stage is linear prior to BN/ELU, kernels can be analyzed as FIR filters to reveal each feature’s spectrum [Lawhern2018]_.
+ - **Temporal.** The initial temporal convs serve as a *learned filter bank*:
+ long 1-D kernels (implemented as 2-D with singleton spatial extent) emphasize oscillatory bands and transients.
+ Because this stage is linear prior to BN/ELU, kernels can be analyzed as FIR filters to reveal each feature’s spectrum [Lawhern2018]_.
 
- **Spatial.** The depthwise spatial conv spans the full channel axis (kernel height = #electrodes; temporal size = 1).
- With ``groups = F1``, each temporal filter learns its own set of ``D`` spatial projections—akin to CSP, learned end-to-end and
- typically regularized with max-norm.
+ - **Spatial.** The depthwise spatial conv spans the full channel axis (kernel height = #electrodes; temporal size = 1).
+ With ``groups = F1``, each temporal filter learns its own set of ``D`` spatial projections—akin to CSP, learned end-to-end and
+ typically regularized with max-norm.
 
- **Spectral.** No explicit Fourier/wavelet transform is used. Frequency structure
- is captured implicitly by the temporal filter bank; later depthwise temporal kernels act as short-time integrators/refiners.
+ - **Spectral.** No explicit Fourier/wavelet transform is used. Frequency structure
+ is captured implicitly by the temporal filter bank; later depthwise temporal kernels act as short-time integrators/refiners.
 
  .. rubric:: Additional Comments
 
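The FIR-filter reading of EEGNetv4's temporal stage suggests a simple inspection recipe (not part of the diff; looking the layer up by type is an assumption, since layer names may differ between versions):

# Illustrative sketch only: view the first temporal convolution of EEGNetv4
# as a bank of FIR filters and compute each filter's magnitude spectrum.
import torch
from braindecode.models import EEGNetv4

model = EEGNetv4(n_chans=22, n_outputs=4, n_times=1000)
first_conv = next(m for m in model.modules() if isinstance(m, torch.nn.Conv2d))
kernels = first_conv.weight.detach().squeeze()  # -> (F1, kernel_length)
spectra = torch.fft.rfft(kernels, n=512).abs()  # per-filter frequency response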
@@ -16,9 +16,124 @@ from braindecode.modules import Conv2dWithConstraint, LinearWithConstraint
  class EEGNeX(EEGModuleMixin, nn.Module):
  """EEGNeX model from Chen et al. (2024) [eegnex]_.
 
+ :bdg-success:`Convolution`
+
  .. figure:: https://braindecode.org/dev/_static/model/eegnex.jpg
  :align: center
  :alt: EEGNeX Architecture
+ :width: 620px
+
+ .. rubric:: Architectural Overview
+
+ EEGNeX is a **purely convolutional** architecture that refines the EEGNet-style stem
+ and deepens the temporal stack with **dilated temporal convolutions**. The end-to-end
+ flow is:
+
+ - (i) **Block-1/2**: two temporal convolutions ``(1 x L)`` with BN refine a
+ learned FIR-like *temporal filter bank* (no pooling yet);
+ - (ii) **Block-3**: depthwise **spatial** convolution across electrodes
+ ``(n_chans x 1)`` with max-norm constraint, followed by ELU → AvgPool (time) → Dropout;
+ - (iii) **Block-4/5**: two additional **temporal** convolutions with increasing **dilation**
+ to expand the receptive field; the last block applies ELU → AvgPool → Dropout → Flatten;
+ - (iv) **Classifier**: a max-norm–constrained linear layer.
+
+ The published work positions EEGNeX as a compact, conv-only alternative that consistently
+ outperforms prior baselines across MOABB-style benchmarks, with the popular
+ “EEGNeX-8,32” shorthand denoting *8 temporal filters* and *kernel length 32*.
+
+
+ .. rubric:: Macro Components
+
+ - **Block-1 / Block-2 — Temporal filter (learned).**
+
+ - *Operations.*
+ - :class:`torch.nn.Conv2d` with kernels ``(1, L)``
+ - :class:`torch.nn.BatchNorm2d` (no nonlinearity until Block-3, mirroring a linear FIR analysis stage).
+ These layers set up frequency-selective detectors before spatial mixing.
+
+ - *Interpretability.* Kernels can be inspected as FIR filters; two stacked temporal
+ convs allow longer effective kernels without parameter blow-up.
+
+ - **Block-3 — Spatial projection + condensation.**
+
+ - *Operations.*
+ - :class:`braindecode.modules.Conv2dWithConstraint` with kernel``(n_chans, 1)``
+ and ``groups = filter_2`` (depthwise across filters)
+ - :class:`torch.nn.BatchNorm2d`
+ - :class:`torch.nn.ELU`
+ - :class:`torch.nn.AvgPool2d` (time)
+ - :class:`torch.nn.Dropout`.
+
+ *Role.* Learns per-filter spatial patterns over the **full montage** while temporal
+ pooling stabilizes and compresses features; max-norm encourages well-behaved spatial
+ weights similar to EEGNet practice.
+
+ - **Block-4 / Block-5 — Dilated temporal integration.**
+
+ - *Operations.*
+ - :class:`torch.nn.Conv2d` with kernels ``(1, k)`` and **dilations**
+ (e.g., 2 then 4);
+ - :class:`torch.nn.BatchNorm2d`
+ - :class:`torch.nn.ELU`
+ - :class:`torch.nn.AvgPool2d` (time)
+ - :class:`torch.nn.Dropout`
+ - :class:`torch.nn.Flatten`.
+
+ *Role.* Expands the temporal receptive field efficiently to capture rhythms and
+ long-range context after condensation.
+
+ - **Final Classifier — Max-norm linear.**
+
+ - *Operations.*
+ - :class:`braindecode.modules.LinearWithConstraint` maps the flattened
+ vector to the target classes; the max-norm constraint regularizes the readout.
+
+
+ .. rubric:: Convolutional Details
+
+ - **Temporal (where time-domain patterns are learned).**
+ Blocks 1-2 learn the primary filter bank (oscillations/transients), while Blocks 4-5
+ use **dilation** to integrate over longer horizons without extra pooling. The final
+ AvgPool in Block-5 sets the output token rate and helps noise suppression.
+
+ - **Spatial (how electrodes are processed).**
+ A *single* depthwise spatial conv (Block-3) spans the entire electrode set
+ (kernel ``(n_chans, 1)``), producing per-temporal-filter topographies; no cross-filter
+ mixing occurs at this stage, aiding interpretability.
+
+ - **Spectral (how frequency content is captured).**
+ Frequency selectivity emerges from the learned temporal kernels; dilation broadens effective
+ bandwidth coverage by composing multiple scales.
+
+ .. rubric:: Additional Mechanisms
+
+ - **EEGNeX-8,32 naming.** “8,32” indicates *8 temporal filters* and *kernel length 32*,
+ reflecting the paper's ablation path from EEGNet-8,2 toward thicker temporal kernels
+ and a deeper conv stack.
+ - **Max-norm constraints.** Spatial (Block-3) and final linear layers use max-norm
+ regularization—standard in EEG CNNs—to reduce overfitting and encourage stable spatial
+ patterns.
+
+ .. rubric:: Usage and Configuration
+
+ - **Kernel schedule.** Start with the canonical **EEGNeX-8,32** (``filter_1=8``,
+ ``kernel_block_1_2=32``) and keep **Block-3** depth multiplier modest (e.g., 2) to match
+ the paper's “pure conv” profile.
+ - **Pooling vs. dilation.** Use pooling in Blocks 3 and 5 to control compute and variance;
+ increase dilations (Blocks 4-5) to widen temporal context when windows are short.
+ - **Regularization.** Combine dropout (Blocks 3 & 5) with max-norm on spatial and
+ classifier layers; prefer ELU activations for stable training on small EEG datasets.
+
+
+ Notes
+ -----
+ - The braindecode implementation follows the paper's conv-only design with five blocks
+ and reproduces the depthwise spatial step and dilated temporal stack. See the class
+ reference for exact kernel sizes, dilations, and pooling defaults. You can check the
+ original implementation at [EEGNexCode]_.
+
+ .. versionadded:: 1.1
+
 
  Parameters
  ----------
@@ -45,12 +160,6 @@ class EEGNeX(EEGModuleMixin, nn.Module):
  avg_pool_block5 : tuple[int, int], optional
  Pooling size for block 5. Default is (1, 8).
 
- Notes
- -----
- This implementation is not guaranteed to be correct, has not been checked
- by original authors, only reimplemented from the paper description and
- source code in tensorflow [EEGNexCode]_.
-
  References
  ----------
  .. [eegnex] Chen, X., Teng, X., Chen, H., Pan, Y., & Geyer, P. (2024).
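To anchor the “EEGNeX-8,32” naming from the docstring above in code (not part of the diff), a minimal sketch; it assumes the library defaults follow the paper's canonical configuration of 8 temporal filters and kernel length 32:

# Illustrative sketch only: canonical EEGNeX on a 22-channel, 1000-sample window.
import torch
from braindecode.models import EEGNeX

model = EEGNeX(n_chans=22, n_outputs=4, n_times=1000)
logits = model(torch.randn(8, 22, 1000))  # (batch, n_chans, n_times) -> (batch, n_outputs)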