braindecode 1.2.0.dev180217551.tar.gz → 1.2.0.dev182051616.tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.

Potentially problematic release: this version of braindecode might be problematic.

Files changed (126)
  1. {braindecode-1.2.0.dev180217551/braindecode.egg-info → braindecode-1.2.0.dev182051616}/PKG-INFO +5 -1
  2. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/models/__init__.py +1 -2
  3. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/models/attentionbasenet.py +5 -4
  4. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/models/eegnex.py +2 -7
  5. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/models/summary.csv +40 -41
  6. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/models/util.py +0 -1
  7. braindecode-1.2.0.dev182051616/braindecode/version.py +1 -0
  8. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616/braindecode.egg-info}/PKG-INFO +5 -1
  9. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode.egg-info/SOURCES.txt +5 -2
  10. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode.egg-info/requires.txt +4 -0
  11. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/docs/api.rst +6 -7
  12. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/docs/conf.py +22 -7
  13. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/docs/index.rst +1 -1
  14. braindecode-1.2.0.dev182051616/docs/models/models.rst +76 -0
  15. braindecode-1.2.0.dev182051616/docs/models/models_categorization.rst +174 -0
  16. braindecode-1.2.0.dev182051616/docs/models/models_table.rst +139 -0
  17. braindecode-1.2.0.dev182051616/docs/models/models_visualization.rst +21 -0
  18. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/docs/whats_new.rst +2 -0
  19. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/pyproject.toml +5 -1
  20. braindecode-1.2.0.dev180217551/braindecode/version.py +0 -1
  21. braindecode-1.2.0.dev180217551/docs/models_summary.rst +0 -134
  22. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/LICENSE.txt +0 -0
  23. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/MANIFEST.in +0 -0
  24. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/NOTICE.txt +0 -0
  25. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/README.rst +0 -0
  26. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/__init__.py +0 -0
  27. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/augmentation/__init__.py +0 -0
  28. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/augmentation/base.py +0 -0
  29. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/augmentation/functional.py +0 -0
  30. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/augmentation/transforms.py +0 -0
  31. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/classifier.py +0 -0
  32. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/datasets/__init__.py +0 -0
  33. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/datasets/base.py +0 -0
  34. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/datasets/bbci.py +0 -0
  35. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/datasets/bcicomp.py +0 -0
  36. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/datasets/bids.py +0 -0
  37. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/datasets/mne.py +0 -0
  38. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/datasets/moabb.py +0 -0
  39. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/datasets/nmt.py +0 -0
  40. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/datasets/sleep_physio_challe_18.py +0 -0
  41. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/datasets/sleep_physionet.py +0 -0
  42. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/datasets/tuh.py +0 -0
  43. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/datasets/xy.py +0 -0
  44. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/datautil/__init__.py +0 -0
  45. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/datautil/serialization.py +0 -0
  46. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/datautil/util.py +0 -0
  47. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/eegneuralnet.py +0 -0
  48. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/functional/__init__.py +0 -0
  49. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/functional/functions.py +0 -0
  50. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/functional/initialization.py +0 -0
  51. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/models/atcnet.py +0 -0
  52. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/models/base.py +0 -0
  53. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/models/biot.py +0 -0
  54. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/models/contrawr.py +0 -0
  55. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/models/ctnet.py +0 -0
  56. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/models/deep4.py +0 -0
  57. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/models/deepsleepnet.py +0 -0
  58. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/models/eegconformer.py +0 -0
  59. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/models/eeginception_erp.py +0 -0
  60. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/models/eeginception_mi.py +0 -0
  61. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/models/eegitnet.py +0 -0
  62. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/models/eegminer.py +0 -0
  63. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/models/eegnet.py +0 -0
  64. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/models/eegresnet.py +0 -0
  65. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/models/eegsimpleconv.py +0 -0
  66. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/models/eegtcnet.py +0 -0
  67. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/models/fbcnet.py +0 -0
  68. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/models/fblightconvnet.py +0 -0
  69. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/models/fbmsnet.py +0 -0
  70. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/models/hybrid.py +0 -0
  71. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/models/ifnet.py +0 -0
  72. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/models/labram.py +0 -0
  73. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/models/msvtnet.py +0 -0
  74. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/models/sccnet.py +0 -0
  75. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/models/shallow_fbcsp.py +0 -0
  76. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/models/signal_jepa.py +0 -0
  77. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/models/sinc_shallow.py +0 -0
  78. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/models/sleep_stager_blanco_2020.py +0 -0
  79. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/models/sleep_stager_chambon_2018.py +0 -0
  80. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/models/sleep_stager_eldele_2021.py +0 -0
  81. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/models/sparcnet.py +0 -0
  82. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/models/syncnet.py +0 -0
  83. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/models/tcn.py +0 -0
  84. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/models/tidnet.py +0 -0
  85. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/models/tsinception.py +0 -0
  86. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/models/usleep.py +0 -0
  87. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/modules/__init__.py +0 -0
  88. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/modules/activation.py +0 -0
  89. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/modules/attention.py +0 -0
  90. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/modules/blocks.py +0 -0
  91. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/modules/convolution.py +0 -0
  92. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/modules/filter.py +0 -0
  93. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/modules/layers.py +0 -0
  94. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/modules/linear.py +0 -0
  95. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/modules/parametrization.py +0 -0
  96. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/modules/stats.py +0 -0
  97. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/modules/util.py +0 -0
  98. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/modules/wrapper.py +0 -0
  99. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/preprocessing/__init__.py +0 -0
  100. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/preprocessing/mne_preprocess.py +0 -0
  101. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/preprocessing/preprocess.py +0 -0
  102. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/preprocessing/windowers.py +0 -0
  103. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/regressor.py +0 -0
  104. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/samplers/__init__.py +0 -0
  105. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/samplers/base.py +0 -0
  106. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/samplers/ssl.py +0 -0
  107. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/training/__init__.py +0 -0
  108. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/training/callbacks.py +0 -0
  109. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/training/losses.py +0 -0
  110. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/training/scoring.py +0 -0
  111. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/util.py +0 -0
  112. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/visualization/__init__.py +0 -0
  113. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/visualization/confusion_matrices.py +0 -0
  114. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode/visualization/gradients.py +0 -0
  115. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode.egg-info/dependency_links.txt +0 -0
  116. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/braindecode.egg-info/top_level.txt +0 -0
  117. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/docs/Makefile +0 -0
  118. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/docs/_templates/autosummary/class.rst +0 -0
  119. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/docs/_templates/autosummary/function.rst +0 -0
  120. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/docs/cite.rst +0 -0
  121. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/docs/help.rst +0 -0
  122. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/docs/install/install.rst +0 -0
  123. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/docs/install/install_pip.rst +0 -0
  124. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/docs/install/install_source.rst +0 -0
  125. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/docs/sg_execution_times.rst +0 -0
  126. {braindecode-1.2.0.dev180217551 → braindecode-1.2.0.dev182051616}/setup.cfg +0 -0
PKG-INFO
@@ -1,6 +1,6 @@
 Metadata-Version: 2.4
 Name: braindecode
-Version: 1.2.0.dev180217551
+Version: 1.2.0.dev182051616
 Summary: Deep learning software to decode EEG, ECG or MEG signals
 Author-email: Robin Tibor Schirrmeister <robintibor@gmail.com>
 Maintainer-email: Alexandre Gramfort <agramfort@meta.com>, Bruno Aristimunha Pinto <b.aristimunha@gmail.com>, Robin Tibor Schirrmeister <robintibor@gmail.com>
@@ -49,6 +49,9 @@ Requires-Dist: mypy; extra == "tests"
 Provides-Extra: docs
 Requires-Dist: sphinx_gallery; extra == "docs"
 Requires-Dist: sphinx_rtd_theme; extra == "docs"
+Requires-Dist: sphinx-autodoc-typehints; extra == "docs"
+Requires-Dist: sphinx-autobuild; extra == "docs"
+Requires-Dist: sphinxcontrib-bibtex; extra == "docs"
 Requires-Dist: pydata_sphinx_theme; extra == "docs"
 Requires-Dist: numpydoc; extra == "docs"
 Requires-Dist: memory_profiler; extra == "docs"
@@ -59,6 +62,7 @@ Requires-Dist: lightning; extra == "docs"
 Requires-Dist: seaborn; extra == "docs"
 Requires-Dist: pre-commit; extra == "docs"
 Requires-Dist: openneuro-py; extra == "docs"
+Requires-Dist: plotly; extra == "docs"
 Provides-Extra: all
 Requires-Dist: braindecode[docs,moabb,tests]; extra == "all"
 Dynamic: license-file
braindecode/models/__init__.py
@@ -15,7 +15,7 @@ from .eeginception_erp import EEGInceptionERP
 from .eeginception_mi import EEGInceptionMI
 from .eegitnet import EEGITNet
 from .eegminer import EEGMiner
-from .eegnet import EEGNetv1, EEGNetv4
+from .eegnet import EEGNetv4
 from .eegnex import EEGNeX
 from .eegresnet import EEGResNet
 from .eegsimpleconv import EEGSimpleConv
@@ -65,7 +65,6 @@ __all__ = [
     "EEGInceptionMI",
     "EEGITNet",
     "EEGMiner",
-    "EEGNetv1",
     "EEGNetv4",
     "EEGNeX",
     "EEGResNet",
braindecode/models/attentionbasenet.py
@@ -24,7 +24,7 @@ from braindecode.modules.attention import (


 class AttentionBaseNet(EEGModuleMixin, nn.Module):
-    """
+    """AttentionBaseNet from Wimpff M et al. (2023) [Martin2023]_.

     :bdg-success:`Convolution` :bdg-info:`Small Attention`

@@ -165,10 +165,11 @@ class AttentionBaseNet(EEGModuleMixin, nn.Module):
     Notes
     -----
     - Sequence length after each stage is computed internally; the final classifier expects
-      a flattened ``ch_dim x T₂`` vector.
+      a flattened ``ch_dim x T₂`` vector.
     - Attention operates on the *channel* dimension by design; temporal gating exists only in
-      specific variants (CBAM/CAT).
-
+      specific variants (CBAM/CAT).
+    - The paper and original code, with more details about the methodological
+      choices, are available in [Martin2023]_ and [MartinCode]_.

     .. versionadded:: 0.9

     Parameters
braindecode/models/eegnex.py
@@ -129,7 +129,8 @@ class EEGNeX(EEGModuleMixin, nn.Module):
     -----
     - The braindecode implementation follows the paper's conv-only design with five blocks
       and reproduces the depthwise spatial step and dilated temporal stack. See the class
-      reference for exact kernel sizes, dilations, and pooling defaults.
+      reference for exact kernel sizes, dilations, and pooling defaults. The original
+      implementation is available at [EEGNexCode]_.

     .. versionadded:: 1.1

@@ -159,12 +160,6 @@ class EEGNeX(EEGModuleMixin, nn.Module):
     avg_pool_block5 : tuple[int, int], optional
         Pooling size for block 5. Default is (1, 8).

-    Notes
-    -----
-    This implementation is not guaranteed to be correct, has not been checked
-    by original authors, only reimplemented from the paper description and
-    source code in tensorflow [EEGNexCode]_.
-
     References
     ----------
     .. [eegnex] Chen, X., Teng, X., Chen, H., Pan, Y., & Geyer, P. (2024).
braindecode/models/summary.csv
@@ -1,41 +1,40 @@
-Model,Paradigm,Type,Freq(Hz),Hyperparameters,#Parameters,get_#Parameters
-ATCNet,General,Classification,250,"n_chans, n_outputs, n_times",113732,"ATCNet(n_chans=22, n_outputs=4, n_times=1000)"
-AttentionBaseNet,Motor Imagery,Classification,250,"n_chans, n_outputs, n_times",3692,"AttentionBaseNet(n_chans=22, n_outputs=4, n_times=1000)"
-BDTCN,Normal/Abnormal,Classification,100,"n_chans, n_outputs, n_times",456502,"BDTCN(n_chans=21, n_outputs=2, n_times=6000, n_blocks=5, n_filters=55, kernel_size=16)"
-BIOT,"Sleep Staging, Epilepsy",Classification,200,"n_chans, n_outputs",3183879,"BIOT(n_chans=2, n_outputs=5, n_times=6000)"
-ContraWR,Sleep Staging,"Classification, Embedding",125,"n_chans, n_outputs, sfreq",1160165,"ContraWR(n_chans=2, n_outputs=5, n_times=3750, emb_size=256, sfreq=125)"
-CTNet,Motor Imagery,Classification,250,"n_chans, n_outputs, n_times",26900,"CTNet(n_chans=22, n_outputs=4, n_times=1000, n_filters_time=8, kernel_size=16, heads=2, emb_size=16)"
-Deep4Net,Motor Imagery,Classification,250,"n_chans, n_outputs, n_times",282879,"Deep4Net(n_chans=22, n_outputs=4, n_times=1000)"
-DeepSleepNet,Sleep Staging,Classification,256,"n_chans, n_outputs",24744837,"DeepSleepNet(n_chans=1, n_outputs=5, n_times=7680, sfreq=256)"
-EEGConformer,General,Classification,250,"n_chans, n_outputs, n_times",789572,"EEGConformer(n_chans=22, n_outputs=4, n_times=1000)."
-EEGInceptionERP,"ERP, SSVEP",Classification,128,"n_chans, n_outputs",14926,"EEGInceptionERP(n_chans=8, n_outputs=2, n_times=128, sfreq=128)"
-EEGInceptionMI,Motor Imagery,Classification,250,"n_chans, n_outputs, n_times",558028,"EEGInceptionMI(n_chans=22, n_outputs=4, n_times=1000, n_convs=5, n_filters=12)"
-EEGITNet,Motor Imagery,Classification,125,"n_chans, n_outputs, n_times",5212,"EEGITNet(n_chans=22, n_outputs=4, n_times=500)"
-EEGNetv1,General,Classification,128,"n_chans, n_outputs, n_times",3052,"EEGNetv1(n_chans=22, n_outputs=4, n_times=512)"
-EEGNetv4,General,Classification,128,"n_chans, n_outputs, n_times",2484,"EEGNetv4(n_chans=22, n_outputs=4, n_times=512)"
-EEGNeX,Motor Imagery,Classification,125,"n_chans, n_outputs, n_times",55940,"EEGNeX(n_chans=22, n_outputs=4, n_times=500)"
-EEGMiner,Emotion Recognition,Classification,128,"n_chans, n_outputs, n_times, sfreq",7572,"EEGMiner(n_chans=62, n_outputs=2, n_times=2560, sfreq=128)"
-EEGResNet,General,Classification,250,"n_chans, n_outputs, n_times",247484,"EEGResNet(n_chans=22, n_outputs=4, n_times=1000)"
-EEGSimpleConv,Motor Imagery,Classification,80,"n_chans, n_outputs, sfreq",730404,"EEGSimpleConv(n_chans=22, n_outputs=4, n_times=320, sfreq=80)"
-EEGTCNet,Motor Imagery,Classification,250,"n_chans, n_outputs",4516,"EEGTCNet(n_chans=22, n_outputs=4, n_times=1000, kern_length=32)"
-Labram,General,"Classification, Embedding",200,"n_chans, n_outputs, n_times",5866180,"Labram(n_chans=22, n_outputs=4, n_times=1000, sfreq=250)"
-MSVTNet,Motor Imagery,Classification,250,"n_chans, n_outputs, n_times",75494," MSVTNet(n_chans=22, n_outputs=4, n_times=1000)"
-SCCNet,Motor Imagery,Classification,125,"n_chans, n_outputs, n_times, sfreq",12070,"SCCNet(n_chans=22, n_outputs=4, n_times=1000, sfreq=125)"
-SignalJEPA,"Motor Imagery, ERP, SSVEP",Embedding,128,"n_times, chs_info",3456882,"SignalJEPA(n_times=512, chs_info=Lee2019_MI().get_data(subjects=[1])[1]['0']['1train'].info[""chs""][:62])"
-SignalJEPA_Contextual,"Motor Imagery, ERP, SSVEP",Classification,128,"n_outputs, n_times, chs_info",3459184,"SignalJEPA_Contextual(n_outputs=2, input_window_seconds=4.19, sfreq=128, chs_info=Lee2019_MI().get_data(subjects=[1])[1]['0']['1train'].info[""chs""][:62])"
-SignalJEPA_PostLocal,"Motor Imagery, ERP, SSVEP",Classification,128,"n_chans, n_outputs, n_times",16142,"SignalJEPA_PostLocal(n_chans=62, n_outputs=2, input_window_seconds=4.19, sfreq=128)"
-SignalJEPA_PreLocal,"Motor Imagery, ERP, SSVEP",Classification,128,"n_outputs, n_times, chs_info",16142,"SignalJEPA_PreLocal(n_chans=62, n_outputs=2, input_window_seconds=4.19, sfreq=128)"
-SincShallowNet,Motor Imagery,Classification,250,"n_chans, n_outputs, n_times, sfreq",21892,"SincShallowNet(n_chans=22, n_outputs=4, n_times=1000, sfreq=250)"
-ShallowFBCSPNet,Motor Imagery,Classification,250,"n_chans, n_outputs, n_times",46084,"ShallowFBCSPNet(n_chans=22, n_outputs=4, n_times=1000, sfreq=250)"
-SleepStagerBlanco2020,Sleep Staging,Classification,100,"n_chans, n_outputs, n_times",2845,"SleepStagerBlanco2020(n_chans=2, n_outputs=5, n_times=3000, sfreq=100)"
-SleepStagerChambon2018,Sleep Staging,Classification,128,"n_chans, n_outputs, n_times, sfreq",5835,"SleepStagerChambon2018(n_chans=2, n_outputs=5, n_times=3840, sfreq=128)"
-SleepStagerEldele2021,Sleep Staging,Classification,100,"n_chans, n_outputs, n_times, sfreq",719925,"SleepStagerEldele2021(n_chans=2, n_outputs=5, n_times=3000, sfreq=100)"
-SPARCNet,Epilepsy,Classification,200,"n_chans, n_outputs, n_times",1141921,"SPARCNet(n_chans=16, n_outputs=6, n_times=2000, sfreq=200)"
-SyncNet,"Emotion Recognition, Alcoholism",Classification,256,"n_chans, n_outputs, n_times",554,"SyncNet(n_chans=62, n_outputs=3, n_times=5120, sfreq=256)"
-TSceptionV1,Emotion Recognition,Classification,256,"n_chans, n_outputs, n_times, sfreq",2187206,"TSceptionV1(n_chans=62, n_outputs=3, n_times=5120, sfreq=256)"
-TIDNet,General,Classification,250,"n_chans, n_outputs, n_times",240404,"TIDNet(n_chans=22, n_outputs=4, n_times=1000)"
-USleep,Sleep Staging,Classification,128,"n_chans, n_outputs, n_times, sfreq",2482011,"USleep(n_chans=2, n_outputs=5, n_times=3000, sfreq=100)"
-FBCNet,Motor Imagery,Classification,250,"n_chans, n_outputs, n_times, sfreq",11812,"FCNet(n_chans=22, n_outputs=4, n_times=1000, sfreq=250)"
-FBMSNet,Motor Imagery,Classification,250,"n_chans, n_outputs, n_times, sfreq",16231,"FBMSNet(n_chans=22, n_outputs=4, n_times=1000, sfreq=250)"
-FBLightConvNet,Motor Imagery,Classification,250,"n_chans, n_outputs, n_times, sfreq",6596,"FBLightConvNet(n_chans=22, n_outputs=4, n_times=1000, sfreq=250)"
-IFNet,Motor Imagery,Classification,250,"n_chans, n_outputs, n_times, sfreq",9860,"IFNet(n_chans=22, n_outputs=4, n_times=1000, sfreq=250)"
+Model,Paradigm,Type,Freq(Hz),Hyperparameters,#Parameters,get_#Parameters,Categorization
+ATCNet,General,Classification,250,"n_chans, n_outputs, n_times",113732,"ATCNet(n_chans=22, n_outputs=4, n_times=1000)","Convolution,Recurrent,Small Attention"
+AttentionBaseNet,Motor Imagery,Classification,250,"n_chans, n_outputs, n_times",3692,"AttentionBaseNet(n_chans=22, n_outputs=4, n_times=1000)","Convolution,Small Attention"
+BDTCN,Normal Abnormal,Classification,100,"n_chans, n_outputs, n_times",456502,"BDTCN(n_chans=21, n_outputs=2, n_times=6000, n_blocks=5, n_filters=55, kernel_size=16)","Convolution,Recurrent"
+BIOT,"Sleep Staging, Epilepsy",Classification,200,"n_chans, n_outputs",3183879,"BIOT(n_chans=2, n_outputs=5, n_times=6000)","Large Language Model"
+ContraWR,Sleep Staging,"Classification, Embedding",125,"n_chans, n_outputs, sfreq",1160165,"ContraWR(n_chans=2, n_outputs=5, n_times=3750, emb_size=256, sfreq=125)",Convolution
+CTNet,Motor Imagery,Classification,250,"n_chans, n_outputs, n_times",26900,"CTNet(n_chans=22, n_outputs=4, n_times=1000, n_filters_time=8, kernel_size=16, heads=2, emb_size=16)","Convolution,Small Attention"
+Deep4Net,Motor Imagery,Classification,250,"n_chans, n_outputs, n_times",282879,"Deep4Net(n_chans=22, n_outputs=4, n_times=1000)","Convolution"
+DeepSleepNet,Sleep Staging,Classification,256,"n_chans, n_outputs",24744837,"DeepSleepNet(n_chans=1, n_outputs=5, n_times=7680, sfreq=256)","Convolution"
+EEGConformer,General,Classification,250,"n_chans, n_outputs, n_times",789572,"EEGConformer(n_chans=22, n_outputs=4, n_times=1000)","Convolution,Small Attention"
+EEGInceptionERP,"ERP, SSVEP",Classification,128,"n_chans, n_outputs",14926,"EEGInceptionERP(n_chans=8, n_outputs=2, n_times=128, sfreq=128)","Convolution"
+EEGInceptionMI,Motor Imagery,Classification,250,"n_chans, n_outputs, n_times",558028,"EEGInceptionMI(n_chans=22, n_outputs=4, n_times=1000, n_convs=5, n_filters=12)","Convolution"
+EEGITNet,Motor Imagery,Classification,125,"n_chans, n_outputs, n_times",5212,"EEGITNet(n_chans=22, n_outputs=4, n_times=500)","Convolution,Recurrent"
+EEGNetv4,General,Classification,128,"n_chans, n_outputs, n_times",2484,"EEGNetv4(n_chans=22, n_outputs=4, n_times=512)","Convolution"
+EEGNeX,Motor Imagery,Classification,125,"n_chans, n_outputs, n_times",55940,"EEGNeX(n_chans=22, n_outputs=4, n_times=500)","Convolution"
+EEGMiner,Emotion Recognition,Classification,128,"n_chans, n_outputs, n_times, sfreq",7572,"EEGMiner(n_chans=62, n_outputs=2, n_times=2560, sfreq=128)","Convolution"
+EEGResNet,General,Classification,250,"n_chans, n_outputs, n_times",247484,"EEGResNet(n_chans=22, n_outputs=4, n_times=1000)","Interpretability"
+EEGSimpleConv,Motor Imagery,Classification,80,"n_chans, n_outputs, sfreq",730404,"EEGSimpleConv(n_chans=22, n_outputs=4, n_times=320, sfreq=80)","Convolution"
+EEGTCNet,Motor Imagery,Classification,250,"n_chans, n_outputs",4516,"EEGTCNet(n_chans=22, n_outputs=4, n_times=1000, kern_length=32)","Convolution,Recurrent"
+Labram,General,"Classification, Embedding",200,"n_chans, n_outputs, n_times",5866180,"Labram(n_chans=22, n_outputs=4, n_times=1000, sfreq=250)","Convolution,Large Language Model"
+MSVTNet,Motor Imagery,Classification,250,"n_chans, n_outputs, n_times",75494," MSVTNet(n_chans=22, n_outputs=4, n_times=1000)","Convolution,Recurrent,Small Attention"
+SCCNet,Motor Imagery,Classification,125,"n_chans, n_outputs, n_times, sfreq",12070,"SCCNet(n_chans=22, n_outputs=4, n_times=1000, sfreq=125)","Convolution"
+SignalJEPA,"Motor Imagery, ERP, SSVEP",Embedding,128,"n_times, chs_info",3456882,"SignalJEPA(n_times=512, chs_info=Lee2019_MI().get_data(subjects=[1])[1]['0']['1train'].info[""chs""][:62])","Convolution,Channel,Large Language Model"
+SignalJEPA_Contextual,"Motor Imagery, ERP, SSVEP",Classification,128,"n_outputs, n_times, chs_info",3459184,"SignalJEPA_Contextual(n_outputs=2, input_window_seconds=4.19, sfreq=128, chs_info=Lee2019_MI().get_data(subjects=[1])[1]['0']['1train'].info[""chs""][:62])","Convolution,Channel,Large Language Model"
+SignalJEPA_PostLocal,"Motor Imagery, ERP, SSVEP",Classification,128,"n_chans, n_outputs, n_times",16142,"SignalJEPA_PostLocal(n_chans=62, n_outputs=2, input_window_seconds=4.19, sfreq=128)","Convolution,Channel,Large Language Model"
+SignalJEPA_PreLocal,"Motor Imagery, ERP, SSVEP",Classification,128,"n_outputs, n_times, chs_info",16142,"SignalJEPA_PreLocal(n_chans=62, n_outputs=2, input_window_seconds=4.19, sfreq=128)","Convolution,Channel,Large Language Model"
+SincShallowNet,Motor Imagery,Classification,250,"n_chans, n_outputs, n_times, sfreq",21892,"SincShallowNet(n_chans=22, n_outputs=4, n_times=1000, sfreq=250)","Convolution,Interpretability"
+ShallowFBCSPNet,Motor Imagery,Classification,250,"n_chans, n_outputs, n_times",46084,"ShallowFBCSPNet(n_chans=22, n_outputs=4, n_times=1000, sfreq=250)","Convolution"
+SleepStagerBlanco2020,Sleep Staging,Classification,100,"n_chans, n_outputs, n_times",2845,"SleepStagerBlanco2020(n_chans=2, n_outputs=5, n_times=3000, sfreq=100)","Convolution"
+SleepStagerChambon2018,Sleep Staging,Classification,128,"n_chans, n_outputs, n_times, sfreq",5835,"SleepStagerChambon2018(n_chans=2, n_outputs=5, n_times=3840, sfreq=128)","Convolution"
+SleepStagerEldele2021,Sleep Staging,Classification,100,"n_chans, n_outputs, n_times, sfreq",719925,"SleepStagerEldele2021(n_chans=2, n_outputs=5, n_times=3000, sfreq=100)","Convolution, Small Attention"
+SPARCNet,Epilepsy,Classification,200,"n_chans, n_outputs, n_times",1141921,"SPARCNet(n_chans=16, n_outputs=6, n_times=2000, sfreq=200)","Convolution"
+SyncNet,"Emotion Recognition, Alcoholism",Classification,256,"n_chans, n_outputs, n_times",554,"SyncNet(n_chans=62, n_outputs=3, n_times=5120, sfreq=256)","Interpretability"
+TSceptionV1,Emotion Recognition,Classification,256,"n_chans, n_outputs, n_times, sfreq",2187206,"TSceptionV1(n_chans=62, n_outputs=3, n_times=5120, sfreq=256)","Convolution"
+TIDNet,General,Classification,250,"n_chans, n_outputs, n_times",240404,"TIDNet(n_chans=22, n_outputs=4, n_times=1000)","Convolution"
+USleep,Sleep Staging,Classification,128,"n_chans, n_outputs, n_times, sfreq",2482011,"USleep(n_chans=2, n_outputs=5, n_times=3000, sfreq=100)","Convolution"
+FBCNet,Motor Imagery,Classification,250,"n_chans, n_outputs, n_times, sfreq",11812,"FCNet(n_chans=22, n_outputs=4, n_times=1000, sfreq=250)","Convolution,FilterBank"
+FBMSNet,Motor Imagery,Classification,250,"n_chans, n_outputs, n_times, sfreq",16231,"FBMSNet(n_chans=22, n_outputs=4, n_times=1000, sfreq=250)","Convolution,FilterBank"
+FBLightConvNet,Motor Imagery,Classification,250,"n_chans, n_outputs, n_times, sfreq",6596,"FBLightConvNet(n_chans=22, n_outputs=4, n_times=1000, sfreq=250)","Convolution,FilterBank"
+IFNet,Motor Imagery,Classification,250,"n_chans, n_outputs, n_times, sfreq",9860,"IFNet(n_chans=22, n_outputs=4, n_times=1000, sfreq=250)","Convolution,FilterBank"
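
The ``#Parameters`` column should be reproducible from the constructor call recorded in
``get_#Parameters``. A minimal sketch, assuming the constructors shown in the table; the
``count_parameters`` helper is illustrative, not part of braindecode::

    from braindecode.models import ATCNet


    def count_parameters(model) -> int:
        """Total number of trainable parameter entries (illustrative helper)."""
        return sum(p.numel() for p in model.parameters() if p.requires_grad)


    # Constructor call copied from the table's get_#Parameters column.
    model = ATCNet(n_chans=22, n_outputs=4, n_times=1000)
    print(count_parameters(model))  # should match the table's value, 113732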
braindecode/models/util.py
@@ -55,7 +55,6 @@ models_mandatory_parameters = [
     ("EEGInceptionERP", ["n_chans", "n_outputs", "n_times", "sfreq"], None),
     ("EEGInceptionMI", ["n_chans", "n_outputs", "n_times", "sfreq"], None),
     ("EEGITNet", ["n_chans", "n_outputs", "n_times"], None),
-    ("EEGNetv1", ["n_chans", "n_outputs", "n_times"], None),
     ("EEGNetv4", ["n_chans", "n_outputs", "n_times"], None),
     ("EEGResNet", ["n_chans", "n_outputs", "n_times"], None),
     ("ShallowFBCSPNet", ["n_chans", "n_outputs", "n_times"], None),
braindecode/version.py (new file)
@@ -0,0 +1 @@
+__version__ = "1.2.0.dev182051616"
braindecode.egg-info/PKG-INFO
@@ -1,6 +1,6 @@
 Metadata-Version: 2.4
 Name: braindecode
-Version: 1.2.0.dev180217551
+Version: 1.2.0.dev182051616
 Summary: Deep learning software to decode EEG, ECG or MEG signals
 Author-email: Robin Tibor Schirrmeister <robintibor@gmail.com>
 Maintainer-email: Alexandre Gramfort <agramfort@meta.com>, Bruno Aristimunha Pinto <b.aristimunha@gmail.com>, Robin Tibor Schirrmeister <robintibor@gmail.com>
@@ -49,6 +49,9 @@ Requires-Dist: mypy; extra == "tests"
 Provides-Extra: docs
 Requires-Dist: sphinx_gallery; extra == "docs"
 Requires-Dist: sphinx_rtd_theme; extra == "docs"
+Requires-Dist: sphinx-autodoc-typehints; extra == "docs"
+Requires-Dist: sphinx-autobuild; extra == "docs"
+Requires-Dist: sphinxcontrib-bibtex; extra == "docs"
 Requires-Dist: pydata_sphinx_theme; extra == "docs"
 Requires-Dist: numpydoc; extra == "docs"
 Requires-Dist: memory_profiler; extra == "docs"
@@ -59,6 +62,7 @@ Requires-Dist: lightning; extra == "docs"
 Requires-Dist: seaborn; extra == "docs"
 Requires-Dist: pre-commit; extra == "docs"
 Requires-Dist: openneuro-py; extra == "docs"
+Requires-Dist: plotly; extra == "docs"
 Provides-Extra: all
 Requires-Dist: braindecode[docs,moabb,tests]; extra == "all"
 Dynamic: license-file
braindecode.egg-info/SOURCES.txt
@@ -110,11 +110,14 @@ docs/cite.rst
 docs/conf.py
 docs/help.rst
 docs/index.rst
-docs/models_summary.rst
 docs/sg_execution_times.rst
 docs/whats_new.rst
 docs/_templates/autosummary/class.rst
 docs/_templates/autosummary/function.rst
 docs/install/install.rst
 docs/install/install_pip.rst
-docs/install/install_source.rst
+docs/install/install_source.rst
+docs/models/models.rst
+docs/models/models_categorization.rst
+docs/models/models_table.rst
+docs/models/models_visualization.rst
braindecode.egg-info/requires.txt
@@ -22,6 +22,9 @@ braindecode[docs,moabb,tests]
 [docs]
 sphinx_gallery
 sphinx_rtd_theme
+sphinx-autodoc-typehints
+sphinx-autobuild
+sphinxcontrib-bibtex
 pydata_sphinx_theme
 numpydoc
 memory_profiler
@@ -32,6 +35,7 @@ lightning
 seaborn
 pre-commit
 openneuro-py
+plotly

 [moabb]
 moabb>=1.2.0
docs/api.rst
@@ -19,12 +19,12 @@ Model zoo available in braindecode. The models are implemented as
 All the models name their signal-related parameters the same way,
 following braindecode's standards:

-+ ``n_outputs``: Number of labels or outputs of the model.
-+ ``n_chans``: Number of EEG channels.
-+ ``n_times``: Number of time points of the input window.
-+ ``input_window_seconds``: Length of the input window in seconds.
-+ ``sfreq``: Sampling frequency of the EEG recordings.
-+ ``chs_info``: Information about each individual EEG channel. Refer to :class:`mne.Info["chs"]`.
++ :fa:`shapes`\ ``n_outputs``: Number of labels or outputs of the model.
++ :fa:`wave-square`\ ``n_chans``: Number of EEG channels.
++ :fa:`clock`\ ``n_times``: Number of time points of the input window.
++ :fa:`wifi`\ ``sfreq``: Sampling frequency of the EEG recordings.
++ (:fa:`clock`\ / :fa:`wifi`\) ``input_window_seconds``: Length of the input window in seconds.
++ :fa:`info-circle`\ ``chs_info``: Information about each individual EEG channel. Refer to :class:`mne.Info["chs"]`.

 All the models assume that the input data is a 3D tensor of shape
 ``(batch_size, n_chans, n_times)``, and some models also accept a 4D tensor of shape
@@ -65,7 +65,6 @@ provides a common interface for all EEG models and derivate variables names if n
    EEGInceptionMI
    EEGITNet
    EEGMiner
-   EEGNetv1
    EEGNetv4
    EEGNeX
    EEGResNet
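
The naming convention above is what makes the zoo interchangeable: every model is constructed
from the same signal-related keywords and consumes the same 3D input layout. A minimal sketch
of that contract, using constructor arguments taken from the summary table::

    import torch
    from braindecode.models import EEGNetv4

    # Signal-related parameters follow the shared naming convention.
    model = EEGNetv4(n_chans=22, n_outputs=4, n_times=1000)

    # Input is a 3D tensor of shape (batch_size, n_chans, n_times).
    x = torch.randn(16, 22, 1000)
    logits = model(x)
    print(logits.shape)  # should be torch.Size([16, 4]), one score per output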
docs/conf.py
@@ -42,6 +42,10 @@ curdir = os.path.dirname(__file__)
 sys.path.append(os.path.abspath(os.path.join(curdir, "..", "braindecode")))
 sys.path.append(os.path.abspath(os.path.join(curdir, "sphinxext")))

+import sphinx_design
+
+print(f"--- Sphinx is using sphinx_design version: {sphinx_design.__version__} ---")
+
 # Add any Sphinx extension module names here, as strings. They can be
 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
 # ones.
@@ -55,7 +59,9 @@ extensions = [
     "sphinx.ext.ifconfig",
     "sphinx.ext.intersphinx",
     "sphinx.ext.githubpages",
+    "sphinxcontrib.bibtex",
     "sphinx.ext.napoleon",
+    "sphinx_autodoc_typehints",
     "sphinx_gallery.gen_gallery",
     "sphinx.ext.linkcode",
     "sphinx_design",
@@ -166,15 +172,15 @@ templates_path = ["_templates"]
 # source_suffix = ['.rst', '.md']
 source_suffix = ".rst"

-rst_prolog = """
-.. role:: tag(bdg-success)
-"""
+
 # The master toctree document.
 master_doc = "index"

 # General information about the project.

-
+bibtex_bibfiles = ["references.bib"]
+bibtex_reference_style = "author_year"
+bibtex_default_style = "unsrt"
 # -- Project information -----------------------------------------------------

 project = "Braindecode"
@@ -356,7 +362,6 @@ html_context = {
 }

 html_sidebars = {
-    "models_summary": [],
     "cite": [],
     "help": [],
     "whats_new": [],
@@ -391,7 +396,7 @@ latex_documents = [
         master_doc,
         "Braindecode.tex",
         "Braindecode",
-        "Robin Tibor Schirrmeister",
+        "Bruno Aristimunha",
         "manual",
     ),
 ]
@@ -430,6 +435,16 @@ other_icons = (
     "cloud-download-alt",
     "wrench",
     "hourglass",
+    # Add your new icons here
+    "braille",
+    "repeat",
+    "lightbulb",
+    "layer-group",
+    "eye",
+    "circle-nodes",
+    "magnifying-glass-chart",
+    "share-nodes",
+    "clone",
 )
 icons = dict()
 for icon in brand_icons + fixed_icons + other_icons:
@@ -457,7 +472,7 @@ prolog += """
 prolog += """
 .. |ensp| unicode:: U+2002 .. EN SPACE
 """
-
+rst_prolog = prolog
 # -- Options for manual page output ---------------------------------------

 # One entry per manual page. List of tuples
docs/index.rst
@@ -42,7 +42,7 @@ Braindecode Homepage
    :hidden:

    Install <install/install>
-   Models <models_summary>
+   Models <models/models>
    Cite <cite>
    Tutorial and Examples <auto_examples/index>
    API <api>
docs/models/models.rst (new file)
@@ -0,0 +1,76 @@
+:html_theme.sidebar_secondary.remove: true
+
+.. _models:
+
+The brain decoding problem
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+All the models in this library tackle the following problem:
+given time-series signals :math:`X \in \mathbb{R}^{C \times T}` and labels
+:math:`y \in \mathcal{Y}`, :class:`braindecode` implements neural networks
+:math:`f` that **decode** brain activity, i.e., they apply a series of transformation
+layers (e.g. :class:`torch.nn.Conv2d`, :class:`torch.nn.Linear`, :class:`torch.nn.ELU`) to the data
+to filter and extract the features relevant to what we are modeling. In other words:
+
+.. math::
+
+   f_{\theta} : X \to y,
+
+where :math:`C` (``n_chans``) is the number of channels/electrodes and :math:`T` (``n_times``) is the temporal window
+length/epoch size over the interval of interest.
+
+The definition of :math:`y` is broad; it may be anchored in a cognitive stimulus (e.g., BCI, ERP, SSVEP, cVEP),
+a mental state (sleep stage), brain age, visual/audio/text/action inputs, or any target
+that can be quantized and modeled as a decoding task; see references :cite:label:`sleep2023,aristimunhaetal23,chevallier2024largest,levy2025brain,benchetrit2024brain,d2024decoding,engemann2022reusable,xu2024alljoined`.
+
+We aim to translate recorded brain activity into its originating stimulus, behavior,
+or mental state :cite:t:`king2014characterizing,king2020`; again, :math:`f(X) \to y`.
+
+The neural network :math:`f` learns a representation of the stimulus encoded
+in the subject's brain over the time series, a process also known as *reverse inference*.
+
+In supervised decoding, we usually learn the network parameters :math:`\theta` by minimizing
+the regularized average loss over the training set :math:`\mathcal{D}_{\text{tr}}=\{(x_i,y_i)\}_{i=1}^{N_{\text{tr}}}`:
+
+.. math::
+
+   \begin{aligned}
+   \theta^{*}
+   &= \arg\min_{\theta}\, \hat{\mathcal{R}}(\theta) \\
+   &= \arg\min_{\theta}\, \frac{1}{N_{\text{tr}}}\sum_{i=1}^{N_{\text{tr}}}
+   \ell\!\left(f_{\theta}(x_i),\, y_i\right) \;+\; \lambda\,\Omega(\theta)\,,
+   \end{aligned}
+
+where :math:`\ell` is the task loss (e.g., cross-entropy, :class:`torch.nn.CrossEntropyLoss`), :math:`\Omega` is an optional regularizer, and :math:`\lambda \ge 0` its weight (e.g., the ``weight_decay`` parameter of :class:`torch.optim.Adam` is one such regularizer).
+
+Equivalently, the goal is to minimize the expected risk :math:`\mathcal{R}(\theta)=\mathbb{E}_{(x,y)\sim P_{\text{tr}}}
+[\ell(f_{\theta}(x),y)]`, for which the empirical average above is a finite-sample
+approximation.
+
+With this in place, the model sub-pages provide:
+
+1. Our definition of the brain decoding problem (this page);
+2. :doc:`The categorization of the neural networks based on what is inside them <models_categorization>`;
+3. :doc:`A table overview of what is inside the models <models_table>`;
+4. :doc:`A visualization of the most important common information about the models <models_visualization>`.
+
+.. button-ref:: models_categorization
+   :ref-type: doc
+   :color: primary
+   :expand:
+
+   Next: Models categorization →
+
+
+.. rubric:: References
+.. bibliography::
+
+
+.. toctree::
+   :hidden:
+
+   models_categorization
+   models_table
+   models_visualization
+
+.. include:: /links.inc
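
To make the objective concrete, the following is a minimal supervised-decoding sketch of the
minimization above, with synthetic tensors standing in for the training set; the model choice,
learning rate, and ``weight_decay`` value are illustrative, not recommendations::

    import torch
    from torch import nn
    from braindecode.models import ShallowFBCSPNet

    # f_theta: maps (batch_size, n_chans, n_times) to class logits.
    model = ShallowFBCSPNet(n_chans=22, n_outputs=4, n_times=1000)

    # Synthetic stand-in for D_tr = {(x_i, y_i)}.
    X = torch.randn(64, 22, 1000)
    y = torch.randint(0, 4, (64,))

    # ell = cross-entropy; lambda * Omega(theta) realized as L2 weight decay.
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)

    for epoch in range(5):
        optimizer.zero_grad()
        loss = criterion(model(X), y)  # empirical risk on the batch
        loss.backward()
        optimizer.step()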
docs/models/models_categorization.rst (new file)
@@ -0,0 +1,174 @@
+:html_theme.sidebar_secondary.remove: true
+
+Models Categorization
+~~~~~~~~~~~~~~~~~~~~~
+
+Given the brain-decoding framework from the previous page, we define our neural networks,
+denoted :math:`f`, as a composition of sequential transformations:
+
+.. math::
+
+   f_{\mathrm{method}}
+   \;=\;
+   f_{\mathrm{convolution}} \circ \cdots \circ f_{\mathrm{MLP}}\,,
+
+where each :math:`f_\ell` is a specific transformation (layer :math:`\ell`) in the network.
+Learning fits the mapping :math:`f_{\mathrm{method}} : \mathcal{X} \to \mathcal{Y}` to the
+training data, with parameters :math:`\theta \in \Theta`.
+
+
+.. grid:: 1 2 3 3
+   :gutter: 3
+
+   .. grid-item-card:: |braille| Convolution Layers
+      :shadow: sm
+
+      :bdg-success:`Convolution`
+
+      .. figure:: ../_static/model_cat/convolution.png
+         :width: 90%
+         :align: center
+         :alt: Diagram of a convolutional layer
+         :class: no-scaled-link
+
+      Applies temporal and/or spatial convolutions to extract local features from brain signals.
+
+   .. grid-item-card:: |repeat| Recurrent Layers
+      :shadow: sm
+
+      :bdg-secondary:`Recurrent`
+
+      .. figure:: ../_static/model_cat/rnn.png
+         :class: no-scaled-link
+         :width: 90%
+         :align: center
+         :alt: Diagram of recurrent/TCN models
+
+      Models temporal dependencies via recurrent units or TCNs with dilations.
+
+   .. grid-item-card:: |magnifying-glass-chart| Small Attention
+      :shadow: sm
+
+      :bdg-info:`Small Attention`
+
+      .. figure:: ../_static/model_cat/attention.png
+         :class: no-scaled-link
+         :width: 90%
+         :align: center
+         :alt: Diagram of attention modules
+
+      Uses attention mechanisms for feature focusing. Can be trained effectively without self-supervised pre-training.
+
+   .. grid-item-card:: |layer-group| Filterbank Models
+      :shadow: sm
+
+      :bdg-primary:`Filterbank`
+
+      .. figure:: ../_static/model_cat/filterbank.png
+         :class: no-scaled-link
+         :width: 90%
+         :align: center
+         :alt: Diagram of filterbank models
+
+      Decomposes signals into multiple bands (learned or fixed) to capture frequency-specific information.
+
+   .. grid-item-card:: |eye| Interpretability-by-Design
+      :shadow: sm
+
+      :bdg-warning:`Interpretability`
+
+      .. figure:: ../_static/model_cat/interpre.png
+         :class: no-scaled-link
+         :width: 90%
+         :align: center
+         :alt: Diagram of interpretable architectures
+
+      Architectures with inherently interpretable layers allow direct neuroscientific validation of learned features.
+
+   .. grid-item-card:: |circle-nodes| Symmetric Positive-Definite
+      :shadow: sm
+
+      :bdg-dark:`SPD` :bdg-danger-line:`To be released soon!`
+
+      .. figure:: ../_static/model_cat/spd.png
+         :class: no-scaled-link
+         :width: 90%
+         :align: center
+         :alt: Diagram of SPD learning
+         :figclass: unavailable
+
+      Learns on covariance/connectivity as SPD matrices using BiMap/ReEig/LogEig layers.
+
+   .. grid-item-card:: |lightbulb| Large Transformer Models
+      :shadow: sm
+
+      :bdg-danger:`Large Language Model`
+
+      .. figure:: ../_static/model_cat/llm.png
+         :class: no-scaled-link
+         :width: 90%
+         :align: center
+         :alt: Diagram of transformer models
+
+      Large-scale transformer layers that require self-supervised pre-training to work effectively.
+
+   .. grid-item-card:: |share-nodes| Graph Neural Network
+      :shadow: sm
+
+      :bdg-light:`Graph Neural Network`
+
+      .. figure:: ../_static/model_cat/gnn.png
+         :class: no-scaled-link
+         :width: 90%
+         :align: center
+         :alt: Diagram of GNN models
+         :figclass: unavailable
+
+      Treats channels/regions as nodes with learned/static edges to model connectivity.
+
+   .. grid-item-card:: |clone| Channel-Domain
+      :shadow: sm
+
+      :bdg-dark-line:`Channel`
+
+      .. figure:: ../_static/model_cat/channel.png
+         :class: no-scaled-link
+         :width: 90%
+         :align: center
+         :alt: Diagram of channel-domain methods
+
+      Uses montage information with spatial filtering and channel/hemisphere/brain-region selection strategies.
+
+
+- Across most architectures, the earliest stages are convolutional (:bdg-success:`Convolution`), reflecting the noisy, locally structured nature of brain time series.
+  These layers apply temporal and/or spatial convolutions, often depthwise-separable as in EEGNet, per channel or across channel groups, to extract robust local features;
+  see :class:`braindecode.models.EEGNetv4`, :class:`braindecode.models.ShallowFBCSPNet`, :class:`braindecode.models.EEGNeX`, and :class:`braindecode.models.EEGInceptionERP`.
+- In the **recurrent** family (:bdg-secondary:`Recurrent`), many modern EEG models actually rely on *temporal convolutional networks* (TCNs) with dilations to grow the receptive field, rather than explicit recurrence (:cite:label:`bai2018tcn`); e.g., :class:`braindecode.models.BDTCN`.
+- In contrast, several methods employ **small attention** modules (:bdg-info:`Small Attention`) to capture longer-range dependencies efficiently, e.g., :class:`braindecode.models.EEGConformer`, :class:`braindecode.models.CTNet`, :class:`braindecode.models.ATCNet`, :class:`braindecode.models.AttentionBaseNet` (:cite:label:`song2022eeg,zhao2024ctnet,altaheri2022atcnet`).
+- **Filterbank-style models** (:bdg-primary:`Filterbank`) explicitly decompose signals into multiple bands before (or while) learning, echoing the classic FBCSP pipeline; examples include :class:`braindecode.models.FBCNet` and :class:`braindecode.models.FBMSNet` (:cite:label:`mane2021fbcnet,liu2022fbmsnet`).
+- **Interpretability-by-design** (:bdg-warning:`Interpretability`) architectures expose physiologically meaningful primitives (e.g., band-pass/sinc filters, variance or connectivity features), enabling direct neuroscientific inspection; see :class:`braindecode.models.SincShallowNet` and :class:`braindecode.models.EEGMiner` (:cite:label:`borra2020interpretable,ludwig2024eegminer`).
+- **SPD / Riemannian** (:bdg-dark:`SPD`) methods operate on covariance (or connectivity) matrices as points on the SPD manifold, combining layers such as BiMap, ReEig, and LogEig; deep SPD networks and Riemannian classifiers motivate this family (:cite:label:`huang2017riemannian`). *(Coming soon in a dedicated repository.)*
+- **Large-model / Transformer** (:bdg-danger:`Large Language Model`) approaches pretrain attention-based encoders on diverse biosignals and fine-tune them for EEG tasks, e.g., :class:`braindecode.models.BIOT` (:cite:label:`yang2023biot`). These typically need heavy self-supervised pre-training before decoding.
+- **Graph neural networks** (:bdg-light:`Graph Neural Network`) treat channels/regions as nodes with learned (static or dynamic) edges to model functional connectivity explicitly; representative EEG-GNNs are most common in epilepsy decoding (:cite:label:`klepl2024graph`).
+- **Channel-domain robustness** (:bdg-dark-line:`Channel`) techniques target variability in electrode layouts by learning montage-agnostic or channel-selective layers (e.g., dynamic spatial filtering, differentiable channel re-ordering); these strategies improve cross-setup generalization, e.g., :class:`braindecode.models.SignalJEPA` (:cite:label:`guetschel2024sjepa,chen2024eegprogress`).
+
+
+We are continually expanding this collection and welcome contributions! If you have implemented a
+model relevant to EEG, ECoG, or MEG analysis, consider adding it to Braindecode.
+
+Submit a new model
+~~~~~~~~~~~~~~~~~~
+
+Want to contribute a new model to Braindecode? Great! You can propose a new model by opening an
+`issue <braindecode-issues_>`_ (please include a link to the relevant publication or description) or,
+even better, directly submit your implementation via a `pull request <braindecode-pulls_>`_.
+We appreciate your contributions to expanding the library!
+
+.. button-ref:: models_table
+   :ref-type: doc
+   :color: primary
+   :expand:
+
+   Next: Models Table
+
+.. include:: /links.inc
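
As a quick orientation to the families above, a minimal sketch instantiating one model per
category, with constructor arguments taken from the summary table (the 250 Hz, 22-channel,
1000-sample setup is illustrative)::

    import torch
    from braindecode.models import EEGNetv4, FBCNet, EEGConformer, SincShallowNet

    x = torch.randn(8, 22, 1000)  # (batch_size, n_chans, n_times)

    zoo = {
        "Convolution": EEGNetv4(n_chans=22, n_outputs=4, n_times=1000),
        "Filterbank": FBCNet(n_chans=22, n_outputs=4, n_times=1000, sfreq=250),
        "Small Attention": EEGConformer(n_chans=22, n_outputs=4, n_times=1000),
        "Interpretability": SincShallowNet(n_chans=22, n_outputs=4, n_times=1000, sfreq=250),
    }

    for family, model in zoo.items():
        print(family, model(x).shape)  # each should yield an (8, 4) logit tensor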