braindecode 1.2.0.dev175337561__tar.gz → 1.2.0.dev180217551__tar.gz

This diff shows the changes between two publicly released versions of this package, as they appear in their public registry. It is provided for informational purposes only.

Potentially problematic release.


This version of braindecode might be problematic.

Files changed (122)
  1. {braindecode-1.2.0.dev175337561/braindecode.egg-info → braindecode-1.2.0.dev180217551}/PKG-INFO +1 -1
  2. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/models/attentionbasenet.py +18 -16
  3. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/models/eegconformer.py +15 -12
  4. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/models/eegnet.py +9 -10
  5. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/models/eegnex.py +114 -0
  6. braindecode-1.2.0.dev180217551/braindecode/version.py +1 -0
  7. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551/braindecode.egg-info}/PKG-INFO +1 -1
  8. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/docs/whats_new.rst +1 -0
  9. braindecode-1.2.0.dev175337561/braindecode/version.py +0 -1
  10. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/LICENSE.txt +0 -0
  11. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/MANIFEST.in +0 -0
  12. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/NOTICE.txt +0 -0
  13. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/README.rst +0 -0
  14. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/__init__.py +0 -0
  15. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/augmentation/__init__.py +0 -0
  16. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/augmentation/base.py +0 -0
  17. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/augmentation/functional.py +0 -0
  18. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/augmentation/transforms.py +0 -0
  19. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/classifier.py +0 -0
  20. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/datasets/__init__.py +0 -0
  21. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/datasets/base.py +0 -0
  22. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/datasets/bbci.py +0 -0
  23. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/datasets/bcicomp.py +0 -0
  24. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/datasets/bids.py +0 -0
  25. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/datasets/mne.py +0 -0
  26. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/datasets/moabb.py +0 -0
  27. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/datasets/nmt.py +0 -0
  28. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/datasets/sleep_physio_challe_18.py +0 -0
  29. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/datasets/sleep_physionet.py +0 -0
  30. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/datasets/tuh.py +0 -0
  31. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/datasets/xy.py +0 -0
  32. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/datautil/__init__.py +0 -0
  33. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/datautil/serialization.py +0 -0
  34. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/datautil/util.py +0 -0
  35. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/eegneuralnet.py +0 -0
  36. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/functional/__init__.py +0 -0
  37. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/functional/functions.py +0 -0
  38. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/functional/initialization.py +0 -0
  39. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/models/__init__.py +0 -0
  40. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/models/atcnet.py +0 -0
  41. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/models/base.py +0 -0
  42. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/models/biot.py +0 -0
  43. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/models/contrawr.py +0 -0
  44. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/models/ctnet.py +0 -0
  45. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/models/deep4.py +0 -0
  46. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/models/deepsleepnet.py +0 -0
  47. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/models/eeginception_erp.py +0 -0
  48. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/models/eeginception_mi.py +0 -0
  49. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/models/eegitnet.py +0 -0
  50. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/models/eegminer.py +0 -0
  51. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/models/eegresnet.py +0 -0
  52. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/models/eegsimpleconv.py +0 -0
  53. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/models/eegtcnet.py +0 -0
  54. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/models/fbcnet.py +0 -0
  55. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/models/fblightconvnet.py +0 -0
  56. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/models/fbmsnet.py +0 -0
  57. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/models/hybrid.py +0 -0
  58. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/models/ifnet.py +0 -0
  59. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/models/labram.py +0 -0
  60. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/models/msvtnet.py +0 -0
  61. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/models/sccnet.py +0 -0
  62. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/models/shallow_fbcsp.py +0 -0
  63. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/models/signal_jepa.py +0 -0
  64. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/models/sinc_shallow.py +0 -0
  65. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/models/sleep_stager_blanco_2020.py +0 -0
  66. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/models/sleep_stager_chambon_2018.py +0 -0
  67. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/models/sleep_stager_eldele_2021.py +0 -0
  68. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/models/sparcnet.py +0 -0
  69. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/models/summary.csv +0 -0
  70. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/models/syncnet.py +0 -0
  71. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/models/tcn.py +0 -0
  72. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/models/tidnet.py +0 -0
  73. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/models/tsinception.py +0 -0
  74. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/models/usleep.py +0 -0
  75. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/models/util.py +0 -0
  76. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/modules/__init__.py +0 -0
  77. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/modules/activation.py +0 -0
  78. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/modules/attention.py +0 -0
  79. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/modules/blocks.py +0 -0
  80. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/modules/convolution.py +0 -0
  81. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/modules/filter.py +0 -0
  82. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/modules/layers.py +0 -0
  83. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/modules/linear.py +0 -0
  84. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/modules/parametrization.py +0 -0
  85. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/modules/stats.py +0 -0
  86. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/modules/util.py +0 -0
  87. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/modules/wrapper.py +0 -0
  88. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/preprocessing/__init__.py +0 -0
  89. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/preprocessing/mne_preprocess.py +0 -0
  90. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/preprocessing/preprocess.py +0 -0
  91. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/preprocessing/windowers.py +0 -0
  92. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/regressor.py +0 -0
  93. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/samplers/__init__.py +0 -0
  94. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/samplers/base.py +0 -0
  95. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/samplers/ssl.py +0 -0
  96. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/training/__init__.py +0 -0
  97. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/training/callbacks.py +0 -0
  98. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/training/losses.py +0 -0
  99. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/training/scoring.py +0 -0
  100. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/util.py +0 -0
  101. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/visualization/__init__.py +0 -0
  102. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/visualization/confusion_matrices.py +0 -0
  103. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode/visualization/gradients.py +0 -0
  104. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode.egg-info/SOURCES.txt +0 -0
  105. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode.egg-info/dependency_links.txt +0 -0
  106. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode.egg-info/requires.txt +0 -0
  107. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/braindecode.egg-info/top_level.txt +0 -0
  108. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/docs/Makefile +0 -0
  109. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/docs/_templates/autosummary/class.rst +0 -0
  110. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/docs/_templates/autosummary/function.rst +0 -0
  111. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/docs/api.rst +0 -0
  112. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/docs/cite.rst +0 -0
  113. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/docs/conf.py +0 -0
  114. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/docs/help.rst +0 -0
  115. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/docs/index.rst +0 -0
  116. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/docs/install/install.rst +0 -0
  117. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/docs/install/install_pip.rst +0 -0
  118. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/docs/install/install_source.rst +0 -0
  119. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/docs/models_summary.rst +0 -0
  120. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/docs/sg_execution_times.rst +0 -0
  121. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/pyproject.toml +0 -0
  122. {braindecode-1.2.0.dev175337561 → braindecode-1.2.0.dev180217551}/setup.cfg +0 -0
@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: braindecode
- Version: 1.2.0.dev175337561
+ Version: 1.2.0.dev180217551
  Summary: Deep learning software to decode EEG, ECG or MEG signals
  Author-email: Robin Tibor Schirrmeister <robintibor@gmail.com>
  Maintainer-email: Alexandre Gramfort <agramfort@meta.com>, Bruno Aristimunha Pinto <b.aristimunha@gmail.com>, Robin Tibor Schirrmeister <robintibor@gmail.com>
@@ -57,9 +57,9 @@ class AttentionBaseNet(EEGModuleMixin, nn.Module):
 
  - *Operations.*
  - **Temporal conv** (:class:`torch.nn.Conv2d`) with kernel ``(1, L_t)`` creates a learned
- FIR-like filter bank with ``n_temporal_filters`` maps.
+ FIR-like filter bank with ``n_temporal_filters`` maps.
  - **Depthwise spatial conv** (:class:`torch.nn.Conv2d`, ``groups=n_temporal_filters``)
- with kernel ``(n_chans, 1)`` learns per-filter spatial projections over the full montage.
+ with kernel ``(n_chans, 1)`` learns per-filter spatial projections over the full montage.
  - **BatchNorm → ELU → AvgPool → Dropout** stabilize and downsample time.
  - Output shape: ``(B, F2, 1, T₁)`` with ``F2 = n_temporal_filters x spatial_expansion``.
 
@@ -71,8 +71,8 @@ class AttentionBaseNet(EEGModuleMixin, nn.Module):
 
  - *Operations.*
  - A ``1x1`` conv → BN → activation maps ``F2 → ch_dim`` without changing
- the temporal length ``T₁`` (shape: ``(B, ch_dim, 1, T₁)``).
- This sets the embedding width for the attention block.
+ the temporal length ``T₁`` (shape: ``(B, ch_dim, 1, T₁)``).
+ This sets the embedding width for the attention block.
 
  - :class:`_ChannelAttentionBlock` **(temporal refinement + channel attention)**
 
@@ -117,8 +117,10 @@ class AttentionBaseNet(EEGModuleMixin, nn.Module):
  - **Type.** Channel attention chosen by ``attention_mode`` (SE, ECA, CBAM, CAT, GSoP,
  EncNet, GE, GCT, SRM, CATLite). Most operate purely on channels; CBAM/CAT additionally
  include temporal attention.
+
  - **Shapes.** Input/Output around attention: ``(B, ch_dim, 1, T₁)``. Re-arrangements
  (if any) are internal to the module; the block returns the same shape before pooling.
+
  - **Role.** Re-weights channels (and optionally time) to highlight informative sources
  and suppress distractors, improving SNR ahead of the linear head.
 
@@ -195,18 +197,18 @@ class AttentionBaseNet(EEGModuleMixin, nn.Module):
  the depth of the network after the initial layer. Default is 16.
  attention_mode : str, optional
  The type of attention mechanism to apply. If `None`, no attention is applied.
- - "se" for Squeeze-and-excitation network
- - "gsop" for Global Second-Order Pooling
- - "fca" for Frequency Channel Attention Network
- - "encnet" for context encoding module
- - "eca" for Efficient channel attention for deep convolutional neural networks
- - "ge" for Gather-Excite
- - "gct" for Gated Channel Transformation
- - "srm" for Style-based Recalibration Module
- - "cbam" for Convolutional Block Attention Module
- - "cat" for Learning to collaborate channel and temporal attention
- from multi-information fusion
- - "catlite" for Learning to collaborate channel attention
+ - "se" for Squeeze-and-excitation network
+ - "gsop" for Global Second-Order Pooling
+ - "fca" for Frequency Channel Attention Network
+ - "encnet" for context encoding module
+ - "eca" for Efficient channel attention for deep convolutional neural networks
+ - "ge" for Gather-Excite
+ - "gct" for Gated Channel Transformation
+ - "srm" for Style-based Recalibration Module
+ - "cbam" for Convolutional Block Attention Module
+ - "cat" for Learning to collaborate channel and temporal attention
+ from multi-information fusion
+ - "catlite" for Learning to collaborate channel attention
  from multi-information fusion (lite version, cat w/o temporal attention)
  pool_length : int, default=8
  The length of the window for the average pooling operation.
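As a quick sanity check on the stem shapes quoted in the docstring above, the output dimensions follow from plain arithmetic. A minimal sketch (the filter count, expansion factor, and pool length below are illustrative values, not necessarily AttentionBaseNet's defaults):

```python
# Shape bookkeeping for the conv stem described above (illustrative values,
# not necessarily AttentionBaseNet's defaults).
def stem_output_shape(batch, n_times, n_temporal_filters=16,
                      spatial_expansion=2, pool_length=8):
    """Shape after temporal conv -> depthwise spatial conv -> AvgPool."""
    f2 = n_temporal_filters * spatial_expansion  # F2 = filters x expansion
    t1 = n_times // pool_length                  # time axis after pooling
    return (batch, f2, 1, t1)                    # (B, F2, 1, T1)

print(stem_output_shape(batch=4, n_times=1000))
```

This is the ``(B, F2, 1, T₁)`` shape the attention block then consumes.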
@@ -27,22 +27,25 @@ class EEGConformer(EEGModuleMixin, nn.Module):
  EEG-Conformer is a *convolution-first* model augmented with a *lightweight transformer
  encoder*. The end-to-end flow is:
 
- - (i) :class:`_PatchEmbedding` converts the continuous EEG into a compact sequence of tokens via a :class:`ShallowFBCSPNet` temporal–spatial conv stem and temporal pooling;
- - (ii) :class:`_TransformerEncoder applies small multi-head self-attention to integrate longer-range temporal context across tokens;
+ - (i) :class:`_PatchEmbedding` converts the continuous EEG into a compact sequence of tokens via a
+ :class:`ShallowFBCSPNet` temporal–spatial conv stem and temporal pooling;
+ - (ii) :class:`_TransformerEncoder` applies small multi-head self-attention to integrate
+ longer-range temporal context across tokens;
  - (iii) :class:`_ClassificationHead` aggregates the sequence and performs a linear readout.
- This preserves the strong inductive biases of shallow CNN filter banks while adding
- just enough attention to capture dependencies beyond the pooling horizon [song2022]_.
+ This preserves the strong inductive biases of shallow CNN filter banks while adding
+ just enough attention to capture dependencies beyond the pooling horizon [song2022]_.
 
  .. rubric:: Macro Components
 
  - :class:`_PatchEmbedding` **(Shallow conv stem → tokens)**
 
- - *Operations.*
- - A temporal convolution (`:class:`torch.nn.Conv2d`) ``(1 x L_t)`` forms a data-driven "filter bank";
- - A spatial convolution (`:class:`torch.nn.Conv2d`) (n_chans x 1)`` projects across electrodes, collapsing the channel axis into a virtual channel.
- - **Normalization function** `:class:torch.nn.BatchNorm`
- - **Activation function** `:class:torch.nn.ELU`
- - **Average Pooling** `:class:torch.nn.AvgPool` along time (kernel ``(1, P)`` with stride ``(1, S)``)
+ - *Operations.*
+ - A temporal convolution (:class:`torch.nn.Conv2d`) ``(1 x L_t)`` forms a data-driven "filter bank";
+ - A spatial convolution (:class:`torch.nn.Conv2d`) ``(n_chans x 1)`` projects across electrodes,
+ collapsing the channel axis into a virtual channel.
+ - **Normalization function** :class:`torch.nn.BatchNorm2d`
+ - **Activation function** :class:`torch.nn.ELU`
+ - **Average Pooling** :class:`torch.nn.AvgPool2d` along time (kernel ``(1, P)`` with stride ``(1, S)``)
  - final ``1x1`` :class:`torch.nn.Linear` projection.
 
  The result is rearranged to a token sequence ``(B, S_tokens, D)``, where ``D = n_filters_time``.
@@ -53,7 +56,7 @@ class EEGConformer(EEGModuleMixin, nn.Module):
 
  - :class:`_TransformerEncoder` **(context over temporal tokens)**
 
- - *Operations.*
+ - *Operations.*
  - A stack of ``att_depth`` encoder blocks. :class:`_TransformerEncoderBlock`
  - Each block applies LayerNorm :class:`torch.nn.LayerNorm`
  - Multi-Head Self-Attention (``att_heads``) with dropout + residual :class:`MultiHeadAttention` (:class:`torch.nn.Dropout`)
@@ -67,7 +70,7 @@ class EEGConformer(EEGModuleMixin, nn.Module):
 
  - :class:`ClassificationHead` **(aggregation + readout)**
 
- - *Operations*.
+ - *Operations*.
  - Flatten, :class:`torch.nn.Flatten` the sequence ``(B, S_tokens·D)`` -
  - MLP (:class:`torch.nn.Linear` → activation (default: :class:`torch.nn.ELU`) → :class:`torch.nn.Dropout` → :class:`torch.nn.Linear`)
  - final Linear to classes.
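Since the transformer's sequence length is fixed entirely by the stem's temporal pooling, ``S_tokens`` can be computed up front. A sketch assuming "valid" (no-padding) pooling; the kernel and stride values are illustrative, not guaranteed defaults:

```python
# Number of temporal tokens produced by AvgPool with kernel (1, P) and
# stride (1, S), assuming no padding ('valid' pooling).
def n_tokens(n_times, pool_kernel, pool_stride):
    return (n_times - pool_kernel) // pool_stride + 1

# A 1000-sample window with an illustrative kernel/stride:
print(n_tokens(1000, pool_kernel=75, pool_stride=15))
```

Each of these tokens is a ``D``-dimensional embedding fed to the ``att_depth`` encoder blocks.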
@@ -31,8 +31,7 @@ class EEGNetv4(EEGModuleMixin, nn.Sequential):
 
  .. rubric:: Architectural Overview
 
- EEGNetv4 is a compact convolutional network designed for EEG decoding with a
- pipeline that mirrors classical EEG processing:
+ EEGNetv4 is a compact convolutional network designed for EEG decoding with a pipeline that mirrors classical EEG processing:
  - (i) learn temporal frequency-selective filters,
  - (ii) learn spatial filters for those frequencies, and
  - (iii) condense features with depthwise–separable convolutions before a lightweight classifier.
@@ -56,16 +55,16 @@ class EEGNetv4(EEGModuleMixin, nn.Sequential):
 
  .. rubric:: Convolutional Details
 
- **Temporal.** The initial temporal convs serve as a *learned filter bank*:
- long 1-D kernels (implemented as 2-D with singleton spatial extent) emphasize oscillatory bands and transients.
- Because this stage is linear prior to BN/ELU, kernels can be analyzed as FIR filters to reveal each feature’s spectrum [Lawhern2018]_.
+ - **Temporal.** The initial temporal convs serve as a *learned filter bank*:
+ long 1-D kernels (implemented as 2-D with singleton spatial extent) emphasize oscillatory bands and transients.
+ Because this stage is linear prior to BN/ELU, kernels can be analyzed as FIR filters to reveal each feature’s spectrum [Lawhern2018]_.
 
- **Spatial.** The depthwise spatial conv spans the full channel axis (kernel height = #electrodes; temporal size = 1).
- With ``groups = F1``, each temporal filter learns its own set of ``D`` spatial projections—akin to CSP, learned end-to-end and
- typically regularized with max-norm.
+ - **Spatial.** The depthwise spatial conv spans the full channel axis (kernel height = #electrodes; temporal size = 1).
+ With ``groups = F1``, each temporal filter learns its own set of ``D`` spatial projections—akin to CSP, learned end-to-end and
+ typically regularized with max-norm.
 
- **Spectral.** No explicit Fourier/wavelet transform is used. Frequency structure
- is captured implicitly by the temporal filter bank; later depthwise temporal kernels act as short-time integrators/refiners.
+ - **Spectral.** No explicit Fourier/wavelet transform is used. Frequency structure
+ is captured implicitly by the temporal filter bank; later depthwise temporal kernels act as short-time integrators/refiners.
 
  .. rubric:: Additional Comments
 
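The FIR reading of the temporal stage can be made concrete: because the stage is linear before BN/ELU, a learned kernel's frequency preference is simply its DFT magnitude. A minimal pure-Python sketch using a toy moving-average kernel (not learned weights):

```python
import cmath

def fir_magnitude(kernel, freq):
    """|H(f)| of an FIR filter at normalized frequency f (cycles/sample)."""
    h = sum(w * cmath.exp(-2j * cmath.pi * freq * n)
            for n, w in enumerate(kernel))
    return abs(h)

# Toy 4-tap moving average: unity gain at DC, a spectral null at f = 0.25.
taps = [0.25, 0.25, 0.25, 0.25]
print(round(fir_magnitude(taps, 0.0), 3))   # 1.0
print(round(fir_magnitude(taps, 0.25), 3))  # 0.0
```

Applying the same evaluation to a trained temporal kernel reveals which band each feature map responds to, which is how [Lawhern2018]_ interprets the learned filters.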
@@ -16,9 +16,123 @@ from braindecode.modules import Conv2dWithConstraint, LinearWithConstraint
  class EEGNeX(EEGModuleMixin, nn.Module):
  """EEGNeX model from Chen et al. (2024) [eegnex]_.
 
+ :bdg-success:`Convolution`
+
  .. figure:: https://braindecode.org/dev/_static/model/eegnex.jpg
  :align: center
  :alt: EEGNeX Architecture
+ :width: 620px
+
+ .. rubric:: Architectural Overview
+
+ EEGNeX is a **purely convolutional** architecture that refines the EEGNet-style stem
+ and deepens the temporal stack with **dilated temporal convolutions**. The end-to-end
+ flow is:
+
+ - (i) **Block-1/2**: two temporal convolutions ``(1 x L)`` with BN refine a
+ learned FIR-like *temporal filter bank* (no pooling yet);
+ - (ii) **Block-3**: depthwise **spatial** convolution across electrodes
+ ``(n_chans x 1)`` with max-norm constraint, followed by ELU → AvgPool (time) → Dropout;
+ - (iii) **Block-4/5**: two additional **temporal** convolutions with increasing **dilation**
+ to expand the receptive field; the last block applies ELU → AvgPool → Dropout → Flatten;
+ - (iv) **Classifier**: a max-norm–constrained linear layer.
+
+ The published work positions EEGNeX as a compact, conv-only alternative that consistently
+ outperforms prior baselines across MOABB-style benchmarks, with the popular
+ “EEGNeX-8,32” shorthand denoting *8 temporal filters* and *kernel length 32*.
+
+
+ .. rubric:: Macro Components
+
+ - **Block-1 / Block-2 — Temporal filter (learned).**
+
+ - *Operations.*
+ - :class:`torch.nn.Conv2d` with kernels ``(1, L)``
+ - :class:`torch.nn.BatchNorm2d` (no nonlinearity until Block-3, mirroring a linear FIR analysis stage).
+ These layers set up frequency-selective detectors before spatial mixing.
+
+ - *Interpretability.* Kernels can be inspected as FIR filters; two stacked temporal
+ convs allow longer effective kernels without parameter blow-up.
+
+ - **Block-3 — Spatial projection + condensation.**
+
+ - *Operations.*
+ - :class:`braindecode.modules.Conv2dWithConstraint` with kernel ``(n_chans, 1)``
+ and ``groups = filter_2`` (depthwise across filters)
+ - :class:`torch.nn.BatchNorm2d`
+ - :class:`torch.nn.ELU`
+ - :class:`torch.nn.AvgPool2d` (time)
+ - :class:`torch.nn.Dropout`.
+
+ *Role.* Learns per-filter spatial patterns over the **full montage** while temporal
+ pooling stabilizes and compresses features; max-norm encourages well-behaved spatial
+ weights similar to EEGNet practice.
+
+ - **Block-4 / Block-5 — Dilated temporal integration.**
+
+ - *Operations.*
+ - :class:`torch.nn.Conv2d` with kernels ``(1, k)`` and **dilations**
+ (e.g., 2 then 4);
+ - :class:`torch.nn.BatchNorm2d`
+ - :class:`torch.nn.ELU`
+ - :class:`torch.nn.AvgPool2d` (time)
+ - :class:`torch.nn.Dropout`
+ - :class:`torch.nn.Flatten`.
+
+ *Role.* Expands the temporal receptive field efficiently to capture rhythms and
+ long-range context after condensation.
+
+ - **Final Classifier — Max-norm linear.**
+
+ - *Operations.*
+ - :class:`braindecode.modules.LinearWithConstraint` maps the flattened
+ vector to the target classes; the max-norm constraint regularizes the readout.
+
+
+ .. rubric:: Convolutional Details
+
+ - **Temporal (where time-domain patterns are learned).**
+ Blocks 1-2 learn the primary filter bank (oscillations/transients), while Blocks 4-5
+ use **dilation** to integrate over longer horizons without extra pooling. The final
+ AvgPool in Block-5 sets the output token rate and helps noise suppression.
+
+ - **Spatial (how electrodes are processed).**
+ A *single* depthwise spatial conv (Block-3) spans the entire electrode set
+ (kernel ``(n_chans, 1)``), producing per-temporal-filter topographies; no cross-filter
+ mixing occurs at this stage, aiding interpretability.
+
+ - **Spectral (how frequency content is captured).**
+ Frequency selectivity emerges from the learned temporal kernels; dilation broadens effective
+ bandwidth coverage by composing multiple scales.
+
+ .. rubric:: Additional Mechanisms
+
+ - **EEGNeX-8,32 naming.** “8,32” indicates *8 temporal filters* and *kernel length 32*,
+ reflecting the paper's ablation path from EEGNet-8,2 toward thicker temporal kernels
+ and a deeper conv stack.
+ - **Max-norm constraints.** Spatial (Block-3) and final linear layers use max-norm
+ regularization—standard in EEG CNNs—to reduce overfitting and encourage stable spatial
+ patterns.
+
+ .. rubric:: Usage and Configuration
+
+ - **Kernel schedule.** Start with the canonical **EEGNeX-8,32** (``filter_1=8``,
+ ``kernel_block_1_2=32``) and keep **Block-3** depth multiplier modest (e.g., 2) to match
+ the paper's “pure conv” profile.
+ - **Pooling vs. dilation.** Use pooling in Blocks 3 and 5 to control compute and variance;
+ increase dilations (Blocks 4-5) to widen temporal context when windows are short.
+ - **Regularization.** Combine dropout (Blocks 3 & 5) with max-norm on spatial and
+ classifier layers; prefer ELU activations for stable training on small EEG datasets.
+
+
+ Notes
+ -----
+ - The braindecode implementation follows the paper's conv-only design with five blocks
+ and reproduces the depthwise spatial step and dilated temporal stack. See the class
+ reference for exact kernel sizes, dilations, and pooling defaults.
+
+ .. versionadded:: 1.1
+
 
  Parameters
  ----------
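The dilation schedule described in the EEGNeX docstring above can be quantified: each stacked convolution grows the temporal receptive field additively by ``(kernel - 1) * dilation``. A sketch (kernel sizes and dilations are illustrative, and the calculation ignores the intermediate pooling stages, which enlarge the field further when measured in input samples):

```python
# Effective receptive field of stacked 1-D (temporal) convolutions.
def receptive_field(layers):
    """layers: list of (kernel_size, dilation) pairs along the time axis."""
    rf = 1
    for k, d in layers:
        rf += (k - 1) * d  # each layer adds (k - 1) * dilation samples
    return rf

# Blocks 1-2 undilated, Blocks 4-5 dilated by 2 and 4 (illustrative sizes):
print(receptive_field([(32, 1), (32, 1), (16, 2), (16, 4)]))
```

This is why the dilated Blocks 4-5 widen temporal context far more cheaply than enlarging the kernels themselves would.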
@@ -0,0 +1 @@
+ __version__ = "1.2.0.dev180217551"
@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: braindecode
- Version: 1.2.0.dev175337561
+ Version: 1.2.0.dev180217551
  Summary: Deep learning software to decode EEG, ECG or MEG signals
  Author-email: Robin Tibor Schirrmeister <robintibor@gmail.com>
  Maintainer-email: Alexandre Gramfort <agramfort@meta.com>, Bruno Aristimunha Pinto <b.aristimunha@gmail.com>, Robin Tibor Schirrmeister <robintibor@gmail.com>
@@ -26,6 +26,7 @@ Enhancements
  - Improving the docstring for :class:`braindecode.models.EEGConformer` (:gh:`769` by `Bruno Aristimunha`_)
  - Improving the docstring for :class:`braindecode.models.ATCNet` (:gh:`771` by `Bruno Aristimunha`_)
  - Improving the docstring for :class:`braindecode.models.AttentionBaseNet` (:gh:`772` by `Bruno Aristimunha`_)
+ - Improving the docstring for :class:`braindecode.models.EEGNeX` (:gh:`773` by `Bruno Aristimunha`_)
 
 
  API changes
@@ -1 +0,0 @@
- __version__ = "1.2.0.dev175337561"