nmn 0.1.3__tar.gz → 0.1.5__tar.gz
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- nmn-0.1.5/PKG-INFO +176 -0
- nmn-0.1.5/README.md +162 -0
- {nmn-0.1.3 → nmn-0.1.5}/pyproject.toml +1 -1
- nmn-0.1.5/src/nmn/nnx/examples/language/mingpt.py +1650 -0
- nmn-0.1.5/src/nmn/nnx/examples/vision/cnn_cifar.py +1769 -0
- {nmn-0.1.3 → nmn-0.1.5}/src/nmn/nnx/nmn.py +26 -15
- nmn-0.1.5/src/nmn/nnx/yatattention.py +764 -0
- {nmn-0.1.3 → nmn-0.1.5}/src/nmn/nnx/yatconv.py +41 -4
- {nmn-0.1.3 → nmn-0.1.5}/src/nmn/torch/nmn.py +2 -1
- nmn-0.1.3/PKG-INFO +0 -119
- nmn-0.1.3/README.md +0 -105
- {nmn-0.1.3 → nmn-0.1.5}/.github/workflows/publish.yml +0 -0
- {nmn-0.1.3 → nmn-0.1.5}/.gitignore +0 -0
- {nmn-0.1.3 → nmn-0.1.5}/LICENSE +0 -0
- {nmn-0.1.3 → nmn-0.1.5}/MANIFEST.in +0 -0
- {nmn-0.1.3 → nmn-0.1.5}/PUBLISH.md +0 -0
- {nmn-0.1.3 → nmn-0.1.5}/hatch.toml +0 -0
- {nmn-0.1.3 → nmn-0.1.5}/src/nmn/__init__.py +0 -0
- {nmn-0.1.3 → nmn-0.1.5}/src/nmn/keras/nmn.py +0 -0
- {nmn-0.1.3 → nmn-0.1.5}/src/nmn/linen/nmn.py +0 -0
- {nmn-0.1.3 → nmn-0.1.5}/src/nmn/tf/nmn.py +0 -0
nmn-0.1.5/PKG-INFO
ADDED
@@ -0,0 +1,176 @@
Metadata-Version: 2.4
Name: nmn
Version: 0.1.5
Summary: a neuron that matter
Project-URL: Homepage, https://github.com/mlnomadpy/nmn
Project-URL: Bug Tracker, https://github.com/mlnomadpy/my_package/issues
Author-email: Taha Bouhsine <yat@mlnomads.com>
License-File: LICENSE
Classifier: License :: OSI Approved :: GNU Affero General Public License v3
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 3
Requires-Python: >=3.8
Description-Content-Type: text/markdown

# nmn
Not the neurons we want, but the neurons we need

[](https://pypi.org/project/nmn/)
[](https://pepy.tech/project/nmn)
[](https://pepy.tech/project/nmn)
[](https://github.com/mlnomadpy/nmn)
[](https://github.com/mlnomadpy/nmn)
[](https://github.com/mlnomadpy/nmn/issues)
[](https://pypi.org/project/nmn/)
[](https://pypi.org/project/nmn/)

## Features

* **Activation-Free Non-linearity:** Learns complex, non-linear relationships without separate activation functions.
* **Multiple Frameworks:** Supports Flax (Linen & NNX), Keras, PyTorch, and TensorFlow.
* **Yat-Product & Yat-Conv:** Implements novel Yat-Product and Yat-Conv operations.
* **Inspired by Research:** Based on the principles from "Deep Learning 2.0/2.1: Artificial Neurons that Matter".

## Overview

**nmn** provides neural network layers for multiple frameworks (Flax Linen, Flax NNX, Keras, PyTorch, TensorFlow) that do not require activation functions to learn non-linearity. The main goal is to enable deep learning architectures where the layer itself is inherently non-linear, inspired by the papers:

> Deep Learning 2.0: Artificial Neurons that Matter: Reject Correlation - Embrace Orthogonality
>
> Deep Learning 2.1: Mind and Cosmos - Towards Cosmos-Inspired Interpretable Neural Networks

## Math

Yat-Product:
$$
ⵟ(\mathbf{w},\mathbf{x}) := \frac{\langle \mathbf{w}, \mathbf{x} \rangle^2}{\|\mathbf{w} - \mathbf{x}\|^2 + \epsilon} = \frac{\|\mathbf{x}\|^2 \|\mathbf{w}\|^2 \cos^2 \theta}{\|\mathbf{w}\|^2 - 2\mathbf{w}^\top\mathbf{x} + \|\mathbf{x}\|^2 + \epsilon} = \frac{\|\mathbf{x}\|^2 \|\mathbf{w}\|^2 \cos^2 \theta}{(\mathbf{x}-\mathbf{w})\cdot(\mathbf{x}-\mathbf{w}) + \epsilon}.
$$

**Explanation:**
- $\mathbf{w}$ is the weight vector, $\mathbf{x}$ is the input vector.
- $\langle \mathbf{w}, \mathbf{x} \rangle$ is the dot product between $\mathbf{w}$ and $\mathbf{x}$.
- $\|\mathbf{w} - \mathbf{x}\|^2$ is the squared Euclidean distance between $\mathbf{w}$ and $\mathbf{x}$.
- $\epsilon$ is a small constant for numerical stability.
- $\theta$ is the angle between $\mathbf{w}$ and $\mathbf{x}$.

This operation:
- **Numerator:** Squares the similarity (dot product) between $\mathbf{w}$ and $\mathbf{x}$, emphasizing strong alignments.
- **Denominator:** Penalizes large distances, so the response is high only when $\mathbf{w}$ and $\mathbf{x}$ are both similar in direction and close in space.
- **No activation needed:** The non-linearity is built into the operation itself, allowing the layer to learn complex, non-linear relationships without a separate activation function.
- **Geometric view:** The output is maximized when $\mathbf{w}$ and $\mathbf{x}$ are both large in norm, closely aligned (small $\theta$), and close together in Euclidean space.
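
For concreteness, here is a minimal sketch of the Yat-product for a single weight vector, written in plain `jax.numpy`. It illustrates the formula above and is not the library's implementation; the function name and the `eps` value are illustrative choices:

```python
import jax.numpy as jnp

def yat_product(w, x, eps=1e-6):
    # <w, x>^2 / (||w - x||^2 + eps), as defined above
    dot = jnp.dot(w, x)
    sq_dist = jnp.sum((w - x) ** 2)
    return dot ** 2 / (sq_dist + eps)

w = jnp.array([1.0, 2.0, -0.5])
x_close = jnp.array([0.9, 2.1, -0.4])   # aligned with w and nearby: large response
x_far = -x_close                        # same squared alignment, but far from w: small response
print(yat_product(w, x_close), yat_product(w, x_far))
```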

Yat-Conv:
$$
ⵟ^*(\mathbf{W}, \mathbf{X}) := \frac{\langle \mathbf{W}, \mathbf{X} \rangle^2}{\|\mathbf{W} - \mathbf{X}\|^2 + \epsilon}
= \frac{\left(\sum_{i,j} w_{ij} x_{ij}\right)^2}{\sum_{i,j} (w_{ij} - x_{ij})^2 + \epsilon}
$$

Where:
- $\mathbf{W}$ and $\mathbf{X}$ are local patches (e.g., kernel and input patch in convolution)
- $w_{ij}$ and $x_{ij}$ are elements of the kernel and input patch, respectively
- $\epsilon$ is a small constant for numerical stability

This generalizes the Yat-product to convolutional (patch-wise) operations.
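
As an illustration only (not the package's `YatConv` code), a naive single-channel, stride-1, valid-padding version of this patch-wise operation can be written directly from the formula; the loop-based form below is deliberately simple:

```python
import jax.numpy as jnp

def yat_conv2d_single(kernel, image, eps=1e-6):
    # Slide the kernel over the image and apply the Yat-product to each patch.
    kh, kw = kernel.shape
    H, W = image.shape
    out = jnp.zeros((H - kh + 1, W - kw + 1))
    for i in range(H - kh + 1):
        for j in range(W - kw + 1):
            patch = image[i:i + kh, j:j + kw]
            num = jnp.sum(kernel * patch) ** 2          # (sum_ij w_ij x_ij)^2
            den = jnp.sum((kernel - patch) ** 2) + eps  # sum_ij (w_ij - x_ij)^2 + eps
            out = out.at[i, j].set(num / den)
    return out

kernel = jnp.ones((3, 3))
image = jnp.arange(36.0).reshape(6, 6)
print(yat_conv2d_single(kernel, image).shape)  # (4, 4)
```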

## Supported Frameworks & API

The `YatNMN` layer (for dense operations) and `YatConv` (for convolutional operations) are the core components. Below is a summary of their availability and features per framework:

| Framework | `YatNMN` Path | `YatConv` Path | Core Layer | DropConnect | Ternary Network | Recurrent Layer |
|------------------|--------------------------|--------------------------|------------|-------------|-----------------|-----------------|
| **Flax (Linen)** | `src/nmn/linen/nmn.py` | (Available) | ✅ | | | 🚧 |
| **Flax (NNX)** | `src/nmn/nnx/nmn.py` | `src/nmn/nnx/yatconv.py` | ✅ | ✅ | 🚧 | 🚧 |
| **Keras** | `src/nmn/keras/nmn.py` | (Available) | ✅ | | | 🚧 |
| **PyTorch** | `src/nmn/torch/nmn.py` | (Available) | ✅ | | | 🚧 |
| **TensorFlow** | `src/nmn/tf/nmn.py` | (Available) | ✅ | | | 🚧 |

*Legend: ✅ implemented; 🚧 to be implemented / in progress; (Available) means the convolutional layer is assumed to ship alongside `YatNMN` for that framework, though its exact path may vary or it may live inside the NMN module. See the import sketch below.*
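
The module paths in the table translate to imports along the following lines. The NNX imports match the usage example further down; the class names for the other frameworks are assumptions here and should be checked against the listed files:

```python
from nmn.nnx.nmn import YatNMN       # Flax NNX dense layer (used in the example below)
from nmn.nnx.yatconv import YatConv  # Flax NNX convolution

# Assumed equivalents for the other frameworks, per the paths in the table:
# from nmn.linen.nmn import YatNMN   # Flax Linen
# from nmn.keras.nmn import YatNMN   # Keras
# from nmn.torch.nmn import YatNMN   # PyTorch
# from nmn.tf.nmn import YatNMN      # TensorFlow
```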

## Installation

```bash
pip install nmn
```

## Usage Example (Flax NNX)

```python
import jax
import jax.numpy as jnp
from flax import nnx
from nmn.nnx.nmn import YatNMN
from nmn.nnx.yatconv import YatConv

# Example YatNMN (Dense Layer)
model_key, param_key, drop_key, input_key = jax.random.split(jax.random.key(0), 4)
in_features, out_features = 3, 4
layer = YatNMN(in_features=in_features, out_features=out_features, rngs=nnx.Rngs(params=param_key, dropout=drop_key))
dummy_input = jax.random.normal(input_key, (2, in_features))  # Batch size 2
output = layer(dummy_input)
print("YatNMN Output Shape:", output.shape)

# Example YatConv (Convolutional Layer)
conv_key, conv_param_key, conv_input_key = jax.random.split(jax.random.key(1), 3)
in_channels, out_channels = 3, 8
kernel_size = (3, 3)
conv_layer = YatConv(
    in_features=in_channels,
    out_features=out_channels,
    kernel_size=kernel_size,
    rngs=nnx.Rngs(params=conv_param_key)
)
dummy_conv_input = jax.random.normal(conv_input_key, (1, 28, 28, in_channels))  # Batch 1, 28x28 image, in_channels
conv_output = conv_layer(dummy_conv_input)
print("YatConv Output Shape:", conv_output.shape)
```

*Note: Examples for other frameworks (Keras, PyTorch, TensorFlow, Flax Linen) can be found in their respective `nmn.<framework>` modules and upcoming documentation.*
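
For instance, a hypothetical PyTorch equivalent might look like the sketch below; the constructor arguments are assumed to mirror the NNX example (`in_features`, `out_features`) and are not confirmed by this diff, so consult `src/nmn/torch/nmn.py` for the actual signature:

```python
import torch
from nmn.torch.nmn import YatNMN  # path per the framework table above

# Hypothetical usage: argument names assumed, not taken from the package docs.
layer = YatNMN(in_features=3, out_features=4)
x = torch.randn(2, 3)        # batch of 2 inputs
print(layer(x).shape)        # expected: torch.Size([2, 4])
```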

## Roadmap

- [ ] Implement recurrent layers (`YatRNN`, `YatLSTM`, `YatGRU`) for all supported frameworks.
- [ ] Develop Ternary Network versions of Yat layers for NNX.
- [ ] Add more comprehensive examples and benchmark scripts for various tasks (vision, language).
- [ ] Publish detailed documentation and API references.
- [ ] Conduct and publish thorough performance benchmarks against traditional layers.

## Contributing

Contributions are welcome! If you'd like to contribute, please feel free to:
- Open an issue on the [Bug Tracker](https://github.com/mlnomadpy/nmn/issues) to report bugs or suggest features.
- Submit a pull request with your improvements.
- Help expand the documentation or add more examples.

## License

This project is licensed under the **GNU Affero General Public License v3**. See the [LICENSE](LICENSE) file for details.

## Citation

If you use `nmn` in your research, please consider citing the papers that inspired this work:

> Deep Learning 2.0: Artificial Neurons that Matter: Reject Correlation - Embrace Orthogonality
>
> Deep Learning 2.1: Mind and Cosmos - Towards Cosmos-Inspired Interpretable Neural Networks

```bibtex
@article{taha2024dl2,
  author = {Taha Bouhsine},
  title  = {Deep Learning 2.0: Artificial Neurons that Matter: Reject Correlation - Embrace Orthogonality},
}
```

```bibtex
@article{taha2025dl2,
  author = {Taha Bouhsine},
  title  = {Deep Learning 2.1: Mind and Cosmos - Towards Cosmos-Inspired Interpretable Neural Networks},
}
```

A BibTeX entry for this library will be added once its accompanying paper is published.
nmn-0.1.5/README.md
ADDED
@@ -0,0 +1,162 @@
(Content identical to the Markdown body of nmn-0.1.5/PKG-INFO above, from the "# nmn" heading through the closing BibTeX entry.)