nmn-0.1.0-py3-none-any.whl → nmn-0.1.2-py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,119 @@
+ Metadata-Version: 2.4
+ Name: nmn
+ Version: 0.1.2
+ Summary: a neuron that matters
+ Project-URL: Homepage, https://github.com/mlnomadpy/nmn
+ Project-URL: Bug Tracker, https://github.com/mlnomadpy/nmn/issues
+ Author-email: Taha Bouhsine <yat@mlnomads.com>
+ License-File: LICENSE
+ Classifier: License :: OSI Approved :: GNU Affero General Public License v3
+ Classifier: Operating System :: OS Independent
+ Classifier: Programming Language :: Python :: 3
+ Requires-Python: >=3.8
+ Description-Content-Type: text/markdown
+
+ # nmn
+ Not the neurons we want, but the neurons we need
+
+ [![PyPI version](https://img.shields.io/pypi/v/nmn.svg)](https://pypi.org/project/nmn/)
+ [![Downloads](https://static.pepy.tech/badge/nmn)](https://pepy.tech/project/nmn)
+ [![Downloads/month](https://static.pepy.tech/badge/nmn/month)](https://pepy.tech/project/nmn)
+ [![GitHub stars](https://img.shields.io/github/stars/mlnomadpy/nmn?style=social)](https://github.com/mlnomadpy/nmn)
+ [![GitHub forks](https://img.shields.io/github/forks/mlnomadpy/nmn?style=social)](https://github.com/mlnomadpy/nmn)
+ [![GitHub issues](https://img.shields.io/github/issues/mlnomadpy/nmn)](https://github.com/mlnomadpy/nmn/issues)
+ [![PyPI - License](https://img.shields.io/pypi/l/nmn)](https://pypi.org/project/nmn/)
+ [![PyPI - Python Version](https://img.shields.io/pypi/pyversions/nmn)](https://pypi.org/project/nmn/)
+
+ ## Overview
+
+ **nmn** provides neural network layers for multiple frameworks (Flax Linen, Flax NNX, Keras, PyTorch, TensorFlow) that learn non-linearity without activation functions. The main goal is to enable deep learning architectures where the layer itself is inherently non-linear, inspired by the paper:
+
+ > Deep Learning 2.0: Artificial Neurons that Matter: Reject Correlation - Embrace Orthogonality
+
+ ## Math
+
+ Yat-Product:
+ $$
+ ⵟ(\mathbf{w},\mathbf{x}) := \frac{\langle \mathbf{w}, \mathbf{x} \rangle^2}{\|\mathbf{w} - \mathbf{x}\|^2 + \epsilon} = \frac{\|\mathbf{x}\|^2 \|\mathbf{w}\|^2 \cos^2 \theta}{\|\mathbf{w}\|^2 - 2\mathbf{w}^\top\mathbf{x} + \|\mathbf{x}\|^2 + \epsilon} = \frac{\|\mathbf{x}\|^2 \|\mathbf{w}\|^2 \cos^2 \theta}{(\mathbf{x}-\mathbf{w})\cdot(\mathbf{x}-\mathbf{w}) + \epsilon}.
+ $$
+
+ **Explanation:**
+ - $\mathbf{w}$ is the weight vector, $\mathbf{x}$ is the input vector.
+ - $\langle \mathbf{w}, \mathbf{x} \rangle$ is the dot product between $\mathbf{w}$ and $\mathbf{x}$.
+ - $\|\mathbf{w} - \mathbf{x}\|^2$ is the squared Euclidean distance between $\mathbf{w}$ and $\mathbf{x}$.
+ - $\epsilon$ is a small constant for numerical stability.
+ - $\theta$ is the angle between $\mathbf{w}$ and $\mathbf{x}$.
+
+ This operation:
+ - **Numerator:** Squares the similarity (dot product) between $\mathbf{w}$ and $\mathbf{x}$, emphasizing strong alignments.
+ - **Denominator:** Penalizes large distances, so the response is high only when $\mathbf{w}$ and $\mathbf{x}$ are both similar in direction and close in space.
+ - **No activation needed:** The non-linearity is built into the operation itself, allowing the layer to learn complex, non-linear relationships without a separate activation function.
+ - **Geometric view:** The output is maximized when $\mathbf{w}$ and $\mathbf{x}$ are both large in norm, closely aligned (small $\theta$), and close together in Euclidean space, as the sketch below illustrates.
+
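+ As a quick, illustrative sketch (not part of the published API), the Yat-product can be computed directly with `jax.numpy`; `eps` stands in for $\epsilon$ above:
+
+ ```python
+ import jax.numpy as jnp
+
+ def yat_product(w, x, eps=1e-6):
+     """Squared dot product, scaled down by the squared Euclidean distance."""
+     return jnp.dot(w, x) ** 2 / (jnp.sum((w - x) ** 2) + eps)
+
+ w = jnp.array([1.0, 2.0, 3.0])
+ x = jnp.array([1.0, 2.0, 2.5])
+ print(yat_product(w, x))  # large response: w and x are aligned and close
+ ```
+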
+ Yat-Conv:
+ $$
+ ⵟ^*(\mathbf{W}, \mathbf{X}) := \frac{\langle \mathbf{W}, \mathbf{X} \rangle^2}{\|\mathbf{W} - \mathbf{X}\|^2 + \epsilon}
+ = \frac{\left(\sum_{i,j} w_{ij} x_{ij}\right)^2}{\sum_{i,j} (w_{ij} - x_{ij})^2 + \epsilon}
+ $$
+
+ Where:
+ - $\mathbf{W}$ and $\mathbf{X}$ are local patches (e.g., kernel and input patch in convolution)
+ - $w_{ij}$ and $x_{ij}$ are elements of the kernel and input patch, respectively
+ - $\epsilon$ is a small constant for numerical stability
+
+ This generalizes the Yat-product to convolutional (patch-wise) operations, as the sketch below makes concrete.
+
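+ A minimal, illustrative single-channel sketch (the shipped `yatconv` modules handle channels, strides, and padding; this is not their actual implementation):
+
+ ```python
+ import jax.numpy as jnp
+
+ def yat_conv2d(X, W, eps=1e-6):
+     """Stride-1, valid-padding Yat-convolution: apply the Yat-product
+     to every kernel-sized patch of a single-channel image X."""
+     kh, kw = W.shape
+     H, Wd = X.shape
+     out = jnp.zeros((H - kh + 1, Wd - kw + 1))
+     for i in range(H - kh + 1):
+         for j in range(Wd - kw + 1):
+             patch = X[i:i + kh, j:j + kw]
+             num = jnp.sum(W * patch) ** 2          # (sum_ij w_ij x_ij)^2
+             den = jnp.sum((W - patch) ** 2) + eps  # sum_ij (w_ij - x_ij)^2 + eps
+             out = out.at[i, j].set(num / den)
+     return out
+ ```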
+
+ ## Supported Frameworks & Tasks
+
+ ### Flax (JAX)
+ - `YatNMN` layer implemented in `src/nmn/linen/nmn.py`
+ - **Tasks:**
+ - [x] Core layer implementation
+ - [ ] Recurrent layer (to be implemented)
+
+ ### NNX (Flax NNX)
+ - `YatNMN` layer implemented in `src/nmn/nnx/nmn.py`
+ - **Tasks:**
+ - [x] Core layer implementation
+ - [ ] Recurrent layer (to be implemented)
+
+ ### Keras
+ - `YatNMN` layer implemented in `src/nmn/keras/nmn.py`
+ - **Tasks:**
+ - [x] Core layer implementation
+ - [ ] Recurrent layer (to be implemented)
+
+ ### PyTorch
+ - `YatNMN` layer implemented in `src/nmn/torch/nmn.py`
+ - **Tasks:**
+ - [x] Core layer implementation
+ - [ ] Recurrent layer (to be implemented)
+
+ ### TensorFlow
+ - `YatNMN` layer implemented in `src/nmn/tf/nmn.py`
+ - **Tasks:**
+ - [x] Core layer implementation
+ - [ ] Recurrent layer (to be implemented)
+
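+ Across frameworks, the import path mirrors the source layout above, i.e. `nmn.<framework>.nmn` (illustrative imports; only the module paths listed above are assumed):
+
+ ```python
+ # Same YatNMN class name in each framework-specific module
+ from nmn.torch.nmn import YatNMN as TorchYatNMN
+ from nmn.keras.nmn import YatNMN as KerasYatNMN
+ from nmn.tf.nmn import YatNMN as TfYatNMN
+ ```
+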
+ ## Installation
+
+ ```bash
+ pip install nmn
+ ```
+
+ ## Usage Example (Flax)
+
+ ```python
+ from nmn.nnx.nmn import YatNMN
+ from nmn.nnx.yatconv import YatConv
+ # ... use as a Flax module ...
+ ```
+
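+ A slightly fuller sketch of the NNX path. The constructor arguments shown here are an assumption, modeled on `flax.nnx.Linear`; check the signatures in `src/nmn/nnx/` for the actual API:
+
+ ```python
+ import jax.numpy as jnp
+ from flax import nnx
+ from nmn.nnx.nmn import YatNMN
+
+ # Assumed signature, by analogy with nnx.Linear(in_features, out_features, rngs=...)
+ layer = YatNMN(4, 8, rngs=nnx.Rngs(0))
+ y = layer(jnp.ones((2, 4)))  # non-linearity is built in; no activation applied
+ print(y.shape)
+ ```
+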
+ ## Roadmap
+ - [ ] Implement recurrent layers for all frameworks
+ - [ ] Add more examples and benchmarks
+ - [ ] Improve documentation and API consistency
+
+ ## License
+ GNU Affero General Public License v3
@@ -5,7 +5,7 @@ nmn/nnx/nmn.py,sha256=hZDgMnGnSnBSqMbk-z7qUt8QsHEM-2o6CVWacXZfz3E,4870
  nmn/nnx/yatconv.py,sha256=EZx6g-KcuwrPNEVPl8YdQ16ZXkly_m0XvYCIoWVwFc0,11742
  nmn/tf/nmn.py,sha256=A-K65z9_aN62tAy12b0553nXxrzOofK1umGMRGJYjqw,6036
  nmn/torch/nmn.py,sha256=qOFOlH4_pCOQr_4ctGpEbnW3DAGQotijDTKu5aIEXaE,4609
- nmn-0.1.0.dist-info/METADATA,sha256=jn_ZGPLThl5Smnq1_eAwfYPjvPUkiUl5mJqUXtwm840,2189
- nmn-0.1.0.dist-info/WHEEL,sha256=qtCwoSJWgHk21S1Kb4ihdzI2rlJ1ZKaIurTj_ngOhyQ,87
- nmn-0.1.0.dist-info/licenses/LICENSE,sha256=kbZSd5WewnN2PSjvAC6DprP7pXx6NUNsnltmU2Mz1yA,34519
- nmn-0.1.0.dist-info/RECORD,,
+ nmn-0.1.2.dist-info/METADATA,sha256=MxRIZIm8TIcvUAyW-5gYBu88g4hF-upahr3e2tfrWE8,5030
+ nmn-0.1.2.dist-info/WHEEL,sha256=qtCwoSJWgHk21S1Kb4ihdzI2rlJ1ZKaIurTj_ngOhyQ,87
+ nmn-0.1.2.dist-info/licenses/LICENSE,sha256=kbZSd5WewnN2PSjvAC6DprP7pXx6NUNsnltmU2Mz1yA,34519
+ nmn-0.1.2.dist-info/RECORD,,
@@ -1,76 +0,0 @@
- Metadata-Version: 2.4
- Name: nmn
- Version: 0.1.0
- Summary: a neuron that matter
- Project-URL: Homepage, https://github.com/mlnomadpy/nmn
- Project-URL: Bug Tracker, https://github.com/mlnomadpy/my_package/issues
- Author-email: Taha Bouhsine <yat@mlnomads.com>
- License-File: LICENSE
- Classifier: License :: OSI Approved :: GNU Affero General Public License v3
- Classifier: Operating System :: OS Independent
- Classifier: Programming Language :: Python :: 3
- Requires-Python: >=3.8
- Description-Content-Type: text/markdown
-
- # nmn
- Not the neurons we want, but the neurons we need
-
- ## Overview
-
- **nmn** provides neural network layers for multiple frameworks (Flax, NNX, Keras, PyTorch, TensorFlow) that do not require activation functions to learn non-linearity. The main goal is to enable deep learning architectures where the layer itself is inherently non-linear, inspired by the paper:
-
- > Deep Learning 2.0: Artificial Neurons that Matter: Reject Correlation - Embrace Orthogonality
-
- ## Supported Frameworks & Tasks
-
- ### Flax (JAX)
- - `YatNMN` layer implemented in `src/nmn/linen/nmn.py`
- - **Tasks:**
- - [x] Core layer implementation
- - [ ] Recurrent layer (to be implemented)
-
- ### NNX (Flax NNX)
- - `YatNMN` layer implemented in `src/nmn/nnx/nmn.py`
- - **Tasks:**
- - [x] Core layer implementation
- - [ ] Recurrent layer (to be implemented)
-
- ### Keras
- - `YatNMN` layer implemented in `src/nmn/keras/nmn.py`
- - **Tasks:**
- - [x] Core layer implementation
- - [ ] Recurrent layer (to be implemented)
-
- ### PyTorch
- - `YatNMN` layer implemented in `src/nmn/torch/nmn.py`
- - **Tasks:**
- - [x] Core layer implementation
- - [ ] Recurrent layer (to be implemented)
-
- ### TensorFlow
- - `YatNMN` layer implemented in `src/nmn/tf/nmn.py`
- - **Tasks:**
- - [x] Core layer implementation
- - [ ] Recurrent layer (to be implemented)
-
- ## Installation
-
- ```bash
- pip install nmn
- ```
-
- ## Usage Example (Flax)
-
- ```python
- from nmn.nnx.nmn import YatNMN
- from nmn.nnx.yatconv import YatConv
- # ... use as a Flax module ...
- ```
-
- ## Roadmap
- - [ ] Implement recurrent layers for all frameworks
- - [ ] Add more examples and benchmarks
- - [ ] Improve documentation and API consistency
-
- ## License
- GNU Affero General Public License v3