oikan 0.0.2.1__tar.gz → 0.0.2.2__tar.gz
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- {oikan-0.0.2.1 → oikan-0.0.2.2}/PKG-INFO +84 -21
- {oikan-0.0.2.1 → oikan-0.0.2.2}/README.md +83 -20
- {oikan-0.0.2.1 → oikan-0.0.2.2}/oikan.egg-info/PKG-INFO +84 -21
- {oikan-0.0.2.1 → oikan-0.0.2.2}/pyproject.toml +1 -1
- {oikan-0.0.2.1 → oikan-0.0.2.2}/LICENSE +0 -0
- {oikan-0.0.2.1 → oikan-0.0.2.2}/oikan/__init__.py +0 -0
- {oikan-0.0.2.1 → oikan-0.0.2.2}/oikan/exceptions.py +0 -0
- {oikan-0.0.2.1 → oikan-0.0.2.2}/oikan/model.py +0 -0
- {oikan-0.0.2.1 → oikan-0.0.2.2}/oikan/symbolic.py +0 -0
- {oikan-0.0.2.1 → oikan-0.0.2.2}/oikan/utils.py +0 -0
- {oikan-0.0.2.1 → oikan-0.0.2.2}/oikan.egg-info/SOURCES.txt +0 -0
- {oikan-0.0.2.1 → oikan-0.0.2.2}/oikan.egg-info/dependency_links.txt +0 -0
- {oikan-0.0.2.1 → oikan-0.0.2.2}/oikan.egg-info/requires.txt +0 -0
- {oikan-0.0.2.1 → oikan-0.0.2.2}/oikan.egg-info/top_level.txt +0 -0
- {oikan-0.0.2.1 → oikan-0.0.2.2}/setup.cfg +0 -0
- {oikan-0.0.2.1 → oikan-0.0.2.2}/setup.py +0 -0
{oikan-0.0.2.1 → oikan-0.0.2.2}/PKG-INFO +84 -21

````diff
@@ -1,6 +1,6 @@
 Metadata-Version: 2.4
 Name: oikan
-Version: 0.0.2.1
+Version: 0.0.2.2
 Summary: OIKAN: Optimized Interpretable Kolmogorov-Arnold Networks
 Author: Arman Zhalgasbayev
 License: MIT
@@ -17,7 +17,7 @@ Dynamic: license-file
 
 <!-- logo in the center -->
 <div align="center">
-  <img src="
+  <img src="docs/media/oikan_logo.png" alt="OIKAN Logo" width="200"/>
 
   <h1>OIKAN: Optimized Interpretable Kolmogorov-Arnold Networks</h1>
 </div>
@@ -43,9 +43,9 @@ OIKAN (Optimized Interpretable Kolmogorov-Arnold Networks) is a neuro-symbolic M
 
 OIKAN is based on Kolmogorov's superposition theorem, which states that any multivariate continuous function can be represented as a composition of single-variable functions. We leverage this theory by:
 
-1. Using neural networks to learn optimal basis functions
-2.
-3.
+1. Using neural networks to learn optimal basis functions through interpretable edge transformations
+2. Combining transformed features using learnable weights
+3. Automatically extracting human-readable symbolic formulas
 
 ## Quick Start
 
@@ -114,22 +114,85 @@ model.save_symbolic_formula("classification_formula.txt")
 
 ## Architecture Details
 
-OIKAN
-
-1. **
--
-
-
-
-2
-
-
-
-
-
-
-
+OIKAN implements a novel neuro-symbolic architecture based on Kolmogorov-Arnold representation theory through three specialized components:
+
+1. **Edge Symbolic Layer**: Learns interpretable single-variable transformations
+   - Adaptive basis function composition using 9 core functions:
+   ```python
+   ADVANCED_LIB = {
+       'x': ('x', lambda x: x),
+       'x^2': ('x^2', lambda x: x**2),
+       'x^3': ('x^3', lambda x: x**3),
+       'exp': ('exp(x)', lambda x: np.exp(x)),
+       'log': ('log(x)', lambda x: np.log(abs(x) + 1)),
+       'sqrt': ('sqrt(x)', lambda x: np.sqrt(abs(x))),
+       'tanh': ('tanh(x)', lambda x: np.tanh(x)),
+       'sin': ('sin(x)', lambda x: np.sin(x)),
+       'abs': ('abs(x)', lambda x: np.abs(x))
+   }
+   ```
+   - Each input feature is transformed through these basis functions
+   - Learnable weights determine the optimal combination
+
+2. **Neural Composition Layer**: Multi-layer feature aggregation
+   - Direct feature-to-feature connections through KAN layers
+   - Dropout regularization (p=0.1 default) for robust learning
+   - Gradient clipping (max_norm=1.0) for stable training
+   - User-configurable hidden layer dimensions
+
+3. **Symbolic Extraction Layer**: Generates production-ready formulas
+   - Weight-based term pruning (threshold=1e-4)
+   - Automatic coefficient optimization
+   - Human-readable mathematical expressions
+   - Exportable to lightweight production code
+
+### Architecture Diagram
+
+![OIKAN v0.0.2(2) Architecture](docs/media/oikan_v0.0.2(2)_architecture.png)
+
+### Key Design Principles
+
+1. **Interpretability First**: All transformations maintain clear mathematical meaning
+2. **Scikit-learn Compatibility**: Familiar `.fit()` and `.predict()` interface
+3. **Production Ready**: Export formulas as lightweight mathematical expressions
+4. **Automatic Simplification**: Remove insignificant terms (|w| < 1e-4)
+
+## Model Components
+
+1. **Symbolic Edge Functions**
+   ```python
+   class EdgeActivation(nn.Module):
+       """Learnable edge activation with basis functions"""
+       def forward(self, x):
+           return sum(self.weights[i] * basis[i](x) for i in range(self.num_basis))
+   ```
+
+2. **KAN Layer Implementation**
+   ```python
+   class KANLayer(nn.Module):
+       """Kolmogorov-Arnold Network layer"""
+       def forward(self, x):
+           edge_outputs = [self.edges[i](x[:,i]) for i in range(self.input_dim)]
+           return self.combine(edge_outputs)
+   ```
+
+3. **Formula Extraction**
+   ```python
+   def get_symbolic_formula(self):
+       """Extract interpretable mathematical expression"""
+       terms = []
+       for i, edge in enumerate(self.edges):
+           if abs(self.weights[i]) > threshold:
+               terms.append(f"{self.weights[i]:.4f} * {edge.formula}")
+       return " + ".join(terms)
+   ```
+
 
 ## Contributing
 
````
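The three numbered steps added to the README above track the two-level structure of the theorem they cite. For reference, a standard textbook statement of the Kolmogorov-Arnold superposition representation (not text from the package itself) is:

```latex
% Kolmogorov-Arnold superposition: every continuous f on [0,1]^n
% decomposes into univariate inner functions \phi_{q,p} and outer
% functions \Phi_q. OIKAN's edge transformations correspond roughly
% to the inner \phi_{q,p}; its learnable combination weights play
% the role of the outer \Phi_q.
f(x_1, \ldots, x_n) = \sum_{q=0}^{2n} \Phi_q\!\left( \sum_{p=1}^{n} \phi_{q,p}(x_p) \right)
```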
{oikan-0.0.2.1 → oikan-0.0.2.2}/README.md +83 -20

````diff
@@ -1,6 +1,6 @@
 <!-- logo in the center -->
 <div align="center">
-  <img src="
+  <img src="docs/media/oikan_logo.png" alt="OIKAN Logo" width="200"/>
 
   <h1>OIKAN: Optimized Interpretable Kolmogorov-Arnold Networks</h1>
 </div>
@@ -26,9 +26,9 @@ OIKAN (Optimized Interpretable Kolmogorov-Arnold Networks) is a neuro-symbolic M
 
 OIKAN is based on Kolmogorov's superposition theorem, which states that any multivariate continuous function can be represented as a composition of single-variable functions. We leverage this theory by:
 
-1. Using neural networks to learn optimal basis functions
-2.
-3.
+1. Using neural networks to learn optimal basis functions through interpretable edge transformations
+2. Combining transformed features using learnable weights
+3. Automatically extracting human-readable symbolic formulas
 
 ## Quick Start
 
@@ -97,22 +97,85 @@ model.save_symbolic_formula("classification_formula.txt")
 
 ## Architecture Details
 
-OIKAN
-
-1. **
--
-
-
-
-2
-
-
-
-
-
-
-
+OIKAN implements a novel neuro-symbolic architecture based on Kolmogorov-Arnold representation theory through three specialized components:
+
+1. **Edge Symbolic Layer**: Learns interpretable single-variable transformations
+   - Adaptive basis function composition using 9 core functions:
+   ```python
+   ADVANCED_LIB = {
+       'x': ('x', lambda x: x),
+       'x^2': ('x^2', lambda x: x**2),
+       'x^3': ('x^3', lambda x: x**3),
+       'exp': ('exp(x)', lambda x: np.exp(x)),
+       'log': ('log(x)', lambda x: np.log(abs(x) + 1)),
+       'sqrt': ('sqrt(x)', lambda x: np.sqrt(abs(x))),
+       'tanh': ('tanh(x)', lambda x: np.tanh(x)),
+       'sin': ('sin(x)', lambda x: np.sin(x)),
+       'abs': ('abs(x)', lambda x: np.abs(x))
+   }
+   ```
+   - Each input feature is transformed through these basis functions
+   - Learnable weights determine the optimal combination
+
+2. **Neural Composition Layer**: Multi-layer feature aggregation
+   - Direct feature-to-feature connections through KAN layers
+   - Dropout regularization (p=0.1 default) for robust learning
+   - Gradient clipping (max_norm=1.0) for stable training
+   - User-configurable hidden layer dimensions
+
+3. **Symbolic Extraction Layer**: Generates production-ready formulas
+   - Weight-based term pruning (threshold=1e-4)
+   - Automatic coefficient optimization
+   - Human-readable mathematical expressions
+   - Exportable to lightweight production code
+
+### Architecture Diagram
+
+![OIKAN v0.0.2(2) Architecture](docs/media/oikan_v0.0.2(2)_architecture.png)
+
+### Key Design Principles
+
+1. **Interpretability First**: All transformations maintain clear mathematical meaning
+2. **Scikit-learn Compatibility**: Familiar `.fit()` and `.predict()` interface
+3. **Production Ready**: Export formulas as lightweight mathematical expressions
+4. **Automatic Simplification**: Remove insignificant terms (|w| < 1e-4)
+
+## Model Components
+
+1. **Symbolic Edge Functions**
+   ```python
+   class EdgeActivation(nn.Module):
+       """Learnable edge activation with basis functions"""
+       def forward(self, x):
+           return sum(self.weights[i] * basis[i](x) for i in range(self.num_basis))
+   ```
+
+2. **KAN Layer Implementation**
+   ```python
+   class KANLayer(nn.Module):
+       """Kolmogorov-Arnold Network layer"""
+       def forward(self, x):
+           edge_outputs = [self.edges[i](x[:,i]) for i in range(self.input_dim)]
+           return self.combine(edge_outputs)
+   ```
+
+3. **Formula Extraction**
+   ```python
+   def get_symbolic_formula(self):
+       """Extract interpretable mathematical expression"""
+       terms = []
+       for i, edge in enumerate(self.edges):
+           if abs(self.weights[i]) > threshold:
+               terms.append(f"{self.weights[i]:.4f} * {edge.formula}")
+       return " + ".join(terms)
+   ```
+
 
 ## Contributing
 
````
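The `EdgeActivation` and formula-extraction snippets quoted in the diffs above are abbreviated: they omit `__init__`, imports, and the `threshold` definition. Below is a minimal self-contained sketch of the same edge-activation idea; the weight initialization, the reduced four-function basis, and the `formula` method are illustrative assumptions, not oikan's actual implementation.

```python
import torch
import torch.nn as nn

# A reduced four-function basis in the spirit of ADVANCED_LIB above;
# torch ops replace numpy so the functions work on tensors.
BASIS = {
    'x': ('x', lambda x: x),
    'x^2': ('x^2', lambda x: x**2),
    'log': ('log(x)', lambda x: torch.log(torch.abs(x) + 1)),
    'sin': ('sin(x)', lambda x: torch.sin(x)),
}

class EdgeActivation(nn.Module):
    """Learnable edge activation: a weighted sum of fixed basis functions."""
    def __init__(self):
        super().__init__()
        self.names = list(BASIS)
        # Small random initial weights (an assumption; the real init is unknown).
        self.weights = nn.Parameter(0.1 * torch.randn(len(self.names)))

    def forward(self, x):
        return sum(w * BASIS[n][1](x) for w, n in zip(self.weights, self.names))

    def formula(self, threshold=1e-4):
        """Weight-based term pruning, mirroring the README's description."""
        terms = [f"{w.item():.4f}*{BASIS[n][0]}"
                 for w, n in zip(self.weights, self.names)
                 if abs(w.item()) > threshold]
        return " + ".join(terms) if terms else "0"

edge = EdgeActivation()
x = torch.linspace(-1.0, 1.0, steps=5)
print(edge(x))         # the transformed feature values
print(edge.formula())  # e.g. "0.0832*x + -0.1105*x^2 + ..."
```

A KAN layer then applies one such edge per input feature and combines the results, which is what the quoted `KANLayer.forward` expresses.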
{oikan-0.0.2.1 → oikan-0.0.2.2}/oikan.egg-info/PKG-INFO +84 -21

````diff
@@ -1,6 +1,6 @@
 Metadata-Version: 2.4
 Name: oikan
-Version: 0.0.2.1
+Version: 0.0.2.2
 Summary: OIKAN: Optimized Interpretable Kolmogorov-Arnold Networks
 Author: Arman Zhalgasbayev
 License: MIT
@@ -17,7 +17,7 @@ Dynamic: license-file
 
 <!-- logo in the center -->
 <div align="center">
-  <img src="
+  <img src="docs/media/oikan_logo.png" alt="OIKAN Logo" width="200"/>
 
   <h1>OIKAN: Optimized Interpretable Kolmogorov-Arnold Networks</h1>
 </div>
@@ -43,9 +43,9 @@ OIKAN (Optimized Interpretable Kolmogorov-Arnold Networks) is a neuro-symbolic M
 
 OIKAN is based on Kolmogorov's superposition theorem, which states that any multivariate continuous function can be represented as a composition of single-variable functions. We leverage this theory by:
 
-1. Using neural networks to learn optimal basis functions
-2.
-3.
+1. Using neural networks to learn optimal basis functions through interpretable edge transformations
+2. Combining transformed features using learnable weights
+3. Automatically extracting human-readable symbolic formulas
 
 ## Quick Start
 
@@ -114,22 +114,85 @@ model.save_symbolic_formula("classification_formula.txt")
 
 ## Architecture Details
 
-OIKAN
-
-1. **
--
-
-
-
-2
-
-
-
-
-
-
-
+OIKAN implements a novel neuro-symbolic architecture based on Kolmogorov-Arnold representation theory through three specialized components:
+
+1. **Edge Symbolic Layer**: Learns interpretable single-variable transformations
+   - Adaptive basis function composition using 9 core functions:
+   ```python
+   ADVANCED_LIB = {
+       'x': ('x', lambda x: x),
+       'x^2': ('x^2', lambda x: x**2),
+       'x^3': ('x^3', lambda x: x**3),
+       'exp': ('exp(x)', lambda x: np.exp(x)),
+       'log': ('log(x)', lambda x: np.log(abs(x) + 1)),
+       'sqrt': ('sqrt(x)', lambda x: np.sqrt(abs(x))),
+       'tanh': ('tanh(x)', lambda x: np.tanh(x)),
+       'sin': ('sin(x)', lambda x: np.sin(x)),
+       'abs': ('abs(x)', lambda x: np.abs(x))
+   }
+   ```
+   - Each input feature is transformed through these basis functions
+   - Learnable weights determine the optimal combination
+
+2. **Neural Composition Layer**: Multi-layer feature aggregation
+   - Direct feature-to-feature connections through KAN layers
+   - Dropout regularization (p=0.1 default) for robust learning
+   - Gradient clipping (max_norm=1.0) for stable training
+   - User-configurable hidden layer dimensions
+
+3. **Symbolic Extraction Layer**: Generates production-ready formulas
+   - Weight-based term pruning (threshold=1e-4)
+   - Automatic coefficient optimization
+   - Human-readable mathematical expressions
+   - Exportable to lightweight production code
+
+### Architecture Diagram
+
+![OIKAN v0.0.2(2) Architecture](docs/media/oikan_v0.0.2(2)_architecture.png)
+
+### Key Design Principles
+
+1. **Interpretability First**: All transformations maintain clear mathematical meaning
+2. **Scikit-learn Compatibility**: Familiar `.fit()` and `.predict()` interface
+3. **Production Ready**: Export formulas as lightweight mathematical expressions
+4. **Automatic Simplification**: Remove insignificant terms (|w| < 1e-4)
+
+## Model Components
+
+1. **Symbolic Edge Functions**
+   ```python
+   class EdgeActivation(nn.Module):
+       """Learnable edge activation with basis functions"""
+       def forward(self, x):
+           return sum(self.weights[i] * basis[i](x) for i in range(self.num_basis))
+   ```
+
+2. **KAN Layer Implementation**
+   ```python
+   class KANLayer(nn.Module):
+       """Kolmogorov-Arnold Network layer"""
+       def forward(self, x):
+           edge_outputs = [self.edges[i](x[:,i]) for i in range(self.input_dim)]
+           return self.combine(edge_outputs)
+   ```
+
+3. **Formula Extraction**
+   ```python
+   def get_symbolic_formula(self):
+       """Extract interpretable mathematical expression"""
+       terms = []
+       for i, edge in enumerate(self.edges):
+           if abs(self.weights[i]) > threshold:
+               terms.append(f"{self.weights[i]:.4f} * {edge.formula}")
+       return " + ".join(terms)
+   ```
+
 
 ## Contributing
 
````
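The "Production Ready" principle in the added README text implies that, once a formula has been written out via `save_symbolic_formula`, inference no longer needs the trained network. A hedged sketch of that deployment pattern follows; the formula string is made up for illustration, not output from a real oikan model.

```python
# Sketch of the "lightweight deployment" idea: evaluate an extracted
# symbolic formula with plain numpy, no torch or trained model needed.
import numpy as np

# Hypothetical extracted formula: one term per surviving basis function
# with its learned coefficient (made-up numbers).
FORMULA = "0.8312*x + -0.2419*x^2 + 0.0531*sin(x)"

def predict(x):
    """Evaluate the extracted formula on an array of inputs."""
    # Restrict eval to a whitelisted namespace covering the basis library.
    env = {"x": x, "sin": np.sin, "exp": np.exp, "log": np.log,
           "sqrt": np.sqrt, "tanh": np.tanh, "abs": np.abs,
           "__builtins__": {}}
    # The README's basis names use '^' for powers; Python needs '**'.
    return eval(FORMULA.replace("^", "**"), env)

print(predict(np.array([0.0, 0.5, 1.0])))
```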