sklearnk 0.1.0__tar.gz
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- sklearnk-0.1.0/PKG-INFO +85 -0
- sklearnk-0.1.0/README.md +75 -0
- sklearnk-0.1.0/pyproject.toml +20 -0
- sklearnk-0.1.0/setup.cfg +4 -0
- sklearnk-0.1.0/sklearnk/__init__.py +12 -0
- sklearnk-0.1.0/sklearnk/program1.py +81 -0
- sklearnk-0.1.0/sklearnk/program10.py +24 -0
- sklearnk-0.1.0/sklearnk/program11.py +25 -0
- sklearnk-0.1.0/sklearnk/program12.py +22 -0
- sklearnk-0.1.0/sklearnk/program2.py +40 -0
- sklearnk-0.1.0/sklearnk/program3.py +108 -0
- sklearnk-0.1.0/sklearnk/program4.py +43 -0
- sklearnk-0.1.0/sklearnk/program5.py +55 -0
- sklearnk-0.1.0/sklearnk/program6.py +44 -0
- sklearnk-0.1.0/sklearnk/program7.py +32 -0
- sklearnk-0.1.0/sklearnk/program8.py +36 -0
- sklearnk-0.1.0/sklearnk/program9.py +41 -0
- sklearnk-0.1.0/sklearnk.egg-info/PKG-INFO +85 -0
- sklearnk-0.1.0/sklearnk.egg-info/SOURCES.txt +20 -0
- sklearnk-0.1.0/sklearnk.egg-info/dependency_links.txt +1 -0
- sklearnk-0.1.0/sklearnk.egg-info/requires.txt +3 -0
- sklearnk-0.1.0/sklearnk.egg-info/top_level.txt +1 -0
sklearnk-0.1.0/PKG-INFO
ADDED
@@ -0,0 +1,85 @@
Metadata-Version: 2.4
Name: sklearnk
Version: 0.1.0
Summary: A collection of AI & ML lab programs
Author-email: Antigravity <antigravity@example.com>
Description-Content-Type: text/markdown
Requires-Dist: numpy
Requires-Dist: matplotlib
Requires-Dist: scikit-learn

# sklearnk

A collection of AI & ML lab programs extracted from the ScalarVerse project.

## Installation

```bash
pip install .
```

### For Older Computers

If `pip` is not available or the machine is very old, you can fall back to the traditional setup script:

```bash
python setup.py install
```

## Usage

Each program is exposed as a function in the `sklearnk` package. You can run them individually:

```python
import sklearnk

# Run Tic Tac Toe
sklearnk.program1()

# Run Alpha Beta Pruning
sklearnk.program2()

# Run 8 Puzzle (A* Algorithm)
sklearnk.program3()

# Run Hill Climbing
sklearnk.program4()

# Run Logistic Regression
sklearnk.program5()

# Run Naive Bayes
sklearnk.program6()

# Run K-Nearest Neighbors
sklearnk.program7()

# Run K-Means Clustering
sklearnk.program8()

# Run Logistic Regression (sklearn version)
sklearnk.program9()

# Run Naive Bayes (sklearn version)
sklearnk.program10()

# Run KNN (sklearn version)
sklearnk.program11()

# Run K-Means (sklearn version)
sklearnk.program12()
```

## Programs Included

1. **program1**: Tic Tac Toe (Minimax)
2. **program2**: Alpha Beta Pruning
3. **program3**: 8 Puzzle (A* Algorithm)
4. **program4**: Hill Climbing
5. **program5**: Logistic Regression (from scratch)
6. **program6**: Naive Bayes (from scratch)
7. **program7**: K-Nearest Neighbors (from scratch)
8. **program8**: K-Means Clustering (from scratch)
9. **program9**: Logistic Regression (using sklearn)
10. **program10**: Naive Bayes (using sklearn)
11. **program11**: KNN (using sklearn)
12. **program12**: K-Means (using sklearn)

sklearnk-0.1.0/README.md
ADDED
@@ -0,0 +1,75 @@
# sklearnk

A collection of AI & ML lab programs extracted from the ScalarVerse project.

## Installation

```bash
pip install .
```

### For Older Computers

If `pip` is not available or the machine is very old, you can fall back to the traditional setup script:

```bash
python setup.py install
```

## Usage

Each program is exposed as a function in the `sklearnk` package. You can run them individually:

```python
import sklearnk

# Run Tic Tac Toe
sklearnk.program1()

# Run Alpha Beta Pruning
sklearnk.program2()

# Run 8 Puzzle (A* Algorithm)
sklearnk.program3()

# Run Hill Climbing
sklearnk.program4()

# Run Logistic Regression
sklearnk.program5()

# Run Naive Bayes
sklearnk.program6()

# Run K-Nearest Neighbors
sklearnk.program7()

# Run K-Means Clustering
sklearnk.program8()

# Run Logistic Regression (sklearn version)
sklearnk.program9()

# Run Naive Bayes (sklearn version)
sklearnk.program10()

# Run KNN (sklearn version)
sklearnk.program11()

# Run K-Means (sklearn version)
sklearnk.program12()
```

## Programs Included

1. **program1**: Tic Tac Toe (Minimax)
2. **program2**: Alpha Beta Pruning
3. **program3**: 8 Puzzle (A* Algorithm)
4. **program4**: Hill Climbing
5. **program5**: Logistic Regression (from scratch)
6. **program6**: Naive Bayes (from scratch)
7. **program7**: K-Nearest Neighbors (from scratch)
8. **program8**: K-Means Clustering (from scratch)
9. **program9**: Logistic Regression (using sklearn)
10. **program10**: Naive Bayes (using sklearn)
11. **program11**: KNN (using sklearn)
12. **program12**: K-Means (using sklearn)

sklearnk-0.1.0/pyproject.toml
ADDED
@@ -0,0 +1,20 @@
[build-system]
requires = ["setuptools>=61.0"]
build-backend = "setuptools.build_meta"

[project]
name = "sklearnk"
version = "0.1.0"
description = "A collection of AI & ML lab programs"
readme = "README.md"
dependencies = [
    "numpy",
    "matplotlib",
    "scikit-learn"
]
authors = [
    { name = "Antigravity", email = "antigravity@example.com" }
]

[tool.setuptools]
packages = ["sklearnk"]

sklearnk-0.1.0/sklearnk/__init__.py
ADDED
@@ -0,0 +1,12 @@
from .program1 import run as program1
from .program2 import run as program2
from .program3 import run as program3
from .program4 import run as program4
from .program5 import run as program5
from .program6 import run as program6
from .program7 import run as program7
from .program8 import run as program8
from .program9 import run as program9
from .program10 import run as program10
from .program11 import run as program11
from .program12 import run as program12

sklearnk-0.1.0/sklearnk/program1.py
ADDED
@@ -0,0 +1,81 @@
def run():
    class TicTacToe:
        def __init__(self):
            self.board = [[' ']*3 for _ in range(3)]
            self.player = 'X'

        def print_board(self):
            for row in self.board:
                print(' | '.join(row))
                print('-' * 5)

        def is_winner(self, player):
            for row in self.board:
                if row.count(player) == 3:
                    return True
            for col in zip(*self.board):
                if list(col).count(player) == 3:
                    return True
            if all(self.board[i][i] == player for i in range(3)) or all(self.board[i][2-i] == player for i in range(3)):
                return True
            return False

        def is_draw(self):
            return all(cell != ' ' for row in self.board for cell in row)

        def dfs(self, player):
            winner = 'X' if self.is_winner('X') else ('O' if self.is_winner('O') else None)
            if winner:
                return {'score': 1 if winner == 'X' else -1}
            if self.is_draw():
                return {'score': 0}

            best = {'score': -float('inf')} if player == 'X' else {'score': float('inf')}
            for i in range(3):
                for j in range(3):
                    if self.board[i][j] == ' ':
                        self.board[i][j] = player
                        score = self.dfs('O' if player == 'X' else 'X')
                        self.board[i][j] = ' '
                        score['row'], score['col'] = i, j

                        if player == 'X' and score['score'] > best['score']:
                            best = score
                        elif player == 'O' and score['score'] < best['score']:
                            best = score
            return best

        def play(self):
            while True:
                self.print_board()
                if self.is_winner('X'):
                    print("Player X wins!")
                    break
                if self.is_winner('O'):
                    print("Player O wins!")
                    break
                if self.is_draw():
                    print("It's a draw!")
                    break
                if self.player == 'X':
                    best_move = self.dfs('X')
                    self.board[best_move['row']][best_move['col']] = 'X'
                else:
                    while True:
                        try:
                            i, j = map(int, input("Enter row and column (0-2): ").split())
                            if self.board[i][j] == ' ':
                                self.board[i][j] = 'O'
                                break
                            else:
                                print("Invalid move. Try again.")
                        except (ValueError, IndexError):
                            print("Enter valid numbers between 0 and 2.")
                self.player = 'O' if self.player == 'X' else 'X'

    game = TicTacToe()
    game.play()

if __name__ == '__main__':
    run()

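As a quick sanity check of the minimax idea used in `program1` above, here is a standalone sketch (same scoring scheme, no I/O; the board position below is made up for illustration) confirming that X picks the immediately winning square:

```python
import math

# Same 3x3 win test as program1, expressed over rows, columns, diagonals.
def winner(b, p):
    lines = [list(r) for r in b] + [list(c) for c in zip(*b)] + \
            [[b[i][i] for i in range(3)], [b[i][2 - i] for i in range(3)]]
    return any(all(cell == p for cell in line) for line in lines)

# Exhaustive minimax: X maximizes, O minimizes; mirrors program1's dfs().
def dfs(b, player):
    if winner(b, 'X'): return {'score': 1}
    if winner(b, 'O'): return {'score': -1}
    if all(cell != ' ' for row in b for cell in row): return {'score': 0}
    best = {'score': -math.inf if player == 'X' else math.inf}
    for i in range(3):
        for j in range(3):
            if b[i][j] == ' ':
                b[i][j] = player
                score = dict(dfs(b, 'O' if player == 'X' else 'X'))
                b[i][j] = ' '
                score['row'], score['col'] = i, j
                better = (score['score'] > best['score']) if player == 'X' \
                         else (score['score'] < best['score'])
                if better: best = score
    return best

# X has two in a row; (0, 2) completes it, so minimax must choose it.
board = [['X', 'X', ' '],
         ['O', 'O', ' '],
         [' ', ' ', ' ']]
move = dfs(board, 'X')
print(move)
```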
sklearnk-0.1.0/sklearnk/program10.py
ADDED
@@ -0,0 +1,24 @@
def run():
    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import GaussianNB
    from sklearn.metrics import accuracy_score, confusion_matrix, classification_report

    iris = load_iris()
    X, y = iris.data, iris.target

    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=1)

    nb = GaussianNB()
    nb.fit(Xtr, ytr)

    y_pred = nb.predict(Xte)

    print("Accuracy: %.4f" % accuracy_score(yte, y_pred))
    print("Predictions:", iris.target_names[y_pred])
    print("\nConfusion Matrix:\n", confusion_matrix(yte, y_pred))
    print("\nClassification Report:\n", classification_report(yte, y_pred, target_names=iris.target_names))

if __name__ == '__main__':
    run()

sklearnk-0.1.0/sklearnk/program11.py
ADDED
@@ -0,0 +1,25 @@
def run():
    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.metrics import accuracy_score, confusion_matrix, classification_report

    iris = load_iris()
    X, y = iris.data, iris.target
    class_names = iris.target_names

    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=1)

    knn = KNeighborsClassifier(n_neighbors=3)
    knn.fit(Xtr, ytr)

    y_pred = knn.predict(Xte)

    print("Accuracy: %.4f" % accuracy_score(yte, y_pred))
    print("Predictions:", class_names[y_pred])
    print("\nConfusion Matrix:\n", confusion_matrix(yte, y_pred))
    print("\nClassification Report:\n", classification_report(yte, y_pred, target_names=class_names))

if __name__ == '__main__':
    run()

sklearnk-0.1.0/sklearnk/program12.py
ADDED
@@ -0,0 +1,22 @@
def run():
    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.datasets import load_iris
    from sklearn.cluster import KMeans

    iris = load_iris()
    X = iris.data

    kmeans = KMeans(n_clusters=3, random_state=42, n_init=10)
    labels = kmeans.fit_predict(X)
    centroids = kmeans.cluster_centers_

    colors = ['r','g','b']

    for i in range(3): plt.scatter(X[labels==i,0], X[labels==i,1], c=colors[i], label=f'Cluster {i+1}')
    plt.scatter(centroids[:,0], centroids[:,1], c='black', marker='x', s=100, label='Centroids')
    plt.xlabel('Sepal Length'); plt.ylabel('Sepal Width'); plt.title('K-Means Clustering on Iris Dataset')
    plt.legend(); plt.show()

if __name__ == '__main__':
    run()

sklearnk-0.1.0/sklearnk/program2.py
ADDED
@@ -0,0 +1,40 @@
def run():
    INF = 1000

    def minimax(depth, index, is_max, values, alpha, beta):
        # Leaf node
        if depth == 3:
            return values[index]

        # MAX player's turn
        if is_max:
            best = -INF
            for child in range(2):
                value = minimax(depth + 1, index * 2 + child,
                                False, values, alpha, beta)
                best = max(best, value)
                alpha = max(alpha, best)

                if beta <= alpha:  # pruning
                    break
            return best

        # MIN player's turn
        else:
            best = INF
            for child in range(2):
                value = minimax(depth + 1, index * 2 + child,
                                True, values, alpha, beta)
                best = min(best, value)
                beta = min(beta, best)

                if beta <= alpha:  # pruning
                    break
            return best

    values = [3, 5, 6, 9, 1, 2, 0, -1]
    print("Optimal value:", minimax(0, 0, True, values, -INF, INF))

if __name__ == '__main__':
    run()

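To see that the pruning in `program2` actually skips subtrees, here is the same search instrumented with a leaf counter (the counter is an addition for illustration, not part of the package). For the depth-3 tree over `[3, 5, 6, 9, 1, 2, 0, -1]` the optimal value is 5, and alpha-beta evaluates fewer than all 8 leaves:

```python
INF = 1000
visited = []   # records every leaf that is actually evaluated

def minimax(depth, index, is_max, values, alpha, beta):
    if depth == 3:
        visited.append(values[index])
        return values[index]
    best = -INF if is_max else INF
    for child in range(2):
        value = minimax(depth + 1, index * 2 + child,
                        not is_max, values, alpha, beta)
        if is_max:
            best = max(best, value)
            alpha = max(alpha, best)
        else:
            best = min(best, value)
            beta = min(beta, best)
        if beta <= alpha:   # prune the remaining sibling subtree
            break
    return best

values = [3, 5, 6, 9, 1, 2, 0, -1]
optimal = minimax(0, 0, True, values, -INF, INF)
print(optimal, len(visited))
```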
sklearnk-0.1.0/sklearnk/program3.py
ADDED
@@ -0,0 +1,108 @@
def run():
    import heapq

    # Possible moves: up, down, left, right
    MOVES = [(-1,0), (1,0), (0,-1), (0,1)]

    def print_state(state):
        for row in state:
            print(" ".join(map(str, row)))
        print()

    def heuristic(state, goal):
        # Misplaced tiles heuristic (ignore blank = 0)
        count = 0
        for i in range(3):
            for j in range(3):
                if state[i][j] != 0:
                    if state[i][j] != goal[i][j]:
                        count += 1
        return count

    def find_blank(state):
        for i in range(3):
            for j in range(3):
                if state[i][j] == 0:
                    return i, j

    def get_neighbors(state):
        x, y = find_blank(state)
        neighbors = []

        for dx, dy in MOVES:
            nx, ny = x + dx, y + dy
            if 0 <= nx < 3 and 0 <= ny < 3:
                new_state = [list(row) for row in state]
                new_state[x][y], new_state[nx][ny] = new_state[nx][ny], new_state[x][y]
                neighbors.append(tuple(tuple(row) for row in new_state))

        return neighbors

    def a_star(start, goal):
        start = tuple(tuple(row) for row in start)
        goal = tuple(tuple(row) for row in goal)

        open_list = []
        heapq.heappush(open_list, (heuristic(start, goal), 0, start))
        closed = set()

        parent = {start: None}
        g_score = {start: 0}  # Track g-scores separately

        while open_list:
            f, g, current = heapq.heappop(open_list)

            if current == goal:
                # Reconstruct path
                path = []
                while current:
                    path.append(current)
                    current = parent[current]
                path.reverse()

                print("Solution path:\n")
                for step in path:
                    print_state(step)

                print("Total moves:", len(path) - 1)
                return

            closed.add(current)

            for neighbor in get_neighbors(current):
                if neighbor in closed:
                    continue

                g_new = g + 1

                # If we found a better path to this neighbor
                if neighbor not in g_score or g_new < g_score[neighbor]:
                    parent[neighbor] = current
                    g_score[neighbor] = g_new
                    f_new = g_new + heuristic(neighbor, goal)
                    heapq.heappush(open_list, (f_new, g_new, neighbor))

        print("No solution found")

    initial_state = [
        [1, 2, 3],
        [8, 0, 4],
        [7, 6, 5]
    ]

    goal_state = [
        [2, 8, 1],
        [0, 4, 3],
        [7, 6, 5]
    ]

    a_star(initial_state, goal_state)

if __name__ == '__main__':
    run()

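The misplaced-tiles heuristic in `program3` is easy to verify by hand. For the exact start and goal boards used above, five non-blank tiles differ, so the A* search begins with h = 5 (a sketch, recomputing the same count standalone):

```python
# Count non-blank tiles that are not in their goal position (blank = 0),
# exactly as program3's heuristic() does.
def heuristic(state, goal):
    return sum(
        1
        for i in range(3)
        for j in range(3)
        if state[i][j] != 0 and state[i][j] != goal[i][j]
    )

start = [[1, 2, 3], [8, 0, 4], [7, 6, 5]]
goal = [[2, 8, 1], [0, 4, 3], [7, 6, 5]]
h = heuristic(start, goal)
print(h)
```

Because this heuristic never overestimates the number of moves remaining (each misplaced tile needs at least one move), A* with it finds a shortest solution.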
sklearnk-0.1.0/sklearnk/program4.py
ADDED
@@ -0,0 +1,43 @@
def run():
    def hill_climbing(func, start, step=0.01, max_iter=1000):
        x = start

        for _ in range(max_iter):
            f_x = func(x)
            f_right = func(x + step)
            f_left = func(x - step)

            if f_right > f_x and f_right >= f_left:
                x += step
            elif f_left > f_x:
                x -= step
            else:
                break

        return x, func(x)

    # ---- User input ----
    while True:
        try:
            func_str = input("\nEnter a function of x: ")
            func = lambda x: eval(func_str)
            func(0)  # test
            break
        except:
            print("Invalid function. Try again.")

    while True:
        try:
            start = float(input("\nEnter starting value: "))
            break
        except:
            print("Enter a valid number.")

    maxima, max_value = hill_climbing(func, start)

    print("The maxima is at x =", maxima)
    print("The maximum value obtained is", max_value)

if __name__ == '__main__':
    run()

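`program4` reads its objective from stdin via `eval`, so it cannot be exercised non-interactively. A sketch of the same climber on a known concave function, f(x) = -(x - 2)², which has its single maximum at x = 2 (the function and starting point are chosen here for illustration):

```python
# Same neighbour-comparison climber as program4's hill_climbing().
def hill_climbing(func, start, step=0.01, max_iter=1000):
    x = start
    for _ in range(max_iter):
        f_x = func(x)
        f_right = func(x + step)
        f_left = func(x - step)
        if f_right > f_x and f_right >= f_left:
            x += step
        elif f_left > f_x:
            x -= step
        else:
            break   # neither neighbour improves: local maximum found
    return x, func(x)

# Starting at 0 and stepping by 0.01, the climber walks up to x close to 2.
maxima, max_value = hill_climbing(lambda x: -(x - 2) ** 2, start=0.0)
print(maxima, max_value)
```

Note the result is only accurate to within one `step`, and on a multi-modal function the climber stops at whichever local maximum is uphill from `start`.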
sklearnk-0.1.0/sklearnk/program5.py
ADDED
@@ -0,0 +1,55 @@
def run():
    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler

    # Sigmoid
    def sigmoid(z):
        return 1 / (1 + np.exp(-z))

    # Logistic Regression
    def train(X, y, lr=0.001, iters=200):
        w = np.zeros(X.shape[1])
        for _ in range(iters):
            w -= lr * (X.T @ (sigmoid(X @ w) - y)) / len(y)
        return w

    # Load data
    iris = load_iris()
    X = iris.data[:, :2]
    y = (iris.target != 0).astype(int)

    # Split & scale
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.4, random_state=9)
    sc = StandardScaler()
    Xtr = sc.fit_transform(Xtr)
    Xte = sc.transform(Xte)

    # Train
    w = train(Xtr, ytr)

    # Accuracy
    pred = sigmoid(Xte @ w) > 0.5
    print("Accuracy:", np.mean(pred == yte))

    # Plot decision boundary
    xx, yy = np.meshgrid(
        np.arange(Xtr[:,0].min()-1, Xtr[:,0].max()+1, 0.1),
        np.arange(Xtr[:,1].min()-1, Xtr[:,1].max()+1, 0.1)
    )

    Z = sigmoid(np.c_[xx.ravel(), yy.ravel()] @ w) > 0.5
    plt.contourf(xx, yy, Z.reshape(xx.shape), alpha=0.4)
    plt.scatter(Xtr[:,0], Xtr[:,1], c=ytr)
    plt.xlabel("Sepal length")
    plt.ylabel("Sepal width")
    plt.title("Logistic Regression")
    plt.show()

if __name__ == '__main__':
    run()

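The gradient step in `program5`'s `train()` can be checked on a tiny separable problem. A sketch (the one-feature data below is made up, and the learning rate and iteration count are raised from the package defaults so the toy problem converges):

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Same batch gradient-descent update as program5's train().
def train(X, y, lr=0.1, iters=500):
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        w -= lr * (X.T @ (sigmoid(X @ w) - y)) / len(y)
    return w

# One feature, linearly separable about 0, so the learned weight must be
# positive and classify all four points correctly.
X = np.array([[-2.0], [-1.0], [1.0], [2.0]])
y = np.array([0, 0, 1, 1])
w = train(X, y)
acc = np.mean((sigmoid(X @ w) > 0.5) == y)
print(w, acc)
```

Note that, like the package version, this model has no bias term, which only works because the toy data is centred about the origin.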
sklearnk-0.1.0/sklearnk/program6.py
ADDED
@@ -0,0 +1,44 @@
def run():
    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import confusion_matrix, classification_report

    iris = load_iris()
    X, y = iris.data, iris.target

    class NaiveBayes:
        def fit(self, X, y):
            self.classes = np.unique(y)
            self.mean = np.array([X[y == c].mean(0) for c in self.classes])
            self.var = np.array([X[y == c].var(0) for c in self.classes])
            self.priors = np.array([np.mean(y == c) for c in self.classes])

        def predict(self, X):
            return np.array([self._predict(x) for x in X])

        def _predict(self, x):
            probs = []
            for i in range(len(self.classes)):
                likelihood = -0.5 * np.sum(np.log(2 * np.pi * self.var[i]))
                likelihood -= np.sum((x - self.mean[i]) ** 2 / (2 * self.var[i]))
                probs.append(np.log(self.priors[i]) + likelihood)
            return self.classes[np.argmax(probs)]

    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=1)

    nb = NaiveBayes()
    nb.fit(Xtr, ytr)

    y_pred = nb.predict(Xte)

    print("Accuracy: %.4f" % np.mean(y_pred == yte))
    print("Predictions:", iris.target_names[y_pred])
    print("\nConfusion Matrix:")
    print(confusion_matrix(yte, y_pred))
    print("\nClassification Report:")
    print(classification_report(yte, y_pred, target_names=iris.target_names))

if __name__ == '__main__':
    run()

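A minimal check of the Gaussian class-conditional scoring used in `program6`: fit the per-class mean, variance, and prior on two well-separated one-dimensional classes (made-up numbers), and confirm each query lands in the nearby class. This is the same log-likelihood formula as `_predict`, written without the class wrapper:

```python
import numpy as np

# Two obvious 1-D classes, three samples each.
Xtr = np.array([[1.0], [1.2], [0.8], [5.0], [5.2], [4.8]])
ytr = np.array([0, 0, 0, 1, 1, 1])

classes = np.unique(ytr)
mean = np.array([Xtr[ytr == c].mean(0) for c in classes])
var = np.array([Xtr[ytr == c].var(0) for c in classes])
priors = np.array([np.mean(ytr == c) for c in classes])

def predict_one(x):
    scores = []
    for i in range(len(classes)):
        # log of the Gaussian density plus the log prior, as in program6
        ll = -0.5 * np.sum(np.log(2 * np.pi * var[i]))
        ll -= np.sum((x - mean[i]) ** 2 / (2 * var[i]))
        scores.append(np.log(priors[i]) + ll)
    return classes[int(np.argmax(scores))]

print(predict_one(np.array([1.1])), predict_one(np.array([5.1])))
```

One caveat carried over from the package code: a feature with zero within-class variance makes the division blow up; sklearn's `GaussianNB` avoids this by adding a small variance smoothing term.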
sklearnk-0.1.0/sklearnk/program7.py
ADDED
@@ -0,0 +1,32 @@
def run():
    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import confusion_matrix, classification_report
    from collections import Counter

    iris = load_iris()
    X, y = iris.data, iris.target
    class_names = iris.target_names

    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=1)

    def knn_predict(Xtr, ytr, Xte, k=3):
        preds = []
        for x in Xte:
            dists = np.linalg.norm(Xtr - x, axis=1)
            k_labels = ytr[np.argsort(dists)[:k]]
            preds.append(Counter(k_labels).most_common(1)[0][0])
        return np.array(preds)

    y_pred = knn_predict(Xtr, ytr, Xte, k=3)

    print("Accuracy: %.4f" % np.mean(y_pred == yte))
    print("Predictions:", class_names[y_pred])
    print("\nConfusion Matrix:")
    print(confusion_matrix(yte, y_pred))
    print("\nClassification Report:")
    print(classification_report(yte, y_pred))

if __name__ == '__main__':
    run()

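The distance-sort-vote loop from `program7` on four made-up points, as a quick standalone check: each query should take the label of the blob it sits in, since two of its three nearest neighbours come from that blob:

```python
import numpy as np
from collections import Counter

# Same k-NN routine as program7: Euclidean distance, take the k nearest
# training labels, majority vote.
def knn_predict(Xtr, ytr, Xte, k=3):
    preds = []
    for x in Xte:
        dists = np.linalg.norm(Xtr - x, axis=1)
        k_labels = ytr[np.argsort(dists)[:k]]
        preds.append(Counter(k_labels).most_common(1)[0][0])
    return np.array(preds)

Xtr = np.array([[0, 0], [0, 1], [5, 5], [6, 5]], dtype=float)
ytr = np.array([0, 0, 1, 1])
pred = knn_predict(Xtr, ytr, np.array([[0.2, 0.2], [5.5, 5.0]]), k=3)
print(pred)
```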
sklearnk-0.1.0/sklearnk/program8.py
ADDED
@@ -0,0 +1,36 @@
def run():
    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.datasets import load_iris

    iris = load_iris()
    X = iris.data

    def kmeans(X, k):
        centroids = X[np.random.choice(X.shape[0], k, replace=False)]

        for _ in range(100):
            distances = np.linalg.norm(X[:, None] - centroids, axis=2)
            labels = np.argmin(distances, axis=1)
            centroids = np.array([X[labels == i].mean(axis=0) for i in range(k)])

        return centroids, labels

    k = 3
    centroids, labels = kmeans(X, k)

    colors = ['r', 'g', 'b']

    for i in range(k):
        plt.scatter(X[labels == i, 0], X[labels == i, 1], c=colors[i], label=f'Cluster {i+1}')

    plt.scatter(centroids[:, 0], centroids[:, 1], marker='x', c='black', label='Centroids')

    plt.title('K-Means Clustering on Iris Dataset')
    plt.xlabel('Sepal Length')
    plt.ylabel('Sepal Width')
    plt.legend()
    plt.show()

if __name__ == '__main__':
    run()

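`program8`'s Lloyd iteration on two obvious blobs (made-up points), as a standalone sketch. Note that this scratch version can produce an empty cluster (a mean over no points, yielding NaN) for an unlucky random initialisation, which is why sklearn's `KMeans` reinitialises and runs `n_init` restarts; the seed is fixed here to keep the toy run deterministic:

```python
import numpy as np

np.random.seed(0)

# Same loop as program8's kmeans(): assign each point to its nearest
# centroid, then move each centroid to the mean of its points.
def kmeans(X, k, iters=100):
    centroids = X[np.random.choice(X.shape[0], k, replace=False)]
    for _ in range(iters):
        distances = np.linalg.norm(X[:, None] - centroids, axis=2)
        labels = np.argmin(distances, axis=1)
        centroids = np.array([X[labels == i].mean(axis=0) for i in range(k)])
    return centroids, labels

# Two tight pairs of points; k-means must put each pair in its own cluster.
X = np.array([[0, 0], [0.1, 0], [5, 5], [5.1, 5]])
centroids, labels = kmeans(X, 2)
print(labels)
```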
sklearnk-0.1.0/sklearnk/program9.py
ADDED
@@ -0,0 +1,41 @@
def run():
    import numpy as np
    import matplotlib.pyplot as plt

    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler
    from sklearn.linear_model import LogisticRegression
    from sklearn.inspection import DecisionBoundaryDisplay

    # Load data
    iris = load_iris()
    X = iris.data[:, :2]
    y = (iris.target != 0).astype(int)

    # Split
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.4, random_state=9)

    # Scale
    sc = StandardScaler()
    Xtr = sc.fit_transform(Xtr)
    Xte = sc.transform(Xte)

    # Train
    lr = LogisticRegression()
    lr.fit(Xtr, ytr)

    # Accuracy
    print("Accuracy:", np.mean(lr.predict(Xte) == yte))

    # Plot decision boundary
    DecisionBoundaryDisplay.from_estimator(lr, Xtr, response_method="predict", alpha=0.3)
    plt.scatter(Xtr[:,0], Xtr[:,1], c=ytr, edgecolor="k")
    plt.xlabel("Sepal Length (scaled)")
    plt.ylabel("Sepal Width (scaled)")
    plt.title("Logistic Regression Decision Boundary")
    plt.show()

if __name__ == '__main__':
    run()

sklearnk-0.1.0/sklearnk.egg-info/PKG-INFO
ADDED
@@ -0,0 +1,85 @@
Metadata-Version: 2.4
Name: sklearnk
Version: 0.1.0
Summary: A collection of AI & ML lab programs
Author-email: Antigravity <antigravity@example.com>
Description-Content-Type: text/markdown
Requires-Dist: numpy
Requires-Dist: matplotlib
Requires-Dist: scikit-learn

# sklearnk

A collection of AI & ML lab programs extracted from the ScalarVerse project.

## Installation

```bash
pip install .
```

### For Older Computers

If `pip` is not available or the machine is very old, you can fall back to the traditional setup script:

```bash
python setup.py install
```

## Usage

Each program is exposed as a function in the `sklearnk` package. You can run them individually:

```python
import sklearnk

# Run Tic Tac Toe
sklearnk.program1()

# Run Alpha Beta Pruning
sklearnk.program2()

# Run 8 Puzzle (A* Algorithm)
sklearnk.program3()

# Run Hill Climbing
sklearnk.program4()

# Run Logistic Regression
sklearnk.program5()

# Run Naive Bayes
sklearnk.program6()

# Run K-Nearest Neighbors
sklearnk.program7()

# Run K-Means Clustering
sklearnk.program8()

# Run Logistic Regression (sklearn version)
sklearnk.program9()

# Run Naive Bayes (sklearn version)
sklearnk.program10()

# Run KNN (sklearn version)
sklearnk.program11()

# Run K-Means (sklearn version)
sklearnk.program12()
```

## Programs Included

1. **program1**: Tic Tac Toe (Minimax)
2. **program2**: Alpha Beta Pruning
3. **program3**: 8 Puzzle (A* Algorithm)
4. **program4**: Hill Climbing
5. **program5**: Logistic Regression (from scratch)
6. **program6**: Naive Bayes (from scratch)
7. **program7**: K-Nearest Neighbors (from scratch)
8. **program8**: K-Means Clustering (from scratch)
9. **program9**: Logistic Regression (using sklearn)
10. **program10**: Naive Bayes (using sklearn)
11. **program11**: KNN (using sklearn)
12. **program12**: K-Means (using sklearn)

sklearnk-0.1.0/sklearnk.egg-info/SOURCES.txt
ADDED
@@ -0,0 +1,20 @@
README.md
pyproject.toml
sklearnk/__init__.py
sklearnk/program1.py
sklearnk/program10.py
sklearnk/program11.py
sklearnk/program12.py
sklearnk/program2.py
sklearnk/program3.py
sklearnk/program4.py
sklearnk/program5.py
sklearnk/program6.py
sklearnk/program7.py
sklearnk/program8.py
sklearnk/program9.py
sklearnk.egg-info/PKG-INFO
sklearnk.egg-info/SOURCES.txt
sklearnk.egg-info/dependency_links.txt
sklearnk.egg-info/requires.txt
sklearnk.egg-info/top_level.txt

sklearnk-0.1.0/sklearnk.egg-info/dependency_links.txt
ADDED
@@ -0,0 +1 @@


sklearnk-0.1.0/sklearnk.egg-info/top_level.txt
ADDED
@@ -0,0 +1 @@
sklearnk