pyerualjetwork 4.5__py3-none-any.whl → 4.5.2__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
pyerualjetwork/plan_cuda.py CHANGED
@@ -51,18 +51,12 @@ def fit(
     Creates a model to fit data.
 
     fit Args:
-
-    x_train (aray-like[cupy]): List or cupy array of input data.
-
-    y_train (aray-like[cupy]): List or cupy array of target labels. (one hot encoded)
-
-    activation_potentiation (list): For deeper PLAN networks, activation function parameters. For more information please run this code: plan.activations_list() default: [None] (optional)
-
+    :param (array-like[cupy]) x_train: List or cupy array of input data.
+    :param (array-like[cupy]) y_train: List or cupy array of target labels. (one-hot encoded)
+    :param (list) activation_potentiation: For deeper PLAN networks, activation function parameters. For more information please run this code: plan.activations_list() default: [None] (optional)
 
     W (cupy.ndarray, optional): If you want to re-continue or update the model.
-
     auto_normalization (bool, optional): Normalization may solve overflow problems. Default: False
-
-    dtype (cupy.dtype, optional): Data type for the arrays. cp.float32 by default. Example: cp.float64 or cp.float16. [fp32 for balanced devices, fp64 for strong devices, fp16 for weak devices: not reccomended!] (optional)
+    dtype (cupy.dtype, optional): Data type for the arrays. cp.float32 by default. Example: cp.float64 or cp.float16.
 
     Returns:
     cupyarray: (Weight matrix).
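
For orientation, here is a minimal sketch of calling `fit` as the updated docstring describes it. The data shapes, class count, and reliance on default arguments are illustrative assumptions, not taken from the package:

```python
# Hedged sketch: calling plan_cuda.fit() with cupy arrays, per the docstring above.
# Shapes and the default-argument call are illustrative assumptions.
import cupy as cp
from pyerualjetwork import plan_cuda

x_train = cp.random.rand(120, 4, dtype=cp.float32)        # input data
y_train = cp.zeros((120, 3), dtype=cp.float32)            # one-hot encoded targets
y_train[cp.arange(120), cp.random.randint(0, 3, 120)] = 1

W = plan_cuda.fit(x_train, y_train)                       # returns the weight matrix (cupy.ndarray)
```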
@@ -90,82 +84,55 @@ def learner(x_train, y_train, optimizer, fit_start=True, gen=None, batch_size=1,
     """
     Optimizes the activation functions for a neural network by leveraging train data to find
     the most accurate combination of activation potentiation for the given dataset.
-
+
     Why genetic optimization and not backpropagation?
     Because PLAN is different from other neural network architectures. In PLAN, the learnable parameters are not the weights; instead, the learnable parameters are the activation functions.
     Since activation functions are not differentiable, we cannot use gradient descent or backpropagation. However, I developed a more powerful genetic optimization algorithm: PLANEAT.
 
-    Args:
-
-    x_train (array-like): Training input data.
-
-    y_train (array-like): Labels for training data.
-
-    optimizer (function): PLAN optimization technique with hyperparameters. (PLAN using NEAT(PLANEAT) for optimization.) Please use this: from pyerualjetwork import planeat_cuda (and) optimizer = lambda *args, **kwargs: planeat_cuda.evolve(*args, 'here give your neat hyperparameters for example: activation_add_prob=0.85', **kwargs) Example:
-    ```python
-    optimizer = lambda *args, **kwargs: planeat_cuda.evolver(*args,
-                                                             activation_add_prob=0.05,
-                                                             strategy='aggressive',
-                                                             policy='more_selective',
-                                                             **kwargs)
-
-    model = plan_cuda.learner(x_train,
-                              y_train,
-                              optimizer,
-                              fit_start=True,
-                              show_history=True,
-                              gen=15,
-                              batch_size=0.05,
-                              interval=16.67)
-    ```
-
-    weight_evolve (bool, optional): Activation combinations already optimizes by PLANEAT genetic search algorithm. Should the weight parameters also evolve or should the weights be determined according to the aggregating learning principle of the PLAN algorithm? Default: True (Evolves Weights)
-
-    loss (str, optional): For visualizing and monitoring. PLAN neural networks doesn't need any loss function in training. options: ('categorical_crossentropy' or 'binary_crossentropy') Default is 'categorical_crossentropy'.
-
-    target_acc (float, optional): The target accuracy to stop training early when achieved. Default is None.
-
-    target_loss (float, optional): The target loss to stop training early when achieved. Default is None.
-
-    fit_start (bool, optional): If the fit_start parameter is set to True, the initial generation population undergoes a simple short training process using the PLAN algorithm. This allows for a very robust starting point, especially for large and complex datasets. However, for small or relatively simple datasets, it may result in unnecessary computational overhead. When fit_start is True, completing the first generation may take slightly longer (this increase in computational cost applies only to the first generation and does not affect subsequent generations). If fit_start is set to False, the initial population will be entirely random. Options: True or False. Default: True
-
-    gen (int, optional): The generation count for genetic optimization.
-
-    batch_size (float, optional): Batch size is used in the prediction process to receive train feedback by dividing the train data into chunks and selecting activations based on randomly chosen partitions. This process reduces computational cost and time while still covering the entire train set due to random selection, so it doesn't significantly impact accuracy. For example, a batch size of 0.08 means each train batch represents %8 of the train set. Default is 1. (%100 of train)
-
-    acc_impact (float, optional): Impact of accuracy for optimization [0-1]. Default: 0.9
-
-    loss_impact (float, optional): Impact of loss for optimization [0-1]. Default: 0.1
-
-    pop_size (int, optional): Population size of each generation. Default: count of activation functions
-
-    early_stop (bool, optional): If True, implements early stopping during training.(If train accuracy not improves in two gen stops learning.) Default is False.
-
-    show_current_activations (bool, optional): Should it display the activations selected according to the current strategies during learning, or not? (True or False) This can be very useful if you want to cancel the learning process and resume from where you left off later. After canceling, you will need to view the live training activations in order to choose the activations to be given to the 'start_this' parameter. Default is False
-
-    show_history (bool, optional): If True, displays the training history after optimization. Default is False.
-
-    auto_normalization (bool, optional): Normalization may solves overflow problem. Default: False
-
-    interval (int, optional): The interval at which evaluations are conducted during training. (33.33 = 30 FPS, 16.67 = 60 FPS) Default is 100.
-
-    target_acc (int, optional): The target accuracy to stop training early when achieved. Default is None.
-
-    start_this_act (list, optional): To resume a previously canceled or interrupted training from where it left off, or to continue from that point with a different strategy, provide the list of activation functions selected up to the learned portion to this parameter. Default is None
-
-    start_this_W (cupy.array, optional): To resume a previously canceled or interrupted training from where it left off, or to continue from that point with a different strategy, provide the weight matrix of this genome. Default is None
-
-    neurons_history (bool, optional): Shows the history of changes that neurons undergo during the TFL (Test or Train Feedback Learning) stages. True or False. Default is False.
-
-    neural_web_history (bool, optional): Draws history of neural web. Default is False.
-
-    dtype (cupy.dtype): Data type for the arrays. np.float32 by default. Example: cp.float64 or cp.float16. [fp32 for balanced devices, fp64 for strong devices, fp16 for weak devices: not reccomended!] (optional)
-
-    memory (str): The memory parameter determines whether the dataset to be processed on the GPU will be stored in the CPU's RAM or the GPU's RAM. Options: 'gpu', 'cpu'. Default: 'gpu'.
+    :Args:
+    :param x_train: (array-like): Training input data.
+    :param y_train: (array-like): Labels for training data.
+    :param optimizer: (function): PLAN optimization technique with hyperparameters. (PLAN uses NEAT (PLANEAT) for optimization.) Please use: from pyerualjetwork import planeat_cuda, then optimizer = lambda *args, **kwargs: planeat_cuda.evolver(*args, 'your NEAT hyperparameters here, for example: activation_add_prob=0.85', **kwargs). Example:
+    ```python
+    optimizer = lambda *args, **kwargs: planeat_cuda.evolver(*args,
+                                                             activation_add_prob=0.05,
+                                                             strategy='aggressive',
+                                                             policy='more_selective',
+                                                             **kwargs)
+
+    model = plan_cuda.learner(x_train,
+                              y_train,
+                              optimizer,
+                              fit_start=True,
+                              show_history=True,
+                              gen=15,
+                              batch_size=0.05,
+                              interval=16.67)
+    ```
+    :param fit_start: (bool, optional): If the fit_start parameter is set to True, the initial generation population undergoes a simple short training process using the PLAN algorithm. This allows for a very robust starting point, especially for large and complex datasets. However, for small or relatively simple datasets, it may result in unnecessary computational overhead. When fit_start is True, completing the first generation may take slightly longer (this increase in computational cost applies only to the first generation and does not affect subsequent generations). If fit_start is set to False, the initial population will be entirely random. Options: True or False. Default: True
+    :param gen: (int, optional): The generation count for genetic optimization.
+    :param batch_size: (float, optional): Batch size is used in the prediction process to receive train feedback by dividing the train data into chunks and selecting activations based on randomly chosen partitions. This process reduces computational cost and time while still covering the entire train set due to random selection, so it doesn't significantly impact accuracy. For example, a batch size of 0.08 means each train batch represents 8% of the train set. Default is 1 (100% of the train set).
+    :param pop_size: (int, optional): Population size of each generation. Default: count of activation functions
+    :param weight_evolve: (bool, optional): Activation combinations are already optimized by the PLANEAT genetic search algorithm. Should the weight parameters also evolve, or should the weights be determined according to the aggregating learning principle of the PLAN algorithm? Default: True (evolves weights)
+    :param neural_web_history: (bool, optional): Draws the history of the neural web. Default is False.
+    :param show_current_activations: (bool, optional): Whether to display the activations selected according to the current strategies during learning. (True or False) This can be very useful if you want to cancel the learning process and resume it later: after canceling, you will need the live training activations in order to choose the activations to pass to the 'start_this_act' parameter. Default is False
+    :param auto_normalization: (bool, optional): Normalization may solve overflow problems. Default: False
+    :param target_acc: (float, optional): The target accuracy to stop training early when achieved. Default is None.
+    :param neurons_history: (bool, optional): Shows the history of changes that neurons undergo during the TFL (Test or Train Feedback Learning) stages. True or False. Default is False.
+    :param early_stop: (bool, optional): If True, implements early stopping during training. (If train accuracy does not improve for two generations, learning stops.) Default is False.
+    :param show_history: (bool, optional): If True, displays the training history after optimization. Default is False.
+    :param loss: (str, optional): For visualizing and monitoring only; PLAN neural networks don't need a loss function for training. Options: ('categorical_crossentropy' or 'binary_crossentropy'). Default is 'categorical_crossentropy'.
+    :param interval: (int, optional): The interval at which evaluations are conducted during training. (33.33 = 30 FPS, 16.67 = 60 FPS) Default is 33.33.
+    :param target_loss: (float, optional): The target loss to stop training early when achieved. Default is None.
+    :param loss_impact: (float, optional): Impact of loss for optimization [0-1]. Default: 0.1
+    :param acc_impact: (float, optional): Impact of accuracy for optimization [0-1]. Default: 0.9
+    :param start_this_act: (list, optional): To resume a previously canceled or interrupted training from where it left off, or to continue from that point with a different strategy, provide the list of activation functions selected up to the learned portion to this parameter. Default is None
+    :param start_this_W: (cupy.array, optional): To resume a previously canceled or interrupted training from where it left off, or to continue from that point with a different strategy, provide the weight matrix of this genome. Default is None
+    :param dtype: (cupy.dtype): Data type for the arrays. cp.float32 by default. Example: cp.float64 or cp.float16.
+    :param memory: (str): Determines whether the dataset to be processed on the GPU is stored in the CPU's RAM or the GPU's RAM. Options: 'gpu', 'cpu'. Default: 'gpu'.
 
     Returns:
     tuple: A list of model parameters: [Weight matrix, Preds, Accuracy, [Activation functions]]. You can access these parameters via the model_operations module. For example: model_operations.get_weights() for the weight matrix.
-
     """
 
     from .planeat_cuda import define_genomes
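
The docstring above already shows a basic `learner` call; a second sketch of the resume workflow it describes via `start_this_act` and `start_this_W` may help. The names `interrupted_acts` and `saved_W` are illustrative placeholders standing in for artifacts from a canceled run, not package identifiers:

```python
# Hedged sketch of resuming an interrupted run, per the start_this_act /
# start_this_W descriptions above. interrupted_acts and saved_W are assumed
# to have been recorded from the canceled run (via show_current_activations).
from pyerualjetwork import plan_cuda, planeat_cuda

optimizer = lambda *args, **kwargs: planeat_cuda.evolver(*args, **kwargs)

model = plan_cuda.learner(x_train, y_train, optimizer,
                          gen=10,
                          batch_size=0.1,                   # each feedback batch samples 10% of the train set
                          start_this_act=interrupted_acts,  # activation list shown before cancellation
                          start_this_W=saved_W)             # weight matrix saved from the interrupted genome
```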
@@ -253,7 +220,7 @@ def learner(x_train, y_train, optimizer, fit_start=True, gen=None, batch_size=1,
             if weight_evolve is False:
                 weight_pop[j] = fit(x_train_batch, y_train_batch, activation_potentiation=act_pop[j], auto_normalization=auto_normalization, dtype=dtype)
 
-            model = evaluate(x_train_batch, y_train_batch, W=weight_pop[j], activation_potentiation=act_pop[j])
+            model = evaluate(x_train_batch, y_train_batch, W=weight_pop[j], activation_potentiation=act_pop[j], auto_normalization=auto_normalization)
             acc = model[get_acc()]
 
             if loss == 'categorical_crossentropy':
@@ -311,7 +278,7 @@ def learner(x_train, y_train, optimizer, fit_start=True, gen=None, batch_size=1,
             if target_acc is not None and best_acc >= target_acc:
                 progress.close()
                 train_model = evaluate(x_train, y_train, W=best_weight,
-                                       activation_potentiation=final_activations)
+                                       activation_potentiation=final_activations, auto_normalization=auto_normalization)
                 if loss == 'categorical_crossentropy':
                     train_loss = categorical_crossentropy(y_true_batch=y_train,
                                                           y_pred_batch=train_model[get_preds_softmax()])
@@ -331,7 +298,7 @@ def learner(x_train, y_train, optimizer, fit_start=True, gen=None, batch_size=1,
             if target_loss is not None and best_loss <= target_loss:
                 progress.close()
                 train_model = evaluate(x_train, y_train, W=best_weight,
-                                       activation_potentiation=final_activations)
+                                       activation_potentiation=final_activations, auto_normalization=auto_normalization)
 
                 if loss == 'categorical_crossentropy':
                     train_loss = categorical_crossentropy(y_true_batch=y_train,
@@ -381,7 +348,7 @@ def learner(x_train, y_train, optimizer, fit_start=True, gen=None, batch_size=1,
             if best_acc_per_gen_list[i] == best_acc_per_gen_list[i-1]:
                 progress.close()
                 train_model = evaluate(x_train, y_train, W=best_weight,
-                                       activation_potentiation=final_activations)
+                                       activation_potentiation=final_activations, auto_normalization=auto_normalization)
 
                 if loss == 'categorical_crossentropy':
                     train_loss = categorical_crossentropy(y_true_batch=y_train,
@@ -402,7 +369,7 @@ def learner(x_train, y_train, optimizer, fit_start=True, gen=None, batch_size=1,
     # Final evaluation
     progress.close()
     train_model = evaluate(x_train, y_train, W=best_weight,
-                           activation_potentiation=final_activations)
+                           activation_potentiation=final_activations, auto_normalization=auto_normalization)
 
     if loss == 'categorical_crossentropy':
         train_loss = categorical_crossentropy(y_true_batch=y_train,
@@ -423,7 +390,8 @@ def evaluate(
     x_test,
     y_test,
     W,
-    activation_potentiation=['linear']
+    activation_potentiation=['linear'],
+    auto_normalization=False
     ) -> tuple:
     """
     Evaluates the neural network model using the given test data.
@@ -436,11 +404,15 @@ def evaluate(
     W (cp.ndarray): Neural net weight matrix.
 
     activation_potentiation (list): Activation list. Default = ['linear'].
+
+    auto_normalization (bool, optional): Normalize x_test? Default = False.
 
     Returns:
     tuple: Model (list).
     """
 
+    if auto_normalization: x_test = normalization(x_test, dtype=x_test.dtype)
+
     x_test = apply_activation(x_test, activation_potentiation)
 
     result = x_test @ W.T
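
Since every `evaluate` call inside `learner` now forwards `auto_normalization`, a direct call looks like the sketch below. `W` and `acts` are assumed to come from a prior training run and `x_test`/`y_test` to be cupy arrays; none of these names are defined by the package itself:

```python
# Hedged sketch: evaluating a trained model with the auto_normalization flag
# introduced in 4.5.2. W and acts are assumed outputs of a previous
# plan_cuda.learner() or plan_cuda.fit() run.
model = plan_cuda.evaluate(x_test, y_test, W=W,
                           activation_potentiation=acts,
                           auto_normalization=True)  # x_test is normalized before activations are applied
```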
pyerualjetwork/planeat.py CHANGED
@@ -35,7 +35,7 @@ def define_genomes(input_shape, output_shape, population_size, dtype=np.float32)
 
     population_size (int): The number of genomes (individuals) in the population.
 
-    dtype (numpy.dtype): Data type for the arrays. np.float32 by default. Example: np.float64 or np.float16. [fp32 for balanced devices, fp64 for strong devices, fp16 for weak devices: not reccomended!] (optional)
+    dtype (numpy.dtype): Data type for the arrays. np.float32 by default. Example: np.float64 or np.float16.
 
     Returns:
     tuple: A tuple containing:
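
For context, here is a sketch of calling the NumPy-side `define_genomes`. The two-element unpacking into a weight population and an activation population is an assumption drawn from the docstring's "Returns: tuple" note and from how `learner` imports and uses it, not a verified API contract:

```python
# Hedged sketch: creating an initial population with planeat.define_genomes().
# The (weight_pop, act_pop) unpacking is assumed, not confirmed by this diff.
import numpy as np
from pyerualjetwork import planeat

weight_pop, act_pop = planeat.define_genomes(input_shape=4,
                                             output_shape=3,
                                             population_size=20,
                                             dtype=np.float32)
```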
pyerualjetwork/planeat_cuda.py CHANGED
@@ -37,7 +37,7 @@ def define_genomes(input_shape, output_shape, population_size, dtype=cp.float32)
 
     population_size (int): The number of genomes (individuals) in the population.
 
-    dtype (cupy.dtype): Data type for the arrays. np.float32 by default. Example: cp.float64 or cp.float16. [fp32 for balanced devices, fp64 for strong devices, fp16 for weak devices: not reccomended!] (optional)
+    dtype (cupy.dtype): Data type for the arrays. cp.float32 by default. Example: cp.float64 or cp.float16.
 
     Returns:
     tuple: A tuple containing:
@@ -97,7 +97,7 @@ def evolver(weights,
             weight_mutate_prob=1,
             dtype=cp.float32):
     """
-    Applies the evolving process of a population of genomes using selection, crossover, mutation, and activation function potentiation.
+    Applies the evolving process of a population of genomes using selection, crossover, mutation, and activation function potentiation.
     The function modifies the population's weights and activation functions based on a specified policy, mutation probabilities, and strategy.
 
     The 'selection' args affect cross-over.
@@ -218,7 +218,7 @@ def evolver(weights,
 
     Example:
     ```python
-    weights, activation_potentiations = planeat.evolver(weights, activation_potentiations, 1, fitness, show_info=True, strategy='normal_selective', policy='aggressive')
+    weights, activation_potentiations = planeat_cuda.evolver(weights, activation_potentiations, 1, fitness, show_info=True, strategy='normal_selective', policy='aggressive')
    ```
 
     - The function returns the updated weights and activations after processing based on the chosen strategy, policy, and mutation parameters.
pyerualjetwork/ui.py CHANGED
@@ -1,22 +1,23 @@
 from tqdm import tqdm
 
-def loading_bars():
-
-    GREEN = "\033[92m"
-    RESET = "\033[0m"
-
-    bar_format_normal = f"{GREEN}{{bar}}{GREEN} {RESET} {{l_bar}} {{remaining}} {{postfix}}"
-    bar_format_learner = f"{GREEN}{{bar}}{GREEN} {RESET} {{remaining}} {{postfix}}"
+GREY = "\033[90m"
+GREEN = "\033[92m"
+RESET = "\033[0m"
 
+def loading_bars():
+    bar_format_normal = "{bar} {l_bar} {remaining} {postfix}"
+    bar_format_learner = "{bar} {remaining} {postfix}"
     return bar_format_normal, bar_format_learner
 
+def get_loading_bar_style():
+    return (f"{GREY}━{RESET}", f"{GREEN}━{RESET}")
 
-def initialize_loading_bar(total, desc, ncols, bar_format, leave=True):
+def initialize_loading_bar(total, desc, ncols, bar_format, loading_bar_style=get_loading_bar_style(), leave=True):
     return tqdm(
         total=total,
         leave=leave,
         desc=desc,
-        ascii="▱▰",
+        ascii=loading_bar_style,
         bar_format=bar_format,
         ncols=ncols,
     )
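
Putting the reworked helpers together, a usage sketch based only on the code shown in this hunk (the `total`, `desc`, and `ncols` values are arbitrary):

```python
# Sketch: composing the new ui helpers. initialize_loading_bar() now defaults
# to the grey/green "━" style returned by get_loading_bar_style().
from pyerualjetwork.ui import loading_bars, initialize_loading_bar

bar_format_normal, bar_format_learner = loading_bars()
progress = initialize_loading_bar(total=100, desc='fit', ncols=75,
                                  bar_format=bar_format_normal)
for _ in range(100):
    progress.update(1)  # advance the tqdm bar one step at a time
progress.close()
```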
pyerualjetwork/visualizations.py CHANGED
@@ -9,27 +9,27 @@ def draw_neural_web(W, ax, G, return_objs=False):
     """
     Visualizes a neural web by drawing the neural network structure.
 
-    Parameters:
-    W : numpy.ndarray
-        A 2D array representing the connection weights of the neural network.
-    ax : matplotlib.axes.Axes
-        The matplotlib axes where the graph will be drawn.
-    G : networkx.Graph
-        The NetworkX graph representing the neural network structure.
-    return_objs : bool, optional
-        If True, returns the drawn objects (nodes and edges). Default is False.
+    Args:
+        W : numpy.ndarray
+            A 2D array representing the connection weights of the neural network.
+        ax : matplotlib.axes.Axes
+            The matplotlib axes where the graph will be drawn.
+        G : networkx.Graph
+            The NetworkX graph representing the neural network structure.
+        return_objs : bool, optional
+            If True, returns the drawn objects (nodes and edges). Default is False.
 
     Returns:
-    art1 : matplotlib.collections.PathCollection or None
-        Returns the node collection if return_objs is True; otherwise, returns None.
-    art2 : matplotlib.collections.LineCollection or None
-        Returns the edge collection if return_objs is True; otherwise, returns None.
-    art3 : matplotlib.collections.TextCollection or None
-        Returns the label collection if return_objs is True; otherwise, returns None.
+        art1 : matplotlib.collections.PathCollection or None
+            Returns the node collection if return_objs is True; otherwise, returns None.
+        art2 : matplotlib.collections.LineCollection or None
+            Returns the edge collection if return_objs is True; otherwise, returns None.
+        art3 : matplotlib.collections.TextCollection or None
+            Returns the label collection if return_objs is True; otherwise, returns None.
 
     Example:
-    art1, art2, art3 = draw_neural_web(W, ax, G, return_objs=True)
-    plt.show()
+        art1, art2, art3 = draw_neural_web(W, ax, G, return_objs=True)
+        plt.show()
     """
 
     for i in range(W.shape[0]):
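
The docstring's example presumes an existing axes and graph; a fuller sketch under that assumption follows. The import path is inferred from the RECORD listing at the end of this diff and the weight shape is illustrative:

```python
# Hedged sketch: preparing the axes and graph that draw_neural_web's example assumes.
# The import path pyerualjetwork.visualizations is inferred, not confirmed here.
import matplotlib.pyplot as plt
import networkx as nx
import numpy as np
from pyerualjetwork.visualizations import draw_neural_web

W = np.random.rand(3, 4)   # 3 output neurons connected to 4 input features (illustrative)
G = nx.Graph()
fig, ax = plt.subplots()

art1, art2, art3 = draw_neural_web(W, ax, G, return_objs=True)
plt.show()
```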
pyerualjetwork/visualizations_cuda.py CHANGED
@@ -9,27 +9,27 @@ def draw_neural_web(W, ax, G, return_objs=False):
     """
     Visualizes a neural web by drawing the neural network structure.
 
-    Parameters:
-    W : numpy.ndarray
-        A 2D array representing the connection weights of the neural network.
-    ax : matplotlib.axes.Axes
-        The matplotlib axes where the graph will be drawn.
-    G : networkx.Graph
-        The NetworkX graph representing the neural network structure.
-    return_objs : bool, optional
-        If True, returns the drawn objects (nodes and edges). Default is False.
+    Args:
+        W : numpy.ndarray
+            A 2D array representing the connection weights of the neural network.
+        ax : matplotlib.axes.Axes
+            The matplotlib axes where the graph will be drawn.
+        G : networkx.Graph
+            The NetworkX graph representing the neural network structure.
+        return_objs : bool, optional
+            If True, returns the drawn objects (nodes and edges). Default is False.
 
     Returns:
-    art1 : matplotlib.collections.PathCollection or None
-        Returns the node collection if return_objs is True; otherwise, returns None.
-    art2 : matplotlib.collections.LineCollection or None
-        Returns the edge collection if return_objs is True; otherwise, returns None.
-    art3 : matplotlib.collections.TextCollection or None
-        Returns the label collection if return_objs is True; otherwise, returns None.
+        art1 : matplotlib.collections.PathCollection or None
+            Returns the node collection if return_objs is True; otherwise, returns None.
+        art2 : matplotlib.collections.LineCollection or None
+            Returns the edge collection if return_objs is True; otherwise, returns None.
+        art3 : matplotlib.collections.TextCollection or None
+            Returns the label collection if return_objs is True; otherwise, returns None.
 
     Example:
-    art1, art2, art3 = draw_neural_web(W, ax, G, return_objs=True)
-    plt.show()
+        art1, art2, art3 = draw_neural_web(W, ax, G, return_objs=True)
+        plt.show()
     """
     W = W.get()
     for i in range(W.shape[0]):
pyerualjetwork-4.5.2.dist-info/METADATA CHANGED
@@ -1,6 +1,6 @@
 Metadata-Version: 2.1
 Name: pyerualjetwork
-Version: 4.5
+Version: 4.5.2
 Summary: PyerualJetwork is a machine learning library with GPU (CUDA) acceleration, written in Python for professionals and researchers, including the PLAN algorithm and the PLANEAT algorithm (genetic optimization). Also includes data pre-processing and memory management
 Author: Hasan Can Beydili
 Author-email: tchasancan@gmail.com
pyerualjetwork-4.5.2.dist-info/RECORD ADDED
@@ -0,0 +1,25 @@
+pyerualjetwork/__init__.py,sha256=gLefqpCFeKrA5712LsxchV-J2cN2QfDpGNwouaCaoAM,1279
+pyerualjetwork/activation_functions.py,sha256=Ms0AGBqkJuCA42ht64MSQnO54Td_1eDGquedpoBDVbc,7642
+pyerualjetwork/activation_functions_cuda.py,sha256=5y1Ti3GDfDteQDCUmODwe7tAyDAUlDTKmIikChQ8d6g,7772
+pyerualjetwork/data_operations.py,sha256=Y_RdxkjLEszFgeo4VDWIX1keF2syP-88KesLXA5sRyY,15280
+pyerualjetwork/data_operations_cuda.py,sha256=9tyD3Bbv5__stuUampgh3_GbMhb_kmTTJmZi7BJsvuA,17381
+pyerualjetwork/fitness_functions.py,sha256=urRdeMvUhNgWxD4ZGHCRdQlIf9cTWYMvF3_aVBojRqY,1235
+pyerualjetwork/help.py,sha256=nQ_YbYA2RtuafhuvkreNpX0WWL1I_nzlelwCtvei0_Y,775
+pyerualjetwork/loss_functions.py,sha256=6PyBI232SQRGuFnG3LDGvnv_PUdWzT2_2mUODJiejGI,618
+pyerualjetwork/loss_functions_cuda.py,sha256=C93IZJcrOpT6HMK9x1O4AHJWXYTkN5WZiqdssPbvAPk,617
+pyerualjetwork/memory_operations.py,sha256=0yCOHcgiNyF4ccMcRlL1Q9F_byG4nzjhmkbpXE_yU6E,13401
+pyerualjetwork/metrics.py,sha256=q7MkhnZDRbCjFBDDfUgrl8lBYnUT_1ro1LxeBq105pI,6077
+pyerualjetwork/metrics_cuda.py,sha256=73h9GC7XwmnFCVzFEEiPQfF8CwHIz2wsCbxpZrJtYgw,5061
+pyerualjetwork/model_operations.py,sha256=BLRL_5s_KSs8iCiLsEwWvhRcGiWCP_TD9lsjYWM7Zek,12746
+pyerualjetwork/model_operations_cuda.py,sha256=b3Bkobbrhq28AmYZ0vGxf2Hf8V2LPvoiM0xaPAApqec,13254
+pyerualjetwork/plan.py,sha256=UyIvPmvHCHwczlc9KHolE4y6CPEeBfhnRN5yznSbnoM,23028
+pyerualjetwork/plan_cuda.py,sha256=iteqgv7x9Z2Pj4vGOZs6HXS3r0bNaF_smr7ZXaOdRnw,23990
+pyerualjetwork/planeat.py,sha256=_dnGRVBzdRUgvVCnHZ721tdXYV9PSvCz-aUnj--5VpU,38697
+pyerualjetwork/planeat_cuda.py,sha256=v-R_ZpnSeIFeSxfYOvSTXfetnfaECap2f84jBEu7X-Q,38736
+pyerualjetwork/ui.py,sha256=JBTFYz5R24XwNKhA3GSW-oYAoiIBxAE3kFGXkvm5gqw,656
+pyerualjetwork/visualizations.py,sha256=utnX9zQhzmtvBJLOLNGm2jecVVk4zHXABQdjb0XzJac,28352
+pyerualjetwork/visualizations_cuda.py,sha256=gnoaaazZ-nc9E1ImqXrZBRgQ4Rnpi2qh2yGJ2eLKMlE,28807
+pyerualjetwork-4.5.2.dist-info/METADATA,sha256=mLFwYOUwuZ7czsv52GiAMdtP59QAORXBOVrefWXadfw,7505
+pyerualjetwork-4.5.2.dist-info/WHEEL,sha256=2wepM1nk4DS4eFpYrW1TTqPcoGNfHhhO_i5m4cOimbo,92
+pyerualjetwork-4.5.2.dist-info/top_level.txt,sha256=BRyt62U_r3ZmJpj-wXNOoA345Bzamrj6RbaWsyW4tRg,15
+pyerualjetwork-4.5.2.dist-info/RECORD,,
pyerualjetwork-4.5.dist-info/RECORD DELETED
@@ -1,25 +0,0 @@
-pyerualjetwork/__init__.py,sha256=xOgL47nXk4gItE1UKQ_uEBevdRI2RUjN5RVuB-BRlHM,1277
-pyerualjetwork/activation_functions.py,sha256=Ms0AGBqkJuCA42ht64MSQnO54Td_1eDGquedpoBDVbc,7642
-pyerualjetwork/activation_functions_cuda.py,sha256=5y1Ti3GDfDteQDCUmODwe7tAyDAUlDTKmIikChQ8d6g,7772
-pyerualjetwork/data_operations.py,sha256=c2FWIdHJQw_h6QP_ir0yx22zP7a9CtRp249V8Ro9V-0,15357
-pyerualjetwork/data_operations_cuda.py,sha256=IqS0JoXGM0XiYfoFSAj9li1WWiBroNXIcN52JWhlNFk,18224
-pyerualjetwork/fitness_functions.py,sha256=GisM8mDJAivw8YailXXDAw2M-lW1MHwRnIlWUVe-UEg,1261
-pyerualjetwork/help.py,sha256=nQ_YbYA2RtuafhuvkreNpX0WWL1I_nzlelwCtvei0_Y,775
-pyerualjetwork/loss_functions.py,sha256=6PyBI232SQRGuFnG3LDGvnv_PUdWzT2_2mUODJiejGI,618
-pyerualjetwork/loss_functions_cuda.py,sha256=C93IZJcrOpT6HMK9x1O4AHJWXYTkN5WZiqdssPbvAPk,617
-pyerualjetwork/memory_operations.py,sha256=I7QiZ--xSyRkFF0wcckPwZV7K9emEvyx5aJ3DiRHZFI,13468
-pyerualjetwork/metrics.py,sha256=q7MkhnZDRbCjFBDDfUgrl8lBYnUT_1ro1LxeBq105pI,6077
-pyerualjetwork/metrics_cuda.py,sha256=73h9GC7XwmnFCVzFEEiPQfF8CwHIz2wsCbxpZrJtYgw,5061
-pyerualjetwork/model_operations.py,sha256=MCSCNYiiICRVZITobtS3ZIWmH5Q9gjyELuH32sAdgg4,12649
-pyerualjetwork/model_operations_cuda.py,sha256=NT01BK5nrDYE7H1x3KnSI8gmx0QTGGB0mP_LqEb1uuU,13157
-pyerualjetwork/plan.py,sha256=XEptIEWfWbqXEy92Eil5q_uujEiIBWnjejAGD7Lh0-w,22975
-pyerualjetwork/plan_cuda.py,sha256=rFHZa8jsgfol0uUo1rfCKAGtb-B4ivfzqqlnynsfMzQ,23966
-pyerualjetwork/planeat.py,sha256=DVJGptIPYKyz4ePwqRnbbwgHwQzbxMoBci0Te8kfCzk,38802
-pyerualjetwork/planeat_cuda.py,sha256=-GxZY8aMPayuYhwhcsRVn4LWq604o36VxkEtdoBum98,38835
-pyerualjetwork/ui.py,sha256=wu2BhU1k-w3Kcho5Jtq4SEKe68ftaUeRGneUOSCVDjU,575
-pyerualjetwork/visualizations.py,sha256=1QSisYAaGnO5kug6qo1qOssTkQM-MgR7L8h4c3sczzU,28294
-pyerualjetwork/visualizations_cuda.py,sha256=PYRqj4QYUbuYMYcNwO8yaTPB-jK7E6kZHhTrAi0lwPU,28749
-pyerualjetwork-4.5.dist-info/METADATA,sha256=njuvH-FdU7-KudHvxUEJvBeyOvDWzyWi52ZIpRjR9K4,7503
-pyerualjetwork-4.5.dist-info/WHEEL,sha256=2wepM1nk4DS4eFpYrW1TTqPcoGNfHhhO_i5m4cOimbo,92
-pyerualjetwork-4.5.dist-info/top_level.txt,sha256=BRyt62U_r3ZmJpj-wXNOoA345Bzamrj6RbaWsyW4tRg,15
-pyerualjetwork-4.5.dist-info/RECORD,,