functioneer 0.2.2__tar.gz → 0.3.0__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,6 +1,6 @@
- Metadata-Version: 2.2
+ Metadata-Version: 2.4
  Name: functioneer
- Version: 0.2.2
+ Version: 0.3.0
  Summary: Effortlessly explore function behavior with automated batch analysis.
  Author-email: Quinn Marsh <quinnmarsh@hotmail.com>
  Maintainer-email: Quinn Marsh <quinnmarsh@hotmail.com>
@@ -10,7 +10,7 @@ Project-URL: Issues, https://github.com/qthedoc/functioneer/issues
  Project-URL: Funding, https://donate.pypi.org
  Project-URL: Say Thanks!, http://quinnmarsh.com
  Keywords: functioneer,analysis,batch run,automation,autorun,trade space,digital twin
- Classifier: Development Status :: 3 - Alpha
+ Classifier: Development Status :: 4 - Beta
  Classifier: Intended Audience :: Science/Research
  Classifier: Topic :: Scientific/Engineering
  Classifier: License :: OSI Approved :: MIT License
@@ -27,6 +27,7 @@ License-File: LICENSE
  Requires-Dist: numpy>=1.18.5
  Requires-Dist: scipy>=1.5.2
  Requires-Dist: pandas>=1.0.5
+ Dynamic: license-file

  # Functioneer

@@ -34,7 +35,7 @@ Requires-Dist: pandas>=1.0.5
  **Date**: February 02, 2025\
  **PyPI**: https://pypi.org/project/functioneer/

- Functioneer lets you effortlessly explore function behavior with automated batch analysis. With just a few lines of code, you can queue up thousands or even millions of function evaluations, testing and optimizing with unlimited parameter combinations. Retrieve structured results in formats like pandas for seamless integration into your workflows. Perfect for parameter sweeps, engineering simulations, and digital twin optimization.
+ Functioneer lets you effortlessly explore function behavior with automated batch analysis. With just a few lines of code, you can queue up thousands or even millions of function evaluations, with various parameter combinations and/or optimizations. Retrieve structured results in formats like pandas for seamless integration into your workflows. Perfect for parameter sweeps, engineering simulations, and digital twin optimization.

  ## Use cases

@@ -44,15 +45,19 @@ Functioneer lets you effortlessly explore function behavior with automated batch

  ## How Functioneer Works

- At its core, functioneer organizes analyses as pipelines, where a set of *parameters* flows sequentially through a series of *analysis steps*. These steps modify the parameters in various ways, such as defining new parameters, updating existing parameter values, or performing operations like function evaluation and optimization. One of the key features of functioneer is the ability to introduce *forks*, which split the analysis into multiple *branches*, each exploring different values for a specific parameter. This structured approach enables highly flexible and dynamic analyses, suitable for a wide range of applications. Functioneer *Forks* are what let you queue up thousands or even millions of parameter combinations in only a few lines of code.
+ At its core, functioneer organizes analyses as a tree, where a *set of parameters* starts at the trunk and moves out towards the leaves. Along the way, the *set of parameters* 'flows' through a series of *analysis steps* (each of which can be defined in a single line of code). Each *analysis step* can modify or use the parameters in various ways, such as defining new parameters, modifying parameters, or using the parameters to evaluate or even optimize any function of your choice. One key feature of functioneer is the ability to introduce *forks*: a type of analysis step that splits the analysis into multiple parallel *branches*, each exploring different values for a specific parameter. Using many *forks* in series allows you to queue up thousands or even millions of parameter combinations with only a few lines of code. This structured approach enables highly flexible and dynamic analyses, suitable for a wide range of applications.
+
+ Summary of the most useful types of *analysis steps*:
+ - Define: Adds a new parameter to the analysis
+ - Fork: Splits the analysis into multiple parallel *branches*, each exploring different values for a specific parameter
+ - Execute: Calls a provided function using the parameters
+ - Optimize: Quickly sets up an optimization by providing a function and defining which parameters are to be optimized

  <details>
  <summary>
- Important Terms
+ <span style="font-size:1.5em;">Important Terms</span>
  </summary>

- ### Terms
-
  * AnalysisModule
    * Definition: The central container for an analysis pipeline.
    * Function: Holds a sequence of analysis steps and manages a set of parameters that flow through the pipeline.
@@ -90,17 +95,18 @@ pip install functioneer
  Below are a few quick examples of how to use Functioneer. Each example will build on the last, introducing one piece of functionality. By the end you will have witnessed the computational power of this fully armed and fully operational library.

  ### Choose a Function to Analyze
- Functioneer is designed to analyze ANY function(s) with ANY number of inputs and outputs. For the following examples, the [Rosenbrock Function](https://en.wikipedia.org/wiki/Rosenbrock_function) is used for its relative simplicity, 4 inputs (plenty to play with) and its historical significance as an optimization benchmark.
+ Functioneer is designed to analyze ANY function(s) with ANY number of inputs and outputs. For the following examples, we use the [Rosenbrock Function](https://en.wikipedia.org/wiki/Rosenbrock_function) for (1) its relative simplicity, (2) its 4 inputs (plenty to play with), and (3) its historical significance as an optimization benchmark.

  ```
+ # Example Function
  # Rosenbrock function (known minimum of 0 at: x=1, y=1, a=1, b=100)
  def rosenbrock(x, y, a, b):
      return (a-x)**2 + b*(y-x**2)**2
  ```

  ### Example 1: The Basics (Defining Parameters and Executing a Function)
- Set up an *analysis sequence* by defining four parameters (the inputs needed for the Rosenbrock function), then executing the function (with parameter ids matched to kwargs)
- Note: Parameter IDs MUST match your function's args
+ Set up an *analysis sequence* by defining four parameters to match our function, then executing the function.
+ Note: Parameter IDs MUST match your function's args; function executions inside functioneer are fully keyword-argument based.

  ```
  import functioneer as fn
@@ -114,7 +120,7 @@ anal.add.define('b', 100) # Define parameter 'b'
  anal.add.define('x', 1) # Define parameter 'x'
  anal.add.define('y', 1) # Define parameter 'y'

- anal.add.execute(func=rosenbrock, output_param_ids='rosen') # Execute function with parameter ids matched to kwargs
+ anal.add.execute(func=rosenbrock) # Execute function with parameter ids matched to kwargs

  # Run the analysis sequence
  results = anal.run()
@@ -124,11 +130,11 @@ print(results['df'])

  ```
  Output:
- runtime a b x y rosen datetime
- 0 0.0 1 100 1 1 0 2025-01-03 17:06:21.252981
+ runtime a b x y rosenbrock datetime
+ 0 0.0 1 100 1 1 0 2025-06-24 03:26:48.842824
  ```

- As predicted, the `rosen` parameter evaluates to 0 when a=1, b=100, x=1, y=1
+ As expected, the `rosenbrock` parameter evaluates to 0 when a=1, b=100, x=1, y=1.

  Note: the `results['df']` is a pandas DataFrame containing all parameters in addition to *runtime* and *datetime* for the given branch

@@ -138,17 +144,19 @@ If you want to test a set of values for a parameter you can create a *fork* in t

  Say we want to evaluate and plot the Rosenbrock surface over the x-y domain. Let's evaluate Rosenbrock on a grid where x=(0, 1, 2) and y=(1, 10), which should result in 6 final *branches* / *leaves*...

+ Note: the parameter's name is by default set from the function name, but can be overridden using the 'assign_to' arg.
  Note: some boilerplate can be removed by defining initial parameters in the AnalysisModule() declaration
+ Note: initial parameter values will be overwritten as needed by parameter steps
  ```
  # Create new analysis
- init_params = dict(a=1, b=100, x=1, y=1) # initial parameters will be overwritten by forks, optimizations, etc
+ init_params = dict(a=1, b=100, x=1, y=1) # define initial parameters
  anal = fn.AnalysisModule(init_params)

  # Define analysis sequence
- anal.add.fork('x', value_sets=(0, 1, 2)) # Fork analysis, create a branch for each value of 'x': 0, 1, 2
- anal.add.fork('y', value_sets=(1, 10)) # Fork analysis, create a branch for each value of 'y': 1, 10
+ anal.add.fork('x', value_set=(0, 1, 2)) # Fork analysis, create a branch for each value of 'x': 0, 1, 2
+ anal.add.fork('y', value_set=(1, 10)) # Fork analysis, create a branch for each value of 'y': 1, 10

- anal.add.execute(func=rosenbrock, output_param_ids='rosen') # Execute function (for each branch) with parameters matched to kwargs
+ anal.add.execute(func=rosenbrock, assign_to='brock_purdy') # Execute function (for each branch) with parameters matched to kwargs

  # Run the analysis sequence
  results = anal.run()
@@ -156,13 +164,13 @@ print(results['df'].drop(columns='datetime'))
  ```
  ```
  Output:
- runtime a b x y rosen
- 0 0.000994 1 100 0 1 101
- 1 0.000994 1 100 0 10 10001
- 2 0.000994 1 100 1 1 0
- 3 0.000994 1 100 1 10 8100
- 4 0.000994 1 100 2 1 901
- 5 0.000994 1 100 2 10 3601
+ runtime a b x y brock_purdy
+ 0 0.001294 1 100 0 1 101
+ 1 0.000000 1 100 0 10 10001
+ 2 0.000000 1 100 1 1 0
+ 3 0.000000 1 100 1 10 8100
+ 4 0.000000 1 100 2 1 901
+ 5 0.000000 1 100 2 10 3601
  ```
  The parameters `x` and `y` were given 3 and 2 fork values respectively; this created 6 total *leaves* (end of each branch) in the analysis. `rosen` has been evaluated for each *leaf*. Essentially you have begun to map the Rosenbrock function over the x-y domain.
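The 3 × 2 fork arithmetic above (forks in series multiplying out into leaves) can be reproduced with nothing but the standard library: `itertools.product` enumerates the same leaf parameter combinations a chain of forks produces. This is an illustrative sketch only; `leaf_param_sets` is a hypothetical helper, not part of functioneer.

```python
from itertools import product

def leaf_param_sets(base, forks):
    """Enumerate the leaf parameter sets a chain of forks would produce.

    base:  dict of initial parameters
    forks: list of (param_id, values) pairs, applied in series
    """
    ids = [pid for pid, _ in forks]
    leaves = []
    for combo in product(*(vals for _, vals in forks)):
        params = dict(base)             # each branch gets its own copy
        params.update(zip(ids, combo))  # fork values override initial values
        leaves.append(params)
    return leaves

# Two forks in series: 3 values of 'x' times 2 values of 'y' -> 6 leaves
leaves = leaf_param_sets(dict(a=1, b=100, x=1, y=1),
                         [('x', (0, 1, 2)), ('y', (1, 10))])
print(len(leaves))  # 6
```

Each leaf dict here corresponds to one row of the output DataFrame above.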
@@ -176,7 +184,7 @@ anal = fn.AnalysisModule(dict(x=0, y=0))
  anal.add.fork('a', value_set=(1, 2)) # Fork analysis, create a branch for each value of 'a': 1, 2
  anal.add.fork('b', value_set=(0, 100, 200)) # Fork analysis, create a branch for each value of 'b': 0, 100, 200

- anal.add.optimize(func=rosenbrock, obj_param_id='rosen', opt_param_ids=('x', 'y'))
+ anal.add.optimize(func=rosenbrock, opt_param_ids=('x', 'y'))

  # Run the analysis sequence
  results = anal.run()
@@ -194,8 +202,7 @@ Output:
  ```
  For each branch, the Rosenbrock Function has been minimized and the solution values for `x`, `y` and `rosen` are shown.

- Note: the initial values (`x0`) used in the optimization are just the existing parameter values (in this case x and y are 0).
-
+ Note: the initial values (`x0`) used in the optimization are simply the existing parameter values (in this case x and y are 0) going into the optimization step.
  Note: due to optimization, the runtimes for some of the analyses have gone up.

  ### Example 4: Multi-parameter Forks
@@ -206,8 +213,7 @@ anal = fn.AnalysisModule(dict(a=1, b=100))

  # Define analysis sequence
  anal.add.fork.multi(('x', 'y'), value_sets=((0, 1, 2), (0, 10, 20))) # Fork analysis, create a branch for each pair of 'x' and 'y' values
-
- anal.add.execute(func=rosenbrock, output_param_ids='rosen') # Execute function (for each branch) with parameters matched to kwargs
+ anal.add.execute(func=rosenbrock) # Execute function (for each branch) with parameters matched to kwargs

  # Run the analysis sequence
  results = anal.run()
@@ -233,7 +239,7 @@ anal = fn.AnalysisModule(dict(x=0, y=0))
  # Define analysis sequence
  anal.add.fork('a', value_set=(1, 2))
  anal.add.fork('b', value_set=(0, 100, 200))
- anal.add.optimize(func=rosenbrock, obj_param_id='rosen', opt_param_ids=('x', 'y'))
+ anal.add.optimize(func=rosenbrock, opt_param_ids=('x', 'y'))

  # Only evaluate 'expensive_func' if the optimized 'y' is above 0.5
  expensive_func = lambda x, y: x+y
@@ -244,15 +250,15 @@ print(results['df'].drop(columns='datetime'))
  ```
  ```
  Output:
- runtime a b x y rosen expensive_param
- 0 0.004001 1 0 1.000000 0.000000 4.930381e-32 NaN
- 1 0.009702 1 100 0.999763 0.999523 5.772481e-08 1.999286
- 2 0.017009 1 200 0.999939 0.999873 8.146869e-09 1.999811
- 3 0.000995 2 0 2.000000 0.000000 0.000000e+00 NaN
- 4 0.016001 2 100 1.999731 3.998866 4.067518e-07 5.998596
- 5 0.020995 2 200 1.999554 3.998225 2.136755e-07 5.997779
+ runtime x y a b rosenbrock expensive_param
+ 0 0.001997 1.000000 0.000000 1 0 4.930381e-32 NaN
+ 1 0.004206 0.999763 0.999523 1 100 5.772481e-08 1.999286
+ 2 0.012199 0.999939 0.999873 1 200 8.146869e-09 1.999811
+ 3 0.000965 2.000000 0.000000 2 0 0.000000e+00 NaN
+ 4 0.010121 1.999731 3.998866 2 100 4.067518e-07 5.998596
+ 5 0.012308 1.999554 3.998225 2 200 2.136755e-07 5.997779
  ```
- Notice how the evaluation of `expensive_param` has been skipped where the optimized `y` did not meet the criteria `y>0.5`
+ Notice how the evaluation of `expensive_param` has been skipped where the optimized `y` did not meet our criterion `y>0.5`.
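The skip-on-condition behavior above can be sketched in plain Python: call the expensive function only where a predicate on the branch's parameters holds, and record NaN otherwise, matching the NaN cells in the output. `run_gated` is a hypothetical helper for illustration, not functioneer's API.

```python
import math

def run_gated(rows, func, predicate):
    """Evaluate func(x, y) per row only where predicate holds; else NaN."""
    out = []
    for row in rows:
        if predicate(row):
            out.append(func(row['x'], row['y']))
        else:
            out.append(math.nan)  # branch skipped: expensive call never runs
    return out

expensive_func = lambda x, y: x + y
rows = [dict(x=1.0, y=0.0), dict(x=1.0, y=1.0), dict(x=2.0, y=4.0)]
vals = run_gated(rows, expensive_func, lambda r: r['y'] > 0.5)
print(vals)  # [nan, 2.0, 6.0]
```

Only the branches whose `y` passed the check pay for the expensive evaluation.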

  ## License

@@ -4,7 +4,7 @@
  **Date**: February 02, 2025\
  **PyPI**: https://pypi.org/project/functioneer/

- Functioneer lets you effortlessly explore function behavior with automated batch analysis. With just a few lines of code, you can queue up thousands or even millions of function evaluations, testing and optimizing with unlimited parameter combinations. Retrieve structured results in formats like pandas for seamless integration into your workflows. Perfect for parameter sweeps, engineering simulations, and digital twin optimization.
+ Functioneer lets you effortlessly explore function behavior with automated batch analysis. With just a few lines of code, you can queue up thousands or even millions of function evaluations, with various parameter combinations and/or optimizations. Retrieve structured results in formats like pandas for seamless integration into your workflows. Perfect for parameter sweeps, engineering simulations, and digital twin optimization.

  ## Use cases

@@ -14,15 +14,19 @@ Functioneer lets you effortlessly explore function behavior with automated batch

  ## How Functioneer Works

- At its core, functioneer organizes analyses as pipelines, where a set of *parameters* flows sequentially through a series of *analysis steps*. These steps modify the parameters in various ways, such as defining new parameters, updating existing parameter values, or performing operations like function evaluation and optimization. One of the key features of functioneer is the ability to introduce *forks*, which split the analysis into multiple *branches*, each exploring different values for a specific parameter. This structured approach enables highly flexible and dynamic analyses, suitable for a wide range of applications. Functioneer *Forks* are what let you queue up thousands or even millions of parameter combinations in only a few lines of code.
+ At its core, functioneer organizes analyses as a tree, where a *set of parameters* starts at the trunk and moves out towards the leaves. Along the way, the *set of parameters* 'flows' through a series of *analysis steps* (each of which can be defined in a single line of code). Each *analysis step* can modify or use the parameters in various ways, such as defining new parameters, modifying parameters, or using the parameters to evaluate or even optimize any function of your choice. One key feature of functioneer is the ability to introduce *forks*: a type of analysis step that splits the analysis into multiple parallel *branches*, each exploring different values for a specific parameter. Using many *forks* in series allows you to queue up thousands or even millions of parameter combinations with only a few lines of code. This structured approach enables highly flexible and dynamic analyses, suitable for a wide range of applications.
+
+ Summary of the most useful types of *analysis steps*:
+ - Define: Adds a new parameter to the analysis
+ - Fork: Splits the analysis into multiple parallel *branches*, each exploring different values for a specific parameter
+ - Execute: Calls a provided function using the parameters
+ - Optimize: Quickly sets up an optimization by providing a function and defining which parameters are to be optimized

  <details>
  <summary>
- Important Terms
+ <span style="font-size:1.5em;">Important Terms</span>
  </summary>

- ### Terms
-
  * AnalysisModule
    * Definition: The central container for an analysis pipeline.
    * Function: Holds a sequence of analysis steps and manages a set of parameters that flow through the pipeline.
@@ -60,17 +64,18 @@ pip install functioneer
  Below are a few quick examples of how to use Functioneer. Each example will build on the last, introducing one piece of functionality. By the end you will have witnessed the computational power of this fully armed and fully operational library.

  ### Choose a Function to Analyze
- Functioneer is designed to analyze ANY function(s) with ANY number of inputs and outputs. For the following examples, the [Rosenbrock Function](https://en.wikipedia.org/wiki/Rosenbrock_function) is used for its relative simplicity, 4 inputs (plenty to play with) and its historical significance as an optimization benchmark.
+ Functioneer is designed to analyze ANY function(s) with ANY number of inputs and outputs. For the following examples, we use the [Rosenbrock Function](https://en.wikipedia.org/wiki/Rosenbrock_function) for (1) its relative simplicity, (2) its 4 inputs (plenty to play with), and (3) its historical significance as an optimization benchmark.

  ```
+ # Example Function
  # Rosenbrock function (known minimum of 0 at: x=1, y=1, a=1, b=100)
  def rosenbrock(x, y, a, b):
      return (a-x)**2 + b*(y-x**2)**2
  ```

  ### Example 1: The Basics (Defining Parameters and Executing a Function)
- Set up an *analysis sequence* by defining four parameters (the inputs needed for the Rosenbrock function), then executing the function (with parameter ids matched to kwargs)
- Note: Parameter IDs MUST match your function's args
+ Set up an *analysis sequence* by defining four parameters to match our function, then executing the function.
+ Note: Parameter IDs MUST match your function's args; function executions inside functioneer are fully keyword-argument based.

  ```
  import functioneer as fn
@@ -84,7 +89,7 @@ anal.add.define('b', 100) # Define parameter 'b'
  anal.add.define('x', 1) # Define parameter 'x'
  anal.add.define('y', 1) # Define parameter 'y'

- anal.add.execute(func=rosenbrock, output_param_ids='rosen') # Execute function with parameter ids matched to kwargs
+ anal.add.execute(func=rosenbrock) # Execute function with parameter ids matched to kwargs

  # Run the analysis sequence
  results = anal.run()
@@ -94,11 +99,11 @@ print(results['df'])

  ```
  Output:
- runtime a b x y rosen datetime
- 0 0.0 1 100 1 1 0 2025-01-03 17:06:21.252981
+ runtime a b x y rosenbrock datetime
+ 0 0.0 1 100 1 1 0 2025-06-24 03:26:48.842824
  ```

- As predicted, the `rosen` parameter evaluates to 0 when a=1, b=100, x=1, y=1
+ As expected, the `rosenbrock` parameter evaluates to 0 when a=1, b=100, x=1, y=1.

  Note: the `results['df']` is a pandas DataFrame containing all parameters in addition to *runtime* and *datetime* for the given branch

@@ -108,17 +113,19 @@ If you want to test a set of values for a parameter you can create a *fork* in t

  Say we want to evaluate and plot the Rosenbrock surface over the x-y domain. Let's evaluate Rosenbrock on a grid where x=(0, 1, 2) and y=(1, 10), which should result in 6 final *branches* / *leaves*...

+ Note: the parameter's name is by default set from the function name, but can be overridden using the 'assign_to' arg.
  Note: some boilerplate can be removed by defining initial parameters in the AnalysisModule() declaration
+ Note: initial parameter values will be overwritten as needed by parameter steps
  ```
  # Create new analysis
- init_params = dict(a=1, b=100, x=1, y=1) # initial parameters will be overwritten by forks, optimizations, etc
+ init_params = dict(a=1, b=100, x=1, y=1) # define initial parameters
  anal = fn.AnalysisModule(init_params)

  # Define analysis sequence
- anal.add.fork('x', value_sets=(0, 1, 2)) # Fork analysis, create a branch for each value of 'x': 0, 1, 2
- anal.add.fork('y', value_sets=(1, 10)) # Fork analysis, create a branch for each value of 'y': 1, 10
+ anal.add.fork('x', value_set=(0, 1, 2)) # Fork analysis, create a branch for each value of 'x': 0, 1, 2
+ anal.add.fork('y', value_set=(1, 10)) # Fork analysis, create a branch for each value of 'y': 1, 10

- anal.add.execute(func=rosenbrock, output_param_ids='rosen') # Execute function (for each branch) with parameters matched to kwargs
+ anal.add.execute(func=rosenbrock, assign_to='brock_purdy') # Execute function (for each branch) with parameters matched to kwargs

  # Run the analysis sequence
  results = anal.run()
@@ -126,13 +133,13 @@ print(results['df'].drop(columns='datetime'))
  ```
  ```
  Output:
- runtime a b x y rosen
- 0 0.000994 1 100 0 1 101
- 1 0.000994 1 100 0 10 10001
- 2 0.000994 1 100 1 1 0
- 3 0.000994 1 100 1 10 8100
- 4 0.000994 1 100 2 1 901
- 5 0.000994 1 100 2 10 3601
+ runtime a b x y brock_purdy
+ 0 0.001294 1 100 0 1 101
+ 1 0.000000 1 100 0 10 10001
+ 2 0.000000 1 100 1 1 0
+ 3 0.000000 1 100 1 10 8100
+ 4 0.000000 1 100 2 1 901
+ 5 0.000000 1 100 2 10 3601
  ```
  The parameters `x` and `y` were given 3 and 2 fork values respectively; this created 6 total *leaves* (end of each branch) in the analysis. `rosen` has been evaluated for each *leaf*. Essentially you have begun to map the Rosenbrock function over the x-y domain.

@@ -146,7 +153,7 @@ anal = fn.AnalysisModule(dict(x=0, y=0))
  anal.add.fork('a', value_set=(1, 2)) # Fork analysis, create a branch for each value of 'a': 1, 2
  anal.add.fork('b', value_set=(0, 100, 200)) # Fork analysis, create a branch for each value of 'b': 0, 100, 200

- anal.add.optimize(func=rosenbrock, obj_param_id='rosen', opt_param_ids=('x', 'y'))
+ anal.add.optimize(func=rosenbrock, opt_param_ids=('x', 'y'))

  # Run the analysis sequence
  results = anal.run()
@@ -164,8 +171,7 @@ Output:
  ```
  For each branch, the Rosenbrock Function has been minimized and the solution values for `x`, `y` and `rosen` are shown.

- Note: the initial values (`x0`) used in the optimization are just the existing parameter values (in this case x and y are 0).
-
+ Note: the initial values (`x0`) used in the optimization are simply the existing parameter values (in this case x and y are 0) going into the optimization step.
  Note: due to optimization, the runtimes for some of the analyses have gone up.

  ### Example 4: Multi-parameter Forks
@@ -176,8 +182,7 @@ anal = fn.AnalysisModule(dict(a=1, b=100))

  # Define analysis sequence
  anal.add.fork.multi(('x', 'y'), value_sets=((0, 1, 2), (0, 10, 20))) # Fork analysis, create a branch for each pair of 'x' and 'y' values
-
- anal.add.execute(func=rosenbrock, output_param_ids='rosen') # Execute function (for each branch) with parameters matched to kwargs
+ anal.add.execute(func=rosenbrock) # Execute function (for each branch) with parameters matched to kwargs

  # Run the analysis sequence
  results = anal.run()
@@ -203,7 +208,7 @@ anal = fn.AnalysisModule(dict(x=0, y=0))
  # Define analysis sequence
  anal.add.fork('a', value_set=(1, 2))
  anal.add.fork('b', value_set=(0, 100, 200))
- anal.add.optimize(func=rosenbrock, obj_param_id='rosen', opt_param_ids=('x', 'y'))
+ anal.add.optimize(func=rosenbrock, opt_param_ids=('x', 'y'))

  # Only evaluate 'expensive_func' if the optimized 'y' is above 0.5
  expensive_func = lambda x, y: x+y
@@ -214,15 +219,15 @@ print(results['df'].drop(columns='datetime'))
  ```
  ```
  Output:
- runtime a b x y rosen expensive_param
- 0 0.004001 1 0 1.000000 0.000000 4.930381e-32 NaN
- 1 0.009702 1 100 0.999763 0.999523 5.772481e-08 1.999286
- 2 0.017009 1 200 0.999939 0.999873 8.146869e-09 1.999811
- 3 0.000995 2 0 2.000000 0.000000 0.000000e+00 NaN
- 4 0.016001 2 100 1.999731 3.998866 4.067518e-07 5.998596
- 5 0.020995 2 200 1.999554 3.998225 2.136755e-07 5.997779
- ```
- Notice how the evaluation of `expensive_param` has been skipped where the optimized `y` did not meet the criteria `y>0.5`
+ runtime x y a b rosenbrock expensive_param
+ 0 0.001997 1.000000 0.000000 1 0 4.930381e-32 NaN
+ 1 0.004206 0.999763 0.999523 1 100 5.772481e-08 1.999286
+ 2 0.012199 0.999939 0.999873 1 200 8.146869e-09 1.999811
+ 3 0.000965 2.000000 0.000000 2 0 0.000000e+00 NaN
+ 4 0.010121 1.999731 3.998866 2 100 4.067518e-07 5.998596
+ 5 0.012308 1.999554 3.998225 2 200 2.136755e-07 5.997779
+ ```
+ Notice how the evaluation of `expensive_param` has been skipped where the optimized `y` did not meet our criterion `y>0.5`.

  ## License

@@ -4,7 +4,7 @@ build-backend = "setuptools.build_meta"

  [project]
  name = "functioneer"
- version = "0.2.2"
+ version = "0.3.0"
  authors = [{ name = "Quinn Marsh", email = "quinnmarsh@hotmail.com" }]
  maintainers = [{ name = "Quinn Marsh", email = "quinnmarsh@hotmail.com" }]
  description = "Effortlessly explore function behavior with automated batch analysis."
@@ -19,7 +19,7 @@ dependencies = [
  ]

  classifiers = [
- "Development Status :: 3 - Alpha",
+ "Development Status :: 4 - Beta",
  "Intended Audience :: Science/Research",
  "Topic :: Scientific/Engineering",
  "License :: OSI Approved :: MIT License",
@@ -1,6 +1,4 @@
  # functioneer/__init__.py
-
- # Import specific functions and classes from your modules
-
+ __version__ = "0.3.0"
  from functioneer.analysis import AnalysisModule, AnalysisStep, Define, Fork, Execute, Optimize
  from functioneer.parameter import Parameter
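The `__init__.py` hunk above adds a `__version__` attribute, which downstream code can use to bridge this release's kwarg rename (`value_sets` → `value_set` on `fork`, per the README hunks). A minimal compatibility sketch, where `fork_kwargs` is a hypothetical helper and not part of functioneer:

```python
def fork_kwargs(version, values):
    """Build the fork keyword arguments appropriate for a functioneer version.

    Per this diff, 0.3.0 renamed fork's 'value_sets' kwarg to 'value_set';
    the version-tuple check below is an illustrative shim, not library API.
    """
    major_minor = tuple(int(p) for p in version.split('.')[:2])
    key = 'value_set' if major_minor >= (0, 3) else 'value_sets'
    return {key: values}

print(fork_kwargs('0.3.0', (0, 1, 2)))  # {'value_set': (0, 1, 2)}
print(fork_kwargs('0.2.2', (0, 1, 2)))  # {'value_sets': (0, 1, 2)}
```

In practice the version string would come from `functioneer.__version__`, available as of 0.3.0.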