numopt-js 0.1.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/CODING_RULES.md +161 -0
- package/LICENSE +22 -0
- package/README.md +807 -0
- package/dist/core/adjointGradientDescent.d.ts +61 -0
- package/dist/core/adjointGradientDescent.d.ts.map +1 -0
- package/dist/core/adjointGradientDescent.js +764 -0
- package/dist/core/adjointGradientDescent.js.map +1 -0
- package/dist/core/constrainedGaussNewton.d.ts +44 -0
- package/dist/core/constrainedGaussNewton.d.ts.map +1 -0
- package/dist/core/constrainedGaussNewton.js +314 -0
- package/dist/core/constrainedGaussNewton.js.map +1 -0
- package/dist/core/constrainedLevenbergMarquardt.d.ts +46 -0
- package/dist/core/constrainedLevenbergMarquardt.d.ts.map +1 -0
- package/dist/core/constrainedLevenbergMarquardt.js +469 -0
- package/dist/core/constrainedLevenbergMarquardt.js.map +1 -0
- package/dist/core/constrainedUtils.d.ts +92 -0
- package/dist/core/constrainedUtils.d.ts.map +1 -0
- package/dist/core/constrainedUtils.js +364 -0
- package/dist/core/constrainedUtils.js.map +1 -0
- package/dist/core/convergence.d.ts +35 -0
- package/dist/core/convergence.d.ts.map +1 -0
- package/dist/core/convergence.js +51 -0
- package/dist/core/convergence.js.map +1 -0
- package/dist/core/createGradientFunction.d.ts +85 -0
- package/dist/core/createGradientFunction.d.ts.map +1 -0
- package/dist/core/createGradientFunction.js +93 -0
- package/dist/core/createGradientFunction.js.map +1 -0
- package/dist/core/effectiveJacobian.d.ts +90 -0
- package/dist/core/effectiveJacobian.d.ts.map +1 -0
- package/dist/core/effectiveJacobian.js +128 -0
- package/dist/core/effectiveJacobian.js.map +1 -0
- package/dist/core/finiteDiff.d.ts +171 -0
- package/dist/core/finiteDiff.d.ts.map +1 -0
- package/dist/core/finiteDiff.js +363 -0
- package/dist/core/finiteDiff.js.map +1 -0
- package/dist/core/gaussNewton.d.ts +29 -0
- package/dist/core/gaussNewton.d.ts.map +1 -0
- package/dist/core/gaussNewton.js +151 -0
- package/dist/core/gaussNewton.js.map +1 -0
- package/dist/core/gradientDescent.d.ts +35 -0
- package/dist/core/gradientDescent.d.ts.map +1 -0
- package/dist/core/gradientDescent.js +204 -0
- package/dist/core/gradientDescent.js.map +1 -0
- package/dist/core/jacobianComputation.d.ts +24 -0
- package/dist/core/jacobianComputation.d.ts.map +1 -0
- package/dist/core/jacobianComputation.js +38 -0
- package/dist/core/jacobianComputation.js.map +1 -0
- package/dist/core/levenbergMarquardt.d.ts +36 -0
- package/dist/core/levenbergMarquardt.d.ts.map +1 -0
- package/dist/core/levenbergMarquardt.js +286 -0
- package/dist/core/levenbergMarquardt.js.map +1 -0
- package/dist/core/lineSearch.d.ts +42 -0
- package/dist/core/lineSearch.d.ts.map +1 -0
- package/dist/core/lineSearch.js +106 -0
- package/dist/core/lineSearch.js.map +1 -0
- package/dist/core/logger.d.ts +77 -0
- package/dist/core/logger.d.ts.map +1 -0
- package/dist/core/logger.js +162 -0
- package/dist/core/logger.js.map +1 -0
- package/dist/core/types.d.ts +427 -0
- package/dist/core/types.d.ts.map +1 -0
- package/dist/core/types.js +15 -0
- package/dist/core/types.js.map +1 -0
- package/dist/index.d.ts +26 -0
- package/dist/index.d.ts.map +1 -0
- package/dist/index.js +29 -0
- package/dist/index.js.map +1 -0
- package/dist/utils/formatting.d.ts +27 -0
- package/dist/utils/formatting.d.ts.map +1 -0
- package/dist/utils/formatting.js +54 -0
- package/dist/utils/formatting.js.map +1 -0
- package/dist/utils/matrix.d.ts +63 -0
- package/dist/utils/matrix.d.ts.map +1 -0
- package/dist/utils/matrix.js +129 -0
- package/dist/utils/matrix.js.map +1 -0
- package/dist/utils/resultFormatter.d.ts +122 -0
- package/dist/utils/resultFormatter.d.ts.map +1 -0
- package/dist/utils/resultFormatter.js +342 -0
- package/dist/utils/resultFormatter.js.map +1 -0
- package/package.json +74 -0
package/README.md
ADDED
@@ -0,0 +1,807 @@
# numopt-js

A flexible numerical optimization library for JavaScript/TypeScript that works smoothly in browsers. This library addresses the lack of flexible continuous optimization libraries for JavaScript that work well in browser environments.

## Documentation

- **API Reference (GitHub Pages)**: https://takuto-na.github.io/numopt-js/
- **Source Repository**: https://github.com/takuto-NA/numopt-js

## Features

- **Gradient Descent**: Simple, robust optimization algorithm with line search support
- **Line Search**: Backtracking line search with Armijo condition for optimal step sizes (following Nocedal & Wright, *Numerical Optimization* (2nd ed.), Algorithm 3.1)
- **Gauss-Newton Method**: Efficient method for nonlinear least squares problems
- **Levenberg-Marquardt Algorithm**: Robust algorithm combining Gauss-Newton with damping
- **Constrained Gauss-Newton**: Efficient constrained nonlinear least squares using effective Jacobian
- **Constrained Levenberg-Marquardt**: Robust constrained nonlinear least squares with damping
- **Adjoint Method**: Efficient constrained optimization using adjoint variables (solves only one linear system per iteration instead of parameterCount systems)
- **Numerical Differentiation**: Automatic gradient and Jacobian computation via finite differences
- **Browser-Compatible**: Works seamlessly in modern browsers
- **TypeScript-First**: Full TypeScript support with comprehensive type definitions
- **Debug-Friendly**: Progress callbacks, verbose logging, and detailed diagnostics

## Requirements

- Node.js >= 18.0.0
- Modern browsers with ES2020 support (for browser builds)

## Installation

```bash
npm install numopt-js
```

## Examples

After installing dependencies with `npm install`, you can run the example scripts with `npm run <script>`:

- `npm run example:gradient` – Runs a basic gradient-descent optimization example.
- `npm run example:rosenbrock` – Optimizes the Rosenbrock function to show robust convergence behavior.
- `npm run example:lm` – Demonstrates Levenberg-Marquardt for nonlinear curve fitting.
- `npm run example:gauss-newton` – Shows Gauss-Newton applied to a nonlinear least-squares problem.
- `npm run example:adjoint` – Introduces the adjoint method for constrained optimization.
- `npm run example:adjoint-advanced` – Explores a more advanced adjoint-based constrained problem.
- `npm run example:constrained-gauss-newton` – Solves constrained nonlinear least squares via the effective Jacobian.
- `npm run example:constrained-lm` – Uses constrained Levenberg-Marquardt for robust constrained least squares.

## Quick Start

1. Ensure Node.js 18+ is installed.
2. Install the library with `npm install numopt-js`.
3. Run the minimal example below to verify your setup:

```typescript
import { gradientDescent } from 'numopt-js';

const cost = (params: Float64Array) => params[0] * params[0] + params[1] * params[1];
const grad = (params: Float64Array) => new Float64Array([2 * params[0], 2 * params[1]]);

const result = gradientDescent(new Float64Array([5, -3]), cost, grad, {
  maxIterations: 200,
  tolerance: 1e-6,
  useLineSearch: true,
});

console.log(result.parameters);
```

**Pick an algorithm:**

- Gradient Descent — stable first choice for smooth problems (see below)
- Gauss-Newton — efficient for nonlinear least squares when residuals are available
- Levenberg–Marquardt — robust least-squares solver with damping
- Constrained methods & Adjoint — enforce constraints with effective Jacobians or adjoint variables

## Examples

After `npm install`, you can try the bundled scripts:

- `npm run example:gradient` — basic gradient descent on a quadratic bowl
- `npm run example:rosenbrock` — Rosenbrock optimization with line search (sketched below)
- `npm run example:gauss-newton` — nonlinear least squares with Gauss-Newton
- `npm run example:lm` — Levenberg–Marquardt curve fitting
- `npm run example:adjoint` — simple adjoint-based constrained optimization
- `npm run example:adjoint-advanced` — adjoint method with custom Jacobians
- `npm run example:constrained-gauss-newton` — constrained least squares via effective Jacobian
- `npm run example:constrained-lm` — constrained Levenberg–Marquardt
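
The `example:rosenbrock` script exercises the classic Rosenbrock benchmark. A minimal version of that setup could look like the sketch below; the exact contents of the bundled example script are an assumption here.

```typescript
import { gradientDescent } from 'numopt-js';

// Rosenbrock function: f(x, y) = (1 - x)^2 + 100 (y - x^2)^2, minimum at (1, 1)
const rosenbrock = (p: Float64Array) =>
  Math.pow(1 - p[0], 2) + 100 * Math.pow(p[1] - p[0] * p[0], 2);

// Analytical gradient of the Rosenbrock function
const rosenbrockGrad = (p: Float64Array) =>
  new Float64Array([
    -2 * (1 - p[0]) - 400 * p[0] * (p[1] - p[0] * p[0]),
    200 * (p[1] - p[0] * p[0]),
  ]);

const result = gradientDescent(new Float64Array([-1.2, 1.0]), rosenbrock, rosenbrockGrad, {
  maxIterations: 10000,
  tolerance: 1e-8,
  useLineSearch: true,
});
console.log(result.parameters); // should approach [1, 1]
```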

### Gradient Descent

Based on standard steepest-descent with backtracking line search (Nocedal & Wright, "Numerical Optimization" 2/e, Ch. 2; Boyd & Vandenberghe, "Convex Optimization", Sec. 9.3).

```typescript
import { gradientDescent } from 'numopt-js';

// Define cost function and gradient
const costFunction = (params: Float64Array) => {
  return params[0] * params[0] + params[1] * params[1];
};

const gradientFunction = (params: Float64Array) => {
  return new Float64Array([2 * params[0], 2 * params[1]]);
};

// Optimize
const initialParams = new Float64Array([5.0, -3.0]);
const result = gradientDescent(initialParams, costFunction, gradientFunction, {
  maxIterations: 1000,
  tolerance: 1e-6,
  useLineSearch: true
});

console.log('Optimized parameters:', result.parameters);
console.log('Final cost:', result.finalCost);
console.log('Converged:', result.converged);
```
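
When `useLineSearch: true` is set, the step length is chosen by a backtracking line search with the Armijo condition (Nocedal & Wright, Algorithm 3.1, as cited above). The sketch below is a minimal stand-alone version of that rule for reference; it is not the library's internal code, and `armijoBacktracking` is a hypothetical helper name.

```typescript
// Illustrative Armijo backtracking (cf. Nocedal & Wright, Algorithm 3.1).
// Not the library's implementation; shown only to explain what `useLineSearch` does.
function armijoBacktracking(
  costFn: (p: Float64Array) => number,
  params: Float64Array,
  gradient: Float64Array,
  direction: Float64Array, // descent direction, e.g. the negative gradient
  alpha0 = 1.0,            // initial step length
  rho = 0.5,               // backtracking factor
  c = 1e-4                 // sufficient-decrease constant
): number {
  const f0 = costFn(params);
  // Directional derivative g^T d (negative for a descent direction)
  let slope = 0;
  for (let i = 0; i < params.length; i++) slope += gradient[i] * direction[i];

  let alpha = alpha0;
  for (let iter = 0; iter < 50; iter++) {
    const trial = new Float64Array(params.length);
    for (let i = 0; i < params.length; i++) trial[i] = params[i] + alpha * direction[i];
    // Armijo condition: f(p + alpha d) <= f(p) + c * alpha * g^T d
    if (costFn(trial) <= f0 + c * alpha * slope) return alpha;
    alpha *= rho; // shrink the step and try again
  }
  return alpha;
}
```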

**Using Result Formatter**: For better formatted output, use the built-in result formatter:

```typescript
import { gradientDescent, printGradientDescentResult } from 'numopt-js';

const result = gradientDescent(initialParams, costFunction, gradientFunction, {
  maxIterations: 1000,
  tolerance: 1e-6,
  useLineSearch: true
});

// Automatically formats and prints the result
printGradientDescentResult(result);
```

### Levenberg-Marquardt (Nonlinear Least Squares)

```typescript
import { levenbergMarquardt } from 'numopt-js';

// Example data for a linear model y ≈ a * x + b (illustrative values only)
const xData = new Float64Array([0, 1, 2, 3, 4]);
const yData = new Float64Array([1.0, 3.1, 4.9, 7.2, 9.0]);

// Define residual function
const residualFunction = (params: Float64Array) => {
  const [a, b] = params;
  const residuals = new Float64Array(xData.length);

  for (let i = 0; i < xData.length; i++) {
    const predicted = a * xData[i] + b;
    residuals[i] = predicted - yData[i];
  }

  return residuals;
};

// Optimize (with automatic numerical Jacobian)
const initialParams = new Float64Array([0, 0]);
const result = levenbergMarquardt(initialParams, residualFunction, {
  useNumericJacobian: true,
  maxIterations: 100,
  tolGradient: 1e-6
});

console.log('Optimized parameters:', result.parameters);
console.log('Final residual norm:', result.finalResidualNorm);
```

**Using Result Formatter**:

```typescript
import { levenbergMarquardt, printLevenbergMarquardtResult } from 'numopt-js';

const result = levenbergMarquardt(initialParams, residualFunction, {
  useNumericJacobian: true,
  maxIterations: 100,
  tolGradient: 1e-6
});

printLevenbergMarquardtResult(result);
```

### With User-Provided Jacobian

```typescript
import { levenbergMarquardt } from 'numopt-js';
import { Matrix } from 'ml-matrix';

const jacobianFunction = (params: Float64Array) => {
  // Compute analytical Jacobian
  return new Matrix(/* ... */);
};

const result = levenbergMarquardt(initialParams, residualFunction, {
  jacobian: jacobianFunction, // User-provided Jacobian in options
  maxIterations: 100
});
```

### Numerical Differentiation

If you don't have analytical gradients or Jacobians, you can use numerical differentiation:

#### Option 1: Helper Functions (Recommended)

The easiest way to use numerical differentiation is with the helper functions:

```typescript
import { gradientDescent, createFiniteDiffGradient } from 'numopt-js';

const costFn = (params: Float64Array) => {
  return Math.pow(params[0] - 3, 2) + Math.pow(params[1] - 2, 2);
};

// Create a gradient function automatically
const gradientFn = createFiniteDiffGradient(costFn);

const result = gradientDescent(
  new Float64Array([0, 0]),
  costFn,
  gradientFn, // No parameter order confusion!
  { maxIterations: 100, tolerance: 1e-6 }
);
```

#### Option 2: Direct Usage

You can also use `finiteDiffGradient` directly:

```typescript
import { gradientDescent, finiteDiffGradient } from 'numopt-js';

const costFn = (params: Float64Array) => {
  return Math.pow(params[0] - 3, 2) + Math.pow(params[1] - 2, 2);
};

const result = gradientDescent(
  new Float64Array([0, 0]),
  costFn,
  (params) => finiteDiffGradient(params, costFn), // ⚠️ Note: params first!
  { maxIterations: 100, tolerance: 1e-6 }
);
```

**Important**: When using `finiteDiffGradient` directly, note the parameter order:

- ✅ Correct: `finiteDiffGradient(params, costFn)`
- ❌ Wrong: `finiteDiffGradient(costFn, params)`

#### Custom Step Size

Both approaches support custom step sizes for the finite difference approximation:

```typescript
// With helper function
const gradientFn = createFiniteDiffGradient(costFn, { stepSize: 1e-8 });

// Direct usage
const gradient = finiteDiffGradient(params, costFn, { stepSize: 1e-8 });
```

### Adjoint Method (Constrained Optimization)

The adjoint method efficiently solves constrained optimization problems by solving for an adjoint variable λ instead of explicitly inverting matrices. This requires solving only one linear system per iteration, making it much more efficient than naive approaches.

**Mathematical background**: For constraint `c(p, x) = 0`, the method computes `df/dp = ∂f/∂p - λ^T ∂c/∂p` where λ solves `(∂c/∂x)^T λ = (∂f/∂x)^T`.
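
To make the formula concrete, the sketch below evaluates one adjoint gradient by hand for the toy problem used in the example further down (`f = p² + x²`, `c = p + x - 1 = 0`), using `ml-matrix` directly. It only illustrates the algebra; it is not how you would normally call the library.

```typescript
import { Matrix, solve } from 'ml-matrix';

// Toy problem: f(p, x) = p^2 + x^2, constraint c(p, x) = p + x - 1 = 0
const p = 2.0, x = -1.0;

// Partial derivatives at (p, x)
const dfdp = Matrix.columnVector([2 * p]); // ∂f/∂p
const dfdx = Matrix.columnVector([2 * x]); // ∂f/∂x
const dcdp = new Matrix([[1]]);            // ∂c/∂p
const dcdx = new Matrix([[1]]);            // ∂c/∂x

// Adjoint equation: (∂c/∂x)^T λ = (∂f/∂x)^T — one linear solve per iteration
const lambda = solve(dcdx.transpose(), dfdx);

// Total (reduced) gradient, as a column vector: df/dp = ∂f/∂p - (∂c/∂p)^T λ
const totalGrad = dfdp.sub(dcdp.transpose().mmul(lambda));
console.log(totalGrad.get(0, 0)); // = 6 at (p, x) = (2, -1)
```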

**Constrained Least Squares**: For residual functions `r(p, x)` with constraints `c(p, x) = 0`, the library provides constrained Gauss-Newton and Levenberg-Marquardt methods. These use the effective Jacobian `J_eff = r_p - r_x C_x^+ C_p` to capture constraint effects, enabling quadratic convergence near the solution while maintaining constraint satisfaction.

```typescript
import { adjointGradientDescent } from 'numopt-js';

// Define cost function: f(p, x) = p² + x²
const costFunction = (p: Float64Array, x: Float64Array) => {
  return p[0] * p[0] + x[0] * x[0];
};

// Define constraint: c(p, x) = p + x - 1 = 0
const constraintFunction = (p: Float64Array, x: Float64Array) => {
  return new Float64Array([p[0] + x[0] - 1.0]);
};

// Initial values (should satisfy constraint: c(p₀, x₀) = 0)
const initialP = new Float64Array([2.0]);
const initialX = new Float64Array([-1.0]); // 2 + (-1) - 1 = 0

// Optimize
const result = adjointGradientDescent(
  initialP,
  initialX,
  costFunction,
  constraintFunction,
  {
    maxIterations: 100,
    tolerance: 1e-6,
    useLineSearch: true,
    logLevel: 'DEBUG' // Enable detailed iteration logging
  }
);

console.log('Optimized parameters:', result.parameters);
console.log('Final states:', result.finalStates);
console.log('Final cost:', result.finalCost);
console.log('Constraint norm:', result.finalConstraintNorm);
```

**With Residual Functions**: The method also supports residual functions `r(p, x)` where `f = 1/2 r^T r`:

```typescript
// Residual function: r(p, x) = [p - 0.5, x - 0.5]
const residualFunction = (p: Float64Array, x: Float64Array) => {
  return new Float64Array([p[0] - 0.5, x[0] - 0.5]);
};

const result = adjointGradientDescent(
  initialP,
  initialX,
  residualFunction, // Can use residual function directly
  constraintFunction,
  { maxIterations: 100, tolerance: 1e-6 }
);
```

**With Analytical Derivatives**: For better performance, you can provide analytical partial derivatives:

```typescript
import { Matrix } from 'ml-matrix';

const result = adjointGradientDescent(
  initialP,
  initialX,
  costFunction,
  constraintFunction,
  {
    dfdp: (p: Float64Array, x: Float64Array) => new Float64Array([2 * p[0]]),
    dfdx: (p: Float64Array, x: Float64Array) => new Float64Array([2 * x[0]]),
    dcdp: (p: Float64Array, x: Float64Array) => new Matrix([[1]]),
    dcdx: (p: Float64Array, x: Float64Array) => new Matrix([[1]]),
    maxIterations: 100
  }
);
```

### Constrained Gauss-Newton (Constrained Nonlinear Least Squares)

For constrained nonlinear least squares problems, use the constrained Gauss-Newton method:

```typescript
import { constrainedGaussNewton } from 'numopt-js';

// Define residual function: r(p, x) = [p - 0.5, x - 0.5]
const residualFunction = (p: Float64Array, x: Float64Array) => {
  return new Float64Array([p[0] - 0.5, x[0] - 0.5]);
};

// Define constraint: c(p, x) = p + x - 1 = 0
const constraintFunction = (p: Float64Array, x: Float64Array) => {
  return new Float64Array([p[0] + x[0] - 1.0]);
};

// Initial values (should satisfy constraint: c(p₀, x₀) = 0)
const initialP = new Float64Array([2.0]);
const initialX = new Float64Array([-1.0]); // 2 + (-1) - 1 = 0

// Optimize
const result = constrainedGaussNewton(
  initialP,
  initialX,
  residualFunction,
  constraintFunction,
  {
    maxIterations: 100,
    tolerance: 1e-6
  }
);

console.log('Optimized parameters:', result.parameters);
console.log('Final states:', result.finalStates);
console.log('Final cost:', result.finalCost);
console.log('Constraint norm:', result.finalConstraintNorm);
```
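
For reference, the effective Jacobian described earlier (`J_eff = r_p - r_x C_x^+ C_p`) can be assembled by hand with `ml-matrix`. The sketch below does so for the same toy problem, where `∂c/∂x` is square and invertible so the pseudo-inverse reduces to an ordinary solve; it is illustrative only and not part of the library's API.

```typescript
import { Matrix, solve } from 'ml-matrix';

// Same toy problem: r(p, x) = [p - 0.5, x - 0.5], c(p, x) = p + x - 1 = 0
const drdp = new Matrix([[1], [0]]); // ∂r/∂p (2x1)
const drdx = new Matrix([[0], [1]]); // ∂r/∂x (2x1)
const dcdp = new Matrix([[1]]);      // ∂c/∂p (1x1)
const dcdx = new Matrix([[1]]);      // ∂c/∂x (1x1)

// State sensitivity: dx/dp = -(∂c/∂x)^{-1} ∂c/∂p
const dxdp = solve(dcdx, dcdp).mul(-1);

// Effective Jacobian: J_eff = ∂r/∂p + ∂r/∂x · dx/dp  (= r_p - r_x C_x^{-1} C_p)
const jEff = drdp.clone().add(drdx.mmul(dxdp));
console.log(jEff.to2DArray()); // [[1], [-1]] for this problem
```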

### Constrained Levenberg-Marquardt (Robust Constrained Least Squares)

For more robust constrained optimization, use the constrained Levenberg-Marquardt method:

```typescript
import { constrainedLevenbergMarquardt } from 'numopt-js';

const result = constrainedLevenbergMarquardt(
  initialP,
  initialX,
  residualFunction,
  constraintFunction,
  {
    maxIterations: 100,
    tolGradient: 1e-6,
    tolStep: 1e-6,
    tolResidual: 1e-6,
    lambdaInitial: 1e-3,
    lambdaFactor: 10.0
  }
);
```

**With Analytical Derivatives**: For better performance, provide analytical partial derivatives:

```typescript
import { Matrix } from 'ml-matrix';

const result = constrainedGaussNewton(
  initialP,
  initialX,
  residualFunction,
  constraintFunction,
  {
    drdp: (p: Float64Array, x: Float64Array) => new Matrix([[1], [0]]),
    drdx: (p: Float64Array, x: Float64Array) => new Matrix([[0], [1]]),
    dcdp: (p: Float64Array, x: Float64Array) => new Matrix([[1]]),
    dcdx: (p: Float64Array, x: Float64Array) => new Matrix([[1]]),
    maxIterations: 100
  }
);
```

## API

### Gradient Descent

```typescript
function gradientDescent(
  initialParameters: Float64Array,
  costFunction: CostFn,
  gradientFunction: GradientFn,
  options?: GradientDescentOptions
): GradientDescentResult
```

### Levenberg-Marquardt

```typescript
function levenbergMarquardt(
  initialParameters: Float64Array,
  residualFunction: ResidualFn,
  options?: LevenbergMarquardtOptions
): LevenbergMarquardtResult
```

### Adjoint Gradient Descent

```typescript
function adjointGradientDescent(
  initialParameters: Float64Array,
  initialStates: Float64Array,
  costFunction: ConstrainedCostFn | ConstrainedResidualFn,
  constraintFunction: ConstraintFn,
  options?: AdjointGradientDescentOptions
): AdjointGradientDescentResult
```

### Constrained Gauss-Newton

```typescript
function constrainedGaussNewton(
  initialParameters: Float64Array,
  initialStates: Float64Array,
  residualFunction: ConstrainedResidualFn,
  constraintFunction: ConstraintFn,
  options?: ConstrainedGaussNewtonOptions
): ConstrainedGaussNewtonResult
```

### Constrained Levenberg-Marquardt

```typescript
function constrainedLevenbergMarquardt(
  initialParameters: Float64Array,
  initialStates: Float64Array,
  residualFunction: ConstrainedResidualFn,
  constraintFunction: ConstraintFn,
  options?: ConstrainedLevenbergMarquardtOptions
): ConstrainedLevenbergMarquardtResult
```

### Options

All algorithms support common options:

- `maxIterations?: number` - Maximum number of iterations (default: 1000)
- `tolerance?: number` - Convergence tolerance (default: 1e-6)
- `onIteration?: (iteration: number, cost: number, params: Float64Array) => void` - Progress callback
- `verbose?: boolean` - Enable verbose logging (default: false)

#### Gradient Descent Options

- `stepSize?: number` - Fixed step size (learning rate). If not provided, line search is used (default: undefined, uses line search)
- `useLineSearch?: boolean` - Use line search to determine optimal step size (default: true)
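
In practice this means you either pin the step size or let the line search choose it. A minimal sketch of the two modes, using the option names documented above:

```typescript
import { gradientDescent } from 'numopt-js';

const costFn = (p: Float64Array) => p[0] * p[0] + p[1] * p[1];
const gradFn = (p: Float64Array) => new Float64Array([2 * p[0], 2 * p[1]]);
const p0 = new Float64Array([5, -3]);

// Fixed learning rate, line search disabled
const fixed = gradientDescent(p0, costFn, gradFn, { stepSize: 0.1, useLineSearch: false });

// Let the backtracking line search choose the step each iteration (default behavior)
const searched = gradientDescent(p0, costFn, gradFn, { useLineSearch: true });
```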

#### Levenberg-Marquardt Options

- `jacobian?: JacobianFn` - Analytical Jacobian function (if provided, used instead of numerical differentiation)
- `useNumericJacobian?: boolean` - Use numerical differentiation for Jacobian (default: true)
- `jacobianStep?: number` - Step size for numerical Jacobian computation (default: 1e-6)
- `lambdaInitial?: number` - Initial damping parameter (default: 1e-3)
- `lambdaFactor?: number` - Factor for updating lambda (default: 10.0); see the sketch after this list
- `tolGradient?: number` - Tolerance for gradient norm convergence (default: 1e-6)
- `tolStep?: number` - Tolerance for step size convergence (default: 1e-6)
- `tolResidual?: number` - Tolerance for residual norm convergence (default: 1e-6)
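
The damping options above drive the classic Levenberg-Marquardt update: each iteration solves the damped normal equations and then scales λ by `lambdaFactor` depending on whether the step reduced the residual norm. The following is a schematic sketch of one such step, assuming `ml-matrix` for the linear algebra; it is not the library's exact update or acceptance rule.

```typescript
import { Matrix, solve } from 'ml-matrix';

// One schematic LM step: solve (J^T J + λ I) δ = -J^T r for the parameter update δ.
// Illustrative only — the library's exact rules may differ.
function lmStep(J: Matrix, r: Matrix, lambda: number): Matrix {
  const JtJ = J.transpose().mmul(J);
  const damped = JtJ.add(Matrix.eye(JtJ.rows).mul(lambda));
  const rhs = J.transpose().mmul(r).mul(-1);
  return solve(damped, rhs);
}

// λ bookkeeping (lambdaInitial / lambdaFactor from the options above):
// if the step reduces ||r||, accept it and divide λ by lambdaFactor;
// otherwise reject it and multiply λ by lambdaFactor.
```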

**Levenberg-Marquardt References**

- Moré, J. J., "The Levenberg-Marquardt Algorithm: Implementation and Theory," in *Numerical Analysis*, Lecture Notes in Mathematics 630, 1978. DOI: https://doi.org/10.1007/BFb0067700
- Lourakis, M. I. A., "A Brief Description of the Levenberg-Marquardt Algorithm," 2005 tutorial. PDF: https://users.ics.forth.gr/lourakis/levmar/levmar.pdf

#### Gauss-Newton Options

- `jacobian?: JacobianFn` - Analytical Jacobian function (if provided, used instead of numerical differentiation)
- `useNumericJacobian?: boolean` - Use numerical differentiation for Jacobian (default: true)
- `jacobianStep?: number` - Step size for numerical Jacobian computation (default: 1e-6)
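
The unconstrained Gauss-Newton solver is not shown in the usage examples above. A minimal call might look like the sketch below, assuming `gaussNewton` is exported with the same `(initialParameters, residualFunction, options)` shape as `levenbergMarquardt`; check the API reference for the exact signature.

```typescript
import { gaussNewton } from 'numopt-js';

// Residuals with a trivially known minimizer at (1, -2)
const residualFn = (params: Float64Array) =>
  new Float64Array([params[0] - 1.0, params[1] + 2.0]);

const gnResult = gaussNewton(new Float64Array([0, 0]), residualFn, {
  useNumericJacobian: true,
  maxIterations: 50,
});
console.log(gnResult.parameters);
```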

#### Adjoint Gradient Descent Options

- `dfdp?: (p: Float64Array, x: Float64Array) => Float64Array` - Analytical partial derivative ∂f/∂p (optional)
- `dfdx?: (p: Float64Array, x: Float64Array) => Float64Array` - Analytical partial derivative ∂f/∂x (optional)
- `dcdp?: (p: Float64Array, x: Float64Array) => Matrix` - Analytical partial derivative ∂c/∂p (optional)
- `dcdx?: (p: Float64Array, x: Float64Array) => Matrix` - Analytical partial derivative ∂c/∂x (optional)
- `stepSizeP?: number` - Step size for numerical differentiation w.r.t. parameters (default: 1e-6)
- `stepSizeX?: number` - Step size for numerical differentiation w.r.t. states (default: 1e-6)
- `constraintTolerance?: number` - Tolerance for constraint satisfaction check (default: 1e-6)

#### Constrained Gauss-Newton Options

- `drdp?: (p: Float64Array, x: Float64Array) => Matrix` - Analytical partial derivative ∂r/∂p (optional)
- `drdx?: (p: Float64Array, x: Float64Array) => Matrix` - Analytical partial derivative ∂r/∂x (optional)
- `dcdp?: (p: Float64Array, x: Float64Array) => Matrix` - Analytical partial derivative ∂c/∂p (optional)
- `dcdx?: (p: Float64Array, x: Float64Array) => Matrix` - Analytical partial derivative ∂c/∂x (optional)
- `stepSizeP?: number` - Step size for numerical differentiation w.r.t. parameters (default: 1e-6)
- `stepSizeX?: number` - Step size for numerical differentiation w.r.t. states (default: 1e-6)
- `constraintTolerance?: number` - Tolerance for constraint satisfaction check (default: 1e-6)

#### Constrained Levenberg-Marquardt Options

Extends `ConstrainedGaussNewtonOptions` with:

- `lambdaInitial?: number` - Initial damping parameter (default: 1e-3)
- `lambdaFactor?: number` - Factor for updating lambda (default: 10.0)
- `tolGradient?: number` - Tolerance for gradient norm convergence (default: 1e-6)
- `tolStep?: number` - Tolerance for step size convergence (default: 1e-6)
- `tolResidual?: number` - Tolerance for residual norm convergence (default: 1e-6)

**Note**: The constraint function `c(p, x)` does not need to return a vector with the same length as the state vector `x`. The adjoint method supports both square and non-square constraint Jacobians (overdetermined and underdetermined systems). For non-square matrices, the method uses QR decomposition or pseudo-inverse to solve the adjoint equation.
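
As a concrete illustration of the non-square case, the sketch below solves an adjoint equation with more constraints than states using the pseudo-inverse from `ml-matrix` (which picks the minimum-norm λ for an underdetermined system). It is illustrative only and independent of the library's internal solver.

```typescript
import { Matrix, pseudoInverse } from 'ml-matrix';

// Non-square example: 2 constraints, 1 state, so ∂c/∂x is 2x1.
const dcdx = new Matrix([[1], [2]]);   // ∂c/∂x (2x1)
const dfdx = Matrix.columnVector([3]); // (∂f/∂x)^T (1x1)

// Least-squares / minimum-norm solution of (∂c/∂x)^T λ = (∂f/∂x)^T
const lambda = pseudoInverse(dcdx.transpose()).mmul(dfdx);
console.log(lambda.to2DArray()); // minimum-norm λ, here [[0.6], [1.2]]
```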

#### Numerical Differentiation Options

- `stepSize?: number` - Step size for finite difference approximation (default: 1e-6)
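
For reference, a central difference approximates each gradient component as `(f(x + h·eᵢ) - f(x - h·eᵢ)) / (2h)`, where `h` is the step size above. A minimal stand-alone version (not the library's implementation of `finiteDiffGradient`) looks like this:

```typescript
// Minimal central-difference gradient, shown for reference only.
function centralDiffGradient(
  costFn: (p: Float64Array) => number,
  params: Float64Array,
  stepSize = 1e-6
): Float64Array {
  const grad = new Float64Array(params.length);
  for (let i = 0; i < params.length; i++) {
    const plus = Float64Array.from(params);
    const minus = Float64Array.from(params);
    plus[i] += stepSize;
    minus[i] -= stepSize;
    grad[i] = (costFn(plus) - costFn(minus)) / (2 * stepSize);
  }
  return grad;
}
```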

## Result Formatting

The library provides helper functions for formatting and displaying optimization results in a consistent, user-friendly manner. These functions replace repetitive `console.log` statements and provide better readability.

### Basic Usage

```typescript
import { gradientDescent, printGradientDescentResult } from 'numopt-js';

const result = gradientDescent(initialParams, costFunction, gradientFunction, {
  maxIterations: 1000,
  tolerance: 1e-6
});

// Print formatted result
printGradientDescentResult(result);
```

### Available Formatters

- `printOptimizationResult()` - For basic `OptimizationResult`
- `printGradientDescentResult()` - For `GradientDescentResult` (includes line search info)
- `printLevenbergMarquardtResult()` - For `LevenbergMarquardtResult` (includes lambda)
- `printConstrainedGaussNewtonResult()` - For constrained optimization results
- `printConstrainedLevenbergMarquardtResult()` - For constrained LM results
- `printAdjointGradientDescentResult()` - For adjoint method results
- `printResult()` - Type-safe overloaded function that works with any result type

### Customization Options

All formatters accept an optional `ResultFormatterOptions` object:

```typescript
import { printOptimizationResult } from 'numopt-js';

const startTime = performance.now();
const result = /* ... optimization ... */;
const elapsedTime = performance.now() - startTime;

printOptimizationResult(result, {
  showSectionHeaders: true,   // Show "=== Optimization Results ===" header
  showExecutionTime: true,    // Include execution time
  elapsedTimeMs: elapsedTime, // Execution time in milliseconds
  maxParametersToShow: 10,    // Max parameters to display before truncating
  parameterPrecision: 6,      // Decimal places for parameters
  costPrecision: 8,           // Decimal places for cost/norms
  constraintPrecision: 10     // Decimal places for constraint violations
});
```

### Formatting Strings Instead of Printing

If you need the formatted string instead of printing to console:

```typescript
import { formatOptimizationResult } from 'numopt-js';

const formattedString = formatOptimizationResult(result);
// Use formattedString as needed (e.g., save to file, send to API, etc.)
```

### Automatic Parameter Formatting

The formatters automatically handle parameter arrays:

- **Small arrays (≤3 elements)**: Displayed individually with labels (`p = 1.0, x = 2.0`)
- **Medium arrays (4-10 elements)**: Displayed as array (`[1.0, 2.0, 3.0, ...]`)
- **Large arrays (>10 elements)**: Truncated with "... and N more" (`[1.0, 2.0, ..., ... and 15 more]`)

## Examples

See the `examples/` directory for complete working examples:

- Gradient descent with Rosenbrock function
- Curve fitting with Levenberg-Marquardt
- Linear and nonlinear regression
- Constrained optimization with adjoint method
- Constrained Gauss-Newton method
- Constrained Levenberg-Marquardt method

To run the examples:

```bash
# Using npm scripts (recommended)
npm run example:gradient
npm run example:rosenbrock
npm run example:lm
npm run example:gauss-newton

# Or directly with tsx
npx tsx examples/gradient-descent-example.ts
npx tsx examples/curve-fitting-lm.ts
npx tsx examples/rosenbrock-optimization.ts
npx tsx examples/adjoint-example.ts
npx tsx examples/adjoint-advanced-example.ts
npx tsx examples/constrained-gauss-newton-example.ts
npx tsx examples/constrained-levenberg-marquardt-example.ts
```

## References

- Moré, J. J., "The Levenberg-Marquardt Algorithm: Implementation and Theory," in *Numerical Analysis*, Lecture Notes in Mathematics 630, 1978. DOI: https://doi.org/10.1007/BFb0067700
- Lourakis, M. I. A., "A Brief Description of the Levenberg-Marquardt Algorithm," 2005 tutorial. PDF: http://www.ics.forth.gr/~lourakis/publ/2005/LM.pdf
- Nocedal, J. & Wright, S. J., "Numerical Optimization" (2nd ed.), Chapter 12 (constrained optimization), 2006

## MVP Scope

### Included

- Gradient descent with line search
- Gauss-Newton method
- Levenberg-Marquardt algorithm
- Constrained Gauss-Newton method (nonlinear least squares with equality constraints)
- Constrained Levenberg-Marquardt method (robust constrained nonlinear least squares)
- Adjoint method for constrained optimization (equality constraints)
- Numerical differentiation (central difference)
- Browser compatibility
- TypeScript support

### Not Included (Future Work)

- Automatic differentiation
- Constraint handling (inequality constraints)
- Global optimization guarantees
- Evolutionary algorithms (CMA-ES, etc.)
- Other optimization algorithms (BFGS, etc.)
- Sparse matrix support
- Parallel computation

## Type Definitions

### Why Float64Array?

This library uses `Float64Array` instead of regular JavaScript arrays for:

- **Performance**: Float64Array provides better performance for numerical computations
- **Memory efficiency**: More memory-efficient storage for large parameter vectors
- **Type safety**: Ensures all values are 64-bit floating-point numbers

To convert from regular arrays:

```typescript
const regularArray = [1.0, 2.0, 3.0];
const float64Array = new Float64Array(regularArray);
```

### Why Matrix from ml-matrix?

The library uses `Matrix` from the `ml-matrix` package for Jacobian matrices because:

- **Efficient matrix operations**: Provides optimized matrix multiplication and linear algebra operations
- **Well-tested**: Mature library with comprehensive matrix operations
- **Browser-compatible**: Works seamlessly in browser environments

To create a Matrix from a 2D array:

```typescript
import { Matrix } from 'ml-matrix';
const matrix = new Matrix([[1, 2], [3, 4]]);
```

## Troubleshooting

### Common Errors and Solutions

#### Error: "Jacobian computation is required but not provided"

**Problem**: You're using `levenbergMarquardt` or `gaussNewton` without providing a Jacobian function and numerical Jacobian is disabled.

**Solutions**:

1. Enable numerical Jacobian (default behavior):

```typescript
levenbergMarquardt(params, residualFn, { useNumericJacobian: true })
```

2. Provide an analytical Jacobian function:

```typescript
const jacobianFn = (params: Float64Array) => {
  // Your Jacobian computation
  return new Matrix(/* ... */);
};
levenbergMarquardt(params, residualFn, { jacobian: jacobianFn, ...options })
```

#### Algorithm doesn't converge

**Possible causes**:

- Initial parameters are too far from the solution
- Tolerance is too strict
- Maximum iterations too low
- Step size (for gradient descent) is inappropriate

**Solutions**:

1. Try different initial parameters
2. Increase `maxIterations`
3. Adjust tolerance values (`tolerance`, `tolGradient`, `tolStep`, `tolResidual`)
4. For gradient descent, enable line search (`useLineSearch: true`) or adjust `stepSize`
5. Enable verbose logging (`verbose: true`) to see what's happening

#### Singular matrix error (Gauss-Newton)

**Problem**: The Jacobian matrix is singular or ill-conditioned, making the normal equations unsolvable.

**Solutions**:

1. Use Levenberg-Marquardt instead (handles singular matrices better)
2. Check your residual function for numerical issues
3. Try different initial parameters
4. Increase numerical Jacobian step size (`jacobianStep`)

#### Singular matrix error (Constrained Gauss-Newton)

**Problem**: The effective Jacobian `J_eff^T J_eff` is singular or ill-conditioned.

**Solutions**:

1. Use Constrained Levenberg-Marquardt instead (handles singular matrices better with damping)
2. Check that constraint Jacobian `∂c/∂x` is well-conditioned
3. Verify initial states satisfy constraints approximately
4. Try different initial parameters and states

#### Singular matrix error (Adjoint Method)

**Problem**: The constraint Jacobian `∂c/∂x` is singular or ill-conditioned, making the adjoint equation unsolvable.

**Solutions**:

1. Check that `∂c/∂x` is well-conditioned (if square) or has full rank (if non-square)
2. Verify initial states satisfy the constraint approximately (`c(p₀, x₀) ≈ 0`)
3. Try different initial values that don't make `∂c/∂x` singular
4. For nonlinear constraints, ensure initial values are on the constraint manifold

#### Results don't match expectations

**Check**:

1. Verify your cost/residual function is correct
2. Check that gradient/Jacobian functions are correct (if provided)
3. Try enabling `verbose: true` or `logLevel: 'DEBUG'` to see iteration details
4. Use `onIteration` callback to monitor progress
5. Verify initial parameters are reasonable
6. For adjoint method, ensure initial states satisfy constraints approximately

### Debugging Tips

1. **Enable verbose logging**: Set `verbose: true` to see detailed iteration information
2. **Use progress callbacks**: Use `onIteration` to monitor convergence:

```typescript
const result = gradientDescent(params, costFn, gradFn, {
  onIteration: (iter, cost, params) => {
    console.log(`Iteration ${iter}: cost = ${cost}`);
  }
});
```

3. **Check convergence status**: Always check `result.converged` to see if optimization succeeded
4. **Monitor gradient/residual norms**: Check `finalGradientNorm` or `finalResidualNorm` to understand convergence quality (see the snippet below)
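
For example, a quick post-run check after any of the solver calls above, using result fields shown earlier in this README, might look like:

```typescript
// Quick sanity check on a gradient-descent result
if (!result.converged) {
  console.warn('Optimization did not converge; final cost =', result.finalCost);
  console.warn('Final gradient norm =', result.finalGradientNorm);
}
```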

## Requirements

- Node.js >= 18.0.0
- Modern browsers with ES2020 support (required for running in-browser examples)

## License

MIT

## Contributing

Contributions are welcome! Please read `CODING_RULES.md` before submitting pull requests.