nashopt-1.0.0-py3-none-any.whl

@@ -0,0 +1,539 @@
+ Metadata-Version: 2.4
+ Name: nashopt
+ Version: 1.0.0
+ Summary: NashOpt - A Python Library for Computing Generalized Nash Equilibria and Solving Game-Design and Game-Theoretic Control Problems.
+ Author-email: Alberto Bemporad <alberto.bemporad@imtlucca.it>
+ Project-URL: Homepage, https://github.com/bemporad/nashopt
+ Keywords: generalized nash equilibrium problems,game theory
+ Classifier: Programming Language :: Python :: 3
+ Classifier: License :: OSI Approved :: Apache Software License
+ Classifier: Operating System :: OS Independent
+ Requires-Python: >=3.9
+ Description-Content-Type: text/markdown
+ License-File: LICENSE
+ Requires-Dist: numpy
+ Requires-Dist: jax
+ Requires-Dist: jaxopt
+ Requires-Dist: scipy
+ Requires-Dist: highspy
+ Requires-Dist: osqp
+ Dynamic: license-file
+
+ <img src="http://cse.lab.imtlucca.it/~bemporad/nashopt/images/nashopt-logo.png" alt="nashopt" width=40%/>
+
+ # NashOpt
+ ### A Python library for computing generalized Nash equilibria and solving game-design and game-theoretic control problems
+
+ This repository includes a library for solving different classes of nonlinear **Generalized Nash Equilibrium Problems** (GNEPs). The decision variables and Lagrange multipliers that jointly satisfy the KKT conditions for all agents are determined by solving a nonlinear least-squares problem. If a zero residual is obtained, the solution is a candidate generalized Nash equilibrium, a property that can be verified by evaluating the individual **best responses**. For the special case of **Linear-Quadratic Games**, one or more equilibria are obtained by solving mixed-integer linear programming problems. The package can also solve **game-design** problems by optimizing the parameters of a **multiparametric GNEP** via box-constrained nonlinear optimization, as well as **game-theoretic control** problems, such as **Linear Quadratic Regulation** and **Model Predictive Control** problems.
+
+ ---
+ ## Installation
+
+ ~~~bash
+ pip install nashopt
+ ~~~
+
+
+ ## Overview
+
+ Consider a game with $N$ agents. Each agent $i$ solves the following problem
+
+ $$
+ x_i^\star \in \arg\min_{x_i \in \mathbb{R}^{n_i}} f_i(x)
+ $$
+
+ subject to the following shared and local constraints
+
+ $$
+ g(x) \leq 0, \qquad A_{\mathrm{eq}}x = b_{\mathrm{eq}}, \qquad h(x)=0, \qquad \ell_i \leq x_i \leq u_i
+ $$
+
+ where:
+
+ - $f_i$ is the objective of agent $i$, specified as a <a href="https://github.com/jax-ml/jax">JAX</a> function;
+ - $x = (x_1^\top \dots x_N^\top)^\top \in \mathbb{R}^n$ are the decision variables, $x_i\in\mathbb{R}^{n_i}$;
+ - $g : \mathbb{R}^n \to \mathbb{R}^{n_g}$ encodes shared inequality constraints (JAX function);
+ - $A_{\mathrm{eq}}, b_{\mathrm{eq}}$ define linear shared equality constraints;
+ - $h : \mathbb{R}^n \to \mathbb{R}^{n_h}$ encodes shared nonlinear equality constraints (JAX function);
+ - $\ell, u$ are local box constraints.
+
+ A **generalized Nash equilibrium** $x^\star$ is a vector such that no agent can reduce their cost given the others' strategies and the feasibility constraints, i.e.,
+
+ $$f_i(x^\star_{i}, x^\star_{-i})\leq f_i(x_i, x^\star_{-i})$$
+
+ for all feasible $x=(x_i,x_{-i}^\star)$, or equivalently, in terms of **best responses**:
+
+ $$
+ \begin{aligned}
+ x_i^\star \in \arg\min_{\ell_{i}\leq x_{i}\leq u_{i}} &f_i(x)\\
+ \textrm{s.t.} \quad &g(x) \leq 0 \\
+ &A_{\mathrm{eq}}x = b_{\mathrm{eq}}\\
+ &h(x) = 0\\
+ &x_{-i}=x_{-i}^\star.
+ \end{aligned}
+ $$
+
+
+ ---
+
+ ## KKT Conditions
+
+ For each agent $i$, the necessary KKT conditions are:
+
+ **1. Stationarity**
+
+ $$ \nabla_{x_i} f_i(x) + \nabla_{x_i} g(x)^\top \lambda_i + [A_i^\top\ \nabla_{x_i} h(x)^\top] \mu_i - v_i + y_i = 0 $$
+
+ where $A_i$ collects the columns of $A_{\mathrm{eq}}$ associated with $x_i$.
+
+ **2. Primal Feasibility**
+
+ $$
+ g(x) \leq 0, \qquad A_{\mathrm{eq}}x = b_{\mathrm{eq}}, \qquad h(x) = 0, \qquad \ell \le x \le u
+ $$
+
+ **3. Dual Feasibility**
+
+ $$
+ \lambda_i \ge 0, \qquad v_i\geq 0, \qquad y_i\geq 0
+ $$
+
+ **4. Complementary Slackness**
+
+ $$
+ \lambda_{i,j} \, g_j(x) = 0
+ $$
+
+ $$
+ v_{i,k} \, (x_{i,k} - \ell_{i,k}) = 0
+ $$
+
+ $$
+ y_{i,k} \, (u_{i,k} - x_{i,k}) = 0
+ $$
+
+ For general nonlinear problems, `nashopt` enforces primal feasibility (with respect to the inequalities), dual feasibility, and complementary slackness, which can be summarized as complementarity pairs $0\leq a\perp b\geq 0$, by using the nonlinear complementarity problem (NCP) Fischer–Burmeister function [1]
+
+ $$
+ \phi(a, b) = \sqrt{a^2 + b^2} - a - b
+ $$
+
+ which has the property
+
+ $$
+ \phi(a,b) = 0 \;\Longleftrightarrow\; a \ge 0,\; b \ge 0,\; ab = 0.
+ $$
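+
+ As a quick numerical illustration of this property (a standalone sketch, not part of the `nashopt` API):
+
+ ```python
+ import jax.numpy as jnp
+
+ def phi(a, b):
+     # Fischer-Burmeister NCP function: zero iff a >= 0, b >= 0, a*b = 0
+     return jnp.sqrt(a**2 + b**2) - a - b
+
+ print(phi(0.0, 3.0))  # 0.0: the pair (0, 3) is complementary
+ print(phi(2.0, 1.0))  # nonzero: both entries are strictly positive
+ ```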
+
+ Therefore, the above KKT conditions can be rewritten as the nonlinear system of equalities
+
+ $$R(z)=0$$
+
+ where $z = (x, \{\lambda_i\}, \{\mu_i\}, \{v_i\}, \{y_i\})$. To find a solution, we solve the nonlinear least-squares problem
+
+ $$
+ \min_z \frac{1}{2}\|R(z)\|_2^2
+ $$
+
+ using `scipy`'s nonlinear least-squares solver <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.least_squares.html#scipy.optimize.least_squares">`least_squares`</a>, exploiting JAX's automatic differentiation capabilities.
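+
+ The following minimal sketch reproduces this pattern on a toy residual (the function below stands in for the KKT residual $R$; it is not the library's internal code):
+
+ ```python
+ import numpy as np
+ import jax
+ import jax.numpy as jnp
+ from scipy.optimize import least_squares
+
+ def R(z):
+     # toy residual standing in for the KKT residual R(z)
+     return jnp.array([z[0]**2 + z[1] - 1.0, z[0] - z[1]])
+
+ def R_np(z):
+     return np.asarray(R(z))
+
+ jac = jax.jacobian(R)          # Jacobian via JAX automatic differentiation
+ def jac_np(z):
+     return np.asarray(jac(z))
+
+ sol = least_squares(R_np, np.zeros(2), jac=jac_np, method="trf")
+ print(sol.x, np.linalg.norm(sol.fun))  # zero residual norm => R(z*) = 0
+ ```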
+
+ After solving the nonlinear least-squares problem, if the residual $R(z^\star)=0$, we can check if indeed $x^\star$ is a GNE by computing the best responses of each agent
+
+ $$ \min_{\ell_i\leq x_i\leq u_i} f_i(x_i, x^\star_{-i}) $$
+
+ $$ \textrm{s.t.} \qquad g(x)\leq 0, \qquad A_{\mathrm{eq}}x=b_{\mathrm{eq}}, \qquad h(x)=0.$$
+
+ In `nashopt`, the best response of agent $i$ is computed by solving the following box-constrained nonlinear
+ programming problem with `scipy`'s <a href="https://jaxopt.github.io/stable/_autosummary/jaxopt.ScipyBoundedMinimize.html">`L-BFGS-B`</a> method via the `jaxopt` interface:
+
+ $$ \min_{x_i} f_i(x_i, x_{-i}) + \rho \left(\left(\sum_j \max(g_j(x), 0)^2\right) + \|A_{\mathrm{eq}} x - b_{\mathrm{eq}}\|_2^2 + \|h(x)\|_2^2\right) $$
+
+ $$ \textrm{s.t.} \qquad \ell_i \leq x_i \leq u_i$$
+
+ with $x_{-i}=x^\star_{-i}$, where $\rho\gg 1$ is a large penalty on the violation of the shared constraints.
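+
+ The penalty reformulation is easy to sketch with `jaxopt` on a toy two-agent game (the objective and constraint below are illustrative stand-ins, not the library's internals):
+
+ ```python
+ import jax.numpy as jnp
+ from jaxopt import ScipyBoundedMinimize
+
+ rho = 1e6  # penalty weight on shared-constraint violation
+
+ def penalized_obj(x1, x2_star):
+     # stand-ins for f_1(x) and the shared constraint g(x) <= 0
+     f1 = jnp.sum((x1 - 1.0)**2)
+     g = x1[0] + x2_star[0] - 1.5
+     return f1 + rho * jnp.maximum(g, 0.0)**2
+
+ solver = ScipyBoundedMinimize(fun=penalized_obj, method="L-BFGS-B")
+ x2_star = jnp.array([0.8])                 # other agent's equilibrium strategy
+ res = solver.run(jnp.array([0.5]), bounds=(jnp.zeros(1), jnp.ones(1)),
+                  x2_star=x2_star)
+ print(res.params)  # best response of agent 1 (~0.7: the shared constraint binds)
+ ```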
+
+ *Variational GNEs* can be obtained by making the Lagrange multipliers associated with the
+ shared constraints the same for all players, i.e., by replacing $\{\lambda_i\}$ with a single vector $\lambda$ and $\{\mu_i\}$ with a single vector $\mu$, which further reduces the dimension of the zero-finding problem.
+
+ ### Example
+
+ We want to solve a simple GNEP with 3 agents, $x_1\in\mathbb{R}^2$, $x_2\in\mathbb{R}$, $x_3\in\mathbb{R}$, defined as follows:
+
+ ```python
+ import numpy as np
+ import jax
+ import jax.numpy as jnp
+ from nashopt import GNEP
+
+ sizes = [2, 1, 1]  # [n1, n2, n3]
+
+ # Agent 1 objective:
+ @jax.jit
+ def f1(x):
+     return jnp.sum((x[0:2] - jnp.array([1.0, -0.5]))**2)
+
+ # Agent 2 objective:
+ @jax.jit
+ def f2(x):
+     return (x[2] + 0.3)**2
+
+ # Agent 3 objective:
+ @jax.jit
+ def f3(x):
+     return (x[3] - 0.5*(x[0] + x[2]))**2
+
+ # Shared constraint g(x) <= 0:
+ def g(x):
+     return jnp.array([x[3] + x[0] + x[2] - 2.0])
+
+ lb = np.zeros(4)  # lower bounds
+ ub = np.ones(4)   # upper bounds
+
+ gnep = GNEP(sizes, f=[f1, f2, f3], g=g, ng=1, lb=lb, ub=ub)
+ ```
+
+ We call `solve()` to solve the problem defined above:
+
+ ```python
+ sol = gnep.solve()
+ x_star = sol.x
+ ```
+
+ which gives the following solution:
+
+ ```
+ x* = [ 1. 0. 0. 0.5]
+ ```
+
+ We can check whether the KKT conditions are satisfied by looking at the residual norm $\|R(z^\star)\|_2$:
+
+ ```python
+ residual = sol.res
+ print(np.linalg.norm(residual))
+ # 1.223145e-16
+ ```
+
+ We can also inspect the vector of Lagrange multipliers and other statistics about the solution process:
+
+ ```python
+ lam_star = sol.lam
+ stats = sol.stats
+ ```
+
+ After solving the problem, we can check if indeed $x^\star$ is an equilibrium by evaluating the agents' individual best responses:
+
+ ```python
+ for i in range(gnep.N):
+     sol = gnep.best_response(i, x_star)
+     print(sol.x)
+ ```
+
+ ```
+ [ 1. 0. -0. 0.5]
+ [ 1. -0. 0. 0.5]
+ [ 1. -0. -0. 0.5]
+ ```
+
+ To add linear equality constraints, use the following:
+
+ ```python
+ Aeq = np.array([[1, 1, 1, 1]])
+ beq = np.array([2.0])
+
+ gnep = GNEP(sizes, f=[f1, f2, f3], g=g, ng=1, lb=lb, ub=ub, Aeq=Aeq, beq=beq)
+ ```
+
+ while for general nonlinear equality constraints use:
+
+ ```python
+ gnep = GNEP(sizes, f=[f1, f2, f3], g=g, ng=1, lb=lb, ub=ub, h=h, nh=nh)
+ ```
+
+ where `h` is a vector function returning a JAX array of length `nh`.
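+
+ For instance, a hypothetical `h` with `nh = 1` could look like:
+
+ ```python
+ # hypothetical example: enforce x1*x4 = 0.25 as h(x) = 0
+ def h(x):
+     return jnp.array([x[0]*x[3] - 0.25])
+ nh = 1
+ ```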
+
+
+ You can also specify an initial guess $x_0$ for the GNEP solver as follows:
+
+ ```python
+ sol = gnep.solve(x0)
+ ```
+
+ To compute a **variational GNE** solution, set the flag `variational=True`:
+
+ ```python
+ gnep = GNEP( ... , variational=True)
+ ```
+
+ To select the nonlinear least-squares solver used to solve the GNEP, use one of the following calls:
+
+ ```python
+ sol = gnep.solve(x0, solver="trf")
+ ```
+
+ or
+
+ ```python
+ sol = gnep.solve(x0, solver="lm")
+ ```
+
+ where `trf` calls a trust-region reflective algorithm, while `lm` calls a Levenberg-Marquardt method.
+
+ ## Game Design
+
+ By leveraging the above characterization of GNEs, we consider the **multiparametric Generalized Nash Equilibrium Problem** (mpGNEP) with $N$ agents, in which each agent $i$ solves:
+
+ $$
+ \begin{aligned}
+ \min_{x_i} \quad & f_i(x,p)\\
+ \textrm{s.t.} \quad & g(x,p) \leq 0\\
+ & A_{\mathrm{eq}} x = b_{\mathrm{eq}} + S_{\mathrm{eq}} p\\
+ & h(x,p) = 0\\
+ & \ell \leq x \leq u
+ \end{aligned}
+ $$
+
+ where $p\in\mathbb{R}^{n_p}$ is a vector of parameters defining the game. Our goal is to design the game-parameter vector $p$ to achieve a desired GNE, according to the following nested optimization problem:
+
+ $$
+ \begin{aligned}
+ \min_{x^\star,p}\quad & J(x^\star,p) \\
+ \text{s.t.} \quad & x_i^\star\in\arg\min_{x_i \in \mathbb{R}^{n_i}}\quad && f_i(x,p)\\
+ &\text{s.t. } \quad && g(x,p) \leq 0\\
+ &&&A_{\mathrm{eq}}x = b_{\mathrm{eq}}+S_{\mathrm{eq}}p\\
+ &&&h(x,p) = 0\\
+ &&&\ell \leq x\leq u\\
+ &&&x_{-i} = x_{-i}^\star,\qquad i = 1, \ldots, N
+ \end{aligned}
+ $$
+
+ where $J$ is the objective function used by the designer to shape the resulting GNE. For example,
+ given an observed agents' equilibrium $x_{\mathrm{des}}$, we can solve the inverse game-theoretic problem
+ of finding a vector $p$ (if one exists) such that $x^\star\approx x_{\mathrm{des}}$, by setting
+
+ $$J(x^\star,p)=\|x^\star-x_{\mathrm{des}}\|_2^2.$$
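+
+ In code, such a design objective is just another JAX function of $x$ and $p$ (a sketch; `x_des` is an assumed target vector):
+
+ ```python
+ import jax.numpy as jnp
+
+ x_des = jnp.array([1.0, 0.0, 0.0, 0.5])  # desired equilibrium (assumption)
+
+ def J(x, p):
+     # inverse-game objective: drive the GNE towards x_des
+     return jnp.sum((x - x_des)**2)
+ ```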
+
+ We solve the game-design problem as
+
+ $$
+ \begin{aligned}
+ \min_{z,p}\quad & J(x,p) + \frac{\rho}{2}\|R(z,p)\|_2^2\\
+ \text{s.t. }\quad & \ell_p\leq p\leq u_p
+ \end{aligned}
+ $$
+
+ via `L-BFGS-B`, where $R(z,p)$ is the parametric version of the KKT residual defined above and $\ell_p$, $u_p$ define the range of admissible $p$, with $\ell_{pj}\in\mathbb{R}\cup \{-\infty\}$, $u_{pj}\in\mathbb{R}\cup \{+\infty\}$, $j=1,\ldots,n_p$.
+
+ Smooth and nonsmooth regularization terms $\alpha_1\|x\|_1 + \alpha_2\|x\|_2^2$ can be explicitly added to $J(x,p)$.
+
+ ### Example
+ To solve a **game-design** problem with objective $J$, use the following structure:
+
+ ```python
+ from nashopt import ParametricGNEP
+
+ pgnep = ParametricGNEP(sizes, npar=2, f=f, g=g, ng=1, lb=lb, ub=ub, Aeq=Aeq, beq=beq, h=h, nh=nh, Seq=Seq)
+
+ sol = pgnep.solve(J, pmin, pmax)
+ ```
+
+ where now the functions listed in `f`, `g`, `h`, and `J` take $x$ and $p$ as input arguments,
+ and `pmin`, `pmax` define the admissible range of the parameter vector $p$ (infinite bounds are allowed).
+
+ Regularization terms
+
+ $$
+ \alpha_1\|x\|_1 + \alpha_2\|x\|_2^2
+ $$
+
+ with $\alpha_1,\alpha_2\geq 0$, can be added to the cost function $J$ as follows:
+
+ ```python
+ sol = pgnep.solve(J, pmin, pmax, alpha1=alpha1, alpha2=alpha2)
+ ```
+
+ You can specify two further flags: `gne_warm_start`, to warm-start the optimization by first computing a GNE, and `refine_gne`, to attempt to recover a GNE after solving the problem by refining the solution $x$ for the optimal parameter $p$ found.
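+
+ For example, assuming both flags are passed to `solve`:
+
+ ```python
+ sol = pgnep.solve(J, pmin, pmax, gne_warm_start=True, refine_gne=True)
+ ```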
+
+ ## Linear-Quadratic Games
+ When the agents' cost functions are quadratic and convex with respect to $x_i$ and all the constraints are linear, i.e.,
+
+ $$
+ \begin{array}{rrl}
+ \min_{p, x^\star} \quad & J(x^\star,p)\\
+ \text{s.t. } & x^\star_i\in \arg\min_{x_i} &f_i(x,p)=\frac{1}{2} x^\top Q^i x + (c^i + F^i p)^\top x \\
+ & \text{s.t. } & A x \leq b + S p\\
+ &&A_{\mathrm{eq}} x = b_{\mathrm{eq}} + S_{\mathrm{eq}} p \\
+ &&\ell_i \leq x_i \leq u_i\\
+ &&x_{-i} = x^\star_{-i}\\
+ && i=1,\dots,N
+ \end{array}
+ $$
+
+ the equilibrium conditions can be expressed as a mixed-integer linear program (MILP) using a "big-M" approach. `nashopt` supports both the open-source solver `HiGHS` and `Gurobi` to solve the MILP.
+
+ Example:
+
+ ```python
+ from nashopt import GNEP_LQ
+
+ gnep = GNEP_LQ(sizes, Q, c, F, lb=lb, ub=ub, pmin=pmin,
+                pmax=pmax, A=A, b=b, S=S, M=1e4, variational=variational, solver='highs')
+ sol = gnep.solve()
+ x = sol.x
+ ```
+
+ We can also extract multiple solutions, if any exist, that correspond to different combinations of active constraints at optimality. For example, to get a list of the first 10 solutions:
+
+ ```python
+ sol = gnep.solve(max_solutions=10)
+ ```
+
+ In addition, a game objective $J$ can be given as the (sum of) convex piecewise-affine function(s)
+
+ $$
+ J(x,p) = \sum_{j=1}^{n_J}\max_{k=1,\dots,n_j} \left( D^{\mathrm{PWA}}_{jk} x + E^{\mathrm{PWA}}_{jk} p + h^{\mathrm{PWA}}_{jk} \right)
+ $$
+
+ ```python
+ gnep_lq = GNEP_LQ(sizes, ..., D_pwa=D_pwa, E_pwa=E_pwa, h_pwa=h_pwa, ...)
+ ```
+
+ in which case the optimal parameters $p$ are also determined by MILP, or as the convex quadratic function
+
+ $$
+ J(x,p) = \frac{1}{2} [x^\top\ p^\top] Q_J \begin{bmatrix}x \\ p\end{bmatrix} + c_J^\top \begin{bmatrix}x \\ p\end{bmatrix}
+ $$
+
+ ```python
+ gnep_lq = GNEP_LQ(sizes, ..., Q_J=Q_J, c_J=c_J, ...)
+ ```
+
+ or as the sum of both; in this case, the optimal parameters $p$ are determined by MIQP (only Gurobi supported).
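+
+ For instance, the design objective $J(x,p)=\|x-x_{\mathrm{des}}\|_\infty$ can be encoded as a single max-of-affine term with $2n$ pieces (a sketch; the exact array layout expected by `GNEP_LQ` is an assumption here):
+
+ ```python
+ import numpy as np
+
+ n, npar = 4, 2
+ x_des = np.array([1.0, 0.0, 0.0, 0.5])      # assumed target equilibrium
+
+ # max_k (D_k x + E_k p + h_k) = ||x - x_des||_inf
+ D_pwa = np.vstack([np.eye(n), -np.eye(n)])  # 2n x n
+ E_pwa = np.zeros((2*n, npar))               # J does not depend on p here
+ h_pwa = np.concatenate([-x_des, x_des])     # offsets
+ ```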
+
+ ## Game-Theoretic Control
+ We consider non-cooperative multi-agent control problems where each agent only controls a subset of the input vector $u$ of a discrete-time linear dynamical system
+
+ $$
+ \begin{aligned}
+ x(t+1) &= A x(t) + B u(t)\\
+ y(t) &= C x(t)
+ \end{aligned}
+ $$
+
+ where $u(t)$ stacks the agents' decision vectors $u_1(t),\ldots,u_N(t)$.
+
+ ### Game-Theoretic LQR
+ For solving non-cooperative linear quadratic regulation (LQR) games, you can use the `NashLQR` class:
+
+ ```python
+ from nashopt import NashLQR
+
+ nash_lqr = NashLQR(sizes, A, B, Q, R, dare_iters=dare_iters)
+ sol = nash_lqr.solve(verbose=2)
+ K_Nash = sol.K_Nash
+ ```
+
+ where `sizes` contains the input sizes $[n_1,\ldots,n_N]$, $Q=[Q_1,\ldots,Q_N]$ are the full-state weight matrices, and $R=[R_1,\ldots,R_N]$ the input weight matrices, with $R_i$ used by agent $i$ to weight $u_i$. The number `dare_iters` is the number of fixed-point iterations used to find an approximate solution of the discrete algebraic Riccati equation of each agent.
+
+ You can retrieve extra information after solving the Nash equilibrium problem, such as the KKT residual `sol.residual`, useful to verify whether an equilibrium was found, the centralized LQR gain `sol.K_centralized` (for comparison), and other statistics in `sol.stats`.
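+
+ A minimal usage sketch on a toy two-agent system (the system matrices, weights, and `dare_iters` value below are illustrative, not from the package):
+
+ ```python
+ import numpy as np
+ from nashopt import NashLQR
+
+ # double integrator; each agent commands one scalar input
+ A = np.array([[1.0, 0.1], [0.0, 1.0]])
+ B = np.array([[0.0, 0.0], [0.1, 0.2]])
+ sizes = [1, 1]                       # [n1, n2]
+ Q = [np.eye(2), 2.0*np.eye(2)]       # per-agent state weights
+ R = [np.eye(1), np.eye(1)]           # per-agent input weights
+
+ nash_lqr = NashLQR(sizes, A, B, Q, R, dare_iters=100)
+ sol = nash_lqr.solve()
+ print(sol.K_Nash)                    # Nash feedback gain
+ print(np.linalg.norm(sol.residual))  # ~0 if an equilibrium was found
+ ```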
+
+
+ ### Game-Theoretic Model Predictive Control
+ We now want to make the output vector $y(t)$ of the system track a given setpoint $r(t)$.
+ Each agent optimizes a sequence of input increments $\{\Delta u_{i,k}\}_{k=0}^{T-1}$ over a prediction horizon of $T$ steps, where $\Delta u_k=u_k-u_{k-1}$, by solving:
+
+ $$
+ \Delta u_i,\epsilon_i \in\arg\min \sum_{k=0}^{T-1}
+ \left( (y_{k+1}-r(t))^\top Q_i (y_{k+1}-r(t))
+ + \Delta u_{i,k}^\top Q_{\Delta u,i}\Delta u_{i,k}\right)
+ + q_{\epsilon,i}^\top \epsilon_i
+ $$
+
+ $$
+ \begin{array}{rll}
+ \text{s.t. } & x_{k+1} = A x_k + B u_k & y_{k+1} = C x_{k+1}\\
+ & u_{i,k} = u_{i,k-1} + \Delta u_{i,k} & u_{-1} = u(t-1)\\
+ &\Delta u_{\mathrm{min}} \leq \Delta u_k \leq \Delta u_{\mathrm{max}}
+ & u_{\mathrm{min}} \leq u_k \leq u_{\mathrm{max}}\\
+ & y_{\mathrm{min}} - \sum_{i=1}^N \epsilon_i \leq y_{k+1} \leq y_{\mathrm{max}} + \sum_{i=1}^N \epsilon_i &
+ \epsilon_i \geq 0\\
+ & i=1,\ldots,N,\ k=0,\ldots,T-1
+ \end{array}
+ $$
+
+ where $Q_i\succeq 0$, $Q_{\Delta u,i}\succeq 0$, and $\epsilon_i\geq 0$ is a slack variable
+ used to soften the shared output constraints (with linear penalty $q_{\epsilon,i}\geq 0$). Each agent's MPC problem can be simplified by imposing the constraints only over a shorter constraint horizon of $T_c<T$ steps.
+
+ You can use the `NashLinearMPC` class to define the game-theoretic MPC problem:
+
+ ```python
+ from nashopt import NashLinearMPC
+
+ nash_mpc = NashLinearMPC(sizes, A, B, C, Qy, Qdu, T, ymin=ymin, ymax=ymax,
+                          umin=umin, umax=umax, dumin=dumin, dumax=dumax,
+                          Qeps=Qeps, Tc=Tc)
+ ```
+
+ and then evaluate the GNE control move `u` = $u(t)$ at each step $t$:
+
+ ```python
+ sol = nash_mpc.solve(x, u1, r)
+ u = sol.u
+ ```
+
+ where `r` = $r(t)$ is the current output reference signal, `x` = $x(t)$ the current state, and `u1` = $u(t-1)$ the previous input.
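+
+ In a receding-horizon simulation loop, this amounts to the following sketch (`x0`, `Tsim`, and the reference `r` are assumed given):
+
+ ```python
+ x = x0                              # initial state (assumption)
+ u1 = np.zeros(B.shape[1])           # previous input, u(-1) = 0
+ for t in range(Tsim):
+     sol = nash_mpc.solve(x, u1, r)  # GNE control move at time t
+     u = sol.u
+     x = A @ x + B @ u               # simulate the plant one step
+     u1 = u
+ ```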
+
+ To compute a *variational* GNE solution, use
+
+ ```python
+ sol = nash_mpc.solve(x, u1, r, variational=True)
+ u = sol.u
+ ```
+
+ For comparison, you can instead compute the *centralized* MPC move, where the cost function is the sum of all agents' costs, via standard quadratic programming:
+
+ ```python
+ sol = nash_mpc.solve(x, u1, r, centralized=True)
+ u = sol.u
+ ```
+
+ To specify the MILP solver used to compute the game-theoretic MPC law, use one of the following:
+
+ ```python
+ sol = nash_mpc.solve(x, u1, r, ..., solver='highs')
+ ```
+
+ or
+
+ ```python
+ sol = nash_mpc.solve(x, u1, r, ..., solver='gurobi')
+ ```
+
+
+ ## References
+
+ > [1] Andreas Fischer. *A special Newton-type optimization method.* **Optimization**, 24(3–4):269–284, 1992.
+
+ ## Citation
+
+ ```
+ @misc{nashopt,
+     author={A. Bemporad},
+     title={{NashOpt}: A {Python} Library for Computing Generalized {Nash} Equilibria and Game Design},
+     howpublished = {\url{https://github.com/bemporad/nashopt}},
+     year=2025
+ }
+ ```
+
+ ---
+ ## Related packages
+
+ <a href="https://github.com/bemporad/nash-mpqp">**nash-mpqp**</a>: a solver for linear-quadratic multi-parametric generalized Nash equilibrium (GNE) problems in *explicit* form.
+
+ <a href="https://github.com/bemporad/gnep-learn">**gnep-learn**</a>: a Python package for solving generalized Nash equilibrium problems by *active learning* of best-response models.
+
+ ---
+ ## License
+
+ Apache 2.0
+
+ (C) 2025 A. Bemporad
+
+ ## Acknowledgement
+ This work was funded by the European Union (ERC Advanced Research Grant COMPACT, No. 101141351). Views and opinions expressed are however those of the authors only and do not necessarily reflect those of the European Union or the European Research Council. Neither the European Union nor the granting authority can be held responsible for them.
+
+ <p align="center">
+ <img src="erc-logo.png" alt="ERC" width="400"/>
+ </p>
@@ -0,0 +1,6 @@
+ nashopt.py,sha256=-yiXFuMCJishfgpZS3CBCVwwhnEdobpmdNOr-xvt8XI,106990
+ nashopt-1.0.0.dist-info/licenses/LICENSE,sha256=xx0jnfkXJvxRnG63LTGOxlggYnIysveWIZ6H3PNdCrQ,11357
+ nashopt-1.0.0.dist-info/METADATA,sha256=5r73xvkjYGjK7GDhcWLMYTRv62Bs8C3-xoC11fe0q98,18169
+ nashopt-1.0.0.dist-info/WHEEL,sha256=_zCd3N1l69ArxyTb8rzEoP9TpbYXkqRFSNOD5OuxnTs,91
+ nashopt-1.0.0.dist-info/top_level.txt,sha256=NuR1yd9NPYwmiknuGNUFNSw8tKTzPpaspAD7VtTvaFk,8
+ nashopt-1.0.0.dist-info/RECORD,,
@@ -0,0 +1,5 @@
+ Wheel-Version: 1.0
+ Generator: setuptools (80.9.0)
+ Root-Is-Purelib: true
+ Tag: py3-none-any
+