Linear solvers

We suppose that the KKT system has been assembled previously into a given AbstractKKTSystem. It then remains to compute the Newton step by solving the KKT system for a given right-hand side (stored as an AbstractKKTVector). That is exactly the role of the linear solver.

If we do not assume any structure, the KKT system takes the generic form

\[K x = b\]

with $K$ the KKT matrix and $b$ the current right-hand side. MadNLP provides a suite of specialized linear solvers to solve this linear system.
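For a small dense matrix, such a system can be solved directly with Julia's generic backslash operator. The sketch below is an illustration only, with a made-up matrix; MadNLP's specialized solvers instead exploit the structure and sparsity of the KKT matrix.

```julia
using LinearAlgebra

# Illustration: generic dense solve of K x = b (hypothetical small system).
K = [2.0 1.0; 1.0 3.0]
b = [1.0, 0.0]
x = K \ b
@assert norm(K * x - b) ≤ 1e-12  # x solves the linear system
```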

Inertia detection

If the matrix $K$ has negative eigenvalues, we have no guarantee that the solution of the KKT system is a descent direction with respect to the original nonlinear problem. This is why most of the linear solvers compute the inertia of the linear system when factorizing the matrix $K$. The inertia counts the number of positive, negative and zero eigenvalues of the matrix. If the inertia does not meet a given criterion, the matrix $K$ is regularized by adding a multiple of the identity to it: $K_r = K + \alpha I$.
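The regularization loop can be sketched as follows. This is not MadNLP's actual implementation (which reads the inertia directly from the factorization, and uses a more refined update rule for $\alpha$); it is a minimal illustration using dense eigenvalues, with the function name `regularize` and the parameters `α0`, `ρ` chosen here for the example.

```julia
using LinearAlgebra

# Sketch of inertia correction: increase α until K + αI has the requested
# number of positive eigenvalues and no zero eigenvalue.
function regularize(K, npos; α0=1e-8, ρ=10.0, maxiter=40)
    α = 0.0
    for _ in 1:maxiter
        λ = eigvals(Symmetric(K + α * I))
        if count(>(0.0), λ) == npos && count(iszero, λ) == 0
            return K + α * I, α
        end
        α = (α == 0.0) ? α0 : ρ * α  # geometric increase of the shift
    end
    error("inertia correction failed")
end
```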

Note

We recall that the inertia of a matrix $K$ is given as a triplet $(n,m,p)$, with $n$ the number of positive eigenvalues, $m$ the number of negative eigenvalues and $p$ the number of zero eigenvalues.
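For a small symmetric matrix, the triplet can be computed from the eigenvalues directly. A minimal sketch (the helper name `inertia` and the tolerance `tol` are choices made here, not MadNLP API):

```julia
using LinearAlgebra

# Inertia triplet (n, m, p) from the eigenvalues of a symmetric matrix.
function inertia(K; tol=1e-8)
    λ = eigvals(Symmetric(K))
    n = count(>(tol), λ)    # positive eigenvalues
    m = count(<(-tol), λ)   # negative eigenvalues
    return (n, m, length(λ) - n - m)
end

inertia([2.0 0.0; 0.0 -1.0])  # (1, 1, 0)
```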

Factorization algorithm

In nonlinear programming, it is common to employ a Bunch-Kaufman factorization (a symmetric indefinite $LDL^T$ factorization) to factorize the matrix $K$, as this algorithm returns the inertia of the matrix directly as a by-product of the factorization.
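Indeed, by Sylvester's law of inertia, the inertia of $K$ can be read from the block-diagonal factor $D$. A small sketch using Julia's standard library (with a made-up indefinite matrix):

```julia
using LinearAlgebra

# Bunch-Kaufman factorization of a symmetric indefinite matrix; the factor
# D is block diagonal with 1×1 and 2×2 blocks, and shares the inertia of K.
K = Symmetric([2.0 1.0; 1.0 -3.0])
F = bunchkaufman(K)
λ = eigvals(Symmetric(Matrix(F.D)))
(count(>(0.0), λ), count(<(0.0), λ), count(iszero, λ))  # (1, 1, 0)
```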

Note

When MadNLP runs in inertia-free mode, the algorithm does not need to compute the inertia when factorizing the matrix $K$. In that case, MadNLP can use a classical LU or QR factorization to solve the linear system $Kx = b$.
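In that setting a plain LU factorization suffices to solve the system. A sketch with Julia's standard library (illustration only, not MadNLP's internal code path):

```julia
using LinearAlgebra

# Inertia-free solve: factorize once with LU, then backsolve.
K = [4.0 1.0; 1.0 3.0]
b = [1.0, 2.0]
F = lu(K)
x = F \ b
@assert norm(K * x - b) ≤ 1e-12
```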

Solving a KKT system with MadNLP

We suppose an AbstractKKTSystem kkt is available, properly assembled following the procedure presented previously. We can query the assembled matrix $K$ as

K = MadNLP.get_kkt(kkt)
6×6 Matrix{Float64}:
 2.0    0.0   0.0   0.0  0.0  0.0
 0.0  200.0   0.0   0.0  0.0  0.0
 0.0    0.0   0.0   0.0  0.0  0.0
 0.0    0.0   0.0   0.0  0.0  0.0
 0.0    0.0  -1.0   0.0  0.0  0.0
 1.0    0.0   0.0  -1.0  0.0  0.0

Then, if we want to pass the KKT matrix K to Lapack, this translates to

linear_solver = LapackCPUSolver(K)
LapackCPUSolver{Float64}([2.0 0.0 … 0.0 0.0; 0.0 200.0 … 0.0 0.0; … ; 0.0 0.0 … 0.0 0.0; 1.0 0.0 … 0.0 0.0], [2.0 0.0 … 0.0 0.0; 0.0 200.0 … 0.0 0.0; … ; 0.0 0.0 … 0.0 0.0; 1.0 0.0 … 0.0 0.0], [6.93876647116395e-310], -1, Base.RefValue{Int64}(0), Dict{Symbol, Any}(), MadNLP.LapackOptions(MadNLP.BUNCHKAUFMAN), MadNLP.MadNLPLogger(MadNLP.INFO, MadNLP.INFO, nothing))

The instance linear_solver does not copy the matrix $K$; instead, it keeps a reference to it.

linear_solver.dense === K
true

That way, every time we re-assemble the matrix $K$ in kkt, the values are directly updated inside linear_solver.
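This aliasing can be checked directly: since linear_solver.dense is the same array as K, an in-place update of K is immediately visible to the solver (a small sketch reusing the objects above):

```julia
# No copy is made: mutating K mutates the solver's view of the matrix.
K[1, 1] = 4.0
linear_solver.dense[1, 1] == 4.0  # true
```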

To compute the factorization inside linear_solver, one simply has to call:

MadNLP.factorize!(linear_solver)
LapackCPUSolver{Float64}([2.0 0.0 … 0.0 0.0; 0.0 200.0 … 0.0 0.0; … ; 0.0 0.0 … 0.0 0.0; 1.0 0.0 … 0.0 0.0], [2.0 0.0 … 0.0 0.0; 0.0 200.0 … 0.0 0.0; … ; 0.0 0.0 … 0.0 0.0; 0.5 0.0 … -1.0 -0.5], [384.0, NaN, 6.64764457e-316, 9.339114152046215e-284, 5.53e-322, 5.53e-322, 6.661144e-316, 6.66122703e-316, 6.44733415e-316, 6.6611883e-316  …  1.5753619e-316, 0.0, 2.1219958226e-314, 0.0, 0.0, 0.0, 5.38659695e-316, 7.71494009721244e-310, 5.4078991e-316, 7.16e-322], 384, Base.RefValue{Int64}(0), Dict{Symbol, Any}(:ipiv => [1, 2, -5, -5, -6, -6]), MadNLP.LapackOptions(MadNLP.BUNCHKAUFMAN), MadNLP.MadNLPLogger(MadNLP.INFO, MadNLP.INFO, nothing))

Once the factorization is computed, performing a backsolve for a right-hand side b amounts to

nk = size(kkt, 1)
b = rand(nk)
MadNLP.solve!(linear_solver, b)
6-element Vector{Float64}:
  0.49588143311978145
  0.0028758146732753387
 -0.9574108652845388
 -0.4054950529598293
 -0.11002040010956482
 -0.31541061110596413

The values of b are modified in place to store the solution $x$ of the linear system $Kx = b$.
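Since MadNLP.solve! overwrites its argument, it can be convenient to keep a copy of the right-hand side, for example to check the residual afterwards. A small sketch reusing the objects above:

```julia
using LinearAlgebra: norm

b = rand(size(kkt, 1))
x = copy(b)                       # preserve the original right-hand side
MadNLP.solve!(linear_solver, x)   # x now stores the solution
residual = norm(K * x - b)        # small if the factorization is accurate
```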