MadNLP.AbstractCondensedKKTSystem - Type
AbstractCondensedKKTSystem{T, MT} <: AbstractKKTSystem{T, MT}

The condensed KKT system further simplifies the AbstractReducedKKTSystem by removing the rows associated with the slack variables $s$ and the inequality constraints.

At the primal-dual iterate $(x, y)$, the matrix reads

[Wₓₓ + Σₓ + Aᵢ' Σₛ Aᵢ    Aₑ']  [Δx]
[         Aₑ              0 ]  [Δy]

with

  • $Wₓₓ$: Hessian of the Lagrangian.
  • $Aₑ$: Jacobian of the equality constraints
  • $Aᵢ$: Jacobian of the inequality constraints
  • $Σₓ = X⁻¹ V$
  • $Σₛ = S⁻¹ W$
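
As a sketch of how this condensation is obtained (the residuals $r₁$, $r₂$, $r₄$ are generic right-hand sides introduced here for illustration only): in the reduced system documented below, the slack row gives $Δz = Σₛ Δs + r₂$ and the inequality row gives $Δs = Aᵢ Δx + r₄$. Substituting both into the first block row $(Wₓₓ + Σₓ) Δx + Aₑ' Δy + Aᵢ' Δz = -r₁$ yields

$(Wₓₓ + Σₓ + Aᵢ' Σₛ Aᵢ) Δx + Aₑ' Δy = -r₁ - Aᵢ' (Σₛ r₄ + r₂)$

which is the condensed form above; $Δs$ and $Δz$ are then recovered from the two eliminated rows.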
MadNLP.AbstractReducedKKTSystem - Type
AbstractReducedKKTSystem{T, MT} <: AbstractKKTSystem{T, MT}

The reduced KKT system is a simplification of the original augmented KKT system. Compared to AbstractUnreducedKKTSystem, AbstractReducedKKTSystem removes the last two rows, associated with the bounds' duals $(ν, w)$.

At a primal-dual iterate $(x, s, y, z)$, the matrix reads

[Wₓₓ + Σₓ   0    Aₑ'   Aᵢ']  [Δx]
[ 0         Σₛ    0    -I ]  [Δs]
[Aₑ         0     0     0 ]  [Δy]
[Aᵢ        -I     0     0 ]  [Δz]

with

  • $Wₓₓ$: Hessian of the Lagrangian.
  • $Aₑ$: Jacobian of the equality constraints
  • $Aᵢ$: Jacobian of the inequality constraints
  • $Σₓ = X⁻¹ V$
  • $Σₛ = S⁻¹ W$
MadNLP.AbstractUnreducedKKTSystem - Type
AbstractUnreducedKKTSystem{T, MT} <: AbstractKKTSystem{T, MT}

Augmented KKT system associated with the linearization of the KKT conditions at the current primal-dual iterate $(x, s, y, z, ν, w)$.

The associated matrix is

[Wₓₓ  0  Aₑ'  Aᵢ'  -I   0 ]  [Δx]
[ 0   0   0   -I    0  -I ]  [Δs]
[Aₑ   0   0    0    0   0 ]  [Δy]
[Aᵢ  -I   0    0    0   0 ]  [Δz]
[V    0   0    0    X   0 ]  [Δν]
[0    W   0    0    0   S ]  [Δw]

with

  • $Wₓₓ$: Hessian of the Lagrangian.
  • $Aₑ$: Jacobian of the equality constraints
  • $Aᵢ$: Jacobian of the inequality constraints
  • $X = diag(x)$
  • $S = diag(s)$
  • $V = diag(ν)$
  • $W = diag(w)$
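
As a sketch of how the reduced system is obtained from this one (the residuals $r₅$ and $r₆$ are generic right-hand sides introduced here for illustration only): the last two rows give $Δν = -X⁻¹ (V Δx + r₅)$ and $Δw = -S⁻¹ (W Δs + r₆)$. Substituting $Δν$ into the first row replaces $Wₓₓ$ with $Wₓₓ + X⁻¹ V = Wₓₓ + Σₓ$, and substituting $Δw$ into the second row produces the $Σₛ = S⁻¹ W$ block, recovering the matrix documented in AbstractReducedKKTSystem above.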
MadNLP.DenseKKTSystem - Type
DenseKKTSystem{T, VT, MT} <: AbstractReducedKKTSystem{T, MT}

Implement AbstractReducedKKTSystem with dense matrices.

A dense linear solver is required to factorize this system (otherwise an error is returned).
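
As an illustration, a minimal (hedged) way to select this KKT system when calling the solver is sketched below. The option values MadNLP.DENSE_KKT_SYSTEM and MadNLP.LapackCPUSolver, as well as the use of ADNLPModels to build the test problem, are assumptions of this sketch and may differ between MadNLP releases.

using MadNLP, ADNLPModels

# Small dense test problem (Rosenbrock), built with ADNLPModels for convenience.
nlp = ADNLPModel(x -> 100.0 * (x[2] - x[1]^2)^2 + (1.0 - x[1])^2, [-1.2, 1.0])

# Select the dense KKT system together with a dense (Lapack-based) linear
# solver; pairing DenseKKTSystem with a sparse factorization raises an error.
results = madnlp(nlp; kkt_system=MadNLP.DENSE_KKT_SYSTEM, linear_solver=MadNLP.LapackCPUSolver)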

MadNLP.build_kkt! - Function
build_kkt!(kkt::AbstractKKTSystem)

Assemble the KKT matrix before calling the factorization routine.

MadNLP.compress_hessian! - Function
compress_hessian!(kkt::AbstractKKTSystem)

Compress the Hessian inside kkt's internals. This function is called every time a new Hessian is evaluated.

The default implementation does nothing.

MadNLP.compress_jacobian! - Function
compress_jacobian!(kkt::AbstractKKTSystem)

Compress the Jacobian inside kkt's internals. This function is called every time a new Jacobian is evaluated.

By default, the function updates the coefficients of the Jacobian associated with the slack variables.

MadNLP.get_kkt - Function
get_kkt(kkt::AbstractKKTSystem)::AbstractMatrix

Return a pointer to the KKT matrix implemented in kkt. The pointer is passed afterward to a linear solver.
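
To situate the functions above, the sketch below shows how a solver typically chains them at every iteration once new derivatives are available. The call to MadNLP.factorize! on a generic linear_solver object, and the assumption that this linear solver was built around the matrix returned by get_kkt, are simplifications made for this sketch; the exact sequence inside InteriorPointSolver may differ.

using MadNLP

# Hedged sketch of the per-iteration assembly and factorization workflow.
function assemble_kkt_example!(kkt, linear_solver)
    MadNLP.compress_hessian!(kkt)     # post-process the freshly evaluated Hessian
    MadNLP.compress_jacobian!(kkt)    # post-process the freshly evaluated Jacobian
    MadNLP.build_kkt!(kkt)            # assemble the KKT matrix in kkt's internals
    K = MadNLP.get_kkt(kkt)           # matrix handed over to the linear solver
    MadNLP.factorize!(linear_solver)  # refactorize with the updated entries of K
    return K
end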

MadNLP.initialize! - Function
initialize!(kkt::AbstractKKTSystem)

Initialize the KKT system with default values. This function is called when the InteriorPointSolver storing the current KKT system kkt is initialized.

MadNLP.is_inertia_correct - Function
is_inertia_correct(kkt::AbstractKKTSystem, n::Int, m::Int, p::Int)

Check whether the inertia $(n, m, p)$ returned by the linear solver matches the inertia expected for the KKT system implemented in kkt.
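
For instance, for a reduced KKT system the factorization is expected to report as many positive eigenvalues as primal entries (x and s), as many negative eigenvalues as constraint multipliers (y and z), and no zero eigenvalue. The standalone check below is a hedged illustration of this criterion, not MadNLP's exact implementation, and the ordering of the inertia triplet is an assumption.

# Hedged illustration: `npr` counts the primal entries, `ndu` the constraint
# multipliers; the factorization must report npr positive and ndu negative
# eigenvalues, and no zero eigenvalue.
function inertia_check_example(num_pos::Int, num_zero::Int, num_neg::Int, npr::Int, ndu::Int)
    return num_zero == 0 && num_pos == npr && num_neg == ndu
end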

MadNLP.jtprod! - Function
jtprod!(y::AbstractVector, kkt::AbstractKKTSystem, x::AbstractVector)

Multiply the vector x by the transpose of the Jacobian and store the result in y, such that $y = A' x$ (with $A$ the current Jacobian).
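
The sketch below illustrates the semantics on a plain dense Jacobian; concrete KKT systems store the Jacobian internally, so this is only an illustration, not an implementation used by MadNLP.

using LinearAlgebra

# Hedged illustration: y = A' * x, with A an m x n Jacobian, x of size m
# (constraint space) and y of size n (variable space).
jtprod_example!(y::AbstractVector, A::AbstractMatrix, x::AbstractVector) = mul!(y, A', x)

A = [1.0 0.0 2.0; 0.0 3.0 1.0]   # Jacobian of 2 constraints w.r.t. 3 variables
x = [1.0, 2.0]
y = zeros(3)
jtprod_example!(y, A, x)          # y == A' * x == [1.0, 6.0, 4.0]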

MadNLP.regularize_diagonal! - Function
regularize_diagonal!(kkt::AbstractKKTSystem, primal_values::AbstractVector, dual_values::AbstractVector)

Regularize the values in the diagonal of the KKT system. Called internally inside the interior-point routine.
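
The primal and dual values correspond to the inertia-correction terms $δ_w$ and $δ_c$ of the interior-point method. The sketch below illustrates the effect on a dense reduced KKT matrix of size $(n + m) × (n + m)$; the dense layout and the sign convention for the dual block are assumptions of this illustration, not MadNLP's implementation.

# Hedged illustration: add the primal regularization to the first n diagonal
# entries (primal block) and subtract the dual regularization from the last m
# diagonal entries (dual block).
function regularize_diagonal_example!(K::AbstractMatrix, n::Int, m::Int,
                                      primal_values::AbstractVector,
                                      dual_values::AbstractVector)
    for i in 1:n
        K[i, i] += primal_values[i]
    end
    for j in 1:m
        K[n + j, n + j] -= dual_values[j]
    end
    return K
end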

MadNLP.scale_constraints! - Method
scale_constraints!(
    nlp::AbstractNLPModel,
    con_scale::AbstractVector,
    jac::AbstractMatrix;
    max_gradient=1e-8,
)

Compute the scaling of the constraints associated with the nonlinear model nlp. By default, Ipopt's scaling is applied. The user can write their own function to scale any custom AbstractNLPModel appropriately.

Notes

This function assumes that the Jacobian jac has already been evaluated when it is called.
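
The sketch below illustrates the Ipopt-style rule on a dense Jacobian: every constraint whose gradient exceeds max_gradient in infinity norm is scaled down so that it no longer does. MadNLP's exact formula, in-place signature, and handling of sparse Jacobians may differ.

# Hedged illustration: row i of the Jacobian receives the scaling factor
# min(1, max_gradient / max_j |jac[i, j]|).
function constraint_scaling_example(jac::AbstractMatrix, max_gradient)
    con_scale = ones(size(jac, 1))
    for i in axes(jac, 1)
        row_max = maximum(abs, @view jac[i, :])
        if row_max > max_gradient
            con_scale[i] = max_gradient / row_max
        end
    end
    return con_scale
end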

MadNLP.scale_objective - Method
scale_objective(
    nlp::AbstractNLPModel,
    grad::AbstractVector;
    max_gradient=1e-8,
)

Compute the scaling of the objective associated with the nonlinear model nlp. By default, Ipopt's scaling is applied. The user can write their own function to scale the objective of any custom AbstractNLPModel appropriately.

Notes

This function assumes that the gradient grad has already been evaluated when it is called.
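
The corresponding Ipopt-style rule is sketched below (again only as an illustration; MadNLP's exact formula may differ): the objective is scaled so that the infinity norm of its gradient at the starting point does not exceed max_gradient.

# Hedged illustration: scale the objective by min(1, max_gradient / ||grad||_inf).
function objective_scaling_example(grad::AbstractVector, max_gradient)
    g = maximum(abs, grad)
    return g > max_gradient ? max_gradient / g : one(g)
end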

MadNLP.set_jacobian_scaling! - Function
set_jacobian_scaling!(kkt::AbstractKKTSystem, scaling::AbstractVector)

Set the scaling of the Jacobian using the vector scaling, which stores the scaling factors for all the constraints in the problem.
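
As a hedged sketch of how this function ties in with scale_constraints! above (assuming nlp, jac, kkt and the buffer con_scale are already set up and the Jacobian already evaluated; the actual wiring inside MadNLP differs):

using MadNLP

# Hedged sketch: compute the constraint scaling from the current Jacobian and
# propagate it to the KKT system.
function apply_constraint_scaling_example!(kkt, nlp, jac, con_scale)
    MadNLP.scale_constraints!(nlp, con_scale, jac)
    MadNLP.set_jacobian_scaling!(kkt, con_scale)
    return con_scale
end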