MadNLP.AbstractCondensedKKTSystemType
AbstractCondensedKKTSystem{T, VT, MT} <: AbstractKKTSystem{T, VT, MT}

The condensed KKT system further simplifies the AbstractReducedKKTSystem by removing the rows associated with the slack variables $s$ and the inequality constraints.

At the primal-dual iterate $(x, y)$, the matrix reads

[Wₓₓ + Σₓ + Aᵢ' Σₛ Aᵢ    Aₑ']  [Δx]
[         Aₑ              0 ]  [Δy]

with

  • $Wₓₓ$: Hessian of the Lagrangian.
  • $Aₑ$: Jacobian of the equality constraints.
  • $Aᵢ$: Jacobian of the inequality constraints.
  • $Σₓ = X⁻¹ V$.
  • $Σₛ = S⁻¹ W$.
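
The condensed matrix can be derived by eliminating $Δs$ and $Δz$ from the reduced KKT system. A small numerical sketch with made-up data (illustrative only, not the MadNLP API) verifies the elimination:

```julia
using LinearAlgebra

# Made-up problem data (not the MadNLP API).
n, me, mi = 3, 1, 2                      # primal variables, equalities, inequalities
Wxx = Matrix(4.0I, n, n)                 # Hessian of the Lagrangian
Σx  = Diagonal([0.5, 1.0, 2.0])          # X⁻¹ V
Σs  = Diagonal([1.5, 0.8])               # S⁻¹ W
Ae  = [1.0 2.0 0.0]                      # equality Jacobian
Ai  = [0.0 1.0 1.0; 1.0 0.0 2.0]         # inequality Jacobian
Z(p, q) = zeros(p, q)
Id(p) = Matrix(1.0I, p, p)

# Reduced KKT system, unknowns ordered as (Δx, Δs, Δy, Δz).
Kr = [Wxx + Σx   Z(n, mi)    Ae'        Ai';
      Z(mi, n)   Matrix(Σs)  Z(mi, me)  -Id(mi);
      Ae         Z(me, mi)   Z(me, me)  Z(me, mi);
      Ai         -Id(mi)     Z(mi, me)  Z(mi, mi)]
r = collect(1.0:size(Kr, 1))
Δ = Kr \ r

# Condensed system: substitute Δs = Aᵢ Δx - r₄, then Δz = Σₛ Δs - r₂.
Kc = [Wxx + Σx + Ai' * Σs * Ai  Ae';
      Ae                        Z(me, me)]
r1, r2 = r[1:n], r[n+1:n+mi]
r3, r4 = r[n+mi+1:n+mi+me], r[n+mi+me+1:end]
Δc = Kc \ vcat(r1 + Ai' * (Σs * r4 + r2), r3)
# Δc matches the (Δx, Δy) components of the reduced solve.
```

Solving the condensed system with the right-hand side $r₁ + Aᵢ'(Σₛ r₄ + r₂)$ recovers the same $(Δx, Δy)$ as the reduced solve, at the cost of a smaller, denser matrix.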
MadNLP.AbstractKKTSystemType
AbstractKKTSystem{T, VT<:AbstractVector{T}, MT<:AbstractMatrix{T}}

Abstract type for a KKT system.

MadNLP.AbstractKKTVectorType
AbstractKKTVector{T, VT}

Supertype for the right-hand-side vectors $(x, s, y, z, ν, w)$ of a KKT system.

MadNLP.AbstractLinearSolverType
AbstractLinearSolver

Abstract type for linear solvers solving the linear system $Ax = b$.

MadNLP.AbstractReducedKKTSystemType
AbstractReducedKKTSystem{T, VT, MT} <: AbstractKKTSystem{T, VT, MT}

The reduced KKT system is a simplification of the original augmented KKT system. Compared to AbstractUnreducedKKTSystem, AbstractReducedKKTSystem removes the last two rows, associated with the bounds' duals $(ν, w)$.

At a primal-dual iterate $(x, s, y, z)$, the matrix reads

[Wₓₓ + Σₓ   0    Aₑ'   Aᵢ']  [Δx]
[ 0         Σₛ    0    -I ]  [Δs]
[Aₑ         0     0     0 ]  [Δy]
[Aᵢ        -I     0     0 ]  [Δz]

with

  • $Wₓₓ$: Hessian of the Lagrangian.
  • $Aₑ$: Jacobian of the equality constraints.
  • $Aᵢ$: Jacobian of the inequality constraints.
  • $Σₓ = X⁻¹ V$.
  • $Σₛ = S⁻¹ W$.
MadNLP.AbstractUnreducedKKTSystemType
AbstractUnreducedKKTSystem{T, VT, MT} <: AbstractKKTSystem{T, VT, MT}

Augmented KKT system associated with the linearization of the KKT conditions at the current primal-dual iterate $(x, s, y, z, ν, w)$.

The associated matrix is

[Wₓₓ  0  Aₑ'  Aᵢ'  -I   0 ]  [Δx]
[ 0   0   0   -I    0  -I ]  [Δs]
[Aₑ   0   0    0    0   0 ]  [Δy]
[Aᵢ  -I   0    0    0   0 ]  [Δz]
[V    0   0    0    X   0 ]  [Δν]
[0    W   0    0    0   S ]  [Δw]

with

  • $Wₓₓ$: Hessian of the Lagrangian.
  • $Aₑ$: Jacobian of the equality constraints.
  • $Aᵢ$: Jacobian of the inequality constraints.
  • $X = diag(x)$.
  • $S = diag(s)$.
  • $V = diag(ν)$.
  • $W = diag(w)$.
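
The reduced system is obtained by eliminating the bound duals $(Δν, Δw)$ from this augmented matrix, which is where $Σₓ = X⁻¹V$ and $Σₛ = S⁻¹W$ come from. A numerical sketch with made-up data (illustrative only, not the MadNLP API):

```julia
using LinearAlgebra

n, me, mi = 2, 1, 2
Wxx = [3.0 0.5; 0.5 2.0]                               # Hessian of the Lagrangian
Ae  = [1.0 1.0]                                        # equality Jacobian
Ai  = [1.0 0.0; 0.0 1.0]                               # inequality Jacobian
X = Diagonal([0.5, 2.0]);  V  = Diagonal([1.0, 0.5])   # x and its bound duals ν
S = Diagonal([1.0, 0.3]);  Wd = Diagonal([2.0, 1.0])   # slacks s and bound duals w
Z(p, q) = zeros(p, q)
Id(p) = Matrix(1.0I, p, p)

# Augmented (unreduced) system, unknowns ordered as (Δx, Δs, Δy, Δz, Δν, Δw).
Ku = [Wxx        Z(n, mi)    Ae'        Ai'        -Id(n)     Z(n, mi);
      Z(mi, n)   Z(mi, mi)   Z(mi, me)  -Id(mi)    Z(mi, n)   -Id(mi);
      Ae         Z(me, mi)   Z(me, me)  Z(me, mi)  Z(me, n)   Z(me, mi);
      Ai         -Id(mi)     Z(mi, me)  Z(mi, mi)  Z(mi, n)   Z(mi, mi);
      Matrix(V)  Z(n, mi)    Z(n, me)   Z(n, mi)   Matrix(X)  Z(n, mi);
      Z(mi, n)   Matrix(Wd)  Z(mi, me)  Z(mi, mi)  Z(mi, n)   Matrix(S)]
r  = collect(1.0:size(Ku, 1))
Δu = Ku \ r

# Reduced system: substitute Δν = X⁻¹(r₅ - VΔx) and Δw = S⁻¹(r₆ - WΔs).
Σx, Σs = inv(X) * V, inv(S) * Wd
Kr = [Wxx + Σx   Z(n, mi)    Ae'        Ai';
      Z(mi, n)   Matrix(Σs)  Z(mi, me)  -Id(mi);
      Ae         Z(me, mi)   Z(me, me)  Z(me, mi);
      Ai         -Id(mi)     Z(mi, me)  Z(mi, mi)]
r5 = r[n+mi+me+mi+1:n+mi+me+mi+n]
r6 = r[end-mi+1:end]
rr = vcat(r[1:n] + inv(X) * r5, r[n+1:n+mi] + inv(S) * r6, r[n+mi+1:n+mi+me+mi])
Δr = Kr \ rr
# Δr matches the (Δx, Δs, Δy, Δz) components of the augmented solve.
```
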
MadNLP.DenseKKTSystemType
DenseKKTSystem{T, VT, MT} <: AbstractReducedKKTSystem{T, VT, MT}

Implement AbstractReducedKKTSystem with dense matrices.

Requires a dense linear solver for the factorization (otherwise an error is thrown).

MadNLP.build_kkt!Function
build_kkt!(kkt::AbstractKKTSystem)

Assemble the KKT matrix before calling the factorization routine.

MadNLP.compress_hessian!Function
compress_hessian!(kkt::AbstractKKTSystem)

Compress the Hessian inside kkt's internals. This function is called every time a new Hessian is evaluated.

The default implementation does nothing.

MadNLP.compress_jacobian!Function
compress_jacobian!(kkt::AbstractKKTSystem)

Compress the Jacobian inside kkt's internals. This function is called every time a new Jacobian is evaluated.

By default, the function updates the coefficients in the Jacobian associated with the slack variables.

MadNLP.dualFunction
dual(X::AbstractKKTVector)

Return the dual values $(y, z)$ stored in the KKT vector X.

MadNLP.dual_lbFunction
dual_lb(X::AbstractKKTVector)

Return the dual values $ν$ associated with the lower bounds, stored in the KKT vector X.

MadNLP.dual_ubFunction
dual_ub(X::AbstractKKTVector)

Return the dual values $w$ associated with the upper bounds, stored in the KKT vector X.

MadNLP.factorize!Function
factorize!(::AbstractLinearSolver)

Factorize the matrix $A$ and update the factors inside the AbstractLinearSolver instance.

MadNLP.fullFunction
full(X::AbstractKKTVector)

Return all the values stored inside the KKT vector X.

MadNLP.get_kktFunction
get_kkt(kkt::AbstractKKTSystem)::AbstractMatrix

Return a pointer to the KKT matrix implemented in kkt. The pointer is passed afterward to a linear solver.

MadNLP.inertiaFunction
inertia(::AbstractLinearSolver)

Return the inertia (n, m, p) of the linear system as a tuple.

Note

The inertia is defined as a tuple $(n, m, p)$, with

  • $n$: number of positive eigenvalues
  • $m$: number of negative eigenvalues
  • $p$: number of zero eigenvalues
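
For illustration, the inertia can be read off the eigenvalues of a symmetric matrix. The helper below is a sketch, not part of MadNLP's API; linear solvers usually obtain the inertia much more cheaply from an LDLᵀ-type factorization.

```julia
using LinearAlgebra

# Sketch (not the MadNLP API): recover the inertia (n, m, p) of a
# symmetric matrix from its eigenvalues.
function inertia_of(A::Symmetric; tol=1e-10)
    λ = eigvals(A)
    n = count(>(tol), λ)            # positive eigenvalues
    m = count(<(-tol), λ)           # negative eigenvalues
    p = length(λ) - n - m           # (numerically) zero eigenvalues
    return (n, m, p)
end

K = Symmetric([2.0  0.0 1.0;
               0.0 -3.0 0.0;
               1.0  0.0 0.0])
inertia_of(K)                       # one positive, two negative eigenvalues
```
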
MadNLP.initialize!Function
initialize!(kkt::AbstractKKTSystem)

Initialize the KKT system with default values. This function is called when initializing the MadNLPSolver storing the current KKT system kkt.

MadNLP.introduceFunction
introduce(::AbstractLinearSolver)

Print the name of the linear solver.

MadNLP.is_inertiaFunction
is_inertia(::AbstractLinearSolver)

Return true if the linear solver supports the computation of the inertia of the linear system.

MadNLP.is_inertia_correctFunction
is_inertia_correct(kkt::AbstractKKTSystem, n::Int, m::Int, p::Int)

Check if the inertia $(n, m, p)$ returned by the linear solver is consistent with the KKT system implemented in kkt.

MadNLP.is_supportedMethod
is_supported(solver, T)

Return true if solver supports the floating point number type T.

Examples

julia> is_supported(UmfpackSolver, Float64)
true

julia> is_supported(UmfpackSolver, Float32)
false
MadNLP.jtprod!Function
jtprod!(y::AbstractVector, kkt::AbstractKKTSystem, x::AbstractVector)

Multiply x by the transpose of the Jacobian and store the result in y, such that $y = A' x$ (with $A$ the current Jacobian).

MadNLP.number_dualMethod
number_dual(X::AbstractKKTVector)

Get total number of dual values $(y, z)$ in KKT vector X.

MadNLP.number_primalMethod
number_primal(X::AbstractKKTVector)

Get total number of primal values $(x, s)$ in KKT vector X.

MadNLP.primalFunction
primal(X::AbstractKKTVector)

Return the primal values $(x, s)$ stored in the KKT vector X.

MadNLP.primal_dualFunction
primal_dual(X::AbstractKKTVector)

Return both the primal and the dual values $(x, s, y, z)$ stored in the KKT vector X.

MadNLP.regularize_diagonal!Function
regularize_diagonal!(kkt::AbstractKKTSystem, primal_values::AbstractVector, dual_values::AbstractVector)

Regularize the values in the diagonal of the KKT system. Called internally inside the interior-point routine.

MadNLP.scale_constraints!Method
scale_constraints!(
    nlp::AbstractNLPModel,
    con_scale::AbstractVector,
    jac::AbstractMatrix;
    max_gradient=1e-8,
)

Compute the scaling of the constraints associated with the nonlinear model nlp. By default, Ipopt's scaling is applied. Users can write their own function to scale any custom AbstractNLPModel appropriately.

Notes

This function assumes that the Jacobian jac has been evaluated beforehand.
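
As an illustration, here is a sketch of a gradient-based scaling rule in the spirit of Ipopt's; the exact formula and the helper name `gradient_scaling` are assumptions for this example, not MadNLP's implementation. Each constraint $j$ is scaled by $\min(1, \text{max\_gradient} / \max_i |J_{ji}|)$, so that no scaled Jacobian entry exceeds max_gradient:

```julia
# Hypothetical helper, not MadNLP's implementation.
function gradient_scaling(jac::AbstractMatrix; max_gradient=100.0)
    row_max = vec(maximum(abs, jac; dims=2))   # largest entry per constraint row
    return min.(1.0, max_gradient ./ row_max)  # scale down only badly-scaled rows
end

J = [1.0  500.0;                    # badly scaled constraint
     0.1    0.2]                    # well scaled constraint
s = gradient_scaling(J)
```

Rows whose largest Jacobian entry is already below the threshold keep a unit scaling.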

MadNLP.scale_objectiveMethod
scale_objective(
    nlp::AbstractNLPModel,
    grad::AbstractVector;
    max_gradient=1e-8,
)

Compute the scaling of the objective associated with the nonlinear model nlp. By default, Ipopt's scaling is applied. Users can write their own function to scale the objective of any custom AbstractNLPModel appropriately.

Notes

This function assumes that the gradient grad has been evaluated beforehand.

MadNLP.set_jacobian_scaling!Function
set_jacobian_scaling!(kkt::AbstractKKTSystem, scaling::AbstractVector)

Set the scaling of the Jacobian with the vector scaling, which stores the scaling factors for all the constraints in the problem.

MadNLP.solve!Function
solve!(::AbstractLinearSolver, x::AbstractVector)

Solve the linear system $Ax = b$ in-place: on entry, x stores the right-hand side $b$; on exit, it is overwritten with the solution.

This function assumes the linear system has been factorized previously with factorize!.

MadNLP.solve_refine!Function
solve_refine!(x, ::AbstractIterator, b)

Solve the linear system $Ax = b$ using iterative refinement. The AbstractIterator object stores an AbstractLinearSolver instance used for the backsolve operations.

Notes

This function assumes the matrix stored in the linear solver has been factorized previously.
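
A minimal sketch of the idea (illustrative only; `solve_refine` below is a hypothetical helper, not MadNLP's `solve_refine!`): factorize once, then repeatedly correct the iterate with backsolves on the residual.

```julia
using LinearAlgebra

# Hypothetical helper sketching iterative refinement.
function solve_refine(A::AbstractMatrix, b::AbstractVector; maxiter=5, tol=1e-12)
    F = lu(A)                       # factorize once
    x = F \ b                       # initial backsolve
    for _ in 1:maxiter
        r = b - A * x               # residual of the current iterate
        norm(r) <= tol * norm(b) && break
        x += F \ r                  # cheap correction reusing the factors
    end
    return x
end

A = [4.0 1.0; 1.0 3.0]
b = [1.0, 2.0]
x = solve_refine(A, b)
```
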

MadNLP.timing_callbacksMethod
timing_callbacks(ips::InteriorPointSolver; ntrials=10)

Return the average time spent in each callback over ntrials trials. Results are returned in a named tuple.

MadNLP.timing_linear_solverMethod
timing_linear_solver(ips::InteriorPointSolver; ntrials=10)

Return the average time spent in the linear solver over ntrials trials. Results are returned in a named tuple.

MadNLP.timing_madnlpMethod
timing_madnlp(ips::InteriorPointSolver; ntrials=10)

Return the average time spent in the callbacks and in the linear solver over ntrials trials.

Results are returned in a named tuple.