MadNLP.AbstractCondensedKKTSystem
— Type
AbstractCondensedKKTSystem{T, VT, MT} <: AbstractKKTSystem{T, VT, MT}
The condensed KKT system further simplifies the AbstractReducedKKTSystem by removing the rows associated to the slack variables $s$ and the inequalities.
At the primal-dual iterate $(x, y)$, the matrix reads
[Wₓₓ + Σₓ + Aᵢ' Σₛ Aᵢ    Aₑ' ]  [Δx]
[         Aₑ              0  ]  [Δy]
with
- $Wₓₓ$: Hessian of the Lagrangian.
- $Aₑ$: Jacobian of the equality constraints.
- $Aᵢ$: Jacobian of the inequality constraints.
- $Σₓ = X⁻¹ V$
- $Σₛ = S⁻¹ W$
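The condensed matrix can be recovered from the reduced KKT system by eliminating $Δs$ and $Δz$. A sketch of this elimination, where the right-hand sides $r₁, r₂, r₃, r₄$ of the four reduced rows are hypothetical names introduced here for illustration:
- Row 4 of the reduced system, $Aᵢ Δx - Δs = r₄$, gives $Δs = Aᵢ Δx - r₄$.
- Row 2, $Σₛ Δs - Δz = r₂$, then gives $Δz = Σₛ (Aᵢ Δx - r₄) - r₂$.
- Substituting into row 1 yields $(Wₓₓ + Σₓ + Aᵢ' Σₛ Aᵢ) Δx + Aₑ' Δy = r₁ + Aᵢ' (Σₛ r₄ + r₂)$, which is exactly the first row of the condensed system above.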
MadNLP.AbstractKKTSystem
— Type
AbstractKKTSystem{T, VT<:AbstractVector{T}, MT<:AbstractMatrix{T}}
Abstract type for KKT systems.
MadNLP.AbstractKKTVector
— Type
AbstractKKTVector{T, VT}
Supertype for KKT's right-hand-side vectors $(x, s, y, z, ν, w)$.
MadNLP.AbstractLinearSolver
— Type
AbstractLinearSolver
Abstract type for linear solvers targeting the resolution of the linear system $Ax = b$.
MadNLP.AbstractReducedKKTSystem
— Type
AbstractReducedKKTSystem{T, VT, MT} <: AbstractKKTSystem{T, VT, MT}
The reduced KKT system is a simplification of the original augmented KKT system. Compared to AbstractUnreducedKKTSystem, AbstractReducedKKTSystem removes the last two rows associated to the bounds' duals $(ν, w)$.
At a primal-dual iterate $(x, s, y, z)$, the matrix reads
[Wₓₓ + Σₓ    0    Aₑ'   Aᵢ' ]  [Δx]
[    0       Σₛ    0    -I  ]  [Δs]
[    Aₑ      0     0     0  ]  [Δy]
[    Aᵢ     -I     0     0  ]  [Δz]
with
- $Wₓₓ$: Hessian of the Lagrangian.
- $Aₑ$: Jacobian of the equality constraints.
- $Aᵢ$: Jacobian of the inequality constraints.
- $Σₓ = X⁻¹ V$
- $Σₛ = S⁻¹ W$
MadNLP.AbstractUnreducedKKTSystem
— Type
AbstractUnreducedKKTSystem{T, VT, MT} <: AbstractKKTSystem{T, VT, MT}
Augmented KKT system associated to the linearization of the KKT conditions at the current primal-dual iterate $(x, s, y, z, ν, w)$.
The associated matrix is
[Wₓₓ   0    Aₑ'  Aᵢ'  -I    0 ]  [Δx]
[ 0    0     0   -I    0   -I ]  [Δs]
[Aₑ    0     0    0    0    0 ]  [Δy]
[Aᵢ   -I     0    0    0    0 ]  [Δz]
[V     0     0    0    X    0 ]  [Δν]
[0     W     0    0    0    S ]  [Δw]
with
- $Wₓₓ$: Hessian of the Lagrangian.
- $Aₑ$: Jacobian of the equality constraints.
- $Aᵢ$: Jacobian of the inequality constraints.
- $X = diag(x)$
- $S = diag(s)$
- $V = diag(ν)$
- $W = diag(w)$
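The reduced KKT system (AbstractReducedKKTSystem) follows from this augmented form by eliminating the bound duals $(Δν, Δw)$. A sketch, with hypothetical right-hand sides $r₅$ and $r₆$ introduced here for the last two rows:
- Row 5, $V Δx + X Δν = r₅$, gives $Δν = X⁻¹ (r₅ - V Δx)$; the $-Δν$ term in row 1 then contributes $X⁻¹ V Δx = Σₓ Δx$ to the left-hand side.
- Row 6, $W Δs + S Δw = r₆$, gives $Δw = S⁻¹ (r₆ - W Δs)$; the $-Δw$ term in row 2 then contributes $S⁻¹ W Δs = Σₛ Δs$.
This recovers the $Σₓ$ and $Σₛ$ blocks of the reduced system.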
MadNLP.DenseCondensedKKTSystem
— Type
DenseCondensedKKTSystem{T, VT, MT} <: AbstractCondensedKKTSystem{T, VT, MT}
Implement the AbstractCondensedKKTSystem with dense matrices.
Requires a dense linear solver to factorize the associated KKT system (otherwise an error is returned).
MadNLP.DenseKKTSystem
— Type
DenseKKTSystem{T, VT, MT} <: AbstractReducedKKTSystem{T, VT, MT}
Implement the AbstractReducedKKTSystem with dense matrices.
Requires a dense linear solver to factorize the KKT system (otherwise an error is returned).
MadNLP.ReducedKKTVector
— Type
ReducedKKTVector{T, VT<:AbstractVector{T}} <: AbstractKKTVector{T, VT}
KKT vector $(x, s, y, z)$, associated to an AbstractReducedKKTSystem.
Compared to UnreducedKKTVector, it does not store the dual values associated to the primal's lower and upper bounds.
MadNLP.SparseKKTSystem
— Type
SparseKKTSystem{T, VT, MT} <: AbstractReducedKKTSystem{T, VT, MT}
Implement the AbstractReducedKKTSystem in sparse COO format.
MadNLP.SparseUnreducedKKTSystem
— Type
SparseUnreducedKKTSystem{T, VT, MT} <: AbstractUnreducedKKTSystem{T, VT, MT}
Implement the AbstractUnreducedKKTSystem in sparse COO format.
MadNLP.UnreducedKKTVector
— Type
UnreducedKKTVector{T, VT<:AbstractVector{T}} <: AbstractKKTVector{T, VT}
Full KKT vector $(x, s, y, z, ν, w)$, associated to an AbstractUnreducedKKTSystem.
MadNLP.build_kkt!
— Function
build_kkt!(kkt::AbstractKKTSystem)
Assemble the KKT matrix before calling the factorization routine.
MadNLP.compress_hessian!
— Function
compress_hessian!(kkt::AbstractKKTSystem)
Compress the Hessian inside kkt's internals. This function is called every time a new Hessian is evaluated.
The default implementation does nothing.
MadNLP.compress_jacobian!
— Function
compress_jacobian!(kkt::AbstractKKTSystem)
Compress the Jacobian inside kkt's internals. This function is called every time a new Jacobian is evaluated.
By default, the function updates the coefficients of the Jacobian associated to the slack variables.
MadNLP.dual
— Function
dual(X::AbstractKKTVector)
Return the dual values $(y, z)$ stored in the KKT vector X.
MadNLP.dual_lb
— Function
dual_lb(X::AbstractKKTVector)
Return the dual values $ν$ associated to the lower bounds, stored in the KKT vector X.
MadNLP.dual_ub
— Function
dual_ub(X::AbstractKKTVector)
Return the dual values $w$ associated to the upper bounds, stored in the KKT vector X.
MadNLP.factorize!
— Function
factorize!(::AbstractLinearSolver)
Factorize the matrix $A$ and update the factors inside the AbstractLinearSolver instance.
MadNLP.full
— Function
full(X::AbstractKKTVector)
Return all the values stored inside the KKT vector X.
MadNLP.get_hessian
— Function
Get the Hessian matrix.
MadNLP.get_jacobian
— Function
Get the Jacobian matrix.
MadNLP.get_kkt
— Function
get_kkt(kkt::AbstractKKTSystem)::AbstractMatrix
Return a pointer to the KKT matrix implemented in kkt. The pointer is passed afterward to a linear solver.
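As an illustration, the functions above chain together when the KKT matrix is refreshed. The following is a hedged sketch, assuming kkt is an already-allocated AbstractKKTSystem whose Hessian and Jacobian buffers have just been refreshed by the model callbacks:
compress_hessian!(kkt)    # move the fresh Hessian values into kkt's internal storage
compress_jacobian!(kkt)   # same for the Jacobian (also updates the slack coefficients)
build_kkt!(kkt)           # assemble the KKT matrix
K = get_kkt(kkt)          # matrix handed to the linear solver for factorization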
MadNLP.hess_dense!
— Function
Dense Hessian callback.
MadNLP.inertia
— Function
inertia(::AbstractLinearSolver)
Return the inertia (n, m, p) of the linear system as a tuple.
Note
The inertia is defined as a tuple $(n, m, p)$, with
- $n$: number of positive eigenvalues
- $m$: number of negative eigenvalues
- $p$: number of zero eigenvalues
MadNLP.initialize!
— Function
initialize!(kkt::AbstractKKTSystem)
Initialize the KKT system with default values. Called when we initialize the MadNLPSolver storing the current KKT system kkt.
MadNLP.introduce
— Function
introduce(::AbstractLinearSolver)
Print the name of the linear solver.
MadNLP.is_inertia
— Function
is_inertia(::AbstractLinearSolver)
Return true if the linear solver supports the computation of the inertia of the linear system.
MadNLP.is_inertia_correct
— Function
is_inertia_correct(kkt::AbstractKKTSystem, n::Int, m::Int, p::Int)
Check if the inertia $(n, m, p)$ returned by the linear solver is adapted to the KKT system implemented in kkt.
MadNLP.is_reduced
— Function
Return true if the KKT system is reduced.
MadNLP.is_supported
— Method
is_supported(solver, T)
Return true if solver supports the floating-point number type T.
Examples
julia> is_supported(UmfpackSolver, Float64)
true

julia> is_supported(UmfpackSolver, Float32)
false
MadNLP.jac_dense!
— Function
Dense Jacobian callback.
MadNLP.jtprod!
— Function
jtprod!(y::AbstractVector, kkt::AbstractKKTSystem, x::AbstractVector)
Multiply with the transpose of the Jacobian and store the result in y, such that $y = A' x$ (with $A$ the current Jacobian).
MadNLP.nnz_jacobian
— Function
Number of nonzeros in the Jacobian.
MadNLP.num_variables
— Function
Number of primal variables associated to the KKT system.
MadNLP.number_dual
— Method
number_dual(X::AbstractKKTVector)
Get the total number of dual values $(y, z)$ in the KKT vector X.
MadNLP.number_primal
— Method
number_primal(X::AbstractKKTVector)
Get the total number of primal values $(x, s)$ in the KKT vector X.
MadNLP.primal
— Function
primal(X::AbstractKKTVector)
Return the primal values $(x, s)$ stored in the KKT vector X.
MadNLP.primal_dual
— Function
primal_dual(X::AbstractKKTVector)
Return both the primal and the dual values $(x, s, y, z)$ stored in the KKT vector X.
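To illustrate how the accessors documented in this section relate to each other, here is a hedged sketch assuming X is an AbstractKKTVector already allocated by MadNLP (for instance inside a MadNLPSolver):
xs = primal(X)     # primal values (x, s)
yz = dual(X)       # dual values (y, z)
ν  = dual_lb(X)    # multipliers associated to the lower bounds
w  = dual_ub(X)    # multipliers associated to the upper bounds
v  = full(X)       # all the values stored in X
# The primal-dual part is consistent with the counters:
@assert length(primal_dual(X)) == number_primal(X) + number_dual(X)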
MadNLP.regularize_diagonal!
— Function
regularize_diagonal!(kkt::AbstractKKTSystem, primal_values::AbstractVector, dual_values::AbstractVector)
Regularize the values in the diagonal of the KKT system. Called internally inside the interior-point routine.
MadNLP.scale_constraints!
— Method
scale_constraints!(
    nlp::AbstractNLPModel,
    con_scale::AbstractVector,
    jac::AbstractMatrix;
    max_gradient=1e-8,
)
Compute the scaling of the constraints associated to the nonlinear model nlp. By default, Ipopt's scaling is applied. The user can write their own function to scale appropriately any custom AbstractNLPModel.
Notes
This function assumes that the Jacobian jac has already been evaluated.
MadNLP.scale_objective
— Method
scale_objective(
    nlp::AbstractNLPModel,
    grad::AbstractVector;
    max_gradient=1e-8,
)
Compute the scaling of the objective associated to the nonlinear model nlp. By default, Ipopt's scaling is applied. The user can write their own function to scale appropriately the objective of any custom AbstractNLPModel.
Notes
This function assumes that the gradient grad has already been evaluated.
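A hedged usage sketch of the two scaling routines. The buffer allocation below relies on standard NLPModels.jl accessors, and the conventions that con_scale is filled in place while scale_objective returns the objective scaling are inferred from the signatures, not guaranteed by this documentation:
using NLPModels

x0 = NLPModels.get_x0(nlp)                  # nlp is assumed to be an AbstractNLPModel
g  = NLPModels.grad(nlp, x0)                # objective gradient at the initial point
J  = NLPModels.jac(nlp, x0)                 # constraint Jacobian at the initial point
con_scale = ones(NLPModels.get_ncon(nlp))   # one scaling factor per constraint
obj_scale = scale_objective(nlp, g)
scale_constraints!(nlp, con_scale, J)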
MadNLP.set_jacobian_scaling!
— Function
set_jacobian_scaling!(kkt::AbstractKKTSystem, scaling::AbstractVector)
Set the scaling of the Jacobian with the vector scaling, which stores the scaling for all the constraints in the problem.
MadNLP.solve!
— Function
solve!(::AbstractLinearSolver, x::AbstractVector)
Solve the linear system $Ax = b$.
This function assumes the linear system has been factorized previously with factorize!.
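Putting the linear-solver interface together, here is a hedged sketch of the sequence followed inside an interior-point iteration; linear_solver is assumed to be an already-constructed AbstractLinearSolver wrapping the matrix returned by get_kkt, and rhs an assembled right-hand-side vector:
introduce(linear_solver)              # print the name of the backend
factorize!(linear_solver)             # refactorize with the current KKT values
if is_inertia(linear_solver)
    n, m, p = inertia(linear_solver)  # positive / negative / zero eigenvalues
end
solve!(linear_solver, rhs)            # rhs is overwritten with the solution in place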
MadNLP.solve_refine!
— Function
solve_refine!(x, ::AbstractIterator, b)
Solve the linear system $Ax = b$ using iterative refinement. The object AbstractIterator stores an instance of an AbstractLinearSolver for the backsolve operations.
Notes
This function assumes the matrix stored in the linear solver has been factorized previously.
MadNLP.timing_callbacks
— Method
timing_callbacks(ips::InteriorPointSolver; ntrials=10)
Return the average time spent in each callback over ntrials different trials. Results are returned as a named tuple.
MadNLP.timing_linear_solver
— Method
timing_linear_solver(ips::InteriorPointSolver; ntrials=10)
Return the average time spent in the linear solver over ntrials different trials. Results are returned as a named tuple.
MadNLP.timing_madnlp
— Method
timing_madnlp(ips::InteriorPointSolver; ntrials=10)
Return the average time spent in the callbacks and in the linear solver over ntrials different trials. Results are returned as a named tuple.
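As an illustration, a hedged profiling sketch; ips is assumed to be an InteriorPointSolver already instantiated on a model, and the variable names are chosen here for illustration only:
t_cb  = timing_callbacks(ips; ntrials=5)       # average time spent in the NLP callbacks
t_lin = timing_linear_solver(ips; ntrials=5)   # average time spent in the linear solver
t_all = timing_madnlp(ips; ntrials=5)          # both, gathered in a named tuple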