MadNLP.AbstractCallback — Type
AbstractCallback{T, VT}
Wrap the AbstractNLPModel passed by the user in a form amenable to MadNLP.
An AbstractCallback handles the scaling of the problem and the reformulations of the equality constraints and fixed variables.
MadNLP.AbstractCondensedKKTSystem — Type
AbstractCondensedKKTSystem{T, VT, MT, QN} <: AbstractKKTSystem{T, VT, MT, QN}
The condensed KKT system further simplifies the AbstractReducedKKTSystem by removing the rows associated with the slack variables $s$ and the inequality constraints.
At the primal-dual iterate $(x, y)$, the matrix writes
\[
\begin{bmatrix}
Wₓₓ + Σₓ + Aᵢ' Σₛ Aᵢ & Aₑ' \\
Aₑ & 0
\end{bmatrix}
\begin{bmatrix} Δx \\ Δy \end{bmatrix}
\]
with
- $Wₓₓ$: Hessian of the Lagrangian.
- $Aₑ$: Jacobian of the equality constraints
- $Aᵢ$: Jacobian of the inequality constraints
- $Σₓ = X⁻¹ V$
- $Σₛ = S⁻¹ W$
MadNLP.AbstractEqualityTreatment — Type
AbstractEqualityTreatment
Abstract type to define the reformulation of the equality constraints inside MadNLP.
MadNLP.AbstractFixedVariableTreatment — Type
AbstractFixedVariableTreatment
Abstract type to define the reformulation of the fixed variables inside MadNLP.
MadNLP.AbstractHessian — Type
AbstractHessian{T, VT}
Abstract type for representation of second-order information.
MadNLP.AbstractKKTSystem — Type
AbstractKKTSystem{T, VT<:AbstractVector{T}, MT<:AbstractMatrix{T}, QN<:AbstractHessian{T}}
Abstract type for KKT system.
MadNLP.AbstractKKTVector — Type
AbstractKKTVector{T, VT}
Supertype for KKT's right-hand-side vectors $(x, s, y, z, ν, w)$.
MadNLP.AbstractLinearSolver — Type
AbstractLinearSolver
Abstract type for linear solvers targeting the resolution of the linear system $Ax = b$.
MadNLP.AbstractQuasiNewton — Type
AbstractQuasiNewton{T, VT} <: AbstractHessian{T, VT}
Abstract type for quasi-Newton approximations.
MadNLP.AbstractReducedKKTSystem — Type
AbstractReducedKKTSystem{T, VT, MT, QN} <: AbstractKKTSystem{T, VT, MT, QN}
The reduced KKT system is a simplification of the original augmented KKT system. Compared to AbstractUnreducedKKTSystem, AbstractReducedKKTSystem removes the last two rows, associated with the bounds' duals $(ν, w)$.
At a primal-dual iterate $(x, s, y, z)$, the matrix writes
\[
\begin{bmatrix}
Wₓₓ + Σₓ & 0 & Aₑ' & Aᵢ' \\
0 & Σₛ & 0 & -I \\
Aₑ & 0 & 0 & 0 \\
Aᵢ & -I & 0 & 0
\end{bmatrix}
\begin{bmatrix} Δx \\ Δs \\ Δy \\ Δz \end{bmatrix}
\]
with
- $Wₓₓ$: Hessian of the Lagrangian.
- $Aₑ$: Jacobian of the equality constraints
- $Aᵢ$: Jacobian of the inequality constraints
- $Σₓ = X⁻¹ V$
- $Σₛ = S⁻¹ W$
MadNLP.AbstractUnreducedKKTSystem — Type
AbstractUnreducedKKTSystem{T, VT, MT, QN} <: AbstractKKTSystem{T, VT, MT, QN}
Augmented KKT system associated to the linearization of the KKT conditions at the current primal-dual iterate $(x, s, y, z, ν, w)$.
The associated matrix is
\[
\begin{bmatrix}
Wₓₓ & 0 & Aₑ' & Aᵢ' & -I & 0 \\
0 & 0 & 0 & -I & 0 & -I \\
Aₑ & 0 & 0 & 0 & 0 & 0 \\
Aᵢ & -I & 0 & 0 & 0 & 0 \\
V & 0 & 0 & 0 & X & 0 \\
0 & W & 0 & 0 & 0 & S
\end{bmatrix}
\begin{bmatrix} Δx \\ Δs \\ Δy \\ Δz \\ Δν \\ Δw \end{bmatrix}
\]
with
- $Wₓₓ$: Hessian of the Lagrangian.
- $Aₑ$: Jacobian of the equality constraints
- $Aᵢ$: Jacobian of the inequality constraints
- $X = diag(x)$
- $S = diag(s)$
- $V = diag(ν)$
- $W = diag(w)$
MadNLP.BFGS — Type
BFGS{T, VT} <: AbstractQuasiNewton{T, VT}
BFGS quasi-Newton method. Update the direct Hessian approximation using
\[B_{k+1} = B_k - \frac{(B_k s_k)(B_k s_k)^⊤}{s_k^⊤ B_k s_k} + \frac{y_k y_k^⊤}{y_k^⊤ s_k}\]
Notes
The matrix is not updated if $s_k^⊤ y_k < 10^{-8}$.
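For intuition, the update rule can be sketched in a few lines of Julia; bfgs_update! below is a hypothetical helper written against the formula above, not MadNLP's internal update! routine.
using LinearAlgebra

# Hypothetical sketch of the BFGS update above (not MadNLP's internal routine).
function bfgs_update!(B::AbstractMatrix, s::AbstractVector, y::AbstractVector)
    sy = dot(s, y)
    sy < 1e-8 && return false        # skip the update, as described in the Notes
    Bs = B * s
    B .-= (Bs * Bs') ./ dot(s, Bs)   # remove the old curvature along B*s
    B .+= (y * y') ./ sy             # add the secant curvature
    return true
end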
MadNLP.CompactLBFGS — Type
CompactLBFGS{T, VT} <: AbstractQuasiNewton
MadNLP.DenseCallback — Type
DenseCallback{T, VT} <: AbstractCallback{T, VT}
Wrap an AbstractNLPModel using dense structures.
MadNLP.DenseCondensedKKTSystem — Type
DenseCondensedKKTSystem{T, VT, MT, QN} <: AbstractCondensedKKTSystem{T, VT, MT, QN}
Implement AbstractCondensedKKTSystem with dense matrices.
Requires a dense linear solver to factorize the associated KKT system (otherwise an error is returned).
MadNLP.DenseKKTSystem — Type
DenseKKTSystem{T, VT, MT, QN, VI} <: AbstractReducedKKTSystem{T, VT, MT, QN}
Implement AbstractReducedKKTSystem with dense matrices.
Requires a dense linear solver to be factorized (otherwise an error is returned).
MadNLP.EnforceEquality — Type
EnforceEquality <: AbstractEqualityTreatment
Keep the equality constraints intact.
The solution returned by MadNLP will respect the equality constraints.
MadNLP.MadNLPExecutionStats — Type
MadNLPExecutionStats{T, VT} <: AbstractExecutionStats
Store the results returned by MadNLP once the interior-point algorithm has terminated.
MadNLP.MadNLPSolver — Method
MadNLPSolver(nlp::AbstractNLPModel{T, VT}; options...) where {T, VT}
Instantiate a new MadNLPSolver associated to the nonlinear program nlp::AbstractNLPModel. The options are passed as optional keyword arguments.
The constructor allocates all the memory required in the interior-point algorithm, so the main algorithm remains allocation free.
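A minimal usage sketch, assuming ADNLPModels is available to build a toy AbstractNLPModel; the print_level option is shown purely for illustration.
using MadNLP, ADNLPModels

# Hypothetical toy problem; any AbstractNLPModel works here.
nlp = ADNLPModel(x -> (x[1] - 1.0)^2 + 100.0 * (x[2] - x[1]^2)^2, [-1.2, 1.0])

solver = MadNLPSolver(nlp; print_level=MadNLP.ERROR)  # all allocations happen here
stats = MadNLP.solve!(solver)                         # run the interior-point algorithm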
MadNLP.MakeParameter — Type
MakeParameter{VT, VI} <: AbstractFixedVariableTreatment
Remove the fixed variables from the optimization variables and define them as parameters of the problem.
MadNLP.PrimalVector — Type
PrimalVector{T, VT<:AbstractVector{T}} <: AbstractKKTVector{T, VT}
Primal vector $(x, s)$.
MadNLP.RelaxBound — Type
RelaxBound <: AbstractFixedVariableTreatment
Relax the fixed variables $x = x_{fixed}$ as bounded variables $x_{fixed} - ϵ ≤ x ≤ x_{fixed} + ϵ$, with $ϵ$ a small-enough parameter.
MadNLP.RelaxEquality — Type
RelaxEquality <: AbstractEqualityTreatment
Relax the equality constraints $g(x) = 0$ with two inequality constraints, as $-ϵ ≤ g(x) ≤ ϵ$. The parameter $ϵ$ is usually small.
The solution returned by MadNLP will satisfy the equality constraints only up to a tolerance $ϵ$.
MadNLP.SparseCallback — Type
SparseCallback{T, VT} <: AbstractCallback{T, VT}
Wrap an AbstractNLPModel using sparse structures.
MadNLP.SparseCondensedKKTSystem — Type
SparseCondensedKKTSystem{T, VT, MT, QN} <: AbstractCondensedKKTSystem{T, VT, MT, QN}
Implement the AbstractCondensedKKTSystem in sparse COO format.
MadNLP.SparseKKTSystem — Type
SparseKKTSystem{T, VT, MT, QN} <: AbstractReducedKKTSystem{T, VT, MT, QN}
Implement the AbstractReducedKKTSystem in sparse COO format.
MadNLP.SparseUnreducedKKTSystem — Type
SparseUnreducedKKTSystem{T, VT, MT, QN} <: AbstractUnreducedKKTSystem{T, VT, MT, QN}
Implement the AbstractUnreducedKKTSystem in sparse COO format.
MadNLP.UnreducedKKTVector — Type
UnreducedKKTVector{T, VT<:AbstractVector{T}} <: AbstractKKTVector{T, VT}
Full KKT vector $(x, s, y, z, ν, w)$, associated to an AbstractUnreducedKKTSystem.
MadNLP.Optimizer — Function
Optimizer()
Create a new MadNLP optimizer.
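A sketch of the typical usage through JuMP; the print_level attribute is one illustrative option among those accepted by MadNLP.
using JuMP, MadNLP

model = Model(MadNLP.Optimizer)
set_optimizer_attribute(model, "print_level", MadNLP.ERROR)  # pass MadNLP options as attributes
@variable(model, x >= 0.0)
@objective(model, Min, (x - 2.0)^2)
optimize!(model)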
MadNLP.build_kkt! — Function
build_kkt!(kkt::AbstractKKTSystem)
Assemble the KKT matrix before calling the factorization routine.
MadNLP.compress_hessian! — Function
compress_hessian!(kkt::AbstractKKTSystem)
Compress the Hessian inside kkt's internals. This function is called every time a new Hessian is evaluated.
The default implementation does nothing.
MadNLP.compress_jacobian! — Function
compress_jacobian!(kkt::AbstractKKTSystem)
Compress the Jacobian inside kkt's internals. This function is called every time a new Jacobian is evaluated.
By default, the function updates the coefficients in the Jacobian associated with the slack variables.
MadNLP.create_callback — Function
create_callback(
    ::Type{Callback},
    nlp::AbstractNLPModel{T, VT};
    fixed_variable_treatment=MakeParameter,
    equality_treatment=EnforceEquality,
) where {T, VT}
Wrap the nonlinear program nlp using the callback wrapper with type Callback. The option fixed_variable_treatment decides if the fixed variables are relaxed (RelaxBound) or removed (MakeParameter). The option equality_treatment decides if the equality constraints are kept as is (EnforceEquality) or relaxed (RelaxEquality).
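For instance, a sketch wrapping a model nlp (assumed to be any AbstractNLPModel) with sparse structures:
cb = MadNLP.create_callback(
    MadNLP.SparseCallback,
    nlp;
    fixed_variable_treatment=MadNLP.MakeParameter,
    equality_treatment=MadNLP.EnforceEquality,
)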
MadNLP.create_kkt_system — Function
create_kkt_system(
    ::Type{KKT},
    cb::AbstractCallback,
    ind_cons::NamedTuple,
    linear_solver::Type{LinSol};
    opt_linear_solver=default_options(linear_solver),
    hessian_approximation=ExactHessian,
) where {KKT<:AbstractKKTSystem, LinSol<:AbstractLinearSolver}
Instantiate a new KKT system with type KKT, associated to the nonlinear program encoded inside the callback cb. The NamedTuple ind_cons stores the indices of all the variables and constraints in the callback cb. In addition, the user should pass the linear solver linear_solver that will be used to solve the KKT system after it has been assembled.
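Building on the callback cb from the previous sketch, assembling a sparse KKT system might look as follows; using get_index_constraints on the original model and picking UmfpackSolver are assumptions made for illustration.
ind_cons = MadNLP.get_index_constraints(nlp)   # indices of constraints and bounded variables
kkt = MadNLP.create_kkt_system(
    MadNLP.SparseKKTSystem,
    cb,
    ind_cons,
    MadNLP.UmfpackSolver,
)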
MadNLP.dual — Function
dual(X::AbstractKKTVector)
Return the dual values $(y, z)$ stored in the KKT vector X.
MadNLP.dual_lb — Function
dual_lb(X::AbstractKKTVector)
Return the dual values $ν$ associated with the lower bounds, stored in the KKT vector X.
MadNLP.dual_ub — Function
dual_ub(X::AbstractKKTVector)
Return the dual values $w$ associated with the upper bounds, stored in the KKT vector X.
MadNLP.factorize! — Function
factorize!(::AbstractLinearSolver)
Factorize the matrix $A$ and update the factors inside the AbstractLinearSolver instance.
MadNLP.full — Function
full(X::AbstractKKTVector)
Return all the values stored inside the KKT vector X.
MadNLP.get_hessian — Function
Get Hessian matrix.
MadNLP.get_index_constraints — Method
get_index_constraints(nlp::AbstractNLPModel)
Analyze the bounds of the variables and the constraints in the AbstractNLPModel nlp. Return a named tuple with the following keys:
- ind_eq: indices of equality constraints.
- ind_ineq: indices of inequality constraints.
- ind_fixed: indices of fixed variables.
- ind_lb: indices of variables with a lower bound.
- ind_ub: indices of variables with an upper bound.
- ind_llb: indices of variables with only a lower bound.
- ind_uub: indices of variables with only an upper bound.
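A short sketch of querying the returned named tuple (the model nlp is an assumption):
ind_cons = MadNLP.get_index_constraints(nlp)
ind_cons.ind_eq             # indices of the equality constraints
length(ind_cons.ind_fixed)  # number of fixed variables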
MadNLP.get_jacobian — Function
Get Jacobian matrix.
MadNLP.get_kkt — Function
get_kkt(kkt::AbstractKKTSystem)::AbstractMatrix
Return a pointer to the KKT matrix implemented in kkt. The pointer is passed afterward to a linear solver.
MadNLP.hess_dense! — Function
Dense Hessian callback.
MadNLP.inertia — Function
inertia(::AbstractLinearSolver)
Return the inertia (n, m, p) of the linear system as a tuple.
Note
The inertia is defined as a tuple $(n, m, p)$, with
- $n$: number of positive eigenvalues
- $m$: number of negative eigenvalues
- $p$: number of zero eigenvalues
MadNLP.init! — Function
init!(
    qn::AbstractHessian{T},
    Bk::AbstractArray{T},
    g0::AbstractVector{T},
    f0::T,
) where T
Instantiate the Hessian estimate Bk with the quasi-Newton algorithm qn. The function uses the initial gradient g0 and the initial objective f0 to build the initial estimate.
MadNLP.initialize! — Function
initialize!(kkt::AbstractKKTSystem)
Initialize the KKT system with default values. Called when we initialize the MadNLPSolver storing the current KKT system kkt.
MadNLP.introduce — Function
introduce(::AbstractLinearSolver)
Print the name of the linear solver.
MadNLP.is_inertia — Function
is_inertia(::AbstractLinearSolver)
Return true if the linear solver supports the computation of the inertia of the linear system.
MadNLP.is_inertia_correct — Function
is_inertia_correct(kkt::AbstractKKTSystem, n::Int, m::Int, p::Int)
Check if the inertia $(n, m, p)$ returned by the linear solver is adapted to the KKT system implemented in kkt.
MadNLP.is_supported — Method
is_supported(solver, T)
Return true if solver supports the floating-point number type T.
Examples
julia> is_supported(UmfpackSolver,Float64)
true
julia> is_supported(UmfpackSolver,Float32)
false
MadNLP.jac_dense! — Function
Dense Jacobian callback.
MadNLP.jtprod! — Function
jtprod!(y::AbstractVector, kkt::AbstractKKTSystem, x::AbstractVector)
Multiply with the transpose of the Jacobian and store the result in y, such that $y = A' x$ (with $A$ the current Jacobian).
MadNLP.madnlp — Method
madnlp(model::AbstractNLPModel; options...)
Build a MadNLPSolver and solve it using the interior-point method. Return the solution as a MadNLPExecutionStats.
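A one-call usage sketch; tol and print_level are illustrative options, and the fields read on the returned stats follow MadNLPExecutionStats.
stats = madnlp(nlp; tol=1e-8, print_level=MadNLP.ERROR)
stats.status      # termination status
stats.objective   # objective value at the solution
stats.solution    # primal solution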
MadNLP.nnz_jacobian — Function
Number of nonzero entries in the Jacobian.
MadNLP.num_variables — Function
Number of primal variables (including slacks) associated to the KKT system.
MadNLP.number_dual — Method
number_dual(X::AbstractKKTVector)
Get total number of dual values $(y, z)$ in the KKT vector X.
MadNLP.number_primal — Method
number_primal(X::AbstractKKTVector)
Get total number of primal values $(x, s)$ in the KKT vector X.
MadNLP.primal — Function
primal(X::AbstractKKTVector)
Return the primal values $(x, s)$ stored in the KKT vector X.
MadNLP.primal_dual — Function
primal_dual(X::AbstractKKTVector)
Return both the primal and the dual values $(x, s, y, z)$ stored in the KKT vector X.
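A sketch of how the accessors above relate, for a hypothetical KKT vector X (e.g. an UnreducedKKTVector):
xs = MadNLP.primal(X)    # primal part (x, s)
yz = MadNLP.dual(X)      # dual part (y, z)
ν  = MadNLP.dual_lb(X)   # duals of the lower bounds
w  = MadNLP.dual_ub(X)   # duals of the upper bounds
v  = MadNLP.full(X)      # all values (x, s, y, z, ν, w)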
MadNLP.regularize_diagonal! — Function
regularize_diagonal!(kkt::AbstractKKTSystem, primal_values::Number, dual_values::Number)
Regularize the values in the diagonal of the KKT system. Called internally inside the interior-point routine.
MadNLP.solve_refine! — Function
solve_refine!(x::VT, ::AbstractIterator, b::VT, w::VT) where {VT <: AbstractKKTVector}
Solve the linear system $Ax = b$ using iterative refinement. The object AbstractIterator stores an instance of an AbstractLinearSolver used for the backsolve operations.
Notes
This function assumes the matrix stored in the linear solver has been factorized previously.
MadNLP.timing_callbacks — Method
timing_callbacks(ips::InteriorPointSolver; ntrials=10)
Return the average timings spent in each callback for ntrials different trials. Results are returned inside a named tuple.
MadNLP.timing_linear_solver — Method
timing_linear_solver(ips::InteriorPointSolver; ntrials=10)
Return the average timings spent in the linear solver for ntrials different trials. Results are returned inside a named tuple.
MadNLP.timing_madnlp — Method
timing_madnlp(ips::InteriorPointSolver; ntrials=10)
Return the average time spent in the callbacks and in the linear solver, for ntrials different trials. Results are returned as a named tuple.
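A sketch of how these diagnostics might be invoked after a solve; the solver construction is an assumption, and the solver instance is assumed to match the expected solver type.
solver = MadNLPSolver(nlp; print_level=MadNLP.ERROR)
MadNLP.solve!(solver)
t_cb  = MadNLP.timing_callbacks(solver; ntrials=5)
t_lin = MadNLP.timing_linear_solver(solver; ntrials=5)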
MadNLP.update! — Function
update!(
    qn::AbstractQuasiNewton,
    Bk::AbstractMatrix,
    sk::AbstractVector,
    yk::AbstractVector,
)
Update the matrix Bk encoding the (direct) Hessian approximation with the secant vectors sk and yk.
Return true if the update succeeded, false otherwise.
SolverCore.solve! — Function
solve!(::AbstractLinearSolver, x::AbstractVector)
Solve the linear system $Ax = b$.
This function assumes the linear system has been factorized previously with factorize!.
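Putting the linear-solver interface together, a hedged sketch of the factorize-then-solve sequence; the UmfpackSolver constructor call and the test matrix are assumptions made for illustration.
using MadNLP, SparseArrays

A = sparse([2.0 1.0; 1.0 3.0])               # small symmetric test matrix
ls = MadNLP.UmfpackSolver(A)                 # assumed constructor wrapping A
MadNLP.introduce(ls)                         # name of the linear solver
MadNLP.factorize!(ls)                        # compute and store the factors of A
b = [1.0, 2.0]
MadNLP.solve!(ls, b)                         # overwrite b with the solution of A x = b
MadNLP.is_inertia(ls) && MadNLP.inertia(ls)  # inertia (n, m, p), if supported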