Reference

Index
- `FletcherPenaltySolver.AlgoData`
- `FletcherPenaltySolver.DirectSolver`
- `FletcherPenaltySolver.FPSSSolver`
- `FletcherPenaltySolver.FletcherPenaltyNLP`
- `FletcherPenaltySolver.GNSolver`
- `FletcherPenaltySolver.IterativeSolver`
- `FletcherPenaltySolver.LDLtSolver`
- `FletcherPenaltySolver.LUSolver`
- `FletcherPenaltySolver.QDSolver`
- `FletcherPenaltySolver.Fletcher_penalty_optimality_check`
- `FletcherPenaltySolver.TR_lsmr`
- `FletcherPenaltySolver._compute_ys_gs!`
- `FletcherPenaltySolver.cons_norhs!`
- `FletcherPenaltySolver.feasibility_step`
- `FletcherPenaltySolver.fps_solve`
- `FletcherPenaltySolver.ghjvprod_nln!`
- `FletcherPenaltySolver.go_log`
- `FletcherPenaltySolver.hess_nln`
- `FletcherPenaltySolver.hess_nln_coord!`
- `FletcherPenaltySolver.hprod_nln!`
- `FletcherPenaltySolver.linear_system2`
- `FletcherPenaltySolver.random_restoration!`
- `FletcherPenaltySolver.restoration_feasibility!`
- `FletcherPenaltySolver.solve_least_norm`
- `FletcherPenaltySolver.solve_least_square`
- `FletcherPenaltySolver.solve_two_extras`
- `FletcherPenaltySolver.solve_two_least_squares`
- `FletcherPenaltySolver.solve_two_mixed`
- `FletcherPenaltySolver.update_parameters!`
- `FletcherPenaltySolver.update_parameters_unbdd!`
FletcherPenaltySolver.AlgoData — Type

```julia
AlgoData(; kwargs...)
AlgoData(T::DataType; kwargs...)
```

Structure containing all the parameters used in the `fps_solve` call. `T` is the data type used in the algorithm; by default it is `Float64`. Returns an `AlgoData` structure.
Arguments

The keyword arguments may include:

- `σ_0::Real = T(1e3)`: initial value of the subproblem parameter σ;
- `σ_max::Real = 1 / √eps(T)`: maximum value for the subproblem parameter σ;
- `σ_update::Real = T(2)`: update factor for the subproblem parameter σ;
- `ρ_0::Real = one(T)`: initial value of the subproblem parameter ρ;
- `ρ_max::Real = 1 / √eps(T)`: maximum value for the subproblem parameter ρ;
- `ρ_update::Real = T(2)`: update factor for the subproblem parameter ρ;
- `δ_0::Real = √eps(T)`: initial value of the subproblem parameter δ;
- `δ_max::Real = 1 / √eps(T)`: maximum value for the subproblem parameter δ;
- `δ_update::Real = T(10)`: update factor for the subproblem parameter δ;
- `η_1::Real = zero(T)`: initial value of the subproblem parameter η;
- `η_update::Real = one(T)`: update factor for the subproblem parameter η;
- `yM::Real = typemax(T)`: bound on the Lagrange multipliers;
- `Δ::Real = T(0.95)`: expected decrease in feasibility between two iterations;
- `subproblem_solver::Function = ipopt`: solver used for the subproblem;
- `subpb_unbounded_threshold::Real = 1 / √eps(T)`: the subproblem is declared unbounded below the opposite of this value;
- `subsolver_max_iter::Int = 20000`: maximum number of iterations for the subproblem solver;
- `atol_sub::Function = atol -> atol`: absolute tolerance for the subproblem, as a function of `atol`;
- `rtol_sub::Function = rtol -> rtol`: relative tolerance for the subproblem, as a function of `rtol`;
- `hessian_approx = Val(2)`: either `Val(1)` or `Val(2)`; selects the Hessian approximation;
- `convex_subproblem::Bool = false`: `true` if the subproblem is convex; useful to set the `convex` option in `knitro`;
- `lagrange_bound::T = 1 / sqrt(eps(T))`: upper bound on the Lagrange multipliers.

For more details, we refer to the package documentation `fine-tuneFPS.md`.
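To make the keyword interface above concrete, here is a hedged sketch. The constructor signatures come from this docstring, but the specific values chosen and the use of the qualified name `FletcherPenaltySolver.AlgoData` are illustrative assumptions:

```julia
using FletcherPenaltySolver

# Default parameters with T = Float64 (signature from the docstring above).
meta = FletcherPenaltySolver.AlgoData(Float64)

# Override a few subproblem parameters by keyword; names are taken from
# the keyword list above (σ_0, ρ_0, hessian_approx).
meta_tuned = FletcherPenaltySolver.AlgoData(Float64; σ_0 = 1e2, ρ_0 = 1.0, hessian_approx = Val(1))
```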
FletcherPenaltySolver.DirectSolver — Type

```julia
DirectSolver(nlp::AbstractNLPModel, ::T) <: QDSolver
```
FletcherPenaltySolver.FPSSSolver — Type

```julia
FPSSSolver(nlp::AbstractNLPModel [, x0 = nlp.meta.x0]; kwargs...)
FPSSSolver(stp::NLPStopping; kwargs...)
```

Structure regrouping all the structures used during the `fps_solve` call. It returns an `FPSSSolver` structure.

Arguments

The keyword arguments may include:

- `stp::NLPStopping`: `Stopping` structure for this algorithm workflow;
- `meta::AlgoData{T}`: see `AlgoData`;
- `workspace`: allocated space for the solver itself;
- `qdsolver`: solver structure for the linear algebra part, containing the allocations for that part. By default an `LDLtSolver`, but an alternative is `IterativeSolver`;
- `subproblem_solver::AbstractOptimizationSolver`: by default a `subproblem_solver_correspondence[Symbol(meta.subproblem_solver)]`;
- `sub_stats::GenericExecutionStats`: stats structure for the result of `subproblem_solver`;
- `feasibility_solver`: by default a `GNSolver`, see `GNSolver`;
- `model::FletcherPenaltyNLP`: the subproblem;
- `sub_stp::NLPStopping`: `Stopping` structure for the subproblem.

Note:

- the `subproblem_solver` is accessible from the dictionary `subproblem_solver_correspondence::Dict`;
- the `qdsolver` is accessible from the dictionary `qdsolver_correspondence`.
FletcherPenaltySolver.FletcherPenaltyNLP — Type

```julia
FletcherPenaltyNLP(nlp, σ, hessian_approx, [x0 = nlp.meta.x0]; qds = LDLtSolver(nlp, S(0)))
FletcherPenaltyNLP(nlp, σ, ρ, δ, hessian_approx, [x0 = nlp.meta.x0]; qds = LDLtSolver(nlp, S(0)))
FletcherPenaltyNLP(nlp; σ_0 = 1, ρ_0 = 0, δ_0 = 0, hessian_approx = Val(2), x0 = nlp.meta.x0, qds = LDLtSolver(nlp, S(0)))
```

We consider here the implementation of Fletcher's exact penalty method for the minimization problem

\[ \min_x \; f(x) \quad \text{s.t.} \quad c(x) = ℓ, \; l ≤ x ≤ u \]

using Fletcher's penalty function

\[ \min_x \; f(x) - (c(x) - ℓ)^T ys(x) + \tfrac{ρ}{2} \|c(x) - ℓ\|_2^2 \quad \text{s.t.} \quad l ≤ x ≤ u, \]

where

\[ ys(x) ∈ \arg\min_y \; \tfrac{1}{2} \|A(x) y - g(x)\|_2^2 + σ (c(x) - ℓ)^T y + \tfrac{δ}{2} \|y\|_2^2. \]
Arguments

- `nlp::AbstractNLPModel`: the model solved, see `NLPModels.jl`;
- `x::AbstractVector`: initial guess. If `x` is not specified, then `nlp.meta.x0` is used;
- `σ`, `ρ`, `δ`: parameters of the subproblem;
- `hessian_approx`: either `Val(1)` or `Val(2)` for the Hessian approximation;
- `qds`: solver structure for the linear algebra computations, see `LDLtSolver` or `IterativeSolver`.
Notes:

- Evaluations of the `obj`, `grad`, and `objgrad` functions evaluate functions from the original `nlp`. These values are stored in `fx`, `cx`, `gx`.
- The value of the penalty vector `ys` is also stored.
- The Hessian's structure is dense.
Examples

```julia
julia> using FletcherPenaltySolver, ADNLPModels
julia> nlp = ADNLPModel(x -> 100 * (x[2] - x[1]^2)^2 + (x[1] - 1)^2, [-1.2; 1.0])
julia> fp_sos = FletcherPenaltyNLP(nlp)
```
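Since `FletcherPenaltyNLP` is itself an NLPModel, the standard NLPModels API should apply to it. A hedged sketch continuing the example above; the `obj`/`grad` calls are assumptions based on the Notes, which say these evaluate functions from the original `nlp`:

```julia
using FletcherPenaltySolver, ADNLPModels, NLPModels

nlp = ADNLPModel(x -> 100 * (x[2] - x[1]^2)^2 + (x[1] - 1)^2, [-1.2; 1.0])
fp_sos = FletcherPenaltyNLP(nlp)

# Evaluate Fletcher's penalty function and its gradient at the initial
# guess through the generic NLPModels accessors.
fx = obj(fp_sos, nlp.meta.x0)
gx = grad(fp_sos, nlp.meta.x0)
```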
FletcherPenaltySolver.GNSolver — Type

```julia
GNSolver(x, y; kwargs...)
```

Structure containing all the parameters used in the feasibility step. `x` is an initial guess, and `y` is an initial guess for the Lagrange multiplier. Returns a `GNSolver` structure.

Arguments

The keyword arguments may include:

- `η₁::T = T(1e-3)`: feasibility step: decrease the trust-region radius when `Ared/Pred < η₁`;
- `η₂::T = T(0.66)`: feasibility step: increase the trust-region radius when `Ared/Pred > η₂`;
- `σ₁::T = T(0.25)`: feasibility step: decrease coefficient of the trust-region radius;
- `σ₂::T = T(2.0)`: feasibility step: increase coefficient of the trust-region radius;
- `Δ₀::T = one(T)`: feasibility step: initial radius;
- `bad_steps_lim::Integer = 3`: feasibility step: number of consecutive bad steps before using a second-order step;
- `feas_expected_decrease::T = T(0.95)`: feasibility step: bad steps are when `‖c(z)‖ / ‖c(x)‖ > feas_expected_decrease`;
- `TR_compute_step = LsmrSolver(length(y0), length(x0), S)`: computes the direction in the feasibility step;
- `aggressive_step = CgSolver(length(x0), length(x0), S)`: computes the direction in the feasibility step in aggressive mode.
FletcherPenaltySolver.IterativeSolver — Type

```julia
IterativeSolver(nlp::AbstractNLPModel, ::T) <: QDSolver
```

It uses `Krylov.jl` methods to solve least-squares and least-norm problems.

FletcherPenaltySolver.LDLtSolver — Type

```julia
LDLtSolver(nlp::AbstractNLPModel, ::T) <: QDSolver
```

It uses `LDLFactorizations.jl` methods to solve least-squares and least-norm problems.

FletcherPenaltySolver.LUSolver — Type

```julia
LUSolver(nlp::AbstractNLPModel, ::T) <: QDSolver
```
FletcherPenaltySolver.QDSolver — Type

Abstract structure handling parameters for the system

\[ \begin{bmatrix} I_n & A^T \\ A & -δ I_m \end{bmatrix} \]

where `δ = nlp.δ`, which we solve twice.

Its implementation should define:
FletcherPenaltySolver.Fletcher_penalty_optimality_check — Method

```julia
Fletcher_penalty_optimality_check(pb::AbstractNLPModel, state::NLPAtX)
```

Optimality function used by default in the algorithm. An alternative is to use the function `KKT` from `Stopping.jl`.

The function returns a vector of length `ncon + nvar` containing:

- `|c(x) - lcon| / |x|₂`
- `res / |λ|₂`; if the problem has bounds, `x - max(min(x - res, uvar), lvar)`

The fields `x`, `cx` and `res` need to be filled. If `state.lambda` is `nothing`, then we take `|λ|₂ = 1`.
FletcherPenaltySolver.TR_lsmr — Method

```julia
TR_lsmr(solver, cz, Jz, ctol, Δ, normcz, Jd)
```

Compute a direction `d` such that

\[\begin{aligned} \min_{d} \quad & \|c + Jz' d \| \\ \text{s.t.} \quad & \|d\| \leq \Delta, \end{aligned}\]

using the `lsmr` method from `Krylov.jl`.

Output

- `d`: the solution;
- `Jd`: the product of the solution with `J`;
- `infeasible`: `true` if the problem is infeasible;
- `solved`: `true` if the problem has been successfully solved.
FletcherPenaltySolver._compute_ys_gs! — Method

```julia
gs, ys, v, w = _compute_ys_gs!(nlp, x)
```

Compute the Lagrange multipliers and the gradient of the Lagrangian function in place.

FletcherPenaltySolver.cons_norhs! — Method

```julia
cons_norhs!(nlp::FletcherPenaltyNLP, x, cx)
```

Redefine the NLPModel function `cons!` to account for a nonzero right-hand side in the equality constraint. It returns `cons(nlp, x) - nlp.meta.lcon`.
FletcherPenaltySolver.feasibility_step — Method

```julia
feasibility_step(feasibility_solver, nlp, x, cx, normcx, Jx, ρ, ctol; kwargs...)
```

Approximately solves `min ‖c(x) - l‖`, where `l = nlp.meta.lcon`, using a trust-region Levenberg-Marquardt method.

Arguments

- `η₁::AbstractFloat = feasibility_solver.feas_η₁`: decrease the trust-region radius when `Ared/Pred < η₁`;
- `η₂::AbstractFloat = feasibility_solver.feas_η₂`: increase the trust-region radius when `Ared/Pred > η₂`;
- `σ₁::AbstractFloat = feasibility_solver.feas_σ₁`: decrease coefficient of the trust-region radius;
- `σ₂::AbstractFloat = feasibility_solver.feas_σ₂`: increase coefficient of the trust-region radius;
- `Δ₀::T = feasibility_solver.feas_Δ₀`: initial trust-region radius;
- `bad_steps_lim::Integer = feasibility_solver.bad_steps_lim`: number of consecutive bad steps before using a second-order step;
- `expected_decrease::T = feasibility_solver.feas_expected_decrease`: bad steps are when `‖c(z)‖ / ‖c(x)‖ > feas_expected_decrease`;
- `max_eval::Int = 1_000`: maximum number of evaluations;
- `max_time::AbstractFloat = 60.0`: maximum time;
- `max_feas_iter::Int = typemax(Int64)`: maximum number of iterations.

Output

- `z`, `cz`, `normcz`, `Jz`: the new iterate and updated evaluations;
- `status`: computation status. Possible outcomes are `:success`, `:max_eval`, `:max_time`, `:max_iter`, `:unknown_tired`, `:infeasible`, `:unknown`.
FletcherPenaltySolver.fps_solve — Method

```julia
fps_solve(nlp::AbstractNLPModel, x0::AbstractVector{T} = nlp.meta.x0; subsolver_verbose::Int = 0, kwargs...)
```

Compute a local minimum of a bound- and equality-constrained optimization problem using Fletcher's penalty function and the implementation described in

Estrin, R., Friedlander, M. P., Orban, D., & Saunders, M. A. (2020). Implementing a smooth exact penalty function for equality-constrained nonlinear optimization. SIAM Journal on Scientific Computing, 42(3), A1809-A1835. https://doi.org/10.1137/19M1238265

For advanced usage, the principal call to the solver uses an `NLPStopping`, see `Stopping.jl`:

```julia
fps_solve(stp::NLPStopping, fpssolver::FPSSSolver{T, QDS, US}; subsolver_verbose::Int = 0)
fps_solve(stp::NLPStopping; subsolver_verbose::Int = 0, kwargs...)
```
Arguments

- `nlp::AbstractNLPModel`: the model solved, see `NLPModels.jl`;
- `x`: initial guess. If `x` is not specified, then `nlp.meta.x0` is used.

Keyword arguments

- `fpssolver`: see `FPSSSolver`;
- `verbose::Int = 0`: if `> 0`, display iteration information of the solver;
- `subsolver_verbose::Int = 0`: if `> 0`, display iteration information of the subsolver.

All the information regarding stopping criteria can be set in the `NLPStopping` object. Additional `kwargs` are passed to the `NLPStopping`. By default, the optimality condition used to declare optimality is `Fletcher_penalty_optimality_check`.
Output

The returned value is a `GenericExecutionStats`, see `SolverCore.jl`.

If one defines a `Stopping` before calling `fps_solve`, it is possible to access all the information computed by the algorithm.

Notes

- If the problem has inequalities, we use slack variables to get only equalities and bounds via `NLPModelsModifiers.jl`.
- `stp.current_state.res` contains the gradient of Fletcher's penalty function.
- `subproblem_solver` must take an `NLPStopping` as input, see `StoppingInterface.jl`.
Callback

The callback is called at each iteration. The expected signature of the callback is `callback(nlp, solver, stats)`, and its output is ignored. Changing any of the input arguments will affect the subsequent iterations. In particular, setting `stats.status = :user` will stop the algorithm. All relevant information should be available in `nlp` and `solver`. Notably, you can access, and modify, the following:

- `solver`: see `FPSSSolver`;
- `stats`: structure holding the output of the algorithm (`GenericExecutionStats`), which contains, among other things:
  - `stats.dual_feas`: norm of the current gradient of the Lagrangian;
  - `stats.primal_feas`: norm of the current feasibility;
  - `stats.iter`: current iteration counter;
  - `stats.objective`: current objective function value;
  - `stats.solution`: current iterate;
  - `stats.multipliers`: current Lagrange multipliers estimate;
  - `stats.multipliers_L` and `stats.multipliers_U`: current Lagrange multipliers estimates for the lower and upper bounds respectively;
  - `stats.status`: current status of the algorithm. Should be `:unknown` unless the algorithm has attained a stopping criterion. Changing this to anything will stop the algorithm, but you should use `:user` to properly indicate the intention;
  - `stats.elapsed_time`: elapsed time in seconds.
Examples

```julia
julia> using FletcherPenaltySolver, ADNLPModels
julia> nlp = ADNLPModel(x -> 100 * (x[2] - x[1]^2)^2 + (x[1] - 1)^2, [-1.2; 1.0]);
julia> stats = fps_solve(nlp)
"Execution stats: first-order stationary"
```
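As an illustration of the callback mechanism described above, the following hedged sketch stops the solver early; it assumes `fps_solve` accepts a `callback` keyword argument, as is customary for JuliaSmoothOptimizers solvers:

```julia
using FletcherPenaltySolver, ADNLPModels

nlp = ADNLPModel(x -> 100 * (x[2] - x[1]^2)^2 + (x[1] - 1)^2, [-1.2; 1.0])

# Stop after 5 iterations by setting stats.status = :user,
# as documented in the Callback section above.
function stop_early(nlp, solver, stats)
  if stats.iter ≥ 5
    stats.status = :user
  end
end

stats = fps_solve(nlp, callback = stop_early)
```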
FletcherPenaltySolver.ghjvprod_nln! — Method

```julia
ghjvprod_nln!(nlp::FletcherPenaltyNLP, x, y, v, Hv; obj_weight = one(S)) where {S}
```

Redefine the NLPModel function `ghjvprod` to account for a Lagrange multiplier of size `< ncon`.

FletcherPenaltySolver.go_log — Method

```julia
go_log(stp, sub_stp, fx, ncx, mess::String, verbose)
```

Logging shortcut.
FletcherPenaltySolver.hess_nln — Method

```julia
hess_nln(nlp::FletcherPenaltyNLP, x, y; obj_weight = one(S)) where {S}
```

Redefine the NLPModel function `hess` to account for a Lagrange multiplier of size `< ncon`.

FletcherPenaltySolver.hess_nln_coord! — Method

```julia
hess_nln_coord!(nlp::FletcherPenaltyNLP, x, y, vals; obj_weight = one(S)) where {S}
```

Redefine the NLPModel function `hess_coord!` to account for a Lagrange multiplier of size `< ncon`.

FletcherPenaltySolver.hprod_nln! — Method

```julia
hprod_nln!(nlp::FletcherPenaltyNLP, x, y, v, Hv; obj_weight = one(S)) where {S}
```

Redefine the NLPModel function `hprod` to account for a Lagrange multiplier of size `< ncon`.
FletcherPenaltySolver.linear_system2 — Method

```julia
p1, q1, p2, q2 = linear_system2(nlp, x)
```

Call `solve_two_mixed(nlp, x, nlp.gx, nlp.cx)`, see `solve_two_mixed`.

FletcherPenaltySolver.random_restoration! — Method

```julia
random_restoration!(meta, stp, sub_stp)
```

Add a random perturbation to the current iterate.

FletcherPenaltySolver.restoration_feasibility! — Method

```julia
restoration_feasibility!(feasibility_solver, meta, stp, sub_stp, feas_tol, ncx, verbose)
```

Try to find a feasible point, see `feasibility_step`.
FletcherPenaltySolver.solve_least_norm — Method

```julia
solve_least_norm(qdsolver::IterativeSolver, A, b, λ)
```

Solve the least-norm problem with regularization `δ`.

FletcherPenaltySolver.solve_least_square — Method

```julia
solve_least_square(qdsolver::IterativeSolver, A, b, λ)
```

Solve the least-squares problem with regularization `λ`.
FletcherPenaltySolver.solve_two_extras — Function

```julia
invJtJJv, invJtJSsv = solve_two_extras(nlp, x, rhs1, rhs2)
```

The `IterativeSolver` variant solves successively a regularized least-squares problem, see `solve_least_square`, and a regularized `minres`. It returns only a warning if the method failed.

The `LDLtSolver` variant uses successively a regularized `cgls` and a regularized `minres`.
FletcherPenaltySolver.solve_two_least_squares — Function

```julia
p1, q1, p2, q2 = solve_two_least_squares(nlp, x, rhs1, rhs2)
```

Solve successively two least-squares problems regularized by `√nlp.δ`:

\[ \min_q \; \| ∇c^T q - rhs \| + δ \| q \|^2 \]

`rhs1` and `rhs2` are both of size `nlp.meta.nvar`.

The `IterativeSolver` variant uses two calls to a `Krylov.jl` method, see `solve_least_square`. Note that `nlp.Aop` is not re-evaluated in this case. It returns only a warning if the method failed.

The `LDLtSolver` variant uses an LDLᵀ factorization to solve the large system.
FletcherPenaltySolver.solve_two_mixed — Function

```julia
p1, q1, p2, q2 = solve_two_mixed(nlp, x, rhs1, rhs2)
```

Solve successively a least-squares problem regularized by `√nlp.δ`:

\[ \min_q \; \| ∇c^T q - rhs \| + δ \| q \|^2 \]

and a least-norm problem.

`rhs1` is of size `nlp.meta.nvar`, and `rhs2` is of size `nlp.meta.ncon`.

The `IterativeSolver` variant uses two calls to a `Krylov.jl` method, see `solve_least_square` and `solve_least_norm`. It returns only a warning if the method failed.

The `LDLtSolver` variant uses an LDLᵀ factorization to solve the large system.
FletcherPenaltySolver.update_parameters! — Method

```julia
update_parameters!(meta, sub_stp, feas)
```

Update `σ`; depending on the current iterate, also update `ρ`.

FletcherPenaltySolver.update_parameters_unbdd! — Method

```julia
update_parameters_unbdd!(meta, sub_stp, feas)
```

Start or update `δ`, then call `update_parameters!(meta, sub_stp, feas)`.