Reference

FletcherPenaltySolver.AlgoData (Type)
AlgoData(; kwargs...) 
AlgoData(T::DataType; kwargs...)

Structure containing all the parameters used in the fps_solve call. T is the datatype used in the algorithm, by default Float64. Returns an AlgoData structure.

Arguments

The keyword arguments may include:

  • σ_0::Real = T(1e3): initial value of the subproblem parameter σ;
  • σ_max::Real = 1 / √eps(T): maximum value of the subproblem parameter σ;
  • σ_update::Real = T(2): update factor for the subproblem parameter σ;
  • ρ_0::Real = one(T): initial value of the subproblem parameter ρ;
  • ρ_max::Real = 1 / √eps(T): maximum value of the subproblem parameter ρ;
  • ρ_update::Real = T(2): update factor for the subproblem parameter ρ;
  • δ_0::Real = √eps(T): initial value of the subproblem parameter δ;
  • δ_max::Real = 1 / √eps(T): maximum value of the subproblem parameter δ;
  • δ_update::Real = T(10): update factor for the subproblem parameter δ;
  • η_1::Real = zero(T): initial value of the subproblem parameter η;
  • η_update::Real = one(T): update factor for the subproblem parameter η;
  • yM::Real = typemax(T): bound on the Lagrange multipliers;
  • Δ::Real = T(0.95): expected decrease in feasibility between two iterations;
  • subproblem_solver::Function = ipopt: solver used for the subproblem;
  • subpb_unbounded_threshold::Real = 1 / √eps(T): if the subproblem objective falls below the negative of this value, the subproblem is declared unbounded;
  • subsolver_max_iter::Int = 20000: maximum number of iterations for the subproblem solver;
  • atol_sub::Function = atol -> atol: absolute tolerance for the subproblem, as a function of atol;
  • rtol_sub::Function = rtol -> rtol: relative tolerance for the subproblem, as a function of rtol;
  • hessian_approx = Val(2): either Val(1) or Val(2), selects the Hessian approximation;
  • convex_subproblem::Bool = false: true if the subproblem is convex; useful to set the convex option in Knitro;
  • lagrange_bound::T = 1 / sqrt(eps(T)): upper bound on the Lagrange multipliers.

For more details, we refer to the package documentation fine-tuneFPS.md.
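
As a minimal sketch, an AlgoData can be built by overriding a few defaults (keyword names as listed above; the name is qualified with the module in case it is not exported):

julia> using FletcherPenaltySolver
julia> meta = FletcherPenaltySolver.AlgoData(Float64; σ_0 = 1e2, subsolver_max_iter = 5000)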

FletcherPenaltySolver.FPSSSolver (Type)
FPSSSolver(nlp::AbstractNLPModel [, x0 = nlp.meta.x0]; kwargs...)
FPSSSolver(stp::NLPStopping; kwargs...)

Structure regrouping all the structures used during the fps_solve call. It returns a FPSSSolver structure.

Arguments

The keyword arguments may include:

  • stp::NLPStopping: Stopping structure for this algorithm workflow;
  • meta::AlgoData{T}: see AlgoData;
  • workspace: allocated space for the solver itself;
  • qdsolver: solver structure for the linear algebra part, containing the allocations for this part. By default an LDLtSolver, but an alternative is IterativeSolver;
  • subproblem_solver::AbstractOptimizationSolver: by default a subproblem_solver_correspondence[Symbol(meta.subproblem_solver)];
  • sub_stats::GenericExecutionStats: stats structure for the result of subproblem_solver;
  • feasibility_solver: by default a GNSolver, see GNSolver;
  • model::FletcherPenaltyNLP: subproblem;
  • sub_stp::NLPStopping: Stopping structure for the subproblem.

Note:

  • subproblem_solver is accessible from the subproblem_solver_correspondence::Dict.
  • the qdsolver is accessible from the dictionary qdsolver_correspondence.
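
As a minimal sketch, the solver structure can be pre-allocated from a model and inspected:

julia> using FletcherPenaltySolver, ADNLPModels
julia> nlp = ADNLPModel(x -> 100 * (x[2] - x[1]^2)^2 + (x[1] - 1)^2, [-1.2; 1.0]);
julia> fpss = FletcherPenaltySolver.FPSSSolver(nlp);
julia> fpss.meta  # the AlgoData structure described above
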
FletcherPenaltySolver.FletcherPenaltyNLP (Type)
FletcherPenaltyNLP(nlp, σ, hessian_approx, [x0 = nlp.meta.x0]; qds = LDLtSolver(nlp, S(0)))
FletcherPenaltyNLP(nlp, σ, ρ, δ, hessian_approx, [x0 = nlp.meta.x0]; qds = LDLtSolver(nlp, S(0)))
FletcherPenaltyNLP(nlp; σ_0 = 1, ρ_0 = 0, δ_0 = 0, hessian_approx = Val(2), x0 = nlp.meta.x0, qds = LDLtSolver(nlp, S(0)))

We consider here the implementation of Fletcher's exact penalty method for the minimization problem:

\[\begin{aligned} \min_{x} \quad & f(x) \\ \text{s.t.} \quad & c(x) = ℓ, \quad l ≤ x ≤ u, \end{aligned}\]

using Fletcher penalty function:

\[\begin{aligned} \min_{x} \quad & f(x) - (c(x) - ℓ)^T y_s(x) + \tfrac{ρ}{2} \|c(x) - ℓ\|^2_2 \\ \text{s.t.} \quad & l ≤ x ≤ u, \end{aligned}\]

where

\[ y_s(x) ∈ \arg\min_y \ \tfrac{1}{2} \|A(x) y - g(x)\|^2_2 + σ (c(x) - ℓ)^T y + \tfrac{δ}{2} \|y\|^2_2. \]

Arguments

  • nlp::AbstractNLPModel: the model solved, see NLPModels.jl;
  • x0::AbstractVector: initial guess. If x0 is not specified, then nlp.meta.x0 is used;
  • σ, ρ, δ: parameters of the subproblem;
  • hessian_approx: either Val(1) or Val(2) for the Hessian approximation;
  • qds: solver structure for the linear algebra computations, see LDLtSolver or IterativeSolver.

Notes:

  • Evaluation of the obj, grad, and objgrad functions evaluates the functions from the original nlp. These values are stored in fx, cx, gx.
  • The value of the penalty vector ys is also stored.
  • The Hessian structure is dense.

Examples

julia> using FletcherPenaltySolver, ADNLPModels
julia> nlp = ADNLPModel(x -> 100 * (x[2] - x[1]^2)^2 + (x[1] - 1)^2, [-1.2; 1.0])
julia> fp_sos  = FletcherPenaltyNLP(nlp)
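
The subproblem is itself an AbstractNLPModel, so, as a minimal sketch, it can be evaluated with the usual NLPModels.jl API:

julia> using NLPModels
julia> obj(fp_sos, nlp.meta.x0)
julia> grad(fp_sos, nlp.meta.x0)
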
FletcherPenaltySolver.GNSolver (Type)
GNSolver(x, y; kwargs...)

Structure containing all the parameters used in the feasibility step. x is an initial guess, and y is an initial guess for the Lagrange multiplier. Returns a GNSolver structure.

Arguments

The keyword arguments may include:

  • η₁::T=T(1e-3): Feasibility step: decrease the trust-region radius when Ared/Pred < η₁.
  • η₂::T=T(0.66): Feasibility step: increase the trust-region radius when Ared/Pred > η₂.
  • σ₁::T=T(0.25): Feasibility step: decrease coefficient of the trust-region radius.
  • σ₂::T=T(2.0): Feasibility step: increase coefficient of the trust-region radius.
  • Δ₀::T=one(T): Feasibility step: initial radius.
  • bad_steps_lim::Integer=3: Feasibility step: consecutive bad steps before using a second order step.
  • feas_expected_decrease::T=T(0.95): Feasibility step: bad steps are when ‖c(z)‖ / ‖c(x)‖ > feas_expected_decrease.
  • TR_compute_step = LsmrSolver(length(y0), length(x0), S): Compute the direction in the feasibility step.
  • aggressive_step = CgSolver(length(x0), length(x0), S): Compute the direction in the feasibility step in aggressive mode.
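
As a minimal sketch, given an AbstractNLPModel nlp, the structure can be built from initial primal and dual guesses (keyword names as listed above):

julia> gnsolver = FletcherPenaltySolver.GNSolver(nlp.meta.x0, nlp.meta.y0; bad_steps_lim = 2)
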
FletcherPenaltySolver.LDLtSolver (Type)
LDLtSolver(nlp::AbstractNLPModel, ::T) <: QDSolver

It uses LDLFactorizations.jl methods to solve least-squares and least-norm problems.

FletcherPenaltySolver.Fletcher_penalty_optimality_check (Method)
Fletcher_penalty_optimality_check(pb::AbstractNLPModel, state::NLPAtX)

Optimality function used by default in the algorithm. An alternative is to use the function KKT from Stopping.jl.

The function returns a vector of length ncon + nvar containing:

  • |c(x) - lcon| / |x|₂
  • res / |λ|₂; x - max(min(x - res, uvar), lvar) if it has bounds

The fields x, cx and res need to be filled. If state.lambda is nothing then we take |λ|₂=1.

FletcherPenaltySolver.TR_lsmr (Method)
TR_lsmr(solver, cz, Jz, ctol, Δ, normcz, Jd)

Compute a direction d such that

\[\begin{aligned} \min_{d} \quad & \|cz + Jz d \| \\ \text{s.t.} \quad & \|d\| \leq \Delta, \end{aligned}\]

using the lsmr method from Krylov.jl.

Output

  • d: solution
  • Jd: product of the solution with J.
  • infeasible: true if the problem is infeasible.
  • solved: true if the problem has been successfully solved.
FletcherPenaltySolver.cons_norhs! (Method)
cons_norhs!(nlp::FletcherPenaltyNLP, x, cx)

Redefine the NLPModel function cons! to account for a non-zero right-hand side in the equality constraint. It returns cons(nlp, x) - nlp.meta.lcon.

FletcherPenaltySolver.feasibility_step (Method)
feasibility_step(feasibility_solver, nlp, x, cx, normcx, Jx, ρ, ctol; kwargs...)

Approximately solves min ‖c(x) - l‖, where l is nlp.meta.lcon, using a trust-region Levenberg-Marquardt method.

Arguments

  • η₁::AbstractFloat = feasibility_solver.feas_η₁: decrease the trust-region radius when Ared/Pred < η₁.
  • η₂::AbstractFloat = feasibility_solver.feas_η₂: increase the trust-region radius when Ared/Pred > η₂.
  • σ₁::AbstractFloat = feasibility_solver.feas_σ₁: decrease coefficient of the trust-region radius.
  • σ₂::AbstractFloat = feasibility_solver.feas_σ₂: increase coefficient of the trust-region radius.
  • Δ₀::T = feasibility_solver.feas_Δ₀: initial trust-region radius.
  • bad_steps_lim::Integer = feasibility_solver.bad_steps_lim: consecutive bad steps before using a second order step.
  • expected_decrease::T = feasibility_solver.feas_expected_decrease: bad steps are when ‖c(z)‖ / ‖c(x)‖ > feas_expected_decrease.
  • max_eval::Int = 1_000: maximum evaluations.
  • max_time::AbstractFloat = 60.0: maximum time.
  • max_feas_iter::Int = typemax(Int64): maximum number of iterations.

Output

  • z, cz, normcz, Jz: the new iterate, and updated evaluations.
  • status: Computation status. Possible outcomes are: :success, :max_eval, :max_time, :max_iter, :unknown_tired, :infeasible, :unknown.
FletcherPenaltySolver.fps_solve (Method)
fps_solve(nlp::AbstractNLPModel, x0::AbstractVector{T} = nlp.meta.x0; subsolver_verbose::Int = 0, kwargs...)

Compute a local minimum of a bound and equality-constrained optimization problem using Fletcher's penalty function and the implementation described in

Estrin, R., Friedlander, M. P., Orban, D., & Saunders, M. A. (2020).
Implementing a smooth exact penalty function for equality-constrained nonlinear optimization.
SIAM Journal on Scientific Computing, 42(3), A1809-A1835.
https://doi.org/10.1137/19M1238265

For advanced usage, the principal call to the solver uses an NLPStopping, see Stopping.jl.

fps_solve(stp::NLPStopping, fpssolver::FPSSSolver{T, QDS, US}; subsolver_verbose::Int = 0)
fps_solve(stp::NLPStopping; subsolver_verbose::Int = 0, kwargs...)

Arguments

  • nlp::AbstractNLPModel: the model solved, see NLPModels.jl;
  • x0: Initial guess. If x0 is not specified, then nlp.meta.x0 is used.

Keyword arguments

  • fpssolver: see FPSSSolver;
  • verbose::Int = 0: if > 0, display iteration information of the solver;
  • subsolver_verbose::Int = 0: if > 0, display iteration information of the subsolver;

All the information regarding stopping criteria can be set in the NLPStopping object. Additional kwargs are given to the NLPStopping. By default, the optimality condition used to declare optimality is Fletcher_penalty_optimality_check.

Output

The returned value is a GenericExecutionStats, see SolverCore.jl.

If one defines a Stopping before calling fps_solve, it is possible to access all the information computed by the algorithm, as in the sketch below.
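
As a minimal sketch, assuming Stopping.jl's one-argument NLPStopping constructor:

julia> using FletcherPenaltySolver, ADNLPModels, Stopping
julia> nlp = ADNLPModel(x -> 100 * (x[2] - x[1]^2)^2 + (x[1] - 1)^2, [-1.2; 1.0]);
julia> stp = NLPStopping(nlp);
julia> stats = fps_solve(stp);
julia> stp.current_state.res  # gradient of Fletcher's penalty function at the final iterate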

Notes

  • If the problem has inequalities, we use slack variables to get only equalities and bounds via NLPModelsModifiers.jl.
  • stp.current_state.res contains the gradient of Fletcher's penalty function.
  • subproblem_solver must take an NLPStopping as input, see StoppingInterface.jl.

Callback

The callback is called at each iteration. The expected signature of the callback is callback(nlp, solver, stats), and its output is ignored. Changing any of the input arguments will affect the subsequent iterations. In particular, setting stats.status = :user will stop the algorithm. All relevant information should be available in nlp and solver. Notably, you can access, and modify, the following:

  • solver: see FPSSSolver
  • stats: structure holding the output of the algorithm (GenericExecutionStats), which contains, among other things:
    • stats.dual_feas: norm of current gradient of the Lagrangian;
    • stats.primal_feas: norm of current feasibility;
    • stats.iter: current iteration counter;
    • stats.objective: current objective function value;
    • stats.solution: current iterate;
    • stats.multipliers: current Lagrange multipliers estimate;
    • stats.multipliers_L and stats.multipliers_U: current Lagrange multipliers estimate for the lower and upper bounds respectively;
    • stats.status: current status of the algorithm. Should be :unknown unless the algorithm has attained a stopping criterion. Changing this to anything will stop the algorithm, but you should use :user to properly indicate the intention.
    • stats.elapsed_time: elapsed time in seconds.
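
As a minimal sketch, assuming the callback is passed through a callback keyword as in other JSO-style solvers, the following stops the algorithm after five iterations:

function my_callback(nlp, solver, stats)
  if stats.iter ≥ 5
    stats.status = :user  # as documented above, this stops the algorithm
  end
end
stats = fps_solve(nlp, callback = my_callback)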

Examples

julia> using FletcherPenaltySolver, ADNLPModels
julia> nlp = ADNLPModel(x -> 100 * (x[2] - x[1]^2)^2 + (x[1] - 1)^2, [-1.2; 1.0]);
julia> stats = fps_solve(nlp)
"Execution stats: first-order stationary"
FletcherPenaltySolver.ghjvprod_nln! (Method)
ghjvprod_nln!(nlp::FletcherPenaltyNLP, x, y, v, Hv; obj_weight = one(S)) where {S}

Redefine the NLPModel function ghjvprod to account for Lagrange multiplier of size < ncon.

FletcherPenaltySolver.hess_nln (Method)
hess_nln(nlp::FletcherPenaltyNLP, x, y; obj_weight = one(S)) where {S}

Redefine the NLPModel function hess to account for Lagrange multiplier of size < ncon.

FletcherPenaltySolver.hess_nln_coord! (Method)
hess_nln_coord!(nlp::FletcherPenaltyNLP, x, y, vals; obj_weight = one(S)) where {S}

Redefine the NLPModel function hess_coord! to account for Lagrange multiplier of size < ncon.

FletcherPenaltySolver.hprod_nln! (Method)
hprod_nln!(nlp::FletcherPenaltyNLP, x, y, v, Hv; obj_weight = one(S)) where {S}

Redefine the NLPModel function hprod to account for Lagrange multiplier of size < ncon.

FletcherPenaltySolver.solve_two_extras (Function)
invJtJJv, invJtJSsv = solve_two_extras(nlp, x, rhs1, rhs2)

The IterativeSolver variant successively solves a regularized least-squares problem, see solve_least_square, and a regularized minres. It only returns a warning if the method fails.

The LDLtSolver variant successively uses a regularized cgls and a regularized minres.

FletcherPenaltySolver.solve_two_least_squares (Function)
p1, q1, p2, q2 = solve_two_least_squares(nlp, x, rhs1, rhs2)

Solve successively two least-squares problems regularized by √nlp.δ:

\[ \min_q \ \| ∇c^T q - rhs \| + δ \| q \|^2 \]

rhs1 and rhs2 are both of size nlp.meta.nvar.

The IterativeSolver variant uses two calls to a Krylov.jl method, see solve_least_square. Note that nlp.Aop is not re-evaluated in this case. It only returns a warning if the method fails.

The LDLtSolver variant uses an LDLt factorization to solve the large system.

FletcherPenaltySolver.solve_two_mixed (Function)
p1, q1, p2, q2 = solve_two_mixed(nlp, x, rhs1, rhs2)

Solve successively a least-squares problem regularized by √nlp.δ:

\[ \min_q \ \| ∇c^T q - rhs \| + δ \| q \|^2 \]

and a least-norm problem.

rhs1 is of size nlp.meta.nvar, and rhs2 is of size nlp.meta.ncon.

The IterativeSolver variant uses two calls to a Krylov.jl method, see solve_least_square and solve_least_norm. It only returns a warning if the method fails.

The LDLtSolver variant uses an LDLt factorization to solve the large system.