
FletcherPenaltySolver Tutorial

In this tutorial, we explore, on small instances, the various possibilities offered by fps_solve, defined in the package FletcherPenaltySolver.

Type-stable algorithm

The algorithm is implemented in pure Julia, so if one also chooses a subproblem solver in pure Julia, we can exploit Julia's type stability to solve optimization problems in a precision other than Float64. In the following example, we use tron from JSOSolvers.jl on a simple example in Float32.

using ADNLPModels, FletcherPenaltySolver, JSOSolvers
T = Float32
nlp = ADNLPModel(x -> (1 - x[1])^2, T[-1.2; 1.0], x -> [10 * (x[2] - x[1]^2)], T[0.0], T[0.0])
stats = fps_solve(nlp, hessian_approx = Val(2), subproblem_solver = tron, rtol = T(1e-4), verbose = 1)
(stats.dual_feas, stats.primal_feas, stats.status, typeof(stats.solution))
(6.1559775f-5, 5.9604645f-7, :first_order, Vector{Float32})
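
Since the implementation is generic with respect to the floating-point type, the same call also works in other precisions. The following sketch uses BigFloat and assumes that all dependencies (here ADNLPModels and tron) support the chosen type; the names nlp_big and stats_big are ours:

T = BigFloat
nlp_big = ADNLPModel(x -> (1 - x[1])^2, T[-1.2; 1.0], x -> [10 * (x[2] - x[1]^2)], T[0.0], T[0.0])
stats_big = fps_solve(nlp_big, hessian_approx = Val(2), subproblem_solver = tron, rtol = T(1e-10))
typeof(stats_big.solution)  # expected: Vector{BigFloat}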

A factorization-free solver

The main advantage of fps_solve is the possibility to rely solely on Hessian-vector and Jacobian-vector products, whenever one uses a subproblem solver with the same property. It is then unnecessary to compute and store these matrices explicitly. In the following example, we choose a problem with equality constraints from OptimizationProblems.jl.

using ADNLPModels, FletcherPenaltySolver, JSOSolvers, OptimizationProblems
nlp = OptimizationProblems.ADNLPProblems.hs28()
stats = fps_solve(nlp, subproblem_solver = tron, qds_solver = :iterative)
(stats.dual_feas, stats.primal_feas, stats.status, stats.elapsed_time)
(3.332255106767791e-14, 1.2212453270876722e-14, :first_order, 1.712021827697754)

Exploring nlp's counters, we can see that no Hessian or Jacobian matrix has been evaluated.

nlp.counters
  Counters:
             obj: █⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 4                 grad: █⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 5                 cons: █⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 5     
        cons_lin: █⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 5             cons_nln: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0                 jcon: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0     
           jgrad: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0                  jac: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0              jac_lin: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0     
         jac_nln: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0                jprod: ████████████████████ 104          jprod_lin: ████████████████████ 104   
       jprod_nln: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0               jtprod: ████████████████████ 109         jtprod_lin: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0     
      jtprod_nln: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0                 hess: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0                hprod: █████████████⋅⋅⋅⋅⋅⋅⋅ 67    
           jhess: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0               jhprod: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0     
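
The same information is available programmatically through the neval_* functions of NLPModels.jl, which can be handy in tests; a small check, assuming the solve above has just run:

using NLPModels
(neval_hess(nlp), neval_jac(nlp))  # expected: (0, 0), only matrix-vector products were used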

We can compare this result with ipopt, which uses the Jacobian and Hessian matrices explicitly.

using NLPModels, NLPModelsIpopt
reset!(nlp);
stats = fps_solve(nlp, subproblem_solver = ipopt, qds_solver = :iterative)
(stats.dual_feas, stats.primal_feas, stats.status, stats.elapsed_time)
(1.1603416640225207e-13, 2.220446049250313e-16, :first_order, 0.6078991889953613)
nlp.counters
  Counters:
             obj: █████⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 2                 grad: ████████⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 3                 cons: ████████⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 3     
        cons_lin: ████████⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 3             cons_nln: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0                 jcon: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0     
           jgrad: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0                  jac: ███⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 1              jac_lin: ███⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 1     
         jac_nln: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0                jprod: ██████████⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 4            jprod_lin: ██████████⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 4     
       jprod_nln: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0               jtprod: ████████████████████ 8           jtprod_lin: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0     
      jtprod_nln: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0                 hess: █████⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 2                hprod: ████████████████████ 8     
           jhess: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0               jhprod: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0     

Stopping-solver
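
The function fps_solve can also be called with a Stopping structure from Stopping.jl, which gathers the problem, the current state, and the stopping criteria, and remains available for inspection after the run.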

using ADNLPModels, FletcherPenaltySolver, Stopping
f(x) = (x[1] - 1)^2 + 4 * (x[2] - x[1]^2)^2
c(x) = [x[1]^2 + x[2]^2 - 2]
T = Float64
x0 = T[-1.2; 1.0]
ℓ, u = zeros(T, 2), 2 * ones(T, 2)
nlp = ADNLPModel(f, x0, ℓ, u, c, zeros(1), zeros(1))

stp = NLPStopping(nlp)
stats = fps_solve(stp)
"Execution stats: first-order stationary"

It is then possible to explore the various quantities computed by the algorithm. For instance, we can recompute the gradient of the Lagrangian.

state = stp.current_state
state.gx + state.Jx' * state.lambda
2-element Vector{Float64}:
 0.0
 0.0
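
To check stationarity numerically, one can also take the norm of this vector; a minimal sketch:

using LinearAlgebra
norm(state.gx + state.Jx' * state.lambda, Inf)  # expected to be (close to) zero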

Another possibility is to reuse the Stopping structure for another solve.

new_x0 = 4 * ones(2)
reinit!(stp, rstate = true, x = new_x0)
Stopping.reset!(stp.pb)
stats = fps_solve(stp)
"Execution stats: first-order stationary"

We refer to Stopping.jl and https://solverstoppingjulia.github.io for tutorials and documentation.