SimpleNonlinearSolve.jl

These methods can be used independently of the rest of NonlinearSolve.jl.

Interval Methods

These methods are suited for interval (scalar) root-finding problems, i.e. IntervalNonlinearProblem.

SimpleNonlinearSolve.ITPType
ITP(; k1::Real = 0.007, k2::Real = 1.5, n0::Int = 10)

ITP (Interpolate Truncate & Project)

Use the ITP method to find a root of a bracketed function, with a convergence rate between 1 and 1.62.

This method was introduced in the paper "An Enhancement of the Bisection Method Average Performance Preserving Minmax Optimality" (https://doi.org/10.1145/3423597) by I. F. D. Oliveira and R. H. C. Takahashi.

Tuning Parameters

The following keyword parameters are accepted.

  • n₀::Int = 10, the 'slack'. Must not be negative. When n₀ = 0 the worst case is identical to that of bisection, but increasing n₀ provides greater opportunity for superlinearity.
  • κ₁::Real = 0.007. Must not be negative. The recommended value is 0.2/(x₂ - x₁). Lower values produce tighter asymptotic behaviour, while higher values improve the steady-state behaviour when truncation is not helpful.
  • κ₂::Real = 1.5. Must lie in [1, 1 + ϕ ≈ 2.62). Higher values allow for a greater convergence rate, but also make the method more susceptible to worst-case performance. In practice, κ₂ = 2 seems to work well due to the computational simplicity, since κ₂ is used as an exponent in the method.

Worst Case Performance

n½ + n₀ iterations, where n½ is the number of iterations required by bisection (n½ = ⌈log₂(Δx / (2·tol))⌉).

Asymptotic Performance

If f is twice differentiable and the root is simple, then with n₀ > 0 the convergence rate is √κ₂.
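
A minimal usage sketch (assuming SimpleNonlinearSolve re-exports IntervalNonlinearProblem and solve from SciMLBase; the residual and bracket are chosen purely for illustration):

    using SimpleNonlinearSolve

    # Root of f(u) = u^2 - p on the bracket (0, 2), i.e. u = sqrt(2).
    f(u, p) = u^2 - p
    prob = IntervalNonlinearProblem(f, (0.0, 2.0), 2.0)
    sol = solve(prob, ITP())   # defaults: k1 = 0.007, k2 = 1.5, n0 = 10
    sol.u                      # ≈ 1.4142135623730951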

SimpleNonlinearSolve.AlefeldType
Alefeld()

An implementation of algorithm 4.2 from Alefeld.

The paper introduced two new algorithms. We choose to implement Algorithm 4.2 rather than Algorithm 4.1 because, in a certain sense, Algorithm 4.2 is an optimal procedure.

SimpleNonlinearSolve.BisectionType
Bisection(; exact_left = false, exact_right = false)

A common bisection method.

Keyword Arguments

  • exact_left: whether to require that the function value at the left endpoint of the returned interval is exactly zero. Defaults to false.
  • exact_right: whether to require that the function value at the right endpoint of the returned interval is exactly zero. Defaults to false.
Warning

Currently, the keyword arguments are not implemented.
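
For comparison, a sketch of the same kind of bracketed problem solved with Bisection (the keyword arguments are omitted since they are currently ignored; the abstol value is arbitrary):

    using SimpleNonlinearSolve

    f(u, p) = u^2 - p
    prob = IntervalNonlinearProblem(f, (0.0, 2.0), 2.0)
    sol = solve(prob, Bisection(); abstol = 1e-9)   # plain bisection on the bracket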

General Methods

These methods are suited for any general nonlinear root-finding problem, i.e. NonlinearProblem.

Solver                     | In-place | Out of Place | Non-Allocating (Scalars) | Non-Allocating (SArray)
---------------------------|----------|--------------|--------------------------|------------------------
SimpleNewtonRaphson        | ✔️       | ✔️           | ✔️                       | ✔️
SimpleBroyden              | ✔️       | ✔️           | ✔️                       | ✔️
SimpleHalley               | ❌       | ✔️           | ✔️                       | ❌
SimpleKlement              | ✔️       | ✔️           | ✔️                       | ✔️
SimpleTrustRegion          | ✔️       | ✔️           | ✔️                       | ✔️
SimpleDFSane               | ✔️       | ✔️           | ✔️[1]                    | ✔️
SimpleLimitedMemoryBroyden | ✔️       | ✔️           | ✔️                       | ✔️[2]

The algorithms which are non-allocating can be used directly inside GPU Kernels[3]. See PSOGPU.jl for more details.
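
A sketch of the non-allocating static-array path (assumes StaticArrays.jl is installed; the solver and residual are chosen for illustration only):

    using SimpleNonlinearSolve, StaticArrays

    f(u, p) = u .* u .- p                     # out-of-place residual
    prob = NonlinearProblem(f, SA[1.0, 1.0], 2.0)
    sol = solve(prob, SimpleNewtonRaphson())  # non-allocating for SArray inputs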

SimpleNonlinearSolve.SimpleNewtonRaphsonType
SimpleNewtonRaphson(autodiff)
SimpleNewtonRaphson(; autodiff = nothing)

A low-overhead implementation of Newton-Raphson. This method is non-allocating on scalar and static array problems.

Note

As part of the decreased overhead, this method omits some of the higher level error catching of the other methods. Thus, to see better error messages, use one of the other methods like NewtonRaphson.

Keyword Arguments

  • autodiff: determines the backend used for the Jacobian. Defaults to nothing. Valid choices are AutoPolyesterForwardDiff(), AutoForwardDiff() or AutoFiniteDiff().
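
A minimal scalar sketch with an explicit backend (AutoForwardDiff() is defined in ADTypes.jl, which recent versions of SimpleNonlinearSolve re-export; otherwise add using ADTypes):

    using SimpleNonlinearSolve

    f(u, p) = u^2 - p                 # scalar residual, root at u = sqrt(p)
    prob = NonlinearProblem(f, 1.0, 2.0)
    sol = solve(prob, SimpleNewtonRaphson(; autodiff = AutoForwardDiff()))
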
SimpleNonlinearSolve.SimpleBroydenType
SimpleBroyden(; linesearch = Val(false), alpha = nothing)

A low-overhead implementation of Broyden. This method is non-allocating on scalar and static array problems.

Keyword Arguments

  • linesearch: If linesearch is Val(true), then we use the LiFukushimaLineSearch [9] line search else no line search is used. For advanced customization of the line search, use Broyden from NonlinearSolve.jl.
  • alpha: scales the initial Jacobian guess. If it is nothing, the scaling is computed as 2 * norm(fu) / max(norm(u), true).
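
A sketch showing the line-search toggle (the problem itself is arbitrary):

    using SimpleNonlinearSolve

    f(u, p) = u .* u .- p
    prob = NonlinearProblem(f, [1.0, 1.0], 2.0)

    # Val(true) enables the Li-Fukushima line search; alpha = nothing keeps the
    # default initial-Jacobian scaling.
    sol = solve(prob, SimpleBroyden(; linesearch = Val(true)))
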
SimpleNonlinearSolve.SimpleHalleyType
SimpleHalley(autodiff)
SimpleHalley(; autodiff = AutoForwardDiff())

A low-overhead implementation of Halley's Method.

Note

As part of the decreased overhead, this method omits some of the higher level error catching of the other methods. Thus, to see better error messages, use one of the other methods like NewtonRaphson.

Keyword Arguments

  • autodiff: determines the backend used for the Hessian. Defaults to AutoForwardDiff(). Valid choices are AutoForwardDiff() or AutoFiniteDiff().
Warning

In-place problems are currently not supported by this method.
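
A sketch with an out-of-place scalar problem, which is the supported case:

    using SimpleNonlinearSolve

    f(u, p) = u^3 - p                 # out-of-place residual, root at u = cbrt(p)
    prob = NonlinearProblem(f, 1.0, 27.0)
    sol = solve(prob, SimpleHalley())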

SimpleNonlinearSolve.SimpleTrustRegionType
SimpleTrustRegion(; autodiff = AutoForwardDiff(), max_trust_radius::Real = 0.0,
    initial_trust_radius::Real = 0.0, step_threshold::Real = 0.1,
    shrink_threshold::Real = 0.25, expand_threshold::Real = 0.75,
    shrink_factor::Real = 0.25, expand_factor::Real = 2.0, max_shrink_times::Int = 32)

A low-overhead implementation of a trust-region solver. This method is non-allocating on scalar and static array problems.

Keyword Arguments

  • autodiff: determines the backend used for the Jacobian. Defaults to AutoForwardDiff(). Valid choices are AutoPolyesterForwardDiff(), AutoForwardDiff() or AutoFiniteDiff().
  • max_trust_radius: the maximum radius of the trust region. Defaults to max(norm(f(u0)), maximum(u0) - minimum(u0)).
  • initial_trust_radius: the initial trust region radius. Defaults to max_trust_radius / 11.
  • step_threshold: the threshold for taking a step. In every iteration, the threshold is compared with a value r, which is the actual reduction in the objective function divided by the predicted reduction. If step_threshold > r the model is not a good approximation, and the step is rejected. Defaults to 0.1. For more details, see Rahpeymaii, F.
  • shrink_threshold: the threshold for shrinking the trust region radius. In every iteration, the threshold is compared with a value r which is the actual reduction in the objective function divided by the predicted reduction. If shrink_threshold > r the trust region radius is shrunk by shrink_factor. Defaults to 0.25. For more details, see Rahpeymaii, F.
  • expand_threshold: the threshold for expanding the trust region radius. If a step is taken, i.e step_threshold < r (with r defined in shrink_threshold), a check is also made to see if expand_threshold < r. If that is true, the trust region radius is expanded by expand_factor. Defaults to 0.75.
  • shrink_factor: the factor to shrink the trust region radius with if shrink_threshold > r (with r defined in shrink_threshold). Defaults to 0.25.
  • expand_factor: the factor to expand the trust region radius with if expand_threshold < r (with r defined in shrink_threshold). Defaults to 2.0.
  • max_shrink_times: the maximum number of times to shrink the trust region radius in a row. If max_shrink_times is exceeded, the algorithm returns. Defaults to 32.
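
A sketch overriding two of the thresholds (values chosen for illustration; all other keywords keep their defaults):

    using SimpleNonlinearSolve

    f(u, p) = u .* u .- p
    prob = NonlinearProblem(f, [1.0, 1.0], 2.0)
    alg = SimpleTrustRegion(; shrink_threshold = 0.1, expand_threshold = 0.9)
    sol = solve(prob, alg)
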
SimpleNonlinearSolve.SimpleDFSaneType
SimpleDFSane(; σ_min::Real = 1e-10, σ_max::Real = 1e10, σ_1::Real = 1.0,
    M::Union{Int, Val} = Val(10), γ::Real = 1e-4, τ_min::Real = 0.1, τ_max::Real = 0.5,
    nexp::Int = 2, η_strategy::Function = (f_1, k, x, F) -> f_1 ./ k^2)

A low-overhead implementation of the df-sane method for solving large-scale nonlinear systems of equations. For in depth information about all the parameters and the algorithm, see La Cruz et al. [2].

Keyword Arguments

  • σ_min: the minimum value of the spectral coefficient σ_k which is related to the step size in the algorithm. Defaults to 1e-10.
  • σ_max: the maximum value of the spectral coefficient σ_k which is related to the step size in the algorithm. Defaults to 1e10.
  • σ_1: the initial value of the spectral coefficient σ_k which is related to the step size in the algorithm. Defaults to 1.0.
  • M: the monotonicity of the algorithm is determined by this positive integer. A value of 1 for M would result in strict monotonicity in the decrease of the L2-norm of the function f. However, higher values allow for more flexibility in this reduction. Despite this, the algorithm still ensures global convergence through the use of a non-monotone line-search algorithm that adheres to the Grippo-Lampariello-Lucidi condition. Values in the range of 5 to 20 are usually sufficient, but some cases may call for a higher value of M. The default setting is 10.
  • γ: a parameter that influences if a proposed step will be accepted. Higher value of γ will make the algorithm more restrictive in accepting steps. Defaults to 1e-4.
  • τ_min: if a step is rejected, the new step size is multiplied by a factor; this parameter is the minimum value of that factor. Defaults to 0.1.
  • τ_max: if a step is rejected, the new step size is multiplied by a factor; this parameter is the maximum value of that factor. Defaults to 0.5.
  • nexp: the exponent of the loss, i.e. $f_k=||F(x_k)||^{nexp}$. The paper uses nexp ∈ {1,2}. Defaults to 2.
  • η_strategy: function to determine the parameter η_k, which enables growth of $||F||^2$. Called as η_k = η_strategy(f_1, k, x, F) with f_1 initialized as $f_1=||F(x_1)||^{nexp}$, k is the iteration number, x is the current x-value and F the current residual. Should satisfy $η_k > 0$ and $∑ₖ ηₖ < ∞$. Defaults to $||F||^2 / k^2$.
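
A sketch spelling out two of the keywords, with the η strategy written explicitly as the documented default (the problem itself is arbitrary):

    using SimpleNonlinearSolve

    f(u, p) = u .* u .- p
    prob = NonlinearProblem(f, [1.0, 1.0], 2.0)

    # Shorter non-monotone memory (M = 5) and the default η_k = f_1 / k^2.
    alg = SimpleDFSane(; M = Val(5), η_strategy = (f_1, k, x, F) -> f_1 ./ k^2)
    sol = solve(prob, alg)
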
SimpleNonlinearSolve.SimpleLimitedMemoryBroydenType
SimpleLimitedMemoryBroyden(; threshold::Union{Val, Int} = Val(27),
    linesearch = Val(false), alpha = nothing)

A limited memory implementation of Broyden. This method applies the L-BFGS scheme to Broyden's method.

If the threshold is larger than the problem size, then this method will use SimpleBroyden.

Keyword Arguments

  • linesearch: If linesearch is Val(true), then we use the LiFukushimaLineSearch [9] line search else no line search is used. For advanced customization of the line search, use the LimitedMemoryBroyden algorithm in NonlinearSolve.jl.
  • alpha: scales the initial Jacobian guess. If it is nothing, the scaling is computed as 2 * norm(fu) / max(norm(u), true).
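
A sketch with a non-default history window (the threshold and problem size are arbitrary; as noted above, a threshold larger than the problem size falls back to SimpleBroyden):

    using SimpleNonlinearSolve

    f(u, p) = u .* u .- p
    prob = NonlinearProblem(f, ones(30), 2.0)

    # Keep only the 19 most recent update pairs in the limited-memory scheme.
    sol = solve(prob, SimpleLimitedMemoryBroyden(; threshold = Val(19)))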

SimpleGaussNewton is aliased to SimpleNewtonRaphson for solving Nonlinear Least Squares problems.
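
A sketch of a nonlinear least-squares solve via this alias (assuming NonlinearLeastSquaresProblem is available from the re-exported SciMLBase; the residual is arbitrary):

    using SimpleNonlinearSolve

    # Three residuals in two unknowns, solved in the least-squares sense.
    residual(u, p) = [u[1] + u[2] - p[1], u[1] * u[2] - p[2], u[1] - u[2] - p[3]]
    prob = NonlinearLeastSquaresProblem(residual, [1.0, 1.0], [3.0, 2.0, 1.0])
    sol = solve(prob, SimpleGaussNewton())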

  • [1]: Needs StaticArrays.jl to be installed and loaded for the non-allocating version.
  • [2]: This method is non-allocating if the termination condition is set to either nothing (default) or AbsNormTerminationMode.
  • [3]: Only the defaults are guaranteed to work inside kernels. We try to provide warnings if the used version is not non-allocating.