Globalization Subroutines

The following globalization subroutines are available.

NonlinearSolve.LiFukushimaLineSearch (Type)
LiFukushimaLineSearch(; lambda_0 = 1, beta = 1 // 2, sigma_1 = 1 // 1000,
    sigma_2 = 1 // 1000, eta = 1 // 10, nan_max_iter::Int = 5, maxiters::Int = 100)

A derivative-free line search for Broyden-like methods applied to nonlinear equations, with global convergence guarantees, from Li and Fukushima [9].
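
A minimal usage sketch, assuming the Broyden solver accepts this line search via its linesearch keyword (the residual and problem below are illustrative, not part of the docstring):

using NonlinearSolve

f(u, p) = u .^ 3 .- p                   # residual whose root we seek
prob = NonlinearProblem(f, [1.0], 8.0)  # initial guess [1.0], parameter p = 8

# pass the line search to a Broyden-like solver, the method family the
# reference [9] targets; keyword defaults from the signature above apply
sol = solve(prob, Broyden(; linesearch = LiFukushimaLineSearch()))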

NonlinearSolve.LineSearchesJL (Type)
LineSearchesJL(; method = LineSearches.Static(), autodiff = nothing, α = true)

Wrapper over algorithms from LineSearches.jl. Allows automatic construction of the objective function for the line search algorithms, utilizing automatic differentiation for fast vector-Jacobian products. A usage sketch follows the argument list below.

Arguments

  • method: the line search algorithm to use. Defaults to method = LineSearches.Static(), which means that the step size is fixed to the value of α.
  • autodiff: the automatic differentiation backend to use for the line search. Using a reverse-mode automatic differentiation backend is recommended.
  • α: the initial step size to use. Defaults to true (which is equivalent to 1).
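
A minimal sketch of wrapping a LineSearches.jl algorithm, assuming the NewtonRaphson solver accepts it via its linesearch keyword (the residual and problem are illustrative):

using NonlinearSolve, LineSearches

f(u, p) = u .* u .- p                   # solve u^2 = 2
prob = NonlinearProblem(f, [1.0], 2.0)

# wrap LineSearches.BackTracking in place of the default Static step;
# autodiff is left as nothing, so a backend is selected automatically
ls = LineSearchesJL(; method = LineSearches.BackTracking())
sol = solve(prob, NewtonRaphson(; linesearch = ls))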
NonlinearSolve.RobustNonMonotoneLineSearch (Type)
RobustNonMonotoneLineSearch(; gamma = 1 // 10000, sigma_0 = 1, M::Int = 10,
    tau_min = 1 // 10, tau_max = 1 // 2, n_exp::Int = 2, maxiters::Int = 100,
    η_strategy = (fn₁, n, uₙ, fₙ) -> fn₁ / n^2)

RobustNonMonotoneLineSearch is a derivative-free line search method from DF-SANE [2].

Keyword Arguments

  • M: the monotonicity of the algorithm is determined by this positive integer. A value of 1 for M makes the decrease of the L2-norm of the function f strictly monotone, while higher values allow for more flexibility in this reduction. Despite this, the algorithm still ensures global convergence through the use of a non-monotone line-search algorithm that adheres to the Grippo-Lampariello-Lucidi condition. Values in the range of 5 to 20 are usually sufficient, but some cases may call for a higher value of M. The default setting is 10.
  • gamma: a parameter that influences whether a proposed step will be accepted. A higher value of gamma makes the algorithm more restrictive in accepting steps. Defaults to 1e-4.
  • tau_min: if a step is rejected, the step size is multiplied by a factor, and this parameter is the minimum value of that factor. Defaults to 0.1.
  • tau_max: if a step is rejected, the step size is multiplied by a factor, and this parameter is the maximum value of that factor. Defaults to 0.5.
  • n_exp: the exponent of the loss, i.e. $f_n=||F(x_n)||^{n\_exp}$. The paper uses n_exp ∈ {1, 2}. Defaults to 2.
  • η_strategy: function to determine the parameter η, which enables growth of $||f_n||^2$. Called as η = η_strategy(fn_1, n, x_n, f_n) with fn_1 initialized as $fn_1=||f(x_1)||^{n\_exp}$, n is the iteration number, x_n is the current x-value and f_n the current residual. Should satisfy $η > 0$ and $∑ₖ ηₖ < ∞$. Defaults to $fn_1 / n^2$; see the sketch after this list for a custom choice.
  • maxiters: the maximum number of iterations allowed for the inner loop of the algorithm. Defaults to 100.
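
A sketch of customizing the monotonicity window and the η sequence; the cubic decay below is an illustrative choice, and any positive summable sequence satisfies the stated requirement:

using NonlinearSolve

# stricter window (M = 5) and a faster-decaying η sequence;
# η = fn₁ / n^3 still satisfies η > 0 and ∑ₖ ηₖ < ∞
ls = RobustNonMonotoneLineSearch(; M = 5,
    η_strategy = (fn₁, n, uₙ, fₙ) -> fn₁ / n^3)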
NonlinearSolve.NoLineSearch (Type)
NoLineSearch <: AbstractNonlinearSolveLineSearchAlgorithm

Don't perform a line search. Just return the initial step length of 1.

Radius Update Schemes for Trust Region

NonlinearSolve.RadiusUpdateSchemes (Module)
RadiusUpdateSchemes

RadiusUpdateSchemes provides the different radius update schemes implemented for the trust region method. These schemes specify how the radius of the so-called trust region is updated after each iteration of the algorithm. The specific role and caveats associated with each scheme are provided below.

Using RadiusUpdateSchemes

Simply pass the desired scheme when constructing the solver: sol = solve(prob, TrustRegion(radius_update_scheme = RadiusUpdateSchemes.Hei)).
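
A self-contained sketch of that call (the residual and problem are illustrative):

using NonlinearSolve

f(u, p) = u .* u .- p                   # solve u^2 = 2
prob = NonlinearProblem(f, [1.0], 2.0)

sol = solve(prob, TrustRegion(; radius_update_scheme = RadiusUpdateSchemes.Hei))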

Available Radius Update Schemes

NonlinearSolve.RadiusUpdateSchemes.Simple (Constant)
RadiusUpdateSchemes.Simple

The simple or conventional radius update scheme. This scheme is chosen by default and follows the conventional approach to updating the trust region radius: if the trial step is accepted, it increases the radius by a fixed factor (bounded by a maximum radius), and if the trial step is rejected, it shrinks the radius by a fixed factor.
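
As a rough sketch of that rule, assuming illustrative growth and shrink factors t1 and t2 (these names and values are not the package's internals):

# grow the radius on acceptance (capped at max_radius), shrink on rejection
function simple_radius_update(radius, max_radius, step_accepted; t1 = 2.0, t2 = 0.25)
    return step_accepted ? min(t1 * radius, max_radius) : t2 * radius
end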

NonlinearSolve.RadiusUpdateSchemes.Hei (Constant)
RadiusUpdateSchemes.Hei

This scheme is proposed by Hei [10]. The trust region radius depends on the size (norm) of the current step. The hypothesis is to let the radius converge to zero as the iterations progress, which is more reliable and robust for ill-conditioned as well as degenerate problems.

NonlinearSolve.RadiusUpdateSchemes.Yuan (Constant)
RadiusUpdateSchemes.Yuan

This scheme is proposed by Yuan [6]. Similar to Hei's scheme, the trust region is updated such that it converges to zero; however, here the radius depends on the size (norm) of the current gradient of the objective (merit) function. The hypothesis is that the step size is bounded by the gradient size, so it makes sense to let the radius depend on the gradient.

NonlinearSolve.RadiusUpdateSchemes.Bastin (Constant)
RadiusUpdateSchemes.Bastin

This scheme is proposed by Bastin et al. [11]. It is called a retrospective update scheme, as it uses the model function at the current iteration to compute the ratio of the actual reduction to the predicted reduction in the previous trial step, and uses this ratio to update the trust region radius. The hypothesis is to exploit the information made available during the optimization process in order to vary the accuracy of the objective function computation.

NonlinearSolve.RadiusUpdateSchemes.Fan (Constant)
RadiusUpdateSchemes.Fan

This scheme is proposed by Fan [12]. It is similar to Hei's and Yuan's schemes in that it lets the trust region radius depend on the current size (norm) of the objective (merit) function itself. These new update schemes are known to improve local convergence.