Globalization Subroutines
The following globalization subroutines are available.
NonlinearSolve.RadiusUpdateSchemes
NonlinearSolve.RadiusUpdateSchemes.Bastin
NonlinearSolve.RadiusUpdateSchemes.Fan
NonlinearSolve.RadiusUpdateSchemes.Hei
NonlinearSolve.RadiusUpdateSchemes.NLsolve
NonlinearSolve.RadiusUpdateSchemes.NocedalWright
NonlinearSolve.RadiusUpdateSchemes.Simple
NonlinearSolve.RadiusUpdateSchemes.Yuan
NonlinearSolve.LiFukushimaLineSearch
NonlinearSolve.LineSearchesJL
NonlinearSolve.NoLineSearch
NonlinearSolve.RobustNonMonotoneLineSearch
Line Search Algorithms
NonlinearSolve.LiFukushimaLineSearch — Type

    LiFukushimaLineSearch(; lambda_0 = 1, beta = 1 // 2, sigma_1 = 1 // 1000,
        sigma_2 = 1 // 1000, eta = 1 // 10, nan_max_iter::Int = 5, maxiters::Int = 100)

A derivative-free line search from the globally convergent Broyden-like method of Li and Fukushima for nonlinear equations [9].
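A minimal sketch of attaching this line search to a quasi-Newton solver. The problem `f` and the assumption that `Broyden` accepts a `linesearch` keyword (as elsewhere in NonlinearSolve's solver API) are illustrative, not prescribed by this docstring:

```julia
using NonlinearSolve

f(u, p) = u .^ 2 .- p
prob = NonlinearProblem(f, [1.2, 1.2], 2.0)

# Attach the derivative-free Li–Fukushima line search to Broyden's method.
alg = Broyden(; linesearch = LiFukushimaLineSearch())
sol = solve(prob, alg)
```

Since the line search is derivative-free, it pairs naturally with Jacobian-free methods such as Broyden's.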
NonlinearSolve.LineSearchesJL — Type

    LineSearchesJL(; method = LineSearches.Static(), autodiff = nothing, α = true)

Wrapper over algorithms from LineSearches.jl. Allows automatic construction of the objective function for the line search algorithms, utilizing automatic differentiation for fast vector-Jacobian products.

Arguments

  - method: the line search algorithm to use. Defaults to method = LineSearches.Static(), which means that the step size is fixed to the value of α.
  - autodiff: the automatic differentiation backend to use for the line search. Using a reverse-mode automatic differentiation backend is recommended.
  - α: the initial step size to use. Defaults to true (which is equivalent to 1).
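For example, a sketch of wrapping a backtracking search from LineSearches.jl. The problem setup is illustrative, and the snippet assumes NewtonRaphson accepts a `linesearch` keyword, as in NonlinearSolve's solver API:

```julia
using NonlinearSolve, LineSearches

f(u, p) = u .* u .- p
prob = NonlinearProblem(f, [1.0, 1.0], 2.0)

# Wrap LineSearches.BackTracking; α = true starts with a unit step length.
ls = LineSearchesJL(; method = LineSearches.BackTracking(), α = true)
sol = solve(prob, NewtonRaphson(; linesearch = ls))
```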
NonlinearSolve.RobustNonMonotoneLineSearch — Type

    RobustNonMonotoneLineSearch(; gamma = 1 // 10000, sigma_0 = 1, M::Int = 10,
        tau_min = 1 // 10, tau_max = 1 // 2, n_exp::Int = 2, maxiters::Int = 100,
        η_strategy = (fn₁, n, uₙ, fₙ) -> fn₁ / n^2)

Robust non-monotone line search, a derivative-free line search method from DF-SANE [2].
Keyword Arguments
  - M: the monotonicity of the algorithm is determined by this positive integer. A value of 1 for M would result in strict monotonicity in the decrease of the L2-norm of the function f. Higher values allow for more flexibility in this reduction. Despite this, the algorithm still ensures global convergence through the use of a non-monotone line-search algorithm that adheres to the Grippo-Lampariello-Lucidi condition. Values in the range of 5 to 20 are usually sufficient, but some cases may call for a higher value of M. Defaults to 10.
  - gamma: a parameter that influences whether a proposed step will be accepted. A higher value of gamma makes the algorithm more restrictive in accepting steps. Defaults to 1e-4.
  - tau_min: if a step is rejected, the new step size is multiplied by a factor; this parameter is the minimum value of that factor. Defaults to 0.1.
  - tau_max: if a step is rejected, the new step size is multiplied by a factor; this parameter is the maximum value of that factor. Defaults to 0.5.
  - n_exp: the exponent of the loss, i.e. $f_n = ||F(x_n)||^{n\_exp}$. The paper uses n_exp ∈ {1, 2}. Defaults to 2.
  - η_strategy: function to determine the parameter η, which enables growth of $||f_n||^2$. Called as η = η_strategy(fn_1, n, x_n, f_n), with fn_1 initialized as $fn_1 = ||f(x_1)||^{n\_exp}$, n the iteration number, x_n the current x-value, and f_n the current residual. Should satisfy $η > 0$ and $∑ₖ ηₖ < ∞$. Defaults to $fn_1 / n^2$.
  - maxiters: the maximum number of iterations allowed for the inner loop of the algorithm. Defaults to 100.
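To illustrate the keywords, here is a hypothetical configuration with a tighter monotonicity window and a faster-decaying η sequence (still positive and summable, as required for global convergence); the specific values are examples, not recommendations:

```julia
using NonlinearSolve

# ηₖ = fn₁ / k³: positive with ∑ₖ ηₖ < ∞, so the convergence requirement holds.
η_strategy = (fn₁, n, uₙ, fₙ) -> fn₁ / n^3

# Tighter non-monotonicity window (M = 5) than the default M = 10.
ls = RobustNonMonotoneLineSearch(; M = 5, η_strategy)
```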
NonlinearSolve.NoLineSearch — Type

    NoLineSearch <: AbstractNonlinearSolveLineSearchAlgorithm

Don't perform a line search. Just return the initial step length of 1.
Radius Update Schemes for Trust Region
NonlinearSolve.RadiusUpdateSchemes — Module

    RadiusUpdateSchemes

RadiusUpdateSchemes provides different types of radius update schemes implemented in the Trust Region method. These schemes specify how the radius of the so-called trust region is updated after each iteration of the algorithm. The specific role and caveats associated with each scheme are provided below.

Using RadiusUpdateSchemes

Simply pass the desired scheme as follows:

    sol = solve(prob, alg = TrustRegion(radius_update_scheme = RadiusUpdateSchemes.Hei))
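The one-liner above can be expanded into a complete, runnable call; the problem definition here is an illustrative example, not part of the API:

```julia
using NonlinearSolve

f(u, p) = u .* u .- p
prob = NonlinearProblem(f, [1.0, 1.0], 2.0)

# Select the Hei radius update scheme for the trust-region solver.
sol = solve(prob, TrustRegion(; radius_update_scheme = RadiusUpdateSchemes.Hei))
```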
Available Radius Update Schemes
NonlinearSolve.RadiusUpdateSchemes.Simple — Constant

    RadiusUpdateSchemes.Simple

The simple or conventional radius update scheme. This scheme is chosen by default and follows the conventional approach to updating the trust region radius: if the trial step is accepted, it increases the radius by a fixed factor (bounded by a maximum radius), and if the trial step is rejected, it shrinks the radius by a fixed factor.
NonlinearSolve.RadiusUpdateSchemes.Hei — Constant

    RadiusUpdateSchemes.Hei

This scheme is proposed in Hei [10]. The trust region radius depends on the size (norm) of the current step. The hypothesis is to let the radius converge to zero as the iterations progress, which is more reliable and robust for ill-conditioned as well as degenerate problems.
NonlinearSolve.RadiusUpdateSchemes.Yuan — Constant

    RadiusUpdateSchemes.Yuan

This scheme is proposed by Yuan [6]. Similar to Hei's scheme, the trust region is updated so that it converges to zero; however, here the radius depends on the size (norm) of the current gradient of the objective (merit) function. The hypothesis is that the step size is bounded by the gradient size, so it makes sense to let the radius depend on the gradient.
NonlinearSolve.RadiusUpdateSchemes.Bastin — Constant

    RadiusUpdateSchemes.Bastin

This scheme is proposed by Bastin et al. [11]. It is called a retrospective update scheme, as it uses the model function at the current iteration to compute the ratio of the actual reduction to the predicted reduction in the previous trial step, and uses this ratio to update the trust region radius. The hypothesis is to exploit the information made available during the optimization process in order to vary the accuracy of the objective function computation.
NonlinearSolve.RadiusUpdateSchemes.Fan — Constant

    RadiusUpdateSchemes.Fan

This scheme is proposed by Fan [12]. It is similar to Hei's and Yuan's schemes, as it lets the trust region radius depend on the current size (norm) of the objective (merit) function itself. These newer update schemes are known to improve local convergence.
NonlinearSolve.RadiusUpdateSchemes.NLsolve — Constant

    RadiusUpdateSchemes.NLsolve

The same updating scheme as in the trust region dogleg implementation of NLsolve (https://github.com/JuliaNLSolvers/NLsolve.jl).
NonlinearSolve.RadiusUpdateSchemes.NocedalWright — Constant

    RadiusUpdateSchemes.NocedalWright

Trust region updating scheme as in Nocedal and Wright [see Alg 11.5, page 291].