Fold / Hopf Continuation

For this to work, it is important to have an analytical expression for the jacobian. See the tutorial Temperature model (simplest example for equilibria) for more details.

Newton refinement

Once a Fold/Hopf point has been detected after a call to br, _ = continuation(...), it can be refined with Newton iterations. We have implemented a Minimally Augmented formulation, and a simplified interface is provided.

Let us say that ind_bif is the index in br.bifpoint of a Fold/Hopf point. This guess can be refined with Newton iterations as follows:

outfold, hist, flag = newton(F, J, br::ContResult, ind_bif::Int64, 
	par, lens::Lens; Jᵗ = nothing, d2F = nothing, normN = norm, 
	options = br.contparams.newtonOptions, kwargs...)

where par is the set of parameters used in the call to continuation that produced br and lens is the parameter axis used to locate the Fold/Hopf point. For the argument options, we refer to Newton.
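
For example, assuming the bifurcation point of index ind_bif in br is a Fold point and that the branch was computed along a hypothetical parameter axis α, the refinement could look like the following sketch:

using Setfield # provides the @lens macro
outfold, hist, flag = newton(F, J, br, ind_bif, par, (@lens _.α);
	options = br.contparams.newtonOptions)
flag && println("--> Fold point refined at α = ", outfold.p)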

Note that, for improved performance, a function implementing the expression of the hessian should be provided; this is by far the fastest option. Readers interested in this advanced usage should look at the code example/chan.jl of the tutorial Temperature model (simplest example for equilibria).
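
If an analytical hessian is not at hand, one possible (though slower) way to build the bilinear operator d2F is through nested directional derivatives; the sketch below assumes ForwardDiff is available and is only one option among others:

using ForwardDiff

# d2F(x, p)[v1, v2] computed as a second directional derivative of F
d2F(x, p, v1, v2) = ForwardDiff.derivative(
	t2 -> ForwardDiff.derivative(
		t1 -> F(x .+ t1 .* v1 .+ t2 .* v2, p), 0.0), 0.0)

This function can then be passed to the calls above with the keyword argument d2F = d2F.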

Codim 2 continuation

To compute the codim 2 curve of Fold/Hopf points, one can call continuation with the following options

br_codim2, _ = continuation(F, J, br, ind_bif, 
	par, lens1::Lens, lens2::Lens, options_cont::ContinuationPar ;
	Jᵗ = nothing, d2F = nothing, kwargs...)

where the options are as above except that we now have two parameter axes lens1, lens2 which are used to locate the bifurcation points. See Temperature model (simplest example for equilibria) for an example of use.
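
For instance, continuing the point of index ind_bif in two hypothetical parameters α and β (the continuation options below are purely indicative) could read:

opts_codim2 = ContinuationPar(dsmax = 0.05, ds = 0.01, pMax = 4.0, pMin = 0.0,
	maxSteps = 100, newtonOptions = br.contparams.newtonOptions)
br_codim2, _ = continuation(F, J, br, ind_bif, par,
	(@lens _.α), (@lens _.β), opts_codim2)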

Advanced use

Here, we expose the solvers used to perform the Newton refinement or the codim 2 continuation, in case the methods above fail or when the simplified interface makes it too cumbersome to tune the linear solver options. An example of advanced use is the continuation of Folds of periodic orbits, see Continuation of Fold of periodic orbits.

BifurcationKit.newtonFold - Function
newtonFold(F, J, foldpointguess, par, lens::Lens, eigenvec, options::NewtonPar; Jᵗ = nothing, d2F = nothing, normN = norm, kwargs...)

This function turns an initial guess for a Fold point into a solution to the Fold problem based on a Minimally Augmented formulation. The arguments are as follows

  • F = (x, p) -> F(x, p) where p is a set of parameters.
  • J = (x, p) -> d_xF(x, p) associated jacobian
  • foldpointguess initial guess (x0, p0) for the Fold point. It should be a BorderedArray as returned by the function FoldPoint
  • par parameters used for the vector field
  • lens parameter axis used to locate the Fold point.
  • eigenvec guess for the 0 eigenvector
  • options::NewtonPar options for the Newton-Krylov algorithm, see NewtonPar.

Optional arguments:

  • Jᵗ = (x, p) -> transpose(d_xF(x, p)) jacobian adjoint; it should be implemented in an efficient manner. For matrix-free methods, transpose is not readily available and the user must provide a dedicated method. In the case of a sparse-based jacobian, Jᵗ should not be passed as it is computed internally more efficiently, i.e. it avoids recomputing the jacobian as would happen if you passed Jᵗ = (x, p) -> transpose(dF(x, p))
  • d2F = (x, p, v1, v2) -> d2F(x, p, v1, v2) a bilinear operator representing the hessian of F. It has to provide an expression for d2F(x,p)[v1,v2].
  • normN = norm
  • kwargs keywords arguments to be passed to the regular Newton-Krylov solver
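
As an illustration, assuming x0 is an equilibrium close to the Fold point at the (hypothetical) parameter value p0 and v0 is a guess for the null eigenvector, the full call could look like:

foldpointguess = BorderedArray(x0, p0)
opt_newton = NewtonPar(tol = 1e-10, maxIter = 20, verbose = true)
outfold, hist, flag = newtonFold(F, J, foldpointguess, par, (@lens _.α),
	v0, opt_newton)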

Simplified call

Simplified call to refine an initial guess for a Fold point. More precisely, the call is as follows

newtonFold(F, J, br::ContResult, ind_fold::Int64, par, lens::Lens; Jᵗ = nothing, d2F = nothing, options = br.contparams.newtonOptions, kwargs...)

where the optional argument Jᵗ is the jacobian transpose and the Hessian is d2F. The parameters / options are as usual except that you have to pass the branch br resulting from a call to continuation with detection of bifurcations enabled, and ind_fold is the index of the bifurcation point in br you want to refine. You can pass Newton parameters different from the ones stored in br by using the argument options.
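
For example, to refine the Fold point of index ind_fold with Newton parameters different from the ones stored in br (the values below are purely indicative):

opt_newton = NewtonPar(tol = 1e-12, maxIter = 30)
outfold, hist, flag = newtonFold(F, J, br, ind_fold, par, (@lens _.α);
	options = opt_newton)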

Jacobian transpose

The adjoint of the jacobian J is computed internally when Jᵗ = nothing by using transpose(J), which works fine when J is an AbstractArray. In this case, do not pass the jacobian adjoint like Jᵗ = (x, p) -> transpose(d_xF(x, p)), otherwise the jacobian will be computed twice!

Hessian

The hessian of F, when d2F is not passed, is computed with finite differences. This can be slow when the number of unknowns is large, e.g. ~1e6.

BifurcationKit.newtonHopf - Function
newtonHopf(F, J, hopfpointguess::BorderedArray{vectypeR, T}, par, lens::Lens, eigenvec, eigenvec_ad, options::NewtonPar; Jᵗ = nothing, d2F = nothing, normN = norm) where {vectypeR, T}

This function turns an initial guess for a Hopf point into a solution to the Hopf problem based on a Minimally Augmented formulation. The arguments are as follows

  • F = (x, p) -> F(x, p) where p is a set of parameters.
  • J = (x, p) -> d_xF(x, p) associated jacobian
  • hopfpointguess initial guess (x0, p0) for the Hopf point. It should be a BorderedArray as returned by the function HopfPoint.
  • par parameters used for the vector field
  • lens parameter axis used to locate the Hopf point.
  • eigenvec guess for the iω eigenvector
  • eigenvec_ad guess for the -iω eigenvector
  • options::NewtonPar options for the Newton-Krylov algorithm, see NewtonPar.

Optional arguments:

  • Jᵗ = (x, p) -> transpose(d_xF(x, p)) jacobian adjoint; it should be implemented in an efficient manner. For matrix-free methods, transpose is not readily available and the user must provide a dedicated method. In the case of a sparse-based jacobian, Jᵗ should not be passed as it is computed internally more efficiently, i.e. it avoids recomputing the jacobian as would happen if you passed Jᵗ = (x, p) -> transpose(dF(x, p))
  • d2F = (x, p, v1, v2) -> d2F(x, p, v1, v2) a bilinear operator representing the hessian of F. It has to provide an expression for d2F(x,p)[v1,v2].
  • normN = norm
  • kwargs keywords arguments to be passed to the regular Newton-Krylov solver
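
As an illustration, assuming x0 is close to the Hopf point at the (hypothetical) parameter value p0, ω0 is a guess for the angular frequency and v0, w0 are guesses for the eigenvectors, the full call could look like:

hopfpointguess = BorderedArray(x0, [p0, ω0])
opt_newton = NewtonPar(tol = 1e-10, maxIter = 20)
outhopf, hist, flag = newtonHopf(F, J, hopfpointguess, par, (@lens _.α),
	v0, w0, opt_newton)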

Simplified call:

Simplified call to refine an initial guess for a Hopf point. More precisely, the call is as follows

newtonHopf(F, J, br::ContResult, ind_hopf::Int64, par, lens::Lens; Jᵗ = nothing, d2F = nothing, normN = norm, options = br.contparams.newtonOptions, kwargs...)

where the optional argument Jᵗ is the jacobian transpose and the Hessian is d2F. The parameters / options are as usual except that you have to pass the branch br resulting from a call to continuation with detection of bifurcations enabled, and ind_hopf is the index of the bifurcation point in br you want to refine. You can pass Newton parameters different from the ones stored in br by using the argument options.
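
A sketch of such a call, assuming ind_hopf is the index of a Hopf point in br and α is the parameter axis used to compute br:

outhopf, hist, flag = newtonHopf(F, J, br, ind_hopf, par, (@lens _.α))
# for a Hopf point, the parameter component holds the parameter value and the frequency ω
flag && println("--> Hopf point at α = ", outhopf.p[1], ", ω = ", outhopf.p[2])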

Jacobian transpose

The adjoint of the jacobian J is computed internally when Jᵗ = nothing by using transpose(J), which works fine when J is an AbstractArray. In this case, do not pass the jacobian adjoint like Jᵗ = (x, p) -> transpose(d_xF(x, p)), otherwise the jacobian will be computed twice!

Hessian

The hessian of F, when d2F is not passed, is computed with finite differences. This can be slow when the number of unknowns is large, e.g. ~1e6.

BifurcationKit.continuationFold - Function
continuationFold(F, J, foldpointguess, par, lens1, lens2, eigenvec, options_cont; Jᵗ, d2F, kwargs...)

Codim 2 continuation of Fold points. This function turns an initial guess for a Fold point into a curve of Fold points based on a Minimally Augmented formulation. The arguments are as follows

  • F = (x, p) -> F(x, p) where p is a set of parameters
  • J = (x, p) -> d_xF(x, p) associated jacobian
  • foldpointguess initial guess (x0, p1_0) for the Fold point. It should be a BorderedArray as returned by the function FoldPoint
  • par set of parameters
  • lens1 parameter axis for parameter 1
  • lens2 parameter axis for parameter 2
  • eigenvec guess for the 0 eigenvector at p1_0
  • options_cont arguments to be passed to the regular continuation

Optional arguments:

  • Jᵗ = (x, p) -> transpose(d_xF(x, p)) associated jacobian transpose
  • d2F = p -> ((x, p, v1, v2) -> d2F(x, p, v1, v2)) this is the hessian of F computed at (x, p) and evaluated at (v1, v2).
  • kwargs keywords arguments to be passed to the regular continuation

Simplified call

The call is as follows

continuationFold(F, J, br::ContResult, ind_fold::Int64, par, lens1::Lens, lens2::Lens, options_cont::ContinuationPar ; Jᵗ = nothing, d2F = nothing, kwargs...)

where the parameters are as above except that you have to pass the branch br resulting from a call to continuation with detection of bifurcations enabled, and ind_fold is the index of the Fold point in br you want to continue.
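
For instance, with two hypothetical parameter axes α and β and purely indicative continuation options:

optcontfold = ContinuationPar(dsmax = 0.05, ds = 0.01, pMax = 4.0, pMin = 0.0,
	maxSteps = 100, newtonOptions = br.contparams.newtonOptions)
brfold, _ = continuationFold(F, J, br, ind_fold, par,
	(@lens _.α), (@lens _.β), optcontfold; plot = true)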

Jacobian transpose

The adjoint of the jacobian J is computed internally when Jᵗ = nothing by using transpose(J), which works fine when J is an AbstractArray. In this case, do not pass the jacobian adjoint like Jᵗ = (x, p) -> transpose(d_xF(x, p)), otherwise the jacobian would be computed twice!

Hessian

The hessian of F, when d2F is not passed, is computed with finite differences. This can be slow when the number of unknowns is large, e.g. ~1e6.

BifurcationKit.continuationHopf - Function
continuationHopf(F, J, hopfpointguess, par, lens1, lens2, eigenvec, eigenvec_ad, options_cont; Jᵗ, d2F, kwargs...)

Codim 2 continuation of Hopf points. This function turns an initial guess for a Hopf point into a curve of Hopf points based on a Minimally Augmented formulation. The arguments are as follows

  • F = (x, p) -> F(x, p) where p is a set of parameters
  • J = (x, p) -> d_xF(x, p) associated jacobian
  • hopfpointguess initial guess (x0, p1_0) for the Hopf point. It should be a Vector or a BorderedArray
  • par set of parameters
  • lens1 parameter axis for parameter 1
  • lens2 parameter axis for parameter 2
  • eigenvec guess for the iω eigenvector at p1_0
  • eigenvec_ad guess for the -iω eigenvector at p1_0
  • options_cont keywords arguments to be passed to the regular continuation

Optional arguments:

  • Jᵗ = (x, p) -> adjoint(d_xF(x, p)) associated jacobian adjoint
  • d2F = p -> ((x, p, v1, v2) -> d2F(x, p, v1, v2)) this is the hessian of F computed at (x, p) and evaluated at (v1, v2).
  • kwargs keywords arguments to be passed to the regular continuation

Simplified call:

The call is as follows

continuationHopf(F, J, br::ContResult, ind_hopf::Int64, par, lens1::Lens, lens2::Lens, options_cont::ContinuationPar ;  Jᵗ = nothing, d2F = nothing, kwargs...)

where the parameters are as above except that you have to pass the branch br resulting from a call to continuation with detection of bifurcations enabled, and ind_hopf is the index of the Hopf point in br you want to continue.
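
For instance, with two hypothetical parameter axes α and β and purely indicative continuation options:

optconthopf = ContinuationPar(dsmax = 0.05, ds = 0.01, pMax = 4.0, pMin = 0.0,
	maxSteps = 100, newtonOptions = br.contparams.newtonOptions)
brhopf, _ = continuationHopf(F, J, br, ind_hopf, par,
	(@lens _.α), (@lens _.β), optconthopf)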

Hessian

The hessian of F, when d2F is not passed, is computed with finite differences. This can be slow when the number of unknowns is large, e.g. ~1e6.

Jacobian transpose

The adjoint of the jacobian J is computed internally when Jᵗ = nothing by using transpose(J), which works fine when J is an AbstractArray. In this case, do not pass the jacobian adjoint like Jᵗ = (x, p) -> transpose(d_xF(x, p)), otherwise the jacobian would be computed twice!
