Fold / Hopf Continuation
For this to work, it is important to have an analytical expression for the jacobian. See the tutorial Temperature model (simplest example for equilibria) for more details.
Newton refinement
Once a Fold/Hopf point has been detected after a call to br, _ = continuation(...), it can be refined using newton iterations. We have implemented a Minimally Augmented formulation. A simplified interface is provided.
Let us say that ind_bif is the index in br.bifpoint of a Fold/Hopf point. This guess can be refined by newton iterations by doing
outfold, hist, flag = newton(F, J, br::ContResult, ind_bif::Int64,
par, lens::Lens; Jᵗ = nothing, d2F = nothing, normN = norm,
options = br.contparams.newtonOptions, kwargs...)
where par is the set of parameters used in the call to continuation to get br, and lens is the parameter axis which is used to find the Fold/Hopf point. For the options parameters, we refer to Newton.
It is important to note that, for improved performance, a function implementing the expression of the hessian should be provided: this is by far the fastest option. Readers interested in this advanced usage should look at the code example/chan.jl of the tutorial Temperature model (simplest example for equilibria).
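To make the refinement concrete, here is a self-contained sketch of the idea on a toy problem. The scalar model F(x, p) = p + x², its Fold at (0, 0), and the use of the fully extended Fold system (instead of the Minimally Augmented formulation used by the package) are all choices made for this illustration, not BifurcationKit internals:

```julia
# Newton refinement of a Fold point for the toy scalar model
# F(x, p) = p + x^2, which has a Fold at (x, p) = (0, 0).
# We solve the extended system in the unknowns u = (x, p):
#   F(x, p)   = 0    (equilibrium)
#   ∂ₓF(x, p) = 0    (singular jacobian: the Fold condition)
using LinearAlgebra

F(x, p)  = p + x^2
Fx(x, p) = 2x                                   # ∂ₓF

G(u)  = [F(u[1], u[2]), Fx(u[1], u[2])]         # extended system
dG(u) = [Fx(u[1], u[2]) 1.0; 2.0 0.0]           # jacobian of G

function refine_fold(u0; iters = 10)
    u = copy(u0)
    for _ in 1:iters
        u -= dG(u) \ G(u)                       # Newton step
    end
    return u
end

u = refine_fold([0.4, -0.1])                    # rough guess for the Fold
# u ≈ [0.0, 0.0]: state and parameter value at the Fold point
```

Note that the jacobian of the extended system is nonsingular at the Fold even though ∂ₓF is singular there, which is why plain Newton converges.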
Codim 2 continuation
To compute the codim 2 curve of Fold/Hopf points, one can call continuation with the following options
br_codim2, _ = continuation(F, J, br, ind_bif,
par, lens1::Lens, lens2::Lens, options_cont::ContinuationPar ;
Jᵗ = nothing, d2F = nothing, kwargs...)
where the options are as above except that there are now two parameter axes lens1, lens2, which are used to locate the bifurcation points. See Temperature model (simplest example for equilibria) for an example of use.
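The codim 2 idea can be sketched in a self-contained way: fix one parameter, solve the extended Fold system, then step the second parameter and reuse the previous point as the guess. The cusp model F(x, p1, p2) = p1 + p2*x + x³ below is made up for this sketch; along its Fold curve, p2 = -3x² and p1 = 2x³:

```julia
# Naive codim 2 continuation of Fold points for the toy cusp model
# F(x, p1, p2) = p1 + p2*x + x^3. For each fixed p2 we Newton-solve the
# extended Fold system  F = 0, ∂ₓF = 0  in the unknowns u = (x, p1).
using LinearAlgebra

F(x, p1, p2)  = p1 + p2 * x + x^3
Fx(x, p1, p2) = p2 + 3x^2                        # ∂ₓF

function fold_point(p2, u0; iters = 20)
    u = copy(u0)
    for _ in 1:iters
        G  = [F(u[1], u[2], p2), Fx(u[1], u[2], p2)]
        dG = [Fx(u[1], u[2], p2) 1.0; 6u[1] 0.0]
        u -= dG \ G
    end
    return u
end

function fold_curve(p2s)
    u = [0.5, 0.0]                               # seed on the x > 0 branch
    p1s = Float64[]
    for p2 in p2s
        u = fold_point(p2, u)                    # previous Fold point seeds the next solve
        push!(p1s, u[2])
    end
    return p1s                                   # p1 along the Fold curve (p1 = 2x^3)
end

p1s = fold_curve(range(-0.5, -2.0; length = 4))
```

BifurcationKit replaces this naive parameter sweep with pseudo-arclength continuation of the Minimally Augmented system, which also handles turns of the Fold curve.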
Advanced use
Here, we expose the solvers that are used to perform the newton refinement or the codim 2 continuation in case the above methods fail, for example because the linear solver options require fine tuning that the simplified interface does not expose. An example of advanced use is the continuation of Folds of periodic orbits, see Continuation of Fold of periodic orbits.
BifurcationKit.newtonFold — Function
newtonFold(F, J, foldpointguess, par, lens::Lens, eigenvec, options::NewtonPar; Jᵗ = nothing, d2F = nothing, normN = norm, kwargs...)
This function turns an initial guess for a Fold point into a solution to the Fold problem based on a Minimally Augmented formulation. The arguments are as follows:
- F = (x, p) -> F(x, p) where p is a set of parameters
- dF = (x, p) -> d_xF(x, p) associated jacobian
- foldpointguess initial guess (x0, p0) for the Fold point. It should be a BorderedArray as returned by the function FoldPoint
- par parameters used for the vector field
- lens parameter axis used to locate the Fold point
- eigenvec guess for the 0 eigenvector
- options::NewtonPar options for the Newton-Krylov algorithm, see NewtonPar
Optional arguments:
- Jᵗ = (x, p) -> transpose(d_xF(x, p)) jacobian adjoint; it should be implemented in an efficient manner. For matrix-free methods, transpose is not readily available and the user must provide a dedicated method. In the case of a sparse jacobian, Jᵗ should not be passed, as it is computed internally more efficiently, i.e. it avoids recomputing the jacobian as would happen if you passed Jᵗ = (x, p) -> transpose(dF(x, p))
- d2F = (x, p, v1, v2) -> d2F(x, p, v1, v2) a bilinear operator representing the hessian of F. It has to provide an expression for d2F(x, p)[v1, v2]
- normN = norm
- kwargs keyword arguments to be passed to the regular Newton-Krylov solver
Simplified call
Simplified call to refine an initial guess for a Fold point. More precisely, the call is as follows
newtonFold(F, J, br::ContResult, ind_fold::Int64, par, lens::Lens; Jᵗ = nothing, d2F = nothing, options = br.contparams.newtonOptions, kwargs...)
where the optional argument Jᵗ is the jacobian transpose and d2F is the hessian. The parameters / options are as usual, except that you have to pass the branch br resulting from a call to continuation with detection of bifurcations enabled, and ind_fold is the index of the bifurcation point in br that you want to refine. You can pass newton parameters different from the ones stored in br by using the argument options.
The adjoint of the jacobian J is computed internally when Jᵗ = nothing by using transpose(J), which works fine when J is an AbstractArray. In this case, do not pass the jacobian adjoint like Jᵗ = (x, p) -> transpose(d_xF(x, p)), otherwise the jacobian will be computed twice!
The hessian of F, when d2F is not passed, is computed with finite differences. This can be slow for problems with many unknowns, e.g. ~1e6.
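To clarify what the d2F callback must compute, here is a self-contained check on a made-up two-dimensional vector field (the field, its jacobian, and its hessian are invented for this sketch): the hand-coded bilinear form is compared against a finite-difference directional derivative of the jacobian.

```julia
# The hessian callback must implement the bilinear form d2F(x, p)[v1, v2].
# Toy field:  F(x, p) = [p*x[1] - x[1]^3 + x[2],  x[1]^2 - x[2]]
using LinearAlgebra

J(x, p) = [(p - 3x[1]^2) 1.0; (2x[1]) (-1.0)]            # jacobian of the toy field
d2F(x, p, v1, v2) = [-6x[1] * v1[1] * v2[1], 2 * v1[1] * v2[1]]

# finite-difference check: d2F(x, p)[v1, v2] ≈ ((J(x + ε*v2) - J(x)) * v1) / ε
x, p, v1, v2 = [0.3, -0.2], 1.5, [1.0, 2.0], [0.7, -0.4]
ε  = 1e-6
fd = ((J(x .+ ε .* v2, p) - J(x, p)) * v1) ./ ε
err = maximum(abs.(fd .- d2F(x, p, v1, v2)))             # O(ε)
```

This kind of comparison is a cheap sanity check before passing a hand-coded hessian to the solver.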
BifurcationKit.newtonHopf — Function
newtonHopf(F, J, hopfpointguess::BorderedArray{vectypeR, T}, par, lens::Lens, eigenvec, eigenvec_ad, options::NewtonPar; Jᵗ = nothing, d2F = nothing, normN = norm) where {vectypeR, T}
This function turns an initial guess for a Hopf point into a solution to the Hopf problem based on a Minimally Augmented formulation. The arguments are as follows:
- F = (x, p) -> F(x, p) where p is a set of parameters
- dF = (x, p) -> d_xF(x, p) associated jacobian
- hopfpointguess initial guess (x0, p0) for the Hopf point. It should be a BorderedArray as returned by the function HopfPoint
- par parameters used for the vector field
- lens parameter axis used to locate the Hopf point
- eigenvec guess for the iω eigenvector
- eigenvec_ad guess for the -iω eigenvector
- options::NewtonPar options for the Newton-Krylov algorithm, see NewtonPar
Optional arguments:
- Jᵗ = (x, p) -> transpose(d_xF(x, p)) jacobian adjoint; it should be implemented in an efficient manner. For matrix-free methods, transpose is not readily available and the user must provide a dedicated method. In the case of a sparse jacobian, Jᵗ should not be passed, as it is computed internally more efficiently, i.e. it avoids recomputing the jacobian as would happen if you passed Jᵗ = (x, p) -> transpose(dF(x, p))
- d2F = (x, p, v1, v2) -> d2F(x, p, v1, v2) a bilinear operator representing the hessian of F. It has to provide an expression for d2F(x, p)[v1, v2]
- normN = norm
- kwargs keyword arguments to be passed to the regular Newton-Krylov solver
Simplified call:
Simplified call to refine an initial guess for a Hopf point. More precisely, the call is as follows
newtonHopf(F, J, br::ContResult, ind_hopf::Int64, par, lens::Lens; Jᵗ = nothing, d2F = nothing, normN = norm, options = br.contparams.newtonOptions, kwargs...)
where the optional argument Jᵗ is the jacobian transpose and d2F is the hessian. The parameters / options are as usual, except that you have to pass the branch br resulting from a call to continuation with detection of bifurcations enabled, and ind_hopf is the index of the bifurcation point in br that you want to refine. You can pass newton parameters different from the ones stored in br by using the argument options.
The adjoint of the jacobian J is computed internally when Jᵗ = nothing by using transpose(J), which works fine when J is an AbstractArray. In this case, do not pass the jacobian adjoint like Jᵗ = (x, p) -> transpose(d_xF(x, p)), otherwise the jacobian will be computed twice!
The hessian of F, when d2F is not passed, is computed with finite differences. This can be slow for problems with many unknowns, e.g. ~1e6.
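As a reminder of what the Hopf condition detects, here is a self-contained sketch. The 2×2 linearization J(p) is made up for this illustration, and the bisection below is only a conceptual stand-in for the Minimally Augmented formulation used by newtonHopf:

```julia
# A Hopf point occurs where a complex-conjugate eigenvalue pair of the
# jacobian crosses the imaginary axis. For the toy linearization below the
# eigenvalues are (p ± sqrt(p^2 - 4))/2: they cross at p = 0 with ω = 1.
using LinearAlgebra

J(p) = [p -1.0; 1.0 0.0]
rightmost_re(p) = maximum(real, eigvals(J(p)))

# bisect the sign change of the rightmost real part to locate the crossing
function hopf_bisect(a, b; tol = 1e-10)
    while b - a > tol
        m = (a + b) / 2
        rightmost_re(m) < 0 ? (a = m) : (b = m)
    end
    return (a + b) / 2
end

p_hopf = hopf_bisect(-1.0, 1.0)
ω = maximum(imag, eigvals(J(p_hopf)))     # the Hopf frequency, here ≈ 1
```

The eigenvectors of J(p_hopf) for ±iω are the eigenvec / eigenvec_ad guesses that newtonHopf expects.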
BifurcationKit.continuationFold — Function
continuationFold(F, J, foldpointguess, par, lens1, lens2, eigenvec, options_cont; Jᵗ, d2F, kwargs...)
Codim 2 continuation of Fold points. This function turns an initial guess for a Fold point into a curve of Fold points based on a Minimally Augmented formulation. The arguments are as follows:
- F = (x, p) -> F(x, p) where p is a set of parameters
- J = (x, p) -> d_xF(x, p) associated jacobian
- foldpointguess initial guess (x0, p1_0) for the Fold point. It should be a BorderedArray as returned by the function FoldPoint
- par set of parameters
- lens1 parameter axis for parameter 1
- lens2 parameter axis for parameter 2
- eigenvec guess for the 0 eigenvector at p1_0
- options_cont arguments to be passed to the regular continuation
Optional arguments:
- Jᵗ = (x, p) -> transpose(d_xF(x, p)) associated jacobian transpose
- d2F = p -> ((x, p, v1, v2) -> d2F(x, p, v1, v2)) this is the hessian of F computed at (x, p) and evaluated at (v1, v2)
- kwargs keyword arguments to be passed to the regular continuation
Simplified call
The call is as follows
continuationFold(F, J, br::ContResult, ind_fold::Int64, par, lens1::Lens, lens2::Lens, options_cont::ContinuationPar ; Jᵗ = nothing, d2F = nothing, kwargs...)
where the parameters are as above, except that you have to pass the branch br resulting from a call to continuation with detection of bifurcations enabled, and ind_fold is the index of the Fold point in br that you want to continue.
The adjoint of the jacobian J is computed internally when Jᵗ = nothing by using transpose(J), which works fine when J is an AbstractArray. In this case, do not pass the jacobian adjoint like Jᵗ = (x, p) -> transpose(d_xF(x, p)), otherwise the jacobian would be computed twice!
The hessian of F, when d2F is not passed, is computed with finite differences. This can be slow for problems with many unknowns, e.g. ~1e6.
BifurcationKit.continuationHopf — Function
continuationHopf(F, J, hopfpointguess, par, lens1, lens2, eigenvec, eigenvec_ad, options_cont; Jᵗ, d2F, kwargs...)
Codim 2 continuation of Hopf points. This function turns an initial guess for a Hopf point into a curve of Hopf points based on a Minimally Augmented formulation. The arguments are as follows:
- F = (x, p) -> F(x, p) where p is a set of parameters
- J = (x, p) -> d_xF(x, p) associated jacobian
- hopfpointguess initial guess (x0, p1_0) for the Hopf point. It should be a Vector or a BorderedArray
- par set of parameters
- lens1 parameter axis for parameter 1
- lens2 parameter axis for parameter 2
- eigenvec guess for the iω eigenvector at p1_0
- eigenvec_ad guess for the -iω eigenvector at p1_0
- options_cont keyword arguments to be passed to the regular continuation
Optional arguments:
- Jᵗ = (x, p) -> adjoint(d_xF(x, p)) associated jacobian adjoint
- d2F = p -> ((x, p, v1, v2) -> d2F(x, p, v1, v2)) this is the hessian of F computed at (x, p) and evaluated at (v1, v2)
- kwargs keyword arguments to be passed to the regular continuation
Simplified call:
The call is as follows
continuationHopf(F, J, br::ContResult, ind_hopf::Int64, par, lens1::Lens, lens2::Lens, options_cont::ContinuationPar ; Jᵗ = nothing, d2F = nothing, kwargs...)
where the parameters are as above, except that you have to pass the branch br resulting from a call to continuation with detection of bifurcations enabled, and ind_hopf is the index of the Hopf point in br that you want to continue.
The hessian of F, when d2F is not passed, is computed with finite differences. This can be slow for problems with many unknowns, e.g. ~1e6.
The adjoint of the jacobian J is computed internally when Jᵗ = nothing by using transpose(J), which works fine when J is an AbstractArray. In this case, do not pass the jacobian adjoint like Jᵗ = (x, p) -> transpose(d_xF(x, p)), otherwise the jacobian would be computed twice!