`ChaosTools.ChaosTools`

— Module **ChaosTools.jl**

A Julia module that offers various tools for analysing nonlinear dynamics and chaotic behaviour. It can be used as a standalone package, or as part of DynamicalSystems.jl.

To install it, run `import Pkg; Pkg.add("ChaosTools")`.

All further information is provided in the documentation, which you can either find online or build locally by running the `docs/make.jl` file.

*ChaosTools.jl is the jack-of-all-trades package of the DynamicalSystems.jl library: methods that are not extensive enough to be a standalone package are added here. You should see the full DynamicalSystems.jl library for other packages that may contain functionality you are looking for but did not find in ChaosTools.jl.*

`ChaosTools._ac_period`

— Method`_ac_period(v, t; ϵ = 0.2, L = length(v)÷10)`

Use the autocorrelation function (AC). The value where the AC first comes back close to 1 is the period of the signal. The keyword `L = length(v)÷10` denotes the length of the AC (thus, given the default setting, this method will fail if there are fewer than 10 periods in the signal). The keyword `ϵ = 0.2` means that `1-ϵ` counts as "1" for the AC.
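For illustration, here is a minimal sketch of this idea in plain Julia. The function name, the biased autocorrelation estimator, and the "first local maximum above `1-ϵ`" criterion are illustrative assumptions, not the package internals:

```
using Statistics

# Sketch: normalized autocorrelation for lags 1..L; the first local maximum
# that is "close to 1" (above 1 - ϵ) is taken as the period, in samples.
function ac_period_sketch(v; ϵ = 0.2, L = length(v) ÷ 10)
    x = v .- mean(v)
    c0 = sum(abs2, x)
    ac = [sum(x[1+τ:end] .* x[1:end-τ]) / c0 for τ in 1:L]
    for τ in 2:L-1
        if ac[τ] ≥ 1 - ϵ && ac[τ-1] ≤ ac[τ] ≥ ac[τ+1]
            return τ
        end
    end
    return nothing  # fewer than ~10 periods in the signal, or no clear periodicity
end

t = 0:0.1:100
ac_period_sketch(sin.(2π .* t ./ 5))  # ≈ 50 samples, i.e. a period of ≈ 5 time units
```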

`ChaosTools._ls_period`

— Method```
_ls_period(v, t;
minimum_period::Real = 2 * (t[end] - t[1]) / length(t),
maximum_period::Real = (t[end] - t[1]) / 1.5,
kwargs... # see the documentation of LombScargle.jl for these
)
```

Uses the Lomb-Scargle algorithm to compute a periodogram. The advantage of the Lomb-Scargle method is that it does not require an equally sampled dataset, and it performs well on undersampled datasets. Constraints have been set on the period, since Lomb-Scargle tends to have false peaks at very low frequencies. That being said, it is a very flexible and extremely customizable method; the keyword arguments that can be passed to it are given in the LombScargle.jl documentation.

`ChaosTools._mt_period`

— Method```
_mt_period(v, t;
nw = 4, ntapers = DSP.ceil(2nw)-1,
window = DSP.dpss(length(s), nw, ntapers), kwargs...)
```

The multitaper method reduces estimation bias by obtaining multiple independent estimates from the same sample. Data tapers are then windowed and the power spectra are obtained. `nw`

is the time-bandwidth product, and `ntapers`

is the number of tapers. If `window`

is not specified, the signal is tapered with `ntapers`

discrete prolate spheroidal sequences with time-bandwidth product `nw`

. Each sequence is equally weighted; adaptive multitaper is not (yet) supported. If `window`

is specified, each column is applied as a taper. The sum of periodograms is normalized by the total sum of squares of `window`

.

`ChaosTools._periodogram_period`

— Method`_periodogram_period(v, t; kwargs...)`

Use the fast Fourier transform to compute a periodogram (power-spectrum) of the given data. Data must be evenly sampled.

`ChaosTools._zc_period`

— Method`_zc_period(v, t; line = mean(v))`

Find the zero crossings of the data, and use the average difference between zero crossings as the period. This is a naïve implementation, with no interpolation; however, it's useful as a sanity check.

The keyword `line` controls where the "zero point" is.

The implementation of the function was inspired by a publicly available gist, and has been modified for performance and to support arbitrary time grids.
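A minimal sketch of the zero-crossing idea in plain Julia, using one simple convention (upward crossings only); the name and the convention are illustrative, not necessarily the package's exact choices:

```
using Statistics

# Sketch: indices where the signal crosses `line` going upwards; the period is
# the mean time between consecutive upward crossings.
function zc_period_sketch(v, t; line = mean(v))
    s = v .- line
    ups = [i for i in 1:length(s)-1 if s[i] < 0 ≤ s[i+1]]  # upward crossings only
    return mean(diff(t[ups]))
end

t = 0:0.05:100
zc_period_sketch(sin.(2π .* t ./ 3), t)  # ≈ 3.0
```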

`ChaosTools.absolute_threshold`

— Function Applies the threshold step described in ^{[CheveigneYIN2002]}. It returns the index (period) corresponding to the first minimum below the `harmonic_threshold`. If that doesn't exist, it returns the index of the global minimum.

`ChaosTools.broomhead_king`

— Method`broomhead_king(s::AbstractVector, d::Int) -> U, S, Vtr`

Return the Broomhead–King coordinates of a timeseries `s` by performing `svd` on a high-dimensional delay embedding of `s` with dimension `d` and minimum delay.

**Description**

Broomhead–King coordinates are an approach proposed in ^{[Broomhead1987]} that applies the Karhunen–Loève theorem to a delay coordinates embedding with the smallest possible delay.

The function performs singular value decomposition on the `d`

-dimensional matrix $X$ of $s$,

\[X = \frac{1}{\sqrt{N}}\left( \begin{array}{cccc} x_1 & x_2 & \ldots & x_d \\ x_2 & x_3 & \ldots & x_{d+1}\\ \vdots & \vdots & \vdots & \vdots \\ x_{N-d+1} & x_{N-d+2} &\ldots & x_N \end{array} \right) = U\cdot S \cdot V^{tr}.\]

where $x := s - \bar{s}$. The columns of $U$ can then be used as a new coordinate system, and by considering the values of the singular values $S$ you can decide how many columns of $U$ are "important".
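A compact sketch of this construction in plain Julia, assuming unit delay and the `1/sqrt(N)` normalization from the formula above (the function name is illustrative):

```
using LinearAlgebra, Statistics

# Sketch: build the trajectory matrix X of the mean-subtracted signal and
# take its SVD; the columns of U are the Broomhead–King coordinates.
function broomhead_king_sketch(s::AbstractVector, d::Int)
    x = s .- mean(s)                  # x := s - s̄
    N = length(x)
    X = zeros(N - d + 1, d)
    for i in 1:N-d+1
        X[i, :] = x[i:i+d-1]          # i-th row: (x_i, x_{i+1}, …, x_{i+d-1})
    end
    X ./= sqrt(N)
    F = svd(X)
    return F.U, F.S, F.Vt
end

s = sin.(0.1 .* (1:2000)) .+ 0.05 .* randn(2000)
U, S, Vt = broomhead_king_sketch(s, 20)
S  # typically a few dominant singular values, the rest near the noise floor
```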

`ChaosTools.cumulative_mean_normalized_difference_function`

— MethodCompute cumulative mean normalized difference function (CMND), starting from the difference function `df`

. This corresponds to equation (8).

`ChaosTools.difference_function_original`

— Method`difference_function_original(x, W, τmax) -> df`

Computes the difference function of `x`

. `W`

is the window size, and `τmax`

is the maximum period. This corresponds to equation (6) in ^{[CheveigneYIN2002]}:

\[d_t(\tau) = \sum_{j=1}^W (x_j - x_{j+\tau})^2\]
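A direct sketch of this equation in plain Julia (the name, and evaluating only at the window starting at the first sample, are illustrative assumptions):

```
# Sketch of eq. (6): the difference function d_t(τ) for lags τ = 1..τmax,
# over a window of length W starting at the first sample.
function difference_function_sketch(x::AbstractVector, W::Int, τmax::Int)
    df = zeros(τmax)
    for τ in 1:τmax, j in 1:W
        df[τ] += (x[j] - x[j+τ])^2   # (x_j - x_{j+τ})², summed over the window
    end
    return df
end

x = sin.(2π .* (0:999) ./ 50)        # period of 50 samples
df = difference_function_sketch(x, 200, 120)
argmin(df)                           # 50 or one of its multiples; all are near-zero
```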

`ChaosTools.dyca`

— Method`dyca(data, eig_threshold) -> eigenvalues, proj_mat, projected_data`

Compute the Dynamical Component analysis (DyCA) of the given `data`

^{[Uhl2018]} used for dimensionality reduction.

Return the eigenvalues, projection matrix, and reduced-dimension data (which are just `data*proj_mat`

).

**Keyword Arguments**

- `norm_eigenvectors = false`: if `true`, normalize the eigenvectors

**Description**

Dynamical Component Analysis (DyCA) is a method to detect projection vectors to reduce the dimensionality of multi-variate, high-dimensional deterministic datasets. Unlike methods like PCA or ICA that make a stochasticity assumption, DyCA relies on a determinacy assumption on the time-series and is based on the solution of a generalized eigenvalue problem. After choosing an appropriate eigenvalue threshold and solving the eigenvalue problem, the obtained eigenvectors are used to project the high-dimensional dataset onto a lower dimension. The obtained eigenvalues measure the quality of the assumption of linear determinism for the investigated data. Furthermore, the number of the generalized eigenvalues with a value of approximately 1.0 are a measure of the number of linear equations contained in the dataset. This property is useful in detecting regions with highly deterministic parts in the time-series and also as a preprocessing step for reservoir computing of high-dimensional spatio-temporal data.

The generalised eigenvalue problem we solve is:

\[C_1 C_0^{-1} C_1^{\top} \bar{u} = \lambda C_2 \bar{u} \]

where $C_0$ is the correlation matrix of the data with itself, $C_1$ the correlation matrix of the data with its derivative, and $C_2$ the correlation matrix of the derivative of the data with itself. The eigenvectors $\bar{u}$ with eigenvalues approximately 1 and their $C_1^{-1} C_2 \bar{u}$ counterparts form the space onto which the data is projected.
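A rough sketch of how this eigenproblem can be set up with LinearAlgebra under simplifying assumptions (central-difference derivatives, plain time-averaged correlation matrices, a crude threshold criterion, and omitting the counterpart vectors); it is not the package implementation:

```
using LinearAlgebra

function dyca_sketch(data::AbstractMatrix, eig_threshold::Real; Δt = 1.0)
    ddata = (data[3:end, :] .- data[1:end-2, :]) ./ (2Δt)  # ẋ via central differences
    X = data[2:end-1, :]                                   # align x with ẋ
    n = size(X, 1)
    C0 = X' * X ./ n                                       # ⟨x xᵀ⟩
    C1 = ddata' * X ./ n                                   # ⟨ẋ xᵀ⟩
    C2 = ddata' * ddata ./ n                               # ⟨ẋ ẋᵀ⟩
    # generalized eigenproblem  C1 C0⁻¹ C1ᵀ ū = λ C2 ū
    vals, vecs = eigen(Symmetric(C1 * (C0 \ C1')), Symmetric(C2))
    keep = findall(>(eig_threshold), abs.(vals))           # crude stand-in for "λ ≈ 1"
    proj_mat = vecs[:, keep]
    return vals, proj_mat, X * proj_mat                    # eigenvalues, projection, projected data
end
```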

`ChaosTools.estimate_period`

— Function`estimate_period(v::Vector, method, t=0:length(v)-1; kwargs...)`

Estimate the period of the signal `v`

, with accompanying time vector `t`

, using the given `method`

.

If `t`

is an AbstractArray, then it is iterated through to ensure that it's evenly sampled (if necessary for the algorithm). To avoid this, you can pass any `AbstractRange`

, like a `UnitRange`

or a `LinRange`

, which are defined to be evenly sampled.

**Methods requiring evenly sampled data**

These methods are faster, but some are error-prone.

- `:periodogram` or `:pg`: Use the fast Fourier transform to compute a periodogram (power-spectrum) of the given data. Data must be evenly sampled.
- `:multitaper` or `:mt`: The multitaper method reduces estimation bias by using multiple independent estimates from the same sample. Data tapers are then windowed and the power spectra are obtained. Available keywords follow: `nw` is the time-bandwidth product, and `ntapers` is the number of tapers. If `window` is not specified, the signal is tapered with `ntapers` discrete prolate spheroidal sequences with time-bandwidth product `nw`. Each sequence is equally weighted; adaptive multitaper is not (yet) supported. If `window` is specified, each column is applied as a taper. The sum of periodograms is normalized by the total sum of squares of `window`.
- `:autocorrelation` or `:ac`: Use the autocorrelation function (AC). The value where the AC first comes back close to 1 is the period of the signal. The keyword `L = length(v)÷10` denotes the length of the AC (thus, given the default setting, this method will fail if there are fewer than 10 periods in the signal). The keyword `ϵ = 0.2` (`\epsilon`) means that `1-ϵ` counts as "1" for the AC.
- `:yin`: The YIN algorithm. An autocorrelation-based method to estimate the fundamental period of the signal. See the original paper ^{[CheveigneYIN2002]} or the implementation `yin`. The sampling rate is taken as `sr = 1/mean(diff(t))` if not given.


**Methods not requiring evenly sampled data**

These methods tend to be slow, but versatile and low-error.

- `:lombscargle` or `:ls`: Use the Lomb-Scargle algorithm to compute a periodogram. The advantage of the Lomb-Scargle method is that it does not require an equally sampled dataset and performs well on undersampled datasets. Constraints have been set on the period, since Lomb-Scargle tends to have false peaks at very low frequencies. That being said, it is a very flexible and extremely customizable method; the keyword arguments that can be passed to it are given in the LombScargle.jl documentation.
- `:zerocrossing` or `:zc`: Find the zero crossings of the data, and use the average difference between zero crossings as the period. This is a naïve implementation, with only linear interpolation; however, it's useful as a sanity check. The keyword `line` controls where the "crossing point" is. It defaults to `mean(v)`.

For more information on the periodogram methods, see the documentation of DSP.jl and LombScargle.jl.
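A short usage sketch following the call signature above (the signal is synthetic and the returned values are approximate):

```
using ChaosTools

t = 0:0.05:100
v = sin.(2π .* t ./ 3)                    # clean signal with period 3 in units of t

estimate_period(v, :periodogram, t)       # FFT periodogram, ≈ 3.0
estimate_period(v, :autocorrelation, t)   # first return of the AC towards 1, ≈ 3.0
```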

`ChaosTools.exit_entry_times`

— Method`exit_entry_times(ds::DynamicalSystem, u₀, εs, T; kwargs...) → exits, entries`

Collect exit and entry times for balls or boxes centered at `u₀`

with radii `εs`

, in the state space of the given dynamical system. Return the exit and (re-)entry return times to the set(s), where each of these is a vector containing all collected times for the respective `ε`

-radius set, for `ε ∈ εs`

. The dynamical system is evolved up to `T`

total time.

Use `transit_return_times(exits, entries)`

to transform the output into transit and return times, and see also `mean_return_times`

.

The keyword `show_progress`

displays a progress bar. It is `false`

for discrete and `true`

for continuous systems by default.

**Description**

Transit and return time statistics are important for the transport properties of dynamical systems^{[Meiss1997]} and can be connected with fractal dimensions of chaotic sets^{[Boev2014]}.

The current algorithm collects exit and re-entry times to given sets in the state space, which are centered at the state `u₀`. **The system evolution always starts from `u₀`** and the initial state of `ds` is irrelevant. `εs` is always a `Vector`.

**Specification of sets to return to**

If each entry of `εs`

is a real number, then sets around `u₀`

are nested hyper-spheres of radius `ε ∈ εs`

. The sets can also be hyper-rectangles (boxes), if each entry of `εs`

is a vector itself. Then, the `i`

-th box is defined by the space covered by `u0 .± εs[i]`

(thus the actual box size is `2εs[i]`

!). In the future, state space sets will be specified more conveniently and a single argument `sets`

will be given instead of `u₀, εs`

.

The reason to input multiple `εs`

at once is purely for performance optimization (much faster than doing each `ε`

individually).

**Discrete time systems**

For discrete systems, exit time is recorded immediately after exiting the set, and re-entry is recorded immediately on re-entry. This means that if an orbit needs 1 step to leave the set and then it re-enters immediately on the next step, the return time is 1.

**Continuous time systems**

For continuous systems, a steppable integrator supporting interpolation is used. The way to specify how to estimate exit and entry times is via the keyword `crossing_method`

whose values can be:

- `CrossingLinearIntersection()`: Linear interpolation is used between integrator steps and the intersection between lines and spheres is used to find the crossing times.
- `CrossingAccurateInterpolation(; abstol=1e-12, reltol=1e-6)`: Extremely accurate high order interpolation is used between integrator steps. First, a minimization with Optim.jl finds the minimum distance of the trajectory to the set center. Then, Roots.jl is used to find the exact crossing point. The tolerances are given to both procedures.

Clearly, `CrossingAccurateInterpolation` is much more accurate than `CrossingLinearIntersection`, but also much slower. However, the smaller the steps the integrator takes (in case some very high accuracy solver is used), the closer the linear intersection gets to the accurate version. Benchmarks are advised for the specific case the algorithm is applied to, in order to choose the best method.

The keyword `threshold_distance = Inf` provides a means to skip the interpolation check, if the current state of the integrator is too far from the set center. If the current state of the integrator is `threshold_distance` or more away from the set center, attempts to interpolate are skipped. By default `threshold_distance = Inf` and hence this never happens. Typically you'd want this to be 10-100 times the distance the trajectory covers at an average integrator step.

`ChaosTools.expansionentropy`

— Method`expansionentropy(ds::DynamicalSystem, sampler, isinside; kwargs...)`

Calculate the expansion entropy^{[Hunt2015]} of `ds`

, in the restraining region $S$ by estimating the slope (via linear regression) of the curve $\log E_{t0+T, t0}(f, S)$ versus $T$ (using `linear_region`

). This is an approximation of the expansion entropy $H_0$, according to^{[Hunt2015]}. Return $T$, $\log E$ and the calculated slope.

`sampler` is a 0-argument function that generates random initial conditions of `ds` and `isinside` is a 1-argument function that, given a state, returns `true` if the state is inside the restraining region. Typically `sampler, isinside` are the output of `statespace_sampler`.

**Keyword arguments**

- `N = 1000`: Number of samples taken at each batch (same as $N$ of ^{[Hunt2015]}).
- `steps = 40`: The maximal steps for which the system will be run.
- `batches = 100`: Number of batches to run the calculation, see below.
- `Δt = 1`: Time evolution step size.
- `J = nothing`: Jacobian function given to `TangentDynamicalSystem`.

**Description**

`N`

samples are initialized and propagated forwards in time (along with their tangent space). At every time $t$ in `[t0+Δt, t0+2Δt, ..., t0+steps*Δt]`

we calculate $H$:

\[H[t] = \log E_{t0+T, t0}(f, S),\]

with

\[E_{t0+T, t0}(f, S) = \frac 1 N \sum_{i'} G(Df_{t0+t, t0}(x_i))\]

(using same notation as ^{[Hunt2015]}). In principle $E$ is the average largest possible growth ratio within the restraining region (sampled by the initial conditions). The summation is only over $x_i$ that stay inside the region $S$ defined by the boolean function `isinside`

. This process is done by the `ChaosTools.expansionentropy_sample`

function.

Then, this is repeated `batches` times, as recommended in ^{[Hunt2015]}. From all these batches, the mean and std of $H$ is computed at every time point. This is done by the `expansionentropy_batch` function. When plotted versus $t$, these create the curves and error bars of e.g. Figs. 2 and 3 of ^{[Hunt2015]}.

This function `expansionentropy`

simply returns the slope of the biggest linear region of the curve $H$ versus $t$, which approximates the expansion entropy $H_0$. It is therefore *recommended* to use `expansionentropy_batch`

directly and evaluate the result yourself, as this step is known to be inaccurate for non-chaotic systems (where $H$ fluctuates strongly around 0).

`ChaosTools.expansionentropy_batch`

— Method`expansionentropy_batch(ds, sampler, restraining; kwargs...)`

Run `expansionentropy_sample` `batches` times, and return `times, mean(H), std(H)` for all resulting `H`, see `expansionentropy`.

Accepts the same arguments as `expansionentropy`

.

`ChaosTools.expansionentropy_sample`

— Method`expansionentropy_sample(tands::TangentDynamicalSystem, sampler, isinside; kwargs...)`

Return `times, H`

for one sample of `ds`

(see `expansionentropy`

). Accepts the same arguments as `expansionentropy`

, besides `batches`

.

`ChaosTools.first_return_times`

— Method`first_return_times(ds::DynamicalSystem, u0, εs, T; diffeq = NamedTuple(), kwargs...) → t`

Return the first return times `t`

to the sets centered at `u0`

with radii `εs`

for the given dynamical system. Time evolution of `ds`

always starts from `u0`

.

This function operates on the same principles as `exit_entry_times`

, so see that docstring for more info on `u0, εs`

. The only differences here are:

- If the system did not return to the set within time `T`, then `0` is returned.
- For continuous systems, the exact returned time is from the start of time evolution, up to the time to get closest back to `u0`, provided that this is at least `ε`-close.

`ChaosTools.fixedpoints`

— Function`fixedpoints(ds::CoreDynamicalSystem, box, J = nothing; kwargs...) → fp, eigs, stable`

Return all fixed points `fp`

of the given out-of-place `ds`

(either `DeterministicIteratedMap`

or `CoupledODEs`

) that exist within the state space subset `box`

for the current parameter configuration of `ds`

. Fixed points are returned as a `StateSpaceSet`

. For convenience, a vector of the Jacobian eigenvalues of each fixed point, and whether the fixed points are stable or not, are also returned.

`box`

is an appropriate `IntervalBox`

from IntervalRootFinding.jl. E.g. for a 3D system it would be something like

```
v, z = -5..5, -2..2 # 1D intervals, can use `interval(-5, 5)` instead
box = v × v × z # `\times = ×`, or use `IntervalBox(v, v, z)` instead
```

`J`

is the Jacobian of the dynamic rule of `ds`

. It is like in `TangentDynamicalSystem`

, however in this case automatic Jacobian estimation does not work, hence a hand-coded version must be given.

Internally IntervalRootFinding.jl is used and as a result we are guaranteed to find all fixed points that exist in `box`

, regardless of stability. Since IntervalRootFinding.jl returns an interval containing a unique fixed point, we return the midpoint of the interval as the actual fixed point. Naturally, limitations inherent to IntervalRootFinding.jl apply here.

The output of `fixedpoints` can be used in BifurcationKit.jl as the start of a continuation process. See also `periodicorbits`.

**Keyword arguments**

- `method = IntervalRootFinding.Krawczyk`: configures the root finding method, see the docs of IntervalRootFinding.jl for all possibilities.
- `tol = 1e-15`: the root-finding tolerance.
- `warn = true`: throw a warning if no fixed points are found.

`ChaosTools.gali`

— Method`gali(ds::DynamicalSystem, T, k::Int; kwargs...) -> GALI_k, t`

Compute $\text{GALI}_k$^{[Skokos2007]} for a given `k`

up to time `T`

. Return $\text{GALI}_k(t)$ and time vector $t$.

The third argument sets the order of `gali`. This function simply initializes a `TangentDynamicalSystem` with `k` deviation vectors and calls the method below. This means that the automatic Jacobian is used by default. Initialize manually a `TangentDynamicalSystem` if you have a hand-coded Jacobian.

**Keyword arguments**

- `threshold = 1e-12`: If `GALI_k` falls below the `threshold` iteration is terminated.
- `Δt = 1`: Time-step between deviation vector normalizations. For continuous systems this is approximate.
- `u0`: Initial state for the system. Defaults to `current_state(ds)`.

**Description**

The Generalized Alignment Index, $\text{GALI}_k$, is an efficient (and very fast) indicator of chaotic or regular behavior type in $D$-dimensional Hamiltonian systems ($D$ is number of variables). The *asymptotic* behavior of $\text{GALI}_k(t)$ depends critically on the type of orbit resulting from the initial condition. If it is a chaotic orbit, then

\[\text{GALI}_k(t) \sim \exp\left[\sum_{j=1}^k (\lambda_1 - \lambda_j)t \right]\]

with $\lambda_j$ being the `j`-th Lyapunov exponent (see `lyapunov`, `lyapunovspectrum`). If on the other hand the orbit is regular, corresponding to movement on a $d$-dimensional torus with $1 \le d \le D/2$, then it holds

\[\text{GALI}_k(t) \sim \begin{cases} \text{const.}, & \text{if} \;\; 2 \le k \le d \; \; \text{and} \; \;d > 1 \\ t^{-(k - d)}, & \text{if} \;\; d < k \le D - d \\ t^{-(2k - D)}, & \text{if} \;\; D - d < k \le D \end{cases}\]

Traditionally, if $\text{GALI}_k(t)$ does not become less than the `threshold`

until `T`

the given orbit is said to be chaotic, otherwise it is regular.

Our implementation is not based on the original paper, but rather on the method described in ^{[Skokos2016b]}, which uses the product of the singular values of $A$, a matrix that has as *columns* the deviation vectors.

`ChaosTools.gali`

— Method`gali(tands::TangentDynamicalSystem, T; threshold = 1e-12, Δt = 1)`

The low-level method that is called by `gali(ds::DynamicalSystem, ...)`

. Use this method for looping over different initial conditions or parameters by calling `reinit!`

to `tands`

.

The order of $\text{GALI}_k$ computed is the amount of deviation vectors in `tands`

.

Also use this method if you have a hand-coded Jacobian to pass when creating `tands`

.

`ChaosTools.lambdamatrix`

— Method`lambdamatrix(λ, inds::Vector{Int}, sings) -> Λk`

Return the matrix $\mathbf{\Lambda}_k$ used to create a new dynamical system with some unstable fixed points turned to stable in the function `periodicorbits`

.

**Arguments**

- `λ<:Real`: the multiplier of the $C_k$ matrix, with `0<λ<1`.
- `inds::Vector{Int}`: The `i`-th entry of this vector gives the *row* of the nonzero element of the `i`-th column of $C_k$.
- `sings::Vector{<:Real}`: The element of the `i`-th column of $C_k$ is +1 if `sings[i] > 0` and -1 otherwise (`sings` can also be a `Bool` vector).

Calling `lambdamatrix(λ, D::Int)` creates a random $\mathbf{\Lambda}_k$ by randomly generating an `inds` and a `sings` from all possible combinations. The *collections* of all these combinations can be obtained from the function `lambdaperms`.

**Description**

Each element of `inds`

*must be unique* such that the resulting matrix is orthogonal and represents the group of special reflections and permutations.

Deciding the appropriate values for `λ, inds, sings`

is not trivial. However, in ref.^{[Pingel2000]} there is a lot of information that can help with that decision. Also, by appropriately choosing various values for `λ`

, one can sort periodic orbits from e.g. least unstable to most unstable, see^{[Diakonos1998]} for details.

`ChaosTools.lambdaperms`

— Method`lambdaperms(D) -> indperms, singperms`

Return two collections that each contain all possible combinations of indices (total of $D!$) and signs (total of $2^D$) for dimension `D`

(see `lambdamatrix`

).

`ChaosTools.local_growth_rates`

— Method`local_growth_rates(ds::DynamicalSystem, points::Dataset; kwargs...) → λlocal`

Compute the local exponential growth rate(s) of perturbations of the dynamical system `ds`

for initial conditions given in `points`

. For each initial condition `u ∈ points`

, `S`

total perturbations are created and evolved exactly for time `Δt`

. The exponential local growth rate is defined simply by `log(g/g0)/Δt`

with `g0`

the initial perturbation size and `g`

the size after `Δt`

. Thus, `λlocal`

is a matrix of size `(length(points), S)`

.

This function is a modification of `lyapunov`

. It uses the full nonlinear dynamics and a `ParallelDynamicalSystem`

to evolve the perturbations, but does not do any re-scaling, thus allowing probing state and time dependence of perturbation growth. The actual growth is given by `exp(λlocal * Δt)`

.

The output of this function is sometimes called "Nonlinear Local Lyapunov Exponent".

**Keyword arguments**

- `S = 100`: Number of perturbations created and evolved for each initial condition.
- `Δt = 5`: Time each perturbation is evolved for.
- `perturbation`: If given, it should be a function `perturbation(ds, u, j)` that outputs a perturbation vector (preferably an `SVector`) given the system, current initial condition `u` and the counter `j ∈ 1:S`. If not given, a random perturbation is generated with norm given by the keyword `e = 1e-6`.

`ChaosTools.local_minima`

— Method Quickly written version for finding local minima by comparing first neighbors only. TODO: more detailed implementations are available in Peaks.jl or Images.jl; maybe this should be replaced by them.

`ChaosTools.lyapunov`

— Method`lyapunov(ds::DynamicalSystem, T; kwargs...) -> λ`

Calculate the maximum Lyapunov exponent `λ`

using a method due to Benettin ^{[Benettin1976]}, which simply evolves two neighboring trajectories (one called "given" and one called "test") while constantly rescaling the test one.

`T`

denotes the total time of evolution (should be `Int`

for discrete time systems).

See also `lyapunovspectrum`

, `local_growth_rates`

.

**Keyword arguments**

- `show_progress = false`: Display a progress bar of the process.
- `u0 = initial_state(ds)`: Initial condition.
- `Ttr = 0`: Extra "transient" time to evolve the trajectories before starting to measure the exponent. Should be `Int` for discrete systems.
- `d0 = 1e-9`: Initial & rescaling distance between the two neighboring trajectories.
- `d0_lower = 1e-3*d0`: Lower distance threshold for rescaling.
- `d0_upper = 1e+3*d0`: Upper distance threshold for rescaling.
- `Δt = 1`: Time of evolution between each check/rescaling of the distance. For continuous time systems this is approximate.
- `inittest = (u1, d0) -> u1 .+ d0/sqrt(length(u1))`: A function that given `(u1, d0)` initializes the test state with distance `d0` from the given state `u1` (`length(u1)` is the dimension `D` of the system). This function can be used when you want to avoid the test state appearing in a region of the phase-space where it would have e.g. different energy or escape to infinity.

**Description**

Two neighboring trajectories with initial distance `d0`

are evolved in time. At time $t_i$ if their distance $d(t_i)$ either exceeds the `d0_upper`

, or is lower than `d0_lower`

, the test trajectory is rescaled back to having distance `d0`

from the reference one, while the rescaling keeps the difference vector along the maximal expansion/contraction direction: $u_2 \to u_1+(u_2−u_1)/(d(t_i)/d_0)$.

The maximum Lyapunov exponent is the average of the time-local Lyapunov exponents

\[\lambda = \frac{1}{t_{n} - t_0}\sum_{i=1}^{n} \ln\left( a_i \right),\quad a_i = \frac{d(t_{i})}{d_0}.\]
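For a concrete picture of the procedure, here is a bare-bones sketch of the same two-trajectory rescaling for the logistic map in plain Julia. The map and all names are illustrative; the package method works for any `DynamicalSystem` and only rescales when the distance thresholds are crossed:

```
# Benettin-style sketch: evolve a reference and a test trajectory, rescale the
# test one back to distance d0 every step, average the logs of the growth ratios.
function lyapunov_sketch(r; x0 = 0.4, T = 100_000, d0 = 1e-9, Ttr = 1000)
    f(x) = r * x * (1 - x)
    x = x0
    for _ in 1:Ttr
        x = f(x)                    # discard a transient
    end
    y = x + d0                      # test trajectory at distance d0
    λsum = 0.0
    for _ in 1:T
        x, y = f(x), f(y)
        d = abs(y - x)
        λsum += log(d / d0)         # local growth ratio a_i = d(t_i)/d0
        y = x + d0 * (y - x) / d    # rescale along the expansion direction
    end
    return λsum / T
end

lyapunov_sketch(4.0)  # ≈ log(2) ≈ 0.693 for the fully chaotic logistic map
```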

**Performance notes**

This function simply initializes a `ParallelDynamicalSystem`

and calls the method below.

`ChaosTools.lyapunov`

— Method`lyapunov(pds::ParallelDynamicalSystem, T; Ttr, Δt, d0, d0_upper, d0_lower)`

The low-level method that is called by `lyapunov(ds::DynamicalSystem, ...)`

. Use this method for looping over different initial conditions or parameters by calling `reinit!`

to `pds`

.

`ChaosTools.lyapunov_from_data`

— Method`lyapunov_from_data(R::Dataset, ks; kwargs...)`

For the given dataset `R`

, which is expected to represent a trajectory of a dynamical system, calculate and return `E(k)`

, which is the average logarithmic distance between states of a neighborhood that are evolved in time for `k`

steps (`k`

must be integer). The slope of `E`

vs `k`

approximates the maximum Lyapunov exponent.

Typically `R`

is the result of delay coordinates embedding of a timeseries (see DelayEmbeddings.jl).

**Keyword arguments**

- `refstates = 1:(length(R) - ks[end])`: Vector of indices that notes which states of the dataset should be used as "reference states", which means that the algorithm is applied for all state indices contained in `refstates`.
- `w::Int = 1`: The Theiler window.
- `ntype = NeighborNumber(1)`: The neighborhood type. Either `NeighborNumber` or `WithinRange`. See Neighborhoods for more info.
- `distance = FirstElement()`: Specifies what kind of distance function is used in the logarithmic distance of nearby states. Allowed distance values are `FirstElement()` or `Euclidean()`, see below for more info. The metric for finding neighbors is always the Euclidean one.

**Description**

If the dataset exhibits exponential divergence of nearby states, then it should hold

\[E(k) \approx \lambda\cdot k \cdot \Delta t + E(0)\]

for a *well defined region* in the $k$ axis, where $\lambda$ is the approximated maximum Lyapunov exponent. $\Delta t$ is the time between samples in the original timeseries. You can use `linear_region`

with arguments `(ks .* Δt, E)`

to identify the slope (= $\lambda$) immediately, assuming you have chosen sufficiently good `ks`

such that the linear scaling region is bigger than the saturated region.

The algorithm used in this function is due to Parlitz ^{[Skokos2016]}, which itself expands upon Kantz ^{[Kantz1994]}. In short, for each reference state a neighborhood is evaluated. Then, for each point in this neighborhood, the logarithmic distance between reference state and neighborhood state(s) is calculated as the "time" index `k` increases. The average of the above over all neighborhood states over all reference states is the returned result.

If the `distance`

is `Euclidean()`

then use the Euclidean distance of the full `D`

-dimensional points (distance $d_E$ in ref.^{[Skokos2016]}). If however the `distance`

is `FirstElement()`

, calculate the absolute distance of *only the first elements* of the points of `R`

(distance $d_F$ in ref.^{[Skokos2016]}, useful when `R`

comes from delay embedding).

`ChaosTools.lyapunovspectrum`

— Function`lyapunovspectrum(ds::DynamicalSystem, N, k = dimension(ds); kwargs...) -> λs`

Calculate the spectrum of Lyapunov exponents ^{[Lyapunov1992]} of `ds`

by applying a QR-decomposition on the parallelepiped defined by the deviation vectors, in total for `N`

evolution steps. Return the spectrum sorted from maximum to minimum. The third argument `k`

is optional, and dictates how many Lyapunov exponents to calculate (defaults to `dimension(ds)`

).

See also `lyapunov`

, `local_growth_rates`

.

**Note:** This function simply initializes a `TangentDynamicalSystem`

and calls the method below. This means that the automatic Jacobian is used by default. Initialize manually a `TangentDynamicalSystem`

if you have a hand-coded Jacobian.

**Keyword arguments**

- `u0 = current_state(ds)`: State to start from.
- `Ttr = 0`: Extra transient time to evolve the system before application of the algorithm. Should be `Int` for discrete systems. Both the system and the deviation vectors are evolved for this time.
- `Δt = 1`: Time of individual evolutions between successive orthonormalization steps. For continuous systems this is approximate.
- `show_progress = false`: Display a progress bar of the process.

**Description**

The method we employ is "H2" of ^{[Geist1990]}, originally stated in ^{[Benettin1980]}, and explained in educational form in ^{[DatserisParlitz2022]}.

The deviation vectors defining a `D`

-dimensional parallelepiped in tangent space are evolved using the tangent dynamics of the system (see `TangentDynamicalSystem`

). A QR-decomposition at each step yields the local growth rate for each dimension of the parallelepiped. At each step the parallelepiped is re-normalized to be orthonormal. The growth rates are then averaged over `N`

successive steps, yielding the Lyapunov exponent spectrum.

`ChaosTools.lyapunovspectrum`

— Method`lyapunovspectrum(tands::TangentDynamicalSystem, N::Int; Ttr, Δt, show_progress)`

The low-level method that is called by `lyapunovspectrum(ds::DynamicalSystem, ...)`

. Use this method for looping over different initial conditions or parameters by calling `reinit!`

to `tands`

.

Also use this method if you have a hand-coded Jacobian to pass when creating `tands`

.

`ChaosTools.matrix_gradient`

— Method`matrix_gradient(matrix)`

Compute the gradient of a matrix along 1st axis.

**Description**

Compute the gradient using second order accurate central differences in the interior points and first order accurate one-sided differences at the boundaries. The standard second order approximation in the interior is:

\[\hat{f}_i^{(1)} = \frac{f(x_{i+1})-f(x_{i-1})}{2h} + O(h^2)\]

The returned gradient matrix hence has the same shape as the input array. Here we compute the gradient along axis=1 (row-wise), so to compute gradient along axis=2 (column-wise), the transpose of the input matrix must be given.
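A minimal sketch of this scheme (the function name is illustrative):

```
# Gradient along the first axis: central differences in the interior,
# one-sided differences at the first and last rows (unit spacing assumed).
function matrix_gradient_sketch(M::AbstractMatrix)
    G = zeros(size(M))
    n = size(M, 1)
    G[1, :] = M[2, :] .- M[1, :]                    # forward difference at the first row
    G[n, :] = M[n, :] .- M[n-1, :]                  # backward difference at the last row
    for i in 2:n-1
        G[i, :] = (M[i+1, :] .- M[i-1, :]) ./ 2     # central difference, O(h²)
    end
    return G
end

matrix_gradient_sketch([1.0 2.0; 3.0 4.0; 6.0 8.0])  # row-wise gradient; transpose the input for column-wise
```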

`ChaosTools.maximalexpansion`

— Method`maximalexpansion(M)`

Calculate the maximal expansion rate of `M`, i.e. the product of all singular values of `M` that are greater than 1. In the notation of ^{[Hunt2015]}, it is the function $G$.
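A one-line sketch of this quantity (the name is illustrative):

```
using LinearAlgebra

# Product of the singular values of M that exceed 1 (the function G of Hunt & Ott).
maximalexpansion_sketch(M) = prod(filter(>(1), svdvals(M)); init = 1.0)

maximalexpansion_sketch([2.0 0.0; 0.0 0.5])  # = 2.0, only the singular value 2 counts
```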

`ChaosTools.mean_return_times`

— Method`mean_return_times(ds::DynamicalSystem, u₀, εs, T; kwargs...) → τ, c`

Return the mean return times `τ`

, as well as the amount of returns `c`

, for subsets of the state space of `ds`

defined by `u₀, εs`

. The `ds`

is evolved for a maximum of `T`

time.

This function is a convenience wrapper around calls to `exit_entry_times`

and then to `transit_return`

and then some averaging. Thus see `exit_entry_times`

for the meaning of `u₀`

and `εs`

and further info.

`ChaosTools.mmsd`

— Method Modified mean square displacement.

`ChaosTools.orbitdiagram`

— Method`orbitdiagram(ds::DynamicalSystem, i, p_index, pvalues; kwargs...) → od`

Compute the orbit diagram (sometimes wrongly called bifurcation diagram) of the given dynamical system, saving the `i`

variable(s) for parameter values `pvalues`

. The `p_index`

specifies which parameter to change via `set_parameter!(ds, p_index, pvalue)`

. Works for any kind of `DynamicalSystem`

, although it mostly makes sense with one of `DeterministicIteratedMap, StroboscopicMap, PoincareMap`

.

An orbit diagram is simply a collection of the last `n`

states of `ds`

as `ds`

is evolved. This is done for each parameter value.

`i`

can be `Int`

or `AbstractVector{Int}`

. If `i`

is `Int`

, `od`

is a vector of vectors. Else `od`

is a vector of vectors of vectors. Each entry of `od`

are the points at each parameter value, so that `length(od) == length(pvalues)`

and `length(od[j]) == n, ∀ j`

.

**Keyword arguments**

- `n::Int = 100`: Amount of points to save for each parameter value.
- `Δt = 1`: Stepping time between saving points.
- `u0 = nothing`: Specify an initial state. If `nothing`, the previous state after each parameter is used to seed the new initial condition at the new parameter (with the very first state being the system's state). This makes convergence to the attractor faster, necessitating smaller `Ttr`. Otherwise `u0` can be a standard state, or a vector of states, so that a specific state is used for each parameter.
- `Ttr::Int = 10`: Each orbit is evolved for `Ttr` first before saving output.
- `ulims = (-Inf, Inf)`: Only record system states within `ulims` (only valid if `i isa Int`). Iteration continues until `n` states fall within `ulims`.
- `show_progress = false`: Display a progress bar (counting the parameter values).
- `periods = nothing`: Only valid if `ds isa StroboscopicMap`. If given, it must be a container with the same layout as `pvalues`, providing a value for the `period` for each parameter value. Useful in case the orbit diagram is produced versus a driving frequency.
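A short usage sketch, assuming the logistic map written as a `DeterministicIteratedMap` (the rule, initial state, and parameter values below are illustrative):

```
using DynamicalSystems, StaticArrays

# logistic map xₙ₊₁ = r xₙ (1 - xₙ); p[1] is the parameter r
logistic_rule(u, p, n) = SVector(p[1] * u[1] * (1 - u[1]))
ds = DeterministicIteratedMap(logistic_rule, SVector(0.4), [3.5])

rs = 3.4:0.001:4.0                        # parameter values to scan (p_index = 1)
od = orbitdiagram(ds, 1, 1, rs; n = 200, Ttr = 1000)

length(od) == length(rs)                  # true: one vector of 200 saved states per parameter
```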

`ChaosTools.parabolic_interpolation`

— Method Calculates the parabolic (quadratic) interpolation for all the indices of `y` given in `idxs_interpolate`, and returns the coordinates of the respective parabola's minimum. Assumes the spacing between all adjacent x values of `y` is 1.

`ChaosTools.periodicorbits`

— Method```
periodicorbits(ds::DeterministicIteratedMap,
o, ics [, λs, indss, singss]; kwargs...) -> FP
```

Find fixed points `FP`

of order `o`

for the map `ds`

using the algorithm due to Schmelcher & Diakonos^{[Schmelcher1997]}. `ics`

is a collection of initial conditions (container of vectors) to be evolved.

**Optional arguments**

The optional arguments `λs, indss, singss`

*must be containers* of appropriate values, besides `λs`

which can also be a number. The elements of those containers are passed to: `lambdamatrix(λ, inds, sings)`

, which creates the appropriate $\mathbf{\Lambda}_k$ matrix. If these arguments are not given, a random permutation will be chosen for them, with `λ=0.001`

.

**Keyword arguments**

- `maxiters::Int = 100000`: Maximum amount of iterations an i.c. will be iterated before claiming it has not converged.
- `disttol = 1e-10`: Distance tolerance. If the 2-norm of a previous state with the next one is `≤ disttol` then it has converged to a fixed point.
- `inftol = 10.0`: If a state reaches `norm(state) ≥ inftol` it is assumed that it has escaped to infinity (and is thus abandoned).
- `roundtol::Int = 4`: The found fixed points are rounded to `roundtol` digits before being pushed into the list of returned fixed points `FP`, *if* they are not already contained in `FP`. This is done so that `FP` doesn't contain duplicate fixed points (notice that this has nothing to do with `disttol`).

**Description**

The algorithm used can detect periodic orbits by turning fixed points of the original map `ds`

to stable ones, through the transformation

\[\mathbf{x}_{n+1} = \mathbf{x}_n + \mathbf{\Lambda}_k\left(f^{(o)}(\mathbf{x}_n) - \mathbf{x}_n\right)\]

The index $k$ counts the various possible $\mathbf{\Lambda}_k$.
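To illustrate the stabilizing effect of this transformation, here is a minimal 1D sketch where $\mathbf{\Lambda}_k$ reduces to a scalar and `o = 1` (all names and values are illustrative):

```
# Iterating x ← x + λ (f(x) - x) with a small λ converges to a fixed point of f
# even when that fixed point is unstable under f itself.
function stabilized_fixedpoint_sketch(f, x; λ = 0.05, iters = 10_000)
    for _ in 1:iters
        x = x + λ * (f(x) - x)
    end
    return x
end

f(x) = 4x * (1 - x)                  # logistic map at r = 4
stabilized_fixedpoint_sketch(f, 0.3) # ≈ 0.75, the unstable fixed point of f
```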

**Performance notes**

*All* initial conditions are evolved for *all* $\mathbf{\Lambda}_k$ which can very quickly lead to long computation times.

`ChaosTools.predictability`

— Method`predictability(ds::CoreDynamicalSystem; kwargs...) -> chaos_type, ν, C`

Determine whether `ds`

displays strongly chaotic, partially-predictable chaotic or regular behaviour, using the method by Wernecke et al. described in^{[Wernecke2017]}.

Return the type of the behavior, the cross-distance scaling coefficient `ν`

and the correlation coefficient `C`

. Typical values for `ν`

, `C`

and `chaos_type`

are given in Table 2 of^{[Wernecke2017]}:

| `chaos_type` | `ν` | `C` |
|---|---|---|
| `:SC` | 0 | 0 |
| `:PPC` | 0 | 1 |
| `:REG` | 1 | 1 |

If none of these conditions apply, the return value is `:IND`

(for indeterminate).

**Keyword arguments**

- `Ttr = 200`: Extra transient time to evolve the system before sampling from the trajectory. Should be `Int` for discrete systems.
- `T_sample = 1e4`: Time to evolve the system for taking samples. Should be `Int` for discrete systems.
- `n_samples = 500`: Number of samples to take for use in calculating statistics.
- `λ_max = lyapunov(ds, 5000)`: Value to use for largest Lyapunov exponent for finding the Lyapunov prediction time. If it is less than zero a regular result is returned immediately.
- `d_tol = 1e-3`: Tolerance distance to use for calculating the Lyapunov prediction time.
- `T_multiplier = 10`: Multiplier from the Lyapunov prediction time to the evaluation time.
- `T_max = Inf`: Maximum time at which to evaluate trajectory distance. If the internally computed evaluation time is larger than `T_max`, stop at `T_max` instead. **It is strongly recommended to manually set this!**
- `δ_range = 10.0 .^ (-9:-6)`: Range of initial condition perturbation distances to use to determine scaling `ν`.
- `ν_threshold = C_threshold = 0.5`: Thresholds for scaling coefficients (they become 0 or 1 if they are less or more than the threshold).

**Description**

The algorithm samples points from a trajectory of the system to be used as initial conditions. Each of these initial conditions is randomly perturbed by a distance `δ`

, and the trajectories for both the original and perturbed initial conditions are evolved up to the 'evaluation time' `T`

(see below its definition).

The average (over the samples) distance and cross-correlation coefficient of the state at time `T`

is computed. This is repeated for a range of `δ`

(defined by `δ_range`

), and linear regression is used to determine how the distance and cross-correlation scale with `δ`

, allowing for identification of chaos type.

The evaluation time `T`

is calculated as `T = T_multiplier*Tλ`

, where the Lyapunov prediction time `Tλ = log(d_tol/δ)/λ_max`

. This may be very large if the `λ_max`

is small, e.g. when the system is regular, so this internally computed time `T`

can be overridden by a smaller `T_max`

set by the user.

**Performance Notes**

For continuous systems, it is likely that the `maxiters`

used by the integrators needs to be increased, e.g. to 1e9. This is part of the `diffeq`

kwargs. In addition, be aware that this function does a *lot* of internal computations; it operates at a different (slower) speed than e.g. `lyapunov`.

`ChaosTools.produce_orbitdiagram`

— Method`produce_orbitdiagram(ds::CoupledODEs, P, args...; kwargs...)`

Shortcut function for producing an orbit diagram for a `CoupledODEs`

. The function simply transforms `ds`

to either a `PoincareMap`

or a `StroboscopicMap`

, based on `P`

, and then calls `orbitdiagram(map, args...; kwargs...)`

.

If `P isa Union{Tuple, Vector}`

then a `PoincareMap`

is created with `P`

as the `plane`

. If `P isa Real`

, then a `StroboscopicMap`

is created with `P`

the period.

See ^{[DatserisParlitz2022]} chapter 4 for a discussion on why making a map is useful.

`ChaosTools.refine_local_minima`

— Method Returns the refined local minima of `y`, along with their refined `x` values and the corresponding indices. Each minimum is refined by taking the minimum of the parabola obtained by interpolating the minimum together with its first neighbors. Also returns the indices of `y` (`x_nominal`) and the value `x` corresponding to each minimum.

`ChaosTools.testchaos01`

— Function`testchaos01(x::Vector [, cs, N0]) -> chaotic?`

Perform the so called "0-1" test for chaos introduced by Gottwald and Melbourne^{[Gottwald2016]} on the timeseries `x`

. Return `true`

if `x`

is chaotic, `false`

otherwise.

**Description**

This method tests if the given timeseries is chaotic or not by transforming it into a two-dimensional diffusive process like so:

\[p_n = \sum_{j=1}^{n}\phi_j \cos(j c),\quad q_n = \sum_{j=1}^{n}\phi_j \sin(j c)\]

If the timeseries is chaotic, the mean square displacement of the process grows as `sqrt(length(x))`

, while it stays constant if the timeseries is regular.

The implementation here computes `K`

, a coefficient measuring the growth of the mean square displacement, and simply checks if `K > 0.5`

. `K`

is the median of $K_c$ over given `c`

, see the reference.

If you want to access the various `Kc`

you should call the method `testchaos01(x, c::Real, N0)`

which returns `Kc`

. In fact, the high level method is just `median(testchaos01(x, c, N0) for c in cs) > 0.5`

.

`cs`

defaults to `3π/5*rand(100) + π/4`

and `N0`

, the length of the two-dimensional process, is `N0 = length(x)/10`

.

For data sampled from continuous dynamical systems, some care must be taken regarding the values of `cs`

. Also note that this method performs rather poorly in the presence of even slight amounts of noise, returning `true` for noisy timeseries. Some possibilities to alleviate this exist, but they are specific to the context of the application. See ^{[Gottwald2016]} for more info.

`ChaosTools.transit_return_times`

— Method`transit_return_times(exits, entries) → transits, returns`

Convert the output of `exit_entry_times`

to the transit and return times. The outputs here are vectors of vectors just like in `exit_entry_times`

.

`ChaosTools.yin`

— Method`yin(sig::Vector, sr::Int; kwargs...) -> F0s, frame_times`

Estimate the fundamental frequency (F0) of the signal `sig`

using the YIN algorithm ^{[CheveigneYIN2002]}. The signal `sig`

is a vector of points uniformly sampled at a rate `sr`

.

**Keyword arguments**

- `w_len`: size of the analysis window [samples == number of points]
- `f_step`: size of the lag between two consecutive frames [samples == number of points]
- `f0_min`: Minimum fundamental frequency that can be detected [linear frequency]
- `f0_max`: Maximum fundamental frequency that can be detected [linear frequency]
- `harmonic_threshold`: Threshold of detection. The algorithm returns the first minimum of the CMNDF function below this threshold.
- `difference_function`: The difference function to be used (by default `ChaosTools.difference_function_original`).

**Description**

The YIN algorithm ^{[CheveigneYIN2002]} estimates the signal's fundamental frequency `F0`

by basically looking for the period `τ0`

which minimizes the signal's autocorrelation. This autocorrelation is calculated for signal segments (frames), composed of two windows of length `w_len`

. Each window is separated by a distance `τ`

, and the idea is that the distance which minimizes the pairwise difference between each window is considered to be the fundamental period `τ0`

of that frame.

More precisely, the algorithm first computes the cumulative mean normalized difference function (MNDF) between two windows of a frame for several candidate periods `τ`

ranging from `τ_min=sr/f0_max`

to `τ_max=sr/f0_min`

. The MNDF is defined as

\[d_t^\prime(\tau) = \begin{cases} 1 & \text{if} ~ \tau=0 \\ d_t(\tau)/\left[{(\frac 1 \tau) \sum_{j=1}^{\tau} d_{t}(j)}\right] & \text{otherwise} \end{cases}\]

where `d_t`

is the difference function:

\[d_t(\tau) = \sum_{j=1}^W (x_j - x_{j+\tau})^2\]
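A minimal sketch of the cumulative mean normalized difference function built from such a difference function, indexed from `τ = 1` (the `τ = 0` case is defined as 1; the name is illustrative):

```
# CMNDF: d'(τ) = d(τ) / [ (1/τ) Σ_{j=1}^{τ} d(j) ]
function cmndf_sketch(df::AbstractVector)
    out = similar(df, float(eltype(df)))
    running = 0.0
    for τ in eachindex(df)
        running += df[τ]
        out[τ] = running == 0 ? 1.0 : df[τ] * τ / running
    end
    return out
end

cmndf_sketch([4.0, 1.0, 0.1, 3.0])  # ≈ [1.0, 0.4, 0.06, 1.48]
```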

It then refines the local minima of the MNDF using parabolic (quadratic) interpolation. This is done by taking each minimum, along with its first neighbor points, and finding the minimum of the corresponding interpolated parabola. The MNDF minima are substituted by the interpolation minima. Finally, the algorithm chooses the minimum with the smallest period and with a corresponding MNDF below the `harmonic_threshold`

. If this doesn't exist, it chooses the period corresponding to the global minimum. It repeats this for frames starting at the first signal point, and separated by a distance `f_step`

(frames can overlap), and returns the vector of frequencies `F0=sr/τ0`

for each frame, along with the start times of each frame.

As a note, the physical unit of the frequency is 1/[time], where [time] is decided by the sampling rate `sr`

. If, for instance, the sampling rate is over seconds, then the frequency is in Hertz.


- [Broomhead1987] Broomhead, Jones & King, J. Phys. A **20**, 9, pp L563 (1987)
- [Uhl2018] B. Seifert, K. Korn, S. Hartmann & C. Uhl, *Dynamical Component Analysis (DyCA): Dimensionality Reduction for High-Dimensional Deterministic Time-Series*, 10.1109/mlsp.2018.8517024, 2018 IEEE 28th International Workshop on Machine Learning for Signal Processing (MLSP)
- [CheveigneYIN2002] De Cheveigné, A., & Kawahara, H. (2002). YIN, a fundamental frequency estimator for speech and music. The Journal of the Acoustical Society of America, 111(4), 1917-1930.
- [Meiss1997] Meiss, J. D., *Average exit time for volume-preserving maps*, Chaos (1997)
- [Boev2014] Boev, Vadivasova & Anishchenko, *Poincaré recurrence statistics as an indicator of chaos synchronization*, Chaos (2014)
- [Hunt2015] Hunt & Ott, *Defining Chaos*, Chaos 25.9 (2015)
- [Skokos2007] Skokos, C. H. *et al.*, Physica D **231**, pp 30–54 (2007)
- [Skokos2016b] Skokos, C. H. *et al.*, *Chaos Detection and Predictability* - Chapter 5 (section 5.3.1 and ref. [85] therein), Lecture Notes in Physics **915**, Springer (2016)
- [Pingel2000] D. Pingel *et al.*, Phys. Rev. E **62**, pp 2119 (2000)
- [Diakonos1998] F. K. Diakonos *et al.*, Phys. Rev. Lett. **81**, pp 4349 (1998)
- [Benettin1976] G. Benettin *et al.*, Phys. Rev. A **14**, pp 2338 (1976)
- [Skokos2016] Skokos, C. H. *et al.*, *Chaos Detection and Predictability* - Chapter 1 (section 1.3.2), Lecture Notes in Physics **915**, Springer (2016)
- [Kantz1994] Kantz, H., Phys. Lett. A **185**, pp 77–87 (1994)
- [Lyapunov1992] A. M. Lyapunov, *The General Problem of the Stability of Motion*, Taylor & Francis (1992)
- [Geist1990] K. Geist *et al.*, Progr. Theor. Phys. **83**, pp 875 (1990)
- [Benettin1980] G. Benettin *et al.*, Meccanica **15**, pp 9-20 & 21-30 (1980)
- [DatserisParlitz2022] Datseris & Parlitz 2022, *Nonlinear Dynamics: A Concise Introduction Interlaced with Code*, Springer Nature, Undergrad. Lect. Notes In Physics
- [Schmelcher1997] P. Schmelcher & F. K. Diakonos, Phys. Rev. Lett. **78**, pp 4733 (1997)
- [Wernecke2017] Wernecke, H., Sándor, B. & Gros, C., *How to test for partially predictable chaos*, Scientific Reports **7** (2017)
- [Gottwald2016] Gottwald & Melbourne, *The 0-1 test for chaos: A review*, Lect. Notes Phys., vol. 915, pp. 221–247 (2016)