DiffEqFlux.DimMoverType
DimMover(from, to)

Constructs a Dimension Mover Layer.

Using it as the last layer of a Lux.AbstractExplicitLayer chain swaps the batch index and the time index of the Neural DE's output, recovering Lux's conventional (data, channel, batch) ordering when each time point is treated as a channel.

DiffEqFlux.FFJORDType
FFJORD(model, tspan, input_dims, args...; ad = nothing, basedist = nothing, kwargs...)

Constructs a continuous-time recurrent neural network, also known as a neural ordinary differential equation (neural ODE), with fast gradient calculation via adjoints [1], specialized for density estimation based on continuous normalizing flows (CNF) [2], and using a stochastic approach [3] to compute the trace of the dynamics' Jacobian. At a high level this corresponds to the following steps:

  1. Parameterize the variable of interest x(t) as a function f(z, θ, t) of a base variable z(t) with known density p_z.
  2. Use the transformation of variables formula to predict the density p_x as a function of the density p_z and the trace of the Jacobian of f.
  3. Choose the parameter θ to minimize a loss function of p_x (usually the negative likelihood of the data).

After these steps one may use the NN model and the learned θ to predict the density p_x for new values of x.

Arguments:

  • model: A Flux.Chain or Lux.AbstractExplicitLayer neural network that defines the dynamics of the model.
  • basedist: Distribution of the base variable. Set to the unit normal by default.
  • input_dims: Input Dimensions of the model.
  • tspan: The timespan to be solved on.
  • args: Additional arguments splatted to the ODE solver. See the Common Solver Arguments documentation for more details.
  • ad: The automatic differentiation method to use for the internal Jacobian trace. Defaults to AutoForwardDiff() when the full Jacobian must be computed (i.e. monte_carlo = false), and to AutoZygote() otherwise.
  • kwargs: Additional arguments splatted to the ODE solver. See the Common Solver Arguments documentation for more details.

References:

[1] Pontryagin, Lev Semenovich. Mathematical theory of optimal processes. CRC press, 1987.

[2] Chen, Ricky TQ, Yulia Rubanova, Jesse Bettencourt, and David Duvenaud. "Neural ordinary differential equations." In Proceedings of the 32nd International Conference on Neural Information Processing Systems, pp. 6572-6583. 2018.

[3] Grathwohl, Will, Ricky TQ Chen, Jesse Bettencourt, Ilya Sutskever, and David Duvenaud. "FFJORD: Free-form continuous dynamics for scalable reversible generative models." arXiv preprint arXiv:1810.01367 (2018).
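
A minimal construction sketch following the signature above; the network architecture, the time span, the 1-dimensional input_dims, and the Tsit5 solver are illustrative choices rather than part of the docstring:

using DiffEqFlux, Lux, OrdinaryDiffEq

# 1-dimensional FFJORD over the time span (0, 1); layer sizes are arbitrary.
nn = Chain(Dense(1 => 8, tanh), Dense(8 => 1))
ffjord = FFJORD(nn, (0.0f0, 1.0f0), (1,), Tsit5())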

DiffEqFlux.FFJORDDistributionType

FFJORD can be used as a distribution to generate new samples by rand or estimate densities by pdf or logpdf (from Distributions.jl).

Arguments:

  • model: A FFJORD instance.
  • regularize: Whether we use regularization (default: false).
  • monte_carlo: Whether we use monte carlo (default: true).
DiffEqFlux.HamiltonianNNType
HamiltonianNN(model; ad = AutoForwardDiff())

Constructs a Hamiltonian Neural Network [1]. This neural network is useful for learning symmetries and conservation laws by supervision on the gradients of the trajectories. It takes as input a concatenated vector of length 2n containing the position (of size n) and momentum (of size n) of the particles. It then returns the time derivatives for position and momentum.

Note

This doesn't solve the Hamiltonian Problem. Use NeuralHamiltonianDE for such applications.

Arguments:

  • model: A Flux.Chain or Lux.AbstractExplicitLayer neural network that returns the Hamiltonian of the system.
  • ad: The autodiff framework to be used for the internal Hamiltonian computation. The default is AutoForwardDiff().
Note

If training with Zygote, ensure that the chunksize for AutoForwardDiff is set to nothing.

References:

[1] Greydanus, Samuel, Misko Dzamba, and Jason Yosinski. "Hamiltonian Neural Networks." Advances in Neural Information Processing Systems 32 (2019): 15379-15389.
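
A brief construction sketch for one particle in one dimension, so the input [q; p] has length 2 and the network returns the scalar Hamiltonian; the layer sizes are illustrative assumptions:

using DiffEqFlux, Lux

# AutoForwardDiff comes from ADTypes.jl; add `using ADTypes` if it is not re-exported.
hnn = HamiltonianNN(Chain(Dense(2 => 16, tanh), Dense(16 => 1)); ad = AutoForwardDiff())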

DiffEqFlux.NeuralCDDEType
NeuralCDDE(model, tspan, hist, lags, alg = nothing, args...;
    sensealg = TrackerAdjoint(), kwargs...)

Constructs a neural delay differential equation (neural DDE) with constant delays.

Arguments:

  • model: A Flux.Chain or Lux.AbstractExplicitLayer neural network that defines the derivative function. Should take an input of size [x; x(t - lag_1); ...; x(t - lag_n)] and produce an output shaped like x.
  • tspan: The timespan to be solved on.
  • hist: Defines the history function h(u, p, t) for values before the start of the integration. Note that u is supposed to be used to return a value that matches the size of u.
  • lags: Defines the lagged values that should be utilized in the neural network.
  • alg: The algorithm used to solve the DDE. Defaults to nothing, i.e. the default algorithm from DifferentialEquations.jl.
  • sensealg: The choice of differentiation algorithm used in the backpropagation. Defaults to reverse-mode automatic differentiation via Tracker.jl.
  • kwargs: Additional arguments splatted to the ODE solver. See the Common Solver Arguments documentation for more details.
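
A hedged construction sketch: a 2-state neural DDE with two constant lags, so the network input [x; x(t - 0.1); x(t - 0.2)] has length 6. The constant-zero history, the lag values, and the solver are illustrative assumptions:

using DiffEqFlux, Lux, DelayDiffEq, OrdinaryDiffEq

nn = Chain(Dense(6 => 16, tanh), Dense(16 => 2))
hist(u, p, t) = zero(u)                    # history before the start of integration
ncdde = NeuralCDDE(nn, (0.0f0, 1.0f0), hist, (0.1f0, 0.2f0), MethodOfSteps(Tsit5()))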
DiffEqFlux.NeuralDAEType
NeuralDAE(model, constraints_model, tspan, args...; differential_vars = nothing,
    sensealg = TrackerAdjoint(), kwargs...)

Constructs a neural differential-algebraic equation (neural DAE).

Arguments:

  • model: A Flux.Chain or Lux.AbstractExplicitLayer neural network that defines the derivative function. Should take an input of the same size as x and produce the residual of f(dx, x, t) for the differential variables only.
  • constraints_model: A function constraints_model(u,p,t) for the fixed constraints to impose on the algebraic equations.
  • tspan: The timespan to be solved on.
  • alg: The algorithm used to solve the DAE. Defaults to nothing, i.e. the default algorithm from DifferentialEquations.jl.
  • sensealg: The choice of differentiation algorithm used in the backpropagation. Defaults to reverse-mode automatic differentiation via Tracker.jl.
  • kwargs: Additional arguments splatted to the ODE solver. See the Common Solver Arguments documentation for more details.
DiffEqFlux.NeuralDSDEType
NeuralDSDE(drift, diffusion, tspan, alg = nothing, args...; sensealg = TrackerAdjoint(),
    kwargs...)

Constructs a neural stochastic differential equation (neural SDE) with diagonal noise.

Arguments:

  • drift: A Flux.Chain or Lux.AbstractExplicitLayer neural network that defines the drift function.
  • diffusion: A Flux.Chain or Lux.AbstractExplicitLayer neural network that defines the diffusion function. Should output a vector of the same size as the input.
  • tspan: The timespan to be solved on.
  • alg: The algorithm used to solve the SDE. Defaults to nothing, i.e. the default algorithm from DifferentialEquations.jl.
  • sensealg: The choice of differentiation algorithm used in the backpropagation.
  • kwargs: Additional arguments splatted to the SDE solver. See the Common Solver Arguments documentation for more details.
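
A construction sketch for a 2-state system with diagonal noise; the diffusion network returns a vector the same size as the state, as required above. The architectures, time span, and SOSRI solver are illustrative:

using DiffEqFlux, Lux, StochasticDiffEq

drift = Chain(Dense(2 => 16, tanh), Dense(16 => 2))
diffusion = Chain(Dense(2 => 16, tanh), Dense(16 => 2))   # same output size as the state
ndsde = NeuralDSDE(drift, diffusion, (0.0f0, 1.0f0), SOSRI(); saveat = 0.1f0)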
DiffEqFlux.NeuralHamiltonianDEType
NeuralHamiltonianDE(model, tspan, args...; kwargs...)

Constructs a Neural Hamiltonian DE Layer for solving Hamiltonian Problems parameterized by a Neural Network HamiltonianNN.

Arguments:

  • model: A Flux.Chain, Lux.AbstractExplicitLayer, or Hamiltonian Neural Network that predicts the Hamiltonian of the system.
  • tspan: The timespan to be solved on.
  • kwargs: Additional arguments splatted to the ODE solver. See the Common Solver Arguments documentation for more details.
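
A short sketch combining this layer with the HamiltonianNN documented above; the network sizes, time span, and solver are illustrative assumptions:

using DiffEqFlux, Lux, OrdinaryDiffEq

hnn = HamiltonianNN(Chain(Dense(2 => 16, tanh), Dense(16 => 1)))
model = NeuralHamiltonianDE(hnn, (0.0f0, 1.0f0), Tsit5(); saveat = 0.1f0)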
DiffEqFlux.NeuralODEType
NeuralODE(model, tspan, alg = nothing, args...; kwargs...)

Constructs a continuous-time recurrent neural network, also known as a neural ordinary differential equation (neural ODE), with a fast gradient calculation via adjoints [1]. At a high level this corresponds to solving the forward differential equation, using a second differential equation that propagates the derivatives of the loss backwards in time.

Arguments:

  • model: A Flux.Chain or Lux.AbstractExplicitLayer neural network that defines ẋ.
  • tspan: The timespan to be solved on.
  • alg: The algorithm used to solve the ODE. Defaults to nothing, i.e. the default algorithm from DifferentialEquations.jl.
  • sensealg: The choice of differentiation algorithm used in the backpropagation. Defaults to an adjoint method. See the Local Sensitivity Analysis documentation for more details.
  • kwargs: Additional arguments splatted to the ODE solver. See the Common Solver Arguments documentation for more details.

References:

[1] Pontryagin, Lev Semenovich. Mathematical theory of optimal processes. CRC press, 1987.
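
A minimal usage sketch, assuming the Lux-style call convention in which parameters and states come from Lux.setup and the layer is called as model(x, ps, st); the sizes and solver are illustrative:

using DiffEqFlux, Lux, OrdinaryDiffEq, Random

dudt = Chain(Dense(2 => 16, tanh), Dense(16 => 2))
node = NeuralODE(dudt, (0.0f0, 1.0f0), Tsit5(); saveat = 0.1f0)
ps, st = Lux.setup(Random.default_rng(), node)
sol, st = node(Float32[2.0, 0.0], ps, st)    # forward solve from an initial condition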

DiffEqFlux.NeuralODEMMType
NeuralODEMM(model, constraints_model, tspan, mass_matrix, alg = nothing, args...;
    sensealg = InterpolatingAdjoint(autojacvec = ZygoteVJP()), kwargs...)

Constructs a physically-constrained continuous-time recurrent neural network, also known as a neural differential-algebraic equation (neural DAE), with a mass matrix and a fast gradient calculation via adjoints [1]. The mass matrix formulation is:

\[Mu' = f(u,p,t)\]

where M is semi-explicit, i.e. singular with zeros for rows corresponding to the constraint equations.

Arguments:

  • model: A Flux.Chain or Lux.AbstractExplicitLayer neural network that defines f(u, p, t).
  • constraints_model: A function constraints_model(u,p,t) for the fixed constraints to impose on the algebraic equations.
  • tspan: The timespan to be solved on.
  • mass_matrix: The mass matrix associated with the DAE.
  • alg: The algorithm used to solve the ODE. Defaults to nothing, i.e. the default algorithm from DifferentialEquations.jl. This method requires an implicit ODE solver compatible with singular mass matrices. Consult the DAE solvers documentation for more details.
  • sensealg: The choice of differentiation algorithm used in the backpropagation. Defaults to an adjoint method. See the Local Sensitivity Analysis documentation for more details.
  • kwargs: Additional arguments splatted to the ODE solver. See the Common Solver Arguments documentation for more details.
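
A hedged sketch for a 3-state system in which the network predicts the two differential equations and a hypothetical algebraic constraint (the states sum to one) fills the singular mass-matrix row; Rodas5 is one implicit solver that supports singular mass matrices:

using DiffEqFlux, Lux, OrdinaryDiffEq

nn = Chain(Dense(3 => 16, tanh), Dense(16 => 2))
constraint(u, p, t) = [u[1] + u[2] + u[3] - 1]    # illustrative algebraic constraint
M = [1.0 0 0; 0 1.0 0; 0 0 0]                      # zero row for the constraint equation
ndaemm = NeuralODEMM(nn, constraint, (0.0f0, 1.0f0), M, Rodas5(); saveat = 0.1f0)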
DiffEqFlux.NeuralSDEType
NeuralSDE(drift, diffusion, tspan, nbrown, alg = nothing, args...;
    sensealg=TrackerAdjoint(), kwargs...)

Constructs a neural stochastic differential equation (neural SDE).

Arguments:

  • drift: A Flux.Chain or Lux.AbstractExplicitLayer neural network that defines the drift function.
  • diffusion: A Flux.Chain or Lux.AbstractExplicitLayer neural network that defines the diffusion function. Should output a matrix of size size(x, 1) × nbrown.
  • tspan: The timespan to be solved on.
  • nbrown: The number of Brownian processes.
  • alg: The algorithm used to solve the SDE. Defaults to nothing, i.e. the default algorithm from DifferentialEquations.jl.
  • sensealg: The choice of differentiation algorithm used in the backpropagation.
  • kwargs: Additional arguments splatted to the SDE solver. See the Common Solver Arguments documentation for more details.
DiffEqFlux.SplineLayerType
SplineLayer(time_span, time_step, spline_basis, init_saved_points = nothing)

Constructs a Spline Layer. At a high-level, it performs the following:

  1. Takes as input a one-dimensional training dataset, a time span, a time step and an interpolation method.
  2. During training, adjusts the values of the function at multiples of the time-step such that the curve interpolated through these points has minimum loss on the corresponding one-dimensional dataset.

Arguments:

  • time_span: Tuple of real numbers corresponding to the time span.
  • time_step: Real number corresponding to the time step.
  • spline_basis: Interpolation method used by the basis (currently supported interpolation methods: ConstantInterpolation, LinearInterpolation, QuadraticInterpolation, QuadraticSpline, CubicSpline).
  • init_saved_points: values of the function at multiples of the time step. Initialized by default to a random vector sampled from the unit normal. Alternatively, can take a function with the signature init_saved_points(rng, time_span, time_step).
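
A small construction sketch using a cubic spline basis from DataInterpolations.jl over the time span (0, 1) with step 0.1; the values are illustrative:

using DiffEqFlux, DataInterpolations

spline = SplineLayer((0.0, 1.0), 0.1, CubicSpline)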
DiffEqFlux.AugmentedNDELayerMethod
AugmentedNDELayer(nde, adim::Int)

Constructs an Augmented Neural Differential Equation Layer.

Arguments:

  • nde: Any Neural Differential Equation Layer.
  • adim: The number of augmented dimensions by which the initial condition is lifted.

References:

[1] Dupont, Emilien, Arnaud Doucet, and Yee Whye Teh. "Augmented neural ODEs." In Proceedings of the 33rd International Conference on Neural Information Processing Systems, pp. 3140-3150. 2019.
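
A sketch of augmenting a 2-state problem by one extra dimension, so the wrapped NeuralODE must act on 2 + 1 = 3 states; the network sizes and solver are illustrative:

using DiffEqFlux, Lux, OrdinaryDiffEq

node = NeuralODE(Chain(Dense(3 => 16, tanh), Dense(16 => 3)), (0.0f0, 1.0f0), Tsit5())
anode = AugmentedNDELayer(node, 1)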

DiffEqFlux.ChebyshevBasisMethod
ChebyshevBasis(n)

Constructs a Chebyshev basis of the form [T_{0}(x), T_{1}(x), ..., T_{n-1}(x)] where T_j(.) is the j-th Chebyshev polynomial of the first kind.

Arguments:

  • n: number of terms in the polynomial expansion.
DiffEqFlux.CosBasisMethod
CosBasis(n)

Constructs a cosine basis of the form [cos(x), cos(2x), ..., cos(nx)].

Arguments:

  • n: number of terms in the cosine expansion.
DiffEqFlux.FourierBasisMethod
FourierBasis(n)

Constructs a Fourier basis [F_1(x), F_2(x), ..., F_n(x)], where F_j(x) = cos((j ÷ 2)x) if j is even and sin((j ÷ 2)x) if j is odd.

Arguments:

  • n: number of terms in the Fourier expansion.
DiffEqFlux.LegendreBasisMethod
LegendreBasis(n)

Constructs a Legendre basis of the form [P_{0}(x), P_{1}(x), ..., P_{n-1}(x)] where P_j(.) is the j-th Legendre polynomial.

Arguments:

  • n: number of terms in the polynomial expansion.
DiffEqFlux.PolynomialBasisMethod
PolynomialBasis(n)

Constructs a Polynomial basis of the form [1, x, ..., x^(n-1)].

Arguments:

  • n: number of terms in the polynomial expansion.
DiffEqFlux.SinBasisMethod
SinBasis(n)

Constructs a sine basis of the form [sin(x), sin(2x), ..., sin(nx)].

Arguments:

  • n: number of terms in the sine expansion.
DiffEqFlux.TensorLayerMethod
TensorLayer(model, out_dim::Int, init_p::F = randn) where {F <: Function}

Constructs the Tensor Product Layer, which takes as input an array of n tensor product bases, [B_1, B_2, ..., B_n], and a data point x, computes z[i] = W[i, :] ⨀ [B_1(x[1]) ⨂ B_2(x[2]) ⨂ ... ⨂ B_n(x[n])], where W is the layer's weight, and returns [z[1], ..., z[out_dim]].

Arguments:

  • model: Array of TensorProductBasis [B_1(n_1), ..., B_k(n_k)], where k corresponds to the dimension of the input.
  • out_dim: Dimension of the output.
  • init_p: Optional initializer for the layer's weight. Initialized to a standard normal by default.
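
A construction sketch for a 2-dimensional input using the basis constructors documented above; the basis sizes and the scalar output dimension are illustrative:

using DiffEqFlux

basis = [ChebyshevBasis(4), LegendreBasis(4)]   # one basis per input dimension
layer = TensorLayer(basis, 1)                   # out_dim = 1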
DiffEqFlux.collocate_dataFunction
u′, u = collocate_data(data, tpoints, kernel = TriangularKernel(), bandwidth=nothing)
u′, u = collocate_data(data, tpoints, tpoints_sample, interp, args...)

Computes a non-parametrically smoothed estimate of u' and u given the data, where each column is a snapshot of the timeseries at tpoints[i].

For kernels, the following exist:

  • EpanechnikovKernel
  • UniformKernel
  • TriangularKernel
  • QuarticKernel
  • TriweightKernel
  • TricubeKernel
  • GaussianKernel
  • CosineKernel
  • LogisticKernel
  • SigmoidKernel
  • SilvermanKernel

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2631937/

Additionally, we can use interpolation methods from DataInterpolations.jl to generate data from intermediate timesteps. In this case, pass any of the methods like QuadraticInterpolation as interp, and the timestamps to sample from as tpoints_sample.
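
A short sketch with synthetic data: a 2 × 50 matrix of snapshots (one column per time point) smoothed with the default TriangularKernel; the signal itself is purely illustrative:

using DiffEqFlux

tpoints = collect(range(0.0, 1.0; length = 50))
data = vcat(sin.(2π .* tpoints)', cos.(2π .* tpoints)')   # 2 × 50 snapshots
du_est, u_est = collocate_data(data, tpoints)              # smoothed u′ and u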

DiffEqFlux.group_rangesMethod
group_ranges(datasize, groupsize)

Get ranges that partition data of length datasize into groups of groupsize observations. If datasize is not evenly divisible by groupsize, the last group contains the remaining observations.

Arguments:

  • datasize: number of data points to be partitioned.
  • groupsize: maximum number of observations in each group.

Example:

julia> group_ranges(10, 5)
3-element Vector{UnitRange{Int64}}:
 1:5
 5:9
 9:10
DiffEqFlux.multiple_shootMethod
multiple_shoot(p, ode_data, tsteps, ensembleprob, ensemblealg, loss_function,
    [continuity_loss = _default_continuity_loss], solver, group_size;
    continuity_term = 100, kwargs...)

Returns the total loss from direct multiple shooting on the ODE data, together with an array of predictions from each of the groups (smaller intervals). In direct multiple shooting, the time span is divided into smaller intervals that the neural network solves separately. The default continuity term is 100, meaning any loss arising from a discontinuity between two adjacent groups is scaled by 100.

Arguments:

  • p: The parameters of the Neural Network to be trained.
  • ode_data: Original Data to be modelled.
  • tsteps: Timesteps on which ode_data was calculated.
  • ensembleprob: Ensemble problem that the Neural Network attempts to solve.
  • ensemblealg: Ensemble algorithm, e.g. EnsembleThreads().
  • loss_function: Any arbitrary function to calculate loss.
  • continuity_loss: Function that takes states $\hat{u}_{end}$ of group $k$ and $u_{0}$ of group $k+1$ as input and calculates prediction continuity loss between them. If no custom continuity_loss is specified, sum(abs, û_end - u_0) is used.
  • solver: ODE Solver algorithm.
  • group_size: The number of observations in each group after splitting ode_data.
  • continuity_term: Weight term to ensure continuity of predictions throughout different groups.
  • kwargs: Additional arguments splatted to the ODE solver. Refer to the Local Sensitivity Analysis and Common Solver Arguments documentation for more details.
Note

The parameter continuity_term should be a relatively large number to enforce a heavy penalty whenever the last point of any group does not coincide with the first point of the next group.

DiffEqFlux.multiple_shootMethod
multiple_shoot(p, ode_data, tsteps, prob, loss_function,
    [continuity_loss = _default_continuity_loss], solver, group_size;
    continuity_term = 100, kwargs...)

Returns the total loss from direct multiple shooting on the ODE data, together with an array of predictions from each of the groups (smaller intervals). In direct multiple shooting, the time span is divided into smaller intervals that the neural network solves separately. The default continuity term is 100, meaning any loss arising from a discontinuity between two adjacent groups is scaled by 100.

Arguments:

  • p: The parameters of the Neural Network to be trained.
  • ode_data: Original Data to be modelled.
  • tsteps: Timesteps on which ode_data was calculated.
  • prob: ODE problem that the Neural Network attempts to solve.
  • loss_function: Any arbitrary function to calculate loss.
  • continuity_loss: Function that takes states $\hat{u}_{end}$ of group $k$ and $u_{0}$ of group $k+1$ as input and calculates prediction continuity loss between them. If no custom continuity_loss is specified, sum(abs, û_end - u_0) is used.
  • solver: ODE Solver algorithm.
  • group_size: The number of observations in each group after splitting ode_data.
  • continuity_term: Weight term to ensure continuity of predictions throughout different groups.
  • kwargs: Additional arguments splatted to the ODE solver. Refer to the Local Sensitivity Analysis and Common Solver Arguments documentation for more details.
Note

The parameter continuity_term should be a relatively large number to enforce a heavy penalty whenever the last point of any group does not coincide with the first point of the next group.
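
A hedged sketch of wrapping this method in a training loss. Here prob_node (an ODEProblem whose right-hand side is the neural network), ode_data, and tsteps are assumed to already exist as described above; the group size of 5, the (data, pred) loss signature, and the Tsit5 solver are illustrative:

using DiffEqFlux, OrdinaryDiffEq

piece_loss(data, pred) = sum(abs2, data .- pred)

function loss_multiple_shooting(p)
    return multiple_shoot(p, ode_data, tsteps, prob_node, piece_loss, Tsit5(), 5;
        continuity_term = 100)
end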