FMIFlux.CS_NeuralFMU - Type

Structure definition for a NeuralFMU that runs in Co-Simulation (CS) mode.

FMIFlux.CS_NeuralFMU - Method

Constructs a CS-NeuralFMU where the FMU is at an arbitrary location inside the ANN.

Arguments

- `fmu` the considered FMU inside the ANN 
- `model` the ANN topology (e.g. `Flux.Chain`)
- `tspan` simulation time span

Keyword arguments

- `recordValues` additionally records FMU variables
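A minimal construction sketch. It assumes `fmu` is a Co-Simulation FMU already loaded via FMI.jl; the keyword-style FMU call inside the chain (`u_refs`, `u`, `y_refs`) follows the pattern used in FMIFlux examples and is an assumption here, not part of this docstring:

```julia
using FMIFlux, Flux

# Assumed: `fmu` is a loaded CS-FMU; `u_refs`/`y_refs` hold the value
# references of its inputs/outputs (assumptions, defined elsewhere).
model = Flux.Chain(u -> fmu(; u_refs=u_refs, u=u, y_refs=y_refs),  # FMU step inside the ANN
                   Dense(2, 16, tanh),    # dimensions assume 2 FMU outputs
                   Dense(16, 2))

tspan = (0.0, 5.0)                        # simulation time span
csNeuralFMU = CS_NeuralFMU(fmu, model, tspan)
```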
FMIFlux.CS_NeuralFMU - Method
TODO: Docstring for Arguments, Keyword arguments, ...

Evaluates the CS-NeuralFMU in the time span given during construction or in a custom time span from `t_start` to `t_stop` with a given time step size `t_step`.

Via the optional keyword argument `reset`, the FMU is reset every time an evaluation is started (default=`true`).
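An evaluation sketch continuing the construction example above; how `t_start`/`t_stop` are passed (here positionally after `t_step`) is an assumption and may differ between FMIFlux versions:

```julia
# Hypothetical input function: maps time t to the FMU input vector
inputFct(t) = [sin(t)]
t_step = 0.01

sol = csNeuralFMU(inputFct, t_step)              # time span from construction
sol = csNeuralFMU(inputFct, t_step, 0.0, 1.0;    # custom span (assumed positional)
                  reset=true)                    # reset the FMU before evaluation
```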

FMIFlux.FMUTimeLayer - Type

A neural layer that calls a function `fct` with the current FMU time as input.

FMIFlux.LossAccumulationScheduler - Type
Computes all batch element losses and picks the batch element with the greatest accumulated loss as the next training element. When an element is picked, its accumulated loss is reset.
(This prevents starvation of batch elements with little loss.)
FMIFlux.ME_NeuralFMU - Type
TODO: Signature, Arguments and Keyword-Arguments descriptions.

Evaluates the ME-NeuralFMU in the time span given during construction or in a custom time span from `t_start` to `t_stop` for a given start state `x_start`.

Keyword arguments

- `reset`, the FMU is reset every time evaluation is started (default=`true`).
- `setup`, the FMU is set up every time evaluation is started (default=`true`).
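An evaluation sketch; `neuralFMU` is an `ME_NeuralFMU` as constructed in the following docstring, and passing the custom time span as a tuple is an assumption:

```julia
x_start = [0.5, 0.0]                     # hypothetical initial state

sol = neuralFMU(x_start)                 # time span from construction
sol = neuralFMU(x_start, (0.0, 1.0);     # custom time span (assumed tuple form)
                reset=true, setup=true)  # documented keyword arguments
```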
FMIFlux.ME_NeuralFMU - Type

Constructs an ME-NeuralFMU where the FMU is at an arbitrary location inside the NN.

Arguments

- `fmu` the considered FMU inside the NN 
- `model` the NN topology (e.g. `Flux.Chain`)
- `tspan` simulation time span
- `solver` an ODE solver (default=`nothing`, one is determined heuristically)

Keyword arguments

- `recordValues` additionally records internal FMU variables
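A minimal construction sketch; the in-chain FMU call (`x=x`, `dx_refs=:all`) follows the pattern used in FMIFlux examples and is an assumption here, as are the state dimension and the recorded variable name:

```julia
using FMIFlux, Flux
using DifferentialEquations   # provides the Tsit5 solver

# Assumed: `fmu` is a loaded Model Exchange FMU with 2 states
numStates = 2

model = Flux.Chain(x -> fmu(; x=x, dx_refs=:all),   # FMU provides state derivatives
                   Dense(numStates, 16, tanh),
                   Dense(16, numStates))

tspan = (0.0, 5.0)
neuralFMU = ME_NeuralFMU(fmu, model, tspan, Tsit5();
                         recordValues=["mass.v"])   # hypothetical variable name
```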
FMIFlux.ME_NeuralFMU - Type

Structure definition for a NeuralFMU that runs in Model Exchange (ME) mode.

FMIFlux.NeuralFMU - Type

The mutable struct representing an abstract (simulation mode unknown) NeuralFMU.

FMIFlux.WorstElementScheduler - Type
Computes all batch element losses and picks the batch element with the greatest loss as the next training element.
FMIFlux.WorstGrowScheduler - Type
Computes all batch element losses and picks the batch element with the greatest growth in loss (derivative) as the next training element.
FMIFlux.fmi2EvaluateME - Function

DEPRECATED:

Performs something similar to `fmiDoStep` for ME-FMUs (note that `fmiDoStep` is for CS-FMUs only). Event handling (state and time events) is supported. If you don't want events to be handled, you can disable event handling for the NeuralFMU `nfmu` with the attribute `eventHandling = false`.

Optionally, additional FMU values can be set via the keyword arguments `setValueReferences` and `setValues`, and retrieved via the keyword argument `getValueReferences`.

The function takes the current system state array (`x`) and returns an array with the state derivatives (`ẋ`) and, optionally, the FMU values for `getValueReferences`. Setting the FMU time via the argument `t` is optional; if not set, the current time of the ODE solver around the NeuralFMU is used.

FMIFlux.fmi2InputDoStepCSOutput - Method

DEPRECATED:

fmi2InputDoStepCSOutput(comp::FMU2Component, 
                        dt::Real, 
                        u::Array{<:Real})

Sets all FMU inputs to `u`, performs a `fmi2DoStep` and returns all FMU outputs.

FMIFlux.mse_interpolate - Method

Compares non-equidistant (or equidistant) data points by linearly interpolating and comparing them at given interpolation points `t_comp`. (Zygote-friendly: Zygote can differentiate through it via AD.)
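A small usage sketch; the argument order `(t1, x1, t2, x2, t_comp)` is an assumption inferred from the description:

```julia
using FMIFlux

t_a = [0.0, 0.3, 0.7, 1.0];  x_a = sin.(t_a)       # non-equidistant series
t_b = collect(0.0:0.25:1.0); x_b = sin.(t_b) .+ 0.01
t_comp = collect(0.0:0.1:1.0)                      # common comparison points

# Assumed argument order: (t1, x1, t2, x2, t_comp)
err = FMIFlux.mse_interpolate(t_a, x_a, t_b, x_b, t_comp)
```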

FMIFlux.roundToLength - Method
Rounds a given `number` to a string with a maximum length of `len`.
Exponential notation is used where suitable.
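A one-line usage sketch, assuming both parameters are positional; the exact output formatting is implementation-defined:

```julia
FMIFlux.roundToLength(123456.789, 8)   # a string of at most 8 characters, e.g. exponential notation
```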
FMIFlux.train! - Method
train!(loss, neuralFMU::Union{ME_NeuralFMU, CS_NeuralFMU}, data, optim; gradient::Symbol=:ReverseDiff, kwargs...)

A function analogous to `Flux.train!`, but with additional features and explicit parameters (faster).

Arguments

  • `loss` a loss function in the format `loss(p)`
  • `neuralFMU` an object holding the NeuralFMU with its parameters
  • `data` the training data (often an iterator)
  • `optim` the optimizer used for training

Keywords

  • `gradient` a symbol determining the AD library for gradient computation; available are :ForwardDiff, :Zygote and :ReverseDiff (default)
  • `cb` a custom callback function that is called after every training step (default `nothing`)
  • `chunk_size` the chunk size for AD using ForwardDiff (ignored for other AD methods) (default `:auto_fmiflux`)
  • `printStep` a boolean determining whether the gradient min/max is printed after every step (for gradient debugging) (default `false`)
  • `proceed_on_assert` a boolean determining whether to throw an exception on error or to proceed with training and just print the error (default `false`)
  • `multiThreading` a boolean determining whether multiple gradients are generated in parallel (default `false`)
  • `multiObjective` set this if the loss function returns multiple values (multi-objective optimization); currently, gradients are passed to the optimizer one after another (default `false`)
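A training sketch matching the signature above; `neuralFMU` is assumed to be an already constructed NeuralFMU (see `ME_NeuralFMU` above), and the placeholder loss and dummy data iterator are assumptions chosen to keep the example self-contained:

```julia
using FMIFlux, Flux

# Placeholder loss in the required format loss(p); a real loss would simulate
# the NeuralFMU with parameters p and compare against reference data.
loss(p) = sum(abs2, p)

optim = Flux.Adam(1e-3)
data  = Iterators.repeated((), 100)       # dummy iterator: 100 training steps

FMIFlux.train!(loss, neuralFMU, data, optim;
               gradient=:ReverseDiff,     # default AD backend
               printStep=true,            # print gradient min/max after each step
               proceed_on_assert=false)   # throw on error instead of continuing
```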
FMIFlux.transferFlatParams! - Function

Writes/copies flattened (`Flux.destructure`) training parameters `p_net` to the non-flat model `net` with data offset `c`.