FMIFlux.CS_NeuralFMU
— Type
Structure definition for a NeuralFMU that runs in Co-Simulation (CS) mode.
FMIFlux.CS_NeuralFMU
— Method
Constructs a CS-NeuralFMU where the FMU is at an arbitrary location inside the ANN.
Arguments
- `fmu` the considered FMU inside the ANN
- `model` the ANN topology (e.g. a `Flux.Chain`)
- `tspan` simulation time span
Keyword arguments
- `recordValues` additionally records FMU variables
FMIFlux.CS_NeuralFMU
— Method
ToDo: Docstring for Arguments, Keyword arguments, ...
Evaluates the CS-NeuralFMU in the time span given during construction, or in a custom time span from `tstart` to `tstop`, with a given time step size `tstep`.
Via the optional keyword argument `reset`, the FMU is reset every time evaluation is started (default=`true`).
FMIFlux.FMUParameterRegistrator
— Type
ToDo.
FMIFlux.FMUTimeLayer
— Type
A neutral layer that calls a function `fct` with the current FMU time as input.
FMIFlux.LossAccumulationScheduler
— Type
Computes all batch element losses and picks the batch element with the greatest accumulated loss as the next training element. When picked, the accumulated loss is reset.
(Prevents starvation of batch elements with little loss.)
FMIFlux.ME_NeuralFMU
— Type
TODO: Signature, Arguments and Keyword-Arguments descriptions.
Evaluates the ME-NeuralFMU in the time span given during construction, or in a custom time span from `tstart` to `tstop`, for a given start state `xstart`.
Keyword arguments
- `reset`, the FMU is reset every time evaluation is started (default=`true`).
- `setup`, the FMU is set up every time evaluation is started (default=`true`).
FMIFlux.ME_NeuralFMU
— Type
Constructs an ME-NeuralFMU where the FMU is at an arbitrary location inside the NN.
Arguments
- `fmu` the considered FMU inside the NN
- `model` the NN topology (e.g. a `Flux.Chain`)
- `tspan` simulation time span
- `solver` an ODE solver (default=`nothing`, i.e. one is determined heuristically)
Keyword arguments
- `recordValues` additionally records internal FMU variables
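Based only on the constructor arguments listed above, a construction might look like the following hedged sketch. The FMU path, state dimension, recorded variable name and solver choice are illustrative assumptions, and the way the FMU is embedded in the chain is one common pattern from FMIFlux examples that may differ between library versions.

```julia
# Hedged sketch: constructing an ME-NeuralFMU from the arguments above.
# FMU path, state dimension and recorded variable are assumptions.
using FMI, FMIFlux, Flux
using DifferentialEquations: Tsit5

fmu = fmiLoad("path/to/model.fmu")           # assumed FMU location
numStates = 2                                 # assumed state dimension

# NN topology with the FMU embedded at an arbitrary location
# (embedding call as seen in FMIFlux examples; version-dependent):
model = Chain(x -> fmu(x=x, dx_refs=:all),
              Dense(numStates, 16, tanh),
              Dense(16, numStates))

tspan = (0.0, 5.0)
neuralFMU = ME_NeuralFMU(fmu, model, tspan, Tsit5();
                         recordValues=["mass.s"])  # assumed variable name
```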
FMIFlux.ME_NeuralFMU
— Type
Structure definition for a NeuralFMU that runs in Model Exchange (ME) mode.
FMIFlux.NeuralFMU
— Type
The mutable struct representing an abstract (simulation mode unknown) NeuralFMU.
FMIFlux.ParameterRegistrator
— Type
ToDo.
FMIFlux.RandomScheduler
— Type
Picks a random batch element as the next training element.
FMIFlux.ScaleShift
— Type
ToDo.
FMIFlux.SequentialScheduler
— Type
Sequentially runs over all elements.
FMIFlux.ShiftScale
— Type
ToDo.
FMIFlux.SimultaniousZeroCrossing
— Type
Forces a simultaneous zero crossing together with a value given by a function.
FMIFlux.WorstElementScheduler
— Type
Computes all batch element losses and picks the batch element with the greatest loss as the next training element.
FMIFlux.WorstGrowScheduler
— Type
Computes all batch element losses and picks the batch element with the greatest growth in loss (loss derivative) as the next training element.
FMIFlux.fmi2EvaluateME
— Function
DEPRECATED:
Performs something similar to `fmiDoStep` for ME-FMUs (note that `fmiDoStep` is for CS-FMUs only). Event handling (state- and time-events) is supported. If you don't want events to be handled, you can disable event handling for the NeuralFMU `nfmu` with the attribute `eventHandling = false`.
Optionally, additional FMU values can be set via the keyword arguments `setValueReferences` and `setValues`, and additional FMU values can be retrieved via the keyword argument `getValueReferences`.
The function takes the current system state array ("x") and returns an array with state derivatives ("x dot") and, optionally, the FMU values for `getValueReferences`. Setting the FMU time via the argument `t` is optional; if not set, the current time of the ODE solver around the NeuralFMU is used.
FMIFlux.fmi2InputDoStepCSOutput
— Method
DEPRECATED:
fmi2InputDoStepCSOutput(comp::FMU2Component,
                        dt::Real,
                        u::Array{<:Real})
Sets all FMU inputs to `u`, performs a `fmi2DoStep` and returns all FMU outputs.
FMIFlux.fmiDoStepCS
— Function
DEPRECATED:
Wrapper. Call `fmi2DoStepCS` for more information.
FMIFlux.fmiEvaluateME
— Function
DEPRECATED:
Wrapper. Call `fmi2EvaluateME` for more information.
FMIFlux.fmiInputDoStepCSOutput
— Method
DEPRECATED:
Wrapper. Call `fmi2InputDoStepCSOutput` for more information.
FMIFlux.mse_interpolate
— Method
Compares non-equidistant (or equidistant) datapoints by linearly interpolating them and comparing at given interpolation points `t_comp`.
(Zygote-friendly: Zygote can differentiate through it via AD.)
FMIFlux.roundToLength
— Method
Rounds a given `number` to a string with a maximum length of `len`.
Exponentials are used if suitable.
FMIFlux.train!
— Method
train!(loss, neuralFMU::Union{ME_NeuralFMU, CS_NeuralFMU}, data, optim; gradient::Symbol=:ReverseDiff, kwargs...)
A function analogous to `Flux.train!`, but with additional features and explicit parameters (faster).
Arguments
- `loss` a loss function in the format `loss(p)`
- `neuralFMU` an object holding the NeuralFMU with its parameters
- `data` the training data (often an iterator)
- `optim` the optimizer used for training
Keywords
- `gradient` a symbol determining the AD library for gradient computation; available are `:ForwardDiff`, `:Zygote` and `:ReverseDiff` (default)
- `cb` a custom callback function that is called after every training step (default `nothing`)
- `chunk_size` the chunk size for AD using ForwardDiff (ignored for other AD methods) (default `:auto_fmiflux`)
- `printStep` a boolean determining whether the gradient min/max is printed after every step (for gradient debugging) (default `false`)
- `proceed_on_assert` a boolean determining whether to throw an exception on error, or to proceed training and just print the error (default `false`)
- `multiThreading` a boolean determining whether multiple gradients are generated in parallel (default `false`)
- `multiObjective` set this if the loss function returns multiple values (multi-objective optimization); currently, gradients are fired to the optimizer one after another (default `false`)
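A hedged usage sketch for `train!`, built only from the signature and keywords documented above. The `neuralFMU` object, the reference data `posData`, the simulation call and the solution accessor are assumptions for illustration and may not match your FMIFlux version.

```julia
# Hedged sketch: training a NeuralFMU with train! as documented above.
# `neuralFMU`, `x0`, `posData` and the solution accessor are assumed.
using FMIFlux, Flux

# Loss in the documented `loss(p)` format: simulate, then compare.
function loss(p)
    solution = neuralFMU(x0; p=p)      # assumed simulation call
    posNet = solution.states.u         # assumed solution accessor
    return Flux.Losses.mse(posData, first.(posNet))
end

optim = Adam(1e-3)
FMIFlux.train!(loss, neuralFMU, Iterators.repeated((), 100), optim;
               gradient=:ReverseDiff,  # default AD library
               printStep=true)         # print gradient min/max per step
```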
FMIFlux.transferFlatParams!
— Function
Writes/copies flattened (`Flux.destructure`) training parameters `p_net` to the non-flat model `net` with data offset `c`.