# FlightSims

FlightSims.jl is a general-purpose numerical simulator built around defining nested environments.
## Plans
- Some controllers and utilities will be separated out in v0.5; see FaultTolerantControl.jl.
- A convenient logger will be added in v0.6; see the related project and #77.
## Notes
### Why is it FlightSims.jl?
This package can be used for numerical simulation of any kind of dynamical system, although it was originally intended to be dedicated to flight simulations.
### Packages based on FlightSims.jl
- FaultTolerantControl.jl: fault-tolerant control (FTC) with various models and algorithms of faults, fault detection and isolation (FDI), and reconfiguration (R) control.
## Features
### Compatibility
- FlightSims.jl is built on OrdinaryDiffEq.jl. Full compatibility with DifferentialEquations.jl is not on the roadmap for now.
- The construction of nested environments is based on ComponentArrays.jl.

If you want more functionality, please feel free to report an issue!
### Nested Environments and Zoo
- Environments usually stand for dynamical systems, but they can also contain other utilities, for example, controllers.
- One can generate user-defined nested environments using the provided APIs. Some predefined environments are also provided for reusability (i.e., an environment zoo); take a look at `src/environments`.
- Examples include
    - basics
        - (Linear system) `LinearSystemEnv`
        - (Reference model) `ReferenceModelEnv`
        - (Nonlinear system) `TwoDimensionalNonlinearPolynomialEnv`
    - multicopters
        - (Quadcopter) `IslamQuadcopterEnv`, `GoodarziQuadcopterEnv`
        - (Hexacopter) `LeeHexacopterEnv`
    - controllers
        - (Linear quadratic regulator) `LQR`
    - integrated_environments
        - See `src/environments/integrated_environments`.
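As a minimal sketch, a predefined environment from the zoo such as `LinearSystemEnv` can be instantiated and given a structured initial state via `State` (this mirrors the LQR example later in this README; the specific matrices here are illustrative):

```julia
using FlightSims

# double-integrator linear system: ẋ = Ax + Bu
A = [0 1;
     0 0]
B = [0;
     1]
env = LinearSystemEnv(A, B)   # predefined environment from the zoo
x0 = State(env)([1.0, 2.0])   # structured initial state
```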
### Utilities
- Some utilities are also provided for dynamical system simulation.
- Examples include
    - Function approximators
        - (Approximator) `LinearApproximator`, `PolynomialBasis`
    - Data manipulation for machine learning
        - (Split data) `partitionTrainTest`
    - Reference trajectory generators
        - (Command generator) `HelixCommandGenerator`, `PowerLoop`
    - Rigid body rotation
        - (Rotations) `euler`
## APIs
Main APIs are provided in `src/APIs`.
Note that among the APIs, closures (functions whose output is a function) have an uppercase first letter (#55).

### Make an environment
- `AbstractEnv`: an abstract type for user-defined and predefined environments. In general, an environment is a subtype of `AbstractEnv`.
- `State(env::AbstractEnv)`: returns a function that produces structured states.
- `Dynamics!(env::AbstractEnv)`, `Dynamics(env::AbstractEnv)`: return functions for in-place (recommended) and out-of-place dynamics (respectively), compatible with DifferentialEquations.jl. Users can extend these methods or simply define other methods.
- `apply_inputs(func; kwargs...)`: borrowed from an MRAC example of ComponentArrays.jl. With this, users can easily apply various kinds of inputs to the environment.

Note that these interfaces are also provided for some integrated environments, e.g., `State(system, controller)`.
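As an illustration of extending these interfaces, a user-defined environment might look like the following sketch. The environment name, its field, and the dynamics are hypothetical, and real environments typically build structured states with ComponentArrays.jl:

```julia
using FlightSims
const FS = FlightSims

# Hypothetical user-defined environment: an undamped oscillator.
struct OscillatorEnv <: AbstractEnv
    ω::Float64  # natural frequency
end

# Extend State to return a function that produces states.
function FS.State(env::OscillatorEnv)
    return (p=0.0, v=0.0) -> [p, v]
end

# Extend Dynamics! to return in-place dynamics (recommended),
# compatible with DifferentialEquations.jl.
function FS.Dynamics!(env::OscillatorEnv)
    return function (dx, x, params, t)
        dx[1] = x[2]
        dx[2] = -env.ω^2 * x[1]
        nothing
    end
end
```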
### Simulation, logging, and data saving & loading
#### Core APIs
- `sim`: returns `prob::DEProblem` and `sol::DESolution`.
    - For now, only the in-place method (iip) is supported.
- `@Loggable` (usage: `@Loggable function my_func(dx, x, p, t) ... end`): makes your ODE function loggable. Use this macro when defining your `ODEFunction`. It actually creates a hidden dictionary and returns it.
    - DO NOT use the `return` syntax in a function annotated by `@Loggable`. Instead, just mutate `dx` or simply leave any result without `return`, as in

      ```julia
      @Loggable function my_func(dx, x, p, t)
          dx .= x
          nothing  # `return nothing` would yield undesirable behaviour; actually, you can simply remove this line.
      end
      ```
    - `__LOGGER_DICT__`: an alias of the hidden dictionary generated by `@Loggable`. NEVER USE THE NAME `__LOGGER_DICT__` in usual cases.
- `@log` (usage: `@log var_name = val`): variables annotated by this macro will be logged (this effectively performs `__LOGGER_DICT__[var_name] = val`).
- `@log_only`: the same as `@log`, but activated only when logging variables; it is not activated when solving the `DEProblem`.
    - This macro is highly inspired by SimulationLogs.jl.
- `@nested_log` (usage: `@nested_log env_name ODEFunction_call`): saves all variables logged in `ODEFunction_call` as `__LOGGER_DICT__[env_name]`. `ODEFunction_call` should be annotated by `@Loggable`.
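For instance, nested logging could be sketched as follows. The environment name `:inner` and the dynamics are illustrative, and passing the name as a symbol is an assumption here; see `test/nested_envs.jl` for real usage:

```julia
using FlightSims

# Inner dynamics, loggable on its own.
@Loggable function inner_dynamics!(dx, x, p, t)
    @log state = x
    dx .= -x
end

# Outer dynamics: all variables logged in `inner_dynamics!`
# are saved under the key `:inner` of the hidden dictionary.
@Loggable function outer_dynamics!(dx, x, p, t)
    @nested_log :inner inner_dynamics!(dx, x, p, t)
end
```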
#### Will be deprecated
- `DatumFormat(env::AbstractEnv)`: returns a function `(x, t, integrator::DiffEqBase.DEIntegrator) -> nt::NamedTuple` for saving data.
    - It is recommended that users use `DatumFormat(env::AbstractEnv)` for saving basic information of `env`.
    - Default setting: time and state histories will be saved as `df.time` and `df.state`.
- `save_inputs(func; kwargs...)`: mimics `apply_inputs(func; kwargs...)`.
    - It is recommended that users use `save_inputs(func; kwargs...)` for saving additional information.
- `Process(env::AbstractEnv)`: returns a function that processes `prob` and `sol` to obtain simulation data.
    - It is recommended that users use `Process(env::AbstractEnv)` when the simulation is deterministic (including parameter updates).
#### Not actively maintained
- `save`: saves `env`, `prob`, `sol`, and optionally `process`, in a `.jld2` file.
    - Not actively maintained. Please report issues about new features of saving data.
- `load`: loads `env`, `prob`, `sol`, and optionally `process`, from a `.jld2` file.
    - Not actively maintained. Please report issues about new features of loading data.
## Examples
### Optimal control and reinforcement learning
- For an example of an infinite-horizon continuous-time linear quadratic regulator (LQR), see the following example code (`test/lqr.jl`).
```julia
using FlightSims
const FS = FlightSims
using LinearAlgebra
using Plots

function test()
    # linear system
    A = [0 1;
         0 0]
    B = [0;
         1]
    n, m = 2, 1
    env = LinearSystemEnv(A, B)  # exported from FlightSims
    x0 = State(env)([1.0, 2.0])
    # optimal control
    Q = Matrix(I, n, n)
    R = Matrix(I, m, m)
    lqr = LQR(A, B, Q, R)  # exported from FlightSims
    u_lqr = FS.OptimalController(lqr)  # (x, p, t) -> -K*x; minimise J = ∫ (x' Q x + u' R u) from 0 to ∞
    # simulation
    tf = 10.0
    # @Loggable will generate a hidden dictionary (NEVER USE THE PRIVILEGED NAME, `__LOGGER_DICT__`)
    # @Loggable will also automatically return the privileged dictionary
    # @Loggable will also copy the state `x` to avoid view issues; https://diffeq.sciml.ai/stable/features/callback_library/#Constructor-5
    # @log will automatically log annotated data in the privileged dictionary
    @Loggable function dynamics!(dx, x, p, t; u)
        @log state = x
        @log input = u
        Dynamics!(env)(dx, x, p, t; u)  # predefined dynamics exported from FlightSims
        # NEVER RETURN SOMETHING; just mutate dx
    end
    prob, sol, df = sim(
        x0,  # initial condition
        apply_inputs(dynamics!; u=u_lqr);  # dynamics with input of LQR
        tf=tf,  # final time
        savestep=0.01,
    )
    plot(df.time, hcat(df.state...)'; title="state variable", label=["x1" "x2"])  # Plots
    savefig("figures/x_lqr.png")
    plot(df.time, hcat(df.input...)'; title="control input", label="u")  # Plots
    savefig("figures/u_lqr.png")
end
```
- For an example of nested environments and nested logging, see the example code in `test/nested_envs.jl`.
- For an example of continuous-time value-iteration adaptive dynamic programming (CT-VI-ADP), take a look at `test/continuous_time_vi_adp.jl`.
- For an example of continuous-time integral reinforcement learning for linear systems (CT-IRL), take a look at `test/continuous_time_linear_irl.jl`.
### Nonlinear control
- For an example of a backstepping position-tracking controller for quadcopters, visit FaultTolerantControl.jl.

### Scientific machine learning
- Add examples for newbies!
- For an example usage of Flux.jl, see `main/flux_example.jl`.
- For an example code of an imitation learning algorithm, behavioural cloning, see `main/behavioural_cloning.jl`.