# FlightSims

FlightSims.jl is a general-purpose numerical simulator supporting nested environments and convenient macro-based data logging.
## Plans and Changes

### v0.8

- Find a good way of saving and loading simulation data.

### v0.7

- `df::DataFrame`, one of the outputs of `sim`, contains (nested) `NamedTuple`s.
- Logging tools have been separated into another package, SimulationLogger.jl.
  - Previous logging tools, e.g., `Process` and `DatumFormat`, have been deprecated.
- Utilities related to rotation are deprecated. See ReferenceFrameRotations.jl for reference frame rotation and Rotations.jl for rotation of vectors.
- Add a renderer (see `test/render.jl`). Currently, only `LeeHexacopterEnv` is supported.
## Notes

### Why is it FlightSims.jl?

This package can be used for any kind of numerical simulation with dynamical systems, although it was originally intended for flight simulations.

### Packages related to FlightSims.jl

- SimulationLogger.jl: convenient logging tools compatible with DifferentialEquations.jl.
- FaultTolerantControl.jl: fault-tolerant control (FTC) with various models and algorithms for faults, fault detection and isolation (FDI), and reconfiguration (R) control.
## Features

### Compatibility

- It is based on DifferentialEquations.jl, focusing mainly on ODEs (ordinary differential equations).
- The construction of nested environments is based on ComponentArrays.jl.
- The structure of the resulting simulation data is based on DataFrames.jl.
- The logging tool is based on SimulationLogger.jl.

If you want more functionality, please feel free to report an issue!
### Nested Environments and Zoo

- Environments usually stand for dynamical systems, but they can also include other utilities, for example, controllers.
- One can generate user-defined nested environments using the provided APIs.
  Some predefined environments are also provided for reusability (i.e., an environment zoo); take a look at `src/environments`.
- Examples include
  - basics
    - (Linear system) `LinearSystemEnv`
    - (Reference model) `ReferenceModelEnv`
    - (Nonlinear system) `TwoDimensionalNonlinearPolynomialEnv`
    - (Multiple envs) `MultipleEnvs` for multi-agent simulation
  - multicopters
    - (Quadcopter) `IslamQuadcopterEnv`, `GoodarziQuadcopterEnv`
    - (Hexacopter) `LeeHexacopterEnv`
  - allocators (control allocation)
    - (Moore-Penrose pseudo-inverse control allocation) `PseudoInverseAllocator`
  - controllers
    - (Linear quadratic regulator) `LQR`
    - (Proportional-integral-derivative controller) `PID`
      - Note that the derivative term is obtained via a second-order filter.
  - integrated environments
    - See `src/environments/integrated_environments`.
### Utilities

- Some utilities are also provided for dynamical system simulation.
- Examples include
  - Function approximators
    - (Approximator) `LinearApproximator`, `PolynomialBasis`
  - Data manipulation for machine learning
    - (Split data) `partitionTrainTest`
  - Reference trajectory generators
    - (Command generator) `HelixCommandGenerator`, `PowerLoop`
  - Rigid body rotation
    - (Rotations) `euler`
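As an illustration, a data-splitting utility of this kind can be sketched in plain Julia. The function name `partition_train_test`, the `train_ratio` keyword, and the shuffling behaviour below are assumptions for illustration, not the package's actual implementation:

```julia
# Hypothetical sketch of a train/test splitting utility in the spirit of
# `partitionTrainTest` (names and behaviour assumed, not the package code).
using Random

function partition_train_test(data; train_ratio=0.8, rng=Random.default_rng())
    idx = randperm(rng, length(data))                  # random permutation of indices
    n_train = round(Int, train_ratio * length(data))   # size of the training split
    data[idx[1:n_train]], data[idx[n_train+1:end]]
end

train, test = partition_train_test(collect(1:100); train_ratio=0.7)
length(train), length(test)  # (70, 30)
```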
## APIs

Main APIs are provided in `src/APIs`.
Note that, among the APIs, most closures (functions whose output is a function) have an uppercase first letter (#55).

### Make an environment

- `AbstractEnv`: an abstract type for user-defined and predefined environments. In general, an environment is a subtype of `AbstractEnv`.
- `State(env::AbstractEnv)`: returns a function that produces structured states.
- `Dynamics!(env::AbstractEnv)`, `Dynamics(env::AbstractEnv)`: return functions providing in-place (recommended) and out-of-place dynamics (resp.), compatible with DifferentialEquations.jl. Users can extend these methods or simply define other methods.

Note that these interfaces are also provided for some integrated environments, e.g., `State(system, controller)`.
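The closure convention can be sketched in self-contained Julia, without depending on FlightSims.jl. The environment type `DoubleIntegratorSketch` and its dynamics below are illustrative assumptions, not part of the package:

```julia
# Minimal sketch of the closure convention: functions returning functions
# have an uppercase first letter, mimicking the State/Dynamics! interfaces.
abstract type AbstractEnvSketch end

struct DoubleIntegratorSketch <: AbstractEnvSketch end

# returns a function that produces a structured state
State(env::DoubleIntegratorSketch) = (pos, vel) -> [pos, vel]

# returns an in-place dynamics function matching the
# DifferentialEquations.jl in-place signature (dx, x, p, t)
Dynamics!(env::DoubleIntegratorSketch) = function (dx, x, p, t; u=0.0)
    dx[1] = x[2]  # position derivative = velocity
    dx[2] = u     # velocity derivative = control input
    nothing
end

env = DoubleIntegratorSketch()
x = State(env)(1.0, 2.0)   # [1.0, 2.0]
dx = zeros(2)
Dynamics!(env)(dx, x, nothing, 0.0; u=-1.0)
dx  # [2.0, -1.0]
```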
### Simulation, logging, and data saving & loading

#### Main APIs

- `sim`
  - Returns `prob::DEProblem` and `df::DataFrame`.
  - For now, only the in-place method (iip) is supported.
- `apply_inputs(func; kwargs...)`
  - With this, users can easily apply external inputs to environments. It is borrowed from an MRAC example of ComponentArrays.jl and extended to be compatible with SimulationLogger.jl.
  - (Limitation) For now, dynamical equations wrapped by `apply_inputs` will automatically generate a logging function (even without `@Loggable`). In this case, all data will be an array of empty `NamedTuple`s.
- Macros for logging data: `@Loggable`, `@log`, `@onlylog`, `@nested_log`
  - For more details, see SimulationLogger.jl.
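The idea behind `apply_inputs` can be illustrated with a simplified, self-contained sketch. This is an assumption for illustration only; the actual implementation additionally handles logging:

```julia
# Simplified sketch of the apply_inputs idea (illustrative, not the package
# implementation): wrap an in-place dynamics function so that keyword inputs
# given as functions of (x, p, t) are evaluated and passed in.
function apply_inputs_sketch(func; kwargs...)
    (dx, x, p, t) -> func(dx, x, p, t; map(f -> f(x, p, t), (; kwargs...))...)
end

# toy controlled system: ẋ = -x + u
dynamics!(dx, x, p, t; u) = (dx .= -x .+ u)

# apply a state-feedback input u(x, p, t) = -0.5 x
wrapped_dynamics! = apply_inputs_sketch(dynamics!; u=(x, p, t) -> -0.5 .* x)

dx = zeros(2)
wrapped_dynamics!(dx, [1.0, 2.0], nothing, 0.0)
dx  # [-1.5, -3.0]
```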
Deprecated APIs
DatumFormat(env::AbstractEnv)
: return a function(x, t, integrator::DiffEqBase.DEIntegrator) -> nt::NamedTuple
for saving data.- It is recommended users to use
DatumFormat(env::AbstractEnv)
for saving basic information ofenv
. - Default setting: time and state histories will be saved as
df.time
anddf.state
.
- It is recommended users to use
save_inputs(func; kwargs...)
: this mimicsapply_inputs(func; kwargs...)
.- It is recommended users to use
save_inputs(func; kwargs...)
for saving additional information.
- It is recommended users to use
Process(env::AbstractEnv)
: return a function that processesprob
andsol
to get simulation data.- It is recommended users to use
Process(env::AbstractEnv)
when the simulation is deterministic (including parameter updates).
- It is recommended users to use
save
- Save
env
,prob
,sol
, and optionallyprocess
, - Not actively maintained. Please report issues about new features of saving data.
in a
.jld2
file.
- Save
load
- Load
env
,prob
,sol
, and optionallyprocess
, from a.jld2
file. - Not actively maintained. Please report issues about new features of loading data.
- Load
## Examples

### Optimal control and reinforcement learning

- For an example of an infinite-horizon continuous-time linear quadratic regulator (LQR), see the following example code (`test/lqr.jl`).
```julia
using FlightSims
const FS = FlightSims
using DifferentialEquations
using LinearAlgebra
using Plots
using Test
using Transducers

function test()
    # linear system
    A = [0 1;
         0 0]  # 2 x 2
    B = [0 1]'  # 2 x 1
    n, m = 2, 1
    env = LinearSystemEnv(A, B)  # exported from FlightSims
    x0 = State(env)([1.0, 2.0])
    p0 = zero.(x0)  # auxiliary parameter
    # optimal control
    Q = Matrix(I, n, n)
    R = Matrix(I, m, m)
    lqr = LQR(A, B, Q, R)  # exported from FlightSims
    u_lqr = FS.OptimalController(lqr)  # (x, p, t) -> -K*x; minimise J = ∫ (x' Q x + u' R u) from 0 to ∞
    # simulation
    tf = 10.0
    Δt = 0.01
    affect!(integrator) = integrator.p = copy(integrator.u)  # auxiliary callback function
    cb = PeriodicCallback(affect!, Δt; initial_affect=true)  # auxiliary callback
    @Loggable function dynamics!(dx, x, p, t)
        @onlylog p  # activate this line only when logging data
        u = u_lqr(x)
        @log x, u
        @nested_log Dynamics!(env)(dx, x, p, t; u=u)  # exported `state` and `input` from `Dynamics!(env)`
    end
    prob, df = sim(
                   x0,  # initial condition
                   dynamics!,  # dynamics with input of LQR
                   p0;
                   tf=tf,  # final time
                   callback=cb,
                   savestep=Δt,
                  )
    ts = df.time
    xs = df.sol |> Map(datum -> datum.x) |> collect
    us = df.sol |> Map(datum -> datum.u) |> collect
    ps = df.sol |> Map(datum -> datum.p) |> collect
    states = df.sol |> Map(datum -> datum.state) |> collect
    inputs = df.sol |> Map(datum -> datum.input) |> collect
    @test xs == states
    @test us == inputs
    p_x = plot(ts, hcat(states...)';
               title="state variable", label=["x1" "x2"], color=[:black :black], lw=1.5,
              )  # Plots
    plot!(p_x, ts, hcat(ps...)';
          ls=:dash, label="param", color=[:red :orange], lw=1.5,
         )
    savefig("figures/x_lqr.png")
    plot(ts, hcat(inputs...)'; title="control input", label="u")  # Plots
    savefig("figures/u_lqr.png")
    df
end
```
```julia
julia> test()
1001×2 DataFrame
  Row │ time     sol
      │ Float64  NamedTup…
──────┼────────────────────────────────────────────
    1 │    0.0   (p = [1.01978, 1.95564], state =…
    2 │    0.01  (p = [1.01978, 1.95564], state =…
    3 │    0.02  (p = [1.03911, 1.91186], state =…
    4 │    0.03  (p = [1.05802, 1.86863], state =…
    5 │    0.04  (p = [1.07649, 1.82596], state =…
  ⋮   │    ⋮                    ⋮
  998 │    9.97  (p = [-0.00093419, 0.00103198], …
  999 │    9.98  (p = [-0.000923913, 0.00102347],…
 1000 │    9.99  (p = [-0.00091372, 0.001015], st…
 1001 │   10.0   (p = [-0.00091372, 0.001015], st…
                                   992 rows omitted
```
- For an example of continuous-time value-iteration adaptive dynamic programming (CT-VI-ADP), take a look at `test/continuous_time_vi_adp.jl`.
- For an example of continuous-time integral reinforcement learning for linear systems (CT-IRL), take a look at `test/continuous_time_linear_irl.jl`.
### Nonlinear control

- For an example of a backstepping position-tracking controller for quadcopters, visit FaultTolerantControl.jl.

### Multicopter rendering

- See `test/render.jl`.
### Scientific machine learning

- Add examples for newbies!
- For an example usage of Flux.jl, see `main/flux_example.jl`.
- For an example of an imitation learning algorithm (behavioural cloning), see `main/behavioural_cloning.jl`.