# Getting Started

Setting up and solving a problem with ALTRO is straightforward. Let's walk through an example of getting a Dubins car to drive through some circular obstacles.

## 1. Load the packages

Our first step is to load the required packages. We need RobotDynamics.jl to define our dynamics model, and TrajectoryOptimization.jl to define our problem. We also import StaticArrays.jl for fast, allocation-free matrix methods, along with the `LinearAlgebra` standard library. To avoid typing `TrajectoryOptimization` and `RobotDynamics` in full every time, we also create some convenient aliases.

```
using Altro
using TrajectoryOptimization
using RobotDynamics
using StaticArrays, LinearAlgebra
const TO = TrajectoryOptimization
const RD = RobotDynamics
```

## 2. Set up the dynamics model

We now define our dynamics model using RobotDynamics.jl. We define a new type `DubinsCar` that inherits from `RobotDynamics.ContinuousDynamics`. We can store any of our model parameters in this type. After defining the state and control dimensions and the continuous dynamics, our model is complete; integration of the dynamics and evaluation of the dynamics derivatives can be done automatically.

```
using ForwardDiff # needed for @autodiff
using FiniteDiff # needed for @autodiff
RD.@autodiff struct DubinsCar <: RD.ContinuousDynamics end
RD.state_dim(::DubinsCar) = 3
RD.control_dim(::DubinsCar) = 2
function RD.dynamics(::DubinsCar, x, u)
    ẋ = @SVector [
        u[1] * cos(x[3]),
        u[1] * sin(x[3]),
        u[2],
    ]
    return ẋ
end
```
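As a quick sanity check, we can evaluate the dynamics at a sample state and control (the values here are chosen arbitrarily for illustration):

```julia
car = DubinsCar()
x = SA[0.0, 0.0, 0.0]   # state: (x position, y position, heading)
u = SA[1.0, 0.5]        # control: (forward speed, turn rate)
RD.dynamics(car, x, u)  # → SA[1.0, 0.0, 0.5]
```

At zero heading, all of the forward speed maps to the x velocity, as expected.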

The code above is the minimal amount of code we need to write to define our dynamics. We can also define in-place evaluation methods and an analytic Jacobian:

```
function RD.dynamics!(::DubinsCar, xdot, x, u)
    xdot[1] = u[1] * cos(x[3])
    xdot[2] = u[1] * sin(x[3])
    xdot[3] = u[2]
    return nothing
end

function RD.jacobian!(::DubinsCar, J, xdot, x, u)
    # Jacobian with respect to the concatenated vector [x; u] (size 3×5)
    J .= 0
    J[1,3] = -u[1] * sin(x[3])
    J[1,4] =  cos(x[3])
    J[2,3] =  u[1] * cos(x[3])
    J[2,4] =  sin(x[3])
    J[3,5] = 1.0
    return nothing
end
# Specify the default method to be used when calling the dynamics
# options are `RD.StaticReturn()` or `RD.InPlace()`
RD.default_signature(::DubinsCar) = RD.StaticReturn()
# Specify the default method for evaluating the dynamics Jacobian
# options are `RD.ForwardAD()`, `RD.FiniteDifference()`, or `RD.UserDefined()`
RD.default_diffmethod(::DubinsCar) = RD.UserDefined()
```

We use `RobotDynamics.@autodiff` to automatically define methods for evaluating the Jacobian of our dynamics function; see the RobotDynamics documentation for more details. Note that the FiniteDiff and ForwardDiff packages must be loaded to use this macro.

## 3. Set up the objective

Once we've defined the model, we can now start defining our problem. Let's start by defining the discretization:

```
model = DubinsCar()
dmodel = RD.DiscretizedDynamics{RD.RK4}(model)
n,m = size(model) # get state and control dimension
N = 101 # number of time steps (knot points). Should be odd.
tf = 3.0 # total time (sec)
dt = tf / (N-1) # time step (sec)
```

Note that we need a discrete version of our dynamics model, which we can obtain using the `RobotDynamics.DiscretizedDynamics` type. This type creates a `RobotDynamics.DiscreteDynamics` model from a `RobotDynamics.ContinuousDynamics` model using a supplied integrator. Here we use the 4th-order explicit Runge-Kutta method provided by RobotDynamics.jl.
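The discretized model can be stepped forward one time step directly. This sketch assumes the `discrete_dynamics` evaluation convention from RobotDynamics.jl, with arbitrary sample values for the state and control:

```julia
# One RK4 step of duration dt from a sample state and control
x = SA_F64[0, 0, 0]
u = SA[1.0, 0.5]
x_next = RD.discrete_dynamics(dmodel, x, u, 0.0, dt)  # args: model, x, u, t, dt
```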

Now we specify our initial and final conditions:

```
x0 = SA_F64[0,0,0] # start at the origin
xf = SA_F64[1,2,pi] # goal state
```

For our objective, let's define a quadratic cost function that penalizes distance from the goal state:

```
Q = Diagonal(SA[0.1,0.1,0.01])
R = Diagonal(SA[0.01, 0.1])
Qf = Diagonal(SA[1e2,1e2,1e3])
obj = LQRObjective(Q,R,Qf,xf,N)
```
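For reference, `LQRObjective` builds the standard quadratic tracking cost (see the TrajectoryOptimization.jl documentation for the exact form, including the handling of the final time step):

```math
J = \frac{1}{2}(x_N - x_f)^\top Q_f (x_N - x_f)
  + \sum_{k=1}^{N-1} \frac{1}{2}\left[(x_k - x_f)^\top Q (x_k - x_f) + u_k^\top R u_k\right]
```

The diagonal weights above penalize position error lightly along the trajectory and heavily at the final time step, which encourages the solver to reach the goal.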

## 4. Add the constraints

Now let's define the constraints for our problem. We're going to bound the workspace of the robot and add two circular obstacles. We start by defining a `ConstraintList`, which holds all of the constraints and makes sure their dimensions are consistent. Here we add a goal constraint at the last time step, a workspace constraint, and then the circular obstacle constraint.

```
cons = ConstraintList(n,m,N)
# Goal constraint
goal = GoalConstraint(xf)
add_constraint!(cons, goal, N)
# Workspace constraint
bnd = BoundConstraint(n,m, x_min=[-0.1,-0.1,-Inf], x_max=[5,5,Inf])
add_constraint!(cons, bnd, 1:N-1)
# Obstacle Constraint
# Defines two circular obstacles:
# - Position (1,1) with radius 0.2
# - Position (2,1) with radius 0.3
obs = CircleConstraint(n, SA_F64[1,2], SA_F64[1,1], SA[0.2, 0.3])
add_constraint!(cons, obs, 1:N-1)
```

## 5. Define the problem

With the dynamics model, discretization, objective, constraints, and initial condition defined, we're ready to define the problem, which we do with `TrajectoryOptimization.Problem`.

`prob = Problem(dmodel, obj, x0, tf, xf=xf, constraints=cons)`

Initialization is key when solving nonlinear optimization problems with gradient-based methods. Since this problem is fairly easy, we'll just initialize the controls with small random values and then simulate the system forward in time.

```
initial_controls!(prob, [@SVector rand(m) for k = 1:N-1])
rollout!(prob) # simulate the system forward in time with the new controls
```

## 6. Initialize the solver

With the problem now defined, we're ready to start using Altro.jl (everything up to this point used only RobotDynamics.jl or TrajectoryOptimization.jl). All we need to do is create an `ALTROSolver`.

`solver = ALTROSolver(prob)`

### Setting Solver Options

We can set solver options via keyword arguments to the constructor, or by passing in a `SolverOptions` type:

```
# Set up solver options
opts = SolverOptions()
opts.cost_tolerance = 1e-5
# Create a solver, adding in additional options
solver = ALTROSolver(prob, opts, show_summary=false)
```

You can also use the `set_options!` method on a solver after it's created.
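For example, a sketch of adjusting tolerances on an existing solver (option names taken from the `SolverOptions` fields; `constraint_tolerance` is assumed to be spelled as shown):

```julia
# Update options on an already-constructed solver via keyword arguments
set_options!(solver, cost_tolerance=1e-5, constraint_tolerance=1e-6)
```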

## 7. Solve the problem

With the solver initialized, we can now solve the problem with a simple call to `solve!`:

`solve!(solver)`

## 8. Post-analysis

### Checking solve status

Once the solve is complete, we can look at a few things. The first is to check if the solve is successful:

`status(solver)`

`SOLVE_SUCCEEDED::TerminationStatus = 2`

We can also extract some more information:

```
println("Number of iterations: ", iterations(solver))
println("Final cost: ", cost(solver))
println("Final constraint satisfaction: ", max_violation(solver))
```

```
Number of iterations: 17
Final cost: 12.480778190315869
Final constraint satisfaction: 9.890399610412715e-10
```

### Extracting the solution

We can extract the state and control trajectories, which are returned as vectors of `SVector`s:

```
X = states(solver) # alternatively states(prob)
U = controls(solver) # alternatively controls(prob)
```

```
100-element Vector{StaticArrays.SVector{2, Float64}}:
[0.10408338761761966, 2.545171016023059]
[0.7789576528718943, 2.518515492418613]
[1.3567924762262082, 2.4688262024962206]
[1.8387938266993624, 2.402010332682858]
[2.2294919245006026, 2.3233261919008794]
[2.5357790120383017, 2.23725332728432]
[2.7660073211933005, 2.147455267166783]
[2.929222256590714, 2.056811793238993]
[3.0345634028560715, 1.9674958940440583]
[3.0908339302629493, 1.8810736475044596]
⋮
[0.10448423248154139, 0.7290808684609847]
[0.11049068153429419, 0.7283195340450292]
[0.1173845524581254, 0.7276105705945931]
[0.12523469867225648, 0.7269519959298408]
[0.13411705448378589, 0.7263417143070643]
[0.14411521854270606, 0.7257775082050499]
[0.15532109373979355, 0.7252570309766845]
[0.16783558816180572, 0.7247778005228647]
[0.18176938147897018, 0.7243371948305967]
```

If you prefer to work with matrices, you can convert them easily:

```
Xm = hcat(Vector.(X)...) # convert to normal Vector before concatenating so it's fast
Um = hcat(Vector.(U)...)
```

```
2×100 Matrix{Float64}:
0.104083 0.778958 1.35679 1.83879 … 0.155321 0.167836 0.181769
2.54517 2.51852 2.46883 2.40201 0.725257 0.724778 0.724337
```

Converting a matrix into a vector of vectors is also very easy:

`X = [col for col in eachcol(Xm)]`

Or if you want static vectors:

`X = [SVector{n}(col) for col in eachcol(Xm)]`

### Extracting the final feedback gains

Since ALTRO uses iLQR, the solver computes a locally optimal linear feedback policy, which can be useful for tracking purposes. We can extract it from the internal `Altro.iLQRSolver`:

```
ilqr = Altro.get_ilqr(solver)
K = ilqr.K # feedback gain matrices
d = ilqr.d # feedforward gains. Should be small.
```

```
100-element Vector{SubArray{Float64, 1, Matrix{Float64}, Tuple{Base.Slice{Base.OneTo{Int64}}, Int64}, true}}:
[1.888556537587665e-5, 5.424416860329993e-5]
[3.576298200890173e-6, 3.1800731846350605e-5]
[-7.180982907086135e-6, 1.3054338005665968e-5]
[-1.4000749514040444e-5, -2.3822012403540527e-6]
[-1.752967199159603e-5, -1.492845985320956e-5]
[-1.8394333816832812e-5, -2.4995830943637672e-5]
[-1.716847968223438e-5, -3.296367169219426e-5]
[-1.4355770699116827e-5, -3.9167917854429965e-5]
[-1.038390945528641e-5, -4.389813047937199e-5]
[-5.6064041023404235e-6, -4.7399635884500625e-5]
⋮
[-2.7656795125300307e-6, -1.576579815553976e-6]
[-3.609750649775886e-6, -1.3901966641308486e-6]
[-4.474417001185117e-6, -1.1935041052863558e-6]
[-5.375185407903191e-6, -9.849196794986724e-7]
[-6.275766250237468e-6, -7.638245039210931e-7]
[-6.945563383233105e-6, -5.339612454149326e-7]
[-6.614655890251374e-6, -3.125779155892564e-7]
[-3.920792188033517e-6, -1.3530938277716532e-7]
[8.553955301636048e-7, -1.3762794471735182e-9]
```

### Additional solver stats

We can extract more detailed information on the solve from the `SolverStats` type:

`Altro.stats(solver)`

The most relevant fields are `cost`, `c_max`, and `gradient`, which record the history of those values over the iterations. The `iteration_outer` field can also be helpful to know which iterations were outer-loop (augmented Lagrangian) iterations. The `tsolve` field gives the total solve time in milliseconds.
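A sketch of pulling these histories out directly (field names as described above):

```julia
stats = Altro.stats(solver)
cost_history = stats.cost      # cost at each iteration
cmax_history = stats.c_max     # max constraint violation at each iteration
println("Solve time: ", stats.tsolve, " ms")
```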