# ExpressionTreeForge.jl Tutorial

ExpressionTreeForge.jl analyzes and manipulates expression trees. It interfaces several implementations of expression trees with the internal type `Type_expr_tree` (obtained with `transform_to_expr_tree()`).

The main expression trees supported are:

- Julia `Expr`

```
using ExpressionTreeForge
expr_julia = :((x[1]+x[2])^2 + (x[2]+x[3])^2)
expr_tree_Expr = transform_to_expr_tree(expr_julia)
```

+ ^2 + x[1] x[2] ^2 + x[2] x[3]

- `Expr` from a JuMP model (with `MathOptInterface`)

```
using JuMP, MathOptInterface
m = Model()
n = 3
@variable(m, x[1:n])
@NLobjective(m, Min, (x[1]+x[2])^2 + (x[2]+x[3])^2)
evaluator = JuMP.NLPEvaluator(m)
MathOptInterface.initialize(evaluator, [:ExprGraph])
expr_jump = MathOptInterface.objective_expr(evaluator)
expr_tree_JuMP = transform_to_expr_tree(expr_jump)
```

+ ^2.0 + x[1] x[2] ^2.0 + x[2] x[3]

Both trees have the same shape, but they are not equal: the exponent is stored as the integer `2` in one and as the float `2.0` in the other.

`expr_tree_Expr == expr_tree_JuMP`

false

- expression tree from a Julia function, created with Symbolics.jl

```
using Symbolics
function f(y)
  return sum((y[i] + y[i+1])^2 for i = 1:(length(y)-1))
end
n = 3
Symbolics.@variables x[1:n] # must be x
mtk_tree = f(x)
expr_tree_Symbolics = transform_to_expr_tree(mtk_tree)
```

(x[1] + x[2])^2 + (x[2] + x[3])^2

which may automatically perform some simplifications and/or reorder the terms. However, `expr_tree_Expr`, `expr_tree_JuMP` and `expr_tree_Symbolics` share the same type `Type_expr_tree`:

`typeof(expr_tree_Expr) == typeof(expr_tree_JuMP)`

true

`typeof(expr_tree_Expr) == typeof(expr_tree_Symbolics)`

true

With a `Type_expr_tree`, you can:

- detect partial separability;
- evaluate the expression, and its first and second derivatives;
- propagate bounds;
- detect convexity.

### Detection of the partially separable structure

The original purpose of `ExpressionTreeForge.jl` is to detect the partially-separable structure of a function $f : \R^n \to \R$,

\[f(x) = \sum_{i=1}^N \hat{f}_i (U_i x), \quad \hat f_i:\R^{n_i} \to \R, \quad U_i \in \R^{n_i \times n}, \quad n_i \ll n,\]

which means `ExpressionTreeForge.jl` detects that $f$ is a sum, and returns:

- the element functions $\hat{f}_i$;
- the variables appearing in $\hat{f}_i$ (i.e.
*elemental variables*), which are represented via $U_i$.
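For the running example $f(x) = (x_1+x_2)^2 + (x_2+x_3)^2$, this decomposition reads

\[f(x) = \hat f_1(U_1 x) + \hat f_2(U_2 x), \quad \hat f_1(u) = \hat f_2(u) = (u_1 + u_2)^2, \quad U_1 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}, \quad U_2 = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix},\]

so that $N = 2$ and each element function depends on only $n_i = 2$ of the $n = 3$ variables.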

You detect the element functions with `extract_element_functions()`, which returns a vector of `Type_expr_tree`s:

```
expr_tree = copy(expr_tree_Expr)
element_functions = extract_element_functions(expr_tree)
show(element_functions[2])
```

^2 + x[2] x[3]

**Warning**: the `element_functions` are pointers to nodes of `expr_tree`. Any modification to `element_functions` will be applied to `expr_tree`!
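If you need to modify an element function without altering `expr_tree`, one option is to work on copies first. A minimal sketch, assuming `copy` on an element-function tree behaves like `copy(expr_tree_Expr)` used above:

```
# independent copies; modifying these leaves expr_tree untouched
safe_element_functions = copy.(element_functions)
```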

You extract the elemental variables by applying `get_elemental_variables()` to every element function expression tree:

`Us = get_elemental_variables.(element_functions)`

```
2-element Vector{Vector{Int64}}:
 [1, 2]
 [2, 3]
```

Then you can renumber the variable indices of an element function expression tree so that they stay in the range `1:length(Us[i])`:

```
# change the indices of the second element function
normalize_indices!(element_functions[2], Us[2])
```

^2 + x[1] x[2]

### Evaluate a `Type_expr_tree` and its derivatives

ExpressionTreeForge.jl offers methods to evaluate an expression tree and its derivatives. `evaluate_expr_tree()` evaluates a `Type_expr_tree` at a point `y` of suitable size:

```
y = ones(n)
fx = evaluate_expr_tree(expr_tree_Expr, y)
```

8.0

The gradient of an expression tree can be computed with either `ForwardDiff` or `ReverseDiff`:
```
∇f_forward = gradient_forward(expr_tree_Expr, y)
∇f_reverse = gradient_reverse(expr_tree_Expr, y)
```

```
3-element Vector{Float64}:
 4.0
 8.0
 4.0
```

Note that although both return the same gradient here, the two methods are distinct functions:

`gradient_forward == gradient_reverse`

false

and the Hessian is computed with

`hess = hessian(expr_tree_Expr, y)`

```
3×3 Matrix{Float64}:
 2.0  2.0  0.0
 2.0  4.0  2.0
 0.0  2.0  2.0
```

AD methods can be applied to the element-function expression trees:

```
y1 = ones(length(Us[1]))
f1 = element_functions[1]
f1x = evaluate_expr_tree(f1, y1)
∇f1_forward = gradient_forward(f1, y1)
∇f1_reverse = gradient_reverse(f1, y1)
hess1 = hessian(f1, y1)
```
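Putting the pieces together, summing the element contributions recovers the full function value. A sketch using only the functions shown above, assuming `element_functions[2]` was normalized with `normalize_indices!` so that each element function can be evaluated on its elemental variables `y[Us[i]]`:

```
# f(y) as the sum of its element functions evaluated on their elemental variables
fx_partitioned = sum(evaluate_expr_tree(element_functions[i], y[Us[i]]) for i = 1:length(Us))
# fx_partitioned should equal evaluate_expr_tree(expr_tree_Expr, y)
```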

See PartitionedStructures.jl for more details about partial separability and how the partitioned derivatives of the element functions are stored.

### Bounds and convexity

To compute bounds and convexity we use a `Complete_expr_tree`, a richer structure than `Type_expr_tree`. A `Complete_expr_tree` is similar to a `Type_expr_tree`, but in addition each node stores its lower bound, its upper bound, and its convexity status. You can define a `Complete_expr_tree` from any `Type_expr_tree`:

`completetree = complete_tree(expr_tree_Expr)`

+ ~(-Inf, Inf) ~not_treated ^2 ~(-Inf, Inf) ~not_treated + ~(-Inf, Inf) ~not_treated x[1] ~(-Inf, Inf) ~not_treated x[2] ~(-Inf, Inf) ~not_treated ^2 ~(-Inf, Inf) ~not_treated + ~(-Inf, Inf) ~not_treated x[2] ~(-Inf, Inf) ~not_treated x[3] ~(-Inf, Inf) ~not_treated

You compute the bounds and the convexity status afterward:

```
# propagate the bounds from the variables
set_bounds!(completetree)
# deduce the convexity status of each node
set_convexity!(completetree)
# get the root bounds
bounds = get_bounds(completetree)
```

(0.0, Inf)

```
# get the root convexity status
convexity_status = get_convexity_status(completetree)
is_convex(convexity_status)
```

true

You can observe the bounds and convexity status of each node of `completetree` by walking the graph:

```
# convexity statuses of the root's children
statuses = get_convexity_status.(completetree.children)
# bounds of the root's children
bounds = get_bounds.(completetree.children)
```

```
2-element Vector{Tuple{Float64, Float64}}:
 (0.0, Inf)
 (0.0, Inf)
```