# Principle: Bayes tree prototyping

This page describes how to visualize, study, test, and compare Bayes (Junction) tree concepts with special regard for variable ordering.

## Why a Bayes (Junction) tree

The tree is an algebraically equivalent, but acyclic, structure to the factor graph: i.) inference is easier on acyclic graphs; ii.) we can exploit smart message passing benefits (known from the full conditional independence structure encoded in the tree), since the tree represents the "complete form" when marginalizing each variable one at a time (also known as the elimination game or marginalization, and related to smart factors). In loose terms, the Bayes (Junction) tree has implicit access to all Schur complements (if parametric and linearized) of each variable with respect to all others. Please see this page for more information regarding advanced topics on the Bayes tree.
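As a loose illustration of that last point (a sketch only, not the Caesar.jl API), marginalizing one variable block out of a linearized information matrix is exactly a Schur complement:

```julia
using LinearAlgebra

# Toy joint information matrix over variable blocks a and b (illustrative values).
Λaa = [4.0 1.0; 1.0 3.0]
Λab = [0.5 0.0; 0.0 0.5]
Λbb = [2.0 0.2; 0.2 2.0]

# Marginalizing b leaves the Schur complement acting on a alone;
# this is the information the tree has implicit access to per variable.
Λa_marg = Λaa - Λab * inv(Λbb) * Λab'
```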

## What is a Bayes (Junction) tree

The Bayes tree data structure is a rooted and directed Junction tree (maximal elimination clique tree). It allows for exact inference to be carried out by leveraging and exposing the variables' conditional independence and, very interestingly, can be directly associated with the sparsity pattern exhibited by a system's factorized upper triangular square root information matrix (see picture below).

Following this matrix-graph parallel, the picture also shows what the associated matrix interpretation is for a factor graph (~first order expansion in the form of a measurement Jacobian) and its corresponding Markov random field (sparsity pattern corresponding to the information matrix).

The procedure for obtaining the Bayes (Junction) tree is outlined in the figure shown below (factor graph to chordal Bayes net via the bipartite elimination game, and chordal Bayes net to Bayes tree via the maximum cardinality search algorithm).
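The elimination step of that procedure can be sketched symbolically. The following is a toy illustration of the elimination game (each eliminated variable's remaining neighbors are connected, producing fill-in edges and ultimately a chordal structure); the variable names are hypothetical and this is not the IncrementalInference implementation:

```julia
# Undirected adjacency of a small Markov random field (hypothetical variables).
adj = Dict(
    :l1 => Set([:x1, :x2]),
    :l2 => Set([:x3]),
    :x1 => Set([:l1, :x2]),
    :x2 => Set([:l1, :x1, :x3]),
    :x3 => Set([:x2, :l2]),
)

# Eliminate one variable at a time; connecting the remaining neighbors
# of each eliminated variable yields the chordal (Bayes net) structure.
for v in [:l1, :l2, :x1, :x2, :x3]
    nbrs = collect(adj[v])
    for a in nbrs, b in nbrs
        a != b && push!(adj[a], b)   # fill-in between remaining neighbors
    end
    foreach(n -> delete!(adj[n], v), nbrs)
    delete!(adj, v)
end
```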

## Constructing a Tree

Note

A visual illustration of the factor graph to Bayes net to Bayes tree transformation can be found in this PDF.

Trees and factor graphs are separated in the implementation; aside from a few temporary values stored in the factor graph, this allows the user to construct multiple different trees from one factor graph.

```julia
using IncrementalInference # RoME or Caesar will work too

# construct a distributed factor graph object
fg = generateGraph_Kaess()
# ...

# build the tree
tree = buildTreeReset!(fg)
```

The temporary values are reset in the distributed factor graph object `fg<:AbstractDFG` and a new tree is constructed. This `buildTreeReset!` call can be repeated as many times as the user desires, and the results should be consistent for the same factor graph structure (regardless of the numerical values contained within).

IncrementalInference.buildTreeReset! – Function

```julia
buildTreeReset!(dfg; ...)
buildTreeReset!(
    dfg,
    eliminationOrder;
    ordering,
    drawpdf,
    show,
    filepath,
    viewerapp,
    imgs,
    ensureSolvable,
    eliminationConstraints
)
```


Build a completely new Bayes (Junction) tree, after first wiping clean all temporary state in fg from a possibly pre-existing tree.

DevNotes

• replaces resetBuildTreeFromOrder!

Related:

buildTreeFromOrdering!

## Variable Ordering

### Getting the AMD Variable Ordering

The variable ordering is described as a `::Vector{Symbol}`. Note that the automated methods can be varied between AMD, CCOLAMD, and others.

```julia
# get the automated variable elimination order
vo = getEliminationOrder(fg)
```

It is also possible to manually define the variable ordering:

```julia
vo = [:x1; :l3; :x2; ...]
```

And then reset the factor graph and build a new tree:

```julia
buildTreeReset!(fg, vo)
```
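A complete round-trip might look like this (a sketch using the `generateGraph_Kaess` example from above; any permutation of all the graph's variables is a valid ordering):

```julia
using IncrementalInference

fg = generateGraph_Kaess()

# automated ordering, then build a tree from it
vo = getEliminationOrder(fg)
tree1 = buildTreeReset!(fg, vo)

# reversing the ordering is still a valid permutation,
# but generally produces a differently shaped tree
tree2 = buildTreeReset!(fg, reverse(vo))
```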
Note

A list of variables or factors can be obtained through `ls` and related functions; see Querying the Factor Graph.

## Interfacing with the MM-iSAMv2 Solver

The following parameters (set before calling `solveTree!`) will show the solution progress on the tree visualization:

```julia
getSolverParams(fg).drawtree = true
getSolverParams(fg).showtree = true

# an async process will now draw and show the tree on Linux
tree = solveTree!(fg)
```
Note

See the Solving Graphs section for more details on the solver.

### Get the Elimination Order Used

The solver internally uses `buildTreeReset!`, which means the user sometimes needs to extract the variable elimination order after the fact. This can be done with:

IncrementalInference.getEliminationOrder – Function

```julia
getEliminationOrder(dfg; ordering, solvable, constraints)
```

Determine the variable ordering used to construct both the Bayes net and the Bayes/Junction/Elimination tree.

Notes

• Heuristic method – equivalent to QR or Cholesky.
• Uses the BLAS QR function to extract the variable ordering.
• NOT USING SUITE SPARSE – which would require a commercial license.
• For now `A::Array{<:Number,2}` as a dense matrix.
• Columns of A are system variables, rows are factors (without differentiating between partial or full factors).
• The default is to use `solvable=1` and ignore factors and variables that might be used for dead reckoning or similar.

Future

• TODO: A should be a sparse data structure (when we exceed 10'000 variable dimensions).
• TODO: the incidence matrix is rectangular and the adjacency matrix is square.

```julia
getEliminationOrder(treel)
```

Return the variable elimination order stored in a tree object.
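For example, after a solve the ordering that was actually used can be recovered from the tree object (a sketch, following the functions documented above):

```julia
# solve, then query the tree for the elimination order it was built with
tree = solveTree!(fg)
vo_used = getEliminationOrder(tree)
```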

## Visualizing

IncrementalInference.jl includes functions for visualizing the Bayes tree, and uses outside packages such as GraphViz (standard) and Latex tools (experimental, optional) to do so.

### GraphViz

```julia
drawTree(tree, show=true) # , filepath="/tmp/caesar/mytree.pdf"
```

IncrementalInference.drawTree – Function

```julia
drawTree(
    treel;
    show,
    suffix,
    filepath,
    xlabels,
    dpi,
    viewerapp,
    imgs
)
```

Draw the Bayes (Junction) tree by means of graphviz `.dot` files. Ensure the required Linux packages are installed: `sudo apt-get install graphviz xdot`.

Notes

• `xlabels` is an optional `cliqid=>xlabel` mapping.

### Latex Tikz (Optional)

EXPERIMENTAL, requiring special import.

First make sure the following packages are installed on your system:

```
$ sudo apt-get install texlive-pictures dot2tex
$ pip install dot2tex
```

Then in Julia you should be able to do:

```julia
import IncrementalInference: generateTexTree

generateTexTree(tree)
```

An example Bayes (Junction) tree representation obtained through generateTexTree(tree) for the sample factor graph shown above can be seen in the following image.

It is also possible to see the upward message passing variable/factor association matrix for each clique, which requires the Gadfly.jl package:

```julia
using Gadfly

spyCliqMat(tree, :x1) # provided by IncrementalInference

# or embedded in graphviz
drawTree(tree, imgs=true, show=true)
```

## Clique State Machine

The mmisam solver is based on a state machine design to handle the inter- and intra-clique operations during a variety of situations. Use of the clique state machine (CSM) makes debugging, development, verification, and modification of the algorithm much easier. Contact us for any support regarding modifications to the default algorithm. For pre-docs on working with CSM, please see IIF #443.

### STATUS of a Clique

CSM currently uses the following statuses for each of the cliques during the inference process:

```julia
[:initialized; :upsolved; :marginalized; :downsolved; :uprecycled]
```

### Bayes Tree Legend (from IIF)

The color legend for the refactored CSM, from the issue:

• Blank / white – uninitialized or unprocessed,
• Orange – recycled clique upsolve solution from previous tree passed into solveTree! – TODO,
• Blue – fully marginalized clique that will not be updated during upsolve (maybe downsolved),
• Light blue – completed downsolve,
• Green – trying to up initialize,
• Darkgreen – initUp some could up init,
• Lightgreen – initUp, no additional variables could up init,
• Olive – trying to down initialize,
• Seagreen – initUp some could down init,
• Khaki – initUp, no additional variables could down init,
• Brown – initialized but not solved yet (likely child cliques that depend on downward autoinit msgs),
• Light red – completed upsolve,
• Tomato – partial dimension upsolve but finished,
• Red – CPU working on clique's Chapman-Kolmogorov inference (up),
• Maroon – CPU working on clique's Chapman-Kolmogorov inference (down),
• Red – finished cliques still shown in red are in ERROR_STATUS.