# Probability Functions

These functions return probability scalars, vectors, or matrices as output.

## Index

- `DiscreteMarkovChains.exit_probabilities`
- `DiscreteMarkovChains.first_passage_probabilities`
- `DiscreteMarkovChains.stationary_distribution`

## Documentation

`DiscreteMarkovChains.stationary_distribution` — Function

`stationary_distribution(x)`

**Definitions**

A stationary distribution of a Markov chain is a probability distribution that remains unchanged as the chain progresses in time. It is a row vector $w$ whose elements sum to 1 and which satisfies $wT = w$, where $T$ is the one-step transition matrix of the Markov chain.

In other words, $w$ is invariant under multiplication by $T$.

For simplicity, this function returns a column vector instead of a row vector.

For continuous Markov chains, the stationary distribution is the solution to $wQ = 0$, where $Q$ is the transition intensity matrix.
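
To make the discrete case concrete, here is a minimal linear-algebra sketch (hypothetical code, not the package's implementation). Transposing $wT = w$ gives $(T' - I)w' = 0$; since the rows of $T' - I$ sum to the zero vector, one equation is redundant and can be replaced by the normalization $\sum_i w_i = 1$.

```
using LinearAlgebra

# Hypothetical sketch, not the package's implementation: solve (T' - I)w = 0
# together with sum(w) = 1, using the chain from the first example below.
T = [
    4 2 4;
    1 0 9;
    3 5 2;
]//10
A = T' - I       # the rows of A sum to zero, so one equation is redundant
A[end, :] .= 1   # replace the redundant equation with sum(w) = 1
b = [0; 0; 1]
w = A \ b        # exact rational solve: [35//129, 12//43, 58//129]
```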

**Arguments**

`x`

: some kind of Markov chain.

**Returns**

A column vector, $w$, that satisfies the equation $w'T = w'$.

**Examples**

A stationary distribution always exists. However, it might not be unique. If it is unique, there is no ambiguity.

```
using DiscreteMarkovChains
T = [
    4 2 4;
    1 0 9;
    3 5 2;
]//10
X = DiscreteMarkovChain(T)
stationary_distribution(X)
# output
3-element Array{Rational{Int64},1}:
 35//129
 12//43
 58//129
```

If there are infinitely many solutions, then the principal solution is taken (every free variable is set to 0). A Moore-Penrose inverse is used.
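
As a rough illustration of that behaviour (hypothetical code, not the package's internals), the fixed-point equations can be stacked with the normalization and solved with `pinv`, which picks out the minimum-norm solution when infinitely many exist:

```
using LinearAlgebra

# Hypothetical sketch of the pseudoinverse approach, not the package's
# internals, using the block-diagonal chain from the example below.
T = [
    0.4 0.6 0.0;
    0.6 0.4 0.0;
    0.0 0.0 1.0;
]
A = [T' - I; ones(1, 3)]   # stack (T' - I)w = 0 with sum(w) = 1
b = [0.0; 0.0; 0.0; 1.0]
w = pinv(A) * b            # ≈ [1/3, 1/3, 1/3], matching the output below
```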

```
T = [
    0.4 0.6 0.0;
    0.6 0.4 0.0;
    0.0 0.0 1.0;
]
X = DiscreteMarkovChain(T)
stationary_distribution(X)
# output
3-element Array{Float64,1}:
 0.33333333333333337
 0.33333333333333337
 0.33333333333333337
```

`DiscreteMarkovChains.exit_probabilities` — Function

`exit_probabilities(x)`

**Arguments**

`x`

: some kind of Markov chain.

**Returns**

An array where element $(i, j)$ is the probability that transient state $i$ will enter recurrent state $j$ on its first step out of the transient states. That is, $e_{i,j}$.

**Examples**

The following should be fairly obvious. States 1, 2 and 3 are the recurrent states, and state 4 is the single transient state, so it must enter one of the three recurrent states on the next time step. The exit probabilities can therefore be read directly off the last row of $T$.

```
using DiscreteMarkovChains
T = [
    0.2 0.2 0.6 0.0;
    0.5 0.4 0.1 0.0;
    0.6 0.2 0.2 0.0;
    0.2 0.3 0.5 0.0;
]
X = DiscreteMarkovChain(T)
exit_probabilities(X)
# output
1×3 Array{Float64,2}:
 0.2  0.3  0.5
```

So state 4 has probabilities 0.2, 0.3 and 0.5 of reaching states 1, 2 and 3 respectively on the first step out of the transient states (consisting only of state 4).

The following is less obvious.

```
T = [
    1.0 0.0 0.0 0.0;
    0.0 1.0 0.0 0.0;
    0.1 0.3 0.3 0.3;
    0.2 0.3 0.4 0.1;
]
X = DiscreteMarkovChain(T)
exit_probabilities(X)
# output
2×2 Array{Float64,2}:
 0.294118  0.705882
 0.352941  0.647059
```

So state 3 has about a 29% chance of entering state 1 on its first time step out of the transient states (and the remaining 71% chance of entering state 2), while state 4 has about a 35% chance of reaching state 1.
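
These numbers can be reproduced by hand with the standard fundamental-matrix formula $E = (I - Q)^{-1}R$, where $Q$ is the transient-to-transient block of $T$ and $R$ is the transient-to-recurrent block. The following is a sketch of that textbook calculation, not necessarily how the package computes it:

```
using LinearAlgebra

# Textbook fundamental-matrix check (not necessarily the package's method)
# for the second example, where states 3 and 4 are transient.
Q = [
    0.3 0.3;
    0.4 0.1;
]                    # transitions among the transient states 3 and 4
R = [
    0.1 0.3;
    0.2 0.3;
]                    # transitions from states 3, 4 into states 1, 2
E = inv(I - Q) * R   # ≈ [0.294118 0.705882; 0.352941 0.647059]
```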

`DiscreteMarkovChains.first_passage_probabilities` — Function

`first_passage_probabilities(x, t, i=missing, j=missing)`

**Definitions**

This is the probability that the process enters state $j$ for the first time at time $t$, given that the process started in state $i$ at time 0. That is, $f^{(t)}_{i,j}$. If no `i` or `j` is given, then it returns a matrix instead, with entries $f^{(t)}_{i,j}$ for `i` and `j` in the state space of `x`.
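
For intuition, these probabilities satisfy the recursion $f^{(1)}_{i,j} = T_{i,j}$ and $f^{(t)}_{i,j} = \sum_{k \neq j} T_{i,k} f^{(t-1)}_{k,j}$: the chain takes one step to some state $k \neq j$ and must then reach $j$ for the first time in the remaining $t - 1$ steps. A minimal sketch of that recursion (hypothetical code, not the package's algorithm):

```
# Hypothetical sketch of the defining recursion, not the package's algorithm:
# F starts as T, and each step applies f(t)[i,j] = sum_{k != j} T[i,k]*f(t-1)[k,j].
function first_passage_sketch(T, t)
    F = copy(T)
    for _ in 2:t
        F = [sum(T[i, k] * F[k, j] for k in axes(T, 1) if k != j)
             for i in axes(T, 1), j in axes(T, 2)]
    end
    return F
end

first_passage_sketch([0.1 0.9; 0.3 0.7], 2)  # ≈ [0.27 0.09; 0.21 0.27]
```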

**Why Do We Use A Slow Algorithm?**

So that `t` can be symbolic if necessary. That is, if symbolic math libraries want to use this library, it will pose no hassle.

**Arguments**

`x`

: some kind of Markov chain.

`t`

: the time at which to calculate the first passage probability.

`i`

: the state that the process starts in.

`j`

: the state that the process must reach for the first time.

**Returns**

A scalar value or a matrix, depending on whether `i` and `j` are given.

**Examples**

```
using DiscreteMarkovChains
T = [
    0.1 0.9;
    0.3 0.7;
]
X = DiscreteMarkovChain(T)
first_passage_probabilities(X, 2)
# output
2×2 Array{Float64,2}:
 0.27  0.09
 0.21  0.27
```

If `X` has a custom state space, then `i` and `j` must be in that state space.

```
T = [
    0.1 0.9;
    0.3 0.7;
]
X = DiscreteMarkovChain(["Sunny", "Rainy"], T)
first_passage_probabilities(X, 2, "Sunny", "Rainy")
# output
0.09000000000000001
```

Notice how this is the (1, 2) entry in the first example.
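
Checking by hand with the recursion above: $f^{(2)}_{1,2} = T_{1,1}T_{1,2} = 0.1 \times 0.9 = 0.09$, since the chain must stay in "Sunny" for one step and then move to "Rainy".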
