# Mean Time Functions

These functions provide scalars, vectors, or matrices representing the expected times until some event occurs.

## Documentation

### `DiscreteMarkovChains.fundamental_matrix` (Function)

```julia
fundamental_matrix(x)
```
**Definitions**

The fundamental matrix of a Markov chain is defined to be $(I-C)^{-1}$ where $C$ is the sub-transition matrix that takes transient states to transient states.

The $(i, j)$th entry of the fundamental matrix is the expected number of times the chain is in state $j$ over the whole process given that the chain started in state $i$.

**Arguments**

- `x`: some kind of Markov chain.

**Returns**

The fundamental matrix of the Markov chain.

**Notes**

It is advised to have `x` in canonical form already. This avoids confusion about which states make up each element of the output array.

**Examples**

Here is a typical example of the fundamental matrix.

```julia
using DiscreteMarkovChains
T = [
    1.0 0.0 0.0;
    0.0 0.4 0.6;
    0.2 0.2 0.6;
]
X = DiscreteMarkovChain(T)

fundamental_matrix(X)

# output

2×2 Array{Float64,2}:
 3.33333  5.0
 1.66667  5.0
```
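For readers outside Julia, the definition $(I-C)^{-1}$ can be sketched directly with NumPy. This is an illustrative sketch of the linear algebra, not the package's implementation:

```python
import numpy as np

# Transition matrix from the example above: state 1 is absorbing,
# states 2 and 3 are transient.
T = np.array([
    [1.0, 0.0, 0.0],
    [0.0, 0.4, 0.6],
    [0.2, 0.2, 0.6],
])

# Sub-transition matrix C restricted to the transient states.
C = T[1:, 1:]

# Fundamental matrix N = (I - C)^{-1}.
N = np.linalg.inv(np.eye(2) - C)
print(N)  # rows approximately [3.3333, 5.0] and [1.6667, 5.0]
```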

If the chain is ergodic, then the fundamental matrix is a 0×0 matrix (since there are no transient states).

```julia
T = [
    0.0 1.0 0.0;
    0.5 0.0 0.5;
    0.0 1.0 0.0;
]
X = DiscreteMarkovChain(T)

fundamental_matrix(X)

# output

0×0 Array{Any,2}
```
### `DiscreteMarkovChains.mean_time_to_absorption` (Function)

```julia
mean_time_to_absorption(x)
```

**Definitions**

The expected time to absorption is the expected number of steps that the process will take while in the transient super class given that the process started in state $i$.

**Arguments**

- `x`: some kind of Markov chain.

**Returns**

A 1D array where element $i$ is the expected number of steps spent in the transient super class before absorption, given that the chain started in transient state $i$.

**Notes**

It is advised to have `x` in canonical form already. This avoids confusion about which states make up each element of the output array.

**Examples**

See the following example, where we have a chain with two transient states (states 3 and 4) that transition between each other. One of those states can enter a pre-absorbing state (state 2), and the pre-absorbing state enters the absorbing state (state 1) on the next step.

So state 2 should spend no more steps in the transient states and hence should have a time to absorption of 0. State 4 should have one more step than state 3, since state 4 must enter state 3 to exit the transient states.

```julia
using DiscreteMarkovChains
T = [
    1.0 0.0 0.0 0.0;
    1.0 0.0 0.0 0.0;
    0.0 0.2 0.0 0.8;
    0.0 0.0 1.0 0.0;
]
X = DiscreteMarkovChain(T)

mean_time_to_absorption(X)

# output

3-element Array{Float64,1}:
  0.0
  9.000000000000002
 10.000000000000002
```
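As a cross-check, the column above follows from the fundamental matrix: row sums of $N$ give the expected number of transient steps counting the starting state, so subtracting 1 reproduces the convention in the output above. A NumPy sketch (not the package's implementation):

```python
import numpy as np

# Transition matrix from the example above: state 1 is absorbing,
# states 2, 3, and 4 are transient.
T = np.array([
    [1.0, 0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 0.2, 0.0, 0.8],
    [0.0, 0.0, 1.0, 0.0],
])

# Fundamental matrix over the transient states.
C = T[1:, 1:]
N = np.linalg.inv(np.eye(3) - C)

# Row sums of N count the starting step; subtracting 1 matches the
# output above, which counts only subsequent steps.
t = N @ np.ones(3) - 1
print(t)  # approximately [0.0, 9.0, 10.0]
```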
### `DiscreteMarkovChains.mean_recurrence_time` (Function)

```julia
mean_recurrence_time(x)
```

**Definitions**

This is the expected number of steps for the process to return to state $i$, given that the process started in state $i$.

**Arguments**

- `x`: some kind of Markov chain.

**Returns**

A 1D array where the $i$th entry is the mean recurrence time of state $i$.

**Notes**

`x` must be irreducible (i.e. ergodic).

**Examples**

```julia
using DiscreteMarkovChains
T = [
    0.1 0.2 0.7;
    0.3 0.0 0.7;
    0.4 0.4 0.2;
]
X = DiscreteMarkovChain(T)

mean_recurrence_time(X)

# output

3-element Array{Float64,1}:
 3.4615384615384603
 4.090909090909091
 2.1428571428571432
```
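For an irreducible chain, the mean recurrence time of state $i$ equals $1/\pi_i$, where $\pi$ is the stationary distribution. Here is a NumPy sketch of that identity for the matrix above (an illustration, not the package's implementation):

```python
import numpy as np

# Transition matrix from the example above.
T = np.array([
    [0.1, 0.2, 0.7],
    [0.3, 0.0, 0.7],
    [0.4, 0.4, 0.2],
])

# Stationary distribution: solve pi @ T = pi with sum(pi) = 1 by
# replacing one balance equation with the normalisation constraint.
A = np.vstack([(T.T - np.eye(3))[:-1], np.ones(3)])
b = np.array([0.0, 0.0, 1.0])
pi = np.linalg.solve(A, b)

# Mean recurrence time of state i is 1 / pi_i.
print(1 / pi)  # approximately [3.4615, 4.0909, 2.1429]
```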

If we have a reducible chain, then no error will be raised but the output will be nonsense.

```julia
T = [
    1.0 0.0 0.0;
    0.1 0.5 0.4;
    0.0 0.3 0.7;
]
X = DiscreteMarkovChain(T)

mean_recurrence_time(X)

# output

3-element Array{Float64,1}:
   1.0
 -Inf
 -Inf
```
### `DiscreteMarkovChains.mean_first_passage_time` (Function)

```julia
mean_first_passage_time(x)
```

**Definitions**

This is the expected number of steps for the process to reach state $j$ given that the process started in state $i$. We say that the mean first passage time between a state and itself is 0.

**Arguments**

- `x`: some kind of Markov chain.

**Returns**

A matrix where the $(i,j)$th entry is the mean first passage time from state $i$ to state $j$. Diagonal elements are 0; the diagonals would otherwise have represented the mean recurrence times.

**Notes**

`x` must be irreducible (i.e. ergodic).

**Examples**

```julia
using DiscreteMarkovChains
T = [
    0.1 0.2 0.7;
    0.3 0.0 0.7;
    0.4 0.4 0.2;
]
X = DiscreteMarkovChain(T)

mean_first_passage_time(X)

# output

3×3 Array{Float64,2}:
 0.0      3.40909  1.42857
 2.88462  0.0      1.42857
 2.69231  2.95455  0.0
```
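The matrix above can be cross-checked by solving the standard first-step equations $m_{ij} = 1 + \sum_{k \neq j} p_{ik} m_{kj}$ for each target state $j$. A NumPy sketch of that computation (an illustration, not the package's implementation):

```python
import numpy as np

# Transition matrix from the example above.
T = np.array([
    [0.1, 0.2, 0.7],
    [0.3, 0.0, 0.7],
    [0.4, 0.4, 0.2],
])

n = T.shape[0]
M = np.zeros((n, n))
for j in range(n):
    # Hitting times of state j: t_i = 1 + sum_{k != j} T[i, k] * t_k
    # for every i != j, with t_j = 0.
    keep = [i for i in range(n) if i != j]
    Q = T[np.ix_(keep, keep)]
    t = np.linalg.solve(np.eye(n - 1) - Q, np.ones(n - 1))
    M[keep, j] = t  # diagonal stays 0 by construction

print(np.round(M, 5))
```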

If we have a reducible chain, then no error will be raised but the output will be nonsense.

```julia
T = [
    1.0 0.0 0.0;
    0.1 0.5 0.4;
    0.0 0.3 0.7;
]
X = DiscreteMarkovChain(T)

mean_first_passage_time(X)

# output

3×3 Array{Float64,2}:
  0.0     -Inf  -Inf
 23.3333   NaN  -Inf
 26.6667  -Inf   NaN
```