# Core Types

These are the most commonly used types. They are simple and are relied on by all other functions.

There is basic support for continuous Markov chains. All applicable functions also work for continuous chains.

## Index

- `DiscreteMarkovChains.ContinuousMarkovChain`
- `DiscreteMarkovChains.DiscreteMarkovChain`
- `DiscreteMarkovChains.embedded`
- `DiscreteMarkovChains.probability_matrix`
- `DiscreteMarkovChains.state_space`
- `DiscreteMarkovChains.transition_matrix`

## Documentation

The main type is `DiscreteMarkovChain`:

`DiscreteMarkovChains.DiscreteMarkovChain` — Type

```julia
DiscreteMarkovChain(transition_matrix)
DiscreteMarkovChain(state_space, transition_matrix)
DiscreteMarkovChain(continuous_markov_chain)
```

Creates a new discrete Markov chain object.

**Arguments**

`state_space`
: The names of the states that make up the Markov chain.

`transition_matrix`
: The single step transition probability matrix.

`continuous_markov_chain`
: An instance of `ContinuousMarkovChain`.

**Examples**

The following shows a basic Sunny-Cloudy-Rainy weather model.

```julia
using DiscreteMarkovChains
T = [
    0.9 0.1 0.0;
    0.5 0.2 0.3;
    0.1 0.4 0.5;
]
X = DiscreteMarkovChain(["Sunny", "Cloudy", "Rainy"], T)
println(state_space(X))
# output
["Sunny", "Cloudy", "Rainy"]
```

```julia
println(transition_matrix(X))
# output
[0.9 0.1 0.0; 0.5 0.2 0.3; 0.1 0.4 0.5]
```

`DiscreteMarkovChains.ContinuousMarkovChain` — Type

```julia
ContinuousMarkovChain(transition_matrix)
ContinuousMarkovChain(state_space, transition_matrix)
ContinuousMarkovChain(discrete_markov_chain)
```

Creates a new continuous Markov chain object. This is also known as a Markov jump process.

Note that an irreducible finite continuous-time Markov chain is always positive recurrent; its stationary distribution therefore exists, is unique, and equals the limiting distribution.

**Arguments**

`state_space`
: The names of the states that make up the Markov chain.

`transition_matrix`
: The transition intensity matrix. Also known as the generator matrix.

`discrete_markov_chain`
: An instance of `DiscreteMarkovChain`.

**Examples**

The following shows a basic Sunny-Cloudy-Rainy weather model.

```julia
using DiscreteMarkovChains
T = [
    -0.1 0.1 0.0;
    0.5 -0.8 0.3;
    0.1 0.4 -0.5;
]
X = ContinuousMarkovChain(["Sunny", "Cloudy", "Rainy"], T)
println(state_space(X))
# output
["Sunny", "Cloudy", "Rainy"]
```

```julia
println(transition_matrix(X))
# output
[-0.1 0.1 0.0; 0.5 -0.8 0.3; 0.1 0.4 -0.5]
```
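Both constructors also accept a chain of the other kind, as their signatures show. The sketch below only exercises those call signatures with the weather model from this page; nothing is assumed here about the matrices the conversions produce.

```julia
using DiscreteMarkovChains

T = [
    0.9 0.1 0.0;
    0.5 0.2 0.3;
    0.1 0.4 0.5;
]
X = DiscreteMarkovChain(["Sunny", "Cloudy", "Rainy"], T)

# Build a continuous chain from the discrete one, then convert back.
C = ContinuousMarkovChain(X)
Y = DiscreteMarkovChain(C)
println(state_space(Y))
```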

`DiscreteMarkovChains.state_space` — Function

`state_space(x)`

**Definitions**

The state space of a Markov chain is the (ordered) set of values that the process is able to take on. For example, in a Sunny-Cloudy-Rainy weather model, the state space is `["Sunny", "Cloudy", "Rainy"]`.

**Arguments**

`x`
: Some kind of Markov chain.

**Returns**

The state space of the Markov chain.

`DiscreteMarkovChains.transition_matrix` — Function

`transition_matrix(x)`

**Definitions**

The one-step transition matrix, $T$, of a discrete Markov chain or the generator matrix of a continuous Markov chain.

**Arguments**

`x`
: Some kind of Markov chain.

**Returns**

The transition matrix of the Markov chain.

`DiscreteMarkovChains.probability_matrix` — Function

`probability_matrix(x)`

**Definitions**

The one-step transition matrix, $T$, of a Markov chain, $\{X_t\}$, is the matrix whose $(i,j)$th entry is the probability that the process is in state $j$ at time 1 given that it started in state $i$ at time 0. That is,

\[T = \{p^{(1)}_{i,j}\}_{i,j \in S} = \{\mathbb{P}(X_1 = j \mid X_0 = i)\}_{i,j \in S}\]

where $S$ is the state space of $\{X_t\}$.

**Arguments**

`x`
: Some kind of Markov chain.

**Returns**

The one-step probability matrix of the Markov chain.
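For a discrete chain, the definition above coincides with the chain's one-step transition matrix. Assuming that correspondence holds in this package, a minimal check on the weather model from this page looks like:

```julia
using DiscreteMarkovChains

T = [
    0.9 0.1 0.0;
    0.5 0.2 0.3;
    0.1 0.4 0.5;
]
X = DiscreteMarkovChain(["Sunny", "Cloudy", "Rainy"], T)

# The (i,j)th entry is P(X_1 = j | X_0 = i); each row sums to 1.
println(probability_matrix(X))
```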

`DiscreteMarkovChains.embedded` — Function

`embedded(x)`

**Arguments**

`x`
: Some kind of Markov chain.

**Returns**

If the Markov chain is continuous, then it returns the embedded Markov chain. If the Markov chain is discrete, then it returns the given chain, `x`.

**Notes**

If the equivalent chain is preferred rather than the embedded chain, then use `DiscreteMarkovChain(x)`.
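A minimal sketch of `embedded` on the continuous weather model from this page (the call signature is taken from this reference; the exact values of the resulting matrix are not asserted here):

```julia
using DiscreteMarkovChains

# Generator matrix of the continuous Sunny-Cloudy-Rainy model.
Q = [
    -0.1 0.1 0.0;
    0.5 -0.8 0.3;
    0.1 0.4 -0.5;
]
X = ContinuousMarkovChain(["Sunny", "Cloudy", "Rainy"], Q)

# The embedded (jump) chain is discrete: its transition probabilities
# describe which state the process jumps to next, ignoring holding times.
Y = embedded(X)
println(transition_matrix(Y))
```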