MeasureTheory

API

MeasureTheory.Density (Type)
struct Density{M,B}
    μ::M
    base::B
end

For measures μ and ν with μ≪ν, the density of μ with respect to ν (also called the Radon-Nikodym derivative dμ/dν) is a function f defined on the support of ν with the property that for any measurable a ⊂ supp(ν), μ(a) = ∫ₐ f dν.

Because this function is often difficult to express in closed form, there are many different ways of computing it. We therefore provide a formal representation to allow computational flexibility.
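The defining property μ(a) = ∫ₐ f dν can be checked numerically. A minimal sketch in plain Julia (local names only, not the MeasureTheory.jl API), taking μ to be the standard normal measure, ν Lebesgue measure on ℝ, and a = [0, 1]:

```julia
# The density dμ/dν for μ = standard normal, ν = Lebesgue
f(x) = exp(-x^2 / 2) / sqrt(2π)

# Approximate ∫ₐ f dν over a = [0, 1] with the trapezoid rule
xs = range(0, 1; length = 10_001)
h = step(xs)
μ_a = h * (sum(f, xs) - (f(0.0) + f(1.0)) / 2)

μ_a   # ≈ 0.341345, which is Φ(1) - Φ(0), i.e. μ([0, 1]) for the standard normal
```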

MeasureTheory.DensityMeasure (Type)
struct DensityMeasure{F,B} <: AbstractMeasure
    density :: F
    base    :: B
end

A DensityMeasure is a measure defined by a density with respect to some other "base" measure.
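A minimal sketch of how such a measure can evaluate a log-density: add the stored log-density to that of the base measure. The types and the `logdensity` method below are illustrative stand-ins, not the package's definitions:

```julia
abstract type SketchMeasure end

struct LebesgueSketch <: SketchMeasure end
logdensity(::LebesgueSketch, x) = 0.0   # Lebesgue is its own base measure here

struct DensityMeasureSketch{F,B} <: SketchMeasure
    density::F   # stored as a log-density, matching ∫(f, base; log=true)
    base::B
end

logdensity(d::DensityMeasureSketch, x) = d.density(x) + logdensity(d.base, x)

# A Normal(0, 1)-like measure built from a log-density over Lebesgue
m = DensityMeasureSketch(x -> -x^2 / 2 - log(2π) / 2, LebesgueSketch())
logdensity(m, 0.0)   # = -log(2π)/2 ≈ -0.9189
```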

MeasureTheory.Kernel (Type)
kernel(f, M)
kernel((f1, f2, ...), M)

A kernel κ = kernel(f, M) returns a wrapper around a function f giving the parameters for a measure of type M, such that κ(x) = M(f(x)...) or, respectively, κ(x) = M(f1(x), f2(x), ...).

If the argument is a named tuple (a=f, b=g), κ(x) is defined as M(; a=f(x), b=g(x)).
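The tuple cases can be sketched in a few lines. The names `NormalSketch` and `kernelsketch` are illustrative stand-ins, not the package API:

```julia
struct NormalSketch
    μ::Float64
    σ::Float64
end

kernelsketch(f, M) = x -> M(f(x)...)                        # κ(x) = M(f(x)...)
kernelsketch(fs::Tuple, M) = x -> M((g(x) for g in fs)...)  # κ(x) = M(f1(x), f2(x), ...)

κ = kernelsketch(x -> (x, 1.0), NormalSketch)
κ(3.0)    # NormalSketch(3.0, 1.0)

κ2 = kernelsketch((identity, abs), NormalSketch)
κ2(-2.0)  # NormalSketch(-2.0, 2.0)
```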

Reference

  • https://en.wikipedia.org/wiki/Markov_kernel
MeasureTheory.LKJL (Type)

The LKJ distribution (Lewandowski et al. 2009) for the Cholesky factor L of correlation matrices.

A correlation matrix $Ω=LL'$ has the density $|Ω|^{η-1}$. However, it is usually not necessary to construct $Ω$ itself, so this distribution is parameterized by the Cholesky factor L of the decomposition $Ω = LL'$, and takes L directly.

Note that the methods do not check whether L yields a valid correlation matrix. Valid values are $η > 0$. When $η > 1$, the distribution is unimodal at Ω=I, while for $0 < η < 1$ the density has a trough there. $η = 2$ is recommended as a vague prior. When $η = 1$, the density is uniform in Ω, but not in L, because of the Jacobian correction of the transformation.

Adapted from https://github.com/tpapp/AltDistributions.jl
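The (unnormalized) log-density $\log|Ω|^{η-1}$ can be computed directly from L, since $|Ω| = |LL'| = \prod_i L_{ii}^2$. A sketch, assuming nothing beyond the LinearAlgebra standard library (the function name is illustrative):

```julia
using LinearAlgebra

# log |Ω|^(η-1) = (η - 1) * log |L L'| = (η - 1) * 2 Σᵢ log Lᵢᵢ
lkj_logdensity(L, η) = (η - 1) * 2 * sum(log, diag(L))

L = cholesky([1.0 0.3; 0.3 1.0]).L
lkj_logdensity(L, 2.0)   # = log(0.91), since |Ω| = 1 - 0.3² = 0.91
lkj_logdensity(L, 1.0)   # = 0: uniform in Ω when η = 1
```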

MeasureTheory.SuperpositionMeasure (Type)
struct SuperpositionMeasure{X,NT} <: AbstractMeasure
    components :: NT
end

Superposition of measures is analogous to mixture distributions, but (because measures need not be normalized) requires no scaling.

The superposition of two measures μ and ν can be more concisely written as μ + ν.
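Because no normalization is required, the density of a superposition with respect to a common base measure is simply the sum of the component densities. A sketch with plain functions standing in for densities (illustrative only, not the package API):

```julia
superpose(fs...) = x -> sum(f(x) for f in fs)

f(x) = exp(-x^2 / 2) / sqrt(2π)          # density of a Normal(0, 1)
g(x) = exp(-(x - 3)^2 / 2) / sqrt(2π)    # density of a Normal(3, 1)

h = superpose(f, g)
h(0.0) == f(0.0) + g(0.0)   # true; the total mass is 2, not 1
```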

MeasureTheory.:≃ (Method)
≃(μ,ν)

Equivalence of measures

Measures μ and ν on the same space X are equivalent, written μ ≃ ν, if μ ≪ ν and ν ≪ μ. Note that this is often written ~ in the literature, but this is overloaded in probabilistic programming, so we use this alternate notation.

Also note that equivalence is very different from equality. For two equivalent measures, the sets of non-zero measure will be identical, but what that measure is in each case can be very different.
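The distinction can be made concrete with densities. If μ has density f over Lebesgue measure and ν has density 2f, the two measures are equivalent (each is a positive reweighting of the other) but certainly not equal. A sketch with local names:

```julia
f(x) = exp(-x^2 / 2) / sqrt(2π)

μ_density(x) = f(x)        # density of μ w.r.t. Lebesgue
ν_density(x) = 2 * f(x)    # density of ν w.r.t. Lebesgue

# Same null sets: each density is positive exactly where the other is...
all((μ_density(x) > 0) == (ν_density(x) > 0) for x in -3:0.5:3)   # true

# ...but the measures assign different values: ν gives every set twice the mass
μ_density(0.0) == ν_density(0.0)   # false
```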

MeasureTheory.:≪ (Function)
≪(μ,ν)

Absolute continuity

A measure μ is absolutely continuous with respect to ν, written μ ≪ ν, if ν(A)==0 implies μ(A)==0 for every ν-measurable set A.

Less formally, suppose we have a set A with ν(A)==0. If μ(A)≠0, then there can be no way to "reweight" ν to get to μ. We can't make something from nothing.

This "reweighting" is really a density function. If μ≪ν, then there is some function f that makes μ == ∫(f,ν) (see the help section for ∫).

We can get this f directly via the Radon-Nikodym derivative, f == 𝒹(μ,ν) (see the help section for 𝒹).

Note that ≪ is not a partial order, because it is not antisymmetric. That is to say, it's possible (in fact, common) to have two different measures μ and ν with μ ≪ ν and ν ≪ μ. A simple example of this is

μ = Normal()
ν = Lebesgue(ℝ)

When ≪ holds in both directions, the measures μ and ν are equivalent, written μ ≃ ν. See the help section for ≃ for more information.

MeasureTheory.asparams (Function)

asparams builds on TransformVariables.as to construct bijections to the parameter space of a given parameterized measure. Because this is only possible for continuous parameter spaces, we allow constraints to assign fixed values to any subset of the parameters.


asparams(::Type{<:ParameterizedMeasure}, ::Val{::Symbol})

Return a transformation for a particular parameter of a given parameterized measure. For example,

julia> asparams(Normal, Val(:σ))
asℝ₊

asparams(::Type{<: ParameterizedMeasure{N}}, constraints::NamedTuple) where {N}

Return a transformation for a given parameterized measure subject to the named tuple constraints. For example,

julia> asparams(Binomial{(:p,)}, (n=10,))
TransformVariables.TransformTuple{NamedTuple{(:p,), Tuple{TransformVariables.ScaledShiftedLogistic{Float64}}}}((p = as𝕀,), 1)

asparams(::ParameterizedMeasure)

Return a transformation with no constraints. For example,

julia> asparams(Normal{(:μ,:σ)})
TransformVariables.TransformTuple{NamedTuple{(:μ, :σ), Tuple{TransformVariables.Identity, TransformVariables.ShiftedExp{true, Float64}}}}((μ = asℝ, σ = asℝ₊), 2)
MeasureTheory.basemeasure (Function)
basemeasure(μ)

Many measures are defined in terms of a logdensity relative to some base measure. This makes it important to be able to find that base measure.

For measures not defined in this way, we'll typically have basemeasure(μ) == μ.

MeasureTheory.isprimitive (Method)
isprimitive(μ)

Most measures are defined in terms of other measures, for example using a density or a pushforward. Those that are not are considered primitive (this is a convention of this library, not a general measure-theoretic notion). The canonical example of a primitive measure is Lebesgue(X) for some X.

The default method is isprimitive(μ) = false

So when adding a new primitive measure, it's necessary to add a method for its type that returns true.
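The default/override pattern described above can be sketched with stand-in names (`LebesgueSketch` is illustrative, not the package's Lebesgue type):

```julia
isprimitive(μ) = false                   # fallback: most measures are not primitive

struct LebesgueSketch{X} end             # stand-in for Lebesgue(X)
isprimitive(::LebesgueSketch) = true     # the one method a new primitive measure adds

isprimitive(LebesgueSketch{Float64}())   # true
isprimitive(:some_other_measure)         # false
```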

MeasureTheory.logdensity (Function)
logdensity(μ::AbstractMeasure{X}, x::X)

Compute the logdensity of the measure μ at the point x. This is the standard way to define logdensity for a new measure. The base measure is implicit here, and is understood to be basemeasure(μ).

Methods for computing the density relative to other base measures are provided separately; see the help section for 𝒹.

MeasureTheory.∫ (Method)
∫(f, base::AbstractMeasure; log=true)

Define a new measure in terms of a density f over some measure base. If log=true (the default), f is considered as a log-density.

MeasureTheory.𝒹 (Method)
𝒹(μ::AbstractMeasure, base::AbstractMeasure; log=true)

Compute the Radon-Nikodym derivative (or its log, if log=true) of μ with respect to base.

MeasureTheory.@domain (Macro)
@domain(name, T)

Defines a new singleton struct T, and a value name for building values of that type.

For example, MeasureTheory.@domain ℝ RealNumbers is equivalent to

struct RealNumbers <: MeasureTheory.AbstractDomain end

export ℝ

ℝ = MeasureTheory.RealNumbers()

Base.show(io::IO, ::RealNumbers) = print(io, "ℝ")
MeasureTheory.@measure (Macro)
@measure <declaration>

The <declaration> gives a measure and its default parameters, and specifies its relation to its base measure. For example,

@measure Normal(μ,σ) ≃ Lebesgue{X}

declares that Normal is a measure with default parameters μ and σ, and that it is equivalent to its base measure, Lebesgue{X}.

You can see the generated code like this:

julia> MacroTools.prettify(@macroexpand @measure Normal(μ,σ) ≃ Lebesgue{X})
quote
    struct Normal{P, X} <: AbstractMeasure
        par::P
    end
    function Normal(nt::NamedTuple)
        P = typeof(nt)
        return Normal{P, eltype(Normal{P})}
    end
    Normal(; kwargs...) = Normal((; kwargs...))
    (basemeasure(μ::Normal{P, X}) where {P, X}) = Lebesgue{X}
    Normal(μ, σ) = Normal(; Any[:μ, :σ])
    ((:≪)(::Normal{P, X}, ::Lebesgue{X}) where {P, X}) = true
    ((:≪)(::Lebesgue{X}, ::Normal{P, X}) where {P, X}) = true
end

Note that the eltype function needs to be defined separately by the user.