KernelFunctions.ARDTransform
— Type ARDTransform(v::AbstractVector)
Transformation that multiplies the input elementwise by v.
Examples
julia> v = rand(10); t = ARDTransform(v); X = rand(10, 100);
julia> map(t, ColVecs(X)) == ColVecs(v .* X)
true
KernelFunctions.ARDTransform
— Method ARDTransform(s::Real, dims::Integer)
Create an ARDTransform with vector fill(s, dims).
KernelFunctions.ChainTransform
— Type ChainTransform(transforms)
Transformation that applies a chain of transformations transforms to the input. The transformation first(transforms) is applied first.
Examples
julia> l = rand(); A = rand(3, 4); t1 = ScaleTransform(l); t2 = LinearTransform(A);
julia> X = rand(4, 10);
julia> map(ChainTransform([t1, t2]), ColVecs(X)) == ColVecs(A * (l .* X))
true
julia> map(t2 ∘ t1, ColVecs(X)) == ColVecs(A * (l .* X))
true
KernelFunctions.ColVecs
— Type ColVecs(X::AbstractMatrix)
A lightweight wrapper for an AbstractMatrix which interprets it as a vector-of-vectors, in which each column of X represents a single vector.
That is, by writing x = ColVecs(X), you are saying "x is a vector-of-vectors, each of which has length size(X, 1). The total number of vectors is size(X, 2)."
Phrased differently, ColVecs(X) says that X should be interpreted as a vector of horizontally-concatenated column-vectors, hence the name ColVecs.
julia> X = randn(2, 5);
julia> x = ColVecs(X);
julia> length(x) == 5
true
julia> X[:, 3] == x[3]
true
ColVecs is related to RowVecs via transposition:
julia> X = randn(2, 5);
julia> ColVecs(X) == RowVecs(X')
true
KernelFunctions.ConstantKernel
— Type ConstantKernel(; c::Real=1.0)
Kernel of constant value c.
Definition
For inputs $x, x'$, the kernel of constant value $c \geq 0$ is defined as
\[k(x, x') = c.\]
See also: ZeroKernel
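As an illustrative check of the definition above (this example is not part of the original docstring), the kernel evaluates to c regardless of its inputs:

```julia
using KernelFunctions

# ConstantKernel ignores its inputs and always evaluates to c.
k = ConstantKernel(; c=2.0)
x, y = randn(3), randn(3)
k(x, y)  # 2.0
```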
KernelFunctions.CosineKernel
— Type CosineKernel(; metric=Euclidean())
Cosine kernel with respect to the metric.
Definition
For inputs $x, x'$ and metric $d(\cdot, \cdot)$, the cosine kernel is defined as
\[k(x, x') = \cos(\pi d(x, x')).\]
By default, $d$ is the Euclidean metric $d(x, x') = \|x - x'\|_2$.
KernelFunctions.ExponentialKernel
— Type ExponentialKernel(; metric=Euclidean())
Exponential kernel with respect to the metric.
Definition
For inputs $x, x'$ and metric $d(\cdot, \cdot)$, the exponential kernel is defined as
\[k(x, x') = \exp\big(- d(x, x')\big).\]
By default, $d$ is the Euclidean metric $d(x, x') = \|x - x'\|_2$.
See also: GammaExponentialKernel
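A quick numerical check of the definition above (illustrative, not part of the original docstring):

```julia
using KernelFunctions, LinearAlgebra

# With the default Euclidean metric, k(x, x') = exp(-‖x - x'‖₂).
k = ExponentialKernel()
x, y = randn(3), randn(3)
k(x, y) ≈ exp(-norm(x - y))  # true
```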
KernelFunctions.ExponentiatedKernel
— Type ExponentiatedKernel()
Exponentiated kernel.
Definition
For inputs $x, x' \in \mathbb{R}^d$, the exponentiated kernel is defined as
\[k(x, x') = \exp(x^\top x').\]
KernelFunctions.EyeKernel
— Type EyeKernel()
Alias of WhiteKernel.
KernelFunctions.FBMKernel
— Type FBMKernel(; h::Real=0.5)
Fractional Brownian motion kernel with Hurst index h.
Definition
For inputs $x, x' \in \mathbb{R}^d$, the fractional Brownian motion kernel with Hurst index $h \in [0,1]$ is defined as
\[k(x, x'; h) = \frac{\|x\|_2^{2h} + \|x'\|_2^{2h} - \|x - x'\|_2^{2h}}{2}.\]
KernelFunctions.FunctionTransform
— Type FunctionTransform(f)
Transformation that applies function f to the input.
Make sure that f can act on an input. For instance, if the inputs are vectors, use f(x) = sin.(x) instead of f = sin.
Examples
julia> f(x) = sum(x); t = FunctionTransform(f); X = randn(100, 10);
julia> map(t, ColVecs(X)) == vec(sum(X; dims=1))
true
KernelFunctions.GammaExponentialKernel
— Type GammaExponentialKernel(; γ::Real=1.0, metric=Euclidean())
γ-exponential kernel with respect to the metric and with parameter γ.
Definition
For inputs $x, x'$ and metric $d(\cdot, \cdot)$, the γ-exponential kernel[RW] with parameter $\gamma \in (0, 2]$ is defined as
\[k(x, x'; \gamma) = \exp\big(- d(x, x')^{\gamma}\big).\]
By default, $d$ is the Euclidean metric $d(x, x') = \|x - x'\|_2$.
See also: ExponentialKernel, SqExponentialKernel
KernelFunctions.GammaRationalKernel
— Type GammaRationalKernel(; α::Real=2.0, γ::Real=1.0, metric=Euclidean())
γ-rational kernel with respect to the metric with shape parameters α and γ.
Definition
For inputs $x, x'$ and metric $d(\cdot, \cdot)$, the γ-rational kernel with shape parameters $\alpha > 0$ and $\gamma \in (0, 2]$ is defined as
\[k(x, x'; \alpha, \gamma) = \bigg(1 + \frac{d(x, x')^{\gamma}}{\alpha}\bigg)^{-\alpha}.\]
By default, $d$ is the Euclidean metric $d(x, x') = \|x - x'\|_2$.
The GammaExponentialKernel is recovered in the limit as $\alpha \to \infty$.
See also: RationalKernel, RationalQuadraticKernel
KernelFunctions.GaussianKernel
— Type GaussianKernel()
Alias of SqExponentialKernel.
KernelFunctions.GibbsKernel
— Type GibbsKernel(; lengthscale)
Gibbs kernel with lengthscale function lengthscale.
The Gibbs kernel is a non-stationary generalisation of the squared exponential kernel. The lengthscale parameter $l$ becomes a function of position $l(x)$.
Definition
For inputs $x, x'$, the Gibbs kernel with lengthscale function $l(\cdot)$ is defined as
\[k(x, x'; l) = \sqrt{\frac{2 l(x) l(x')}{l(x)^2 + l(x')^2}} \exp\left(-\frac{(x - x')^2}{l(x)^2 + l(x')^2}\right).\]
For a constant function $l \equiv c$, one recovers the SqExponentialKernel with lengthscale c.
References
Mark N. Gibbs. "Bayesian Gaussian Processes for Regression and Classification." PhD thesis, 1997
Christopher J. Paciorek and Mark J. Schervish. "Nonstationary Covariance Functions for Gaussian Process Regression". NeurIPS, 2003
Sami Remes, Markus Heinonen, Samuel Kaski. "Non-Stationary Spectral Kernels". arXiv:1705.08736, 2017
Sami Remes, Markus Heinonen, Samuel Kaski. "Neural Non-Stationary Spectral Kernel". arXiv:1811.10978, 2018
KernelFunctions.IdentityTransform
— Type IdentityTransform()
Transformation that returns exactly the input.
KernelFunctions.IndependentMOKernel
— Type IndependentMOKernel(k::Kernel)
Kernel for multiple independent outputs with kernel k each.
Definition
For inputs $x, x'$ and output dimensions $p, p'$, the kernel $\widetilde{k}$ for independent outputs with kernel $k$ each is defined as
\[\widetilde{k}\big((x, p), (x', p')\big) = \begin{cases} k(x, x') & \text{if } p = p', \\ 0 & \text{otherwise}. \end{cases}\]
Mathematically, it is equivalent to a matrix-valued kernel defined as
\[\widetilde{K}(x, x') = \mathrm{diag}\big(k(x, x'), \ldots, k(x, x')\big) \in \mathbb{R}^{m \times m},\]
where $m$ is the number of outputs.
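The block-diagonal structure can be illustrated directly (a sketch, not part of the original docstring); recall that multi-output kernels take (input, output_index) tuples:

```julia
using KernelFunctions

k = IndependentMOKernel(GaussianKernel())
x, y = randn(2), randn(2)

k((x, 1), (y, 1)) ≈ GaussianKernel()(x, y)  # true: same output index, base kernel applies
k((x, 1), (y, 2)) == 0                      # true: different outputs are uncorrelated
```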
KernelFunctions.IntrinsicCoregionMOKernel
— Type IntrinsicCoregionMOKernel(; kernel::Kernel, B::AbstractMatrix)
Kernel associated with the intrinsic coregionalization model.
Definition
For inputs $x, x'$ and output dimensions $p, p'$, the kernel is defined as[ARL]
\[k\big((x, p), (x', p'); B, \tilde{k}\big) = B_{p, p'} \tilde{k}\big(x, x'\big),\]
where $B$ is a positive semidefinite matrix of size $m \times m$, with $m$ being the number of outputs, and $\tilde{k}$ is a scalar-valued kernel shared by the latent processes.
KernelFunctions.KernelProduct
— Type KernelProduct <: Kernel
Create a product of kernels. One can also use the overloaded operator *.
There are various ways in which you can create a KernelProduct:
The simplest way to specify a KernelProduct is to use the overloaded * operator. This is equivalent to creating a KernelProduct by specifying the kernels as the arguments to the constructor.
julia> k1 = SqExponentialKernel(); k2 = LinearKernel(); X = rand(5);
julia> (k = k1 * k2) == KernelProduct(k1, k2)
true
julia> kernelmatrix(k1 * k2, X) == kernelmatrix(k1, X) .* kernelmatrix(k2, X)
true
julia> kernelmatrix(k, X) == kernelmatrix(k1 * k2, X)
true
You can also specify a KernelProduct by providing a Tuple or a Vector of the kernels to be multiplied. We suggest using a Tuple when you have few components and a Vector when dealing with a large number of components.
julia> KernelProduct((k1, k2)) == k1 * k2
true
julia> KernelProduct([k1, k2]) == KernelProduct((k1, k2)) == k1 * k2
true
KernelFunctions.KernelSum
— Type KernelSum <: Kernel
Create a sum of kernels. One can also use the operator +.
There are various ways in which you can create a KernelSum:
The simplest way to specify a KernelSum is to use the overloaded + operator. This is equivalent to creating a KernelSum by specifying the kernels as the arguments to the constructor.
julia> k1 = SqExponentialKernel(); k2 = LinearKernel(); X = rand(5);
julia> (k = k1 + k2) == KernelSum(k1, k2)
true
julia> kernelmatrix(k1 + k2, X) == kernelmatrix(k1, X) .+ kernelmatrix(k2, X)
true
julia> kernelmatrix(k, X) == kernelmatrix(k1 + k2, X)
true
You can also specify a KernelSum by providing a Tuple or a Vector of the kernels to be summed. We suggest using a Tuple when you have few components and a Vector when dealing with a large number of components.
julia> KernelSum((k1, k2)) == k1 + k2
true
julia> KernelSum([k1, k2]) == KernelSum((k1, k2)) == k1 + k2
true
KernelFunctions.KernelTensorProduct
— Type KernelTensorProduct
Tensor product of kernels.
Definition
For inputs $x = (x_1, \ldots, x_n)$ and $x' = (x'_1, \ldots, x'_n)$, the tensor product of kernels $k_1, \ldots, k_n$ is defined as
\[k(x, x'; k_1, \ldots, k_n) = \Big(\bigotimes_{i=1}^n k_i\Big)(x, x') = \prod_{i=1}^n k_i(x_i, x'_i).\]
Construction
The simplest way to specify a KernelTensorProduct is to use the overloaded tensor operator or its alias ⊗ (can be typed by \otimes<tab>).
julia> k1 = SqExponentialKernel(); k2 = LinearKernel(); X = rand(5, 2);
julia> kernelmatrix(k1 ⊗ k2, RowVecs(X)) == kernelmatrix(k1, X[:, 1]) .* kernelmatrix(k2, X[:, 2])
true
You can also specify a KernelTensorProduct by providing kernels as individual arguments or as an iterable data structure such as a Tuple or a Vector. Using a tuple or individual arguments guarantees that KernelTensorProduct is concretely typed but might lead to large compilation times if the number of kernels is large.
julia> KernelTensorProduct(k1, k2) == k1 ⊗ k2
true
julia> KernelTensorProduct((k1, k2)) == k1 ⊗ k2
true
julia> KernelTensorProduct([k1, k2]) == k1 ⊗ k2
true
KernelFunctions.LaplacianKernel
— Type LaplacianKernel()
Alias of ExponentialKernel.
KernelFunctions.LatentFactorMOKernel
— Type LatentFactorMOKernel(g::AbstractVector{<:Kernel}, e::MOKernel, A::AbstractMatrix)
Kernel associated with the semiparametric latent factor model.
Definition
For inputs $x, x'$ and output dimensions $p_x, p_{x'}$, the kernel is defined as[STJ]
\[k\big((x, p_x), (x', p_{x'})\big) = \sum^{Q}_{q=1} A_{p_x q} g_q(x, x') A_{p_{x'} q} + e\big((x, p_x), (x', p_{x'})\big),\]
where $g_1, \ldots, g_Q$ are $Q$ kernels, one for each latent process, $e$ is a multi-output kernel for $m$ outputs, and $A$ is a matrix of weights for the kernels of size $m \times Q$.
KernelFunctions.LinearKernel
— Type LinearKernel(; c::Real=0.0)
Linear kernel with constant offset c.
Definition
For inputs $x, x' \in \mathbb{R}^d$, the linear kernel with constant offset $c \geq 0$ is defined as
\[k(x, x'; c) = x^\top x' + c.\]
See also: PolynomialKernel
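A quick numerical check of the definition above (illustrative, not part of the original docstring):

```julia
using KernelFunctions, LinearAlgebra

# k(x, x'; c) = xᵀx' + c
k = LinearKernel(; c=0.5)
x, y = randn(4), randn(4)
k(x, y) ≈ dot(x, y) + 0.5  # true
```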
KernelFunctions.LinearMixingModelKernel
— Type LinearMixingModelKernel(k::Kernel, H::AbstractMatrix)
LinearMixingModelKernel(Tk::AbstractVector{<:Kernel}, Th::AbstractMatrix)
Kernel associated with the linear mixing model, taking a vector of Q kernels and a Q × m mixing matrix H for a function with m outputs. Also accepts a single kernel k for use across all Q basis vectors.
Definition
For inputs $x, x'$ and output dimensions $p, p'$, the kernel is defined as[BPTHST]
\[k\big((x, p), (x', p')\big) = H_{:,p}^\top K(x, x') H_{:,p'},\]
where $K(x, x') = \mathrm{Diag}\big(k_1(x, x'), \ldots, k_Q(x, x')\big)$ with zero off-diagonal entries, and $k_1, \ldots, k_Q$ are $Q$ kernels, one for each latent process. $H_{:,p}$ is the $p$-th column ($p$-th output) of the mixing matrix $H \in \mathbb{R}^{Q \times m}$, whose columns are $Q$ basis vectors spanning the $m$-dimensional output space of $f$.
KernelFunctions.LinearTransform
— Type LinearTransform(A::AbstractMatrix)
Linear transformation of the input realised by the matrix A.
The second dimension of A must match the number of features of the target.
Examples
julia> A = rand(10, 5); t = LinearTransform(A); X = rand(5, 100);
julia> map(t, ColVecs(X)) == ColVecs(A * X)
true
KernelFunctions.MOInput
— Type MOInput(x::AbstractVector, out_dim::Integer)
A data type to accommodate modelling multi-dimensional output data. MOInput(x, out_dim) has length length(x) * out_dim.
julia> x = [1, 2, 3];
julia> MOInput(x, 2)
6-element KernelFunctions.MOInputIsotopicByOutputs{Int64, Vector{Int64}, Int64}:
(1, 1)
(2, 1)
(3, 1)
(1, 2)
(2, 2)
(3, 2)
As shown above, an MOInput represents a vector of tuples. The first length(x) elements represent the inputs for the first output, the second length(x) elements represent the inputs for the second output, etc. See Inputs for Multiple Outputs in the docs for more info.
MOInput will be deprecated in version 0.11 in favour of MOInputIsotopicByOutputs, and removed in version 0.12.
KernelFunctions.MOInputIsotopicByFeatures
— Type MOInputIsotopicByFeatures(x::AbstractVector, out_dim::Integer)
MOInputIsotopicByFeatures(x, out_dim) has length out_dim * length(x).
julia> x = [1, 2, 3];
julia> KernelFunctions.MOInputIsotopicByFeatures(x, 2)
6-element KernelFunctions.MOInputIsotopicByFeatures{Int64, Vector{Int64}, Int64}:
(1, 1)
(1, 2)
(2, 1)
(2, 2)
(3, 1)
(3, 2)
Accommodates modelling multi-dimensional output data where all outputs are always observed.
As shown above, an MOInputIsotopicByFeatures represents a vector of tuples. The first out_dim elements represent all outputs for the first input, the second out_dim elements represent the outputs for the second input, etc.
See Inputs for Multiple Outputs in the docs for more info.
KernelFunctions.MOInputIsotopicByOutputs
— Type MOInputIsotopicByOutputs(x::AbstractVector, out_dim::Integer)
MOInputIsotopicByOutputs(x, out_dim) has length length(x) * out_dim.
julia> x = [1, 2, 3];
julia> KernelFunctions.MOInputIsotopicByOutputs(x, 2)
6-element KernelFunctions.MOInputIsotopicByOutputs{Int64, Vector{Int64}, Int64}:
(1, 1)
(2, 1)
(3, 1)
(1, 2)
(2, 2)
(3, 2)
Accommodates modelling multi-dimensional output data where all outputs are always observed.
As shown above, an MOInputIsotopicByOutputs represents a vector of tuples. The first length(x) elements represent the inputs for the first output, the second length(x) elements represent the inputs for the second output, etc.
KernelFunctions.MOKernel
— Type MOKernel
Abstract type for kernels with multiple outputs.
KernelFunctions.Matern12Kernel
— Type Matern12Kernel()
Alias of ExponentialKernel.
KernelFunctions.Matern32Kernel
— Type Matern32Kernel(; metric=Euclidean())
Matérn kernel of order $3/2$ with respect to the metric.
Definition
For inputs $x, x'$ and metric $d(\cdot, \cdot)$, the Matérn kernel of order $3/2$ is given by
\[k(x, x') = \big(1 + \sqrt{3} d(x, x') \big) \exp\big(- \sqrt{3} d(x, x') \big).\]
By default, $d$ is the Euclidean metric $d(x, x') = \|x - x'\|_2$.
See also: MaternKernel
KernelFunctions.Matern52Kernel
— Type Matern52Kernel(; metric=Euclidean())
Matérn kernel of order $5/2$ with respect to the metric.
Definition
For inputs $x, x'$ and metric $d(\cdot, \cdot)$, the Matérn kernel of order $5/2$ is given by
\[k(x, x') = \bigg(1 + \sqrt{5} d(x, x') + \frac{5}{3} d(x, x')^2\bigg) \exp\big(- \sqrt{5} d(x, x') \big).\]
By default, $d$ is the Euclidean metric $d(x, x') = \|x - x'\|_2$.
See also: MaternKernel
KernelFunctions.MaternKernel
— Type MaternKernel(; ν::Real=1.5, metric=Euclidean())
Matérn kernel of order ν with respect to the metric.
Definition
For inputs $x, x'$ and metric $d(\cdot, \cdot)$, the Matérn kernel of order $\nu > 0$ is defined as
\[k(x,x';\nu) = \frac{2^{1-\nu}}{\Gamma(\nu)}\big(\sqrt{2\nu} d(x, x')\big)^\nu K_\nu\big(\sqrt{2\nu} d(x, x')\big),\]
where $\Gamma$ is the Gamma function and $K_{\nu}$ is the modified Bessel function of the second kind of order $\nu$. By default, $d$ is the Euclidean metric $d(x, x') = \|x - x'\|_2$.
A Gaussian process with a Matérn kernel is $\lceil \nu \rceil - 1$-times differentiable in the mean-square sense.
Differentiation with respect to the order ν is not currently supported.
See also: Matern12Kernel, Matern32Kernel, Matern52Kernel
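As an illustrative consistency check (not part of the original docstring), the general Matérn kernel at ν = 3/2 agrees with the closed form implemented by Matern32Kernel:

```julia
using KernelFunctions, LinearAlgebra

# At ν = 3/2 the general formula reduces to (1 + √3 d) exp(-√3 d).
k = MaternKernel(; ν=1.5)
x, y = randn(3), randn(3)
d = norm(x - y)
k(x, y) ≈ (1 + sqrt(3) * d) * exp(-sqrt(3) * d)  # true (up to floating-point error)
```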
KernelFunctions.NeuralKernelNetwork
— Type NeuralKernelNetwork(primitives, nn)
Constructs a Neural Kernel Network (NKN) [1].
primitives are the base kernels, which are combined by nn.
k1 = 0.6 * (SEKernel() ∘ ScaleTransform(0.5))
k2 = 0.4 * (Matern32Kernel() ∘ ScaleTransform(0.1))
primitives = Primitive(k1, k2)
nkn = NeuralKernelNetwork(primitives, Chain(LinearLayer(2, 2), product))
[1] - Sun, Shengyang, et al. "Differentiable compositional kernel learning for Gaussian processes." International Conference on Machine Learning. PMLR, 2018.
KernelFunctions.NeuralNetworkKernel
— Type NeuralNetworkKernel()
Kernel of a Gaussian process obtained as the limit of a Bayesian neural network with a single hidden layer as the number of units goes to infinity.
Definition
Consider the single-layer Bayesian neural network $f \colon \mathbb{R}^d \to \mathbb{R}$ with $h$ hidden units defined by
\[f(x; b, v, u) = b + \sqrt{\frac{\pi}{2}} \sum_{i=1}^{h} v_i \mathrm{erf}\big(u_i^\top x\big),\]
where $\mathrm{erf}$ is the error function, and with prior distributions
\[\begin{aligned} b &\sim \mathcal{N}(0, \sigma_b^2),\\ v &\sim \mathcal{N}(0, \sigma_v^2 \mathrm{I}_{h}/h),\\ u_i &\sim \mathcal{N}(0, \mathrm{I}_{d}/2) \qquad (i = 1,\ldots,h). \end{aligned}\]
As $h \to \infty$, the neural network converges to the Gaussian process
\[g(\cdot) \sim \mathcal{GP}\big(0, \sigma_b^2 + \sigma_v^2 k(\cdot, \cdot)\big),\]
where the neural network kernel $k$ is given by
\[k(x, x') = \arcsin\left(\frac{x^\top x'}{\sqrt{\big(1 + \|x\|^2_2\big) \big(1 + \|x'\|_2^2\big)}}\right)\]
for inputs $x, x' \in \mathbb{R}^d$.[CW]
KernelFunctions.NormalizedKernel
— Type NormalizedKernel(k::Kernel)
A normalized kernel derived from k.
Definition
For inputs $x, x'$, the normalized kernel $\widetilde{k}$ derived from kernel $k$ is defined as
\[\widetilde{k}(x, x'; k) = \frac{k(x, x')}{\sqrt{k(x, x) k(x', x')}}.\]
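A direct consequence of the definition, shown here as an illustrative example (not part of the original docstring):

```julia
using KernelFunctions

# By construction, a normalized kernel evaluates to 1 whenever both inputs coincide.
k = NormalizedKernel(SqExponentialKernel())
x = randn(3)
k(x, x) ≈ 1  # true
```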
KernelFunctions.NystromFact
— Type NystromFact
Type for storing a Nystrom factorization. The factorization contains two fields, W and C, two matrices satisfying:
\[\mathbf{K} \approx \mathbf{C}^{\intercal}\mathbf{W}\mathbf{C}\]
KernelFunctions.PeriodicKernel
— Type PeriodicKernel(; r::AbstractVector=ones(Float64, 1))
Periodic kernel with parameter r.
Definition
For inputs $x, x' \in \mathbb{R}^d$, the periodic kernel with parameter $r_i > 0$ is defined[DM] as
\[k(x, x'; r) = \exp\bigg(- \frac{1}{2} \sum_{i=1}^d \bigg(\frac{\sin\big(\pi(x_i - x'_i)\big)}{r_i}\bigg)^2\bigg).\]
KernelFunctions.PeriodicKernel
— Type PeriodicKernel([T=Float64, dims::Int=1])
Create a PeriodicKernel with parameter r=ones(T, dims).
KernelFunctions.PeriodicTransform
— Type PeriodicTransform(f)
Transformation that maps the input elementwise onto the unit circle with frequency f.
Samples from a GP with a kernel with this transformation applied to the inputs will produce samples with frequency f.
Examples
julia> f = rand(); t = PeriodicTransform(f); x = rand();
julia> t(x) == [sinpi(2 * f * x), cospi(2 * f * x)]
true
KernelFunctions.PiecewisePolynomialKernel
— Type PiecewisePolynomialKernel(; dim::Int, degree::Int=0, metric=Euclidean())
PiecewisePolynomialKernel{degree}(; dim::Int, metric=Euclidean())
Piecewise polynomial kernel of degree degree for inputs of dimension dim with support in the unit ball with respect to the metric.
Definition
For inputs $x, x'$ of dimension $m$ and metric $d(\cdot, \cdot)$, the piecewise polynomial kernel of degree $v \in \{0,1,2,3\}$ is defined as
\[k(x, x'; v) = \max(1 - d(x, x'), 0)^{\alpha(v,m)} f_{v,m}(d(x, x')),\]
where $\alpha(v, m) = \lfloor \frac{m}{2}\rfloor + 2v + 1$ and $f_{v,m}$ are polynomials of degree $v$ given by
\[\begin{aligned} f_{0,m}(r) &= 1, \\ f_{1,m}(r) &= 1 + (j + 1) r, \\ f_{2,m}(r) &= 1 + (j + 2) r + \big((j^2 + 4j + 3) / 3\big) r^2, \\ f_{3,m}(r) &= 1 + (j + 3) r + \big((6 j^2 + 36j + 45) / 15\big) r^2 + \big((j^3 + 9 j^2 + 23j + 15) / 15\big) r^3, \end{aligned}\]
where $j = \lfloor \frac{m}{2}\rfloor + v + 1$. By default, $d$ is the Euclidean metric $d(x, x') = \|x - x'\|_2$.
The kernel is $2v$ times continuously differentiable and the corresponding Gaussian process is hence $v$ times mean-square differentiable.
KernelFunctions.PolynomialKernel
— Type PolynomialKernel(; degree::Int=2, c::Real=0.0)
Polynomial kernel of degree degree with constant offset c.
Definition
For inputs $x, x' \in \mathbb{R}^d$, the polynomial kernel of degree $\nu \in \mathbb{N}$ with constant offset $c \geq 0$ is defined as
\[k(x, x'; c, \nu) = (x^\top x' + c)^\nu.\]
See also: LinearKernel
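A quick numerical check of the definition above (illustrative, not part of the original docstring):

```julia
using KernelFunctions, LinearAlgebra

# k(x, x'; c, ν) = (xᵀx' + c)^ν with degree ν = 3 and offset c = 1.
k = PolynomialKernel(; degree=3, c=1.0)
x, y = randn(4), randn(4)
k(x, y) ≈ (dot(x, y) + 1.0)^3  # true
```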
KernelFunctions.RBFKernel
— Type RBFKernel()
Alias of SqExponentialKernel.
KernelFunctions.RationalKernel
— Type RationalKernel(; α::Real=2.0, metric=Euclidean())
Rational kernel with shape parameter α and given metric.
Definition
For inputs $x, x'$ and metric $d(\cdot, \cdot)$, the rational kernel with shape parameter $\alpha > 0$ is defined as
\[k(x, x'; \alpha) = \bigg(1 + \frac{d(x, x')}{\alpha}\bigg)^{-\alpha}.\]
By default, $d$ is the Euclidean metric $d(x, x') = \|x - x'\|_2$.
The ExponentialKernel is recovered in the limit as $\alpha \to \infty$.
See also: GammaRationalKernel
KernelFunctions.RationalQuadraticKernel
— Type RationalQuadraticKernel(; α::Real=2.0, metric=Euclidean())
Rational-quadratic kernel with respect to the metric and with shape parameter α.
Definition
For inputs $x, x'$ and metric $d(\cdot, \cdot)$, the rational-quadratic kernel with shape parameter $\alpha > 0$ is defined as
\[k(x, x'; \alpha) = \bigg(1 + \frac{d(x, x')^2}{2\alpha}\bigg)^{-\alpha}.\]
By default, $d$ is the Euclidean metric $d(x, x') = \|x - x'\|_2$.
The SqExponentialKernel is recovered in the limit as $\alpha \to \infty$.
See also: GammaRationalKernel
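The definition and the limiting behaviour can both be checked numerically (an illustrative sketch, not part of the original docstring):

```julia
using KernelFunctions, LinearAlgebra

x, y = randn(3), randn(3)

# Direct check of the definition with α = 2: (1 + d²/(2α))^(-α).
RationalQuadraticKernel(; α=2.0)(x, y) ≈ (1 + norm(x - y)^2 / 4)^(-2)  # true

# For large α the kernel approaches the squared exponential kernel.
isapprox(RationalQuadraticKernel(; α=1e6)(x, y), SqExponentialKernel()(x, y); atol=1e-4)  # true
```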
KernelFunctions.RowVecs
— Type RowVecs(X::AbstractMatrix)
A lightweight wrapper for an AbstractMatrix which interprets it as a vector-of-vectors, in which each row of X represents a single vector.
That is, by writing x = RowVecs(X), you are saying "x is a vector-of-vectors, each of which has length size(X, 2). The total number of vectors is size(X, 1)."
Phrased differently, RowVecs(X) says that X should be interpreted as a vector of vertically-concatenated row-vectors, hence the name RowVecs.
Internally, the data continues to be represented as an AbstractMatrix
, so using this type does not introduce any kind of performance penalty.
julia> X = randn(5, 2);
julia> x = RowVecs(X);
julia> length(x) == 5
true
julia> X[3, :] == x[3]
true
RowVecs is related to ColVecs via transposition:
julia> X = randn(5, 2);
julia> RowVecs(X) == ColVecs(X')
true
KernelFunctions.SEKernel
— Type SEKernel()
Alias of SqExponentialKernel.
KernelFunctions.ScaleTransform
— Type ScaleTransform(l::Real)
Transformation that multiplies the input elementwise with l.
Examples
julia> l = rand(); t = ScaleTransform(l); X = rand(100, 10);
julia> map(t, ColVecs(X)) == ColVecs(l .* X)
true
KernelFunctions.ScaledKernel
— Type ScaledKernel(k::Kernel, σ²::Real=1.0)
Scaled kernel derived from k by multiplication with variance σ².
Definition
For inputs $x, x'$, the scaled kernel $\widetilde{k}$ derived from kernel $k$ by multiplication with variance $\sigma^2 > 0$ is defined as
\[\widetilde{k}(x, x'; k, \sigma^2) = \sigma^2 k(x, x').\]
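As an illustrative example (not part of the original docstring), multiplying a kernel by a scalar is a convenient way to construct a ScaledKernel:

```julia
using KernelFunctions

k = SqExponentialKernel()
x, y = randn(3), randn(3)

(2.5 * k)(x, y) ≈ 2.5 * k(x, y)  # true
ScaledKernel(k, 2.5)(x, y) ≈ (2.5 * k)(x, y)  # true
```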
KernelFunctions.SelectTransform
— Type SelectTransform(dims)
Transformation that selects the dimensions dims of the input.
Examples
julia> dims = [1, 3, 5, 6, 7]; t = SelectTransform(dims); X = rand(100, 10);
julia> map(t, ColVecs(X)) == ColVecs(X[dims, :])
true
KernelFunctions.SqExponentialKernel
— Type SqExponentialKernel(; metric=Euclidean())
Squared exponential kernel with respect to the metric.
Definition
For inputs $x, x'$ and metric $d(\cdot, \cdot)$, the squared exponential kernel is defined as
\[k(x, x') = \exp\bigg(- \frac{d(x, x')^2}{2}\bigg).\]
By default, $d$ is the Euclidean metric $d(x, x') = \|x - x'\|_2$.
See also: GammaExponentialKernel
KernelFunctions.Transform
— Type Transform
Abstract type defining a transformation of the input.
KernelFunctions.TransformedKernel
— Type TransformedKernel(k::Kernel, t::Transform)
Kernel derived from k for which inputs are transformed via a Transform t.
The preferred way to create kernels with input transformations is to use the composition operator ∘ or its alias compose instead of TransformedKernel directly, since this allows optimized implementations for specific kernels and transformations.
See also: ∘
KernelFunctions.WhiteKernel
— Type WhiteKernel()
White noise kernel.
Definition
For inputs $x, x'$, the white noise kernel is defined as
\[k(x, x') = \delta(x, x').\]
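The Kronecker-delta behaviour can be demonstrated directly (an illustrative example, not part of the original docstring):

```julia
using KernelFunctions

# δ(x, x') is 1 for identical inputs and 0 otherwise.
k = WhiteKernel()
k(1.0, 1.0) == 1  # true: identical inputs
k(1.0, 2.0) == 0  # true: distinct inputs
```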
KernelFunctions.WienerKernel
— Type WienerKernel(; i::Int=0)
WienerKernel{i}()
The i-times integrated Wiener process kernel function.
Definition
For inputs $x, x' \in \mathbb{R}^d$, the $i$-times integrated Wiener process kernel with $i \in \{-1, 0, 1, 2, 3\}$ is defined[SDH] as
\[k_i(x, x') = \begin{cases} \delta(x, x') & \text{if } i=-1,\\ \min\big(\|x\|_2, \|x'\|_2\big) & \text{if } i=0,\\ a_{i1}^{-1} \min\big(\|x\|_2, \|x'\|_2\big)^{2i + 1} + a_{i2}^{-1} \|x - x'\|_2 r_i\big(\|x\|_2, \|x'\|_2\big) \min\big(\|x\|_2, \|x'\|_2\big)^{i + 1} & \text{otherwise}, \end{cases}\]
where the coefficients $a$ are given by
\[a = \begin{bmatrix} 3 & 2 \\ 20 & 12 \\ 252 & 720 \end{bmatrix}\]
and the functions $r_i$ are defined as
\[\begin{aligned} r_1(t, t') &= 1,\\ r_2(t, t') &= t + t' - \frac{\min(t, t')}{2},\\ r_3(t, t') &= 5 \max(t, t')^2 + 2 tt' + 3 \min(t, t')^2. \end{aligned}\]
The WhiteKernel is recovered for $i = -1$.
KernelFunctions.ZeroKernel
— Type ZeroKernel()
Zero kernel.
Definition
For inputs $x, x'$, the zero kernel is defined as
\[k(x, x') = 0.\]
The output type depends on $x$ and $x'$.
See also: ConstantKernel
Base.:∘
— Method kernel ∘ transform
∘(kernel, transform)
compose(kernel, transform)
Compose a kernel with a transformation transform of its inputs.
The prefix forms support chains of multiple transformations: ∘(kernel, transform1, transform2) = kernel ∘ transform1 ∘ transform2.
Definition
For inputs $x, x'$, the transformed kernel $\widetilde{k}$ derived from kernel $k$ by input transformation $t$ is defined as
\[\widetilde{k}(x, x'; k, t) = k\big(t(x), t(x')\big).\]
Examples
julia> (SqExponentialKernel() ∘ ScaleTransform(0.5))(0, 2) == exp(-0.5)
true
julia> ∘(ExponentialKernel(), ScaleTransform(2), ScaleTransform(0.5))(1, 2) == exp(-1)
true
See also: TransformedKernel
KernelFunctions.gaborkernel
— Method gaborkernel(;
    sqexponential_transform=IdentityTransform(), cosine_transform=IdentityTransform()
)
Construct a Gabor kernel with transformations sqexponential_transform and cosine_transform of the inputs of the underlying squared exponential and cosine kernel, respectively.
Definition
For inputs $x, x' \in \mathbb{R}^d$, the Gabor kernel with transformations $f$ and $g$ of the inputs to the squared exponential and cosine kernel, respectively, is defined as
\[k(x, x'; f, g) = \exp\bigg(- \frac{\| f(x) - f(x')\|_2^2}{2}\bigg) \cos\big(\pi \|g(x) - g(x')\|_2 \big).\]
KernelFunctions.kernelmatrix
— Function kernelmatrix(κ::Kernel, x::AbstractVector)
Compute the kernel κ for each pair of inputs in x. Returns a matrix of size (length(x), length(x)) satisfying kernelmatrix(κ, x)[p, q] == κ(x[p], x[q]).
kernelmatrix(κ::Kernel, x::AbstractVector, y::AbstractVector)
Compute the kernel κ for each pair of inputs in x and y. Returns a matrix of size (length(x), length(y)) satisfying kernelmatrix(κ, x, y)[p, q] == κ(x[p], y[q]).
kernelmatrix(κ::Kernel, X::AbstractMatrix; obsdim)
kernelmatrix(κ::Kernel, X::AbstractMatrix, Y::AbstractMatrix; obsdim)
If obsdim=1, equivalent to kernelmatrix(κ, RowVecs(X)) and kernelmatrix(κ, RowVecs(X), RowVecs(Y)), respectively. If obsdim=2, equivalent to kernelmatrix(κ, ColVecs(X)) and kernelmatrix(κ, ColVecs(X), ColVecs(Y)), respectively.
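The obsdim keyword can be illustrated as follows (a sketch, not part of the original docstring):

```julia
using KernelFunctions

k = SqExponentialKernel()
X = rand(3, 4)

# obsdim=1 treats rows as observations, obsdim=2 treats columns as observations.
kernelmatrix(k, X; obsdim=1) == kernelmatrix(k, RowVecs(X))  # true, a 3×3 matrix
kernelmatrix(k, X; obsdim=2) == kernelmatrix(k, ColVecs(X))  # true, a 4×4 matrix
```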
KernelFunctions.kernelmatrix!
— Function kernelmatrix!(K::AbstractMatrix, κ::Kernel, x::AbstractVector)
kernelmatrix!(K::AbstractMatrix, κ::Kernel, x::AbstractVector, y::AbstractVector)
In-place version of kernelmatrix, where the pre-allocated matrix K will be overwritten with the kernel matrix.
kernelmatrix!(K::AbstractMatrix, κ::Kernel, X::AbstractMatrix; obsdim)
kernelmatrix!(
K::AbstractMatrix,
κ::Kernel,
X::AbstractMatrix,
Y::AbstractMatrix;
obsdim,
)
If obsdim=1, equivalent to kernelmatrix!(K, κ, RowVecs(X)) and kernelmatrix!(K, κ, RowVecs(X), RowVecs(Y)), respectively. If obsdim=2, equivalent to kernelmatrix!(K, κ, ColVecs(X)) and kernelmatrix!(K, κ, ColVecs(X), ColVecs(Y)), respectively.
KernelFunctions.kernelmatrix
— Method kernelmatrix(CᵀWC::NystromFact)
Compute the approximate kernel matrix based on the Nystrom factorization.
KernelFunctions.kernelmatrix_diag
— Function kernelmatrix_diag(κ::Kernel, x::AbstractVector)
Compute the diagonal of kernelmatrix(κ, x) efficiently.
kernelmatrix_diag(κ::Kernel, x::AbstractVector, y::AbstractVector)
Compute the diagonal of kernelmatrix(κ, x, y) efficiently. Requires that x and y are the same length.
kernelmatrix_diag(κ::Kernel, X::AbstractMatrix; obsdim)
kernelmatrix_diag(κ::Kernel, X::AbstractMatrix, Y::AbstractMatrix; obsdim)
If obsdim=1, equivalent to kernelmatrix_diag(κ, RowVecs(X)) and kernelmatrix_diag(κ, RowVecs(X), RowVecs(Y)), respectively. If obsdim=2, equivalent to kernelmatrix_diag(κ, ColVecs(X)) and kernelmatrix_diag(κ, ColVecs(X), ColVecs(Y)), respectively.
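An illustrative check of the relationship to kernelmatrix (not part of the original docstring):

```julia
using KernelFunctions, LinearAlgebra

k = SqExponentialKernel()
x = [rand(2) for _ in 1:5]
kernelmatrix_diag(k, x) ≈ diag(kernelmatrix(k, x))  # true
```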
KernelFunctions.kernelmatrix_diag!
— Function kernelmatrix_diag!(K::AbstractVector, κ::Kernel, x::AbstractVector)
kernelmatrix_diag!(K::AbstractVector, κ::Kernel, x::AbstractVector, y::AbstractVector)
In-place version of kernelmatrix_diag.
kernelmatrix_diag!(K::AbstractVector, κ::Kernel, X::AbstractMatrix; obsdim)
kernelmatrix_diag!(
K::AbstractVector,
κ::Kernel,
X::AbstractMatrix,
Y::AbstractMatrix;
obsdim
)
If obsdim=1, equivalent to kernelmatrix_diag!(K, κ, RowVecs(X)) and kernelmatrix_diag!(K, κ, RowVecs(X), RowVecs(Y)), respectively. If obsdim=2, equivalent to kernelmatrix_diag!(K, κ, ColVecs(X)) and kernelmatrix_diag!(K, κ, ColVecs(X), ColVecs(Y)), respectively.
KernelFunctions.median_heuristic_transform
— Method median_heuristic_transform(distance, x::AbstractVector)
Create a ScaleTransform that divides the input elementwise by the median distance of the data points in x.
The distance has to support pairwise evaluation with KernelFunctions.pairwise. All PreMetrics of the package Distances.jl, such as Euclidean, satisfy this requirement automatically.
Examples
julia> using Distances, Statistics
julia> x = ColVecs(rand(100, 10));
julia> t = median_heuristic_transform(Euclidean(), x);
julia> y = map(t, x);
julia> median(euclidean(y[i], y[j]) for i in 1:10, j in 1:10 if i != j) ≈ 1
true
KernelFunctions.nystrom
— Method nystrom(k::Kernel, X::AbstractVector, S::AbstractVector{<:Integer})
Compute a factorization of a Nystrom approximation of the square kernel matrix of data vector X with respect to kernel k, using indices S. Returns a NystromFact struct which stores a Nystrom factorization satisfying:
\[\mathbf{K} \approx \mathbf{C}^{\intercal}\mathbf{W}\mathbf{C}\]
KernelFunctions.nystrom
— Method nystrom(k::Kernel, X::AbstractVector, r::Real)
Compute a factorization of a Nystrom approximation of the square kernel matrix of data vector X with respect to kernel k, using a sample ratio of r. Returns a NystromFact struct which stores a Nystrom factorization satisfying:
\[\mathbf{K} \approx \mathbf{C}^{\intercal}\mathbf{W}\mathbf{C}\]
KernelFunctions.prepare_heterotopic_multi_output_data
— Method prepare_heterotopic_multi_output_data(
x::AbstractVector, y::AbstractVector{<:Real}, output_indices::AbstractVector{Int},
)
Utility functionality to convert a collection of inputs x, observations y, and output_indices into a format suitable for use with multi-output kernels. It handles the situation in which only one (or a subset) of the outputs is observed at each feature, ensures that all arguments are compatible with one another, and returns a vector of inputs and a vector of outputs.
y[n] should be the observed value associated with output output_indices[n] at feature x[n].
julia> x = [1.0, 2.0, 3.0];
julia> y = [-1.0, 0.0, 1.0];
julia> output_indices = [3, 2, 1];
julia> inputs, outputs = prepare_heterotopic_multi_output_data(x, y, output_indices);
julia> inputs
3-element Vector{Tuple{Float64, Int64}}:
(1.0, 3)
(2.0, 2)
(3.0, 1)
julia> outputs
3-element Vector{Float64}:
-1.0
0.0
1.0
See also prepare_isotopic_multi_output_data
.
KernelFunctions.prepare_isotopic_multi_output_data
— Methodprepare_isotopic_multi_output_data(x::AbstractVector, y::ColVecs)
Utility functionality to convert a collection of N = length(x)
inputs x
, and a vector-of-vectors y
(efficiently represented by a ColVecs
) into a format suitable for use with multi-output kernels.
y[n]
is the vector-valued output corresponding to the input x[n]
. Consequently, it is necessary that length(x) == length(y)
.
For example, if outputs are initially stored in a num_outputs × N
matrix:
julia> x = [1.0, 2.0, 3.0];
julia> Y = [1.1 2.1 3.1; 1.2 2.2 3.2]
2×3 Matrix{Float64}:
1.1 2.1 3.1
1.2 2.2 3.2
julia> inputs, outputs = prepare_isotopic_multi_output_data(x, ColVecs(Y));
julia> inputs
6-element KernelFunctions.MOInputIsotopicByFeatures{Float64, Vector{Float64}, Int64}:
(1.0, 1)
(1.0, 2)
(2.0, 1)
(2.0, 2)
(3.0, 1)
(3.0, 2)
julia> outputs
6-element Vector{Float64}:
1.1
1.2
2.1
2.2
3.1
3.2
See also prepare_heterotopic_multi_output_data
.
KernelFunctions.prepare_isotopic_multi_output_data
— Methodprepare_isotopic_multi_output_data(x::AbstractVector, y::RowVecs)
Utility functionality to convert a collection of N = length(x)
inputs x
and output vectors y
(efficiently represented by a RowVecs
) into a format suitable for use with multi-output kernels.
y[n]
is the vector-valued output corresponding to the input x[n]
. Consequently, it is necessary that length(x) == length(y)
.
For example, if outputs are initially stored in an N × num_outputs
matrix:
julia> x = [1.0, 2.0, 3.0];
julia> Y = [1.1 1.2; 2.1 2.2; 3.1 3.2]
3×2 Matrix{Float64}:
1.1 1.2
2.1 2.2
3.1 3.2
julia> inputs, outputs = prepare_isotopic_multi_output_data(x, RowVecs(Y));
julia> inputs
6-element KernelFunctions.MOInputIsotopicByOutputs{Float64, Vector{Float64}, Int64}:
(1.0, 1)
(2.0, 1)
(3.0, 1)
(1.0, 2)
(2.0, 2)
(3.0, 2)
julia> outputs
6-element Vector{Float64}:
1.1
2.1
3.1
1.2
2.2
3.2
See also prepare_heterotopic_multi_output_data
.
KernelFunctions.spectral_mixture_kernel
— Methodspectral_mixture_kernel(
h::Kernel=SqExponentialKernel(),
αs::AbstractVector{<:Real},
γs::AbstractMatrix{<:Real},
ωs::AbstractMatrix{<:Real},
)
where αs are the weights of dimension (A,), γs is the covariance matrix of dimension (D, A), and ωs is the matrix of mean vectors, also of dimension (D, A). Here, D is the input dimension and A is the number of spectral components.
h
is the kernel, which defaults to SqExponentialKernel
if not specified.
If you want to make sure that the constructor is type-stable, you should provide StaticArrays
arguments: αs
as a StaticVector
, γs
and ωs
as StaticMatrix
.
Generalised Spectral Mixture kernel function. This family of functions is dense in the family of stationary real-valued kernels with respect to pointwise convergence.[1]
\[ κ(x, y) = αs' (h(-(γs' * t)^2) .* cos(π * ωs' * t)), t = x - y\]
References:
[1] Generalized Spectral Kernels, by Yves-Laurent Kom Samo and Stephen J. Roberts
[2] SM: Gaussian Process Kernels for Pattern Discovery and Extrapolation,
ICML, 2013, by Andrew Gordon Wilson and Ryan Prescott Adams,
[3] Covariance kernels for fast automatic pattern discovery and extrapolation
with Gaussian processes, Andrew Gordon Wilson, PhD Thesis, January 2014.
http://www.cs.cmu.edu/~andrewgw/andrewgwthesis.pdf
[4] http://www.cs.cmu.edu/~andrewgw/pattern/.
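A minimal usage sketch, with parameter shapes following the description above (the random parameter values are purely illustrative):

```julia
using KernelFunctions

D, A = 2, 3;        # input dimension and number of spectral components
αs = rand(A);       # weights, one per component
γs = rand(D, A);    # one column of covariance parameters per component
ωs = rand(D, A);    # one column of mean (frequency) parameters per component

k = spectral_mixture_kernel(SqExponentialKernel(), αs, γs, ωs);
x, y = rand(D), rand(D);
k(x, y)  # real-valued; symmetric in x and y
```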
KernelFunctions.spectral_mixture_product_kernel
— Methodspectral_mixture_product_kernel(
h::Kernel=SqExponentialKernel(),
αs::AbstractMatrix{<:Real},
γs::AbstractMatrix{<:Real},
ωs::AbstractMatrix{<:Real},
)
where αs are the weights of dimension (D, A), γs is the covariance matrix of dimension (D, A), and ωs is the matrix of mean vectors, also of dimension (D, A). Here, D is the input dimension and A is the number of spectral components.
Spectral Mixture Product Kernel. With enough components A, the SMP kernel can model any product kernel to arbitrary precision, and is flexible even with a small number of components.[1]
h
is the kernel, which defaults to SqExponentialKernel
if not specified.
\[ κ(x, y) = Πᵢ₌₁ᴷ Σ(αsᵢᵀ .* (h(-(γsᵢᵀ * tᵢ)²) .* cos(ωsᵢᵀ * tᵢ))), tᵢ = xᵢ - yᵢ\]
References:
[1] GPatt: Fast Multidimensional Pattern Extrapolation with GPs,
arXiv 1310.5288, 2013, by Andrew Gordon Wilson, Elad Gilboa,
Arye Nehorai and John P. Cunningham
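A usage sketch analogous to spectral_mixture_kernel, except that αs is now a (D, A) matrix with one weight per input dimension and component (values are illustrative):

```julia
using KernelFunctions

D, A = 2, 3;        # input dimension and number of spectral components
αs = rand(D, A);    # one weight per dimension and component
γs = rand(D, A);
ωs = rand(D, A);

k = spectral_mixture_product_kernel(SqExponentialKernel(), αs, γs, ωs);
x, y = rand(D), rand(D);
k(x, y)
```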
KernelFunctions.with_lengthscale
— Methodwith_lengthscale(kernel::Kernel, lengthscales::AbstractVector{<:Real})
Construct a transformed "ARD" kernel with different lengthscales
for each dimension.
Examples
julia> kernel = with_lengthscale(SqExponentialKernel(), [0.5, 2.5]);
julia> x = rand(2);
julia> y = rand(2);
julia> kernel(x, y) ≈ (SqExponentialKernel() ∘ ARDTransform([2, 0.4]))(x, y)
true
KernelFunctions.with_lengthscale
— Methodwith_lengthscale(kernel::Kernel, lengthscale::Real)
Construct a transformed kernel with lengthscale
.
Examples
julia> kernel = with_lengthscale(SqExponentialKernel(), 2.5);
julia> x = rand(2);
julia> y = rand(2);
julia> kernel(x, y) ≈ (SqExponentialKernel() ∘ ScaleTransform(0.4))(x, y)
true
KernelFunctions.TestUtils.example_inputs
— Methodexample_inputs(rng::AbstractRNG, type)
Return a tuple of 4 inputs of type type
. See methods(example_inputs)
for information about supported types. It is recommended that you use StableRNGs.jl
for rng
here to ensure consistency across Julia versions.
KernelFunctions.TestUtils.test_interface
— Functiontest_interface([rng::AbstractRNG], k::Kernel, ::Type{T}=Float64; kwargs...) where {T}
Run the test_interface
tests for randomly generated inputs of types Vector{T}
, Vector{Vector{T}}
, ColVecs{T}
, and RowVecs{T}
.
For other input types, please provide the data manually.
The keyword arguments are forwarded to the invocations of test_interface
with the randomly generated inputs.
KernelFunctions.TestUtils.test_interface
— Methodtest_interface(
k::Kernel,
x0::AbstractVector,
x1::AbstractVector,
x2::AbstractVector;
rtol=1e-6,
atol=rtol,
)
Run various consistency checks on k
at the inputs x0
, x1
, and x2
. x0
and x1
should be of the same length with different values, while x0
and x2
should be of different lengths.
These tests are intended to pick up on really substantial issues with a kernel implementation (e.g. substantial asymmetry in the kernel matrix, large negative eigenvalues), rather than to test the numerics in detail, which can be kernel-specific.
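For example, a kernel can be checked either with randomly generated inputs or with manually constructed ones; the calls run Test.jl assertions internally, so they are intended for a test suite:

```julia
using KernelFunctions, Random
using KernelFunctions.TestUtils: test_interface

# Randomly generated inputs of the standard types, for Float64:
test_interface(MersenneTwister(42), SqExponentialKernel(), Float64)

# Manually constructed inputs: x0 and x1 have equal lengths, x2 differs.
x0, x1, x2 = randn(5), randn(5), randn(4);
test_interface(SqExponentialKernel(), x0, x1, x2)
```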
KernelFunctions.TestUtils.test_type_stability
— Methodtest_type_stability(
k::Kernel,
x0::AbstractVector,
x1::AbstractVector,
x2::AbstractVector,
)
Run type stability checks over k(x,y)
and the different functions of the API (kernelmatrix
, kernelmatrix_diag
). x0
and x1
should be of the same length with different values, while x0
and x2
should be of different lengths.
KernelFunctions.TestUtils.test_with_type
— Methodtest_with_type(f, rng::AbstractRNG, k::Kernel, ::Type{T}; kwargs...) where {T}
Run the function f
(for example test_interface
or test_type_stability
) for randomly generated inputs of types Vector{T}
, Vector{Vector{T}}
, ColVecs{T}
, and RowVecs{T}
.
For other input types, please provide the data manually.
The keyword arguments are forwarded to the invocations of f
with the randomly generated inputs.